conn (net.Conn) does not always write to the socket
Question
I have built an application with a server and a client that send information over a TCP socket.
The problem is that if the zfs.ReceiveSnapshot function (on the server side) does not return an error (err == nil), conn.Write([]byte("0")) does not work: the client never receives a byte to continue and cannot close the connection...
Here is the code for the server and the client.
Server:
package main

import (
	"fmt"
	"net"

	"github.com/mistifyio/go-zfs"
)

func main() {
	// Listen for incoming connections
	l, _ := net.Listen("tcp", ":7766")
	// Close the listener when the application closes
	defer l.Close()
	fmt.Println("Listening on port 7766...")
	for {
		// Listen for an incoming connection.
		conn, _ := l.Accept()
		// Handle connections in a new goroutine.
		go handleRequest(conn)
	}
}

// handleRequest handles an incoming connection.
func handleRequest(conn net.Conn) {
	// Receive the snapshot
	_, err := zfs.ReceiveSnapshot(conn, "tank/replication")
	if err != nil {
		conn.Write([]byte("1"))
		zfs.ReceiveSnapshot(conn, "tank/replication")
		conn.Close()
	} else {
		conn.Write([]byte("0"))
		conn.Close()
	}
}
Client:
package main

import (
	"net"

	"github.com/mistifyio/go-zfs"
)

func main() {
	conn, _ := net.Dial("tcp", "192.168.99.5:7766")
	for i := 0; i < 1; i++ {
		// Get all snapshots in tank/test
		take, _ := zfs.Snapshots("tank/test")
		// Select a snapshot
		snap := take[0].Name
		ds1, _ := zfs.GetDataset(snap)
		// Send the first snapshot
		ds1.SendSnapshot(conn)
		defer conn.Close()
		buff := make([]byte, 1024)
		n, _ := conn.Read(buff)
		if n != 0 {
			snap = take[1].Name
			ds2, _ := zfs.GetDataset(snap)
			zfs.SendSnapshotIncremental(conn, ds1, ds2, zfs.IncrementalStream)
			conn.Close()
		}
	}
}
[EDIT]:
If ReceiveSnapshot returns an error, conn.Write writes "1", the client receives it, executes SendSnapshotIncremental (since n != 0) and closes the connection on the client side... But if ReceiveSnapshot does not return an error, conn.Write never writes "0"; the client only unblocks when I kill the server with Ctrl+C.
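Every error in the snippets above is discarded with _, which makes it hard to see where the exchange stalls. As a debugging aid only, here is a minimal sketch of the same handler with those errors checked and logged (it assumes the same go-zfs calls plus an extra log import; it is not part of the original code):

// Sketch: the handler above with the previously ignored errors logged.
func handleRequest(conn net.Conn) {
	defer conn.Close()
	_, err := zfs.ReceiveSnapshot(conn, "tank/replication")
	if err != nil {
		log.Println("receive failed:", err)
		if _, werr := conn.Write([]byte("1")); werr != nil {
			log.Println("write failed:", werr)
			return
		}
		if _, rerr := zfs.ReceiveSnapshot(conn, "tank/replication"); rerr != nil {
			log.Println("second receive failed:", rerr)
		}
		return
	}
	if _, werr := conn.Write([]byte("0")); werr != nil {
		log.Println("write failed:", werr)
	}
}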
Answer 1
Score: 1
I believe the problem is in these lines:
buff := make([]byte, 1024)
n, _ := conn.Read(buff)
n in this case is the number of bytes read, not the value of any of them.
I'd do:
buff := make([]byte, 1)
n, err := conn.Read(buff)
if err != nil {
	// handle the error
}
buff = buff[:n]
if len(buff) == 0 {
	// handle the error
}
if buff[0] != '0' { ...
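Applied to the client above, that read could be wrapped in a small helper, sketched here under the assumption that the server writes the ASCII characters "0" and "1" (readStatus is a hypothetical name, not from the original post; it needs the net and fmt imports):

// readStatus reads the one-byte status the server writes and returns it.
// Hypothetical helper for illustration only.
func readStatus(conn net.Conn) (byte, error) {
	buff := make([]byte, 1)
	n, err := conn.Read(buff)
	if err != nil || n == 0 {
		return 0, fmt.Errorf("no status byte received: %v", err)
	}
	return buff[0], nil
}

The client can then branch on the returned byte ('0' or '1') rather than on n.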
Answer 2
Score: 0
[Edit & Solved]
I made several modifications to solve this...
Server:
func handleRequest(conn net.Conn) {
	// Number of snapshots on the server side
	take, _ := zfs.Snapshots("tank/replication")
	count := len(take)
	if count < 1 {
		conn.Write([]byte("0"))
	} else {
		conn.Write([]byte("1"))
	}
	// Receive the snapshot
	zfs.ReceiveSnapshot(conn, "tank/replication")
}
count holds the number of snapshots the server has, so if there are none the client executes the "0" case, otherwise the default one.
Client:
package main

import (
	"bufio"
	"net"

	"github.com/mistifyio/go-zfs"
)

func main() {
	// Get the last two snapshots
	take, _ := zfs.Snapshots("tank/test")
	snap := take[0].Name
	ds1, _ := zfs.GetDataset(snap)
	snap = take[1].Name
	ds2, _ := zfs.GetDataset(snap)
	// New server connection
	conn, _ := net.Dial("tcp", "192.168.99.5:7766")
	// Read the data from the server side
	buff := bufio.NewReader(conn)
	n, _ := buff.ReadByte()
	// Execute the correct case
	switch string(n) {
	case "0":
		ds1.SendSnapshot(conn)
		conn.Close()
	default:
		zfs.SendSnapshotIncremental(conn, ds1, ds2, zfs.IncrementalStream)
		conn.Close()
	}
}
I don't know if this solution is very good, but it works!
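One caveat with the client above: it indexes take[0] and take[1] directly, so it will panic if there are fewer than two snapshots under tank/test. A small guard, sketched here (it assumes a log import and is not part of the original answer):

// Sketch: bail out early unless at least two snapshots exist.
take, err := zfs.Snapshots("tank/test")
if err != nil {
	log.Fatal(err)
}
if len(take) < 2 {
	log.Fatalf("need at least two snapshots under tank/test, found %d", len(take))
}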