Server failing to parse packets when flooded too fast
Here's the error I get when I flood the server with too many packets per second:
2014/11/28 12:52:49 main.go:59: loading plugin: print
2014/11/28 12:52:49 main.go:86: starting server on 0.0.0.0:8080
2014/11/28 12:52:59 server.go:15: client has connected: 127.0.0.1:59146
2014/11/28 12:52:59 server.go:43: received data from client 127.0.0.1:59146: &main.Observation{SensorId:"1", Timestamp:1416492023}
2014/11/28 12:52:59 server.go:29: read error from 127.0.0.1:59146: zlib: invalid header
2014/11/28 12:52:59 server.go:18: closing connection to: 127.0.0.1:59146
It manages to decode one packet (sometimes, maybe 2 or 3) then errors out. Here's the code doing the flooding:
import socket
import struct
import json
import zlib
import time

def serialize(data):
    data = json.dumps(data)
    data = zlib.compress(data)
    packet = struct.pack('!I', len(data))
    packet += data
    return len(data), packet

message = {
    'sensor_id': '1',
    'timestamp': 1416492023,
}

length, buffer = serialize([message])

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', 8080))

while True:
    client.send(buffer)
    #time.sleep(0.0005)
When I uncomment the time.sleep() call, the server works fine. It seems that too many packets per second are killing the server. Why?
Here's the relevant Go code. The connection handler:
func (self *Server) handleConnection(connection net.Conn) {
	for {
		connection.SetReadDeadline(time.Now().Add(30 * time.Second))
		observations, err := self.Protocol.Unserialize(connection)
		if err != nil {
			log.Printf("read error from %s: %s\n", connection.RemoteAddr(), err)
			return
		}
	}
}
And here's the unserializer:
// Length-Value protocol to read zlib-compressed, JSON-encoded packets.
type ProtocolV2 struct{}

func (self *ProtocolV2) Unserialize(packet io.Reader) ([]*Observation, error) {
	var length uint32
	if err := binary.Read(packet, binary.BigEndian, &length); err != nil {
		return nil, err
	}
	buffer := make([]byte, length)
	rawreader := bufio.NewReader(packet)
	if _, err := rawreader.Read(buffer); err != nil {
		return nil, err
	}
	bytereader := bytes.NewReader(buffer)
	zreader, err := zlib.NewReader(bytereader)
	if err != nil {
		return nil, err
	}
	defer zreader.Close()
	var observations []*Observation
	decoder := json.NewDecoder(zreader)
	if err := decoder.Decode(&observations); err != nil {
		return nil, err
	}
	return observations, nil
}
Answer 1
Score: 0
There is an error on the client side, in the Python script.
The return value of client.send is not checked, so the script does not handle partial writes correctly. When the socket buffer is full, only part of the message is written, and the server then fails to decode the stream: the bytes after the truncated frame are misinterpreted as a new length prefix, so the zlib header check fails.
The code is broken either way; adding the sleep only makes it appear to work because it keeps the socket buffer from filling up.
Use client.sendall instead to ensure each write completes.
More information in the Python documentation:
https://docs.python.org/2/library/socket.html
https://docs.python.org/2/howto/sockets.html#using-a-socket
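To illustrate the point, here is a minimal sketch of the fixed client-side framing. It mirrors the flooding script's serialize function (ported to Python 3, so the JSON string is encoded to bytes before compression) and uses sendall, which retries until every byte is written; a local socketpair stands in for the real server, and send_packet is a hypothetical helper name, not part of the original code.

```python
import json
import socket
import struct
import zlib

def serialize(data):
    # Same framing as the flooding script: 4-byte big-endian length, then payload.
    payload = zlib.compress(json.dumps(data).encode('utf-8'))
    return struct.pack('!I', len(payload)) + payload

def send_packet(sock, packet):
    # sendall keeps calling send() until the whole packet is written;
    # a single send() may stop early once the socket buffer fills up.
    sock.sendall(packet)

# Demonstrate with a local socket pair instead of a real server.
a, b = socket.socketpair()
packet = serialize([{'sensor_id': '1', 'timestamp': 1416492023}])
send_packet(a, packet)
received = b.recv(len(packet))
(length,) = struct.unpack('!I', received[:4])
print(length == len(received) - 4)  # the frame arrived intact
```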
The Go server has a similar problem. The io.Reader documentation says:
> Read reads data into p. It returns the number of bytes read into p. It calls Read at most once on the underlying Reader, hence n may be less than len(p). At EOF, the count will be zero and err will be io.EOF.
The rawreader.Read call may return fewer bytes than you expect. You may want to use the io package's ReadFull() function to ensure the full message is read. (The bufio.Reader wrapper adds nothing here; you can call io.ReadFull on the connection directly.)