Golang Server Timeout
Question
I have a very simple Go server:
package main

import (
	"fmt"
	"log"
	"net/http"
)

func test(w http.ResponseWriter, r *http.Request) {
	fmt.Println("No bid")
	http.Error(w, "NoBid", 204)
}

func main() {
	http.HandleFunc("/test/bid", test)
	http.ListenAndServe(":8080", nil)
	log.Println("Done serving")
}
I then run the Apache benchmark tool:
ab -c 50 -n 50000 -p post.txt http://127.0.0.1:8080/test/bid
The server runs and responds to about 15,000 requests and then times out. I was wondering why this happens and whether there is something I can do about it.
Answer 1
Score: 3
If you are running on Linux, you may be hitting the open-file limit, which prevents new connections from being created. You need to change the system configuration to support more connections.

For example, edit /etc/security/limits.conf and add:

* soft nofile 100000
* hard nofile 100000

to allow more open files.
Then edit /etc/sysctl.conf and add:

# use more local ports
net.ipv4.ip_local_port_range = 1024 65000
# keep-alive timeout
net.ipv4.tcp_keepalive_time = 300
# allow reuse of TIME_WAIT sockets
net.ipv4.tcp_tw_reuse = 1
# quick recovery of TIME_WAIT sockets (note: this option misbehaves
# behind NAT and was removed entirely in Linux 4.12)
net.ipv4.tcp_tw_recycle = 1
Answer 2
Score: 2
I tried to replicate your problem on my Linux amd64 laptop with no success - it worked fine even with

ab -c 200 -n 500000 -p post.txt http://127.0.0.1:8080/test/bid

There were about 28,000 sockets open, though, which may be bumping a limit on your system.

A more real-world test might be to turn keep-alives on, which maxes out at about 400 sockets:

ab -k -c 200 -n 500000 -p post.txt http://127.0.0.1:8080/test/bid

The result for this was:
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /test/bid
Document Length: 6 bytes
Concurrency Level: 200
Time taken for tests: 33.807 seconds
Complete requests: 500000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 500000
Total transferred: 77000000 bytes
Total body sent: 221500000
HTML transferred: 3000000 bytes
Requests per second: 14790.04 [#/sec] (mean)
Time per request: 13.523 [ms] (mean)
Time per request: 0.068 [ms] (mean, across all concurrent requests)
Transfer rate: 2224.28 [Kbytes/sec] received
6398.43 kb/s sent
8622.71 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 11
Processing: 0 14 5.2 13 42
Waiting: 0 14 5.2 13 42
Total: 0 14 5.2 13 42
Percentage of the requests served within a certain time (ms)
50% 13
66% 16
75% 17
80% 18
90% 20
95% 21
98% 24
99% 27
100% 42 (longest request)
I suggest you try ab with the -k option, and take a look at tuning your system to support a large number of open sockets.