Performance Drop whilst testing Go Language
Question
I've been testing out a simple web server written in Go with http_load. When running the test for 1 second with 100 parallel connections, I've seen 16k requests completed. However, running the test for 10 seconds results in a similar number of requests being completed, at around 1/10th of the rate of the 1-second test.
Additionally, if I run several 1-second tests close together, the first test will complete 16k requests and the subsequent tests will complete just 100-200 requests.
package main

import "net/http"

func main() {
    bytes := make([]byte, 1024)
    for i := 0; i < len(bytes); i++ {
        bytes[i] = 100
    }

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write(bytes)
    })

    http.ListenAndServe(":8000", nil)
}
I'm wondering whether there is any reason why the performance would hit a ceiling whilst dealing with this number of requests, and whether there is something I have missed in the implementation of the above web server.
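For reference, the load pattern described above (100 parallel clients hitting the handler for a fixed duration) can be approximated with a small Go client. This is only an illustrative sketch, not the http_load tool from the question, and the URL, duration, and counting logic here are assumptions. Note that Go's default HTTP transport reuses connections via keep-alive, whereas http_load is an HTTP/1.0 tool that opens a new connection per fetch, so the two clients stress the operating system differently.

package main

import (
    "fmt"
    "io"
    "net/http"
    "sync"
    "sync/atomic"
    "time"
)

func main() {
    // Hypothetical reproduction: 100 concurrent workers fetching the
    // server from the question for a fixed duration. The worker count
    // and URL mirror the question; everything else is an assumption.
    const workers = 100
    deadline := time.Now().Add(1 * time.Second)

    var done int64
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for time.Now().Before(deadline) {
                resp, err := http.Get("http://localhost:8000/")
                if err != nil {
                    continue
                }
                io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
                resp.Body.Close()
                atomic.AddInt64(&done, 1)
            }
        }()
    }
    wg.Wait()
    fmt.Println("requests completed:", done)
}

Run the server from the question first, then run this client as a separate process.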
Answer 1
Score: 1
This is probably a limitation of your own system rather than the Go server. The same kind of degradation happens if you try hitting something like Google with http_load:
$> http_load -parallel 100 -seconds 10 google.txt
1000 fetches, 100 max parallel, 219000 bytes, in 10.0006 seconds
219 mean bytes/connection
99.9944 fetches/sec, 21898.8 bytes/sec
msecs/connect: 410.409 mean, 4584.36 max, 16.949 min
msecs/first-response: 279.595 mean, 3647.74 max, 35.539 min
HTTP response codes:
code 301 -- 1000
$> http_load -parallel 100 -seconds 50 google.txt
729 fetches, 100 max parallel, 159213 bytes, in 50.0008 seconds
218.399 mean bytes/connection
14.5798 fetches/sec, 3184.21 bytes/sec
msecs/connect: 1588.57 mean, 36192.6 max, 17.944 min
msecs/first-response: 237.376 mean, 33816.7 max, 33.092 min
2 bad byte counts
HTTP response codes:
code 301 -- 727
$> http_load -parallel 100 -seconds 100 google.txt
1091 fetches, 100 max parallel, 223161 bytes, in 100 seconds
204.547 mean bytes/connection
10.91 fetches/sec, 2231.61 bytes/sec
msecs/connect: 1652.16 mean, 35860.4 max, 17.825 min
msecs/first-response: 319.259 mean, 35482.1 max, 31.892 min
HTTP response codes:
code 301 -- 1019
As you can see, the rate goes down quite a bit the longer you hit Google (google.txt contains the single URL "http://google.com"). This is most likely due to limitations in your system (the maximum number of open connections your program can have, memory, CPU, etc.).
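One concrete limit worth checking is the per-process file-descriptor limit, since every open TCP connection consumes a descriptor on both the client and the server side. Below is a minimal sketch, assuming a Unix-like system, that prints the current soft and hard limits from inside a Go process; the actual values and how to raise them (for example with ulimit -n) are OS-specific.

package main

import (
    "fmt"
    "syscall"
)

func main() {
    // Print the soft and hard limits on open file descriptors for this
    // process; each TCP connection uses one descriptor.
    // Assumes a Unix-like system (syscall.Getrlimit is not available on Windows).
    var lim syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
        fmt.Println("getrlimit:", err)
        return
    }
    fmt.Printf("open file limit: soft=%d hard=%d\n", lim.Cur, lim.Max)
}

If the soft limit is a low default such as 1024, a burst of a few thousand connections can exhaust it well before the handler code itself becomes the bottleneck.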