"localhost: no such host" after 250 connections in Go when using ResponseWriter.Write
Question
I have the following http client/server code:

Server

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Println("Req: ", r.URL)
		w.Write([]byte("OK")) // <== PROBLEMATIC LINE
		// w.WriteHeader(200) // Works as expected
	})
	log.Fatal(http.ListenAndServe(":5008", nil))
}
Client

func main() {
	client := &http.Client{}
	for i := 0; i < 500; i++ {
		url := fmt.Sprintf("http://localhost:5008/%02d", i)
		req, _ := http.NewRequest("GET", url, nil)
		_, err := client.Do(req)
		if err != nil {
			fmt.Println("error: ", err)
		} else {
			fmt.Println("success: ", i)
		}
		time.Sleep(10 * time.Millisecond)
	}
}
When I run the client above against the server, then after 250 connections I get the following error from client.Do:

error: Get http://localhost:5008/250: dial tcp: lookup localhost: no such host

and no more connections will succeed.
If I change the line in the server from w.Write([]byte("OK")) to w.WriteHeader(200), however, then there is no limit to the number of connections and everything works as expected.
What am I missing here?
Answer 1
Score: 14
You are not closing the body. When you do any writes from the server, the connection is left open because the response has not been read yet. When you just WriteHeader, the response is done and the connection can be reused or closed.
To be completely honest, I do not know why leaving connections open causes domain lookups to fail. Based on the fact that 250 is awfully close to the round number 256, I would guess there is an artificial limit imposed by the OS that you are hitting. Perhaps the maximum number of FDs allowed is 256? That seems low, but it would explain the problem.
func main() {
	client := &http.Client{}
	for i := 0; i < 500; i++ {
		url := fmt.Sprintf("http://localhost:5008/%02d", i)
		req, _ := http.NewRequest("GET", url, nil)
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("error: ", err)
		} else {
			fmt.Println("success: ", i)
			resp.Body.Close() // resp is nil when err != nil, so close only on success
		}
		time.Sleep(10 * time.Millisecond)
	}
}
Answer 2
Score: 6
The application must close the response body on the client, as described at the beginning of the net/http package documentation.
func main() {
	client := &http.Client{}
	for i := 0; i < 500; i++ {
		url := fmt.Sprintf("http://localhost:5008/%02d", i)
		req, _ := http.NewRequest("GET", url, nil)
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("error: ", err)
		} else {
			resp.Body.Close() // <---- close is required
			fmt.Println("success: ", i)
		}
		time.Sleep(10 * time.Millisecond)
	}
}
If the application does not close the response body, then the underlying network connection may not be closed or returned to the client's connection pool. In this case, each new request creates a new network connection. The process eventually hits the file descriptor limit, and anything that requires a file descriptor will fail. This includes name lookups and opening new connections.
The default limit on the number of open file descriptors on OS X is 256. I'd expect the client application to fail just short of that limit.
Because each connection to the server uses a file descriptor on the server, the server may also have reached its file descriptor limit.
The response body has zero length when w.Write([]byte("OK")) is removed from the server code. This triggers an optimization in the client for zero-length response bodies, where the connection is closed or returned to the pool before the application closes the response body.
Answer 3
Score: 1
I had the same problem on Mac OS X when making POST requests concurrently:

> After 250 requests it will have that error;

Using go1.8.3.
The fix for my problem was to close both the request and response bodies:
for i := 0; i < 10; i++ {
	res, err := client.Do(req)
	if err == nil {
		globalCounter.Add(1)
		res.Body.Close()
		req.Body.Close()
		break
	} else {
		log.Println("Error:", err, "retrying...", i)
	}
}
Answer 4
Score: 1
I am using go version go1.9.4 linux/amd64.
I tried different ways to solve this problem. Nothing helped except this post: http://craigwickesser.com/2015/01/golang-http-to-many-open-files/
Along with resp.Body.Close(), I had to add req.Header.Set("Connection", "close"):
func PrettyPrint(v interface{}) (err error) {
	b, err := json.MarshalIndent(v, "", " ")
	if err == nil {
		fmt.Println(string(b))
	}
	return
}

func Json(body []byte, v interface{}) error {
	return json.Unmarshal(body, v)
}

func GetRequests(hostname string, path string) []map[string]interface{} {
	transport := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		Dial: (&net.Dialer{
			Timeout:   0,
			KeepAlive: 0,
		}).Dial,
		TLSHandshakeTimeout: 10 * time.Second,
	}
	httpClient := &http.Client{Transport: transport}
	req, reqErr := http.NewRequest("GET", "https://"+hostname+path, nil)
	if reqErr != nil { // check the error before using req
		fmt.Println("error in making request", reqErr)
		return nil
	}
	req.SetBasicAuth("user", "pwd")
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Connection", "close")
	resp, err := httpClient.Do(req)
	if err != nil { // resp is nil on error, so return before deferring Close
		fmt.Println("error in calling request", err)
		return nil
	}
	defer resp.Body.Close()
	content, _ := ioutil.ReadAll(resp.Body)
	// fmt.Println(string(content))
	var json_resp_l []map[string]interface{}
	Json(content, &json_resp_l)
	if len(json_resp_l) == 0 {
		var json_resp map[string]interface{}
		Json(content, &json_resp)
		if json_resp != nil {
			json_resp_l = append(json_resp_l, json_resp)
		}
	}
	PrettyPrint(json_resp_l)
	return json_resp_l
}

func main() {
	GetRequests("server.com", "/path")
}
Answer 5
Score: 1
I've encountered the same no such host error, although it was not a request to localhost. It happened after about 250 requests. I had written defer resp.Body.Close() as well, of course, but the error kept happening.
I finally arrived at setting req.Header.Set("Connection", "close") on the HTTP/1.1 request. It means the connection is short-lived in HTTP/1.1.
In general it is better to keep connections alive once established, so Connection: close may NOT be the recommended option. However, it can be a valid choice, for example if you want to issue many requests for load verification.
Answer 6
Score: 0
I think the key point is that you should share one http.Transport across your http.Client values; the http.Transport pools connections and reuses them for better performance.