Lookup [HOST]: no such host error in Go

Problem

I have this test program which fetches URLs in parallel, but when I increase the parallel count to about 1040, I start to get "lookup www.httpbin.org: no such host" errors.

After some googling, I found others saying that not closing the response body causes this problem, but I do close it with res.Body.Close().

What's the problem here? Thanks very much.

package main

import (
    "fmt"
    "net/http"
    "io/ioutil"
)

func get(url string) ([]byte, error) {

    client := &http.Client{}
    req, _ := http.NewRequest("GET", url, nil)
 
    res, err := client.Do(req)

    if err != nil {
        fmt.Println(err)
        return nil, err
    } 

    bytes, read_err := ioutil.ReadAll(res.Body)
    res.Body.Close()

    fmt.Println(bytes)

    return bytes, read_err
}

func main() {
    for i := 0; i < 1040; i++ {
        go get(fmt.Sprintf("http://www.httpbin.org/get?a=%d", i))
    }
}

Answer 1

Score: 16

Well, technically your process is limited by the kernel to about 1000 open file descriptors. Depending on the context, you might need to increase this number.

Run the following in your shell (note the last line):

$ ulimit -a
-t: cpu time (seconds)         unlimited
-f: file size (blocks)         unlimited
-d: data seg size (kbytes)     unlimited
-s: stack size (kbytes)        8192
-c: core file size (blocks)    0
-v: address space (kb)         unlimited
-l: locked-in-memory size (kb) unlimited
-u: processes                  709
-n: file descriptors           2560

To increase it (temporarily):

$ ulimit -n 5000
(no output)

Then verify the file descriptor limit:

$ ulimit -n
5000


Answer 2

Score: 12

That's because you may have up to 1040 concurrent calls in your code, so you may very well be in a state with 1040 response bodies opened and none yet closed.

You need to limit the number of goroutines used.

Here's one possible solution, with a limit of at most 100 concurrent calls:

func getThemAll() {
    nbConcurrentGet := 100
    urls := make(chan string, nbConcurrentGet)
    for i := 0; i < nbConcurrentGet; i++ {
        go func() {
            for url := range urls {
                get(url)
            }
        }()
    }
    for i := 0; i < 1040; i++ {
        urls <- fmt.Sprintf("http://www.httpbin.org/get?a=%d", i)
    }
}

If you call this in the main function of your program, it may stop before all tasks are finished. You can use a sync.WaitGroup to prevent this:

func main() {
    nbConcurrentGet := 100
    urls := make(chan string, nbConcurrentGet)
    var wg sync.WaitGroup
    for i := 0; i < nbConcurrentGet; i++ {
        go func() {
            for url := range urls {
                get(url)
                wg.Done()
            }
        }()
    }
    for i := 0; i < 1040; i++ {
        wg.Add(1)
        urls <- fmt.Sprintf("http://www.httpbin.org/get?a=%d", i)
    }
    wg.Wait()
    fmt.Println("Finished")
}

huangapple
  • Published on 2012-10-18 18:43:39
  • Please retain the original link when reposting: https://go.coder-hub.com/12952833.html