Go, tcp too many open files debug

Question
Here's a straightforward Go http (tcp) connection test script:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"sync"
)

func main() {
	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello, client")
	}))
	defer ts.Close()
	var wg sync.WaitGroup
	for i := 0; i < 2000; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			resp, err := http.Get(ts.URL)
			if err != nil {
				panic(err)
			}
			greeting, err := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			if err != nil {
				panic(err)
			}
			fmt.Printf("%d: %s", i, greeting)
		}(i)
	}
	wg.Wait()
}
If I run this on Ubuntu I get:

panic: Get http://127.0.0.1:33202: dial tcp 127.0.0.1:33202: too many open files

Other posts say to make sure to Close the connection, which I am doing here. Others say to increase the maximum number of connections with ulimit, or to try sudo sysctl -w fs.inotify.max_user_watches=100000, but that still does not work.

How do I run millions of tcp connection goroutines on a single server? It crashes with only 2,000 connections.
Thanks,
Answer 1
Score: 51
I think you need to change your max file descriptors. I ran into the same problem on one of my development VMs and needed to change the file descriptor max, not any inotify settings.
FWIW, your program runs fine on my VM.
·> ulimit -n
120000
But after I run
·> ulimit -n 500
·> ulimit -n
500
I get:
panic: Get http://127.0.0.1:51227: dial tcp 127.0.0.1:51227: socket: too many open files
** Don't fall into the trap that Praveen did **
Note that ulimit != ulimit -n.
➜ cmd git:(wip-poop) ✗ ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 1418
-n: file descriptors 4864
Answer 2
Score: 18
Go's http package doesn't specify request timeouts by default. You should always include a timeout in your service. What if a client doesn't close their session? Your process will keep old sessions alive and keep hitting the ulimit. A bad actor could intentionally open thousands of sessions, DoSing your server. Heavy-load services should adjust ulimits as well, but you need timeouts as a backstop.
Ensure you specify a timeout:
http.DefaultClient.Timeout = time.Minute * 10
You can validate before and after by monitoring files opened by your process:
lsof -p [PID_ID]
Answer 3
Score: 16
If you want to run millions of goroutines that each open/read/close a socket, you had better raise your ulimit, or open/read/close the socket up front and pass the value read into the goroutine. Either way, I would use a buffered channel to control how many file descriptors you allow to be open at once:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"sync"
)

const (
	// maxFileDescriptors caps how many requests are in flight,
	// and therefore how many descriptors the client holds open.
	maxFileDescriptors = 100
)

func main() {
	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello, client")
	}))
	defer ts.Close()
	var wg sync.WaitGroup
	maxChan := make(chan bool, maxFileDescriptors)
	for i := 0; i < 1000; i++ {
		maxChan <- true // blocks once maxFileDescriptors requests are in flight
		wg.Add(1)
		go func(url string, i int, maxChan chan bool, wg *sync.WaitGroup) {
			defer wg.Done()
			defer func(maxChan chan bool) { <-maxChan }(maxChan) // release the slot
			resp, err := http.Get(url)
			if err != nil {
				panic(err)
			}
			greeting, err := ioutil.ReadAll(resp.Body)
			if err != nil {
				panic(err)
			}
			err = resp.Body.Close()
			if err != nil {
				panic(err)
			}
			fmt.Printf("%d: %s", i, string(greeting))
		}(ts.URL, i, maxChan, &wg)
	}
	wg.Wait()
}
Answer 4
Score: 9
HTTP/1.1 uses persistent connections by default:
A significant difference between HTTP/1.1 and earlier versions of HTTP is that persistent connections are the default behavior of any HTTP connection.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html
The solution is to inform the server that the client wants to close the connection after the transaction is complete. This can be done by setting the Connection header, req.Header.Set("Connection", "close"), or by setting the Close field to true on the http.Request: req.Close = true. After doing that, the "too many open files" issue went away, as the program was no longer keeping HTTP connections open and thus not using up file descriptors.
I solved this by adding req.Close = true and req.Header.Set("Connection", "close"). I think it's better than changing ulimit.
source: http://craigwickesser.com/2015/01/golang-http-to-many-open-files/
Answer 5
Score: 3
I also had to manually set the close-connection header to avoid the file descriptor issue:

r, _ := http.NewRequest(http.MethodDelete, url, nil)
r.Close = true
res, err := c.Do(r)
if err != nil {
	panic(err)
}
res.Body.Close()

Without r.Close = true and res.Body.Close() I hit the file descriptor limit. With both I could fire off as many requests as I needed.
Answer 6
Score: 2
To temporarily allow more open files (these launchctl/sysctl commands are for macOS):
View your current settings by running:
sudo launchctl limit maxfiles
Increase the limit to 65535 files by running the following commands; if your workload needs fewer, you can choose lower soft and hard limits than the 65535/200000 used here.
sudo launchctl limit maxfiles 65535 200000
ulimit -n 65535
sudo sysctl -w kern.maxfiles=200000
sudo sysctl -w kern.maxfilesperproc=65535
Note that you might need to set these limits for each new shell.
Answer 7
Score: 1
Change the ulimit to avoid the error "too many open files".

By default the maximum ulimit is 4096 on Linux and 1024 on macOS. You can raise the ulimit to 4096 by typing:

ulimit -n 4096

To go beyond 4096 on Linux, you need to modify limits.conf in the /etc/security folder and set the hard limit on open files to 100000 by adding the line "* hard nofile 100000".
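Before editing limits.conf, it can help to check the soft and hard limits separately; a quick check using the standard ulimit flags (bash/zsh):

```shell
# -Sn shows the soft limit, -Hn the hard limit on open files.
# The soft limit is what the process actually hits; it can be raised
# per-shell up to the hard limit without editing limits.conf.
ulimit -Sn
ulimit -Hn
```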