Go memory leak when doing concurrent os/exec.Command.Wait()


Question

I am running into a situation where a Go program is taking up 15 GB of virtual memory and continues to grow. The problem only happens on our CentOS server; I can't reproduce it on my OS X development machine.

Have I discovered a bug in Go, or am I doing something incorrectly?

I have boiled the problem down to a simple demo, which I'll describe now. First, build and run this Go server:

package main

import (
	"net/http"
	"os/exec"
)

func main() {
	http.HandleFunc("/startapp", startAppHandler)
	http.ListenAndServe(":8081", nil)
}

func startCmd() {
	cmd := exec.Command("/tmp/sleepscript.sh")
	cmd.Start()
	cmd.Wait()
}

func startAppHandler(w http.ResponseWriter, r *http.Request) {
	startCmd()
	w.Write([]byte("Done"))
}

Create a file named /tmp/sleepscript.sh and chmod it to 755:

#!/bin/bash
sleep 5

Then make several concurrent requests to /startapp. In a bash shell, you can do it this way:

for i in {1..300}; do (curl http://localhost:8081/startapp &); done

The VIRT memory should now be several gigabytes. If you re-run the above loop, VIRT will grow by several more gigabytes each time.

Update 1: The problem is that I am hitting OOM issues on CentOS (thanks @nos).

Update 2: Worked around the problem by using daemonize and synchronizing the calls to Cmd.Run(). Thanks @JimB for confirming that .Wait() running in its own thread is part of the POSIX API and that there is no way to avoid calling .Wait() without leaking resources.
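
For reference, here is a minimal sketch of what the "synchronize the calls to Cmd.Run()" part of that workaround could look like (the daemonize part is not shown, and the mutex is my own addition rather than code from the original post):

package main

import (
	"log"
	"net/http"
	"os/exec"
	"sync"
)

// runMu serializes all child-process runs, so at most one OS thread
// is ever parked in Wait at a time.
var runMu sync.Mutex

func startCmd() {
	runMu.Lock()
	defer runMu.Unlock()

	cmd := exec.Command("/tmp/sleepscript.sh")
	if err := cmd.Run(); err != nil { // Run is Start followed by Wait
		log.Printf("sleepscript: %v", err)
	}
}

func startAppHandler(w http.ResponseWriter, r *http.Request) {
	startCmd()
	w.Write([]byte("Done"))
}

func main() {
	http.HandleFunc("/startapp", startAppHandler)
	log.Fatal(http.ListenAndServe(":8081", nil))
}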


Answer 1

Score: 3

Each request you make requires Go to spawn a new OS thread to Wait on the child process. Each thread consumes a 2 MB stack plus a much larger chunk of VIRT memory (the virtual part is less relevant, but you may still be hitting a ulimit setting). Threads are reused by the Go runtime, but they are currently never destroyed, since most programs that use a large number of threads will do so again.
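
As an illustration (not part of the original answer), one way to watch that thread count from inside the repro server is to register an extra handler that reports the runtime's "threadcreate" profile; the /threads path is made up for this example:

// Register alongside /startapp in main(); requires importing
// "fmt" and "runtime/pprof".
http.HandleFunc("/threads", func(w http.ResponseWriter, r *http.Request) {
	// "threadcreate" counts the OS threads the runtime has created so far.
	fmt.Fprintf(w, "threads created: %d\n", pprof.Lookup("threadcreate").Count())
})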

If you make 300 simultaneous requests and wait for them to complete before making any others, memory should stabilize. However, if you keep sending more requests before the earlier ones have finished, you will exhaust some system resource: memory, file descriptors, or threads.

The key point is that spawning a child process and calling wait isn't free, so if this were a real-world use case you would need to limit how many startCmd() calls can run concurrently.
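
A minimal sketch of one way to enforce that limit, using a buffered channel as a counting semaphore (the limit of 10 and the variable names are my own, not from the original answer):

package main

import (
	"log"
	"net/http"
	"os/exec"
)

// sem is a counting semaphore: at most cap(sem) child processes, and
// therefore at most that many Wait threads, can be in flight at once.
var sem = make(chan struct{}, 10)

func startCmd() {
	sem <- struct{}{}        // acquire a slot (blocks while 10 are running)
	defer func() { <-sem }() // release the slot when the child has exited

	cmd := exec.Command("/tmp/sleepscript.sh")
	if err := cmd.Run(); err != nil {
		log.Printf("sleepscript: %v", err)
	}
}

func startAppHandler(w http.ResponseWriter, r *http.Request) {
	startCmd()
	w.Write([]byte("Done"))
}

func main() {
	http.HandleFunc("/startapp", startAppHandler)
	log.Fatal(http.ListenAndServe(":8081", nil))
}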


Posted by huangapple on 2015-12-18 07:16:29 · Original link: https://go.coder-hub.com/34346064.html