golang memory explosion: newdefer


Question

I have a program that listens for log traffic over UDP, attempts to parse it, and then inserts it into Redis. At a certain level of traffic, memory usage seems to "explode", rapidly increasing from a few hundred megabytes to several gigabytes.

I grabbed a heap profile shortly after this happened, and it returned the following:

(pprof) top100 -cum
Total: 1731.3 MB
     0.0   0.0%   0.0%   1731.3 100.0% gosched0
  1162.5  67.1%  67.1%   1162.5  67.1% newdefer
     0.0   0.0%  67.1%   1162.5  67.1% runtime.deferproc
     0.0   0.0%  67.1%   1162.0  67.1% main.TryParse
     0.0   0.0%  67.1%    438.0  25.3% runtime.main
   301.5  17.4%  84.6%    437.5  25.3% main.main
   136.0   7.9%  92.4%    136.0   7.9% runtime.malg
     0.0   0.0%  92.4%    136.0   7.9% runtime.newproc
     0.0   0.0%  92.4%    136.0   7.9% runtime.newproc1
     1.5   0.1%  92.5%    131.3   7.6% main.RedisCuller
     0.0   0.0%  92.5%    108.5   6.3% github.com/garyburd/redigo/redis.(*conn).Do
     0.0   0.0%  92.5%    108.5   6.3% github.com/garyburd/redigo/redis.(*conn).readReply
     0.0   0.0%  92.5%    108.5   6.3% github.com/garyburd/redigo/redis.(*pooledConnection).Do
    95.8   5.5%  98.0%     95.8   5.5% cnew
     0.0   0.0%  98.0%     95.8   5.5% runtime.cnewarray
    34.0   2.0% 100.0%     34.0   2.0% runtime.convT2E
     0.0   0.0% 100.0%      0.5   0.0% main.init
     0.0   0.0% 100.0%      0.5   0.0% net/http/pprof.init
     0.0   0.0% 100.0%      0.5   0.0% sync.(*Once).Do
     0.0   0.0% 100.0%      0.5   0.0% syscall.Getenv
     0.0   0.0% 100.0%      0.5   0.0% time.init

When the program is "healthy", the profile looks more like:

(pprof) top20 -cum
Total: 186.7 MB
     0.0   0.0%   0.0%    186.7 100.0% gosched0
     0.5   0.3%   0.3%    122.7  65.7% main.RedisCuller
     0.0   0.0%   0.3%    103.5  55.4% github.com/garyburd/redigo/redis.(*pooledConnection).Do
     0.0   0.0%   0.3%    103.0  55.2% github.com/garyburd/redigo/redis.(*conn).Do
     0.0   0.0%   0.3%    103.0  55.2% github.com/garyburd/redigo/redis.(*conn).readReply
    88.2  47.2%  47.5%     88.2  47.2% cnew
     0.0   0.0%  47.5%     88.2  47.2% runtime.cnewarray
     0.0   0.0%  47.5%     57.0  30.5% main.TryParse
    57.0  30.5%  78.0%     57.0  30.5% newdefer
     0.0   0.0%  78.0%     57.0  30.5% runtime.deferproc
    34.0  18.2%  96.3%     34.0  18.2% runtime.convT2E
     1.5   0.8%  97.1%      6.5   3.5% main.main
     0.0   0.0%  97.1%      6.5   3.5% runtime.main
     5.0   2.7%  99.7%      5.0   2.7% runtime.malg
     0.0   0.0%  99.7%      5.0   2.7% runtime.newproc
     0.0   0.0%  99.7%      5.0   2.7% runtime.newproc1
     0.0   0.0%  99.7%      0.5   0.3% bufio.NewWriter
     0.0   0.0%  99.7%      0.5   0.3% bufio.NewWriterSize
     0.0   0.0%  99.7%      0.5   0.3% github.com/garyburd/redigo/redis.(*Pool).get
     0.0   0.0%  99.7%      0.5   0.3% github.com/garyburd/redigo/redis.(*pooledConnection).get

The only defer I have in my code surrounds the parsing function (since it can frequently fail):

    for {
            rlen, _, err := sock.ReadFromUDP(buf[0:])
            checkError(err)
            raw := logrow.RawRecord(string(buf[:rlen]))
            go TryParse(raw, c)
    }

...

    func TryParse(raw logrow.RawRecord, c chan logrow.Record) {
            defer func() {
                    if r := recover(); r != nil {
                            //log.Printf("Failed Parse due to panic: %v", raw)
                            return
                    }
            }()
            rec, ok := logrow.ParseRawRecord(raw)
            if !ok {
                    //log.Printf("Failed Parse: %v", raw)
                    return
            } else {
                    c <- rec
            }
    }

Does anyone see anything obvious I'm doing wrong that could cause memory to balloon suddenly? Or can anyone offer some direction for pinning the problem down?

Edit (more code around the logrow.Record channel):

    c := make(chan logrow.Record)
    ...
    go RedisInserter(c, bucket, retention, pool)

    func RedisInserter(c chan logrow.Record, bucket, retention int, p *redis.Pool) {
            for rec := range c {
                    logrow.SendToRedis(rec, bucket, retention, p)
            }
    }


Answer 1

Score: 1

It turned out to be a leak involving the closure in the deferred function (a bug in Go itself), as seen in:

    defer func() {
            if r := recover(); r != nil {
                    //log.Printf("Failed Parse due to panic: %v", raw)
                    return
            }
    }()

So rebuilding with the latest version of Go resolved the problem (reference: https://codereview.appspot.com/10784043/). However, the smart thing to do is of course to design the parsing function so it doesn't end up trying to slice out of bounds and panic in the first place.


huangapple
  • Published on 2013-09-24 20:27:39
  • Please retain this link when republishing: https://go.coder-hub.com/18981622.html