Huge GC performance problems with big allocated data blocks


Question

I have just noticed that if I allocate a huge memory block in a program, the garbage collector (GC) eats all of the program's time.

Here is a POC:

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strconv"
	"strings"
)

func main() {
	//////////////// !!!!!!!
	/* If I comment out the two lines below, the program runs fast */
	nodesPool := make([]int, 300e6, 300e6)
	_ = nodesPool
	////////////////////////

	file, err := os.Open("result.txt")
	if err != nil {
		panic(err)
	}
	defer file.Close()

	reader := bufio.NewReader(file)

	var lastLinkIdx = 1 // don't use the first element; 0 is reserved

	cnt := 0
	totalB := 0

	for {
		l, err := reader.ReadString('\n')
		if err == io.EOF {
			fmt.Println("EOF")
			break
		}

		cnt += 1
		totalB += len(l)

		// each line looks like "nodeId:link1,link2,..."
		lines := strings.Split(l, ":")
		nodeId, _ := strconv.Atoi(lines[0])
		_ = nodeId

		linkIdsStr := strings.Split(lines[1], ",")
		var ii = len(linkIdsStr)
		_ = ii
		/* ... */
	}

	fmt.Println("pool ", cnt, totalB, lastLinkIdx)
}

I think the GC somehow tries to move the huge memory block. Is it actually possible to allocate memory outside the GC, but still leave the GC in place for all the other libraries? Because even ReadLine needs it.

Here is the CPU profile with the memory block:

Total: 1445 samples
     428  29.6%  29.6%      722  50.0% runtime.sweepone
     375  26.0%  55.6%      375  26.0% markroot
     263  18.2%  73.8%      263  18.2% runtime.xadd
     116   8.0%  81.8%      116   8.0% strings.Count
      98   6.8%  88.6%      673  46.6% strings.genSplit
      34   2.4%  90.9%       44   3.0% runtime.MSpan_Sweep
      25   1.7%  92.7%      729  50.4% MCentral_Grow
      17   1.2%  93.8%       19   1.3% syscall.Syscall
       9   0.6%  94.5%        9   0.6% runtime.memclr
       9   0.6%  95.1%        9   0.6% runtime.memmove

Here is the profile without the memory block:

      98  27.0%  27.0%       98  27.0% strings.Count
      93  25.6%  52.6%      228  62.8% strings.genSplit
      45  12.4%  65.0%       45  12.4% scanblock
      24   6.6%  71.6%       28   7.7% runtime.MSpan_Sweep
      13   3.6%  75.2%       74  20.4% runtime.mallocgc
      12   3.3%  78.5%       12   3.3% runtime.memclr
       8   2.2%  80.7%        8   2.2% MHeap_ReclaimList
       8   2.2%  82.9%       11   3.0% syscall.Syscall
       6   1.7%  84.6%       44  12.1% MHeap_Reclaim
       6   1.7%  86.2%        6   1.7% markonly


(Full POC: https://gist.github.com/martende/252f403f0c17cb489de4)

Answer 1

Score: 1

Dmitry Vyukov of the Go team says this is a Go runtime performance issue that you can trigger with a huge allocation, and that as a workaround, "you can collect the large object as soon as it becomes dead and increase GOGC right after that."

Broadly, the GitHub issue says that the runtime creates a lot of memory-management structures (spans) that it then keeps around indefinitely and has to sweep on every GC. Going by the issue tags, a fix is targeted for Go 1.5.

His sample with the workaround is:

package main

import (
	"runtime"
	"runtime/debug"
)

var x = make([]byte, 1<<20)
var y []byte
var z []byte

func main() {
	y = make([]byte, 1<<30)  // the huge allocation
	y = nil                  // drop the only reference...
	runtime.GC()             // ...and collect it immediately
	debug.SetGCPercent(1000) // then raise GOGC so later GCs run far less often
	for i := 0; i < 1e6; i++ {
		z = make([]byte, 8192)
	}
}

(Some comments are about a totally different answer and code sample focused on avoiding allocations that I've edited out. There's no way to "tell" StackOverflow this is a new answer, so they remain.)


huangapple
  • Posted 2014-12-10 17:50:43
  • When reposting, please keep this link: https://go.coder-hub.com/27397844.html