How fast is the Go 1.5 GC with terabytes of RAM?
Question
Java cannot use terabytes of RAM because the GC pause is way too long (minutes). With the recent update to the Go GC, I'm wondering if its GC pauses are short enough for use with huge amounts of RAM, such as a couple of terabytes.
Are there any benchmarks of this yet? Can we use a garbage-collected language with this much RAM now?
Answer 1
Score: 14
tl;dr:
- You can't use TBs of RAM with a single Go process right now. The max is 512 GB on Linux, and the most I've seen tested is 240 GB.
- With the current background GC, GC workload tends to be more important than GC pauses.
- You can understand GC workload as pointers * allocation rate / spare RAM. Of apps using tons of RAM, only those with few pointers or little allocation will have a low GC workload.
I agree with inf's comment that huge heaps are worth asking other folks about (or testing). JimB notes that Go heaps have a hard limit of 512 GB right now, and 240 GB is the most I've seen tested.
Some things we know about huge heaps, from the design document and the GopherCon 2015 slides:
- The 1.5 collector doesn't aim to cut GC work, just cut pauses by working in the background.
- Your code is paused while the GC scans pointers on the stack and in globals.
- The 1.5 GC has a short pause on a GC benchmark with a roughly 18 GB heap, as shown by the rightmost yellow dot along the bottom of a graph from the GopherCon talk.
Folks running a couple production apps that initially had about 300ms pauses reported drops to ~4ms and ~20ms. Another app reported their 95th percentile GC time went from 279ms to ~10ms.
Go 1.6 added polish and pushed some of the remaining work to the background. As a result, tests with heaps up to a bit over 200GB still saw a max pause time of 20ms, as shown in a slide in an early 2016 State of Go talk:
The same application that had 20ms pause times under 1.5 had 3-4ms pauses under 1.6, with about an 8GB heap and 150M allocations/minute.
Twitch, who use Go for their chat service, reported that by Go 1.7 pause times had been reduced to 1ms with lots of running goroutines (https://blog.twitch.tv/gos-march-to-low-latency-gc-a6fa96f06eb7#.9lsrreb5u).
1.8 took stack scanning out of the stop-the-world phase, bringing most pauses well under 1ms, even on large heaps. Early numbers look good. Occasionally applications still have code patterns that make a goroutine hard to pause, effectively lengthening the pause for all other threads, but generally it's fair to say the GC's background work is now usually much more important than GC pauses.
Some general observations on garbage collection, not specific to Go:
- The frequency of collections depends on how quickly you use up the RAM you're willing to give to the process.
- The amount of work each collection does depends in part on how many pointers are in use.
(That includes the pointers within slices, interface values, strings, etc.)
Rephrased, an application accessing lots of memory might still not have a GC problem if it only has a few pointers (e.g., it handles relatively few large []byte buffers), and collections happen less often if the allocation rate is low (e.g., because you applied sync.Pool to reuse memory wherever you were chewing through RAM most quickly).
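As a concrete illustration of the sync.Pool point, here is a minimal sketch (the buffer size and `process` function are made up for the example) that reuses scratch buffers instead of allocating a fresh one on every call:

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool hands out reusable 64 KB scratch buffers. Reusing them keeps
// the allocation rate (and therefore GC frequency) down on hot paths.
// A pointer to the slice is pooled to avoid an extra allocation on Put.
var bufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 64<<10)
		return &b
	},
}

// process copies its input into pooled scratch space and returns the
// number of bytes handled; the buffer goes back to the pool afterward.
func process(data []byte) int {
	bp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bp)
	return copy(*bp, data)
}

func main() {
	fmt.Println(process([]byte("hello"))) // prints 5
}
```

Note that pooled buffers are shared across goroutines over time, so nothing may retain a reference to the buffer after Put.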
So if you're looking at something involving heaps of hundreds of GB that's not naturally GC-friendly, I'd suggest you consider any of:
- writing in C or such
- moving the bulky data out of the object graph. For example, you could manage data in an embedded DB like [bolt](https://github.com/boltdb/bolt), put it in an outside DB service, or use something like groupcache or memcache if you want more of a cache than a DB
- running a set of smaller-heap'd processes instead of one big one
- just carefully prototyping, testing, and optimizing to avoid memory issues.
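One hedged sketch of "moving the bulky data out of the object graph" without reaching for an external DB: pack records into a single []byte and address them by integer offsets, so the GC traces one slice instead of millions of small pointer-bearing objects. The packed offset/length encoding below (records under 1 MB, total data under 16 TB) is purely illustrative:

```go
package main

import "fmt"

// store keeps record bytes back to back in one big slice and addresses
// them with integers, so the heap holds almost no pointers to trace.
type store struct {
	data []byte   // all record bytes, concatenated
	offs []uint64 // per record: offset<<20 | length (records < 1 MB)
}

// add appends a record and returns its index.
func (s *store) add(rec []byte) int {
	off := uint64(len(s.data))
	s.data = append(s.data, rec...)
	s.offs = append(s.offs, off<<20|uint64(len(rec)))
	return len(s.offs) - 1
}

// get returns the bytes of record i (a view into data, not a copy).
func (s *store) get(i int) []byte {
	off, n := s.offs[i]>>20, s.offs[i]&(1<<20-1)
	return s.data[off : off+n]
}

func main() {
	var s store
	i := s.add([]byte("hello"))
	j := s.add([]byte("world"))
	fmt.Printf("%s %s\n", s.get(i), s.get(j)) // prints "hello world"
}
```

This is the same idea embedded stores like bolt apply more thoroughly: the data the GC can see is a handful of big flat slices, so collection work stays roughly constant no matter how many records you hold.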
Answer 2
Score: 3
The new Java ZGC garbage collector can now use 16 terabytes of memory and garbage collect in under 10ms.