relationship between container_memory_working_set_bytes and process_resident_memory_bytes and total_rss

Question

I'm looking to understand the relationship between container_memory_working_set_bytes, process_resident_memory_bytes, and total_rss (container_memory_rss) + file_mapped, so as to be better equipped to alert on the possibility of an OOM kill.

[chart: container_memory_working_set_bytes vs process_resident_memory_bytes vs total_rss]

This goes against my understanding (which is puzzling me right now), given that the container/pod is running a single process executing a compiled program written in Go.

Why is container_memory_working_set_bytes so much bigger (nearly 10 times) than process_resident_memory_bytes?

Also, the relationship between container_memory_working_set_bytes and container_memory_rss + file_mapped is strange here, something I did not expect after reading here:

> The total amount of anonymous and swap cache memory (it includes transparent hugepages), and it equals to the value of total_rss from memory.status file. This should not be confused with the true resident set size or the amount of physical memory used by the cgroup. rss + file_mapped will give you the resident set size of cgroup. It does not include memory that is swapped out. It does include memory from shared libraries as long as the pages from those libraries are actually in memory. It does include all stack and heap memory.

So the cgroup's total resident set size is rss + file_mapped. How can this value be less than container_memory_working_set_bytes for a container running in the given cgroup?

This makes me feel that something about these stats is not correct.
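As a quick sanity check on the quoted definition, here is a minimal sketch (assuming cgroup v1 with the memory controller mounted at the conventional /sys/fs/cgroup/memory path) that reads memory.stat from inside the container and prints total_rss, total_mapped_file, and their sum, i.e. the "resident set size of the cgroup" that the documentation describes:

```go
// Minimal sketch: read the cgroup v1 memory.stat file from inside the
// container and compute total_rss + total_mapped_file. Path assumes the
// default cgroup v1 mount point.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

// readMemoryStat parses "key value" lines from memory.stat into a map.
func readMemoryStat(path string) (map[string]uint64, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	stats := make(map[string]uint64)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line looks like: "total_rss 123456".
		fields := strings.Fields(sc.Text())
		if len(fields) != 2 {
			continue
		}
		v, err := strconv.ParseUint(fields[1], 10, 64)
		if err != nil {
			continue
		}
		stats[fields[0]] = v
	}
	return stats, sc.Err()
}

func main() {
	stats, err := readMemoryStat("/sys/fs/cgroup/memory/memory.stat")
	if err != nil {
		log.Fatal(err)
	}
	rss := stats["total_rss"]
	mapped := stats["total_mapped_file"]
	fmt.Printf("total_rss         = %d bytes\n", rss)
	fmt.Printf("total_mapped_file = %d bytes\n", mapped)
	fmt.Printf("cgroup RSS (rss + file_mapped) = %d bytes\n", rss+mapped)
}
```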

Following are the PromQL queries used to build the graph above:

  • process_resident_memory_bytes{container="sftp-downloader"}
  • container_memory_working_set_bytes{container="sftp-downloader"}
  • go_memstats_heap_alloc_bytes{container="sftp-downloader"}
  • container_memory_mapped_file{container="sftp-downloader"} + container_memory_rss{container="sftp-downloader"}

Answer 1

Score: 8

So the relationship seems to be like this:

container_memory_working_set_bytes = container_memory_usage_bytes - total_inactive_file

container_memory_usage_bytes, as its name implies, is the total memory used by the container; but since it also includes the file cache (i.e. inactive_file, which the OS can release under memory pressure), subtracting the inactive_file gives container_memory_working_set_bytes.

The relationship between container_memory_rss and container_memory_working_set_bytes can be summed up with the following expression:

container_memory_usage_bytes = container_memory_cache + container_memory_rss 

The cache reflects data stored on disk that is currently cached in memory; it contains the active and inactive files mentioned above.

This explains why container_memory_working_set_bytes is higher.
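To make those two identities concrete, here is a minimal sketch (again assuming cgroup v1 with the memory controller at the conventional /sys/fs/cgroup/memory mount point; these are the same kernel files cAdvisor derives the container_memory_* metrics from, and the helper names readUint and statField are just illustrative) that computes the working set and compares usage against cache + rss directly:

```go
// Sketch of the two identities above, computed straight from the cgroup v1
// files (paths assume the default mount point /sys/fs/cgroup/memory).
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

// readUint reads a file containing a single integer, e.g. memory.usage_in_bytes.
func readUint(path string) uint64 {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	v, err := strconv.ParseUint(string(bytes.TrimSpace(b)), 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	return v
}

// statField returns one "key value" entry from memory.stat.
func statField(path, key string) uint64 {
	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == key {
			v, _ := strconv.ParseUint(fields[1], 10, 64)
			return v
		}
	}
	return 0
}

func main() {
	const base = "/sys/fs/cgroup/memory/"
	usage := readUint(base + "memory.usage_in_bytes") // container_memory_usage_bytes
	inactive := statField(base+"memory.stat", "total_inactive_file")
	cache := statField(base+"memory.stat", "total_cache") // container_memory_cache
	rss := statField(base+"memory.stat", "total_rss")     // container_memory_rss

	// Identity 1: working set = usage - inactive file cache.
	fmt.Printf("working set = %d bytes\n", usage-inactive)
	// Identity 2: usage is (roughly) cache + rss; kernel memory accounting
	// can make this inexact.
	fmt.Printf("usage = %d bytes, cache + rss = %d bytes\n", usage, cache+rss)
}
```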

Ref #1

Ref #2

Answer 2

Score: 0

Not really an answer, but still two assorted points.

Does this help to make sense of the chart?

Here at my $dayjob, we have faced various issues with how different tools external to the Go runtime count and display the memory usage of a process executing a program written in Go.
Coupled with the fact that Go's GC on Linux does not actually release freed memory pages back to the kernel but merely madvise(2)s it that such pages are MADV_FREE, a GC cycle that has freed quite a hefty amount of memory does not result in any noticeable change in the "process RSS" readings taken by the external tooling (usually cgroups stats).

Hence we export our own metrics, obtained by periodically calling runtime.ReadMemStats (and runtime/debug.ReadGCStats) in any major service written in Go, with the help of a simple package written specifically for that. These readings reflect the Go runtime's true view of the memory under its control.

By the way, the NextGC field of the memory stats is super useful to watch if you have memory limits set for your containers, because once that reading reaches or surpasses your memory limit, the process in the container is surely doomed to eventually be shot down by the oom_killer.
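A rough sketch of that approach is shown below. This is not the author's actual helper package; the 30-second interval and the 512 MiB limit are arbitrary illustrative values. It periodically calls runtime.ReadMemStats and logs the fields most relevant to OOM watching, comparing NextGC against the container's memory limit:

```go
// Minimal sketch of periodically sampling the Go runtime's own view of
// memory. Interval, limit, and the chosen fields are illustrative only.
package main

import (
	"log"
	"runtime"
	"runtime/debug"
	"time"
)

func main() {
	// Hypothetical container memory limit to compare NextGC against.
	const memoryLimit = 512 << 20 // 512 MiB

	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()

	for range ticker.C {
		var ms runtime.MemStats
		runtime.ReadMemStats(&ms)

		var gc debug.GCStats
		debug.ReadGCStats(&gc)

		log.Printf("heap_alloc=%d heap_sys=%d heap_released=%d next_gc=%d num_gc=%d",
			ms.HeapAlloc, ms.HeapSys, ms.HeapReleased, ms.NextGC, gc.NumGC)

		// If the next GC target is already at or above the container's
		// memory limit, an OOM kill is likely on the way.
		if ms.NextGC >= memoryLimit {
			log.Printf("warning: NextGC (%d) >= memory limit (%d)", ms.NextGC, memoryLimit)
		}
	}
}
```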
