How do I force my JVM process to always occupy x GB ram?
Question
This is related to my previous question.

I set Xms to 512M and Xmx to 6G for one Java process, and I have three such processes. My total RAM is 32 GB, of which 2 GB is always occupied. I executed the free command to make sure that at least 27 GB was free, and my jobs need at most 18 GB at any time.

Everything was running fine: each job occupied around 4 to 5 GB but actually used around 3 to 4 GB. I understand that Xmx doesn't mean the process should always occupy 6 GB.

When another X process was started on the same server by another user, it occupied 14 GB, and then one of my processes failed. I understand that I need to increase RAM or keep the two colliding jobs apart.

The question here is: how can I force my job to always use 6 GB, and why does it throw a GC limit reached error in this case?

I used VisualVM to monitor the processes, and jstat as well. Any advice is welcome.
Answer 1
Score: 3
Simple answer: -Xmx is not a hard limit on the JVM. It only limits the heap available to Java code inside the JVM. Lower your -Xmx and you may stabilize the process memory at a size that suits you.

Long answer: the JVM is a complex machine; think of it as an OS for your Java code. The virtual machine needs extra memory for its own housekeeping (e.g. GC metadata), for the threads' stacks, for "off-heap" memory (e.g. memory allocated by native code through JNI, buffers), etc.
-Xmx only limits the heap size for objects: the memory your Java code deals with directly. Nothing else is accounted for by this setting.
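To see what -Xmx actually caps from inside the process, here is a minimal sketch (not from the original answer; the class name HeapLimits and the printed labels are illustrative):

public class HeapLimits {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() reflects the -Xmx ceiling; totalMemory() is what the heap
        // has actually grown to; neither includes stacks, metaspace, JNI buffers, etc.
        System.out.println("max heap  (MB): " + rt.maxMemory() / mb);
        System.out.println("committed (MB): " + rt.totalMemory() / mb);
        System.out.println("free heap (MB): " + rt.freeMemory() / mb);
    }
}

Run it as java -Xmx6g HeapLimits and compare its output with the resident size that free or top reports for the same PID: the OS figure will be noticeably larger.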
There's a newer JVM setting, -XX:MaxRAM (1, 2), that tells the JVM how much RAM to assume is available, so that its ergonomics keep the derived defaults (such as the maximum heap) within that limit.
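As an aside (an assumption about how you would check this, not part of the original answer): launching the HeapLimits sketch above with -XX:MaxRAM=6g and no explicit -Xmx makes maxMemory() report the default heap the JVM derives from that limit, which by the usual ergonomics is only a fraction of the 6 GB.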
From your other question:
> It is multi-threaded: 100 reader and 100 writer threads. Each one has its own connection to the database.
Keep in mind that the OS's I/O buffers also need memory of their own.
If you have over 200 threads, you also pay the price: N*(stack size), plus approximately N*(TLAB size) reserved in the young generation for each thread (dynamically resizable):

java -Xss1024k -XX:+PrintFlagsFinal -version 2> /dev/null | grep -Ei 'tlab|threadstacksize'
    size_t MinTLABSize     = 2048
    intx ThreadStackSize   = 1024
Approximately half a gigabyte just for this (and probably more)!
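As a rough back-of-the-envelope check (not from the original answer; the class name is illustrative, and the 1024 KB figure assumes the -Xss1024k used above), multiply the live thread count by the stack size:

import java.lang.management.ManagementFactory;

public class StackFootprint {
    public static void main(String[] args) {
        // Live JVM threads: application threads plus internal ones (GC, JIT compiler, etc.)
        int liveThreads = ManagementFactory.getThreadMXBean().getThreadCount();
        long stackKb = 1024; // assumed to match the -Xss1024k from the command above
        System.out.printf("%d threads * %d KB stack ~= %d MB reserved for stacks alone%n",
                liveThreads, stackKb, liveThreads * stackKb / 1024);
    }
}

For 200+ reader/writer threads at 1 MB each, that is already more than 200 MB before TLABs and other per-thread bookkeeping are counted.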
> Thread Stack Size (in Kbytes). (0 means use default stack size) [Sparc: 512; Solaris x86: 320 (was 256 prior in 5.0 and earlier); Sparc 64 bit: 1024; Linux amd64: 1024 (was 0 in 5.0 and earlier); all others 0.] - Java HotSpot VM Options; Linux x86 JDK source
In short: -Xss (stack size) defaults depend on the VM and OS environment.
Thread Local Allocation Buffers are more intricate; they help against allocation contention/resource locking. For an explanation of the setting and what TLABs do, see: TLAB allocation and TLABs and Heap Parsability.
Further reading: "Native Memory Tracking" and Q: "Java using much more memory than heap size"
> why does it throw GC limit reached error in this case?
The actual message is "GC overhead limit exceeded". In short: each GC cycle reclaimed too little memory, and the GC ergonomics decided to abort. Your process needs more memory.
> When another X process was started on the same server by another user, it occupied 14 GB. Then one of my processes failed.
Another point about running multiple large-memory processes back to back; consider this:
java -Xms28g -Xmx28g <...>;
# above process finishes
java -Xms28g -Xmx28g <...>; # crashes, can't allocate enough memory
When the first process finishes, your OS needs some time to zero out the memory released by the ending process before it can hand those physical memory regions to the second process. This can take a while, and until then you cannot start another "big" process that immediately asks for the full 28 GB of heap (observed on WinNT 6.1). This can be worked around as follows:
- Reduce -Xms so the allocation happens later in the second process's lifetime
- Reduce the overall -Xmx heap
- Delay the start of the second process