How can I analyse a large heap dump of around 35-40 GB?

Question

I have to analyse a Java heap dump of 35-40 GB, which can't be loaded on my local machine, only on remote servers with a large amount of memory.

I found https://stackoverflow.com/questions/7254017/tool-for-analyzing-large-java-heap-dumps as the best link so far. But after configuring everything and executing all the command lines properly, I was not able to get any report file.

My ParseHeapDump.sh file looks like this:

#!/bin/sh
#
# This script parses a heap dump.
#
# Usage: ParseHeapDump.sh <path/to/dump.hprof> [report]*
#
# The leak report has the id org.eclipse.mat.api:suspects
# The top component report has the id org.eclipse.mat.api:top_components
#
./MemoryAnalyzer -consolelog -application org.eclipse.mat.api.parse "$@" -vmargs -Xms8g -Xmx10g -XX:-UseGCOverheadLimit
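
Based on the usage comment in the script, I invoke it along these lines (the dump path here is a placeholder):

./ParseHeapDump.sh /path/to/dump.hprof org.eclipse.mat.api:suspects org.eclipse.mat.api:top_components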

My MemoryAnalyzer.ini file looks like this:

-startup
plugins/org.eclipse.equinox.launcher_1.5.0.v20180512-1130.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.700.v20180518-1200
java -Xmx8g -Xms10g -jar plugins/org.eclipse.equinox.launcher_1.5.0.v20180512-1130.jar -consoleLog -consolelog -application org.eclipse.mat.api.parse "$@"
-vmargs
-Xms8g
-Xmx10g

Please tell me if I'm making any mistake in the configuration, or suggest any other tool available on the market.

Answer 1

Score: 3

Processing a large heap dump is a challenge. Both VisualVM and the Eclipse Memory Analyzer require too much memory to process heap dumps on the order of a few dozen GiB.

Commercial profilers show better results (YourKit in particular), though I'm not sure of their practical limits.

To routinely process 100+ GiB, I came up with a headless solution, heaplib (https://github.com/aragozin/heaplib), which is based on the code base of VisualVM (NetBeans, actually).

Heaplib is neither graphical nor interactive; it is oriented towards automated reporting. The tool lets you write heap-analysis code in OQL/JavaScript (or in Java if you wish), though its capabilities are limited in order to keep memory requirements down. Processing 100 GiB can take hours, but for a non-interactive workflow that is acceptable.
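
If you want to try it, a minimal sketch for fetching and building it from source (this assumes the project uses a standard Maven build; adjust if it does not):

git clone https://github.com/aragozin/heaplib.git
cd heaplib
mvn clean install    # assumption: Maven build; check the repository's README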

Answer 2

Score: 0

The challenge is that RAM needs to be larger than the heap dump (.hprof) file, and a typical laptop running Windows has 16 GB of RAM or less. Analysing a 35-40 GB heap dump on a local system is therefore almost impossible.
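
A quick way to sanity-check this on the server (a sketch; the dump path is a placeholder):

ls -lh /path/to/heap.hprof   # size of the dump on disk
free -g                      # total and free RAM in GiB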

Configure MAT on a remote Unix server that has enough RAM (more than the 35-40 GB dump) and run it from the command line. (The GUI is always slower than the command line anyway.)
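
A sketch of putting MAT on the server (the download URL and version are placeholders; use the current Linux build from the Eclipse MAT download page):

wget https://example.org/MemoryAnalyzer-<version>-linux.gtk.x86_64.zip
unzip MemoryAnalyzer-<version>-linux.gtk.x86_64.zip
cd mat    # recent archives unpack into a "mat" directory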

Assigning 8-10 GB won't work for a dump this size, so increase the heap assigned to the Java process (a workflow sketch follows the flags below):

-vmargs
-Xms40g
-Xmx40g
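
With that in place, a minimal sketch of the headless workflow on the server (user, host, and paths are placeholders):

# parse the dump and generate the leak-suspects report
./ParseHeapDump.sh /data/dumps/heap.hprof org.eclipse.mat.api:suspects

# MAT writes the report as a zip next to the dump (typically
# <dump-name>_Leak_Suspects.zip); copy it back to view it locally
scp user@server:/data/dumps/heap_Leak_Suspects.zip .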

A more detailed answer can be found here: https://stackoverflow.com/a/76298700/5140851
