16
votes

I'm working on a single machine with 512GB of RAM (addressed by several AMD Opteron 6212 CPUs), of which about 300GB is currently free. I start a large Java computation with

java path/to/myApp -Xms280g -Xmx280g > output.txt

which should make Java reserve 280GB immediately and report an error if that allocation fails. Strangely, no error occurs: top shows a memory usage of only 30.4GB and the program keeps running. How can this happen? Isn't Java supposed to crash if the initial heap size cannot be allocated?

In practice, I do get OutOfMemoryError (Java heap space / GC overhead limit exceeded) once the 30.4GB is full, well before the 280GB is ever reached. Running with 250GB or 300GB yields a similar 30.3GB ~ 30.4GB limit. I'm running the OpenJDK 64-bit Server VM with the OpenJDK Runtime Environment (IcedTea6) on Gentoo Linux, and there is plenty of free RAM (over 300GB).

Have you also checked the behavior and the parameters with VisualVM, e.g. whether it really allocates the memory at start time and not while processing data? – MortalFool
The program actually starts with only 30-ish GB assigned and crashes while the matrix is being built, so the program really does start even though the initialization was not successful. – user1111929
What is the result of java -version? Also, you may find this useful. – Elliott Frisch
I'm assuming that you're running on Linux here, but it could be that while Java is allocating the memory, Linux hasn't actually given you the memory yet. See: C program on Linux to exhaust memory and Allocating more memory then there exists. – rm5248
I ran a quick test; it would appear as though Java doesn't allocate the entire heap when it starts up. I can allocate a 16GB heap for Java on my computer (8GB memory + 8GB swap), but the amount of free space as shown by free -m only drops by 60MB. That doesn't really help, but it could at least partially explain why it's not allocating all of it (using HotSpot 1.7.0_25 on Debian). Quick thought: are you sure you can allocate 30+GB of memory? (Check ulimit -a.) – rm5248

4 Answers

15
votes

The order of the parameters is incorrect. You are passing -Xms280g -Xmx280g as arguments to your own program, not to the JVM. The correct invocation is:

java -Xms280g -Xmx280g path/to/myApp
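
If you want to confirm from inside the application that the options actually reached the JVM, a small check along these lines works (a sketch, not part of the original answer; the class name HeapCheck is made up):

public class HeapCheck {
    public static void main(String[] args) {
        long gb = 1024L * 1024L * 1024L;
        // maxMemory() roughly reflects -Xmx; totalMemory() is the currently
        // committed heap, which starts out near -Xms.
        System.out.println("max heap  ~ " + Runtime.getRuntime().maxMemory() / gb + " GB");
        System.out.println("committed ~ " + Runtime.getRuntime().totalMemory() / gb + " GB");
    }
}

With the flags placed before the class or program path this should report roughly 280 GB; with the original (wrong) ordering it reports the JVM's default heap size instead.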

3
votes

If you want the memory specified with -Xms to actually be committed during the initialization of your application, then use

java -XX:+AlwaysPreTouch -Xms2G -Xmx2G ......

AlwaysPreTouch makes the JVM touch every heap page during initialization, rather than leaving pages that are not yet needed as merely “virtual” reservations. Note, however, that this adds some latency to JVM startup.

Use the switch above and then check with top: you will see the full 2G (in fact a little more) resident for your JVM.
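
For the sizes in the question, and with the flags placed before the program path as pointed out in the accepted answer, the combined command would look something like this (a sketch, untested at that scale):

java -XX:+AlwaysPreTouch -Xms280g -Xmx280g path/to/myApp > output.txt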

0
votes

Try adding the -d64 parameter to the command line:

java -d64 path/to/myApp -Xms280g -Xmx280g > output.txt

0
votes

So you are hitting a limit of roughly 32GB. Oracle's documentation on Compressed Oops in the HotSpot JVM also talks about a limit of about 32GB:

On an LP64 system, though, the heap for any given run may have to be around 1.5 times as large as for the corresponding ILP32 system (assuming the run fits both modes). This is due to the expanded size of managed pointers. Memory is pretty cheap, but these days bandwidth and cache is in short supply, so significantly increasing the size of the heap just to get over the 4Gb limit is painful.

(Additionally, on x86 chips, the ILP32 mode provides half the usable registers that the LP64 mode does. SPARC is not affected this way; RISC chips start out with lots of registers and just widen them for LP64 mode.)

Compressed oops represent managed pointers (in many but not all places in the JVM) as 32-bit values which must be scaled by a factor of 8 and added to a 64-bit base address to find the object they refer to. This allows applications to address up to four billion objects (not bytes), or a heap size of up to about 32Gb.
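
In other words (my gloss, not part of the quoted documentation): 2^32 possible compressed references, each scaled by a factor of 8 bytes, gives 2^32 × 8 bytes = 32GB of addressable heap, which is where the roughly 32GB ceiling comes from.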

Link here