4
votes

This is happening on Linux 2.6.18-238.5.1.el5 with a 64-bit app. My process stack size is 10 MB. However, after a (successful) call to JNI_CreateJavaVM I only seem to have 1-2 MB left on the stack. If I go past that, I get a memory fault as if I'm overflowing the stack.

A few notes:

  1. If I DON'T create a JVM, I have access to my whole 10 MB stack.
  2. The same test program with the same makefile runs fine on Solaris, even with the call that creates the JVM.

Test source:

#include <jni.h>
#include <stdio.h>
#include <stdlib.h>

void CreateVM(JavaVM ** jvm) {

    JNIEnv *env;
    JavaVMInitArgs vm_args;
    JavaVMOption options[1];
    options[0].optionString = (char*)"-Xcheck:jni";

    vm_args.version = JNI_VERSION_1_6;
    vm_args.nOptions = 1; // count the option declared above
    vm_args.options = options;
    vm_args.ignoreUnrecognized = 0;

    int ret = JNI_CreateJavaVM(jvm, (void**)&env, &vm_args);
    if(ret < 0) {
        printf("\nUnable to Launch JVM\n");
        exit(1);
    }

    if ( env->ExceptionCheck() == JNI_TRUE ) {
        printf("exception\n");
        exit(1);
    }
}

void f() {
    printf("inside...\n");
    //eat up a few megs of stack
    char stackTest[0x2FFFFF];
    printf("...returning\n");
}

int main(int argc, char* argv[]) {
    JavaVM * jvm;
    CreateVM(&jvm);

    f();

    printf("exiting...\n");
    return 0;
}

Build command:

g++ -m64 CTest.cpp -I/import/bitbucket/JDK/jdk1.6.0_26/include -I/import/bitbucket/JDK/jdk1.6.0_26/include/linux -L/import/bitbucket/JDK/jdk1.6.0_26/jre/lib/amd64 -L/import/bitbucket/JDK/jdk1.6.0_26/jre/lib/amd64/server -ljava -ljvm

2
Can you do an strace -f a.out and post the results on the internet? – osgx

2 Answers

0
votes

Your stack-eater looks buggy (the array is never used, so the compiler may optimize it away), but it is fine if -O0 is used.

Also, the Sun JVM can vary, and it may use less stack space when it runs on Solaris.

How did you limit stack size on Linux and Solaris?

Update: Yes, the JVM uses different settings on Solaris and on Linux:

-XX:ThreadStackSize=512 Thread Stack Size (in Kbytes). (0 means use default stack size) [Sparc: 512; Solaris x86: 320 (was 256 prior in 5.0 and earlier); Sparc 64 bit: 1024; Linux amd64: 1024 (was 0 in 5.0 and earlier); all others 0.]

I don't know whether this setting applies to the main thread, but it does show that the Solaris JVM uses different memory settings than the Linux amd64 JVM.

Update 2:

The very first operation in JNI_CreateJavaVM is thread creation, because the JVM itself is highly threaded:

  result = Threads::create_vm((JavaVMInitArgs*) args, &can_try_again);
  if (result == JNI_OK) {
    JavaThread *thread = JavaThread::current();
    /* thread is thread_in_vm here */
    *vm = (JavaVM *)(&main_vm);

So a thread is created inside the call to JNI_CreateJavaVM.

Try changing the "CompilerThreadStackSize" global variable.

On amd64 Linux the JVM has a 4 MB compiler-thread stack by default, and on SPARC64 Solaris it has a 2 MB compiler-thread stack by default. Ordinary threads get 1 MB stacks on Linux and 2 MB stacks on Solaris.

Use

-XX:CompilerThreadStackSize

to adjust the compiler thread stack size; the value is in KB. Try setting it to 2048 on both OSes.
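
To experiment with these settings from the embedding code in the question, the options can be passed to JNI_CreateJavaVM like this (a sketch; the 2048 KB values are just starting points to try, not known-good values):

```cpp
// Hypothetical experiment: shrink the JVM's per-thread stacks.
// Both values are in KB and are guesses to tune, not recommendations.
JavaVMOption options[2];
options[0].optionString = (char*)"-XX:ThreadStackSize=2048";
options[1].optionString = (char*)"-XX:CompilerThreadStackSize=2048";

vm_args.version = JNI_VERSION_1_6;
vm_args.nOptions = 2;  // both options must be counted here
vm_args.options = options;
```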

0
votes

Ok. Now I can reproduce a SIGSEGV in f(). I use an i386 JVM, and a slightly older one. Let's debug the memory allocation.

$ cat gdb.how
b main
r
b mmap
commands
x/x $sp+4
x/x $sp+8
bt
c
end
c

You may change $sp+4 and $sp+8 to +8 and +16 or the like. The first outputs from gdb should look like "00000000, 00001000". I suggest having debugging symbols for the JVM (I do).

$ gdb  -x gdb.how ./a.out  > gdb.log
quit
y

Now, let's view how memory is allocated for threads:

$ grep Breakpoint\ 2, -A4 gdb.log | grep pthread_create -B 2 | grep 0x00 |cut -d : -f 2 |
perl -e '$a=0;while(<>){s/0x0//;$a+=$_;};print "ibase=16\n".uc($a)."\n";'|bc
4616192

That is the sum of all the threads' stack sizes. You can delete parts of this command to view the individual allocations; I see 7 threads created by the Sun JVM.

Now you can try changing some options and check how much memory gets allocated to thread stacks.

What I get is interesting. I have ulimit -s set to 8192. If I start the program directly as ./a.out, I get a SEGV:

$ ./a.out
Segmentation fault

But if I start it as (./a.out) (i.e. in a bash subshell), there is no segmentation fault.