I'm trying to allocate shared GPU memory (not to be confused with CUDA's on-chip shared memory) with CUDA. The memory is shared between an Intel and an NVIDIA GPU. To allocate it I'm using `cudaMallocManaged`, and the maximum allocation size is 2 GB (the same limit as with `cudaMalloc`), i.e. the size of the dedicated memory.
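For reference, this is roughly how the allocation is attempted (a minimal sketch; the 3 GB size is just an illustrative value above the 2 GB dedicated-memory limit on my system):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Try to allocate managed (unified) memory larger than the
    // GPU's dedicated VRAM; on my setup anything above ~2 GB fails.
    size_t bytes = 3ULL * 1024 * 1024 * 1024; // 3 GB, illustrative
    void *ptr = nullptr;
    cudaError_t err = cudaMallocManaged(&ptr, bytes);
    if (err != cudaSuccess) {
        printf("cudaMallocManaged failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    cudaFree(ptr);
    return 0;
}
```

The same size passed to `cudaMalloc` fails in the same way, which is why it looks like the cap is the dedicated memory size rather than total (dedicated + shared) GPU memory.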
Is there a way to allocate GPU shared memory, or RAM from the host, that can then be used in a kernel?