
For example, if I have a GPU with 2GB RAM and in my app I allocate a large array, say 1GB, as mapped memory (page-locked host memory that is mapped into the GPU address space, allocated with cudaHostAlloc()), will the amount of available GPU memory be reduced by that 1GB of mapped memory, or will I still have (close to) 2GB available, as I had before the allocation?

1 Answer


Mapping host memory so that it appears in the GPU address space does not consume any of the GPU's on-board memory.

You can verify this in a number of ways, for example by comparing the output of cudaMemGetInfo() before and after the allocation.
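A minimal sketch of that check, assuming a single GPU and a 1GB mapped allocation (error checking omitted for brevity):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Enable mapping of page-locked host memory; must be set
    // before the CUDA context is created.
    cudaSetDeviceFlags(cudaDeviceMapHost);
    cudaFree(0);  // force context creation

    size_t freeBefore, freeAfter, total;
    cudaMemGetInfo(&freeBefore, &total);

    // Allocate 1GB of page-locked host memory mapped into the
    // GPU address space.
    void *hostPtr = nullptr;
    cudaHostAlloc(&hostPtr, 1u << 30, cudaHostAllocMapped);

    cudaMemGetInfo(&freeAfter, &total);
    printf("free before: %zu MB, free after: %zu MB\n",
           freeBefore >> 20, freeAfter >> 20);
    // The two values should be (nearly) identical: the 1GB lives in
    // host RAM, not in the GPU's on-board memory.

    cudaFreeHost(hostPtr);
    return 0;
}
```

Note that the kernel-visible device pointer for such an allocation is obtained with cudaHostGetDevicePointer() (or directly, on systems with unified addressing); accesses through it go over the PCIe bus rather than device memory.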