I am confused about CUDA's unified virtual memory.
The documentation at http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#unified-virtual-address-space says it is available when:
When the application is run as a 64-bit process, a single address space is used for the host and all the devices of compute capability 2.0 and higher.
But this link (http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements) says it requires:
a GPU with SM architecture 3.0 or higher (Kepler class or newer)
Furthermore, the first link says that I can use cudaHostAlloc, while the second one uses cudaMallocManaged.
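To illustrate what I mean, here is a minimal sketch of how I currently understand the two allocation paths (the kernel and sizes are just placeholders I made up, not from the documentation):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 256;

    // First path: pinned host memory; under UVA (compute capability 2.0+,
    // 64-bit process) the same pointer is valid on host and device.
    int *pinned;
    cudaHostAlloc(&pinned, n * sizeof(int), cudaHostAllocMapped);
    increment<<<1, n>>>(pinned, n);  // kernel dereferences the host pointer
    cudaDeviceSynchronize();

    // Second path: managed memory (SM 3.0+ / Kepler or newer); the runtime
    // migrates the allocation between host and device as needed.
    int *managed;
    cudaMallocManaged(&managed, n * sizeof(int));
    increment<<<1, n>>>(managed, n);
    cudaDeviceSynchronize();
    printf("%d\n", managed[0]);

    cudaFreeHost(pinned);
    cudaFree(managed);
    return 0;
}
```

Both calls appear to give me a pointer usable on host and device, which is exactly why I can't tell the two features apart.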
Are these two different features that both happen to use the term 'Unified', or is the documentation just a bit incoherent?