I would like to know if PyTorch is using my GPU. It's possible to detect with nvidia-smi whether there is any activity from the GPU during the process, but I want something written in a Python script.
Is there a way to do so?
This should work:
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
<torch.cuda.device at 0x7efce0b03be0>
>>> torch.cuda.device_count()
1
>>> torch.cuda.get_device_name(0)
'GeForce GTX 950M'
This tells me CUDA is available and can be used in one of your devices (GPUs). And currently, Device 0, i.e. the GPU GeForce GTX 950M, is being used by PyTorch.
As it hasn't been proposed here, I'm adding a method using torch.device, as this is quite handy, also when initializing tensors on the correct device.
# setting device on GPU if available, else CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
print()

# additional info when using CUDA
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB')
    print('Cached:   ', round(torch.cuda.memory_reserved(0)/1024**3, 1), 'GB')
Edit: torch.cuda.memory_cached has been renamed to torch.cuda.memory_reserved, so use memory_cached on older versions.
Output:
Using device: cuda
Tesla K80
Memory Usage:
Allocated: 0.3 GB
Cached: 0.6 GB
As mentioned above, using device it is possible to:
Move tensors to the respective device:
torch.rand(10).to(device)
Create a tensor directly on the device:
torch.rand(10, device=device)
This makes switching between CPU and GPU comfortable without changing the actual code.
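The same pattern extends to models, so the script runs unchanged whether or not a GPU is present. A minimal sketch (the layer and batch sizes are arbitrary):
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(10, 1).to(device)    # parameters live on whichever device was selected
x = torch.rand(4, 10, device=device)   # input created directly on the same device
y = model(x)                           # runs on the GPU if available, on the CPU otherwise
print(y.device)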
As there have been some questions and confusion about the cached and allocated memory, I'm adding some additional information about it:
torch.cuda.max_memory_cached(device=None)
Returns the maximum GPU memory managed by the caching allocator in bytes for a given device.
torch.cuda.memory_allocated(device=None)
Returns the current GPU memory usage by tensors in bytes for a given device.
You can either directly hand over a device as specified further above in the post, or you can leave it None and it will use the current_device().
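A minimal sketch of both call styles (assuming CUDA is available; on newer versions the max_memory_cached counterpart is max_memory_reserved):
import torch

if torch.cuda.is_available():
    dev = torch.device('cuda:0')

    # explicit device argument
    print(torch.cuda.memory_allocated(dev))       # bytes currently allocated by tensors
    print(torch.cuda.max_memory_reserved(dev))    # peak bytes managed by the caching allocator

    # device=None falls back to the current device
    print(torch.cuda.memory_allocated())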
Additional note: Old graphics cards with CUDA compute capability 3.0 or lower may be visible but cannot be used by PyTorch!
Thanks to hekimgil for pointing this out! - "Found GPU0 GeForce GT 750M which is of cuda capability 3.0. PyTorch no longer supports this GPU because it is too old. The minimum cuda capability that we support is 3.5."
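If you want to check the compute capability from a script, a small sketch (assuming at least one CUDA device is visible):
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f'Compute capability: {major}.{minor}')   # e.g. 3.5, 6.1, 7.5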
After you start running the training loop, if you want to manually watch from the terminal whether your program is utilizing the GPU resources and to what extent, then you can simply use watch as in:
$ watch -n 2 nvidia-smi
This will continuously update the usage stats every 2 seconds until you press Ctrl+C.
If you need more control over which GPU stats are shown, you can use a more sophisticated version of nvidia-smi with --query-gpu=.... Below is a simple illustration of this:
$ watch -n 3 nvidia-smi --query-gpu=index,gpu_name,memory.total,memory.used,memory.free,temperature.gpu,pstate,utilization.gpu,utilization.memory --format=csv
which would output the stats as a CSV table.
Note: There should not be any space between the comma-separated query names in --query-gpu=.... Otherwise those values will be ignored and no stats will be returned.
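If you would rather collect the same stats from inside a Python script, one option is to call nvidia-smi via subprocess and parse its CSV output; a rough sketch (it assumes nvidia-smi is on your PATH, and the field list is just an example):
import subprocess

def gpu_stats():
    # query nvidia-smi once and return one dict per GPU (sketch; assumes nvidia-smi is installed)
    fields = ['index', 'gpu_name', 'memory.total', 'memory.used',
              'utilization.gpu', 'utilization.memory']
    out = subprocess.check_output(
        ['nvidia-smi', '--query-gpu=' + ','.join(fields), '--format=csv,noheader,nounits'],
        encoding='utf-8')
    return [dict(zip(fields, line.split(', '))) for line in out.strip().splitlines()]

print(gpu_stats())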
Also, you can check whether your installation of PyTorch detects your CUDA installation correctly by doing:
In [13]: import torch
In [14]: torch.cuda.is_available()
Out[14]: True
A True status means that PyTorch is configured correctly and can use the GPU, although you still have to move/place the tensors with the necessary statements in your code.
If you want to do this inside Python code, then look into this module:
https://github.com/jonsafari/nvidia-ml-py or in pypi here: https://pypi.python.org/pypi/nvidia-ml-py/
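As a rough sketch of what that looks like (assuming the bindings are importable as pynvml, which is how nvidia-ml-py is typically exposed):
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)         # first GPU

util = pynvml.nvmlDeviceGetUtilizationRates(handle)   # utilization in percent
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # memory in bytes

print('GPU util:   ', util.gpu, '%')
print('Memory util:', util.memory, '%')
print('Memory used:', mem.used / 1024**2, 'MiB of', mem.total / 1024**2, 'MiB')

pynvml.nvmlShutdown()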
From a practical standpoint, just one minor digression:
import torch
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
This dev now knows whether it is cuda or cpu.
And there is a difference in how you deal with models and with tensors when moving them to CUDA. It is a bit strange at first.
import torch
import torch.nn as nn
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
t1 = torch.randn(1,2)
t2 = torch.randn(1,2).to(dev)
print(t1) # tensor([[-0.2678, 1.9252]])
print(t2) # tensor([[ 0.5117, -3.6247]], device='cuda:0')
t1.to(dev)
print(t1) # tensor([[-0.2678, 1.9252]])
print(t1.is_cuda) # False
t1 = t1.to(dev)
print(t1) # tensor([[-0.2678, 1.9252]], device='cuda:0')
print(t1.is_cuda) # True
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(1, 2)

    def forward(self, x):
        x = self.l1(x)
        return x
model = M() # not on cuda
model.to(dev) # is on cuda (all parameters)
print(next(model.parameters()).is_cuda) # True
This is all a bit tricky, but understanding it once helps you work faster with less debugging.
On the official site's Get Started page, check the GPU for PyTorch as below:
import torch
torch.cuda.is_available()
Reference: PyTorch | Get Started
To check if there is a GPU available:
torch.cuda.is_available()
If the above function returns False, you either have no GPU, or the Nvidia drivers have not been installed (so the OS does not see the GPU), or the GPU is being hidden by the environment variable CUDA_VISIBLE_DEVICES. When the value of CUDA_VISIBLE_DEVICES is -1, all your devices are being hidden. You can check that value in code with this line: os.environ['CUDA_VISIBLE_DEVICES']
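A small sketch of that check (using os.environ.get so it also handles the case where the variable is not set; that detail is my addition):
import os

visible = os.environ.get('CUDA_VISIBLE_DEVICES')   # None if the variable is not set
print('CUDA_VISIBLE_DEVICES =', visible)
if visible == '-1':
    print('All GPUs are hidden from this process.')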
If the above function returns True, that does not necessarily mean that you are using the GPU. In PyTorch you can allocate tensors to devices when you create them. By default, tensors get allocated to the cpu. To check where your tensor is allocated do:
# assuming that 'a' is a tensor created somewhere else
a.device # returns the device where the tensor is allocated
Note that you cannot operate on tensors allocated on different devices. To see how to allocate a tensor to the GPU, see here: https://pytorch.org/docs/stable/notes/cuda.html
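For completeness, a minimal sketch of moving a tensor to the GPU and of the matching-devices rule (assuming CUDA is available):
import torch

a = torch.randn(3)                 # on the CPU by default
b = a.to('cuda')                   # copy of the tensor on the GPU
c = torch.zeros(3, device='cuda')  # created directly on the GPU

print(b.device, c.device)          # cuda:0 cuda:0
print(b + c)                       # works: both operands are on the same device
# print(a + b)                     # would raise an error: cpu tensor mixed with cuda tensor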
The other answers detail how to check if a GPU is available on your current machine.
To check if a given tensor is used by (i.e. stored and operated on) the GPU, you can access its device or is_cuda attributes:
>>> my_tensor.is_cuda
True
For a model you will need to check each of its parameters (e.g. weights tensors):
>>> all(p.is_cuda for p in my_model.parameters())
True
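If you do this often, you could wrap the check in a small helper; note that this is a hypothetical convenience function, not part of PyTorch:
import torch

def on_gpu(module: torch.nn.Module) -> bool:
    # hypothetical helper: True if every parameter of the model lives on a CUDA device
    return all(p.is_cuda for p in module.parameters())

model = torch.nn.Linear(4, 2)
print(on_gpu(model))        # False
if torch.cuda.is_available():
    model.cuda()
    print(on_gpu(model))    # True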
Almost all answers here reference torch.cuda.is_available(). However, that's only one side of the coin. It tells you whether the GPU (actually CUDA) is available, not whether it's actually being used. In a typical setup, you would set your device with something like this:
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
but in larger environments (e.g. research) it is also common to give the user more options, so based on input they can disable CUDA, specify CUDA IDs, and so on. In such a case, whether or not the GPU is used is not based only on whether it is available. After the device has been set to a torch device, you can check its type property to verify whether it's CUDA or not.
if device.type == 'cuda':
# do something
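A rough sketch of that pattern using argparse (the flag names --disable-cuda and --gpu are just illustrative, not a standard):
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument('--disable-cuda', action='store_true', help='force CPU even if CUDA is available')
parser.add_argument('--gpu', type=int, default=0, help='CUDA device index to use')
args = parser.parse_args()

if not args.disable_cuda and torch.cuda.is_available():
    device = torch.device(f'cuda:{args.gpu}')
else:
    device = torch.device('cpu')

if device.type == 'cuda':
    print('Using', torch.cuda.get_device_name(device))
else:
    print('Using CPU')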
Simply from the command prompt or a Linux environment, run the following commands.
python -c 'import torch; print(torch.cuda.is_available())'
The above should print True
python -c 'import torch; print(torch.rand(2,3).cuda())'
This one should print something like the following:
tensor([[0.7997, 0.6170, 0.7042],
        [0.4174, 0.1494, 0.0516]], device='cuda:0')
If you are here because your PyTorch always gives False for torch.cuda.is_available(), that's probably because you installed your PyTorch version without GPU support (e.g. you coded on a laptop and are then testing on a server).
The solution is to uninstall PyTorch and install it again with the right command from the PyTorch downloads page. Also refer to this PyTorch issue.
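To quickly see whether the installed build has CUDA support at all, you can inspect the version strings (as far as I know, torch.version.cuda is None on a CPU-only build):
import torch

print(torch.__version__)      # e.g. '1.8.1+cpu' vs. '1.8.1+cu111'
print(torch.version.cuda)     # CUDA version the build was compiled against, or None for CPU-only builds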
Comments:
– Charlie Parker: devices = torch.get_all_devices() # [0, 1, 2] or whatever their name is
– vinzee: [torch.cuda.device(i) for i in range(torch.cuda.device_count())]
– Charlie Parker: list(range(torch.cuda.device_count())). Thanks though!