6
votes

When I want to put the model on the GPU, I get the following error:

"RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu"

However, everything appears to have been moved to the GPU already:

for m in model.parameters():
    print(m.device)  # prints cuda:0
if torch.cuda.is_available():
    model = model.cuda()
    test = test.cuda()  # test is the input

Windows 10 Server
PyTorch 1.2.0 with CUDA 9.2
cuDNN 7.6.3 for CUDA 9.2

2
You need to send your inputs to CUDA as well, e.g. your X_train_batch, label_batch, etc. – basilisk

2 Answers

8
votes

You need to move the model, the inputs, and the targets to CUDA:

if torch.cuda.is_available():
    model.cuda()
    inputs = inputs.cuda()
    target = target.cuda()
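Note that the error message complains about a *hidden* tensor, which suggests a recurrent layer whose initial hidden state was created on the CPU. A minimal sketch of that case, assuming an LSTM (the exact model in the question is not shown), where the hidden and cell states are explicitly created on the same device as the input:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical LSTM; sizes are illustrative, not from the question.
model = nn.LSTM(input_size=8, hidden_size=16, batch_first=True).to(device)
inputs = torch.randn(4, 10, 8, device=device)  # (batch, seq, features)

# The initial hidden and cell states must live on the same device
# as the input, otherwise PyTorch raises the error in the question.
h0 = torch.zeros(1, 4, 16, device=device)  # (num_layers, batch, hidden)
c0 = torch.zeros(1, 4, 16, device=device)

output, (hn, cn) = model(inputs, (h0, c0))
print(output.device)  # same device as the inputs
```

If you call the LSTM without passing `(h0, c0)`, PyTorch creates zero states on the input's device for you, so this error typically appears only when the hidden state is constructed manually.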
0
votes

This error occurs when PyTorch tries to compute an operation between a tensor stored on the CPU and one stored on the GPU. At a high level there are two kinds of tensors involved: your data and the model's parameters, and both can be moved to the same device like so:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

data = data.to(device)
model = model.to(device)