2 votes

RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 1; 11.91 GiB total capacity; 10.12 GiB already allocated; 21.75 MiB free; 56.79 MiB cached)

I encountered the preceding error during PyTorch training.
I'm using PyTorch in a Jupyter notebook. Is there a way to free up GPU memory in a Jupyter notebook?

1

Try a smaller batch size instead of freeing memory manually. By the way, you can use torch.cuda.empty_cache() to clear the cache, but it is not recommended. – Dishin H Goyani
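A minimal sketch of what the commenter is suggesting, assuming some tensor (here the hypothetical name `big_tensor`) is what is holding GPU memory in your notebook: drop the Python reference, run garbage collection, then release PyTorch's cached blocks back to the driver.

```python
import gc
import torch

# Hypothetical large tensor standing in for whatever is using GPU memory.
big_tensor = torch.ones(1024, 1024)
if torch.cuda.is_available():
    big_tensor = big_tensor.cuda()

del big_tensor  # remove the Python reference to the tensor
gc.collect()    # collect any reference cycles still pointing at it
if torch.cuda.is_available():
    # Return cached, unused memory to the driver so other processes
    # (and nvidia-smi) see it as free. Does not help if live tensors
    # still reference the memory.
    torch.cuda.empty_cache()
```

Note that `empty_cache()` only releases memory PyTorch has already cached but is not using; it cannot free memory that live tensors still occupy, which is why the comment calls it "not recommended" as a first resort.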

1 Answer

0 votes

Adjust batch_size, or follow the PyTorch FAQ's advice on avoiding unnecessary memory use (in particular, don't accumulate history across your training loop):

https://pytorch.org/docs/stable/notes/faq.html

total_loss = 0.0
for i in range(10000):
    optimizer.zero_grad()
    output = model(input)
    loss = criterion(output)
    loss.backward()
    optimizer.step()
    # Accumulate a Python float, not the loss tensor itself:
    # `total_loss += loss` would keep every iteration's autograd graph
    # alive and steadily grow GPU memory until it runs out.
    total_loss += float(loss)
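The loop above is a fragment; here is a self-contained CPU sketch of the same pattern with an assumed tiny linear model and random data, showing that accumulating `float(loss)` produces a plain Python number with no autograd graph attached.

```python
import torch

# Assumed stand-ins for the answer's model/criterion/optimizer.
model = torch.nn.Linear(4, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

total_loss = 0.0
for _ in range(3):
    optimizer.zero_grad()
    x = torch.randn(8, 4)   # dummy input batch
    y = torch.randn(8, 1)   # dummy targets
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    # float(loss) detaches the value from the graph, so the graph
    # built this iteration can be freed immediately.
    total_loss += float(loss)

print(type(total_loss))  # <class 'float'>
```

Had we written `total_loss += loss` instead, `total_loss` would be a tensor with a `grad_fn`, and every iteration's intermediate activations would stay referenced through it.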