4
votes

I am trying to run PyTorch code in a Jupyter notebook and I get this error:

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 2.00 GiB total capacity; 1.23 GiB already allocated; 18.83 MiB free; 1.25 GiB reserved in total by PyTorch)

I have already searched for answers, and most of them say to just reduce the batch size. I have tried reducing the batch size from 20 to 10 to 2 and finally 1, but I still can't run the code.
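For reference, reducing the batch size happens where the DataLoader is constructed; a minimal sketch with a dummy dataset (the names and data here are stand-ins, not taken from the linked repository):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset whose per-sample shape matches the question: (3, 64, 64)
data = TensorDataset(torch.randn(8, 3, 64, 64))

# batch_size is the knob to lower when CUDA runs out of memory
loader = DataLoader(data, batch_size=1)

batch, = next(iter(loader))
print(batch.shape)  # torch.Size([1, 3, 64, 64])
```

With batch_size=1, each forward pass holds activations for only one sample, which is the smallest memory footprint the batch dimension alone can give you.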

My GPU is a Quadro P620.

1
What model are you using and what's the input size? 2GB of GPU memory won't be enough for any complex models, especially for tasks that require bigger input sizes. - Michael Jungo
@MichaelJungo I'm using this model github.com/alterzero/RBPN-PyTorch/blob/master/main.py (Video Super-resolution) My input size is " torch.Size([2, 3, 64, 64]) " - S.Xu
Even though the input is rather small, the super resolution will end up using more memory as it's increasing the size. I don't think that 2GB are enough to train that model. If you just want to use the pre-trained model, you should make sure to disable gradients, either by setting torch.set_grad_enabled(False) or by using the torch.no_grad context manager. Otherwise, you will unfortunately have to run it on the CPU. - Michael Jungo
Try using a batch size of 1. If it still doesn't fit, then it might not be possible with a 64×64 input size. Maybe try using Google Colab (if you're just testing), since it gives you 8 or 11 GB GPUs. - akshayk07
@akshayk07 I have just changed my GPU to an RTX 2080 (8 GB) and I still get this error: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 8.00 GiB total capacity; 6.02 GiB already allocated; 1.25 MiB free; 6.03 GiB reserved in total by PyTorch) - S.Xu
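The comment above suggests disabling gradient tracking when only running inference with the pre-trained model. A minimal sketch of both options, using a stand-in convolutional model (the real RBPN network is assumed unavailable here):

```python
import torch
import torch.nn as nn

# Stand-in for the actual super-resolution network (assumption, not RBPN itself)
model = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))
model.eval()

x = torch.randn(2, 3, 64, 64)  # same input shape as in the question

# Option 1: context manager; gradients are disabled only inside the block
with torch.no_grad():
    out = model(x)

# Option 2: global switch; disables gradient tracking from this call onward
torch.set_grad_enabled(False)
out2 = model(x)

print(out.requires_grad, out2.requires_grad)  # False False
```

Without gradient tracking, PyTorch does not keep the intermediate activations needed for backpropagation, which can cut memory use substantially during inference.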

1 Answer

1
votes

You can free up the existing GPU memory by first listing the processes that are using it:

nvidia-smi

then note the PID of the stale process and kill it with:

sudo kill -9 <pid>