I have been trying to train Yolact with PyTorch, following the guide: https://github.com/dbolya/yolact

My GPU is an RTX 2070, and I am using cudatoolkit 11.1.1.

When I run the following:

python train.py --config=yolact_base_config --batch_size=8

I keep running into this error:

RuntimeError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 7.79 GiB total capacity; 5.98 GiB already allocated; 22.88 MiB free; 6.40 GiB reserved in total by PyTorch)

I have tried decreasing the maximum image size via max_size in config.py, but the CUDA error persists. I have also decreased the batch size, but that does not seem to help either.


1 Answer


There are several possible reasons for this error; some of them are:

  1. your model is too dense (too many layers/parameters), so your GPU cannot fit it during training
  2. the input images could be too large for the GPU to process

What you can try is: look at your configs and make the batch size smaller. If you are using 8, try decreasing it to 4 or 3 (or whatever fits) and see whether training takes off.
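
For example, re-running the same command from the question with a smaller batch:

python train.py --config=yolact_base_config --batch_size=4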

or

you can try reducing the input size of the images, or maybe try both.
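
As a rough, self-contained illustration of why both knobs matter (plain PyTorch, nothing YOLACT-specific): even the raw input batch grows linearly with the batch size and quadratically with the image side, and the activations inside the network scale the same way, only much larger. The 550 below is what I believe the yolact_base default max_size is; 400 is just an example of a smaller setting.

    import torch

    # Compare how much memory just the input batch takes for a few
    # (batch_size, max_size) combinations. These numbers cover only the
    # input tensors; intermediate activations in the network are far
    # bigger, but they scale in the same proportions.
    for batch_size, max_size in [(8, 550), (4, 550), (8, 400)]:
        x = torch.empty(batch_size, 3, max_size, max_size)  # float32 image batch
        mib = x.numel() * x.element_size() / 2**20
        print(f"batch_size={batch_size}, max_size={max_size}: "
              f"{mib:.0f} MiB for the input batch alone")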