I am training a large model that exceeds my GPU's memory (more than 11 GB). Is there any way in TensorFlow to swap GPU memory out to main memory? Some loss of efficiency is acceptable. Training the model entirely on the CPU avoids the memory problem but is too slow.
1 Answer
Some TensorFlow ops for building networks accept a swap_memory argument.
For RNNs, for example, you can use tf.nn.dynamic_rnn or tf.nn.raw_rnn, as in the sketch below.
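A minimal sketch with the TensorFlow 1.x API (the input shape and cell size are placeholders, not from your setup):

    import tensorflow as tf

    # Inputs: [batch, time, features] -- placeholder shapes for illustration
    inputs = tf.placeholder(tf.float32, [None, 100, 256])
    cell = tf.nn.rnn_cell.LSTMCell(num_units=512)

    # swap_memory=True lets the backward pass swap activations saved
    # during the forward pass from GPU memory to host (CPU) memory,
    # trading speed for capacity.
    outputs, state = tf.nn.dynamic_rnn(
        cell, inputs, dtype=tf.float32, swap_memory=True)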
There is also the more generic looping construct tf.while_loop, which takes the same argument (see the second sketch below).
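A toy example of tf.while_loop with swapping enabled (the loop body here is just filler; a real body would hold large intermediate tensors worth swapping out):

    import tensorflow as tf

    i0 = tf.constant(0)
    acc0 = tf.zeros([1024, 1024])

    def cond(i, acc):
        return i < 50

    def body(i, acc):
        # Placeholder computation for illustration only.
        return i + 1, acc + tf.random_normal([1024, 1024])

    # swap_memory=True allows tensors produced in the forward pass of
    # the loop to live in host memory until the backward pass needs them.
    i_final, acc_final = tf.while_loop(
        cond, body, [i0, acc0], swap_memory=True)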
As far as I know, however, there is no general option to enable memory swapping globally.
Take a look at tensorflow.org and use its search function; searching for swap_memory turns up the relevant classes.