Training with a batch size of 128, 64, or 32 used to run out of GPU memory after several epochs. Switching to stochastic training (batch size 1), however, leaves the program stuck at 0% of the first epoch:
--------------- Epoch 1 ---------------
0%| | 0/486 [00:00<?, ?it/s]2019-06-18 18:04:58.581233: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 1207959552 exceeds 10% of system memory.
2019-06-18 18:04:59.208729: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 1207959552 exceeds 10% of system memory.
2019-06-18 18:04:59.827425: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 1207959552 exceeds 10% of system memory.
2019-06-18 18:05:00.497830: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 1207959552 exceeds 10% of system memory.
2019-06-18 18:05:01.173273: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 1207959552 exceeds 10% of system memory.
The GPU is a GTX 1080. Does anyone have any insights? Thanks in advance.
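For context, the warning itself states the threshold: a single allocation that exceeds 10% of system memory triggers it. Taking the reported request size at face value, a quick back-of-the-envelope calculation (plain Python, no TensorFlow needed) bounds the machine's RAM:

```python
# Each warning reports a single allocation request of this many bytes.
alloc_bytes = 1207959552

# Size of one request in GiB.
alloc_gib = alloc_bytes / 2**30

# The allocator warns when one allocation exceeds 10% of system RAM,
# so total RAM must be below 10x the request size.
ram_bound_gib = alloc_bytes * 10 / 2**30

print(f"single allocation: {alloc_gib:.3f} GiB")        # 1.125 GiB
print(f"implied RAM upper bound: {ram_bound_gib:.2f} GiB")  # 11.25 GiB
```

So each request is about 1.1 GiB, and the machine appears to have under ~12 GB of RAM, which may explain why a handful of these allocations stalls the run before the first batch completes.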