
I got the following error:

anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gradients.py:90: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Traceback (most recent call last):
    trainstep = tf.train.AdamOptimizer(0.0001).minimize(lossobj)
  File "anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 196, in minimize
    grad_loss=grad_loss)
  File "anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 253, in compute_gradients
    colocate_gradients_with_ops=colocate_gradients_with_ops)
  File "anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gradients.py", line 469, in gradients
    in_grads = _AsList(grad_fn(op, *out_grads))
  File "anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/array_grad.py", line 504, in _ExtractImagePatchesGrad
    rows_out = int(ceil(rows_in / stride_r))
TypeError: unsupported operand type(s) for /: 'NoneType' and 'long'

It looks like the gather op is wrong.

This looks surprising. How do you build your lossobj? - user1454804
@user1454804 Because I set the width and height of my input placeholder (shape[1] and shape[2]) to None to handle dynamic input sizes. - Rudy Chp
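For reference, the setup described in that comment would look roughly like the sketch below. This is a minimal, illustrative reconstruction; the placeholder name, channel count, patch size, strides, and loss are assumptions, not taken from the original code.

import tensorflow as tf

# Width and height (shape[1] and shape[2]) are left as None to allow
# dynamic input sizes, as described in the comment above.
images = tf.placeholder(tf.float32, shape=[None, None, None, 3])

# In this TensorFlow version, the gradient of extract_image_patches reads
# the static input shape, so the None rows/cols lead to the TypeError
# shown in the traceback.
patches = tf.extract_image_patches(images,
                                   ksizes=[1, 3, 3, 1],
                                   strides=[1, 1, 1, 1],
                                   rates=[1, 1, 1, 1],
                                   padding='SAME')

lossobj = tf.reduce_mean(tf.square(patches))

# Raises: TypeError: unsupported operand type(s) for /: 'NoneType' and 'long'
trainstep = tf.train.AdamOptimizer(0.0001).minimize(lossobj)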

1 Answer


I see that this is an old issue, but I have found a quick work-around for some cases of this. Chances are, you are feeding your input using a placeholder and one of the dimensions of the placeholder shape is "None". If you set that dimension to your batch size, it will no longer be an unknown shape.
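A minimal sketch of that workaround, assuming the None dimensions belong to an input placeholder that feeds extract_image_patches (the variable names, image size, and batch size below are illustrative, not from the original code): every dimension of the placeholder is given a fixed value, including the batch dimension set to the batch size.

import tensorflow as tf

batch_size = 16  # illustrative value; use your actual batch size

# Instead of shape=[None, None, None, 3], give every dimension a fixed value
# so the static shape is fully known when the gradient graph is built.
images = tf.placeholder(tf.float32, shape=[batch_size, 128, 128, 3])

patches = tf.extract_image_patches(images,
                                   ksizes=[1, 3, 3, 1],
                                   strides=[1, 1, 1, 1],
                                   rates=[1, 1, 1, 1],
                                   padding='SAME')

lossobj = tf.reduce_mean(tf.square(patches))

# With a fully known placeholder shape, minimize() no longer hits the
# TypeError in _ExtractImagePatchesGrad.
trainstep = tf.train.AdamOptimizer(0.0001).minimize(lossobj)

The trade-off is that the graph now only accepts inputs of that fixed size, so dynamically sized images would need to be resized or padded before feeding.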