8 votes

Problem: a very long RNN net

N1 -- N2 -- ... -- N100

For an optimizer like AdamOptimizer, compute_gradients() returns gradients for all trainable variables.

However, the gradients might explode at some step.

A method like the one in how-to-effectively-apply-gradient-clipping-in-tensor-flow can clip the large final gradients.

But how do I clip the intermediate ones?

One way might be to manually do the backprop from "N100 --> N99", clip the gradients, then from "N99 --> N98", and so on, but that's just too complicated.

So my question is: is there an easier way to clip the intermediate gradients? (Of course, strictly speaking, they are no longer gradients in the mathematical sense.)

2
Rough idea -- wrap each of your layers into a py_func that uses a custom gradient, as done here. The custom gradient function would take the vector of backward values and return the clipped version. – Yaroslav Bulatov
Clipping the weights and/or activations might also help to prevent large gradients. – gizzmole

2 Answers

2 votes
import tensorflow as tf

@tf.custom_gradient
def gradient_clipping(x):
  # identity in the forward pass; clip the incoming gradient in the backward pass
  return x, lambda dy: tf.clip_by_norm(dy, 10.0)
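
For concreteness, here is a rough sketch of how this op might be dropped into a long chain like N1 -- ... -- N100. The dense layers, sizes, and learning rate are made-up placeholders (graph-mode TF 1.x assumed); the point is that the forward pass is unchanged while every backward hop passes through the clip:

```
import tensorflow as tf

# Hypothetical graph-mode chain of 100 layers standing in for N1 -- ... -- N100.
x = tf.placeholder(tf.float32, [None, 32])
h = x
for i in range(1, 101):
  h = tf.layers.dense(h, 32, activation=tf.nn.tanh, name='N%d' % i)
  h = gradient_clipping(h)  # gradient flowing back into layer N_i is clipped
loss = tf.reduce_mean(tf.square(h))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```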
0 votes

You can use the custom_gradient decorator to make a version of tf.identity that clips exploding intermediate gradients.

```
import tensorflow as tf
from tensorflow.contrib.eager.python import tfe

@tfe.custom_gradient
def gradient_clipping_identity(tensor, max_norm):
  result = tf.identity(tensor)  # forward pass is a plain identity

  def grad(dresult):
    # clip the incoming gradient; no gradient w.r.t. max_norm
    return tf.clip_by_norm(dresult, max_norm), None

  return result, grad
```

Then use gradient_clipping_identity as you'd normally use tf.identity, and your gradients will be clipped in the backward pass.
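
As a rough illustration (not part of the original answer), here is how it might be inserted between the timesteps of a manually unrolled LSTM, assuming graph mode and that tfe.custom_gradient behaves like tf.custom_gradient; the cell size, step count, and 5.0 max norm are arbitrary:

```
import tensorflow as tf

# Unroll an LSTM over 100 steps, clipping the gradient that flows back
# through the state between consecutive timesteps.
cell = tf.nn.rnn_cell.BasicLSTMCell(64)
inputs = tf.placeholder(tf.float32, [None, 100, 32])  # batch x time x features
state = cell.zero_state(tf.shape(inputs)[0], tf.float32)

outputs = []
for t in range(100):
  output, state = cell(inputs[:, t, :], state)
  # Forward values are untouched; only the backward signal through the state is clipped.
  state = tf.nn.rnn_cell.LSTMStateTuple(
      gradient_clipping_identity(state.c, 5.0),
      gradient_clipping_identity(state.h, 5.0))
  outputs.append(output)

loss = tf.reduce_mean(tf.square(outputs[-1]))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```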