I am in the process of rewriting my code to be compatible with TF 2.0. Unfortunately, almost every example provided by the website uses the Keras API. I, however, want to write code with raw TensorFlow functions.
At some point, the new way of calculating and applying gradients during the training process looks something like this (code stolen from here):
# Optimization process.
def run_optimization(x, y):
    # Wrap the computation inside a GradientTape for automatic differentiation.
    with tf.GradientTape() as g:
        pred = logistic_regression(x)
        loss = cross_entropy(pred, y)
    # Compute gradients.
    gradients = g.gradient(loss, [W, b])
    # Update W and b following gradients.
    optimizer.apply_gradients(zip(gradients, [W, b]))
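For context, the surrounding tutorial defines the pieces used above roughly like this (my reconstruction, not verbatim; the shapes assume MNIST-style inputs):

import tensorflow as tf

# Trainable parameters created by hand.
W = tf.Variable(tf.ones([784, 10]), name="weight")
b = tf.Variable(tf.zeros([10]), name="bias")

optimizer = tf.optimizers.SGD(learning_rate=0.01)

def logistic_regression(x):
    # Simple linear model followed by softmax.
    return tf.nn.softmax(tf.matmul(x, W) + b)

def cross_entropy(y_pred, y_true):
    # One-hot encode the integer labels and compute mean cross-entropy.
    y_true = tf.one_hot(y_true, depth=10)
    y_pred = tf.clip_by_value(y_pred, 1e-9, 1.0)
    return tf.reduce_mean(-tf.reduce_sum(y_true * tf.math.log(y_pred), axis=1))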
The thing that causes problems here is that I have to specify the trainable variables myself. In this particular case it is easy, because W and b have been created manually. It is also easy when using a Keras model, through model.trainable_variables.
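For example, with a Keras model the same training step would look roughly like this (a minimal sketch; the model and loss function here are placeholders of my own, not from the tutorial):

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.optimizers.SGD(learning_rate=0.01)

def run_optimization(x, y):
    with tf.GradientTape() as g:
        pred = model(x)
        loss = loss_fn(y, pred)
    # The model object collects the variables of all its layers for us.
    gradients = g.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))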
In my model, however, I am using the dense layers provided by TensorFlow directly, e.g. tf.keras.layers.Dense, without wrapping them in a Keras model. The function provided by TensorFlow 1.x for this use case was tf.trainable_variables(), but it no longer exists.
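To make the question concrete, my setup looks roughly like this (simplified; the layer sizes are arbitrary):

layer1 = tf.keras.layers.Dense(64, activation="relu")
layer2 = tf.keras.layers.Dense(10)

def forward(x):
    # Plain function composing the layers; there is no Model object
    # exposing trainable_variables for the whole network.
    return layer2(layer1(x))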
How do I access their internal weights to pass them to the GradientTape?