
I want to do something like what tfp.layers.Conv2DReparameterization does but simpler - no priors etc.

Given an augmented input x of shape [num_particles, batch, in_height, in_width, in_channels] and a filter with mean f_mean and standard deviation f_std, both of shape [filter_height, filter_width, in_channels, out_channels] and both trainable variables, I use the reparameterization trick to get filter samples:

filter_samples = f_mean + f_std * tf.random_normal([num_particles] + f_mean.shape.as_list())

Thus, filter_samples is of shape [num_particles, filter_height, filter_width, in_channels, out_channels].
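The broadcasting in that sampling step can be sketched in NumPy to check the shapes (the names and sizes below are just illustrative):

```python
import numpy as np

num_particles = 8
f_mean = np.zeros((3, 3, 16, 32), dtype=np.float32)  # [fh, fw, in_c, out_c]
f_std = np.full_like(f_mean, 0.1)

# Reparameterization trick: one independent filter sample per particle.
# eps has a leading particle axis; f_mean and f_std broadcast across it.
eps = np.random.normal(size=(num_particles,) + f_mean.shape).astype(np.float32)
filter_samples = f_mean + f_std * eps

print(filter_samples.shape)  # (8, 3, 3, 16, 32)
```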

Then, I want to do:

output = tf.nn.conv2d(x, filter_samples, strides=[1, 1, 1, 1], padding='SAME')  # or 'VALID'

where output should be of shape [num_particles] + standard convolution output shape.

For dense layers it works to just do tf.matmul(x, filter_samples), but for conv2d I'm not sure about the result, and I can't find the implementation code to check it. Implementing it myself would be slower than the built-in TF op, so I want to avoid that.
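The dense case works because matmul treats all leading dimensions as batch dimensions when they match. A NumPy sketch of the shapes (np.matmul batches the same way tf.matmul does; sizes are illustrative):

```python
import numpy as np

num_particles, batch, in_dim, out_dim = 4, 5, 6, 7
x = np.random.randn(num_particles, batch, in_dim)
w = np.random.randn(num_particles, in_dim, out_dim)  # one weight sample per particle

# matmul contracts the last axis of x with the second-to-last of w;
# the shared leading num_particles axis is treated as a batch axis.
out = np.matmul(x, w)
print(out.shape)  # (4, 5, 7)
```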

For SAME padding the resulting shape seems okay; for VALID the batch dim changes, which makes me believe it doesn't work as I expect.

Just to be clear, I need the output to keep the num_particles dim. The code is TF 1.x.

Any ideas on how to get that?
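To make the intended semantics concrete, here is a (deliberately slow) NumPy reference of what I want: particle p's batch convolved with particle p's filter sample, VALID padding, NHWC layout. In TF1 the same per-particle loop could presumably be expressed with tf.map_fn over the particle axis, but that's exactly the kind of overhead I was hoping to avoid.

```python
import numpy as np

def conv2d_valid(x, f):
    """Naive VALID conv. x: [batch, H, W, in_c], f: [fh, fw, in_c, out_c]."""
    b, H, W, _ = x.shape
    fh, fw, _, out_c = f.shape
    out = np.zeros((b, H - fh + 1, W - fw + 1, out_c), dtype=x.dtype)
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = x[:, i:i + fh, j:j + fw, :]  # [b, fh, fw, in_c]
            # Contract filter dims against the patch for every batch element.
            out[:, i, j, :] = np.tensordot(patch, f, axes=([1, 2, 3], [0, 1, 2]))
    return out

num_particles = 3
x = np.random.randn(num_particles, 2, 5, 5, 4).astype(np.float32)
filters = np.random.randn(num_particles, 3, 3, 4, 6).astype(np.float32)

# Desired op: one conv per particle, stacked back along the particle axis.
out = np.stack([conv2d_valid(x[p], filters[p]) for p in range(num_particles)])
print(out.shape)  # (3, 2, 3, 3, 6) = [num_particles] + standard conv output shape
```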


1 Answer


I think there is some code that does something similar in tfp.experimental.nn. We can follow up in the GitHub issues you filed/responded to.