
Hi, what I want is this: given a matrix W and a vector V such as

V=[1,2,3,4]
W=[[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,1,1]]

we should get the result:

result=[[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4]]

I found this method online:

V = tf.constant([1,2,4], dtype=tf.float32)
W = tf.constant([[1,2,3,4],[1,2,3,4],[1,2,3,4]], dtype=tf.float32)
tf.multiply(tf.expand_dims(V,1),W)
# produces: [[1, 2, 3, 4], [2, 4, 6, 8], [4, 8, 12, 16]]

which is exactly what I want, but when I use this in my model, the batch dimension of the vector gets included as well, which results in an error such as:

with input shapes: [?,1,297], [?,297,300].

I assume this is the same error that the following code produces:

V = tf.constant([[1,2,4]], dtype=tf.float32)
W = tf.constant([[[1,2,3,4],[1,2,3,4],[1,2,3,4]]], dtype=tf.float32)
tf.multiply(tf.expand_dims(V,1),W)
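TensorFlow's elementwise multiply follows NumPy's broadcasting rules, so the mismatch can be sketched with NumPy alone. With a batched `V` of shape `(batch, 3)`, `expand_dims(V, 1)` yields `(batch, 1, 3)`, whose trailing 3 cannot broadcast against `W`'s trailing 4; the singleton axis needs to go last instead. This is an illustrative sketch, not code from the question:

```python
import numpy as np

# Batched analogue of the question: V has shape (batch, 3),
# W has shape (batch, 3, 4), here with batch = 1.
V = np.array([[1., 2., 4.]])            # shape (1, 3)
W = np.array([[[1., 2., 3., 4.]] * 3])  # shape (1, 3, 4)

# np.expand_dims(V, 1) -> shape (1, 1, 3): trailing 3 vs. W's trailing 4
# cannot broadcast, mirroring the reported [?,1,297] vs [?,297,300] error.
# np.expand_dims(V, 2) -> shape (1, 3, 1): broadcasts row-wise instead.
result = np.expand_dims(V, 2) * W       # shape (1, 3, 4)
print(result)
# [[[1. 2. 3. 4.]
#   [2. 4. 6. 8.]
#   [4. 8. 12. 16.]]]
```

Each `V[b, i]` scales the whole row `W[b, i, :]`, which is the weighted-feature behavior the question asks for.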

What is the standard procedure for taking each element of the softmax output vector and using it as a weight to multiply the corresponding vector in the feature tensor?


1 Answer


I found that by using

V = tf.constant([[1,2,4]], dtype=tf.float32)
W = tf.constant([[[1,2,3,4],[1,2,3,4],[1,2,3,4]]], dtype=tf.float32)
h2=tf.keras.layers.multiply([W,tf.expand_dims(V,2)])

the Keras layer handles the batch dimension for us, but we have to change the axis argument of expand_dims (from 1 to 2), because V still carries its batch dimension when it is fed to the layer.
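The answer's approach can be sketched end to end with explicit shapes (a minimal verification, assuming eager-mode TensorFlow 2):

```python
import tensorflow as tf

# V simulates a batched softmax output, shape (batch, 3);
# W simulates a batched feature tensor, shape (batch, 3, 4).
V = tf.constant([[1., 2., 4.]])
W = tf.constant([[[1., 2., 3., 4.],
                  [1., 2., 3., 4.],
                  [1., 2., 3., 4.]]])

# expand_dims on axis 2 turns V into shape (batch, 3, 1), so each
# scalar V[b, i] broadcasts over the whole feature row W[b, i, :].
h2 = tf.keras.layers.multiply([W, tf.expand_dims(V, 2)])
print(h2)  # shape (1, 3, 4)
```

The same result comes from plain `tf.multiply(W, tf.expand_dims(V, 2))`; the Keras `multiply` layer is just the layer-graph-friendly wrapper around that broadcasted elementwise product.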