I have a matrix A defined as a TensorFlow tensor with n rows and p columns. I also have, say, k matrices B1, ..., Bk, each with p rows and q columns. My goal is to obtain a matrix C with n rows and q columns, where each row of C is the matrix product of the corresponding row of A with one of the B matrices. Which B to choose is determined by a given index vector I of length n, whose entries index into the list of B matrices (0 to k-1 in the example below). In my case the B matrices are weight variables, while I is another tensor given as input.
In NumPy, the computation would look as follows:
import numpy as np

A = np.array([[1, 0, 1],
              [0, 0, 1],
              [1, 1, 0],
              [0, 1, 0]])
B1 = np.array([[1, 1],
               [2, 1],
               [3, 6]])
B2 = np.array([[1, 5],
               [3, 2],
               [0, 2]])
B = [B1, B2]
I = [1, 0, 0, 1]  # which B to use for each row of A
n = A.shape[0]
p = A.shape[1]
q = B1.shape[1]
C = np.zeros(shape=(n, q))
for i in range(n):
    C[i, :] = np.dot(A[i, :], B[I[i]])
How can this be translated to TensorFlow?
In my specific case the variables are defined as:
A = tf.placeholder("float", [None, p])
B1 = tf.Variable(tf.random_normal([p, q]))
B2 = tf.Variable(tf.random_normal([p, q]))
I = tf.placeholder("int32", [None])  # integer indices into the list of B matrices
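For what it's worth, here is a rough sketch of the direction I was imagining, assuming a TensorFlow 1.x style API, that the B matrices can be stacked into a single [k, p, q] tensor, and that I holds zero-based integer indices (the names B_stacked, B_per_row and A_rows are just placeholders I made up). I am not sure whether this is correct or idiomatic, which is why I am asking:

B_stacked = tf.stack([B1, B2])                         # shape [k, p, q]
B_per_row = tf.gather(B_stacked, I)                    # shape [n, p, q], one B per row of A
A_rows = tf.expand_dims(A, 1)                          # shape [n, 1, p]
C = tf.squeeze(tf.matmul(A_rows, B_per_row), axis=1)   # shape [n, q]

In particular, I am unsure whether gathering a full [n, p, q] tensor like this is efficient, or whether there is a better batched-matmul style way to express it.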