I am new to TensorFlow, so this might be an easy question, but it really has me stuck.
I am trying to implement this paper in Keras, with TensorFlow as the backend.
In the first stage of training, the paper uses softmax_pair.
Suppose we get this output from the last fc layer
(the vertical axis is the batch dimension, whose size is None):
x11 x12 x13 x14...
x21 x22 x23 x24...
x31 x32 x33 x34...
...
and we take the elementwise exponential, so we have
e11 e12 e13 e14...
e21 e22 e23 e24...
e31 e32 e33 e34...
...
and then I am stuck here; what I need is
e11/(e11+e12) e12/(e11+e12) e13/(e13+e14) e14/(e13+e14)...
e21/(e21+e22) e22/(e21+e22) e23/(e23+e24) e24/(e23+e24)...
e31/(e31+e32) e32/(e31+e32) e33/(e33+e34) e34/(e33+e34)...
...
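One possible way to express this whole pairwise normalization, as a minimal sketch: group adjacent columns into pairs with a reshape, then apply an ordinary softmax over the last axis. This assumes the number of columns is even and a TensorFlow version where tf.nn.softmax accepts an axis argument (older versions call it dim); pairwise_softmax is just a made-up name here.

```python
import tensorflow as tf

def pairwise_softmax(x):
    # x: [batch, 2 * num_pairs]; the batch dimension may be None
    shape = tf.shape(x)
    # group adjacent columns into pairs: [batch, num_pairs, 2]
    pairs = tf.reshape(x, [shape[0], -1, 2])
    # an ordinary softmax over the last axis normalizes each pair on its own
    normalized = tf.nn.softmax(pairs, axis=-1)
    # flatten back to the original [batch, 2 * num_pairs] shape
    return tf.reshape(normalized, shape)
```

This avoids tf.transpose entirely, since the reshape only regroups columns.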
I don't know how to do the pairwise addition. tf.transpose and tf.segment_sum might work, but after some research I found that transpose is expensive. Furthermore, after tf.segment_sum I only have a tensor of half the size, and I don't know how to expand it back to the full size. I am also wondering how to produce the segment_ids. A sketch of this route follows.
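Here is a minimal sketch of the segment_sum route (TF 1.x names; num_cols and the placeholder are stand-ins for the real fc output), producing segment_ids as [0, 0, 1, 1, ...] and recovering the full size with tf.gather:

```python
import tensorflow as tf

num_cols = 4                                  # stand-in for 2 * number of classes
x = tf.placeholder(tf.float32, [None, num_cols])

# segment_ids pairs up adjacent columns: [0, 0, 1, 1, ...]
segment_ids = tf.constant([i // 2 for i in range(num_cols)])

e = tf.exp(x)                                 # [batch, num_cols]
# tf.segment_sum sums along axis 0, so transpose to [num_cols, batch] first
pair_sums = tf.segment_sum(tf.transpose(e), segment_ids)  # [num_cols / 2, batch]
# "double" the half-sized tensor by gathering each pair sum twice
doubled = tf.gather(pair_sums, segment_ids)   # back to [num_cols, batch]
y = e / tf.transpose(doubled)                 # the pairwise-normalized output
```

The reshape-based sketch above does the same job without the two transposes.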
So how can I do this calculation? Thanks!
---------- Update
The part of the paper I am talking about is Fig. 3.
The fc output is P_{2c-1} and P_{2c}, which are the probabilities of class c appearing or not appearing in the image, for c = 1, 2, 3, ..., num_classes.
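For the Keras side, the pairwise softmax could be wired in after the fc layer with a Lambda layer. A minimal sketch (num_classes, feat_dim, and the layer sizes are made-up placeholders, and pairwise_softmax is the same helper as in the first sketch):

```python
import tensorflow as tf
from keras.layers import Input, Dense, Lambda
from keras.models import Model

num_classes = 3        # placeholder; use the paper's class count
feat_dim = 128         # placeholder feature size from the backbone

def pairwise_softmax(x):
    # group the 2 * num_classes logits into (P_{2c-1}, P_{2c}) pairs
    pairs = tf.reshape(x, [tf.shape(x)[0], num_classes, 2])
    # each pair is normalized so its two probabilities sum to 1
    return tf.reshape(tf.nn.softmax(pairs, axis=-1), tf.shape(x))

inputs = Input(shape=(feat_dim,))
logits = Dense(2 * num_classes)(inputs)   # interleaved P_1, P_2, ..., P_{2C}
probs = Lambda(pairwise_softmax)(logits)  # first-stage output
model = Model(inputs, probs)
```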
Is transpose not expensive? I have sometimes seen this claimed, e.g. in the comment below; perhaps I misunderstood. The TensorFlow docs for tf.transpose state that, unlike NumPy, TensorFlow returns a new tensor, which costs memory.
Comment: Why do you think transpose is expensive? - sygi