PyTorch's torch.transpose
function only swaps two dimensions at a time. Documentation is here.
On the other hand, TensorFlow's tf.transpose
function accepts a full permutation, letting you reorder all N
dimensions of a tensor in a single call.
Can someone please explain why PyTorch does not/cannot offer N-dimensional transpose functionality in a single call? Is this due to the dynamic nature of computation graph construction in PyTorch versus TensorFlow's Define-then-Run paradigm?
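To make the difference concrete, here is a minimal sketch of what I mean. torch.transpose takes exactly two dimension indices to swap, whereas a full reordering of all axes (what tf.transpose(x, perm=[...]) does in one call) requires Tensor.permute in PyTorch:

```python
import torch

x = torch.zeros(2, 3, 5)

# torch.transpose swaps exactly two dimensions (it does work on N-D tensors,
# but only pairwise)
y = torch.transpose(x, 0, 2)
print(y.shape)  # torch.Size([5, 3, 2])

# A full permutation of all dimensions needs Tensor.permute, which is the
# closest analogue of tf.transpose(x, perm=[2, 0, 1])
z = x.permute(2, 0, 1)
print(z.shape)  # torch.Size([5, 2, 3])
```

So my question is really about why the pairwise-swap and full-permutation operations are split across two functions rather than unified as in TensorFlow.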