Does PyTorch support repeating a tensor without allocating significantly more memory?
Assume we have a tensor

import torch

t = torch.ones((1, 1000, 1000))
t10 = t.repeat(10, 1, 1)  # materializes 10 copies of the data along dim 0
Repeating t 10 times takes 10x the memory. Is there a way to create a tensor like t10 without allocating significantly more memory?
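For reference, a minimal check (standard PyTorch calls only) that measures the storage of the two tensors above and confirms the 10x cost:

# element_size() * nelement() gives the storage in bytes
print(t.element_size() * t.nelement())      # 4000000 bytes  (~4 MB, float32)
print(t10.element_size() * t10.nelement())  # 40000000 bytes (~40 MB)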
Here is a related question, but it has no answers.