
I was wondering whether I can build an image-resize module in PyTorch that takes a torch.Tensor of shape 3×H×W as input and returns a tensor holding the resized image.

I know it is possible to convert the tensor to a PIL Image and use torchvision, but I also hope to backpropagate gradients from the resized image to the original image, and the following example raises an error (PyTorch 0.4.0 on Windows 10):

import numpy as np
import torch
from torchvision import transforms

t2i = transforms.ToPILImage()
i2t = transforms.ToTensor()

trans = transforms.Compose(
    [t2i, transforms.Resize(size=200), i2t]
)

test = np.random.normal(size=[3, 300, 300])
test = torch.tensor(test, requires_grad=True)
resized = trans(test)      # fails here; see the traceback below
resized.sum().backward()   # backward() needs a scalar

print(test.grad)

Traceback (most recent call last):
  File "D:/Projects/Python/PyTorch/test.py", line 41, in <module>
    main()
  File "D:/Projects/Python/PyTorch/test.py", line 33, in main
    resized = trans(test)
  File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchvision\transforms\transforms.py", line 42, in __call__
    img = t(img)
  File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchvision\transforms\transforms.py", line 103, in __call__
    return F.to_pil_image(pic, self.mode)
  File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchvision\transforms\functional.py", line 102, in to_pil_image
    npimg = np.transpose(pic.numpy(), (1, 2, 0))
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.

It seems like I cannot "imresize" a tensor without detaching it from autograd first, but detaching it prevents me from computing gradients.

Is there a way to build a torch function/module that does the same thing as torchvision.transforms.Resize but is autograd-compatible? Any help is much appreciated!
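
In case it helps, here is a sketch of the kind of op I am after. On recent PyTorch, torch.nn.functional.interpolate (F.upsample in the 0.4.x line) does a bilinear resize entirely with PyTorch ops, so gradients should flow through it; the sizes here are just for illustration:

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 300, 300, requires_grad=True)  # interpolate expects NCHW

# Differentiable bilinear resize to 200x200, no PIL round-trip
y = F.interpolate(x, size=(200, 200), mode='bilinear', align_corners=False)

y.sum().backward()   # backward() needs a scalar
print(x.grad.shape)  # torch.Size([1, 3, 300, 300])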

Bilinear resize is just a linear combination of surrounding pixel values, and that operation is mathematically differentiable, so it makes no sense that PyTorch cannot backpropagate its gradients... – Lotayou
You are probably thinking about convolutional layers? A handy example is github.com/yunjey/pytorch-tutorial/blob/master/tutorials/… – xxbidiao
I feel your problem is very similar to a spatial transformer network, where they also learn the affine transformation parameters (see the grid_sample sketch after these comments). arxiv.org/pdf/1506.02025.pdf – macharya
autograd only works on PyTorch operations; it is not magic. Under the hood, it calls a backward() function for each operation, so it can't work on non-PyTorch functions like PIL's resize. – Separius
Does this answer your question? How to resize a PyTorch tensor? – iacob
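
A minimal sketch of the spatial-transformer route mentioned in the comments, assuming an identity affine transform: sampling the input on a smaller output grid with grid_sample is itself a differentiable bilinear resize (names and sizes are illustrative):

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 300, 300, requires_grad=True)

# Identity affine matrix; only the output grid size changes
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])

grid = F.affine_grid(theta, size=(1, 3, 200, 200))  # 200x200 sampling grid
y = F.grid_sample(x, grid)                          # bilinear sampling by default

y.sum().backward()
print(x.grad.shape)  # torch.Size([1, 3, 300, 300])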

2 Answers


I just figured out how to preserve the gradients when implementing a custom loss function.

The trick is to attach your result to a dummy gradient term:

def custom_loss(tensor1, tensor2):
    # convert the tensors to PIL images and do the computation there;
    # suppose that non-differentiable calculation yields output = 0.123
    output = 0.123
    # dummy term built from PyTorch ops, so it is part of the autograd graph
    grad = (tensor1 + tensor2).sum()
    # grad - grad cancels numerically but keeps loss attached to the graph
    loss = grad - grad + output
    return loss
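
A quick check that backward() now runs through this loss (tensor names and shapes are just for illustration):

import torch

a = torch.randn(3, 300, 300, requires_grad=True)
b = torch.randn(3, 300, 300, requires_grad=True)

loss = custom_loss(a, b)
loss.backward()          # no detach()/numpy() error this time
print(a.grad.shape)      # torch.Size([3, 300, 300])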