
I am using the C++ frontend for PyTorch and am struggling with a relatively basic indexing problem.

I have an 8 by 6 Tensor such as the one below:

[ Variable[CUDAFloatType]{8,6} ] 
                 0           1           2           3           4           5
0       1.7107e-14  4.0448e-17  4.9708e-06  1.1664e-08  9.9999e-01  2.1857e-20
1       1.8288e-14  5.9356e-17  5.3042e-06  1.2369e-08  9.9999e-01  2.4799e-20
2       2.6828e-04  9.0390e-18  1.7517e-02  1.0529e-03  9.8116e-01  6.7854e-26
3       5.7521e-10  3.1037e-11  1.5021e-03  1.2304e-06  9.9850e-01  1.4888e-17
4       1.7811e-13  1.8383e-15  1.6733e-05  3.8466e-08  9.9998e-01  5.2815e-20
5       9.6191e-06  2.6217e-23  3.1345e-02  2.3024e-04  9.6842e-01  2.9435e-34
6       2.2653e-04  8.4642e-18  1.6085e-02  9.7405e-04  9.8271e-01  6.3059e-26
7       3.8951e-14  2.9903e-16  8.3518e-06  1.7974e-08  9.9999e-01  3.6993e-20

I have another Tensor with just 8 elements in it such as:

[ Variable[CUDALongType]{8} ] 
 0
 3
 4
 4
 4
 4
 4
 4

I would like to index the rows of my first tensor using the second to produce:

        0           
0       1.7107e-14  
1       1.2369e-08
2       9.8116e-01  
3       9.9850e-01  
4       9.9998e-01
5       9.6842e-01  
6       9.8271e-01  
7       9.9999e-01

I have tried a few different approaches, including index_select, but that seems to produce an output with the same dimensions as the input (8x6).
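A minimal sketch of roughly what I tried (the variable names and the torch::from_blob construction are just for illustration):

auto x = torch::randn({8, 6});
int64_t idx_data[8] = { 0, 3, 4, 4, 4, 4, 4, 4 };
auto idx = torch::from_blob(idx_data, {8}, torch::kLong);
// index_select along dim 0 picks out whole rows, so the result is still 8x6.
auto picked = x.index_select(0, idx);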

In Python I think I could do this with the built-in advanced indexing discussed here: https://github.com/pytorch/pytorch/issues/1080

Unfortunately, in C++ I can only index a Tensor with a scalar (a zero-dimensional Tensor), so I don't think that approach works for me here.

How can I achieve my desired result without resorting to loops?

you want to look at pytorch.org/docs/stable/torch.html#torch.gather instead of index_select – Shai

1 Answer


It turns out you can do this in a couple of different ways: one with gather and one with index. From the PyTorch discussion forum, where I asked the same question:

Using torch::gather

auto x = torch::randn({8, 6});
int64_t idx_data[8] = { 0, 3, 4, 4, 4, 4, 4, 4 };
// Wrap the raw indices in a 1-D LongTensor (torch::from_blob does not copy the data).
auto idx = torch::from_blob(idx_data, {8}, torch::kLong);
// gather picks x[i][idx[i]] along dim 1; the index tensor must have the same
// number of dimensions as x, hence the unsqueeze to shape {8, 1}.
auto result = x.gather(1, idx.unsqueeze(1));
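Note that gather keeps the extra dimension, so result has shape {8, 1}; if you want a flat vector of 8 values you can call result.squeeze(1). The index tensor passed to gather must have dtype kLong.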

Using the C++-specific torch::index

auto x = torch::randn({8, 6});
int64_t idx_data[8] = { 0, 3, 4, 4, 4, 4, 4, 4 };
auto idx = torch::from_blob(idx_data, {8}, torch::kLong);
// Row indices 0..7, paired element-wise with the column indices in idx.
auto rows = torch::arange(0, x.size(0), torch::kLong);
// Equivalent to Python's x[rows, idx]; the result is a 1-D tensor with 8 elements.
auto result = x.index({rows, idx});
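Here result is already a 1-D tensor of 8 elements, since passing two index tensors to torch::index is the C++ counterpart of the Python advanced indexing x[rows, idx] mentioned in the question.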