How to make shifts in tensors (images) without having to make loops?


I want to perform shifts on a tensor (a set of images), for example using torch.roll or some other method. But the shift I am going to apply to each image is different (I would have another tensor holding the shifts). Is there any efficient way to do it "all at once" without having to loop through each of the images?

For example, I have the following simple tensor, which corresponds to two images:

```python
torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]).view(2, 2, 2)
```

```
tensor([[[1, 2],
         [3, 4]],

        [[5, 6],
         [7, 8]]])
```

And I need to apply a 1 px shift along the x-axis to the first image and along the y-axis to the second one:

```
tensor([[[2, 1],
         [4, 3]],

        [[7, 8],
         [5, 6]]])
```

Answer 1

Score: 1

Expanding on the comments, I think a for loop is likely the best way to do this:

```python
import torch
data = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]).view(2, 2, 2)

# define shift amount in y-axis and x-axis for each element
shifts = [(0, 1), (1, 0)]

# shift each image separately and concatenate them
data_shifted = []
for d, s in zip(data, shifts):
    data_shifted.append(torch.roll(d, s, dims=(0, 1)))
result = torch.stack(data_shifted, dim=0)

print(result)
```

which results in

```
tensor([[[2, 1],
         [4, 3]],

        [[7, 8],
         [5, 6]]])
```

The loop could be replaced with a one-line comprehension if you prefer:

```python
result = torch.stack([torch.roll(d, s, dims=(0, 1)) for d, s in zip(data, shifts)], dim=0)
```

All of this is autograd-compatible, just FYI.
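For completeness, here is one fully vectorized alternative — a sketch of my own, not part of the answer above — that avoids the Python loop by using advanced indexing. It assumes `shifts` is given as a tensor of per-image `(dy, dx)` pairs and all images share the same height and width:

```python
import torch

data = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]).view(2, 2, 2)
shifts = torch.tensor([[0, 1], [1, 0]])  # per-image (dy, dx), matching the example

N, H, W = data.shape
# torch.roll by s along an axis means output[i] = input[(i - s) % size],
# so precompute the rolled row/column indices for every image at once.
rows = (torch.arange(H).unsqueeze(0) - shifts[:, 0:1]) % H  # shape (N, H)
cols = (torch.arange(W).unsqueeze(0) - shifts[:, 1:2]) % W  # shape (N, W)

# Advanced indexing broadcasts: result[n, i, j] = data[n, rows[n, i], cols[n, j]]
batch = torch.arange(N).view(N, 1, 1)
result = data[batch, rows.view(N, H, 1), cols.view(N, 1, W)]
print(result)
```

This produces the same tensor as the loop, and the indexing is also autograd-compatible with respect to `data`. Whether it is actually faster than the loop depends on the batch size, so it is worth benchmarking for your case.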

huangapple
  • Posted on 2023-02-27 01:10:51
  • Please keep this link when reposting: https://go.coder-hub.com/75573670.html