How do I find and replace the element of the specified index

Question

Suppose I have two distance matrices, up and down, both of size (batch_size, pomo_size, problem_size, problem_size). Assume batch_size=1, pomo_size=2, problem_size=5.

up = tensor([[[[8, 5, 6, 8, 2],
              [2, 9, 7, 7, 0],
              [5, 9, 1, 7, 7],
              [0, 5, 8, 6, 3],
              [5, 9, 1, 0, 2]],

             [[2, 8, 1, 4, 7],
              [0, 0, 2, 2, 7],
              [4, 7, 7, 9, 4],
              [6, 6, 7, 1, 3],
              [3, 9, 9, 7, 2]]]])
              
down = tensor([[[[6, 1, 7, 9, 1],
                [7, 6, 2, 7, 9],
                [5, 3, 8, 6, 3],
                [0, 8, 6, 3, 3],
                [8, 9, 4, 8, 1]],

               [[7, 2, 0, 6, 0],
                [6, 7, 5, 3, 9],
                [4, 8, 6, 6, 1],
                [9, 3, 2, 2, 5],
                [2, 5, 2, 1, 8]]]])

We also have a sequence selected_node_list of size (batch_size, pomo_size, selected_length); assume selected_length=8.

selected_node_list = torch.tensor([[[0, 1, 2, 3, 4, 1, 3, 2],
                                   [1, 0, 3, 4, 0, 2, 4, 3]]])

I want to find the edges described by this node sequence in down and replace each edge's value with the corresponding value from up. For example, take the first pomo of the first batch. The edge values found in up are (0, 1) = 5, (1, 2) = 7, (2, 3) = 7, (3, 4) = 3, (4, 1) = 9, (1, 3) = 7, (3, 2) = 8, (2, 0) = 5. Writing these into the corresponding positions of down gives the following matrix for the first pomo of the first batch:

[[6, 5, 7, 9, 1],
 [7, 6, 7, 7, 9],
 [5, 3, 8, 7, 3],
 [0, 8, 8, 3, 3],
 [8, 9, 4, 8, 1]]

Also, the straightforward implementation with nested for loops is time-consuming. Can anyone provide a fast, batched implementation using torch functions such as gather and scatter_?

One point to note: the edge formed by the last and first nodes of the selected_node_list sequence also needs to be replaced.

The complete output down is:

tensor([[[[6, 5, 7, 9, 1],
          [7, 6, 7, 7, 9],
          [5, 3, 8, 7, 3],
          [0, 8, 8, 3, 3],
          [8, 9, 4, 8, 1]],

         [[7, 2, 1, 4, 0],
          [0, 7, 5, 3, 9],
          [4, 8, 6, 6, 4],
          [9, 6, 2, 2, 3],
          [3, 5, 2, 7, 8]]]])

Answer 1

Score: 0

You can use indexing and assignment in PyTorch.

import torch

up = torch.tensor([[[[8, 5, 6, 8, 2],
                    [2, 9, 7, 7, 0],
                    [5, 9, 1, 7, 7],
                    [0, 5, 8, 6, 3],
                    [5, 9, 1, 0, 2]],

                   [[2, 8, 1, 4, 7],
                    [0, 0, 2, 2, 7],
                    [4, 7, 7, 9, 4],
                    [6, 6, 7, 1, 3],
                    [3, 9, 9, 7, 2]]]])

down = torch.tensor([[[[6, 1, 7, 9, 1],
                      [7, 6, 2, 7, 9],
                      [5, 3, 8, 6, 3],
                      [0, 8, 6, 3, 3],
                      [8, 9, 4, 8, 1]],

                     [[7, 2, 0, 6, 0],
                      [6, 7, 5, 3, 9],
                      [4, 8, 6, 6, 1],
                      [9, 3, 2, 2, 5],
                      [2, 5, 2, 1, 8]]]])

selected_node_list = torch.tensor([[[0, 1, 2, 3, 4, 1, 3, 2],
                                   [1, 0, 3, 4, 0, 2, 4, 3]]])

batch_size, pomo_size, selected_length = selected_node_list.shape

for b in range(batch_size):
    for p in range(pomo_size):
        for i in range(selected_length):
            src = selected_node_list[b, p, i]
            # wrap around so the last-to-first edge is replaced as well
            dest = selected_node_list[b, p, (i + 1) % selected_length]
            down[b, p, src, dest] = up[b, p, src, dest]

print(down)

Let me know if you need further help.
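The question also asks for a batched implementation without Python loops. The same replacement can be done in one vectorized assignment with PyTorch advanced (integer) indexing; the following is a minimal sketch, where the helper names `seq`, `src`, `dst`, `b`, and `p` are my own, and the index tensors are assumed to broadcast to a common (batch, pomo, length) shape per the usual PyTorch broadcasting rules:

```python
import torch

up = torch.tensor([[[[8, 5, 6, 8, 2],
                     [2, 9, 7, 7, 0],
                     [5, 9, 1, 7, 7],
                     [0, 5, 8, 6, 3],
                     [5, 9, 1, 0, 2]],
                    [[2, 8, 1, 4, 7],
                     [0, 0, 2, 2, 7],
                     [4, 7, 7, 9, 4],
                     [6, 6, 7, 1, 3],
                     [3, 9, 9, 7, 2]]]])
down = torch.tensor([[[[6, 1, 7, 9, 1],
                       [7, 6, 2, 7, 9],
                       [5, 3, 8, 6, 3],
                       [0, 8, 6, 3, 3],
                       [8, 9, 4, 8, 1]],
                      [[7, 2, 0, 6, 0],
                       [6, 7, 5, 3, 9],
                       [4, 8, 6, 6, 1],
                       [9, 3, 2, 2, 5],
                       [2, 5, 2, 1, 8]]]])
selected_node_list = torch.tensor([[[0, 1, 2, 3, 4, 1, 3, 2],
                                    [1, 0, 3, 4, 0, 2, 4, 3]]])

# close the tour so the last-to-first edge is included
seq = torch.cat([selected_node_list, selected_node_list[:, :, :1]], dim=-1)
src, dst = seq[:, :, :-1], seq[:, :, 1:]         # edge endpoints, (batch, pomo, L)

b = torch.arange(seq.size(0)).view(-1, 1, 1)     # batch indices, broadcastable
p = torch.arange(seq.size(1)).view(1, -1, 1)     # pomo indices, broadcastable
down[b, p, src, dst] = up[b, p, src, dst]        # batched index assignment, no loops
print(down)
```

The four index tensors broadcast together, so every selected edge is read from up and written into down in a single assignment, which should avoid the Python-level loop overhead.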


Answer 2

Score: 0

Key idea: compute a flattened index for each edge and choose the value from up where the index matches

You can use the torch.where function to achieve this.

However, it is hard to use torch.where with a condition that depends on multiple dimensions, so we can flatten everything first and reshape back to the original shape after the computation.

Flatten the tensors

batch_size, pomo_size, problem_size = up.size()[:-1]
select_size = selected_node_list.size()[-1] - 1
index_flattened = selected_node_list[:, :, :-1] * problem_size + selected_node_list[:, :, 1:]
up_flattened = up.view(batch_size, pomo_size, problem_size ** 2) # batch_size, pomo_size, problem_size ** 2; -1 also works for the last argument
down_flattened = down.view(batch_size, pomo_size, -1) # batch_size, pomo_size, problem_size ** 2
print(index_flattened)

Output: tensor([[[ 1,  7, 13, 19, 21,  8, 17],
         [ 5,  3, 19, 20,  2, 14, 23]]])

At this point, the up and down tensors have shape (batch_size, pomo_size, problem_size ** 2), and the index has been recomputed per edge pair.

Create the mask tensor

We build a tensor of the same shape, (batch_size, pomo_size, problem_size ** 2), holding 1 where the value should be taken from up and 0 otherwise.

A mask tensor is easy to create with a for loop, but it can also be built without one, in a slightly roundabout way: make a one-hot vector for each index and sum them.

pivots = torch.arange(0, problem_size ** 2).view(1, 1, 1, -1).expand(batch_size, pomo_size, select_size, -1)
index = index_flattened.unsqueeze(-1).expand(-1, -1, -1, problem_size ** 2)
mask_flattened = torch.where(pivots == index, torch.ones_like(pivots), torch.zeros_like(pivots))
mask_flattened = torch.sum(mask_flattened, dim=-2)
print(mask_flattened)

Output: tensor([[[0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0],
         [0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0]]])

Compute the result and reshape

out = torch.where(mask_flattened == 1, up_flattened, down_flattened) # take the value from up wherever mask_flattened is 1

out = out.view(batch_size, pomo_size, problem_size, problem_size) # reshape back to the original shape

print(out)

Output:

tensor([[[[6, 5, 7, 9, 1],
          [7, 6, 7, 7, 9],
          [5, 3, 8, 7, 3],
          [0, 8, 8, 3, 3],
          [8, 9, 4, 8, 1]],

         [[7, 2, 1, 4, 0],
          [0, 7, 5, 3, 9],
          [4, 8, 6, 6, 4],
          [9, 3, 2, 2, 3],
          [3, 5, 2, 7, 8]]]])

This is the result you want to see.

Edited 2023-06-18

I missed one more thing that has to be done.

Add selected_node_list = torch.concat([selected_node_list, selected_node_list[:, :, 0].unsqueeze(-1)], dim=-1) at the very beginning so that the last-to-first edge is handled correctly.
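Putting the snippets together, with the wrap-around concat applied first, the whole flatten-mask-select approach might look like this end to end. This is a sketch using the question's tensors; it assumes each edge appears at most once per pomo, since the mask test uses `== 1`:

```python
import torch

up = torch.tensor([[[[8, 5, 6, 8, 2],
                     [2, 9, 7, 7, 0],
                     [5, 9, 1, 7, 7],
                     [0, 5, 8, 6, 3],
                     [5, 9, 1, 0, 2]],
                    [[2, 8, 1, 4, 7],
                     [0, 0, 2, 2, 7],
                     [4, 7, 7, 9, 4],
                     [6, 6, 7, 1, 3],
                     [3, 9, 9, 7, 2]]]])
down = torch.tensor([[[[6, 1, 7, 9, 1],
                       [7, 6, 2, 7, 9],
                       [5, 3, 8, 6, 3],
                       [0, 8, 6, 3, 3],
                       [8, 9, 4, 8, 1]],
                      [[7, 2, 0, 6, 0],
                       [6, 7, 5, 3, 9],
                       [4, 8, 6, 6, 1],
                       [9, 3, 2, 2, 5],
                       [2, 5, 2, 1, 8]]]])
selected_node_list = torch.tensor([[[0, 1, 2, 3, 4, 1, 3, 2],
                                    [1, 0, 3, 4, 0, 2, 4, 3]]])

# append the first node so the last-to-first edge is also covered
selected_node_list = torch.cat(
    [selected_node_list, selected_node_list[:, :, 0].unsqueeze(-1)], dim=-1)

batch_size, pomo_size, problem_size = up.size()[:-1]
select_size = selected_node_list.size()[-1] - 1

# flatten each (row, col) edge into a single index over the flattened matrix
index_flattened = (selected_node_list[:, :, :-1] * problem_size
                   + selected_node_list[:, :, 1:])
up_flattened = up.view(batch_size, pomo_size, -1)
down_flattened = down.view(batch_size, pomo_size, -1)

# one-hot vector per edge, summed into a 0/1 mask over all matrix cells
pivots = torch.arange(0, problem_size ** 2).view(1, 1, 1, -1).expand(
    batch_size, pomo_size, select_size, -1)
index = index_flattened.unsqueeze(-1).expand(-1, -1, -1, problem_size ** 2)
mask_flattened = torch.where(pivots == index,
                             torch.ones_like(pivots),
                             torch.zeros_like(pivots)).sum(dim=-2)

# take values from up where masked, then reshape to the original matrix shape
out = torch.where(mask_flattened == 1, up_flattened, down_flattened)
out = out.view(batch_size, pomo_size, problem_size, problem_size)
print(out)
```

With the concat in place, the output matches the complete expected down from the question, including the replaced (3, 1) entry in the second pomo.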


huangapple
  • Posted on June 16, 2023 at 10:43:14
  • When reposting, please retain the original link: https://go.coder-hub.com/76486650.html