How to randomly sample 2 indices of True values for each row in a boolean tensor in PyTorch, in the fastest way?


Question

I'm given a PyTorch boolean tensor of size `NxK` (`K>=2`), which is guaranteed to have at least 2 `True` values in each row.

For each row, I want to get the indices of 2 **random** cells with `True` values.

For example, let's say I have the following tensor:

```python
tensor([[False,  True, False, False,  True],
        [ True,  True,  True, False,  True],
        [ True,  True, False,  True, False],
        [False,  True, False, False,  True],
        [False,  True, False, False,  True]])
```

So a possible output would be:

```python
tensor([[1, 4],
        [4, 2],
        [0, 3],
        [4, 1],
        [4, 1]])
```

Because on the first row, we have `True` values in columns 1 and 4, on the second row we have `True` values in columns 4 and 2, and so on.

Another possible output would be:

```python
tensor([[4, 1],
        [1, 4],
        [0, 1],
        [1, 4],
        [1, 4]])
```

Because now a different pair of random `True` values was selected for each row.

I've currently implemented a PyTorch-NumPy-PyTorch solution for this, using `np.random.choice`:

```python
available_trues = [t.nonzero(as_tuple=False).flatten() for t in input_tensor]
only_2_trues = [np.random.choice(t.cpu(), size=(2,), replace=False) for t in available_trues]
only_2_trues = torch.from_numpy(np.stack(only_2_trues)).cuda()
```

But this solution is not vectorized (since it works with lists), and it requires moving the data back and forth between the CPU and the GPU. Since I'm working on large matrices and running this operation many times, this causes a major slowdown.

I wonder if it can be done without list comprehensions or without moving the data (or even without both :]).

# Answer 1

**Score**: 3

Initially, I was thinking this could be done with `torch.topk`, but then realized that it would be deterministic.

I was then able to make it work using `torch.multinomial` with some extra setup of a probability matrix.

Assume the matrix is `m`.

```python
p = m / m.sum(axis=1).unsqueeze(1)
tensor([[0.0000, 0.5000, 0.0000, 0.0000, 0.5000],
        [0.2500, 0.2500, 0.2500, 0.0000, 0.2500],
        [0.3333, 0.3333, 0.0000, 0.3333, 0.0000],
        [0.0000, 0.5000, 0.0000, 0.0000, 0.5000],
        [0.0000, 0.5000, 0.0000, 0.0000, 0.5000]])
```

Then

```python
p.multinomial(num_samples=2)
tensor([[4, 1],
        [2, 4],
        [1, 0],
        [1, 4],
        [1, 4]])
```

Each time you run, you get different results.
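Note that `multinomial` draws from PyTorch's global RNG, so the samples can be made repeatable by seeding it with `torch.manual_seed`. A minimal sketch, reusing the first two probability rows from above:

```python
import torch

# First two probability rows from the answer above
p = torch.tensor([[0.00, 0.50, 0.00, 0.00, 0.50],
                  [0.25, 0.25, 0.25, 0.00, 0.25]])

torch.manual_seed(0)          # fix the global RNG state
a = p.multinomial(num_samples=2)

torch.manual_seed(0)          # reset to the same state
b = p.multinomial(num_samples=2)

assert torch.equal(a, b)      # identical draws under the same seed
```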

Obviously you can combine the above steps into one; I'm just showcasing what exactly the `p` matrix is doing.
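As a sketch of combining the steps: `torch.multinomial` accepts unnormalized non-negative weights (rows need not sum to one), so the boolean mask cast to float can be passed in directly, skipping the explicit `p` matrix:

```python
import torch

# The boolean input tensor from the question
m = torch.tensor([[False,  True, False, False,  True],
                  [ True,  True,  True, False,  True],
                  [ True,  True, False,  True, False],
                  [False,  True, False, False,  True],
                  [False,  True, False, False,  True]])

# True -> weight 1.0, False -> weight 0.0; sampling without replacement
# (the default) yields 2 distinct True-column indices per row.
idx = m.float().multinomial(num_samples=2)

# Sanity check: every sampled index points at a True cell in its row.
rows = torch.arange(m.size(0)).unsqueeze(1)
assert m[rows, idx].all()
```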


huangapple
  • Published on 2023-07-12 23:53:38
  • Please keep this link when reposting: https://go.coder-hub.com/76672427.html