Convert Numpy vector subtraction to Pytorch tensor subtraction

Question


I'm trying to use this code (from here) but in PyTorch (it's an N-body simulation):

import numpy as np

mass = 20.0*np.ones((500,1))/500  # total mass of particles is 20
pos  = np.random.randn(500,3)
G = 1.0

# positions r = [x,y,z] for all particles
x = pos[:,0:1]
y = pos[:,1:2]
z = pos[:,2:3]

# matrix that stores all pairwise particle separations: r_j - r_i
dx = x.T - x
dy = y.T - y
dz = z.T - z

inv_r3 = (dx**2 + dy**2 + dz**2)
inv_r3[inv_r3>0] = inv_r3[inv_r3>0]**(-1.5)

ax = G * (dx * inv_r3) @ mass
ay = G * (dy * inv_r3) @ mass
az = G * (dz * inv_r3) @ mass

# pack together the acceleration components
a = np.hstack((ax,ay,az))

I know I can break it down per dimension in PyTorch:

dx = torch.tensor(pos[:,0:1]).T - torch.tensor(pos[:,0:1])

The issue is that my tensor has far more than 3 columns (e.g., torch.rand(500,1000) instead of np.random.randn(500,3)), so splitting it per dimension as above (e.g., x = pos[:,0:1]) is not practical. Is there a way to write the same code for a PyTorch tensor with many dimensions without splitting it up dimension by dimension?

Answer 1

Score: 1


Let's first deal with the differential terms.

differentials = pos.T[:,None,:] - torch.swapaxes(pos.T[:,None,:], 1, 2)

The first part transposes pos and inserts an extra axis in the middle, which effectively separates the coordinate columns from one another. The axis swap then does what the transpose did for a single column vector, but applied only to the last two axes, so broadcasting the subtraction produces every pairwise difference at once.

You could also save the pos.T[:,None,:] to an intermediate variable if you so desire.
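As a quick sanity check on the shapes involved (a sketch, using the same N = 500 particles with 3 coordinates):

```python
import torch

pos = torch.randn(500, 3)

a = pos.T[:, None, :]        # shape (3, 1, 500): one row vector per coordinate
b = torch.swapaxes(a, 1, 2)  # shape (3, 500, 1): one column vector per coordinate

# Broadcasting (3, 1, 500) - (3, 500, 1) yields all pairwise separations at once:
# entry [k, i, j] is pos[j, k] - pos[i, k], i.e. r_j - r_i in coordinate k.
differentials = a - b        # shape (3, 500, 500)
```

Slice 0 of the result is exactly the dx matrix that the per-coordinate code computes as x.T - x.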

For inv_r3, that is just summing the squares of the differentials over axis 0, which indexes the coordinates.

inv_r3 = torch.sum(differentials**2, dim=0)
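If you prefer a built-in, the same matrix of squared separations can also be obtained with torch.cdist, which computes pairwise Euclidean distances (a sketch; double precision is used here only so the two results agree tightly):

```python
import torch

pos = torch.randn(500, 3, dtype=torch.float64)

differentials = pos.T[:, None, :] - torch.swapaxes(pos.T[:, None, :], 1, 2)
inv_r3 = torch.sum(differentials**2, dim=0)  # squared pairwise distances, (500, 500)

# torch.cdist returns unsquared Euclidean distances, so square the result.
r2 = torch.cdist(pos, pos) ** 2
```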

The remaining lines are unchanged. The acceleration components come out already stacked along the first axis; the result only needs to be squeezed and transposed.

All together:

import torch

N = 500
dims = 3
mass = 20.0 * torch.ones((N, 1)) / N
pos = torch.randn(N, dims)
G = 1.0

differentials = pos.T[:, None, :] - torch.swapaxes(pos.T[:, None, :], 1, 2)
inv_r3 = torch.sum(differentials**2, dim=0)
inv_r3[inv_r3 > 0] = inv_r3[inv_r3 > 0]**(-1.5)
acceleration = (G * differentials * inv_r3 @ mass).squeeze(-1).T
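Putting it together, you can check that the vectorized PyTorch version reproduces the original per-coordinate NumPy code when both run on the same data (a sketch):

```python
import numpy as np
import torch

N, dims, G = 500, 3, 1.0
rng = np.random.default_rng(0)
pos_np = rng.standard_normal((N, dims))
mass_np = 20.0 * np.ones((N, 1)) / N

# Original per-coordinate NumPy version
x, y, z = pos_np[:, 0:1], pos_np[:, 1:2], pos_np[:, 2:3]
dx, dy, dz = x.T - x, y.T - y, z.T - z
inv_r3_np = dx**2 + dy**2 + dz**2
inv_r3_np[inv_r3_np > 0] = inv_r3_np[inv_r3_np > 0] ** (-1.5)
a_np = np.hstack((G * (dx * inv_r3_np) @ mass_np,
                  G * (dy * inv_r3_np) @ mass_np,
                  G * (dz * inv_r3_np) @ mass_np))

# Vectorized PyTorch version on the same positions and masses
pos = torch.from_numpy(pos_np)
mass = torch.from_numpy(mass_np)
differentials = pos.T[:, None, :] - torch.swapaxes(pos.T[:, None, :], 1, 2)
inv_r3 = torch.sum(differentials**2, dim=0)
inv_r3[inv_r3 > 0] = inv_r3[inv_r3 > 0] ** (-1.5)
acceleration = (G * differentials * inv_r3 @ mass).squeeze(-1).T
```

Both give an (N, 3) array of accelerations, so np.allclose on the two results confirms the conversion.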

huangapple · Posted on 2023-06-26 11:09:19
Original link: https://go.coder-hub.com/76553302.html