Vector Concatenation

Question


Suppose I have three vectors A, B, C

> A vector size of 256
> B vector size of 256
> C vector size of 256

Now I want to do concatenation in the following way:

> AB = vector size will be 512
> AC = vector size will be 512
> BC = vector size will be 512

However, I need to restrict all the concatenated vectors to 256, like:

> AB = vector size will be 256
> AC = vector size will be 256
> BC = vector size will be 256

One way is to take the element-wise mean of the two vectors: average A's first value with B's first value, A's second value with B's second value, and so on. The same applies to the other pairs.
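The element-wise averaging step itself is a single tensor operation. A minimal sketch (the example values are my own, not from the question):

```python
import torch

A = torch.tensor([1.0, 2.0, 3.0])
B = torch.tensor([5.0, 6.0, 7.0])

# Element-wise mean: the result has the same length as A and B
# (256 in the question), not twice the length as plain concatenation would.
AB = (A + B) / 2
print(AB)  # tensor([3., 4., 5.])
```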

How I implement this:

x # torch.Size([32, 3, 256]) # 32 is Batch size, 3 is vector A, vector B, vector C and 256 is each vector dimension

def my_fun(self, x):
    # x: (3, 256) -- the three vectors of one sample
    n = x.shape[0]
    n_pairs = n * (n - 1) // 2  # 3 pairs (AB, AC, BC) for 3 vectors
    counter = 0
    new_x = torch.zeros((n_pairs, x.shape[1]), dtype=torch.float32, device=torch.device('cuda'))
    for i in range(0, n - 1):
        for j in range(i + 1, n):  # pair each vector with every later one
            new_x[counter, :] = (x[i, :] + x[j, :]) / 2
            counter += 1
    final_T = torch.cat((x, new_x), dim=0)  # (3 + 3, 256)
    return final_T

ref = torch.zeros((x.shape[0], 6, x.shape[2]), dtype=torch.float32, device=torch.device('cuda'))  # 3 originals + 3 pairwise means
for i in range(x.shape[0]):
    ref[i, :, :] = self.my_fun(x[i, :, :])

But this implementation is computationally expensive, mainly because it iterates over the batch in a Python loop. Is there a more efficient way to implement this task?

Answer 1

Score: 1


PyTorch has a built-in mean method, which computes means element-wise.

import torch
import numpy as np
import itertools as it

allvectors = torch.stack((a, b, c), dim=0)
values = it.combinations([0, 1, 2], 2)  # index pairs (0,1), (0,2), (1,2)

for i, j in values:
    pairedvectors = torch.stack((allvectors[i], allvectors[j]), dim=0)
    mean = torch.mean(pairedvectors, dim=0)
    print(mean)

For three example vectors:

a = torch.from_numpy(np.zeros(5))
b = torch.from_numpy(np.ones(5))
c = torch.from_numpy(np.ones(5) * 5)

It results in the following vectors:

tensor([0.5000, 0.5000, 0.5000, 0.5000, 0.5000], dtype=torch.float64)
tensor([2.5000, 2.5000, 2.5000, 2.5000, 2.5000], dtype=torch.float64)
tensor([3., 3., 3., 3., 3.], dtype=torch.float64)
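The loop above still iterates over pairs in Python, and the question's main complaint was the per-sample batch loop. That loop can be removed entirely with advanced indexing over the whole batch. A sketch (the helper name `pairwise_means` is my own, assuming x has shape (batch, n, dim) as in the question):

```python
import torch

def pairwise_means(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, n, dim). torch.combinations yields the index pairs
    # (0,1), (0,2), (1,2), ...; advanced indexing pulls both members
    # of every pair for the whole batch at once, so no Python loop
    # over samples or pairs is needed.
    idx = torch.combinations(torch.arange(x.shape[1]), r=2)   # (n_pairs, 2)
    means = (x[:, idx[:, 0], :] + x[:, idx[:, 1], :]) / 2     # (batch, n_pairs, dim)
    return torch.cat((x, means), dim=1)                       # (batch, n + n_pairs, dim)

x = torch.randn(32, 3, 256)
out = pairwise_means(x)
print(out.shape)  # torch.Size([32, 6, 256])
```

The first three rows of each sample are the original vectors; the last three are the AB, AC, and BC means, matching what the explicit double loop produces.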

huangapple
  • Posted on 2023-02-06 19:46:18
  • Please retain this link when reposting: https://go.coder-hub.com/75360898.html