IndexError: index 4 is out of bounds for dimension 0 with size 4


Question



I am trying to write a neural network in PyTorch that learns, for example, a very simple A+B operation on two images.
While writing it, I got this error. My batch_size is 4.

class myNet(nn.Module):
    def __init__(self):
        super(myNet, self).__init__()
        
        self.fc1 = nn.Linear(3072, 3072)  # set up FC layer
        self.fc2 = nn.Linear(3072, 3072)  # set up the other FC layer
        self.fc3 = nn.Linear(3072, 3072)  # set up the other FC layer

    def forward(self, input1, input2):
        a = self.fc1(input1)
        print(a.size())
        b = self.fc2(input2)
        print(b.size())
        # now we can combine `a` and `b`
        combined = [a[i]+b[i] for i in range(3072)]
        print(combined.size())
        out = self.fc3(combined)
        print(out.size())
        return out

and then:

num_epochs = 10
losses = []
batch_size = 4

for epoch in range(num_epochs):
    for i, (im1, im2, mid) in enumerate(trainloader):
        #im1= im1/255
        im1 = torch.from_numpy(np.array(im1, dtype='float32'))
        im1 = im1.to(device)
       
        
        #im2= im2/255
        im2 = torch.from_numpy(np.array(im2, dtype='float32'))
        im2 = im2.to(device)
        
        #mid= mid/255
        mid = torch.from_numpy(np.array(mid, dtype='float32'))
        mid = mid.to(device)
       
        # forward pass
        outputs = model(im1, im2)
        #outputs = outputs.double()
        
        # loss
        loss = criterion(outputs, mid)
        losses.append(loss.item())

        # backward pass
        optimizer.zero_grad()
        loss.backward()
        
        # update parameters
        optimizer.step()
        
        # report
        if (i + 1) % 50 == 0:
            print('Epoch [%2d/%2d], Step [%3d/%3d], Loss: %.4f'
                  % (epoch + 1, num_epochs, i + 1, train_size // batch_size, loss.item()))

Running it, I get the error mentioned above.
I know it's related to the batches, but I don't know how to fix it.
Any ideas on how to fix it? Or maybe another way to write this code would be helpful.

Answer 1

Score: 0


On the following line:

combined = [a[i]+b[i] for i in range(3072)]

You are looping over the first dimension of a and b, over the range [0, 3072). Since both tensors are shaped batch first, per PyTorch shape convention, that dimension only has size 4 (your batch size), so you go out of bounds as soon as i reaches 4. What you are probably looking to do is loop over the second dimension instead, which corresponds to your feature dimension:

combined = [a[:,i]+b[:,i] for i in range(3072)]

This is identical to using the torch.Tensor.__add__ operator:

combined = a + b 
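As a quick sanity check (a hypothetical snippet, assuming two random tensors with your batch size of 4 and 3072 features), the column-wise loop and the plain addition produce the same result:

import torch

a = torch.randn(4, 3072)
b = torch.randn(4, 3072)

# stack the per-feature sums back along dim 1 and compare with a + b
looped = torch.stack([a[:, i] + b[:, i] for i in range(3072)], dim=1)
direct = a + b
print(torch.equal(looped, direct))  # True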

Do note, however, that this is an addition fusion rather than a concatenation fusion.
Depending on your needs, you may have to modify this piece of code with respect to how you actually want the two feature tensors to be merged.
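For completeness, here is a minimal sketch of the fixed forward pass with the addition fusion applied. It assumes your inputs are already flattened to shape (batch_size, 3072); the commented-out torch.cat line shows what the concatenation variant would look like, in which case fc3 would need 2 * 3072 input features.

import torch
import torch.nn as nn

class myNet(nn.Module):
    def __init__(self):
        super(myNet, self).__init__()
        self.fc1 = nn.Linear(3072, 3072)  # set up FC layer
        self.fc2 = nn.Linear(3072, 3072)  # set up the other FC layer
        # for addition fusion fc3 keeps 3072 inputs;
        # for concatenation fusion it would need nn.Linear(2 * 3072, 3072)
        self.fc3 = nn.Linear(3072, 3072)

    def forward(self, input1, input2):
        a = self.fc1(input1)   # (batch_size, 3072)
        b = self.fc2(input2)   # (batch_size, 3072)
        combined = a + b       # addition fusion, shape stays (batch_size, 3072)
        # combined = torch.cat((a, b), dim=1)  # concatenation fusion, (batch_size, 6144)
        out = self.fc3(combined)
        return out

# quick shape check with a dummy batch of 4 flattened 32x32x3 images
model = myNet()
x1 = torch.randn(4, 3072)
x2 = torch.randn(4, 3072)
print(model(x1, x2).size())  # torch.Size([4, 3072])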
