RuntimeError: Given groups=1, weight of size [128, 55, 11, 11], expected input[64, 57, 28, 28] to have 55 channels, but got 57 channels instead

Question

I'm trying to train a Convolutional Pose Model (CPM) using PyTorch. Below you can see the PyTorch Lightning model along with the error I encountered. How do I solve this issue?

PyTorch Lightning model

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import pytorch_lightning as pl


class CPM(pl.LightningModule):
    def __init__(self, k, verbose=False):
        super(CPM, self).__init__()
        self.k = k
        self.verbose = verbose
        self.criterion = nn.MSELoss().cuda()
        self.pool_center = nn.AvgPool2d(kernel_size=9, stride=8, padding=1)

        # Stage 1: produces the initial k + 1 belief maps
        self.conv1_stage1 = nn.Conv2d(3, 128, kernel_size=9, padding=4)
        self.pool1_stage1 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.conv2_stage1 = nn.Conv2d(128, 128, kernel_size=9, padding=4)
        self.pool2_stage1 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.conv3_stage1 = nn.Conv2d(128, 128, kernel_size=9, padding=4)
        self.pool3_stage1 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.conv4_stage1 = nn.Conv2d(128, 32, kernel_size=5, padding=2)
        self.conv5_stage1 = nn.Conv2d(32, 512, kernel_size=9, padding=4)
        self.conv6_stage1 = nn.Conv2d(512, 512, kernel_size=1)
        self.conv7_stage1 = nn.Conv2d(512, self.k + 1, kernel_size=1)

        # Shared feature trunk reused by stages 2-6
        self.conv1_stage2 = nn.Conv2d(3, 128, kernel_size=9, padding=4)
        self.pool1_stage2 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.conv2_stage2 = nn.Conv2d(128, 128, kernel_size=9, padding=4)
        self.pool2_stage2 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.conv3_stage2 = nn.Conv2d(128, 128, kernel_size=9, padding=4)
        self.pool3_stage2 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

        # Stage 2: refines [32 features, previous belief maps, center map]
        self.conv4_stage2 = nn.Conv2d(128, 32, kernel_size=5, padding=2)
        self.Mconv1_stage2 = nn.Conv2d(32 + self.k + 2, 128, kernel_size=11, padding=5)
        self.Mconv2_stage2 = nn.Conv2d(128, 128, kernel_size=11, padding=5)
        self.Mconv3_stage2 = nn.Conv2d(128, 128, kernel_size=11, padding=5)
        self.Mconv4_stage2 = nn.Conv2d(128, 128, kernel_size=1, padding=0)
        self.Mconv5_stage2 = nn.Conv2d(128, self.k + 1, kernel_size=1, padding=0)

        # Stage 3
        self.conv1_stage3 = nn.Conv2d(128, 32, kernel_size=5, padding=2)
        self.Mconv1_stage3 = nn.Conv2d(32 + self.k + 2, 128, kernel_size=11, padding=5)
        self.Mconv2_stage3 = nn.Conv2d(128, 128, kernel_size=11, padding=5)
        self.Mconv3_stage3 = nn.Conv2d(128, 128, kernel_size=11, padding=5)
        self.Mconv4_stage3 = nn.Conv2d(128, 128, kernel_size=1, padding=0)
        self.Mconv5_stage3 = nn.Conv2d(128, self.k + 1, kernel_size=1, padding=0)

        # Stage 4
        self.conv1_stage4 = nn.Conv2d(128, 32, kernel_size=5, padding=2)
        self.Mconv1_stage4 = nn.Conv2d(32 + self.k + 2, 128, kernel_size=11, padding=5)
        self.Mconv2_stage4 = nn.Conv2d(128, 128, kernel_size=11, padding=5)
        self.Mconv3_stage4 = nn.Conv2d(128, 128, kernel_size=11, padding=5)
        self.Mconv4_stage4 = nn.Conv2d(128, 128, kernel_size=1, padding=0)
        self.Mconv5_stage4 = nn.Conv2d(128, self.k + 1, kernel_size=1, padding=0)

        # Stage 5
        self.conv1_stage5 = nn.Conv2d(128, 32, kernel_size=5, padding=2)
        self.Mconv1_stage5 = nn.Conv2d(32 + self.k + 2, 128, kernel_size=11, padding=5)
        self.Mconv2_stage5 = nn.Conv2d(128, 128, kernel_size=11, padding=5)
        self.Mconv3_stage5 = nn.Conv2d(128, 128, kernel_size=11, padding=5)
        self.Mconv4_stage5 = nn.Conv2d(128, 128, kernel_size=1, padding=0)
        self.Mconv5_stage5 = nn.Conv2d(128, self.k + 1, kernel_size=1, padding=0)

        # Stage 6
        self.conv1_stage6 = nn.Conv2d(128, 32, kernel_size=5, padding=2)
        self.Mconv1_stage6 = nn.Conv2d(32 + self.k + 2, 128, kernel_size=11, padding=5)
        self.Mconv2_stage6 = nn.Conv2d(128, 128, kernel_size=11, padding=5)
        self.Mconv3_stage6 = nn.Conv2d(128, 128, kernel_size=11, padding=5)
        self.Mconv4_stage6 = nn.Conv2d(128, 128, kernel_size=1, padding=0)
        self.Mconv5_stage6 = nn.Conv2d(128, self.k + 1, kernel_size=1, padding=0)

    def _stage1(self, image):
        x = self.pool1_stage1(F.relu(self.conv1_stage1(image)))
        x = self.pool2_stage1(F.relu(self.conv2_stage1(x)))
        x = self.pool3_stage1(F.relu(self.conv3_stage1(x)))
        x = F.relu(self.conv4_stage1(x))
        x = F.relu(self.conv5_stage1(x))
        x = F.relu(self.conv6_stage1(x))
        x = self.conv7_stage1(x)
        return x

    def _middle(self, image):
        x = self.pool1_stage2(F.relu(self.conv1_stage2(image)))
        x = self.pool2_stage2(F.relu(self.conv2_stage2(x)))
        x = self.pool3_stage2(F.relu(self.conv3_stage2(x)))
        return x

    def _stage2(self, pool3_stage2_map, conv7_stage1_map, pool_center_map):
        x = F.relu(self.conv4_stage2(pool3_stage2_map))
        x = torch.cat([x, conv7_stage1_map, pool_center_map], dim=1)
        x = F.relu(self.Mconv1_stage2(x))
        x = F.relu(self.Mconv2_stage2(x))
        x = F.relu(self.Mconv3_stage2(x))
        x = F.relu(self.Mconv4_stage2(x))
        x = self.Mconv5_stage2(x)
        return x

    def _stage3(self, pool3_stage2_map, Mconv5_stage2_map, pool_center_map):
        x = F.relu(self.conv1_stage3(pool3_stage2_map))
        x = torch.cat([x, Mconv5_stage2_map, pool_center_map], dim=1)
        x = F.relu(self.Mconv1_stage3(x))
        x = F.relu(self.Mconv2_stage3(x))
        x = F.relu(self.Mconv3_stage3(x))
        x = F.relu(self.Mconv4_stage3(x))
        x = self.Mconv5_stage3(x)
        return x

    def _stage4(self, pool3_stage2_map, Mconv5_stage3_map, pool_center_map):
        x = F.relu(self.conv1_stage4(pool3_stage2_map))
        x = torch.cat([x, Mconv5_stage3_map, pool_center_map], dim=1)
        x = F.relu(self.Mconv1_stage4(x))
        x = F.relu(self.Mconv2_stage4(x))
        x = F.relu(self.Mconv3_stage4(x))
        x = F.relu(self.Mconv4_stage4(x))
        x = self.Mconv5_stage4(x)
        return x

    def _stage5(self, pool3_stage2_map, Mconv5_stage4_map, pool_center_map):
        x = F.relu(self.conv1_stage5(pool3_stage2_map))
        x = torch.cat([x, Mconv5_stage4_map, pool_center_map], dim=1)
        x = F.relu(self.Mconv1_stage5(x))
        x = F.relu(self.Mconv2_stage5(x))
        x = F.relu(self.Mconv3_stage5(x))
        x = F.relu(self.Mconv4_stage5(x))
        x = self.Mconv5_stage5(x)
        return x

    def _stage6(self, pool3_stage2_map, Mconv5_stage5_map, pool_center_map):
        x = F.relu(self.conv1_stage6(pool3_stage2_map))
        x = torch.cat([x, Mconv5_stage5_map, pool_center_map], dim=1)
        x = F.relu(self.Mconv1_stage6(x))
        x = F.relu(self.Mconv2_stage6(x))
        x = F.relu(self.Mconv3_stage6(x))
        x = F.relu(self.Mconv4_stage6(x))
        x = self.Mconv5_stage6(x)
        return x

    def forward(self, image, center_map):
        pool_center_map = self.pool_center(center_map)
        conv7_stage1_map = self._stage1(image)
        pool3_stage2_map = self._middle(image)
        print(pool3_stage2_map.shape, conv7_stage1_map.shape, pool_center_map.shape)
        Mconv5_stage2_map = self._stage2(pool3_stage2_map, conv7_stage1_map,
                                         pool_center_map)
        Mconv5_stage3_map = self._stage3(pool3_stage2_map, Mconv5_stage2_map,
                                         pool_center_map)
        Mconv5_stage4_map = self._stage4(pool3_stage2_map, Mconv5_stage3_map,
                                         pool_center_map)
        Mconv5_stage5_map = self._stage5(pool3_stage2_map, Mconv5_stage4_map,
                                         pool_center_map)
        Mconv5_stage6_map = self._stage6(pool3_stage2_map, Mconv5_stage5_map,
                                         pool_center_map)
        return conv7_stage1_map, Mconv5_stage2_map, Mconv5_stage3_map, Mconv5_stage4_map, Mconv5_stage5_map, Mconv5_stage6_map

    def training_step(self, batch, batch_idx):
        imgs, kpts, centermaps = batch
        heats = self.forward(imgs, centermaps)
        losses = torch.tensor([self.criterion(heat, kpts) for heat in heats])
        loss = torch.sum(losses)
        self.log('train_loss', loss)
        return loss

    def validation_step(self, batch, batch_idx):
        imgs, kpts, centermaps = batch
        heats = self.forward(imgs, centermaps)
        losses = torch.tensor([self.criterion(heat, kpts) for heat in heats])
        loss = torch.sum(losses)
        self.log('val_loss', loss)
        return loss

    def test_step(self, batch, batch_idx):
        imgs, kpts, centermaps = batch
        heat = self.forward(imgs, centermaps)[-1]
        return self.get_kpts(heat)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.001)

    def train_dataloader(self):
        IMG_SIZE = 244
        HP = HandPose(df, transform=transform, testSize=0, imgSize=IMG_SIZE, subset='train')
        Data = data.DataLoader(HP, shuffle=True, batch_size=64, num_workers=4)
        return Data
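
For reference, the shape mismatch can be reproduced without the dataset. This is a minimal sketch of my own, not code from the question; it assumes 224x224 inputs (the 28x28 maps in the traceback correspond to three stride-2 pools over 224) and a 3-channel center map:

import torch

model = CPM(k=21)                          # 21 keypoints, as in FreiHAND
imgs = torch.randn(1, 3, 224, 224)         # a single image keeps the check cheap
centermaps = torch.randn(1, 3, 224, 224)   # 3-channel center map (assumption)
model.forward(imgs, centermaps)
# -> RuntimeError: ... expected input[1, 57, 28, 28] to have 55 channels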

Dataset

As for the dataset, I'm using the FreiHAND dataset from Kaggle. I then used a torch DataLoader to load it; each batch returns the original image, the keypoints, and a centermap, which is an image with the localized keypoints.
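
A quick way to verify what the loader actually yields is to inspect one batch; a minimal sketch, assuming df, transform and the HandPose dataset from train_dataloader above are in scope (k = 21 matches FreiHAND's 21 hand keypoints):

imgs, kpts, centermaps = next(iter(CPM(k=21).train_dataloader()))
print(imgs.shape, kpts.shape, centermaps.shape)
# If centermaps.shape[1] is 3 (an RGB image) rather than 1 (a single
# heatmap), the concatenation in _stage2 carries two extra channels,
# which is exactly the 57 vs. 55 mismatch in the error below.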

Error

/tmp/ipykernel_27/995635534.py in _stage2(self, pool3_stage2_map, conv7_stage1_map, pool_center_map)
     89         x = F.relu(self.conv4_stage2(pool3_stage2_map))
     90         x = torch.cat([x, conv7_stage1_map, pool_center_map], dim=1)
---> 91         x = F.relu(self.Mconv1_stage2(x))
     92         x = F.relu(self.Mconv2_stage2(x))
     93         x = F.relu(self.Mconv3_stage2(x))

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
    445 
    446     def forward(self, input: Tensor) -> Tensor:
--> 447         return self._conv_forward(input, self.weight, self.bias)
    448 
    449 class Conv3d(_ConvNd):

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    442                             _pair(0), self.dilation, self.groups)
    443         return F.conv2d(input, weight, bias, self.stride,
--> 444                         self.padding, self.dilation, self.groups)
    445 
    446     def forward(self, input: Tensor) -> Tensor:

RuntimeError: Given groups=1, weight of size [128, 55, 11, 11], expected input[64, 57, 28, 28] to have 55 channels, but got 57 channels instead

Answer 1

Score: 3

The error stack trace lets you locate the problem: it appears to come from the number of input channels Mconv1_stage2's filters expect.

There is a mismatch between the concatenated map you are feeding it, [x, conv7_stage1_map, pool_center_map], and the number of input channels the layer was built with (32 + self.k + 2). The error's weight size of 55 implies k = 21, so the layer expects 32 + 21 + 2 = 55 channels, but the concatenation carries 57: x has 32, conv7_stage1_map has k + 1 = 22, so pool_center_map is evidently contributing 3 channels rather than the 1 the layer was sized for. If this is the correct input, then you need to adjust the in_channels of Mconv1_stage2 (and its stage-3 to stage-6 counterparts).
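
To make the fix concrete, here is a minimal sketch (my own illustration, not code from the question). It assumes k = 21 and a 3-channel center map, which is what the arithmetic above suggests:

import torch.nn as nn

k = 21  # 55 = 32 + k + 2 in the error implies k = 21

# Option 1: keep the layer and make the center map single-channel, so the
# concatenation yields 32 + (k + 1) + 1 = 55 channels. E.g. in forward():
#     pool_center_map = pool_center_map[:, :1]   # keep one channel
# (cleaner still: build the centermap as a 1-channel heatmap in the Dataset)

# Option 2: keep the 3-channel center map and widen the layer (and its
# stage-3 to stage-6 counterparts) to 32 + (k + 1) + 3 = 57 input channels:
Mconv1_stage2 = nn.Conv2d(32 + k + 4, 128, kernel_size=11, padding=5)

Of the two, Option 1 is probably cleaner: a center map is conceptually a single localization heatmap, and keeping it single-channel avoids touching five layers.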
