How to use conv1d for a regression task in PyTorch?


Question

I have a dataset of 6022 samples, each with 26 features and one target value. My task is regression, and I want to use a 1D convolutional layer followed by some linear layers. I wrote this:

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # define the convolutional layers
        self.conv1d = nn.Conv1d(in_channels=26, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.maxpool = nn.MaxPool1d(kernel_size=2)
        # define the fully connected layers
        self.fc1 = nn.Linear(in_features=64*13, out_features=256)
        self.fc2 = nn.Linear(in_features=256, out_features=128)
        self.fc3 = nn.Linear(in_features=128, out_features=1)
        # define activation functions
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv1d(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        x = self.relu(x)
        x = self.fc3(x)
        return x

But I get this error message:

RuntimeError: Given groups=1, weight of size [64, 26, 3], expected input[1, 32, 26] to have 26 channels, but got 32 channels instead

How should I rewrite it to avoid the error?

Answer 1

Score: 2


Your input has 32 channels, not 26: in the error's input shape [1, 32, 26], the second dimension (32) is what Conv1d reads as channels. You can either change the number of channels in conv1d, or transpose your input like this:

inputs = inputs.transpose(-1, -2)
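
For example, a minimal shape check (this assumes a tensor shaped like the [1, 32, 26] one from the error message):

import torch

# hypothetical tensor shaped like the one reported in the error
inputs = torch.randn(1, 32, 26)
inputs = inputs.transpose(-1, -2)  # swap the last two dimensions
print(inputs.shape)                # torch.Size([1, 26, 32]): 26 channels, length 32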

Also, you have to pass the tensor to the relu calls and return the output of forward, so the modified version of your model would be:

class custom_model(nn.Module):
  def __init__(self):
    super().__init__()
    # define the convolutional layers
    self.conv1d = nn.Conv1d(in_channels=26, out_channels=64, kernel_size=3, stride=1, padding=1)
    self.maxpool = nn.MaxPool1d(kernel_size=2)
    # define the fully connected layers
    self.fc1 = nn.Linear(in_features=64*16, out_features=256)
    # in_features should be 64*16, not 64*13: the conv keeps the length at 32
    # (kernel_size=3, padding=1) and the maxpool halves it to 16
    self.fc2 = nn.Linear(in_features=256, out_features=128)
    self.fc3 = nn.Linear(in_features=128, out_features=1)
    # define activation functions
    self.relu = nn.ReLU()
  
  def forward(self, x):
    x = self.conv1d(x)
    x = self.relu(x)
    x = self.maxpool(x)
    x = x.view(x.size(0), -1)
    x = self.fc1(x)
    x = self.relu(x) # you need to pass x to relu
    x = self.fc2(x)
    x = self.relu(x)
    x = self.fc3(x)
    return x # you need to return the output
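
A minimal usage sketch for this version (the input shape follows the error message; MSELoss is just one typical choice of regression loss, not something from the question):

import torch
import torch.nn as nn

model = custom_model()
inputs = torch.randn(1, 32, 26).transpose(-1, -2)  # -> (1, 26, 32)
targets = torch.randn(1, 1)                        # one regression target per sample
outputs = model(inputs)                            # -> (1, 1)
loss = nn.MSELoss()(outputs, targets)
print(outputs.shape, loss.item())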

EDIT

If you want to apply the conv over the 26 features themselves, treating each sample as a one-channel sequence:

class custom_model(nn.Module):
  def __init__(self):
    super().__init__()
    # define the convolutional layers
    self.conv1d = nn.Conv1d(in_channels=1, out_channels=64, kernel_size=3, stride=1, padding=1)
    # in_channels=1 instead of 26
    self.maxpool = nn.MaxPool1d(kernel_size=2)
    # define the fully connected layers
    self.fc1 = nn.Linear(in_features=64*13, out_features=256)
    # in_features will be 64*13: the conv keeps the length at 26 and the maxpool halves it to 13
    self.fc2 = nn.Linear(in_features=256, out_features=128)
    self.fc3 = nn.Linear(in_features=128, out_features=1)
    # define activation functions
    self.relu = nn.ReLU()
  
  def forward(self, x):
    x = self.conv1d(x)
    x = self.relu(x)
    x = self.maxpool(x)
    print(x.shape)  # debug: should print torch.Size([batch_size, 64, 13])
    x = x.view(x.size(0), -1)
    x = self.fc1(x)
    x = self.relu(x) # you need to pass x to relu
    x = self.fc2(x)
    x = self.relu(x)
    x = self.fc3(x)
    return x # you need to return the output

And when calling the model, you should add one dimension, as follows:

model(inputs.unsqueeze(1))

This means that your input will be of shape (batch_size, 1, 26).
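
And a matching usage sketch (assuming a float batch of 32 samples with 26 features; the print inside forward will also report the pooled shape):

import torch

model = custom_model()
inputs = torch.randn(32, 26)           # hypothetical batch: 32 samples, 26 features
outputs = model(inputs.unsqueeze(1))   # input shape (32, 1, 26)
print(outputs.shape)                   # torch.Size([32, 1])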
