How to add additional layers to a CNN model in PyTorch?

Question

I have a question related to neural networks. I am a beginner when it comes to specifying model parameters. I found this example of a DNA sequence model built in PyTorch, which I want to improve. In the example, a basic CNN model is deployed, and now I want to deploy a deeper model with more layers.

import torch
import torch.nn as nn

# basic CNN model
# These aren't optimized models, just something to start with; just testing PyTorch in the context of DNA.
class DNA_CNN(nn.Module):
    def __init__(self,
                 seq_len,
                 num_filters=32,
                 kernel_size=3):
        super().__init__()
        self.seq_len = seq_len
        
        self.conv_net = nn.Sequential(
            # 4 is for the 4 nucleotides
            nn.Conv1d(4, num_filters, kernel_size=kernel_size),
            nn.ReLU(inplace=True),
            nn.Flatten(),
            nn.Linear(num_filters*(seq_len-kernel_size+1), 1)
        ) 

    def forward(self, xb):
        # input arrives as batch_size x seq_len x 4;
        # permute to batch_size x 4 channels x seq_len (channels first for Conv1d)
        xb = xb.permute(0, 2, 1)
        
        #print(xb.shape)
        out = self.conv_net(xb)
        return out
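
For reference, a minimal smoke test of this model (the batch size and sequence length below are made up for illustration; real inputs would be one-hot encoded DNA):

seq_len = 100
model = DNA_CNN(seq_len)
idx = torch.randint(0, 4, (8, seq_len))                       # random nucleotide indices
xb = torch.nn.functional.one_hot(idx, num_classes=4).float()  # batch_size x seq_len x 4
out = model(xb)
print(out.shape)  # torch.Size([8, 1])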

Answer 1

Score: 1

Here is modular code to do so, using 'same' padding to keep the sequence length unchanged (zeros are added at the borders before each convolution is applied):

import torch
import torch.nn as nn
from typing import List

class DNA_CNN(nn.Module):
    def __init__(self,
                 seq_len: int,
                 num_filters: List[int] = [32, 64],
                 kernel_size: int = 3):
        super().__init__()
        self.seq_len = seq_len
        # CNN module
        self.conv_net = nn.Sequential()
        num_filters = [4] + num_filters
        for idx in range(len(num_filters) - 1):
            self.conv_net.add_module(
                f"conv_{idx}",
                nn.Conv1d(num_filters[idx], num_filters[idx + 1],
                          kernel_size=kernel_size, padding='same')
            )
            self.conv_net.add_module(f"relu_{idx}", nn.ReLU(inplace=True))
        self.conv_net.add_module("flatten", nn.Flatten())
        self.conv_net.add_module(
            "linear",
            nn.Linear(num_filters[-1]*seq_len, 1)
        )
        
    def forward(self, xb: torch.Tensor):
        """Forward pass."""
        xb = xb.permute(0, 2, 1) 
        out = self.conv_net(xb)
        return out
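
As a quick sanity check (the sizes here are arbitrary, for illustration only), passing a longer filter list now stacks one convolution block per entry:

model = DNA_CNN(seq_len=100, num_filters=[32, 64, 128])  # three conv + ReLU blocks
xb = torch.randn(8, 100, 4)   # stand-in batch; real inputs would be one-hot encoded
print(model(xb).shape)        # torch.Size([8, 1])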

To change the kernel size per layer, you can pass a list as kernel_size and simply use kernel_size=kernel_size[idx] in the convolution, as in the sketch below.
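
A minimal sketch of that variant, assuming kernel_size has the same length as num_filters (the class name DNA_CNN_PerLayerKernels is made up for illustration):

class DNA_CNN_PerLayerKernels(nn.Module):
    def __init__(self,
                 seq_len: int,
                 num_filters: List[int] = [32, 64],
                 kernel_size: List[int] = [3, 5]):
        super().__init__()
        self.seq_len = seq_len
        self.conv_net = nn.Sequential()
        num_filters = [4] + num_filters
        for idx in range(len(num_filters) - 1):
            self.conv_net.add_module(
                f"conv_{idx}",
                # one kernel size per convolution, indexed alongside the filters
                nn.Conv1d(num_filters[idx], num_filters[idx + 1],
                          kernel_size=kernel_size[idx], padding='same')
            )
            self.conv_net.add_module(f"relu_{idx}", nn.ReLU(inplace=True))
        self.conv_net.add_module("flatten", nn.Flatten())
        # 'same' padding keeps the length, so the flattened size is unchanged
        self.conv_net.add_module("linear", nn.Linear(num_filters[-1] * seq_len, 1))

    def forward(self, xb: torch.Tensor):
        xb = xb.permute(0, 2, 1)
        return self.conv_net(xb)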

If for some reason you want to remove the padding, you can drop padding='same' from the convolutions and change the Linear definition to match the new shape:

nn.Linear(num_filters[-1] * (seq_len - (len(num_filters) - 1) * (kernel_size - 1)), 1)
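
For example (assumed numbers), with the defaults num_filters=[32, 64] (so [4, 32, 64] after prepending the input channels, i.e. two convolutions), kernel_size=3, and seq_len=100:

# Each unpadded convolution shortens the sequence by kernel_size - 1 = 2,
# so the final length is 100 - 2 * (3 - 1) = 96, and the last layer becomes:
nn.Linear(64 * 96, 1)   # num_filters[-1] * shortened seq_len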
