# How to convert a PyTorch model architecture string into a tree data structure?


## Goal / Problem

I am currently trying to challenge myself and wanted to see if I could reverse-engineer a PyTorch model's architecture based on its output string alone. I have gotten most of the way through, but I keep getting locked in recursion hell (things not working as they should recursively) and end up restarting from scratch. It has been around 3 days of trying to tackle this idea and I can't seem to figure out a way to do it.

## What I have tried

- Parsing each line into its components: `offset`, `name`, `layer`
- Using the line's `offset` value to control the flow of the recursion
- Using the `)` characters to know when a block has finished, to control the flow of the recursion

Each way I try this, I end up with some issue, and it is hard to figure out how to fix it because the method is recursive and the structure of PyTorch models varies across blocks, sequential layers, etc.

I think parsing each line into its components was a great start because it simplifies the problem. I developed a basic tree data structure that fits what I will be using it for.

I want to do this in JavaScript because I want to be able to submit a PyTorch model's string as a form input and output the tree on a website. I think it would be pretty neat (long-term goal :) ).

At least the line-parsing code works great.
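
To make the intermediate representation concrete, here is roughly what that per-line parse is meant to produce for a few representative lines of the ResNet sample further down (a hand-written sketch based on the `parseLine` function below, not captured output):

```js
[
  { offset: 0, name: 'conv1',  type: 'Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)' },
  { offset: 0, name: 'layer1', type: 'Sequential(' },
  { offset: 1, name: '0',      type: 'BasicBlock(' },
  { offset: 2, name: 'conv1',  type: 'Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)' }
]
```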

## Code

### Tree data structure

```js
class TreeNode {
  constructor(name, type) {
    this.name = name; // Layer name - e.g. 'conv1', 'conv2', 'bn2', 'fc', etc.
    this.type = type; // Layer type (after the ':') - e.g. 'ReLU(inplace=True)', etc.
    this.children = [];
    this.parent = null;
  }

  addChild(node) {
    if (!(node instanceof TreeNode)) {
      throw new Error('Invalid node. Node must be an instance of TreeNode.');
    }

    node.parent = this;
    this.children.push(node);
  }

  removeChild(node) {
    const index = this.children.indexOf(node);
    if (index !== -1) {
      this.children.splice(index, 1);
      node.parent = null;
    }
  }

  getChild(index) {
    if (index < 0 || index >= this.children.length) {
      throw new Error('Index out of bounds.');
    }

    return this.children[index];
  }

  getChildren() {
    return this.children;
  }

  getParent() {
    return this.parent;
  }

  getName() {
    return this.name;
  }

  getType() {
    return this.type;
  }
}
```
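
A quick usage sketch of this class (node names invented for illustration), just to show how `addChild` wires up `parent` and `children`:

```js
// Hand-built mini tree to sanity-check the structure.
const root = new TreeNode('model');
const layer1 = new TreeNode('layer1', 'Sequential(');
const conv1 = new TreeNode('conv1', 'Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))');

root.addChild(layer1);
layer1.addChild(conv1);

console.log(conv1.getParent().getName()); // 'layer1'
console.log(root.getChildren().length);   // 1
```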

### Parsing code

```js
function parseLine(line) {
  const matches = line.match(/^(\s*)(?:\()?(.*?)(?:\))?(:\s+)(.*)$/)
  if (matches) { // offset: count the leading spaces (minus the first 4) and divide by 2 to get the depth level
      return {offset: matches[1].substring(4).length/2, name: matches[2], type: matches[4]};
  }
}

function parseInput(input) {
  const linesParsed = []
  const lines = input.split("\n")
  lines.shift() // drop the first line (the bare "ResNet(" header)
  lines.forEach(line => {
    const parsedLine = parseLine(line)
    if (parsedLine != null) {
      linesParsed.push(parsedLine) // reuse the parsed result instead of parsing the line twice
    }
  });
  console.log(linesParsed)
  const rootNode = new TreeNode('model');
  generateTree(linesParsed, rootNode);
  return rootNode;
}

function generateTree(linesParsed, node) {
  // Recursive fun here ...
}
```
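
To make the offset arithmetic concrete, here is a hedged example of what `parseLine` should return for one deeply nested line of the sample below (assuming the same indentation as the sample):

```js
// 8 leading spaces → substring(4) leaves 4 spaces → 4 / 2 = depth 2
parseLine('        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)');
// → { offset: 2, name: 'conv1', type: 'Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)' }
```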

### Sample string input (good ol' ResNet)

```
ResNet(
    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer2): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer3): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer4): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
    (fc): Linear(in_features=512, out_features=10, bias=True)
  )
```
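
For reference, the kind of tree I am hoping to get back for the first few layers would look roughly like this (a hand-written sketch of the target shape, not program output, with types abbreviated):

```js
({
  name: 'model',
  children: [
    { name: 'conv1',   type: 'Conv2d(3, 64, ...)' },
    { name: 'bn1',     type: 'BatchNorm2d(64, ...)' },
    { name: 'relu',    type: 'ReLU(inplace=True)' },
    { name: 'maxpool', type: 'MaxPool2d(...)' },
    { name: 'layer1',  type: 'Sequential(', children: [
      { name: '0', type: 'BasicBlock(', children: [ /* conv1, bn1, relu, conv2, bn2 */ ] },
      { name: '1', type: 'BasicBlock(', children: [ /* same as block 0 */ ] },
    ]},
    // ... layer2-layer4, avgpool, fc
  ],
})
```
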
## Answer 1 (score: 2)

I don't think recursion is necessary here. A simple (not really) loop should be all you need (I left out the getters for simplicity's sake, feel free to add them back in):

```js
function generateTree(linesParsed, node) {
  let lastOffset = -1 // this helps us track the steps to move
  for (const line of linesParsed) {
    const curr = new TreeNode(line.name, line.type)

    if (line.offset < lastOffset) {
      // go back up
      const steps = lastOffset - line.offset
      let target = node.parent
      for (let i = 0; i < steps; i++) target = target.parent
      
      target.addChild(curr)
    } else if (line.offset === lastOffset) // sibling
      node.parent.addChild(curr)
    else node.addChild(curr) // child
    lastOffset = line.offset
    node = curr // store reference to last node
  }
}
```
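
To see the result, one option is to walk the finished tree; here is a small, hypothetical `printTree` helper (not part of the answer's code) that prints each node indented by its depth:

```js
// Hypothetical helper: pretty-print the tree built by parseInput/generateTree.
function printTree(node, depth = 0) {
  const label = node.type ? `${node.name}: ${node.type}` : node.name;
  console.log('  '.repeat(depth) + label);
  for (const child of node.children) printTree(child, depth + 1);
}

// Usage sketch (modelString would hold the PyTorch repr shown in the demo below):
// printTree(parseInput(modelString));
```

The key design choice is that `lastOffset` plus a reference to the last node are enough to decide, for each parsed line, whether it is a child (offset increased), a sibling (same offset), or belongs to an ancestor (offset decreased by `steps` levels).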

Demo:

```js
class TreeNode {
  constructor(name, type) {
    this.name = name; // Layer name - ex. 'conv1', 'conv2', 'bn2', 'fc' etc.
    this.type = type; // Layer type (After :) - ex. 'ReLU(inplace=True)' etc.
    this.children = [];
    this.parent = null;
  }

  addChild(node) {
    if (!(node instanceof TreeNode)) {
      throw new Error('Invalid node. Node must be an instance of TreeNode.');
    }

    node.parent = this;
    this.children.push(node);
  }

  removeChild(node) {
    const index = this.children.indexOf(node);
    if (index !== -1) {
      this.children.splice(index, 1);
      node.parent = null;
    }
  }
}

function parseLine(line) {
  const matches = line.match(/^(\s*)(?:\()?(.*?)(?:\))?(:\s+)(.*)$/)
  if (matches) { // offset: count leading spaces (minus the first 4) and divide by 2 to get the depth level
    return { offset: matches[1].substring(4).length / 2, name: matches[2], type: matches[4] };
  }
}

function parseInput(input) {
  const linesParsed = []
  const lines = input.split("\n")
  lines.shift() // drop the "ResNet(" header line
  lines.forEach(line => {
    const parsedLine = parseLine(line)
    if (parsedLine != null) {
      linesParsed.push(parsedLine)
    }
  });
  const rootNode = new TreeNode('model');
  generateTree(linesParsed, rootNode);
  return rootNode;
}

function generateTree(linesParsed, node) {
  let lastOffset = -1
  for (const line of linesParsed) {
    const curr = new TreeNode(line.name, line.type)
    if (line.offset < lastOffset) {
      const steps = lastOffset - line.offset
      let target = node.parent
      for (let i = 0; i < steps; i++) target = target.parent
      
      target.addChild(curr)
    } else if (line.offset === lastOffset)
      node.parent.addChild(curr)
    else node.addChild(curr)
    lastOffset = line.offset
    node = curr
  }
}

const result = parseInput(`ResNet(
    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer2): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer3): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer4): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
    (fc): Linear(in_features=512, out_features=10, bias=True)
  )`)

// to prevent circular reference
function removeParent(node) {
  delete node.parent
  // just because unclosed parens are horrid
  if (node?.type?.endsWith?.("(")) node.type = node.type.slice(0, -1)
  node.children.forEach(removeParent)
  if (!node.children.length) delete node.children
}
removeParent(result)
document.querySelector("pre").innerText = JSON.stringify(result, null, 2)
```

```html
<pre></pre>
```
