Is there a name for a convolution that has the same size as the input?
Question


I have an input tensor of dimensions (C, H, W) and a kernel of size (H, W).

It feels like there should be a name for this, but I am unaware of one and can't get Google or ChatGPT to surface it. Is there a name for this?

Answer 1

Score: 0

Turns out I am stupid, this is just a fully connected layer.

ChatGPT had me confused about how convolutions work, but yes: every filter has a different kernel for each input channel, so this "convolution" has one weight for every connection from an input neuron to an output neuron, which is effectively a fully connected layer in disguise.
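A minimal NumPy sketch makes the equivalence concrete (all shapes and the filter count F below are made-up example values, not from the question):

```python
import numpy as np

# A "convolution" whose kernel covers the entire (H, W) input, with one
# (C, H, W) kernel per filter, computes a single dot product per filter --
# which is exactly what a fully connected layer does on the flattened input.
C, H, W, F = 3, 4, 5, 2                      # channels, height, width, filters
rng = np.random.default_rng(0)

x = rng.standard_normal((C, H, W))           # input tensor
kernels = rng.standard_normal((F, C, H, W))  # one full-size kernel per filter

# Full-size "convolution": only one valid position, so one output per filter.
conv_out = np.array([(x * kernels[f]).sum() for f in range(F)])

# Equivalent fully connected layer: flatten the input, reshape the kernels
# into an (F, C*H*W) weight matrix, and take a matrix-vector product.
fc_out = kernels.reshape(F, -1) @ x.ravel()

assert np.allclose(conv_out, fc_out)
```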


Answer 2

Score: 0

My uni professors call it a same convolution, but there is no universally accepted name for this operator in the deep-learning community.

Regarding your answer, a convolutional layer and a fully connected one are two different things, and they are not related by the "same convolution" operator. For example, you can obtain the same final shape using this convolutional layer:

  • kernel of 3x3
  • stride = 1
  • dilation = 1
  • padding = 1

Here is the exact formula: out = ⌊(in + 2·padding − dilation·(kernel − 1) − 1) / stride⌋ + 1.
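That output-size formula (the floor-division convention used by most deep-learning frameworks) can be sanity-checked with a tiny helper; the input size 32 below is just an example:

```python
def conv_out_size(n, kernel, stride=1, padding=0, dilation=1):
    """Spatial output size of a convolution along one dimension
    (the floor-division convention used by most DL frameworks)."""
    return (n + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# A 3x3 kernel with stride 1, dilation 1 and padding 1 keeps the size:
assert conv_out_size(32, kernel=3, stride=1, padding=1) == 32

# Extreme case: a kernel as large as the input, with no padding,
# yields exactly one output "pixel" per output channel.
assert conv_out_size(32, kernel=32) == 1
```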

In a convolutional layer, you have a different set of weights for each output channel you want, and each "pixel" in the output image depends only on the neighborhood covered by the kernel.
In a fully connected layer, each unit (or neuron) has its own set of weights.

Think of the extreme case in which the kernel height and width are the same as the input image's, with stride 1, no padding, and the default dilation of 1. In this case, you obtain 1 output pixel for each output channel you want, and that pixel depends on the whole image.

I know that convolution can be a little confusing at first, but I hope this is useful for you.


huangapple
  • Published on 2023-03-03 23:36:04
  • Please keep the original link when reposting: https://go.coder-hub.com/75629075.html