For ReLU implemented by PyTorch, how is the derivative at zero handled?


Question


ReLU is not differentiable at 0, but PyTorch's implementation has to handle this case somehow.
So is the derivative at 0 set to 0 by default?

I tried setting the weights and bias (the input of the ReLU) to zero during backpropagation; the gradient of the weights is 0, but it is not zero for the last conv layer in the residual block.
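For reference, the behaviour at exactly zero can be checked directly with autograd. A minimal sketch (the input values here are arbitrary examples, not taken from the question):

```python
import torch

# Check what autograd returns for ReLU at an input of exactly zero.
x = torch.tensor([-1.0, 0.0, 1.0], requires_grad=True)
y = torch.relu(x)
y.sum().backward()
print(x.grad)  # tensor([0., 0., 1.]) -> the entry for x = 0 is 0
```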

Answer 1

Score: 1


PyTorch implements the derivative of ReLU at x = 0 by outputting zero. According to this article:

> Even though the ReLU activation function is non-differentiable at 0,
> autograd libraries such as PyTorch or TensorFlow implement
> its derivative with ReLU'(0) = 0.

The article goes on to elaborate on the differences in outcomes between using ReLU'(0) = 0 and ReLU'(0) = 1, and notes that the effect is stronger at lower numerical precision.
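To illustrate the comparison the article makes, the backward pass can be overridden with a custom `torch.autograd.Function` that uses ReLU'(0) = 1 instead. This is a sketch for illustration only; the class name `ReLUGradOneAtZero` is made up and is not part of PyTorch:

```python
import torch

class ReLUGradOneAtZero(torch.autograd.Function):
    """ReLU variant whose backward pass uses ReLU'(0) = 1 instead of 0."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Let the gradient pass wherever x >= 0, i.e. treat ReLU'(0) as 1.
        return grad_output * (x >= 0).to(grad_output.dtype)

x = torch.tensor(0.0, requires_grad=True)

torch.relu(x).backward()
print(x.grad)  # tensor(0.) -> built-in ReLU uses ReLU'(0) = 0

x.grad = None
ReLUGradOneAtZero.apply(x).backward()
print(x.grad)  # tensor(1.) -> this variant uses ReLU'(0) = 1
```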
