For ReLU implemented by PyTorch, how is the derivative at zero handled?
Question
ReLU is not differentiable at 0, but PyTorch's implementation has to handle this case somehow.
So, is the derivative at 0 set to 0 by default?
I tried setting the weights and bias (the input of ReLU) to zero and checked the gradients during backpropagation: the gradient of the weights is 0, but it is not zero for the last conv layer in the residual block.
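For reference, here is a minimal sketch of the kind of setup described above (assuming a basic two-convolution residual block with 4 channels and an 8x8 input; the actual model may differ):

```python
# Hypothetical reproduction of the experiment: zero all conv weights/biases
# in a simple residual block, backprop, and see which gradients remain zero.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels=4):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = torch.relu(self.conv1(x))  # with zeroed params, conv1 outputs 0, so ReLU sees 0
        out = self.conv2(out)            # conv2 output is then 0 as well
        return torch.relu(out + x)       # skip connection adds the input back

block = ResBlock()
with torch.no_grad():
    for p in block.parameters():
        p.zero_()                        # set every weight and bias to zero

x = torch.randn(1, 4, 8, 8)
block(x).sum().backward()

# Report which parameter gradients are exactly zero after backprop.
for name, p in block.named_parameters():
    print(name, "all-zero grad:", bool((p.grad == 0).all()))
```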
Answer 1
Score: 1
PyTorch implements the derivative of ReLU at x = 0 by outputting zero. According to this article:
> Even though the ReLU activation function is non-differentiable at 0,
> autograd libraries such as PyTorch or TensorFlow implement
> its derivative with ReLU'(0) = 0.
The article also goes on to elaborate on the differences in outcomes between using ReLU'(0) = 0 and ReLU'(0) = 1, and notes that the effect is stronger when the numbers are in lower precision.
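This is straightforward to check directly with autograd. The snippet below (not from the article, just a minimal sanity check) backpropagates through torch.relu and prints the gradient at x = 0:

```python
# Minimal check of the claim above: autograd's gradient for ReLU at x = 0.
import torch

x = torch.tensor([-1.0, 0.0, 1.0], requires_grad=True)
torch.relu(x).sum().backward()  # each element's gradient is ReLU'(x_i)

print(x.grad)  # tensor([0., 0., 1.]) -> the middle entry shows ReLU'(0) = 0
```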