What is the difference between Logistic Regression and Single Neuron Perceptron?

Question

Both seem to do the same thing; I wanted to know if there is any difference between the two.

Answer 1

Score: 3

If the single neuron perceptron has a sigmoid activation function, then there is no difference.

In fact, I think Andrew Ng gives logistic regression as his first example of a neural network in his Coursera course.
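
To make the equivalence concrete, here is a minimal Python sketch (assuming NumPy and scikit-learn are available; the variable names are my own) that fits an ordinary logistic regression and then reproduces its predicted probabilities with a single sigmoid neuron using the learned weights and bias:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy binary classification data
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Fit an ordinary logistic regression model
clf = LogisticRegression().fit(X, y)

# Forward pass of a single neuron with sigmoid activation,
# reusing the weights w and bias b learned by logistic regression
w, b = clf.coef_.ravel(), clf.intercept_
z = X @ w + b                    # summation of products: w^T x + b
p = 1.0 / (1.0 + np.exp(-z))     # sigmoid activation

# Both produce the same probabilities for class 1
print(np.allclose(p, clf.predict_proba(X)[:, 1]))  # True
```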

Answer 2

Score: 0

Let us take the Training and the Prediction aspects of Logistic Regression and Single Neuron Perceptron to understand where they are the same and where they are different.

Training

  • Logistic Regression: It minimizes the log-loss.
  • Single Neuron Perceptron: It can minimize either log-loss or hinge loss. The architecture of a single neuron perceptron provides the flexibility to change the loss function, provided it is differentiable (a sketch of both losses follows this list).
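
As a rough sketch of what the two training objectives look like for a single neuron (the function names below are illustrative, not taken from any particular library):

```python
import numpy as np

def log_loss(w, b, x, y):
    """Binary cross-entropy for one example, with label y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))     # sigmoid of w^T x + b
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def hinge_loss(w, b, x, y):
    """Hinge loss for one example, with the label recoded to {-1, +1}."""
    y_pm = 2 * y - 1                           # map 0 -> -1, 1 -> +1
    return max(0.0, 1.0 - y_pm * (x @ w + b))  # raw score, no sigmoid
```

Either objective can be minimized over w and b with (sub)gradient descent, which is the flexibility referred to above.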

Prediction

Let w be the weight vector, x the input, b the bias, and y the output of a binary classification, i.e., y ∈ {0, 1}.

Logistic Regression: It has the following 3 operations:

  1. Summation of Products: wᵀx + b
  2. Sigmoid function: σ(wᵀx + b)
  3. Threshold value (τ): If σ(wᵀx + b) >= τ then y = 1, else if σ(wᵀx + b) < τ then y = 0
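
Put together, a minimal Python sketch of these three operations (predict_logistic, tau, and the default threshold of 0.5 are illustrative choices, not part of any library):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_logistic(x, w, b, tau=0.5):
    """Logistic regression prediction: summation, sigmoid, then threshold."""
    z = x @ w + b                  # 1. summation of products: w^T x + b
    p = sigmoid(z)                 # 2. sigmoid gives a value in (0, 1)
    return 1 if p >= tau else 0    # 3. compare against the threshold tau
```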

Single Neuron Perceptron: It has the following 2 operations:

  1. Summation of Products: wᵀx + b
  2. Activation function

Case 1: If the activation function is a threshold (step) function, then if (wᵀx + b) >= τ then y = 1, else if (wᵀx + b) < τ then y = 0.

Case 2: If the activation function is a sigmoid function (σ)

In this case, the output is a value between 0 and 1, which cannot be used directly as a class label.
The sigmoid output has to be compared against a threshold value τ to assign class 0 or 1, i.e.,

If σ(wᵀx + b) >= τ then y = 1, else if σ(wᵀx + b) < τ then y = 0.

This modification has to be made to the single neuron perceptron architecture to make it behave exactly like logistic regression.
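
A minimal sketch covering both cases (perceptron_predict and its parameters are hypothetical names used only for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def perceptron_predict(x, w, b, activation="step", tau=0.5):
    """Single neuron prediction: summation of products, then an activation."""
    z = x @ w + b                      # 1. summation of products: w^T x + b
    if activation == "step":           # Case 1: threshold (step) activation
        return 1 if z >= tau else 0    #    compare the raw score against tau
    if activation == "sigmoid":        # Case 2: sigmoid activation
        p = sigmoid(z)                 #    output in (0, 1), not yet a class label
        return 1 if p >= tau else 0    #    extra threshold makes it match logistic regression
    raise ValueError("unknown activation")
```

With activation="sigmoid", this computes the same decision as the logistic regression sketch above, which is exactly the modification described here.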

These are the main differences and similarities between Logistic Regression and Single Neuron Perceptron.
