How to add a learning rate when calculating backward propagation in PyTorch?


Question

I heard that while doing backward propagation, the weights are updated using the learning rate and the partial derivatives.

But I don't know where to put the learning_rate parameter in the backpropagation code. And I'm curious: if no learning rate is set, what is the default learning rate?

So, here is the code that I want to learn from.


import torch
import torch.nn as nn
import torch.optim as optim

class MyNeuralNetwork(nn.Module):
    def __init__(self):
        super(MyNeuralNetwork, self).__init__()
        layer_1 = nn.Linear(in_features=2, out_features=2, bias=False)
        weight_1 = torch.tensor([[.3, .25], [.4, .35]])  # fixed initial weights for layer 1
        
        layer_1.weight = nn.Parameter(weight_1)
        self.layer1 = nn.Sequential(
            layer_1,
            nn.Sigmoid()
        )
        
        layer_2 = nn.Linear(in_features=2, out_features=2, bias=False)
        weight_2 = torch.tensor([[.45, .4], [.7, .6]])  # fixed initial weights for layer 2
        
        layer_2.weight = nn.Parameter(weight_2)
        self.layer2 = nn.Sequential(
            layer_2,
            nn.Sigmoid()
        )
    
    def forward(self, input):
        output = self.layer1(input)
        output = self.layer2(output)
        
        return output
    
model = MyNeuralNetwork().to("cpu")
print(model)

input = torch.tensor([0.1,0.2]).reshape(1,-1)
target = torch.tensor([0.4,0.6]).reshape(1,-1)

out = model(input)
print(f"output value : {out}")

criterion = nn.MSELoss()
loss = criterion(out, target)
print(f"loss value : {loss}")

model.zero_grad()  # reset the gradients of all parameters to zero
print('↓ weights before backward propagation ↓')
print(model._modules['layer1']._modules['0'].weight)
print(model._modules['layer2']._modules['0'].weight)
print()

loss.backward() # where can I put the learning rate in back propagation?
print('↓ weights after backward propagation ↓')
print(model._modules['layer1']._modules['0'].weight)
print(model._modules['layer2']._modules['0'].weight)


The point of my question is how to apply the learning rate I want when training this model.


Answer 1

Score: 1


The answer is that you need to use the learning rate parameter of the optimizer when you call the optimizer's step function in order to update the weights of your model. Specifically, when you create an optimizer, you need to specify a learning rate like this:

optimizer = optim.SGD(model.parameters(), lr=learning_rate)

Then, in your training/testing loop, you call the optimizer's step function after the backward pass, like this:

optimizer.step()
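
Under the hood, for plain SGD, step() applies the textbook update rule w_new = w_old - lr * dL/dw to every parameter. If you want to see exactly where the learning rate enters, here is a minimal manual equivalent of that update (a sketch only; the value 0.5 is an arbitrary illustrative choice, not a recommendation):

learning_rate = 0.5  # arbitrary illustrative value

with torch.no_grad():  # parameter updates must not be recorded by autograd
    for param in model.parameters():
        param -= learning_rate * param.grad  # w = w - lr * dL/dw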

The step function updates the weights in the model using the learning rate you specified. Note that there is no single default learning rate: each optimizer defines its own (torch.optim.Adam, for example, uses lr=1e-3 by default, while torch.optim.SGD has traditionally required lr to be passed explicitly), so choose it based on your specific needs.
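
Putting it together with the model from the question, one complete update step looks like this (a sketch assuming model, input, target, and criterion from the script above are already defined; lr=0.5 is again only an illustrative choice):

optimizer = optim.SGD(model.parameters(), lr=0.5)  # lr chosen only for illustration

optimizer.zero_grad()          # clear any stale gradients
out = model(input)             # forward pass
loss = criterion(out, target)  # MSE loss against the target
loss.backward()                # compute gradients; weights are unchanged so far
optimizer.step()               # apply w -= lr * grad to every weight

print(model._modules['layer1']._modules['0'].weight)  # now differs from before

Note that loss.backward() on its own never changes the weights; it only fills in each parameter's .grad, which is why the prints in the question show identical values before and after it.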
