Find optimal weights for a multivariate model for classification using machine learning
Question
I am wondering if there are any suggestions on the best approach to find the optimal weights for a classification problem involving multiple variable inputs. I'm currently unclear on how to do this using neural nets or multivariate nonlinear regression in TensorFlow or PyTorch. The formula I'm trying to optimize is below; I want to find the best [w0, w1] for my data [x0, x1, x2, x3].
y = (1 + w0 * x0 + w1 * x1) / (1 + w0 * x2 + w1 * x3)
I am not clear on the approach, and suggestions would be great.
Answer 1
Score: 1
To optimize multiple variables in PyTorch, you can create a tensor whose length equals the number of variables you want to optimize. Make sure the tensor has autograd enabled by setting requires_grad=True; the parameters of a neural net in PyTorch have this set to True by default, but plain tensors default to False. Then write your loss function using indexes into that tensor as the variables, and run gradient descent just as you would for a normal neural net. Here's an example for you:
import torch
from torch.optim import SGD

# Initialize the weights to optimize.
w = torch.randn(2, requires_grad=True)
optimizer = SGD([w], lr=0.01)

# Let's assume we have some data.
# These should be replaced with your actual data.
x0, x1, x2, x3 = torch.randn(4)
y_target = torch.randn(1)

for i in range(1000):  # Number of optimization steps.
    optimizer.zero_grad()
    # Compute y according to the given formula.
    y = (1 + w[0] * x0 + w[1] * x1) / (1 + w[0] * x2 + w[1] * x3)
    # Make y the same shape as y_target
    # (there will be a warning otherwise because y is a scalar).
    y = y.unsqueeze(0)
    # Compute the loss. Here we use mean squared error (MSE) loss.
    loss = torch.nn.functional.mse_loss(y, y_target)
    # Backpropagation.
    loss.backward()
    # Update the weights.
    optimizer.step()
    if i % 100 == 0:  # Print the loss every 100 steps.
        print(f'Iteration {i}, Loss {loss.item()}')

print(f'Optimized weights: {w.data}')
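Since the question frames this as classification, one way to adapt the example is to apply the formula to a whole batch of samples at once, treat y as a logit, and swap the MSE loss for binary cross-entropy. This is a minimal sketch under those assumptions; the random X and labels are hypothetical placeholders for your actual data:

import torch
from torch.optim import SGD

# Hypothetical placeholder data: 100 samples with features [x0, x1, x2, x3]
# and binary class labels. Replace with your actual dataset.
X = torch.randn(100, 4)
labels = torch.randint(0, 2, (100,)).float()

w = torch.randn(2, requires_grad=True)
optimizer = SGD([w], lr=0.01)

for i in range(1000):
    optimizer.zero_grad()
    # Vectorized form of the formula, applied to every sample at once.
    y = (1 + w[0] * X[:, 0] + w[1] * X[:, 1]) / (1 + w[0] * X[:, 2] + w[1] * X[:, 3])
    # Treat y as a logit and use the numerically stable BCE-with-logits loss.
    loss = torch.nn.functional.binary_cross_entropy_with_logits(y, labels)
    loss.backward()
    optimizer.step()

Whether treating y as a logit is the right link for your problem depends on how your labels are defined; the optimizer loop itself is unchanged from the MSE version.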
Comments