Setting SGDRegressor's tol hyperparameter to -np.infty doesn't work

Question

I'm studying machine learning from the handson-ml2 book. The topic is early stopping with stochastic gradient descent. I'm running the following code in PyCharm:

```py
from copy import deepcopy

import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1)

X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10)

# Prepare the data
poly_scaler = Pipeline([
    ("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
    ("std_scaler", StandardScaler())
    ])

X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)

sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True,
                       penalty=None, learning_rate="constant", eta0=0.0005, random_state=42)
minimum_val_error = float("inf")
best_epoch = None
best_model = None

for epoch in range(1000):
    sgd_reg.fit(X_train_poly_scaled, y_train)
    y_val_predict = sgd_reg.predict(X_val_poly_scaled)
    val_error = mean_squared_error(y_val, y_val_predict)
    if val_error < minimum_val_error:
        minimum_val_error = val_error
        best_epoch = epoch
        best_model = deepcopy(sgd_reg)

print("best_epoch:", best_epoch, "best_model:", best_model)
```

where I'm getting this error:

```
sklearn.utils._param_validation.InvalidParameterError: The 'tol' parameter of SGDRegressor must be a float in the range [0, inf) or None. Got -inf instead.
```

This error says the 'tol' parameter can't be set to '-inf'. But in the book it seems to work. How can I fix this problem?



Answer 1

Score: 0

The book you're referring to, where setting the tol hyperparameter to -inf appears to work, was likely written against a different version or implementation of the SGDRegressor class.

If you'd like to copy the code from the book without making any modifications, make sure you're using the same library that was used in the book. As an extra measure, make sure the versions match. You can check this with library.__version__.
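As a minimal sketch of that version check (assuming scikit-learn is the library in question; older releases accepted a negative tol, while strict parameter validation was added in later 1.x releases):

```python
import sklearn

# Print the installed scikit-learn version so it can be compared
# against the version the book's code was written for.
print(sklearn.__version__)
```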

Alternatively, if you're fine with making a small change to the code, you should be able to simply set tol to 0 (the behavior should be identical: the model will effectively ignore that parameter and keep training until max_iter or some other stopping criterion is met).
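A minimal sketch of the amended instantiation. Note tol=None is another option, since the error message states the valid range is [0, inf) or None; the toy X and y below are only there to show the fit no longer raises:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# tol=None disables the convergence check entirely; tol=0 keeps it but
# never triggers early. Either value passes the parameter validation.
sgd_reg = SGDRegressor(max_iter=1, tol=None, warm_start=True,
                       penalty=None, learning_rate="constant",
                       eta0=0.0005, random_state=42)

X = np.random.rand(20, 3)
y = np.random.rand(20)
sgd_reg.fit(X, y)  # no InvalidParameterError is raised
```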


huangapple
  • Published on 2023-06-29 19:07:32
  • When reposting, please keep this link: https://go.coder-hub.com/76580466.html