Understanding the Implications of Scaling Test Data Using the Same Scalar Object as Training Data

Question

I am currently working on a machine learning project and have run into a dilemma regarding the scaling of test data. I understand that when scaling features, we fit the scaler object on the training data and then transform both the training and test data using that same scaler object.

However, I am concerned about potential data leakage when scaling the test data. Since the scaler object is based on statistical properties (e.g., mean, standard deviation) computed from the training data, I am unsure how accurately it can scale the test data without incorporating information from the test set.

Could someone clarify whether there is a risk of data leakage when transforming the test data with the same scaler object used for the training data? If so, what is the best approach to mitigate that risk and ensure a reliable evaluation of the model's performance?

I appreciate any insights or guidance from the community to help address my confusion and ensure proper scaling practices in my machine learning project.

Thank you in advance for your help and expertise.

Answer 1

Score: 0

When scaling your data, you must "learn" the scaling parameters (i.e., create the scaler) using only your training dataset, just as you wrote.
There is no leakage when you use the same scaler on your test set.

The only thing you need to make sure of, in that context, is that you split your data first and create the scaler on the training set. Make sure not to re-create the scaler when applying it to the test set.
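In scikit-learn terms, that workflow can be sketched as follows (a minimal illustration using `StandardScaler`; the toy data and split parameters are assumptions, not from the question):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Toy feature matrix and labels -- illustrative data only
X = np.arange(20, dtype=float).reshape(10, 2)
y = np.arange(10)

# 1. Split FIRST, so the test set never influences the scaler
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# 2. Fit the scaler on the training data only
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)

# 3. Re-use the SAME fitted scaler on the test data -- transform(), never fit()
X_test_scaled = scaler.transform(X_test)

# The scaler's statistics come purely from the training split,
# so no information about the test set leaks into preprocessing
print(scaler.mean_)  # column means of X_train, not of the full dataset
```

The key point is that `fit` (or `fit_transform`) touches only `X_train`; the test set only ever goes through `transform`.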

Learning from the training set works the same way whether it concerns the model's parameters or the minimum and maximum of the distribution (or any other property).

Another thing to keep in mind: if you want the values to be in some range, say [0, 1], and you created a scaler using your training set, there is still a possibility that some extreme value in the test set won't be mapped into that range by the scaler.
You can address this possible issue by forcing extreme values to be mapped to the edges of your range.
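As a sketch of that clipping idea (the data here is made up; `np.clip` is one way to force values onto the range edges, and recent scikit-learn versions also offer a `clip=True` option on `MinMaxScaler`):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[0.0], [5.0], [10.0]])  # training range: 0..10
X_test = np.array([[-2.0], [4.0], [15.0]])  # contains values outside 0..10

scaler = MinMaxScaler()  # maps the TRAINING min/max to [0, 1]
scaler.fit(X_train)

raw = scaler.transform(X_test)      # -2 -> -0.2 and 15 -> 1.5 fall outside [0, 1]
clipped = np.clip(raw, 0.0, 1.0)    # force extreme values onto the range edges

print(raw.ravel())      # -0.2, 0.4, 1.5
print(clipped.ravel())  # 0.0, 0.4, 1.0
```

Whether clipping is appropriate depends on the model; it discards how extreme an out-of-range value was, so for some problems you may prefer to leave the values unclipped.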

I hope this helps.

huangapple
  • Posted on 2023-06-05 17:47:54
  • Please retain this link when reposting: https://go.coder-hub.com/76405210.html