The product of a matrix and its transpose is not symmetric and not positive semidefinite

Question

I want to compute the covariance matrix of a batch of feature vectors (replaced by torch.randn here) using the code below:

import torch

a = torch.randn(64, 512)                 # batch of 64 feature vectors of dimension 512
cov_mat = (a.T @ a) / (a.size(0) - 1)    # 512 x 512 covariance estimate

The produced cov_mat should be symmetric and positive semidefinite, but it is not. So I changed the code above by adding cov_mat = (cov_mat + cov_mat.T) / 2 to force the covariance matrix to be symmetric.

Now the covariance matrix is symmetric, but it is still not positive semidefinite. I've also tried torch.mm instead of @, but it still didn't work. Does it have something to do with precision?
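
For reference, here is a minimal diagnostic sketch of both problems (torch.linalg.eigvalsh is an assumption here and may require a newer PyTorch than the version listed below):

import torch

a = torch.randn(64, 512)
cov_mat = (a.T @ a) / (a.size(0) - 1)

# How far is cov_mat from being exactly symmetric?
print((cov_mat - cov_mat.T).abs().max())     # typically around 1e-6 in float32

# Smallest eigenvalue of the symmetrized matrix; a negative value means
# the matrix is not numerically psd.
sym = (cov_mat + cov_mat.T) / 2
print(torch.linalg.eigvalsh(sym).min())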

Here are the specific versions:
Python 3.7
PyTorch 1.8.1

Answer 1

Score: 1

I ran into similar issues, and the conclusion was indeed that it was a precision issue. When I executed your code (without your modification), I observed that cov_mat is almost symmetric, with an error around 1e-6. This is actually expected, as explained here:

> Floating point precision limits the number of digits you can accurately rely on.
> torch.float32 has precision between 6 and 7 digits regardless of exponent, i.e. precision issues are expected for values < 1e-6. Similarly, torch.float64 will have precision issues for values < 1e-15.
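
As a quick illustration of these limits (a minimal sketch on my part), torch.finfo exposes the machine epsilon of each dtype, and additions below that threshold are simply lost:

import torch

print(torch.finfo(torch.float32).eps)   # ~1.19e-07
print(torch.finfo(torch.float64).eps)   # ~2.22e-16

# In float32, adding a value below the precision threshold changes nothing:
x = torch.tensor(1.0)                   # float32 by default
print(x + 1e-8 == x)                    # tensor(True)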

If you really need more precision, you can use torch.float64 instead:

# with the default float32:
a = torch.randn(64, 512)
c = (a.T @ a) / (a.size(0) - 1)
((c - c.T) < 1e-7).all()    # >>> tensor(False)
((c - c.T) < 1e-6).all()    # >>> tensor(True)

# with float64:
a = torch.randn(64, 512, dtype=torch.float64)
c = (a.T @ a) / (a.size(0) - 1)
((c - c.T) == 0).all()      # >>> tensor(True)

While your workaround cov_mat = (cov_mat + cov_mat.T) / 2 also achieves similar precision with float32 (in the sense that the matrix becomes exactly symmetric), it will make the matrix "less" psd.
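
To make this concrete, here is a small sketch (again assuming torch.linalg.eigvalsh is available in your PyTorch): with 64 samples and 512 features, a.T @ a has rank at most 64, so most eigenvalues are exactly zero in theory and can slip slightly below zero in float32:

import torch

a = torch.randn(64, 512)
c = (a.T @ a) / (a.size(0) - 1)
c_sym = (c + c.T) / 2

print((c_sym == c_sym.T).all())            # tensor(True): exactly symmetric now
print(torch.linalg.eigvalsh(c_sym).min())  # often slightly negative, around -1e-6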

Unfortunately, whatever you do, you will always suffer from precision errors (around 1e-6 for float32, 1e-15 for float64). As a result, the eigenvalues of the covariance matrix may be very slightly below zero. I tried manually setting the negative eigenvalues to 0, but there are still errors.
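
For completeness, the eigenvalue-clipping attempt mentioned above looks roughly like this (a sketch, assuming torch.linalg.eigh); the reconstruction is itself done in floating point, which is why it does not fully solve the problem:

import torch

a = torch.randn(64, 512)
c_sym = (a.T @ a) / (a.size(0) - 1)
c_sym = (c_sym + c_sym.T) / 2

# Clip negative eigenvalues to zero and rebuild the matrix.
w, v = torch.linalg.eigh(c_sym)
c_clipped = v @ torch.diag(w.clamp(min=0.0)) @ v.T

# The round trip reintroduces rounding error, so c_clipped can still
# have eigenvalues marginally below zero.
print(torch.linalg.eigvalsh(c_clipped).min())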

TL;DR My final advice would be:

  • if you can live with small errors, just go for float64
  • if you need the matrix to be psd in order to use specific properties (like the fact that x.T @ C @ x >= 0), clamp the resulting value to be above 0
  • if you absolutely need the matrix to be psd, you can do c += 1e-12 * torch.eye(a.size(1)) (I don't recommend it unless you need it, because it modifies the matrix)

The last two options are sketched below.
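
A minimal sketch of those two options (the 1e-12 jitter value comes from the bullet above; with float32, a larger shift on the order of 1e-6 would likely be needed):

import torch

a = torch.randn(64, 512, dtype=torch.float64)
c = (a.T @ a) / (a.size(0) - 1)
c = (c + c.T) / 2                        # enforce exact symmetry

# Option 2: clamp the quadratic form instead of modifying the matrix.
x = torch.randn(512, dtype=torch.float64)
quad = (x @ c @ x).clamp(min=0.0)

# Option 3: shift every eigenvalue up by adding jitter on the diagonal.
c_psd = c + 1e-12 * torch.eye(a.size(1), dtype=c.dtype)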

Hope that helps you!

