Maximum Variance Unfolding with CVXPY
Question
I am trying to reproduce the results from this paper (Weinberger and Saul, 2004, doi/10.5555/1597348.1597471), in which the authors use Maximum Variance Unfolding (MVU) to learn a low-dimensional representation of a few distributions. The general problem of MVU, after dropping the rank constraint, is as follows:

maximize    Tr(G)
subject to  (e_k − e_l)ᵀ G (e_k − e_l) = ‖x_k − x_l‖²   for every point k and each of its neighbors l
            Σ_{k,l} G_{kl} = 0
            G ⪰ 0

Here, G is the inner product (Gram) matrix of the data. Each k corresponds to a data point, and each l corresponds to one of its neighbors. The e vectors are "indicator vectors" that are zero in every entry except the one named by the subscript. The second constraint zero-centers the data; I ignored this constraint in my formulation so as to keep it as simple as possible. Finally, G must be positive semidefinite.
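Written out, each distance constraint touches only three entries of G, which is the form used in the code below, and the centering constraint forces the embedded points y_k (whose inner products G_kl = y_k · y_l the matrix G represents) to sum to zero:

(e_k − e_l)ᵀ G (e_k − e_l) = G_kk − 2 G_kl + G_ll
Σ_{k,l} G_{kl} = Σ_{k,l} y_k · y_l = ‖Σ_k y_k‖² = 0   ⇔   Σ_k y_k = 0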
I am trying to replicate the experiments in the paper above on a dataset which consists of 400 images (76x101 pixels) of the same progressively rotated teapot. I start by importing the data and relevant libraries:
import scipy.io
import cvxpy as cvx
import numpy as np
from sklearn.neighbors import NearestNeighbors
data = scipy.io.loadmat('teapots.mat')
data = data["Input"][0][0][0]
# make it (400 x 23028)
data = data.T.astype(np.float64)
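The loaded array should then contain one flattened image per row, matching the comment above (23028 = 76 × 101 × 3, so presumably three color channels per pixel):
print(data.shape)  # expected: (400, 23028)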
Then, I build an adjacency matrix representation of the k-nearest neighbors of each data point:
n_points = data.shape[0]
# create sparse matrix (400 x 400) with the connections
nn = NearestNeighbors(n_neighbors=4).fit(data)
nn = nn.kneighbors_graph(data).todense()
nn = np.array(nn)
Finally, I try to perform MVU to discover low-dimensional embeddings for these data using CVXPY:
# inner product matrix of the original data
X = cvx.Constant(data.dot(data.T))
# inner product matrix of the projected points; PSD constrains G to be both PSD and symmetric
G = cvx.Variable((n_points, n_points), PSD=True)
G.value = np.zeros((n_points, n_points))
# spread out points in target manifold
objective = cvx.Maximize(cvx.trace(G))
constraints = []
# add distance-preserving constraints
for i in range(n_points):
    for j in range(n_points):
        if nn[i, j] == 1:
            constraints.append(
                (X[i, i] - 2 * X[i, j] + X[j, j]) -
                (G[i, i] - 2 * G[i, j] + G[j, j]) == 0
            )
problem = cvx.Problem(objective, constraints)
problem.solve(verbose=True, max_iters=10000000)
print(problem.status)
However, CVXPY tells me the problem is infeasible. If I remove the constraints, the solver says the problem is unbounded, which makes sense because I'm maximizing a convex function of the decision variable. My question, then, is what mistake(s) I am making in my formulation of MVU. Thank you in advance!
I omitted, in the code above, a verification that the adjacency graph is not disconnected. To do that, I simply perform a DFS on the adjacency graph and assert that the number of visited nodes is equal to the number of rows in the data. For n=4 neighbors, this holds.
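A minimal sketch of that connectivity check (assuming nn is the dense 0/1 adjacency matrix built above, treated as undirected; graph_is_connected is just an illustrative helper name):
# hypothetical helper: iterative DFS over the kNN graph,
# counting an edge if either direction is marked in nn
def graph_is_connected(adj):
    n = adj.shape[0]
    visited = set()
    stack = [0]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        for nbr in range(n):
            if (adj[node, nbr] == 1 or adj[nbr, node] == 1) and nbr not in visited:
                stack.append(nbr)
    return len(visited) == n

assert graph_is_connected(nn), "kNN graph is disconnected; increase n_neighbors"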
Answer 1
Score: 1
This is actually a very hard type of SDP problem, because you have a lot of near-linear dependencies. Even the best implementations will run into some sort of numerical issues and will require reduced tolerances or other tricks. It really is a bit nasty.
Regarding your code:
- Rescale your data so that X is not of order 1e+8. It makes the solver immediately go numerically bananas. For example, divide by 1e+6. That advice would apply to any problem, actually.
- Add the normalizing constraint sum(G) = 0. Without it the problem can be unbounded, and the solver will also go bananas, just a bit later. You can see huge values appearing in the SCS log output. (Both changes are sketched below on the CVXPY model from the question.)
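Applied to the CVXPY formulation from the question, those two changes might look like this (a sketch; 1e+6 is just the example scale factor mentioned above):
# rescale the input Gram matrix so its entries are of moderate magnitude
X = cvx.Constant(data.dot(data.T) / 1e6)
G = cvx.Variable((n_points, n_points), PSD=True)
# normalizing / zero-centering constraint: all entries of G sum to zero
constraints = [cvx.sum(G) == 0]
# ... then append the same distance-preserving constraints as before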
With these changes I was able to get SCS to start converging. After a while it looked like
8500| 1.01e+00 1.11e-04 6.31e+00 -1.64e+06 5.49e-04 6.15e+02
8750| 1.01e+00 1.11e-04 6.39e+00 -1.64e+06 5.49e-04 6.33e+02
I will not wait for it to finish, though. You may need to relax the tolerances if you want it to terminate.
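For example, SCS options can be passed straight through problem.solve; the exact tolerance names depend on the installed SCS version (eps for SCS 2.x, eps_abs/eps_rel for SCS 3.x), so the line below is a sketch assuming SCS 3.x:
# relax the SCS stopping tolerances so the solver can terminate earlier
problem.solve(solver=cvx.SCS, verbose=True, eps_abs=1e-4, eps_rel=1e-4, max_iters=100000)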
I would advise you to use cvx.MOSEK (where, disclaimer, I work) as a solver, but it seems the CVXPY compilation procedure for MOSEK runs into some issues and starts eating a lot of RAM, taking forever. So instead here is a direct implementation in the MOSEK Fusion API (with the rescaling and normalizing constraint):
from mosek.fusion import *
import scipy.io
import numpy as np
from sklearn.neighbors import NearestNeighbors
import sys
data = scipy.io.loadmat('teapots.mat')
data = data["Input"][0][0][0]
# make it (400 x 23028)
data = data.T.astype(np.float64)
n_points = data.shape[0]
# create sparse matrix (400 x 400) with the connections
nn = NearestNeighbors(n_neighbors=4).fit(data)
nn = nn.kneighbors_graph(data).todense()
nn = np.array(nn)
# inner product matrix of the original data
X = data.dot(data.T)/10**6
print(X)
# inner product matrix of the projected points; PSD constrains G to be both PSD and symmetric
M = Model()
G = M.variable(Domain.inPSDCone(n_points))
M.constraint(Expr.sum(G), Domain.equalsTo(0))
# spread out points in target manifold
M.objective(ObjectiveSense.Maximize, Expr.sum(G.diag()))
# add distance-preserving constraints
for i in range(n_points):
    for j in range(n_points):
        if nn[i, j] == 1:
            M.constraint(Expr.add([G.index([i,i]), G.index([j,j]), Expr.mul(-2, G.index([i,j]))]),
                         Domain.equalsTo(X[i, i] - 2 * X[i, j] + X[j, j]))
# Print log
M.setLogHandler(sys.stdout)
# Reduce accuracy
M.setSolverParam("intpntCoTolRelGap", 1e-5)
M.setSolverParam("intpntCoTolPfeas",1e-5)
M.setSolverParam("intpntCoTolDfeas",1e-5)
#Solve
M.solve()
Gresult = G.level().reshape((n_points,n_points))
print(Gresult)
The solution looks OK:
Interior-point solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 1.5209452618e+06 nrm: 4e+03 Viol. con: 2e-02 var: 0e+00 barvar: 0e+00
Dual. obj: 1.5209449262e+06 nrm: 8e+03 Viol. con: 0e+00 var: 0e+00 barvar: 1e-04
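If you then want the actual low-dimensional coordinates, the usual last step of MVU (as in the paper) is to eigendecompose the recovered Gram matrix and keep the leading components scaled by the square roots of their eigenvalues; a minimal sketch:
# eigendecomposition of the learned Gram matrix; eigh returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(Gresult)
order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
top = order[:2]                            # e.g. keep a 2-dimensional embedding
embedding = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))
print(embedding.shape)                     # (400, 2)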