Eigen::Pardiso gives a wrong solution which is very close to zero

Question
I use Eigen 3.4 and MKL to solve a sparse linear system.
Previously I was able to get a correct solution using Eigen::SimplicialLDLT to solve II\EE, but it took a long time. II is a sparse matrix computed as J^T * J and EE is a dense vector. J is a Jacobian matrix of dimension 175215 x 175215 and EE is a vector of dimension 175215 x 1. J is invertible (the original post linked an image visualizing II).
In my case, Eigen::SimplicialLDLT takes 14 s while MATLAB takes only 2.5 s. Therefore, I tried Eigen::Pardiso to solve the system. It is really fast, but it yields an incorrect solution that is very close to zero. This solver seems to have a great many tunable parameters; could setting them help me get a correct solution? Or is there another, more efficient way to solve this problem?
I have tried almost all of the solvers provided by Eigen, including its built-in ones and external ones such as SuiteSparse, but the results are not good: they are either very slow or incorrect. I want a correct solution with a time cost similar to MATLAB's.
Below is my code:
Eigen::SparseMatrix<double> U = JPSlice.transpose() * JPSlice;
Eigen::SparseMatrix<double> V = JDRow.transpose() * JDRow + WeightHH;
Eigen::SparseMatrix<double> W = JPSlice.transpose() * JDRow;
Eigen::VectorXd VecErrorS = Eigen::VectorXd::Map(ErrorS.data(), ErrorS.size());
Eigen::VectorXd EP = -JPSlice.transpose() * VecErrorS;
Eigen::VectorXd ED = -JDRow.transpose() * VecErrorS;
Eigen::ArrayXi RowIdSelectVar = IdSelectVar.col(0);
Eigen::ArrayXi ColIdSelectVar = IdSelectVar.col(1);
// Sort the order to variables order
Eigen::ArrayXi ArraySortSelectId = RowIdSelectVar * ValParam.Sizej + ColIdSelectVar;
std::sort(ArraySortSelectId.data(), ArraySortSelectId.data() + ArraySortSelectId.size());
Eigen::ArrayXd SortSelectMap = SelectMap.transpose().reshaped(ValParam.Sizei*ValParam.Sizej,1);
Eigen::VectorXd XH0 = SortSelectMap(ArraySortSelectId);
Eigen::VectorXd EH = -WeightHH * XH0;
Eigen::VectorXd EDEH = ED + EH;
Eigen::SparseMatrix<double> UW = igl::cat(2, U, W);
Eigen::SparseMatrix<double> WT = W.transpose();
Eigen::SparseMatrix<double> WV = igl::cat(2, WT, V);
Eigen::SparseMatrix<double> II = igl::cat(1, UW, WV);
Eigen::VectorXd EE = igl::cat(1,EP,EDEH);
II.makeCompressed();
Eigen::initParallel();
Eigen::ConjugateGradient<Eigen::SparseMatrix<double>> solver;// or other method
solver.compute(II);
Eigen::VectorXd Delta = solver.solve(EE);
Answer 1
Score: 2
How large is your matrix? If it is large, you may be better off using an iterative solver. Direct solvers have high complexity and are best suited to cases where you expect to solve many linear systems with the same matrix and different right-hand-side vectors (time-dependent linear PDEs, for instance).
If J is square, then J^T J is positive definite if J is invertible (conversely, if J is not invertible, then J^T J is not invertible either). The Conjugate Gradient method is tailored to that case and should be very fast. If J is rectangular, MxN with M < N (an underdetermined system), the columns cannot be linearly independent, so the kernel of J is not reduced to {0}; thus J^T J would not be positive definite. You can still try CG, but verify the results. If J is rectangular with M > N, you may be trying to minimize a norm ||Jx - b||, with EE = J^T b, and Eigen offers Eigen::LeastSquaresConjugateGradient to solve that without constructing J^T J.
You can try preconditioning to speed up iterative schemes (and improve their stability). Eigen offers that out of the box: its iterative solvers take a preconditioner as a template argument, and the default DiagonalPreconditioner is exactly the simplest scheme, preconditioning by the diagonal (Jacobi preconditioning).
You can also try GMRES, which is generally quite robust, in particular if you have doubts about positive definiteness.
As a final note, you may have better luck asking over at scicomp (the Computational Science Stack Exchange).