Currently, the maximum number of iterations allowed in the Schur decomposition is fixed at 40. While this is adequate for double, it is clearly not enough for floating-point types with much higher precision. The same situation probably arises in other iterative algorithms. I see several options:
1 - find a way to automatically compute a maximum number of iterations based on NumTraits<Scalar>::epsilon() (sketched below);
2 - let the user adjust the maximum number of iterations;
3 - find another way to detect non-convergence.
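A minimal sketch of what option 1 could look like, assuming Scalar's epsilon() is representable as a double; the function name and the scaling heuristic are hypothetical, not Eigen code:

#include <algorithm>
#include <cmath>
#include <Eigen/Core>

// Hypothetical helper: scale the current fixed budget of 40 by how much
// more precise Scalar is than double, measured via NumTraits::epsilon().
template <typename Scalar>
int suggestedMaxIterations()
{
  // log(eps) counts significant digits up to a constant factor, so the
  // ratio below is ~1 for double and grows for higher-precision types.
  double eps   = static_cast<double>(Eigen::NumTraits<Scalar>::epsilon());
  double ratio = std::log(eps) / std::log(Eigen::NumTraits<double>::epsilon());
  return static_cast<int>(40 * std::max(1.0, ratio)); // == 40 for double
}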
This parameter is really problem-dependent; the user should have the possibility to adjust it. For the default value, we can follow the approach of LAPACK and EISPACK, where the maximum number of iterations is 30 * size_of_the_matrix. Regarding point 3, iterative methods for linear systems stop when itmax is reached or when the norm of the current residual exceeds the norm of the right-hand side multiplied by some predefined constant dtol, which signals divergence (see the sketch below).
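To make that stopping rule concrete, here is a minimal sketch; itmax and dtol follow the comment above, the other names are illustrative and not taken from any library:

bool shouldStop(double residualNorm, double rhsNorm,
                int iter, int itmax, double dtol)
{
  if (iter >= itmax)                 return true; // iteration budget exhausted
  if (residualNorm > dtol * rhsNorm) return true; // residual grew past dtol * ||b||: diverging
  return false;
}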
https://bitbucket.org/eigen/eigen/changeset/0b631af56bad/
changeset: 0b631af56bad
branch: 3.1
user: dnuentsa
date: 2012-07-16 11:31:59
summary: bug 479: Adjust max iterations count wrt matrix size
I thought that EISPACK had a maximum of 30 iterations for reducing one sub-diagonal element to zero, and that is also what Eigen used. However, I had another look at the old Fortran code, and I see now that it has a maximum of 30 * (number of rows) iterations for the whole reduction to triangular form. After the change, Eigen currently allows 30 * (number of rows) iterations for reducing a single sub-diagonal element. If that is indeed the intention (and I don't see a problem with it), then the documentation for m_maxIterations should probably be updated.
Oooh no, I didn't see that iter is reset to zero when one eigenvalue converges. I think 30 * (number of rows) is very large for a single sub-diagonal. Can we keep 30 iterations to remove one sub-diagonal, as was done before, and 30 * (number of rows) iterations for the whole reduction?
I think we should do as LAPACK does and not limit a single sub-diagonal removal to 30 iterations, because 1) the cost is clearly not the same for all sub-diagonals, and 2) this strategy has been shown to fail on some examples.
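A compilable skeleton contrasting the two counting strategies discussed in this thread; the QR step itself is elided and the bookkeeping is placeholder logic, not Eigen's implementation:

bool reduceToTriangular(int n)
{
  int totalIter = 0;     // LAPACK-style: one counter for the whole reduction
  int active    = n - 1; // last row of the not-yet-reduced block
  while (active > 0)
  {
    // ... perform one implicit QR step on the active block ...
    if (++totalIter > 30 * n)
      return false;      // budget exhausted: report no convergence
    // When a sub-diagonal entry becomes negligible, shrink the block
    // (--active). EISPACK-style code would instead reset a per-element
    // counter here (iter = 0) and fail any single element needing more
    // than 30 steps.
  }
  return true;
}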
Yesterday I changed the maximum number of iterations to follow the same strategy as LAPACK/EISPACK; see changeset 24a5443ecde9 on the dev branch and changeset 23f17dd6901e on the 3.1 branch. I think we still need to give the user the opportunity to change this, for instance when using very high-precision numbers.
Changeset 58c321490416 (only in the dev branch) allows the user to specify the maximum number of iterations through an extra argument of the compute() function: eigensolver.compute(A, true, 100) computes the eigenvalues and eigenvectors of A using at most 100 iterations. In hindsight, I'm wondering whether I should have implemented a different API, like eigensolver.setMaxIterations(100).compute(A). This would perhaps make the user code less cryptic. A disadvantage would be that the EigenSolver object gets an additional member variable to store the maximum number of iterations, making the object occupy slightly more memory.
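For illustration, a usage sketch of the three-argument overload described above, against the dev-branch API of changeset 58c321490416 (this overload later gave way to the style discussed in the next comment):

#include <Eigen/Eigenvalues>

int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(10, 10);
  Eigen::EigenSolver<Eigen::MatrixXd> es;
  // Third argument caps the number of Schur iterations (dev branch only).
  es.compute(A, /*computeEigenvectors=*/true, /*maxIters=*/100);
  return es.info() == Eigen::Success ? 0 : 1;
}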
I prefer the setMaxIterations() API. It is consistent with the sparse solvers, and it makes it possible to configure a solver and then pass it to a generic algorithm.
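This fluent style matches the setMaxIterations() API that Eigen eventually shipped; a usage sketch:

#include <Eigen/Eigenvalues>

int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(10, 10);
  Eigen::EigenSolver<Eigen::MatrixXd> es;
  es.setMaxIterations(100).compute(A);    // configure, then compute
  if (es.info() == Eigen::NoConvergence)
    es.setMaxIterations(1000).compute(A); // retry with a larger budget,
                                          // e.g. for high-precision scalars
  return 0;
}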
-- GitLab Migration Automatic Message -- This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new issue on our GitLab instance: https://gitlab.com/libeigen/eigen/issues/479.