The old SVD has been removed. We need to write a shiny new DivConqSVD implementing the Jessup-Sorensen divide-and-conquer SVD, with an efficient (i.e. blocked) bidiagonalization step, and with a fancy API allowing one to compute/apply U and V on demand after the decomposition has been done. See mails to the list.

HouseholderQR and ColPivHouseholderQR are good models to follow here for the skeleton.

This should work like LAPACK's SGESDD: http://www.netlib.org/lapack/single/sgesdd.f

There is also this offering, http://comments.gmane.org/gmane.comp.lib.eigen/3648, which simply does an eigenanalysis of A*A'. If you only want the strongest one or two dimensions, you can use its "fast mode" (power iteration). As Jitse noted, the squaring nature of the algorithm makes it ill-suited for getting the *smallest* singular values/vectors from ill-conditioned matrices. It does well for the dominant ones.
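As a toy illustration of that "fast mode", here is a hedged sketch of power iteration on A*A' to estimate the dominant singular value. Plain C++ with row-major std::vector storage; the function name is illustrative and this is not the linked code:

```cpp
#include <cmath>
#include <vector>

// Power iteration on A*A^T: repeatedly apply u <- A*(A^T*u) and normalize.
// The norm converges to the largest eigenvalue of A*A^T, i.e. sigma_max^2.
double dominantSingularValue(const std::vector<double>& A, int rows, int cols,
                             int iters = 200) {
    std::vector<double> u(rows, 1.0), tmp(cols, 0.0);
    double norm = 0.0;
    for (int it = 0; it < iters; ++it) {
        // tmp = A^T * u
        for (int j = 0; j < cols; ++j) {
            tmp[j] = 0.0;
            for (int i = 0; i < rows; ++i) tmp[j] += A[i * cols + j] * u[i];
        }
        // u = A * tmp
        for (int i = 0; i < rows; ++i) {
            u[i] = 0.0;
            for (int j = 0; j < cols; ++j) u[i] += A[i * cols + j] * tmp[j];
        }
        norm = 0.0;
        for (double x : u) norm += x * x;
        norm = std::sqrt(norm);
        for (double& x : u) x /= norm;  // normalize; norm -> sigma_max^2
    }
    return std::sqrt(norm);  // eigenvalue of A*A^T is sigma^2
}
```

The squaring is visible here: a singular value sigma enters as sigma^2, so the small ones drown in rounding error, exactly the ill-conditioning issue Jitse pointed out.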

There is a start in unsupported/Eigen/SVD done by a group of four students of ENSIMAG (engineering school). The big merging commit is here: https://bitbucket.org/eigen/eigen/commits/00b39b5bb0ee/

Changeset: 00b39b5bb0ee
User: Gauthier Brun <gauthier.brun@gmail.com>, Nicolas Carre <nicolas.carre@ensimag.fr>, Jean Ceccato <jean.ceccato@ensimag.fr>, Pierre Zoppitelli
Date: 2013-06-19 00:03:27
Summary: new unsupported and not finished SVD, using a divide-and-conquer algorithm, with tests and benchmark

and the working forks are here:
https://bitbucket.org/gl27/eigen
https://bitbucket.org/gl27/eigen-bdcsvd

The performance is not great yet because it relies on JacobiSVD for the terminal blocks, whose special sparse structure allows for a much more efficient SVD that has yet to be finished. Currently, this new D&C SVD is already slightly faster than JacobiSVD, while the JacobiSVD part still represents more than 75% of the computation time. This is thus very promising.

@Mborgerding: perhaps an even faster and likely more accurate alternative would be to follow this approach: "Calculating the Singular Values and Pseudo-Inverse of a Matrix", G. Golub and W. Kahan, http://epubs.siam.org/doi/pdf/10.1137/0702016
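For concreteness, the bidiagonalization step that Golub & Kahan's approach (and every variant discussed in this thread) starts from can be sketched in unblocked form. This is an illustrative, hedged sketch with made-up names and plain row-major std::vector storage, not Eigen code; a production version would be blocked and would store the Householder reflectors so U and V can be applied on demand:

```cpp
#include <cmath>
#include <vector>

// Golub-Kahan bidiagonalization: alternately apply Householder reflectors
// from the left (zeroing below the diagonal) and from the right (zeroing
// right of the superdiagonal), reducing a dense m x n matrix (m >= n) to
// upper bidiagonal form in place. U and V are not accumulated here.
void bidiagonalize(std::vector<double>& A, int m, int n) {
    auto at = [&](int i, int j) -> double& { return A[i * n + j]; };
    for (int k = 0; k < n; ++k) {
        // Left reflector: zero A(k+1..m-1, k).
        double norm = 0.0;
        for (int i = k; i < m; ++i) norm += at(i, k) * at(i, k);
        norm = std::sqrt(norm);
        if (norm > 0.0) {
            double alpha = (at(k, k) >= 0.0) ? -norm : norm;
            std::vector<double> v(m - k);
            for (int i = k; i < m; ++i) v[i - k] = at(i, k);
            v[0] -= alpha;  // v = x - alpha*e1 defines the reflector
            double vtv = 0.0;
            for (double x : v) vtv += x * x;
            if (vtv > 0.0) {
                for (int j = k; j < n; ++j) {  // A -= 2 v (v^T A) / (v^T v)
                    double dot = 0.0;
                    for (int i = k; i < m; ++i) dot += v[i - k] * at(i, j);
                    double s = 2.0 * dot / vtv;
                    for (int i = k; i < m; ++i) at(i, j) -= s * v[i - k];
                }
            }
        }
        // Right reflector: zero A(k, k+2..n-1), keeping the superdiagonal.
        if (k + 2 < n) {
            double rnorm = 0.0;
            for (int j = k + 1; j < n; ++j) rnorm += at(k, j) * at(k, j);
            rnorm = std::sqrt(rnorm);
            if (rnorm > 0.0) {
                double alpha = (at(k, k + 1) >= 0.0) ? -rnorm : rnorm;
                std::vector<double> w(n - k - 1);
                for (int j = k + 1; j < n; ++j) w[j - k - 1] = at(k, j);
                w[0] -= alpha;
                double wtw = 0.0;
                for (double x : w) wtw += x * x;
                if (wtw > 0.0) {
                    for (int i = k; i < m; ++i) {  // A -= 2 (A w) w^T / (w^T w)
                        double dot = 0.0;
                        for (int j = k + 1; j < n; ++j) dot += at(i, j) * w[j - k - 1];
                        double s = 2.0 * dot / wtw;
                        for (int j = k + 1; j < n; ++j) at(i, j) -= s * w[j - k - 1];
                    }
                }
            }
        }
    }
}
```

Since only orthogonal transforms are applied, the singular values (and the Frobenius norm) of A are preserved; the remaining work is an SVD of the small bidiagonal band, which is where SGESVD-style QR iteration and the D&C approach differ.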

I can't get past the paywall, but that's the old non-d&c bidiagonalizing SVD algorithm, right? It was obsoleted by the d&c SVD algorithm by Jessup & Sorensen, which the above-mentioned LAPACK SGESDD uses: http://www.cs.colorado.edu/~jessup/SUBPAGES/PS/sorensen.ps.gz

(In reply to comment #6) I'm not sure what the "old non-d&c bidiagonalizing SVD" is. Here, after the bidiagonalization they form a selfadjoint tridiagonal matrix on which the eigenvalues are computed. They present two variants: a simple one where the entries are squared, and a second one where the tridiagonal matrix is twice as large but has a null diagonal, which avoids squaring the entries. If the "old non-d&c bidiagonalizing SVD" is simply applying the QR algorithm to this second variant without explicitly forming it, then yes, they are equivalent. Nevertheless, if we explicitly form the first variant and call our SelfAdjointEigenSolver, this approach should still be faster than what Mborgerding suggested, because we avoid computing A*A' (we do this after the bidiagonalization), and the implementation is still very simple. That's all I wanted to say. Do we want this in Eigen? I don't know.
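The "first variant" above is cheap to form explicitly. Here is a hedged sketch: from an upper bidiagonal matrix B (diagonal d, superdiagonal e), build the symmetric tridiagonal T = B^T * B without materializing B; the eigenvalues of T are the squared singular values of B, so a selfadjoint tridiagonal eigensolver can finish the job. The struct and function names are illustrative, not part of any existing API:

```cpp
#include <cstddef>
#include <vector>

struct Tridiag {
    std::vector<double> diag;     // main diagonal of T = B^T * B
    std::vector<double> offdiag;  // sub/superdiagonal of T
};

// Column j of upper bidiagonal B contains only d[j] and e[j-1], so
// T(j,j) = d[j]^2 + e[j-1]^2 and T(j,j+1) = d[j]*e[j].
Tridiag bidiagToTridiag(const std::vector<double>& d,
                        const std::vector<double>& e) {
    const std::size_t n = d.size();
    Tridiag T;
    T.diag.resize(n);
    T.offdiag.resize(n ? n - 1 : 0);
    for (std::size_t j = 0; j < n; ++j) {
        T.diag[j] = d[j] * d[j];
        if (j > 0) T.diag[j] += e[j - 1] * e[j - 1];
        if (j + 1 < n) T.offdiag[j] = d[j] * e[j];
    }
    return T;
}
```

Note that the squaring here is precisely what loses accuracy for the smallest singular values, which is the motivation for the second (twice-as-large, non-squared) variant.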

Well, again, it's difficult to discuss the specifics without access to the article, but this being a Golub article, I expect it's the same bidiagonalizing SVD algorithm as given in Golub & Van Loan's book. By "old non-d&c bidiagonalizing SVD" I mean the Golub & Van Loan book algorithm. It bidiagonalizes the matrix and then applies implicit QR iteration, in a way that's very similar to how selfadjoint eigensolving on a tridiagonal matrix works, except that here the matrix is bidiagonal instead of tridiagonal (and a tridiagonal matrix is never actually formed, IIRC). That's what LAPACK SGESVD does. The d&c algorithm by Jessup & Sorensen, which is what LAPACK SGESDD does, improves on this implicit-QR-iteration-on-a-bidiagonal-matrix by making it d&c. That is, it splits the bidiagonal matrix in two halves, recursively SVDs the two halves, and the nontrivial problem it then solves is how to stitch the SVD decompositions of the two halves back together into an SVD decomposition of the original bidiagonal matrix. At least that's how I remember it ;-) it's a bit far away in my head. But SGESDD is, for all I know, considered a strict improvement over SGESVD, so I'd really like us to focus on an SGESDD-like solution, i.e. Jessup & Sorensen.

The Golub & Kahan article indeed describes what I think of as the standard SVD algorithm, which is the same as in the Golub & Van Loan book (possibly modulo implementation details, those pesky things that one really should get right). My impression (formed by going to talks, not by deep study) is that divide-and-conquer is indeed considered better, but it is also more complicated, and the improvement over the "old" algorithm is not that big, because the main work is in the bidiagonalization step (this may be different if you also need U and V, I don't quite remember). I don't remember anything about accuracy.

I agree that the bigger performance win comes from improving bidiagonalization performance, which is orthogonal to SGESVD vs. SGESDD. I believe that the high parallelizability/performance claims of SGESDD are measured on an already bidiagonal input. Accuracy-wise, if I remember correctly, Jessup & Sorensen is also better than the traditional SVD: the traditional SVD has accuracy issues when two singular values are very close to each other, and Jessup & Sorensen is less affected. That would be the bigger reason to prefer Jessup & Sorensen (also, it is /not/ significantly more complicated to implement). Check the paper rather than taking my word for it, though.

The biggest difficulty with D&C is the conquer step, where you basically have to efficiently perform a rank-one update of an existing SVD decomposition, which boils down to finding the roots of a rational equation. This is the step that is currently missing in the student work. There seem to be three approaches for that:
- Newton/bisection: simple because you only do local linear approximations, but it is unsafe.
- Simple rational approximation: this is what is implemented in LAPACK.
- FMM: http://www.cs.yale.edu/publications/techreports/tr933.pdf suggests using a fast multipole method; sounds fun but maybe risky in terms of practical accuracy/performance.

D&C was finalized a long time ago.

-- GitLab Migration Automatic Message -- This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/67.