This bugzilla service is closed. All entries have been migrated to https://gitlab.com/libeigen/eigen
Bug 67 - New divide-and-conquer SVD
Summary: New divide-and-conquer SVD
Status: RESOLVED FIXED
Alias: None
Product: Eigen
Classification: Unclassified
Component: SVD
Version: unspecified
Hardware: All
OS: All
Importance: --- enhancement
Assignee: Nobody
URL:
Whiteboard:
Keywords:
Depends on:
Blocks: 19 3.3
Reported: 2010-10-16 04:52 UTC by Benoit Jacob
Modified: 2019-12-04 09:43 UTC
CC: 5 users




Description Benoit Jacob 2010-10-16 04:52:35 UTC
The old SVD has been removed. We need to write a shiny new DivConqSVD implementing the Jessup-Sorensen divide-and-conquer SVD, with an efficient (i.e. blocked) bidiagonalization step, and with a fancy API allowing U and V to be computed/applied on demand after the decomposition has been done.

See mails to list.
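
For reference, a minimal sketch of the kind of API being asked for, written against the existing JacobiSVD interface (which a new divide-and-conquer class would presumably mirror); the matrix sizes are only for illustration:

#include <Eigen/Dense>
#include <iostream>

int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(100, 60);

  // Today's JacobiSVD interface: singular values are always computed,
  // U and V only when explicitly requested via the computation options.
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);

  std::cout << "singular values:\n" << svd.singularValues() << "\n";
  Eigen::MatrixXd U = svd.matrixU();   // 100 x 60 (thin)
  Eigen::MatrixXd V = svd.matrixV();   // 60 x 60

  // Least-squares solve through the decomposition.
  Eigen::VectorXd b = Eigen::VectorXd::Random(100);
  Eigen::VectorXd x = svd.solve(b);
  std::cout << "residual: " << (A * x - b).norm() << "\n";
  return 0;
}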
Comment 1 Benoit Jacob 2012-10-05 17:46:21 UTC
HouseholderQR and ColPivHouseholderQR are good models to follow here for the skeleton.
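
As a rough, hypothetical sketch of what such a skeleton could look like (the class name and members are illustrative only, mirroring the shape of HouseholderQR rather than any actual Eigen class):

#include <Eigen/Dense>
#include <algorithm>

// Hypothetical skeleton: a templated decomposition class with a compute()
// step and accessors, in the style of HouseholderQR / ColPivHouseholderQR.
template <typename MatrixType>
class DivConqSVD
{
public:
  typedef typename MatrixType::Scalar Scalar;
  typedef typename Eigen::NumTraits<Scalar>::Real RealScalar;
  typedef Eigen::Matrix<RealScalar, Eigen::Dynamic, 1> SingularValuesType;

  DivConqSVD() : m_isInitialized(false) {}
  explicit DivConqSVD(const MatrixType& matrix) : m_isInitialized(false) { compute(matrix); }

  DivConqSVD& compute(const MatrixType& matrix)
  {
    // 1. blocked bidiagonalization, 2. divide-and-conquer on the bidiagonal
    // core, 3. optional accumulation of U and V -- all left out here.
    m_singularValues.resize(std::min(matrix.rows(), matrix.cols()));
    m_singularValues.setZero();
    m_isInitialized = true;
    return *this;
  }

  const SingularValuesType& singularValues() const
  {
    eigen_assert(m_isInitialized && "DivConqSVD is not initialized.");
    return m_singularValues;
  }

private:
  SingularValuesType m_singularValues;
  bool m_isInitialized;
};

int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(10, 6);
  DivConqSVD<Eigen::MatrixXd> svd(A);
  return svd.singularValues().size() == 6 ? 0 : 1;
}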
Comment 2 Benoit Jacob 2013-03-20 18:31:11 UTC
This should work like LAPACK's SGESDD:
http://www.netlib.org/lapack/single/sgesdd.f
Comment 3 Mborgerding 2013-03-21 03:22:14 UTC
There is also this offering: http://comments.gmane.org/gmane.comp.lib.eigen/3648
which simply does an eigenanalysis of A*A'. If you only want the strongest one or two dimensions, then you can use its "fast mode" (power iteration).

As Jitse noted, the squaring nature of the algorithm makes it ill-suited for getting the *smallest* singular values/vectors from ill-conditioned matrices. It does well for the dominant ones.
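
For reference, a tiny sketch of the A*A' route and of the accuracy problem mentioned above, using a nearly rank-deficient matrix and only standard Eigen classes:

#include <Eigen/Dense>
#include <iostream>

int main()
{
  // A nearly rank-deficient matrix: the last row is almost a copy of the
  // previous one, so the smallest singular value is tiny.
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(8, 8);
  A.row(7) = A.row(6) + 1e-9 * Eigen::RowVectorXd::Random(8);

  // Singular values from the eigenvalues of A*A': squaring the matrix
  // squares its condition number, so the smallest values drown in roundoff.
  Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eig(A * A.transpose());
  std::cout << "via A*A': "
            << eig.eigenvalues().cwiseMax(0.0).cwiseSqrt().reverse().transpose() << "\n";

  // Reference values from a proper SVD.
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A);
  std::cout << "via SVD : " << svd.singularValues().transpose() << "\n";
  return 0;
}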
Comment 4 Gael Guennebaud 2013-06-20 10:52:43 UTC
There is a start in unsupported/Eigen/SVD done by a group of four students from ENSIMAG (an engineering school). The big merge commit is here:

https://bitbucket.org/eigen/eigen/commits/00b39b5bb0ee/
Changeset:   00b39b5bb0ee
User:        Gauthier Brun <gauthier.brun@gmail.com>, Nicolas Carre <nicolas.carre@ensimag.fr>, Jean Ceccato <jean.ceccato@ensimag.fr>, Pierre Zoppitelli
Date:        2013-06-19 00:03:27
Summary:     new unsupported and not finished SVD, using a divide and conquert algorithm, with tests and benchmark

and the working forks here:

https://bitbucket.org/gl27/eigen
https://bitbucket.org/gl27/eigen-bdcsvd

The performance is not great yet because it relies on JacobiSVD for the terminal blocks, while their special sparse structure allows for a much more efficient SVD that has yet to be finished.

Currently, this new D&C SVD is already slightly faster than JacobiSVD, even though the JacobiSVD part still represents more than 75% of the computation time. This is thus very promising.
Comment 5 Gael Guennebaud 2013-06-20 13:49:49 UTC
@Mborgerding: perhaps an even faster and likely more accurate alternative would be to follow this approach:

Calculating the Singular Values and Pseudo-Inverse of a Matrix
G. Golub and W. Kahan
http://epubs.siam.org/doi/pdf/10.1137/0702016
Comment 6 Benoit Jacob 2013-06-20 13:54:13 UTC
I can't get past the paywall, but that's the old non-d&c bidiagonalizing SVD algorithm, right? It was obsoleted by the d&c SVD algorithm by Jessup & Sorensen, which the above-mentioned LAPACK SGESDD uses:

http://www.cs.colorado.edu/~jessup/SUBPAGES/PS/sorensen.ps.gz
Comment 7 Gael Guennebaud 2013-06-20 15:22:46 UTC
(In reply to comment #6)
I'm not sure what the "old non-d&c bidiagonalizing SVD" is. Here, after the bidiagonalization they form a selfadjoint tridiagonal matrix on which the eigenvalues are computed. They present two variants: a simple one where the entries are squared, and a second one where the tridiagonal matrix is twice as large but has a null diagonal, which avoids squaring the entries. If the "old non-d&c bidiagonalizing SVD" simply applies the QR algorithm to this second variant without explicitly forming it, then yes, they are equivalent.

Nevertheless, if we explicitly form the first variant and call our SelfAdjointEigenSolver, this approach should still be faster than what Mborgerding suggested, because we avoid computing A*A' (the squaring only happens after the bidiagonalization) and the implementation is still very simple. That's all I wanted to say. Do we want this in Eigen? I don't know.
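
For concreteness, a small illustration of the two variants described above, assuming the bidiagonal factor (diagonal d, superdiagonal e) is already available; a dense SelfAdjointEigenSolver stands in for a dedicated tridiagonal eigensolver:

#include <Eigen/Dense>
#include <iostream>

int main()
{
  // Upper bidiagonal B, given by its diagonal d and superdiagonal e
  // (in the real code these would come out of the bidiagonalization step).
  const int n = 5;
  Eigen::VectorXd d = Eigen::VectorXd::Random(n).cwiseAbs();
  Eigen::VectorXd e = Eigen::VectorXd::Random(n - 1);

  Eigen::MatrixXd B = Eigen::MatrixXd::Zero(n, n);
  B.diagonal() = d;
  B.diagonal(1) = e;

  // Variant 1: the tridiagonal matrix B^T * B -- the entries get squared.
  Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eig1(B.transpose() * B);
  std::cout << "variant 1: "
            << eig1.eigenvalues().cwiseMax(0.0).cwiseSqrt().reverse().transpose() << "\n";

  // Variant 2: the 2n x 2n Golub-Kahan matrix, tridiagonal with a zero
  // diagonal and off-diagonal d1, e1, d2, e2, ..., dn; no squaring, and its
  // positive eigenvalues are the singular values of B.
  Eigen::MatrixXd T = Eigen::MatrixXd::Zero(2 * n, 2 * n);
  for (int i = 0; i < n; ++i)
  {
    T(2 * i, 2 * i + 1) = T(2 * i + 1, 2 * i) = d(i);
    if (i < n - 1)
      T(2 * i + 1, 2 * i + 2) = T(2 * i + 2, 2 * i + 1) = e(i);
  }
  Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eig2(T);
  std::cout << "variant 2: " << eig2.eigenvalues().tail(n).reverse().transpose() << "\n";

  // Reference values from a proper SVD of B.
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(B);
  std::cout << "reference: " << svd.singularValues().transpose() << "\n";
  return 0;
}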
Comment 8 Benoit Jacob 2013-06-20 17:10:28 UTC
Well, again it's difficult to discuss the specifics without access to the article, but this being a Golub article, I expect it's the same algorithm for bidiagonalizing SVD as given in Golub & van Loan's book.

By "old non-d&c bidiagonalizing SVD" I mean the Golub & van Loan book algorithm. It's bidiagonalizing the matrix, and then, applying implicit QR iteration, in a way that's very similar to how selfadjoint eigensolving on a tridiagonal matrix works, except that here the matrix is bidiagonal instead of tridiagonal (and a tridiagonal matrix is never actually formed, IIRC). That's what LAPACK SGESVD does.

The d&c algorithm by Jessup & Sorensen, which is what LAPACK SGESDD does, improves on this implicit-QR-iteration-on-a-bidiagonal-matrix by making it d&c. That is, it splits the bidiagonal matrix into two halves, recursively SVDs the two halves, and the nontrivial problem it then solves is how to stitch the SVD decompositions of the two halves of the bidiagonal matrix together into an SVD decomposition of the original bidiagonal matrix.

At least that's how I remember it ;-) it's a bit far back in my head. But SGESDD is, as far as I know, considered a strict improvement over SGESVD, so I'd really like us to focus on an SGESDD-like solution, i.e. Jessup & Sorensen.
Comment 9 Jitse Niesen 2013-06-20 17:36:33 UTC
The Golub & Kahan article indeed describes what I think of as the standard SVD algorithm, which is the same as the one in the Golub & Van Loan book (possibly modulo implementation details, those pesky things that one really should get right).

My impression (formed by going to talks, not by deep study) is that divide-and-conquer is indeed considered better, but it is also more complicated and the improvement over the "old" algorithm is not that big because the main work is in the bidiagonalization step (this may be different if you also need U and V, I don't quite remember). I don't remember anything about accuracy.
Comment 10 Benoit Jacob 2013-06-20 19:24:14 UTC
I agree that the bigger performance win comes from improving bidiagonalization performance, which is orthogonal to SGESVD vs SGESDD. I believe that the high parallelizability/performance claims for SGESDD are obtained by working on an already bidiagonal input.

Accuracy-wise, if I remember correctly, Jessup & Sorensen is also better than traditional SVD because traditional SVD has accuracy issues when two singular values are very close to each other, and Jessup & Sorensen is less affected. That would be the bigger reason to prefer Jessup & Sorensen (also, it is /not/ significantly more complicated to implement). Check the paper rather than taking my word, though.
Comment 11 Gael Guennebaud 2013-06-20 19:46:11 UTC
The biggest difficulty with D&C is the conquer step, where you basically have to efficiently perform a rank-one update of an existing SVD decomposition, which boils down to finding the roots of a rational equation. This is the step that is currently missing in the student work. There seem to be three approaches for that (see the sketch after this list):
- Newton/bisection: simple because you only do local linear approximations, but it is unsafe;
- rational approximation: you can use simple rational approximations instead; this is what is implemented in LAPACK;
- FMM: http://www.cs.yale.edu/publications/techreports/tr933.pdf suggests using a fast multipole method; sounds fun but maybe risky in terms of practical accuracy/performance.
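
To make the "roots of a rational equation" part concrete, here is a toy version of the secular equation for the symmetric-eigenvalue analogue of this problem (a rank-one update diag(d) + rho*z*z^T), solved by plain bisection and cross-checked against a dense eigensolve; deflation and the safeguarded rational iteration of the real implementations are deliberately left out:

#include <Eigen/Dense>
#include <iostream>

// Secular function for the symmetric rank-one update D + rho * z * z^T,
// D = diag(d):  f(lambda) = 1 + rho * sum_i z_i^2 / (d_i - lambda).
// Its n roots are the eigenvalues of the updated matrix; for rho > 0 they
// interlace d as d_1 < lambda_1 < d_2 < ... < d_n < lambda_n.
static double secular(const Eigen::VectorXd& d, const Eigen::VectorXd& z,
                      double rho, double lambda)
{
  return 1.0 + rho * (z.array().square() / (d.array() - lambda)).sum();
}

// Plain bisection on (lo, hi); the real thing uses a safeguarded rational
// approximation instead, which converges much faster.
static double solveRoot(const Eigen::VectorXd& d, const Eigen::VectorXd& z,
                        double rho, double lo, double hi)
{
  for (int it = 0; it < 100; ++it)
  {
    const double mid = 0.5 * (lo + hi);
    // f goes from -inf at the left end to >= 0 at the right end.
    if (secular(d, z, rho, mid) < 0.0) lo = mid; else hi = mid;
  }
  return 0.5 * (lo + hi);
}

int main()
{
  Eigen::VectorXd d(5); d << 0.5, 1.0, 2.0, 3.5, 5.0;   // sorted, distinct
  Eigen::VectorXd z(5); z << 0.3, -0.4, 0.2, 0.5, -0.1; // no zero entries
  const double rho = 0.8;
  const Eigen::Index n = d.size();
  const double eps = 1e-12;

  Eigen::VectorXd lambda(n);
  for (Eigen::Index i = 0; i + 1 < n; ++i)              // one root per gap
    lambda(i) = solveRoot(d, z, rho, d(i) + eps, d(i + 1) - eps);
  lambda(n - 1) = solveRoot(d, z, rho, d(n - 1) + eps,  // last root beyond d_n
                            d(n - 1) + rho * z.squaredNorm());

  // Cross-check against a dense eigensolve of the updated matrix.
  Eigen::MatrixXd M = d.asDiagonal();
  M += rho * z * z.transpose();
  Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eig(M);
  std::cout << "secular roots: " << lambda.transpose() << "\n"
            << "eigenvalues  : " << eig.eigenvalues().transpose() << "\n";
  return 0;
}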
Comment 12 Gael Guennebaud 2015-06-16 19:37:28 UTC
D&C was finalized a long time ago.
Comment 13 Nobody 2019-12-04 09:43:57 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity.

You can subscribe and participate further in the new bug through this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/67.
