From Eigen
Revision as of 12:49, 15 October 2018 by Chtz (Talk | contribs)


Choice of algorithms

"The Fast drives out the Slow even if the Fast is wrong." --- W. Kahan

Any matrix library must make trade-offs between speed and reliability. Eigen's bias is toward reliability, which may be related to the fact that its founder is a former non-applied mathematician.

As our table of decompositions shows, we favor safer, pivoting algorithms. Where applicable, we provide both a partial-pivoting algorithm, which offers a good speed/reliability trade-off, and a full-pivoting algorithm, which is entirely uncompromising on reliability. See for example FullPivLU.

Where we haven't yet had time to implement several alternatives and had to start with just one algorithm, our bias is again toward reliability. For example, our only SVD decomposition is JacobiSVD, and we feel comfortable claiming that, thanks to its fully two-sided Jacobi algorithm, it is the most reliable SVD you will find in any library; indeed, even LAPACK's working note on the subject uses only a one-sided Jacobi algorithm. This is not to say that we won't also implement faster, less reliable SVD algorithms in the future.

That table also clearly documents the maturity of each algorithm's Eigen implementation.

Test coverage

Our main test suite generates more than 500 executables, each containing on average 100 lines of Eigen-using code, testing all of Eigen's features across a wide range of numeric types.

We also have specific tests ("failtests") checking that bad user code which should not compile does indeed fail to compile. And we have imported a copy of Eigen 2's test suite into Eigen 3's, to guarantee that no regressions were introduced in this major new version.

Moreover, we provide a full BLAS implementation built on top of Eigen, and we run the standard BLAS test suite against it. We also have a partial LAPACK implementation, passing the corresponding LAPACK tests.

We engage our user base in testing by documenting our testing process and by getting beta users to run tests on a greater variety of systems than we could ourselves. We collect the results on our dashboard.

We offer many build-system options to run tests in non-standard setups, for example switching the default matrix storage order to row-major.