Most of our current tests only check for rough sanity of the result; we also need precision-oriented tests. Since it is impossible to predict in general what the precision of a computation should be, one should manually record the precision at a time when the algorithm is known to be sound, and make that the benchmark to check against, so as to prevent future regressions. Another approach is to compare precision against other existing libraries, but probably only some LAPACK libraries are reliable enough for that. Note also that if the "LAPACK test suite" todo item (bug 63) is completed, it will already guarantee precision quite well, making this todo item less of an emergency.
The situation has already been significantly improved. In particular, the SVD and selfadjoint-eigenvalues unit tests are now quite aggressive in terms of numerical robustness. In the future it would be nice to extend the approach taken for SVD to the other decompositions.
For sparse problems, we check the accuracy on a set of real-world problems. We still need to extend this set and automate its use.
Good enough for 3.3.
For dense problems, implementing bug 62 should be enough, so I'm closing this entry.
*** This bug has been marked as a duplicate of bug 62 ***