Todo for 3.0

This Todo is to keep track of the things that must be done for Eigen 3.0. There is a separate Todo for long-term and generalist Todo items. There is also a separate Release schedule for 3.0.

Note that a ton of stuff has already been done and isn't mentioned here, so this page isn't at all intended to give an idea of everything that will be in 3.0.

Note that this is a sortable table, so try to make it sort nicely. For the "status" column, let's use this convention:

{| class="wikitable"
! Status !! Meaning
|-
| 0 || nothing started
|-
| 1 || some discussion happened
|-
| 2 || some development started
|-
| 3 || sufficient features coded
|-
| 4 || sufficient features and mostly stable API
|-
| 5 || sufficient features, tested, with fully stable API
|}

For the "priority" column, let's use this convention:

{| class="wikitable"
! Priority !! Meaning
|-
| 0 || Required for 3.0-beta1
|-
| 1 || Required for 3.0
|-
| 2 || Very important feature, but should not block 3.0
|-
| 3 || Can I have a pony, too?
|}

For the "who" column, put whoever started working on it or is planning to, multiple persons possible, don't hesitate to add yourself as more help is always welcome!

{| class="wikitable sortable"
! Topic !! Task !! Who !! Status !! Priority
|-
| Core - xpr trees
| Investigate making nest-by-value the default for expressions and killing NestByValue. Thoroughly test the impact on performance.
| Hauke
| 2 - Hauke started it
| 0
|-
| Decompositions - rank API
| Do in the QR module (ColPiv and FullPiv), in the SVD module, and in the self-adjoint eigensolver, the same rank-determining API as in FullPivLU. That involves making rank(), isInvertible() etc. methods, using a threshold controlled by a setThreshold() method. That also involves adding kernel/image methods. (See the FullPivLU sketch below the table.)
| Benoit
| 1 - some threads
| 0
|-
| Decompositions - kernel for given dimension
| In all rank-revealing decompositions, it would be nice to have a function constructing the kernel matrix for a prescribed dimension of the kernel. At the very least this should exist for SVD decompositions, where it just means taking the n singular vectors associated with the n smallest singular values. In real-world use cases, this is the most useful way to get the kernel. The same idea should then be applied to image() for good measure. (See the sketch below the table.)
| Benoit
| 1 - discussed with Keir
| 1
|-
| Binary library: BLAS
| Implement a BLAS library using Eigen. Useful both in itself, and as a prerequisite for other Todo items (ultimately for using the LAPACK test-suite).
| Gael
| 2 - see blas/ directory
| 2 - Gael: I don't see it as a stopper since there exist a couple of other BLAS libraries.
|-
| Binary library: LAPACK
| Implement a LAPACK library using Eigen. Useful both in itself, and as a prerequisite for the Todo item about using the LAPACK test-suite.
|
| 1 - some threads
| 2
|-
| Tests - original LAPACK test-suite
| The idea is that once we have a LAPACK library implemented using Eigen, we should be able to run the original LAPACK test-suite directly against it.
|
| 1 - thread "back from google"
| 2 - however we need precision testing anyhow for 3.0-beta1
|-
| Tests - precision-oriented tests
| Most of our current tests only check for rough sanity of the result; we need precision-oriented tests too. Since it's impossible to predict in general what the precision of a computation should be, one should manually record the achieved precision at a time when the algorithm is known to be sane, and make that the benchmark to check against, preventing future regressions. Another approach is to compare precision against other existing libraries, but then probably only some LAPACK libraries are reliable enough for that. Also note that if the "LAPACK test-suite" item is achieved, it will already guarantee precision quite nicely, making this item less of an emergency. (See the sketch below the table.)
|
| 0
| 0 - and 2 if the "LAPACK test-suite" item is done
|-
| Core - optimize partial reductions
| Partial reductions can be evaluated/vectorized in a cleverer way with respect to the relative storage order.
| Gael
| 1 - I know how to do it ;)
| 2
|-
| Sparse - API stability for basic features
| The goal is to have a stable sparse module for all basic features (matrix assembly, products, triangular solvers). The linear solvers will stay experimental.
| Gael
| 3
| 0
|-
| Eigenvalues for real general matrices
| The implementation borrowed from JAMA should be rewritten in a clean way, separating out the real Schur decomposition as was done for the complex case. Then ComplexEigenSolver and EigenSolver should be merged.
|
| 1 - some threads
| 0
|-
| Fixing 'all' STL containers
| Some more STL containers (e.g. std::list) require the same handling as was required for std::vector. This is due to resizing and passing by value. (See the sketch below the table.)
|
| 1 - short discussion on IRC
| 2
|-
| cache/block size at runtime
| Currently we only allow controlling the cache/block size at compile time. Big users like Google need to control it at runtime, e.g. to have a single binary that runs on various kinds of hardware.
| Benoit
| 1 - thread "back from google"
| 0
|-
| JacobiSVD template parameters
| Improve/simplify them according to the discussion on the list.
| Benoit
| 1 - thread on the list
| 0
|-
| SVD improvements
| SVD is still essentially C code with for loops. It needs to use the Householder module, and to support all rectangular sizes and complex numbers.
| Benoit
| 1 - been talking about it forever
| 0
|-
| Polar decomposition API
| Currently there are some functions in SVD. The first step is to provide the same in JacobiSVD. The next question is whether it's worth putting that code in a base class. Finally, the API itself is open for improvements; at the very least the function names could be less clunky. (See the sketch below the table.)
| Benoit
| 0
| 0
|-
| LDLT on triangles
| At the moment there's a BUG comment in LDLT claiming that it does not use only one triangle of the matrix. Investigate and fix.
|
| 0
| 0
|-
| Blocked Householder
| At the moment our Householder transformations are not blocked. Doing this, and adapting the decompositions to use it, would yield large performance improvements; it is also a priority because it may well reveal needs for API improvements. (See the note below the table.)
|
| 1 - been discussed forever
| 0
|}
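
Below are a few sketches referenced from the table. For the "Decompositions - rank API" item, here is roughly what the existing FullPivLU interface looks like in use; the task is to offer the same interface in the other rank-revealing decompositions. A minimal sketch:

<source lang="cpp">
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXd A(4, 4);
  A << 1, 2, 3, 4,
       2, 4, 6, 8,    // 2 * row 1
       1, 0, 1, 0,
       3, 6, 9, 12;   // 3 * row 1, so rank(A) == 2

  Eigen::FullPivLU<Eigen::MatrixXd> lu(A);
  lu.setThreshold(1e-10);  // pivots below this threshold are treated as zero

  std::cout << "rank: " << lu.rank() << std::endl;
  std::cout << "invertible: " << lu.isInvertible() << std::endl;
  std::cout << "kernel basis:\n" << lu.kernel() << std::endl;
  std::cout << "image basis:\n" << lu.image(A) << std::endl;  // image() takes the original matrix
  return 0;
}
</source>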
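
For the "kernel for given dimension" item, a minimal sketch of the SVD-based approach described above; kernelOfDimension is a hypothetical helper name, not an existing Eigen function:

<source lang="cpp">
#include <Eigen/Dense>

// Hypothetical helper: return a basis of the "kernel" of prescribed dimension n,
// i.e. the n right singular vectors associated with the n smallest singular values.
Eigen::MatrixXd kernelOfDimension(const Eigen::MatrixXd& A, int n)
{
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullV);
  // JacobiSVD sorts the singular values in decreasing order, so the wanted
  // vectors are simply the last n columns of V.
  return svd.matrixV().rightCols(n);
}
</source>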
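
For the "precision-oriented tests" item, a sketch of what a record-and-compare regression test could look like; the recorded_precision value is a placeholder, not a measured figure:

<source lang="cpp">
#include <cassert>
#include <Eigen/Dense>

// Sketch of a precision regression test. The reference value below is a
// made-up placeholder: in a real test it would be recorded from a run made
// at a time when the algorithm is known to be sane.
void test_fullpivlu_precision()
{
  const double recorded_precision = 1e-12;  // placeholder, to be recorded once

  Eigen::MatrixXd A = Eigen::MatrixXd::Random(100, 100);
  Eigen::VectorXd b = Eigen::VectorXd::Random(100);
  Eigen::VectorXd x = A.fullPivLu().solve(b);

  // the relative residual must not regress past the recorded benchmark
  assert((A * x - b).norm() / b.norm() < recorded_precision);
}
</source>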
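
For the "STL containers" item, this is the kind of usage that needs to work; it relies on Eigen::aligned_allocator, the same workaround already documented for std::vector:

<source lang="cpp">
#include <list>
#include <Eigen/Dense>

int main()
{
  // The default std::allocator does not honor the 16-byte alignment that
  // fixed-size vectorizable Eigen types require. For std::vector, Eigen
  // already ships a workaround; for other containers such as std::list,
  // passing Eigen::aligned_allocator explicitly does the job:
  std::list<Eigen::Vector4f, Eigen::aligned_allocator<Eigen::Vector4f> > points;
  points.push_back(Eigen::Vector4f::Ones());
  return 0;
}
</source>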
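
For the "Polar decomposition API" item, a minimal sketch of how the factors can be obtained from an SVD, here for real square matrices; polarDecomposition is an illustrative name, not a proposed signature:

<source lang="cpp">
#include <Eigen/Dense>

// Given the SVD A = U S V^T, the polar decomposition A = Q P is obtained with
// Q = U V^T (orthogonal) and P = V S V^T (symmetric positive semidefinite).
void polarDecomposition(const Eigen::MatrixXd& A,
                        Eigen::MatrixXd& Q, Eigen::MatrixXd& P)
{
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Q = svd.matrixU() * svd.matrixV().transpose();
  P = svd.matrixV() * svd.singularValues().asDiagonal() * svd.matrixV().transpose();
}
</source>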
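
For reference on the "Blocked Householder" item: the standard technique here is the compact WY representation of Schreiber and Van Loan, in which a product of b Householder reflections is accumulated as

<math>
Q = H_1 H_2 \cdots H_b = I - V T V^T
</math>

where V is the m-by-b matrix whose columns are the Householder vectors and T is a b-by-b upper triangular matrix. Applying Q (or its transpose) to a matrix then boils down to a couple of matrix-matrix products instead of b successive rank-1 updates, which is where the performance win comes from.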