SpecialMatrix


Here are some random thoughts about this topic.

Overview of each matrix type

Triangular matrix and SelfAdjoint matrix

Currently in core.

We have the "view as" concept (currently implemented in Part) which has 2 different behaviors:

1 - in write mode, it allows updating (resp. evaluating) only one part of the lhs matrix (resp. rhs expression). This works quite well (vectorization still has to be checked). Ex:

 mat.part<UpperTriangular>() = a + b;

2 - in read mode, it allows extracting a triangular part of a dense matrix and viewing it as a dense expression where the opposite part is 0. This mode requires an "if" in the coeff function, so an algorithm working on a triangular matrix should bypass the Part expression.
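To illustrate the read mode, here is a minimal sketch of such a "view as dense" in plain C++ (independent of Eigen's actual classes; the names are hypothetical):

```cpp
#include <cassert>

// A dense column-major matrix viewed through an upper-triangular
// "view as dense": coefficients below the diagonal read as 0.
struct UpperTriangularView {
    const double* data; // dense column-major storage
    int n;              // square size
    double coeff(int row, int col) const {
        // the runtime "if" that makes a Part-like expression inefficient
        if (row > col) return 0.0;
        return data[row + col * n];
    }
};
```

This is exactly the branch an algorithm working directly on triangular storage should bypass.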

We also have several flags that allow recognizing a triangular expression, so a dense matrix can be "marked" as triangular.

Issues:

Proposal (if we assume we don't need a special class for triangular storage):

  • remove Part
  • remove the marked<>() concept for triangular stuff (useless)
  • add a TriangularBase class (does not inherit MatrixBase)
    • mode = union of (Upper or Lower) + (Triangular or SelfAdjoint)
      • is "strict" useful?
    • add a TriangularView class (similar to current Part)
    • add a CompactTriangularMatrix class for storage
  • solve:
    • move solveTriangular* to Part::solve*
    • make "unit-diag" a template parameter of this function
  • move .llt() .ldlt() (and all other functions for SelfAdjoint) to Part
  • optionally, add a small "view as dense" expression to get back the current (inefficient) behavior of Part and glue all matrices together.
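The solve item above boils down to forward/back substitution. A minimal sketch of a lower-triangular solve in plain C++ (not the Eigen API), with "unit-diag" as a template parameter as proposed:

```cpp
#include <vector>

// Solve L*x = b in place, where L is an n x n lower-triangular matrix
// stored column-major. UnitDiag skips the division, as proposed for
// the template parameter of Part::solve.
template <bool UnitDiag>
void solveLowerTriangular(const std::vector<double>& L, int n,
                          std::vector<double>& b) {
    for (int j = 0; j < n; ++j) {
        if (!UnitDiag) b[j] /= L[j + j * n];
        for (int i = j + 1; i < n; ++i)
            b[i] -= L[i + j * n] * b[j]; // eliminate column j
    }
}
```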

Diagonal matrix

In the core module.

We currently have a storage class:

DiagonalMatrix<Scalar,Size> mat; (done)

(a diagonal matrix is assumed to be square)

We have a wrapper from a vector:

 vec.asDiagonal()

This is done via DiagonalMatrixWrapper. DiagonalMatrix and DiagonalMatrixWrapper inherit DiagonalMatrixBase, which itself inherits MatrixBase.

We have views from a dense matrix:

mat.diagonal()              // gives a vector
mat.diagonal().asDiagonal() // this should be maintained
mat.part<Diagonal>()        // perhaps part should not accept that anymore

Proposal:

  • remove the inheritance from MatrixBase (this requires adding more code in the Geometry module, since currently a diagonal matrix is handled as a MatrixBase)
  • remove the "if" in coeff()
  • as for triangular/self-adjoint, add a small "view as dense expression" with the if
  • add DiagonalBase::inverse(), etc.
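As a sketch of what DiagonalBase::inverse() and the "view as dense" amount to, in plain C++ (hypothetical names, not the Eigen classes):

```cpp
#include <vector>

// A diagonal matrix stores only its diagonal vector.
struct DiagMat {
    std::vector<double> diag;
    // "view as dense": the off-diagonal "if" returns 0
    double coeff(int i, int j) const { return i == j ? diag[i] : 0.0; }
    // inverse of a diagonal matrix: coefficient-wise reciprocal
    DiagMat inverse() const {
        DiagMat r = *this;
        for (double& d : r.diag) d = 1.0 / d;
        return r;
    }
};
```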


skyline/band matrix

In a new "Band" or "Skyline" module. (a band matrix is assumed to be square)

We need storage, eg:

BandMatrix<Scalar,Size,Supers,Subs,Options> mat;
  Supers = number of super-diagonals
  Subs = number of sub-diagonals
  Options: 0, SelfAdjointBit

Internally, the coefficients would be stored in a single dense vector (let's follow the LAPACK storage scheme).

Examples:

   BandMatrix<Scalar,Size,0,0> would be equivalent to a diagonal matrix, but I advocate keeping a separate diagonal matrix specialization (because we want it in Core, and it is much simpler than a generic band matrix)
   BandMatrix<Scalar,Size,1,1> == tridiagonal matrix (no need to have a special class)
   BandMatrix<Scalar,Size,2,0> would have the UpperTriangular bit.

Question: how to instantiate a selfadjoint tridiagonal matrix? Two possibilities:

    1 - BandMatrix<Scalar,Size,1,1,SelfAdjointBit>
    2 - BandMatrix<Scalar,Size,1,0,SelfAdjointBit>

The second has the advantage of allowing control over which triangular part is used for storage.
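With the LAPACK band storage scheme mentioned above, element (i,j) of a matrix with `ku` super- and `kl` sub-diagonals lives at row `ku + i - j` of a packed (kl+ku+1) x n array. A minimal sketch in plain C++ (hypothetical names):

```cpp
#include <vector>

// LAPACK-style band storage: a (kl+ku+1) x n column-major array,
// with element (i,j) at ab[(ku + i - j) + j*(kl+ku+1)] when in band.
struct BandMat {
    std::vector<double> ab; // packed storage
    int n, kl, ku;          // size, sub-diagonals, super-diagonals
    // "view as dense": out-of-band coefficients read as 0
    double coeff(int i, int j) const {
        if (j - i > ku || i - j > kl) return 0.0;
        return ab[(ku + i - j) + j * (kl + ku + 1)];
    }
    // writable access to an in-band coefficient
    double& ref(int i, int j) { return ab[(ku + i - j) + j * (kl + ku + 1)]; }
};
```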

We need specialized solvers, as well as specialized matrix products with at least dense matrix/vector.

We don't need wrappers. (Wrappers from what?)

Again, we probably also need a "view" class to read/write the corresponding bands of a dense matrix (just like Part for triangular/selfadjoint, or Block). We could use the same principle as for the diagonal matrix: a BandMatrixBase with a BandMatrix class for storage and a BandMatrixPart. For instance, some algorithms in the QR module could use that.

And again let's add a "view as dense expression".

In the same vein, we also need to extend .diagonal() to address any sub/super-diagonal. For a dense matrix it would return an extended DiagonalCoeffs expression, while for a BandMatrix it would return a Block vector.
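The index mapping behind such an extended .diagonal() is simple: the k-th super-diagonal (k > 0) collects the coefficients (i, i+k), the k-th sub-diagonal (k < 0) the coefficients (i-k, i). A sketch in plain C++ (hypothetical free function, not the Eigen API):

```cpp
#include <vector>

// Extract the k-th diagonal of a dense column-major n x n matrix:
// k > 0 selects super-diagonals, k < 0 sub-diagonals, k == 0 the main one.
std::vector<double> diagonal(const std::vector<double>& m, int n, int k) {
    std::vector<double> d;
    int row = k < 0 ? -k : 0; // first coefficient of the diagonal
    int col = k > 0 ?  k : 0;
    for (; row < n && col < n; ++row, ++col)
        d.push_back(m[row + col * n]);
    return d;
}
```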


Block diagonal matrix

(... and more generally block matrices.)

We probably only need solvers and matrix products. If we are smart enough, both could be implemented using the Sparse framework, e.g., as SparseMatrix<Matrix4d>.

Hessenberg matrix

Probably no need to support these explicitly.


Toeplitz matrix

In a dedicated module "Toeplitz".

We mainly need storage:

 ToeplitzMatrix<Scalar,Size,Options> mat;
 Options = 0, SelfAdjoint.

and a specialized solver.

We probably don't need to support operations like + and *.

And again, let's add a "view as dense expression"; it does no harm.
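A Toeplitz matrix is constant along each diagonal, so storage reduces to one coefficient per diagonal (2n-1 values), and the "view as dense" maps (i,j) to the diagonal index i-j. A sketch in plain C++ (hypothetical names):

```cpp
#include <vector>

// Toeplitz matrix: coeff(i,j) depends only on i - j, so we store the
// 2n-1 diagonal values c[-(n-1)..n-1] in a single vector.
struct ToeplitzMat {
    std::vector<double> c; // c[k + n - 1] holds diagonal k = i - j
    int n;
    // the runtime subtraction done by the "view as dense"
    double coeff(int i, int j) const { return c[(i - j) + n - 1]; }
};
```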


Permutation matrix

This one is simple, just a wrapper around a Vector*i with overloaded operator*.
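For illustration, here is what such a wrapper amounts to in plain C++ (hypothetical names; applying the permutation is just an index gather, which is what the overloaded operator* would do):

```cpp
#include <vector>

// Permutation matrix P stored as an index vector: row i of P has its 1
// in column perm[i], so (P * v)[i] == v[perm[i]].
struct PermutationMat {
    std::vector<int> perm;
    std::vector<double> operator*(const std::vector<double>& v) const {
        std::vector<double> r(v.size());
        for (std::size_t i = 0; i < perm.size(); ++i)
            r[i] = v[perm[i]];
        return r;
    }
};
```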


How to glue everything together?

Proposal:

  • add a new Common<Derived> base class from which all other *Base classes (MatrixBase, SparseMatrixBase, TriangularMatrixBase, etc.) will inherit
    • move the set of very basic common features to Common (+, -, etc.)
  • extend the concept of "Shape", then we only have to specialize each variant of assign.
    • the idea is to provide some specialization of assign:
      • dense = triangular
      • dense = band
      • dense = sparse
      • etc.
    • then we have to extend CwiseBinaryOp a bit so that it knows how to combine different shapes, ex:
      • Triangular + Band => Band or Sparse (I don't know what's best yet)
      • Triangular + Sparse => Sparse
      • etc. (the idea is to default to Sparse which is the most generic)
  • Instead of the Common base class + "shape flags", perhaps one could make the base class of the expression a template parameter...


Discussions

Link with sparse matrix

The sparse module is based on the concept of an Iterator to efficiently skip zeros. We have iterators for each sparse matrix class, for unary and binary expressions, etc. Reusing these concepts for the special matrix types would allow reusing most of the algorithms of the sparse module. Something to investigate.

"View as dense" for all special matrices

-- Rikrd -- Looking at the overview, it seems all special matrices would accept a view as dense. This would make it easy to debug (print the matrix) and to fall back to costly dense operations when optimized versions are not implemented. This could be done by implementing:

 inline const Scalar coeff(int row, int col) const
 {
   // here goes the code to access the element (row, col)
 }

as it is now. Are there any special matrices for which a view as dense is impossible? Should it be a requirement for a special matrix to allow a view as dense?

-- Gael -- Yes, that's exactly the point! And yes, this is doable for any special matrix. For instance, DiagonalMatrix should also propose a coeff function with an assert if i!=j. The "view as dense" expression replaces this assert by a runtime if, and returns 0 if i!=j.

-- Rikrd -- OK, I now fully understand. Correct me if I'm wrong, but for instance the Toeplitz case should also propose a coeff function with an assert if i!=0 && j!=0, and the "view as dense" would replace the assert by a runtime calculation (with the if and subtraction) of which coeff it should actually return. I think you propose a really good design.