Symmetric matrices are quite important for people working with covariance matrices, especially in robotics. There are two main reasons: 1) more efficient computation and storage, and 2) almost more importantly, the semantic value, and the guarantee that the matrix remains symmetric computation after computation.
On the list there was a proposed design on the subject: http://listengine.tuxfamily.org/lists.tuxfamily.org/eigen/2010/06/msg00128.html I have no clue how it would work :) But one thing that worries me is "but note that one half is free to store another triangular or selfadjoint matrix". As I read it, the data between two symmetric matrices would be shared, but I am not sure how this sharing would happen. For convenience, it is important for me to be able to easily "transport" the matrix around my code, like this:

class Robot {
    Eigen::SymmetricMatrixXd m_poseCov;
public:
    const Eigen::SymmetricMatrixXd& poseCov() const { return m_poseCov; }
};

If the above proposal works nicely, and you do not have to give access to some other storage/matrix, then I guess it can work. But I am still unsure how the "sharing" of the memory space would work at all.
This isn't how I understood it! The link pasted in that discussion, http://netlib.org/lapack/lapack-3.2.html#_9_6_rectangular_full_packed_format, describes how to store a _single_ triangular/symmetric matrix as a rectangle, by cutting off a sub-triangle and putting it back at the other end. In other words, to store this triangular 6x6 matrix:

A
AA
AAA
BBBC
BBBCC
BBBCCC

as this rectangular 3x7 matrix:

BBBCAAA
BBBCCAA
BBBCCCA

I think there's enough agreement that this feature is wanted; it's just waiting for someone to implement it.
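For illustration, here is a minimal sketch of the index mapping that produces the picture above. It only covers the even-N, lower-triangular case (the full LAPACK RFP convention also handles odd N, upper triangles, and a transposed variant), and rfpIndex is a hypothetical helper name, not Eigen or LAPACK API:

#include <cassert>
#include <cstdio>

// Maps entry (i, j) of the lower triangle (i >= j) of an N x N matrix
// (N even, k = N/2) to (row, col) inside the k x (N+1) rectangle:
//   - the bottom k rows (blocks B and C) keep their column index,
//   - the top-left triangle A is stored transposed in the upper-right corner.
void rfpIndex(int N, int i, int j, int& row, int& col) {
    assert(N % 2 == 0 && 0 <= j && j <= i && i < N);
    const int k = N / 2;
    if (i >= k) { row = i - k; col = j;         }  // B block and C triangle
    else        { row = j;     col = i + k + 1; }  // A triangle, transposed
}

int main() {
    const int N = 6;
    char rect[N / 2][N + 1];
    // Label each entry with the sub-block it comes from; this reproduces
    // the BBBCAAA / BBBCCAA / BBBCCCA picture above.
    for (int i = 0; i < N; ++i)
        for (int j = 0; j <= i; ++j) {
            int r, c;
            rfpIndex(N, i, j, r, c);
            rect[r][c] = (i < N / 2) ? 'A' : (j < N / 2 ? 'B' : 'C');
        }
    for (int r = 0; r < N / 2; ++r) {
        for (int c = 0; c <= N; ++c) std::putchar(rect[r][c]);
        std::putchar('\n');
    }
    return 0;
}

The mapping is a bijection onto the 3x7 rectangle, so all N(N+1)/2 triangle entries are stored with zero wasted space.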
Ah I see, sounds good :)

> I think there's enough agreement that this feature is wanted, it's just waiting
> for someone to implement it.

Well, I almost volunteer. But I might want some documentation on how to write a new type of matrix. Should I file a bug report on the subject?
Yes, good idea, it can only make it happen sooner.
This feature (RFP support) would be great, especially for large matrices and/or in combination with the new Intel MKL integration.
As an alternative to the RFP format there was a suggestion of storing triangular/symmetric matrices using a simple packet-aligned format, e.g., using 6 packets (of 2 doubles each) to store the upper triangle of a 4x4 double matrix:

x00 x01 x02 x03
0   x11 x12 x13
x22 x23
0   x33
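A minimal sketch of that scheme, assuming each row of the upper triangle starts at the packet boundary at or before its diagonal entry, with the gap zero-padded; alignedOffset and paddedSize are hypothetical helper names for illustration:

#include <cstdio>

// Offset of entry (i, j), j >= i, of an N x N upper triangle in the
// padded array, with packet size P (P = 2 for SSE doubles, as above).
int alignedOffset(int N, int i, int j, int P) {
    int offset = 0;
    for (int r = 0; r < i; ++r)
        offset += N - (r / P) * P;       // padded length of row r
    return offset + (j - (i / P) * P);   // position within row i
}

// Total number of scalar slots, padding included.
int paddedSize(int N, int P) {
    int size = 0;
    for (int r = 0; r < N; ++r)
        size += N - (r / P) * P;
    return size;
}

int main() {
    // 4x4 doubles with 2-wide packets: 12 slots = 6 packets,
    // and x12 lands at slot 6 (third slot of the second row).
    std::printf("slots: %d\n", paddedSize(4, 2));
    std::printf("x12 at slot %d\n", alignedOffset(4, 1, 2, 2));
    return 0;
}

The cost relative to a fully packed format is at most P-1 padding zeros per row, in exchange for every row starting on a packet boundary.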
hm, this might be tricky to implement. I'd rather give up alignment and extend our product kernels (gemv and gemm) to support stride increments to leverage fast products on a simple compact storage scheme.
An interesting observation after an IRC discussion I just had (this does not contribute to implementing this): assuming your vertical dimension is way above a page size and the OS is smart enough to allocate physical RAM only for actually accessed memory, then with a simple selfAdjointView<> you end up with only about one page "wasted" per column on average (maybe just half a page, depending on alignment). E.g., a 128k x 128k double matrix on a system with 4kB pages would require only about 64.5GiB of physical RAM, although it occupies 128GiB of virtual address space. The same should essentially be true for cache line sizes and wasted cache space.

Having a true memory-saving symmetric/triangular storage is still interesting in the case of many small/medium-size matrices, or when interfacing with libraries. E.g., a full BLAS/LAPACK implementation will require packed storage.
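A back-of-the-envelope check of those figures, assuming column-major storage where only the lower triangle of each column is ever touched, plus roughly one partial page of waste at the start of each column:

#include <cstdio>

int main() {
    const long long N    = 128LL * 1024;  // 128k x 128k doubles
    const long long page = 4096;          // 4kB pages
    const long long virt = N * N * 8;     // full virtual footprint
    // Touched memory: the lower triangle itself, plus about one page
    // per column for the partially used page at the column's start.
    const long long touched = N * (N + 1) / 2 * 8 + N * page;
    std::printf("virtual: %.1f GiB, touched: %.1f GiB\n",
                virt / 1073741824.0, touched / 1073741824.0);
    return 0;
}

This prints roughly 128.0 GiB virtual versus 64.5 GiB touched, i.e., the per-column page waste is negligible next to the factor-of-two saving from skipping the untouched triangle.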
Here is a pull request that adds basic support for the RFP format: https://bitbucket.org/eigen/eigen/pull-requests/467
-- GitLab Migration Automatic Message -- This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further via the new issue at this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/42.