Bug 42 - Support for symmetric matrix
Summary: Support for symmetric matrix
Status: DECISIONNEEDED
Alias: None
Product: Eigen
Classification: Unclassified
Component: Core - general
Version: 3.0
Hardware: All / OS: All
Importance: Low Feature Request
Assignee: Nobody
URL:
Whiteboard:
Keywords:
Depends on: 46
Blocks:
Reported: 2010-10-12 11:51 UTC by Cyrille Berger
Modified: 2018-08-20 13:16 UTC
CC: 5 users

Description Cyrille Berger 2010-10-12 11:51:15 UTC
Symmetric matrices are quite important for people working with covariance matrices, especially in robotics. There are two main reasons:
1) more efficient computation and storage;
2) almost more importantly, the semantic value: the guarantee that the matrix remains symmetric, computation after computation.
Comment 1 Cyrille Berger 2010-10-12 12:39:36 UTC
On the list there was a proposed design on the subject:
http://listengine.tuxfamily.org/lists.tuxfamily.org/eigen/2010/06/msg00128.html

I have no clue how it would work :) But one thing that worries me is "but note that one half is free to store another triangular or selfadjoint matrix". As I read it, the data between two symmetric matrices would be shared, but I am not sure how this sharing would happen. For convenience reasons, it is important for me to be able to easily "transport" the matrix around my code, like this:

class Robot {
 Eigen::SymmetricMatrixXd m_poseCov;  // hypothetical type, not (yet) in Eigen

public:
 const Eigen::SymmetricMatrixXd& poseCov() const { return m_poseCov; }
};


If the above proposal worked nicely and you did not have to give access to some other storage/matrix, then I guess it could work. But I am still unsure how the "sharing" of the memory space would work at all.
Comment 2 Benoit Jacob 2010-10-12 12:58:10 UTC
This isn't how I understood it!

The link pasted in that discussion,
http://netlib.org/lapack/lapack-3.2.html#_9_6_rectangular_full_packed_format

describes how to store a _single_ triangular/symmetric matrix as a rectangle, by cutting off a sub-triangle and putting it back at the other end. In other words, to store the triangular 6x6 matrix

A
AA
AAA
BBBC
BBBCC
BBBCCC

as this rectangular 3x7 matrix:

BBBCAAA
BBBCCAA
BBBCCCA
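To make the rearrangement concrete, the index mapping for this layout can be sketched in plain C++. This is a hypothetical helper, not Eigen API: `rfpLowerPos` maps an entry (i, j), with i >= j, of an n x n lower-triangular matrix (n assumed even) to its slot in the nb x (n+1) rectangular full packed (RFP) layout with nb = n/2, following the 6x6 picture above.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch (not Eigen API). In the 6x6 example the bottom half
// (square block B and triangle C) keeps its column j and shifts up by nb
// rows; the top-left triangle A is stored transposed in the upper-right
// corner of the nb x (n+1) rectangle.
struct RfpPos { std::size_t row, col; };

inline RfpPos rfpLowerPos(std::size_t i, std::size_t j, std::size_t n) {
  const std::size_t nb = n / 2;   // n assumed even for simplicity
  if (i >= nb)
    return {i - nb, j};           // B (j < nb) and C (j >= nb) blocks
  return {j, i + nb + 1};         // A block, stored transposed
}
```

For n = 6 this sends the B corner (3, 0) to rectangle slot (0, 0), the first C entry (3, 3) to (0, 3), and the A entry (0, 0) to (0, 4), matching the diagram.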

I think there's enough agreement that this feature is wanted; it's just waiting for someone to implement it.
Comment 3 Cyrille Berger 2010-10-12 13:11:14 UTC
Ah I see, sounds good :)

> I think there's enough agreement that this feature is wanted, it's just waiting
> for someone to implement it.

Well, I almost volunteer. But I would want some documentation on how to write a new type of matrix. Should I file a bug report on the subject?
Comment 4 Benoit Jacob 2010-10-12 13:23:21 UTC
Yes, good idea, it can only make it happen sooner.
Comment 5 Jey Kottalam 2012-03-06 21:06:22 UTC
This feature (RFP support) would be great, especially for large matrices and/or in combination with the new Intel MKL integration.
Comment 6 Christoph Hertzberg 2014-06-13 21:58:36 UTC
As an alternative to the RFP format, there was a suggestion to store triangular/symmetric matrices in a simple packet-aligned format, e.g., using 6 packets (of 2 doubles each) to store a 4x4 double matrix:

x00 x01 x02 x03
  0 x11 x12 x13
        x22 x23
          0 x33
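A sketch of the storage cost of this scheme, under the assumption that each row starts at the packet boundary at or before the diagonal (padding the leading slots with zeros, as drawn above) — the helper name and signature are hypothetical:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: count the packets needed to store the upper triangle
// of an n x n matrix in the packet-aligned format above. p is the packet
// size (2 for SSE doubles); n is assumed to be a multiple of p. Row r is
// padded on the left so that it starts at a packet boundary.
inline std::size_t alignedTriPackets(std::size_t n, std::size_t p) {
  std::size_t total = 0;
  for (std::size_t r = 0; r < n; ++r) {
    const std::size_t start = (r / p) * p;   // packet-aligned row start
    total += (n - start) / p;                // packets in row r
  }
  return total;
}
```

For n = 4, p = 2 this gives 2 + 2 + 1 + 1 = 6 packets, matching the diagram; with p = 1 (no padding) it degenerates to the dense triangular count 4 + 3 + 2 + 1 = 10.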
Comment 7 Gael Guennebaud 2014-06-16 11:23:29 UTC
Hm, this might be tricky to implement. I'd rather give up alignment and extend our product kernels (gemv and gemm) to support stride increments, to leverage fast products on a simple compact storage scheme.
Comment 8 Christoph Hertzberg 2014-11-04 00:45:14 UTC
An interesting observation after an IRC discussion I just had (this does not contribute to implementing this):
Assuming your vertical dimension is way above a page size and the OS is smart enough to allocate physical RAM only for actually accessed memory, then with a simple selfAdjointView<> you end up with only about one page "wasted" per column on average (maybe just half a page, depending on alignment). E.g., a 128k x 128k double matrix on a system with a 4kB page size would require just about 8.5GiB RAM, although it occupies 16GiB of virtual address space.
The same should essentially be true for cache line sizes and wasted cache space.
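The accounting above can be sketched with a small page-counting helper (hypothetical, for illustration only): it counts the distinct pages touched when only the lower triangle of a column-major matrix is accessed, assuming the column stride is a multiple of the page size.

```cpp
#include <cassert>
#include <cstddef>
#include <set>

// Hypothetical sketch: count distinct memory pages touched when only the
// lower triangle (rows j..n-1 of column j) of a column-major n x n matrix
// with elemSize-byte entries is accessed. Physical RAM used is roughly
// touchedPages * pageSize; the rest stays untouched virtual address space.
inline std::size_t touchedPages(std::size_t n, std::size_t elemSize,
                                std::size_t pageSize) {
  std::set<std::size_t> pages;
  for (std::size_t j = 0; j < n; ++j) {
    const std::size_t colBase = j * n * elemSize;      // column-major stride
    const std::size_t first = colBase + j * elemSize;  // first touched byte
    const std::size_t last = colBase + n * elemSize - 1;
    for (std::size_t p = first / pageSize; p <= last / pageSize; ++p)
      pages.insert(p);
  }
  return pages.size();
}
```

E.g., for a toy case of n = 8, 8-byte elements, and 16-byte pages, the lower triangle touches 20 of the 32 allocated pages; as n grows, the overhead over n^2/2 elements approaches one partially-touched page per column.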


Having a truly memory-saving symmetric/triangular storage is still interesting in the case of lots of small/medium-size matrices, or when interfacing with libraries. E.g., a full BLAS/LAPACK implementation will require packed storage.
Comment 9 Christoph Hertzberg 2018-08-20 13:16:49 UTC
Here is a pull request which adds basic support for the RFP format:
https://bitbucket.org/eigen/eigen/pull-requests/467
