This Bugzilla service is closed. All entries have been migrated to GitLab.
Bug 416 - Why can't I have a Row Major Column Vector
Summary: Why can't I have a Row Major Column Vector
Alias: None
Product: Eigen
Classification: Unclassified
Component: Core - general
Version: 3.3 (current stable)
Hardware: All
OS: All
Importance: Normal Internal Design
Assignee: Nobody
Keywords: Documentation
Duplicates: 1722
Depends on: ExprEval
Blocks: 4.0
Reported: 2012-02-08 19:23 UTC by brian
Modified: 2019-12-04 11:27 UTC
CC: 6 users


Description brian 2012-02-08 19:23:59 UTC
Eigen fails to compile the following program

#include <Eigen/Eigen>

int main(int, char**) {
  Eigen::Array<double, Eigen::Dynamic, 1, Eigen::RowMajor> x;
  return 0;
}

with output

In file included from [...]/eigen3/Eigen/Core:293:0,
                 from [...]/eigen3/Eigen/Dense:1,
                 from [...]/eigen3/Eigen/Eigen:1,
                 from test.cpp:1:
[...]/eigen3/Eigen/src/Core/PlainObjectBase.h: In static member function ‘static void Eigen::PlainObjectBase<Derived>::_check_template_params() [with Derived = Eigen::Array<double, -0x00000000000000001, 1, 1>]’:
[...]/eigen3/Eigen/src/Core/Array.h:125:7:   instantiated from ‘Eigen::Array<_Scalar, _Rows, _Cols, _Options, _MaxRows, _MaxCols>::Array() [with _Scalar = double, int _Rows = -0x00000000000000001, int _Cols = 1, int _Options = 1, int _MaxRows = -0x00000000000000001, int _MaxCols = 1]’
test.cpp:4:60:   instantiated from here
[...]/eigen3/Eigen/src/Core/PlainObjectBase.h:631:7: error: ‘INVALID_MATRIX_TEMPLATE_PARAMETERS’ is not a member of ‘Eigen::internal::static_assertion<false>’

In my opinion, RowMajor or ColMajor is meaningless for (row- and column-) vectors, but it should not be an error. I believe that the assertion

   EIGEN_STATIC_ASSERT((EIGEN_IMPLIES(MaxRowsAtCompileTime==1 && MaxColsAtCompileTime!=1, (Options&RowMajor)==RowMajor)
                        && EIGEN_IMPLIES(MaxColsAtCompileTime==1 && MaxRowsAtCompileTime!=1, (Options&RowMajor)==0)
                        && ((RowsAtCompileTime == Dynamic) || (RowsAtCompileTime >= 0))
                        && ((ColsAtCompileTime == Dynamic) || (ColsAtCompileTime >= 0))
                        && ((MaxRowsAtCompileTime == Dynamic) || (MaxRowsAtCompileTime >= 0))
                        && ((MaxColsAtCompileTime == Dynamic) || (MaxColsAtCompileTime >= 0))
                        && (MaxRowsAtCompileTime == RowsAtCompileTime || RowsAtCompileTime==Dynamic)
                        && (MaxColsAtCompileTime == ColsAtCompileTime || ColsAtCompileTime==Dynamic)
                        && (Options & (DontAlign|RowMajor)) == Options),
                        INVALID_MATRIX_TEMPLATE_PARAMETERS)

should not apply to this case. I do not know the exact intent, but I believe that the first two lines should simply be removed.

Please correct me if I am wrong, or if something would not work in this case.

kind regards,

Comment 1 Benoit Jacob 2012-02-08 19:40:24 UTC
This is intentional and was done to limit the number of redundant types. Redundant types mean more template instantiations and more binary code bloat.

In an ideal world we'd completely decouple API types from implementation types, but we're not there.
Comment 2 Christoph Hertzberg 2012-02-08 22:26:27 UTC
I stumbled on the same problem once; I accepted it at the time and added a ( ? : ) construct to my template parameters.

However, while I see the point, I wouldn't 100% agree that this is necessary.
Have you ever actually experienced significant code bloat? Otherwise this restriction causes more trouble than it solves ...
I guess for simple fixed-size operations the code is inlined anyway, and I think it is rather rare that someone implements/uses the same (non-inlined) function for both RowMajor and ColMajor vectors.

If code bloat is an issue somewhere, wouldn't it be possible to make some (always inlined) wrapper function, which calls the actual implementation?
Maybe at some point the infamous ExpressionEvaluator (bug 99) will be able to solve this anyways?
Comment 3 Gael Guennebaud 2014-05-28 14:50:48 UTC
Finally, I'm ok to reconsider this one once evaluators are there, but let's not delay 3.3 for that.

Actually, the main motivation was not binary code bloat but rather compilation-time and compiler memory consumption.
Comment 4 Christoph Hertzberg 2014-05-28 16:24:37 UTC
(In reply to comment #3)
> Actually, the main motivation was not binary code bloat but rather
> compilation-time and compiler memory consumption.

Wouldn't this only be the case if someone extensively uses both row-major and column-major vectors of the same size? We could simply make a "good practice" section on our row/column-major page and remove this assertion. The "default user" won't be affected at all, as long as the Flags parameter is left at default.

The advantage would be to simplify things for people doing lots of template-dependent things, such as:
  template<int cols>
  struct Example {
    typedef Eigen::Matrix<double, Eigen::Dynamic, cols, Eigen::RowMajor> type;
  };

if they don't mind having RowMajor column vectors.

It would also simplify the proposed reshape operation (bug 437), allowing
  Matrix4d A;
to behave more as expected.
Comment 5 Gael Guennebaud 2015-03-28 11:20:20 UTC
Someone else has been annoyed by this restriction:

If we allow row-major column-vectors then we also have to decide about the semantics of inner/outer strides and inner/outer sizes for these kinds of objects. We have two options:

1 - For compile-time vectors, the storage-order layout is completely ignored, making the outer stride/size meaningless for these objects; they would return size() and 1, respectively.

2 - Make them behave as runtime vectors, e.g., like a Matrix<double,Dynamic,Dynamic,RowMajor>(size,1);

I guess that both options might surprise some users, so from an Eigen implementation point of view I'd go for option 1, which should be easier to manage with respect to the implementation of some optimized vector paths.
Comment 6 Christoph Hertzberg 2015-03-28 12:13:14 UTC
I would generally suggest 
 innerStride = RowMajor ? columnStride : rowStride
 outerStride = RowMajor ? rowStride : columnStride
 rowStride = RowMajor ? cols() : 1
 colStride = RowMajor ? 1 : rows()

That would result in outerStride==size() for compile-time vectors stored in natural ordering and outerStride==1 when stored in "unnatural" ordering, i.e., we'd always fulfill (2), and also (1) as long as the suggested storage order is used.

If we allow innerStride > outerStride, we could (at the cost of performance) allow Ref<> to reference transposed matrices without copying, provided the innerStride is Dynamic at compile time (cf. bug 884, comment 7).
Comment 7 Gael Guennebaud 2015-04-24 16:32:04 UTC
Your rule essentially suggests option 2, especially when trying to generalize it to cases in which the coefficients are not stored sequentially in memory. For instance, let's say someone wants to reference a row of a 4x6 column-major matrix as a column-major object. If we extend your logic, then we should define:

rowStride = 6
colStride = 4


innerStride = 6
outerStride = 4

and then the only option is to define:

innerSize = 1
outerSize = 6

On the other hand, I would rather suggest to enforce outerSize==1 for compile-time vectors as for 1D objects, outer* should really not matter.

Then, the outer-stride can be chosen arbitrarily to what is the most convenient to compute, and the inner-stride will be either 1 or Dynamic.
Comment 8 Christoph Hertzberg 2015-04-24 19:04:10 UTC
(In reply to Gael Guennebaud from comment #7)
> [...] let's say someone want to reference a row of a 4x6
> column-major matrix as a column-major object. If we extend your logic, then
> we should define:

Essentially, what I wanted to suggest is that the following equations should always hold:
  rowStride == &A.coeffRef(i+1,j) - &A.coeffRef(i,j)
  colStride == &A.coeffRef(i,j+1) - &A.coeffRef(i,j)

That would imply that any block of a dense object inherits the strides of its base object, i.e., in your example rowStride should be 1.
This also means the following would all be the same
  A.topRows(1); A.topRows<1>(); A.row(0);
with the (only) exception that RowsAtCompileTime of the first is Dynamic, but 1 for the others. 
Then, whenever we have special code paths for RowsAtCompileTime==1, we ignore the RowMajorBit and RowStride but check for ColStride(AtCompileTime)==1 to decide whether we can vectorize efficiently (analogously for ColsAtCompileTime==1, of course).
Comment 9 paul.vernaza 2016-03-08 02:07:41 UTC

I think I was just bitten by a version of this issue and thought I would point out how unintuitive Eigen's behavior can be when maps are involved. Suppose I have an external matrix in row-major order, which I want to map as an Eigen matrix. The problem is that, unexpectedly, the Stride object I should use to map this matrix depends on whether the mapped matrix type has a static or dynamic number of rows, and on how many rows it has. This is because, in the special case that the number of rows is static and equal to 1, Eigen unexpectedly assumes row-major storage order by default, which switches the semantics of inner and outer strides in the map.

See the code below. To summarize, this code works as expected for all cases except NROWS = 1, ROWTYPE = NROWS. I should note that if you specify ColMajor order in the Matrix template options, Eigen appropriately fails to compile. It's just the default behavior that I find counterintuitive.

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

#define NROWS 1
#define NCOLS 2
#define NBUF 10

// this compiles and works as expected for any value of NROWS
#define ROWTYPE Dynamic
// this compiles, but does not work as expected for NROWS = 1
// #define ROWTYPE NROWS

typedef Stride<1, Dynamic> MyStride;
typedef Matrix<double, ROWTYPE, Dynamic> MyMat;

int main(int argc, char* argv[]) {
  double raw[NBUF]; for (int i = 0; i < NBUF; i++) raw[i] = i;
  Map<MyMat, Unaligned, MyStride> map(raw, NROWS, NCOLS, MyStride(1, NCOLS));
  std::cout << map << "\n";
  return 0;
}
Comment 10 Gael Guennebaud 2016-03-08 11:02:02 UTC
In my opinion, it works as expected in all cases. The case "NROWS = 1, ROWTYPE = NROWS" is equivalent to the following (we really cannot distinguish the two cases):

Map<RowVector2d, 0, InnerStride<2> > map(raw);

Once written like this, it's clear that map should be equal to [0 2].

Enabling column-major row-vectors with the logic suggested by Christoph would work for you:

Map<Matrix<double,1,2,ColMajor>, 0, InnerStride<2> > map(raw);

Here we would have map == [0 1]
Comment 11 Gael Guennebaud 2018-10-09 20:26:14 UTC
Getting back to this issue: since changing A.row(i).innerSize() would break existing code, the solution suggested in comments 6-8 is not really an option. And to be honest, if I get a compile-time vector, I don't want to have to deal with any outer-stuff, nor have to reason about 2D indices. So I'd rather suggest ignoring RowMajor/ColMajor for compile-time vectors, even though this means that A.middleRows(i,1) would look different than A.middleRows<1>(i).
Comment 12 Gael Guennebaud 2018-10-15 23:15:21 UTC
Well, I don't know what to think about this issue, as I got tricked by it myself in bug 1612.

Perhaps we could keep the best of both worlds by implementing Christoph's suggestion and adding 1D, vector-specific aliases for when we know at code-writing time that we are dealing with vectors:

- we already have operator()(index)
- we already have size()
- so we're only missing something like "IncrAtCompileTime" and "incr()" that would be aliases for the respective inner or outer versions depending on the expression.

There is still the problem of changing the behavior of A.row(i).innerSize()...
Comment 13 Yuanchen Zhu 2019-02-03 13:43:23 UTC
I also find it unintuitive that a submatrix of a matrix can change its majorness depending on whether the submatrix is a row or a column. Say M is a column-major matrix. Then M.row(i) is conceptually row-major. This becomes an issue once you start to reshape the resulting 1D quantity; e.g., you might want to treat the pixels of a 2D image as a row of a big 3-row matrix. At the end, you want to unpack a row back into a 2D matrix. Naturally, when you reshape back, you should preserve the majorness. But the row is actually row-major, so if you don't change the majorness when you reshape, you get a row-major matrix at the end.

To summarize: having submatrix operations change the majorness of the resulting submatrix depending on its shape is a library wart. Can we change this behavior? Essentially, once you choose a majorness convention for your application, no matrix operation should change the majorness of the resulting matrix unless EXPLICITLY requested by the user, e.g., for efficiency.

In terms of solution, 

- We have already nailed the semantics of innerSize/outerSize for a 2D matrix. 

- If semantically a column vector or a row vector is a 2D matrix, then it really doesn't make sense to change the semantics of innerSize/outerSize of column/row vectors to be (size(), 1). 

- If semantically a column vector or a row vector is not a special case of a 2D matrix, then it shouldn't even have innerSize/outerSize defined. For Eigen, I believe this is not the case: a column/row vector IS still a matrix (just with rows or cols known to be 1 at compile time); otherwise it doesn't even make sense to talk about row vs. column vectors.

I think the RIGHT THING TO DO is to eliminate the SPECIAL definition of innerSize/outerSize of vectors, which is different from that of a 2D matrix at the moment, and then go from there. I think this is close to Comment #8.

So yeah, A.row(i).innerSize() should really be just 1, as A.row(i) is a 1xN column-major matrix.
Comment 14 Gael Guennebaud 2019-02-08 13:59:05 UTC
I started to implement this and this is a nightmare...

There is also one problem with:

  VectorXd v(n); // an Nx1 column-major column-vector
  auto z = v.row(i).segment<0>(j).eval();

Currently, z is, as expected, a 1x0 object.

With the suggested changes, we cannot properly decide. This is because:

  auto x = v.row(i);

would now be a 1x1 column-major "something". The information that we took a "row" is lost and cannot be recovered by looking at the compile-time sizes. Therefore in "x.segment<0>(j)" the best we can do is:

1 - fail to compile

2 - look at the storage order of x, wrongly assume that we have a column-vector, and wrongly return a 0x1 object.

1x1 matrices have always been ambiguous, but this is much worse now.

We would have the same issue with:

  auto z = v.row(i).segment(j,m).eval();

If m==1, the choice between 1xDynamic and Dynamicx1 has no effect on correctness, but if m==0...
Comment 15 Christoph Hertzberg 2019-02-08 17:25:34 UTC
What if we make `v.row(i)` equivalent to `v.block<1,v::ColsAtCompileTime>(i,0)` and therefore let it stay ColumnMajor?

Then segment/head/tail should always check:
* Are both dimensions !=1    --> Fail
* Is only one dimension == 1 --> Extract segment in that direction
* Are both dimensions == 1   --> Extract in direction of Majorness
(these are compile time dimensions, of course)

That would of course still make your example return a 0x1 object. But if someone really wants a 1x0 object one could write:

Unfortunately, what would also get more complicated (and could actually break API) is that Matrix::row would return a ColumnMajor block with InnerStride==1 and OuterStride==Dynamic, which will likely break a lot of internal logic ...

And as already said, it would break A.row(i).innerSize(), but make it more consistent with A.middleRows<R>(i).innerSize(), i.e., no special case for R==1.
Comment 16 Christoph Hertzberg 2019-02-08 17:36:07 UTC
(In reply to Gael Guennebaud from comment #14)
>   auto z = v.row(i).segment<0>(j).eval();
> Currently, z is, as expected, a 1x0 object.

Actually, it already is a 0x1 object:
Comment 17 Gael Guennebaud 2019-02-08 20:39:34 UTC
(In reply to Christoph Hertzberg from comment #16)
> Actually, it already is a 0x1 object:

I did not expect that, but you're right. Actually, .row()/.col() have no means to imply a storage order: it is inferred from the dimension being equal to 1. So, same issue. I'm tempted to add a static assert when calling tail/segment/head on a 1x1 expression with a compile-time segment size other than 1.

(In reply to Christoph Hertzberg from comment #15)
> What if we make `v.row(i)` equivalent to
> `v.block<1,v::ColsAtCompileTime>(i,0)` and therefore let it stay ColumnMajor?

This is already the case, except that it sets the "InnerPanel" parameter if RowMajor.

> Unfortunately, what also would get more complicated (and actually could
> break API) is that Matrix::row would return a ColumnMajor block with
> InnerStride==1 and OuterStride==Dynamic, which will likely break a lot of
> internal logic ...

That's why it's a nightmare, and yes this will severely break any code relying on the fact that for a compile-time vector, inner-stride == increment.

Even only allowing:


requires a severe rewrite of our evaluation logic, with special paths for compile-time vectors. This won't immediately break existing code, but generic functions relying on innerSize/innerStride for compile-time vectors will have to be updated to support this new possibility.
Comment 18 Gael Guennebaud 2019-02-10 14:31:58 UTC
After-all, looks like we cannot do much about it before we decide to launch a v4 of Eigen.

If you're curious I can share a draft of the changes I started to make to support this... The current diff has more than 400 modified lines, and reductions/products are still completely broken.
Comment 19 Christoph Hertzberg 2019-06-13 09:16:27 UTC
*** Bug 1722 has been marked as a duplicate of this bug. ***
Comment 20 Nobody 2019-12-04 11:27:15 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to GitLab and has been closed from further activity.

You can subscribe and participate further through the new bug via this link to our GitLab instance:
