The following code crashes at the line "C = A * B" (compiled with g++ 4.9.0):

-- begin code
#include <Eigen/Sparse>
#include <Eigen/Dense>

using namespace Eigen;

int main(int argc, char** argv)
{
  Matrix<double, Dynamic, 1> A(3, 1), C;
  SparseMatrix<double> B(1, 1);
  A << 1, 1, 1;
  B.insert(0, 0) = 2;
  C = A * B; // <-- crash here
  return 0;
}
-- end code

The issue seems to be that A is declared with one fixed column. If I declare the number of columns as Dynamic instead, it works.

I can confirm this. The problem seems to be that SparseDenseOuterProduct is reused for Dense*Sparse products (via a Transpose flag) but does not work properly.

I played with it a little more, and the issue seems to be in

SparseMatrixBase<SparseDenseOuterProduct<SparseMatrix, Vector, true> >::evalTo(MatrixBase<Vector>&)

It writes into the first row of C, which does not exist. The thing is that in this case the storage orders of C and of the SparseDenseOuterProduct result differ: C is column-major while the SparseDenseOuterProduct is row-major (C = A*B is evaluated as C' = B'*A'). Perhaps the solution will have to detect this case and adjust the call to dst.coeffRef in evalTo accordingly (SparseMatrixBase::assignGeneric has the right code in spirit). I did this and it seems to solve my problem; I do not know whether this solution has other issues...

I'm afraid this is only part of the problem. Another thing that fails:

Vector3d A(1, 2, 3);
SparseMatrix<double> B(1, 3), S;
B.insert(0, 0) = 2;
B.insert(0, 2) = 4;
S = A * B;          // incorrect, only the first column is computed
Matrix3d D = A * B; // only the first column, and transposed
std::cout << "Sparse Result:\n" << S.toDense()
          << "\nDense result:\n" << D
          << "\nExpected result:\n" << A * B.toDense();

In the case that A*B is not square, invalid writes occur -- which is what caused your original example to crash.

Ouch! You are right. It seems that SparseDenseOuterProduct::InnerIterator is badly broken. Until you come up with a more elegant solution that fits the rest of Eigen, I fixed it with two InnerIterator classes: one that is more or less the way it is now, handling the case in which the sparse matrix is concentrated in one outer index, and another handling the case in which the sparse matrix is spread over several outer indices. It seems to work for all the cases I could come up with for the destination (dense, sparse row-major, and sparse column-major), the position of the sparse argument (first or second), and the storage order of the sparse argument (row/column-major). I also changed the meaning of the bool argument of SparseDenseOuterProduct from Transpose to SparseFirst.

We'd appreciate it if you sent us a patch of your fix. And even better, if you could also turn your test cases into a unit test!

Created attachment 476 [details]
patch file

A proposed fix for bug 838, as well as a unit test for the SparseDenseOuterProduct class.

Thank you for the patch, but there must be a much simpler fix avoiding the need for a new iterator. I'm working on it.

https://bitbucket.org/eigen/eigen/commits/9bb4eb34e080/
Summary: Fix bug 838: fix dense * sparse and sparse * dense outer products

Backport: https://bitbucket.org/eigen/eigen/commits/f10b330930f1/

Related fix: https://bitbucket.org/eigen/eigen/commits/4448f103731e/
Summary: Fix inner iterator on an outer-vector

Great, assigning to dense matrices works correctly now! The only remaining oddity I found is that I can assign a ColVector*Sparse product to a sparse matrix, but not a Sparse*RowVector product (independent of the storage order of the sparse matrix):

Vector3d A(1, 2, 3);
RowVector3d At(1, 2, 3);
SparseMatrix<double> B(1, 3), Bt(3, 1), S;
B.insert(0, 0) = 2;
B.insert(0, 2) = 4;
Bt.insert(0, 0) = 2;
Bt.insert(2, 0) = 4;
S = A * B;   // works now
S = Bt * At; // does not compile, also not if S is RowMajor

Thanks, I thought this was covered by:

VERIFY_IS_APPROX( m4=m2.col(c)*refMat3.col(c1).transpose(), refMat4=refMat2.col(c)*refMat3.col(c1).transpose());

but it turns out this hid another shortcoming:

https://bitbucket.org/eigen/eigen/commits/69a38db18f39/
Changeset: 69a38db18f39
User: ggael
Date: 2014-07-11 17:15:26
Summary: Fix bug 838: detect outer products from either the lhs or rhs

3.2: https://bitbucket.org/eigen/eigen/commits/5e0aef63d903/

That fix "works", but it fills up the sparse matrix completely, i.e. it returns a dense matrix disguised as sparse. --code begin #include <Eigen/Sparse> #include <Eigen/Dense> #include <iostream> using namespace Eigen; using namespace std; int main(int argc, char** argv) { SparseMatrix<double> S1; SparseMatrix<double, RowMajorBit> S2; Matrix<double, Dynamic, 1> A(3); SparseMatrix<double> B(1,3); A << 1, 2, 3; B.insert(0, 0) = 2; B.insert(0, 2) = 2; S1 = A * B; // produces a sparse matrix with 9 filled-up entries S2 = A * B; // produces a sparse matrix with 9 filled-up entries cout << "sparse1 = [" << S1 << "]" << endl; cout << "sparse2 = [" << S2 << "]" << endl; return 0; } -- code end -- output begin sparse1 = [Nonzero entries: (2,0) (4,1) (6,2) (0,0) (0,1) (0,2) (2,0) (4,1) (6,2) Outer pointers: 0 3 6 $ 2 0 2 4 0 4 6 0 6 ] sparse2 = [Nonzero entries: (2,0) (0,1) (2,2) (4,0) (0,1) (4,2) (6,0) (0,1) (6,2) Outer pointers: 0 3 6 $ 2 0 2 4 0 4 6 0 6 ] -- output end

I'm re-opening this but classifying it as "Optimization". I assume the evaluators will re-implement this anyway?

yes, this is gonna be reimplemented in the evaluators. Actually, that will be the purpose of the next commit in the eval branch.

Partial fix and unit test to prevent further regressions:

https://bitbucket.org/eigen/eigen/commits/0c883fcd96da/
Changeset: 0c883fcd96da
User: ggael
Date: 2014-07-16 17:00:54
Summary: Bug 838: add unit test for fill-in in sparse outer product and fix abusive fill-in.

The fix is partial because, performance-wise, when taking a row of a column-major sparse matrix we iterate many times over the sparse row, which is extremely inefficient. Currently this is difficult to detect because a row vector *must* be row-major even though the actual storage is column-major. The user can work around it by calling middleRows(i,1) instead of row(i). On our side we can either:
1 - exploit CoeffReadCost to evaluate the expensive operand only once
2 - allow rows of sparse matrices to appear as column-major: this is tricky because traits<Block<XprType, ...> > is common for all kinds of "XprType"
Well, actually, we have to do *both* 1 and 2, since 1 is more general and 2 might be needed to speed up other expressions.

It seems to be fine now. The only thing that worried me at first sight is that it relies on the dense matrix having the right storage order, but that case seems to be handled because (as you mentioned) there is, for example, no row vector with column-major storage order. How to define an efficient outer iterator when all you have is an InnerIterator is a much bigger issue... Just a question: why not have an explicit OuterIterator? Of course, this would require a major addition, but when the structure is known (as in the SparseMatrix or SparseDenseOuterProduct cases) one can implement it much more efficiently than the current 'scan the inner index linearly' approach.

-- GitLab Migration Automatic Message --

This bug has been migrated to gitlab.com's GitLab instance and has been closed to further activity. You can subscribe and participate further through the new bug via this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/838.