| Summary: | InnerIterator cannot be assigned, copy- or default-constructed |
|---|---|
| Product: | Eigen |
| Reporter: | Clemens Hofreither <clemens.hofreither> |
| Component: | Sparse |
| Assignee: | Nobody <eigen.nobody> |
| Status: | RESOLVED FIXED |
| Severity: | enhancement |
| CC: | desire.nuentsa_wakam, gael.guennebaud |
| Priority: | Normal |
| Version: | unspecified |
| Hardware: | All |
| OS: | All |
| Whiteboard: | |
| Bug Depends on: | |
| Bug Blocks: | 558 |
| Attachments: | attachment 315 (A patch which fixes all described issues), attachment 362 (A possible patch) |
Description
Clemens Hofreither 2013-02-21 15:07:44 UTC
Created attachment 315 [details]
A patch which fixes all described issues.
This patch implements my proposed changes.
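To make the request concrete, here is a minimal sketch (not part of the report; matrix size, values, and variable names are illustrative) of the pattern that a default constructor, copy constructor, and assignment operator for InnerIterator enable, namely declaring an iterator first and binding it to a matrix later. It only compiles with the attached patch or the change referenced at the end of this report applied:

```cpp
#include <Eigen/Sparse>

int main() {
  Eigen::SparseMatrix<double> mat(10, 10);   // column-major by default
  mat.insert(2, 3) = 1.0;                    // one non-zero in column 3
  mat.makeCompressed();

  // Declare now, bind later -- this is what the requested default
  // constructor and assignment operator make possible.
  Eigen::SparseMatrix<double>::InnerIterator it;
  it = Eigen::SparseMatrix<double>::InnerIterator(mat, 3);

  Eigen::SparseMatrix<double>::InnerIterator copy(it);   // copy construction
  for (; copy; ++copy) {
    // copy.row(), copy.col(), copy.value() as usual
  }
  return 0;
}
```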
Good stuff; at least the default constructor is needed. What do you think about this, Gael?

I'm not sure it's a good idea to store a vector of iterators: InnerIterators are quite heavy. Could you elaborate on your use case? Maybe a better alternative would be to add the possibility to increment an InnerIterator by an arbitrary number. This way, you would only have to store a vector of indices.

My use case is sound. I have a tensor grid of statically known but arbitrarily large tensor dimension d. To each tensor direction i = 1, ..., d I have an associated sparse matrix. I want to compute an output matrix whose rows are, roughly speaking, (sparse) tensor products of the sparse rows of the input matrices. The way I do this (after applying the above patch to Eigen) is by storing a statically sized array of d InnerIterators; each one initially points to the first non-zero entry in a row of the associated input matrix. I then advance them in lexicographic order, i.e., I first advance the first iterator; if it is exhausted, I reset it and advance the next one, and so on (a sketch of this scheme is given at the end of this report). This works very nicely with the above patch. I ended up not using std::vectors of InnerIterators, but I see no reason not to allow them. I see no problems with the patch since it doesn't interfere with any previous use cases and enables new ones. I don't know what you mean by "increment an InnerIterator by an arbitrary number".

OK, thanks for the explanations. After all, why not. Just be aware that such iterators are quite heavy (32 B), while their essential information (the number of non-zeros that have been processed) fits in an integer (4 B). So we could also offer an API like:

```cpp
InnerIterator it(mat, j);
// ...
int p = it.pos();   // return the position of the iterator

InnerIterator it(mat, j);
it += p;            // jump to the previous position
```

I admit it's a bit less convenient to write in your case because you have to recreate an iterator every time, but memory-wise it's much more satisfactory because you avoid storing redundant information. Nevertheless, I guess that both features can coexist.

In my case I'm not worried about the storage requirements, since I keep this data only on the stack for the duration of the algorithm described, which is only a small part of my overall program. Also, d will typically be small, say <= 5, so the overhead is negligible. The only small optimization in my use case would be a method which resets the iterator to the start of the current row, since that would save recreating a new iterator. But I don't know whether this can be implemented without adding additional data members to the iterator. Anyway, the algorithm is not a bottleneck in my code, so it's not very important to me.

Created attachment 362 [details]
A possible patch

Finally, I'm not convinced we want them to be copyable. Just for the record, I attached a possible patch. Let's postpone the resolution of this discussion.
As a compromise, I eventually implemented this proposal for sparse-storage types (not for arbitrary expressions): https://bitbucket.org/eigen/eigen/commits/c00b92176825/

Summary: Bug 557: make InnerIterator of sparse storage types more versatile by adding default-ctor, copy-ctor/assignment

-- GitLab Migration Automatic Message --

This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further in the new bug through this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/557.
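For illustration, here is a minimal sketch of the use case described above, written against the committed change (the dimension d, the matrices, and all names are illustrative, not taken from the report): a statically sized array of InnerIterators, one per tensor direction, advanced in lexicographic (odometer) order. Default construction is what makes the std::array possible, and copy assignment is used to bind and reset each iterator.

```cpp
#include <Eigen/Sparse>
#include <array>
#include <iostream>

int main() {
  constexpr int d = 3;                        // tensor dimension (small, statically known)
  using SpMat = Eigen::SparseMatrix<double>;
  using Iter  = SpMat::InnerIterator;

  // One sparse matrix per tensor direction; contents are just an example.
  std::array<SpMat, d> mats;
  for (int i = 0; i < d; ++i) {
    mats[i].resize(4, 4);
    mats[i].insert(0, 1) = 1.0 + i;
    mats[i].insert(2, 1) = 2.0 + i;
    mats[i].makeCompressed();
  }

  const int col = 1;  // iterate over inner vector (column) 1 of every matrix

  // Requires the default constructor (for the array) and assignment (to bind).
  std::array<Iter, d> its;
  for (int i = 0; i < d; ++i) its[i] = Iter(mats[i], col);

  // Advance in lexicographic (odometer) order: bump the first iterator; when it
  // is exhausted, reset it and carry over to the next one, and so on.
  bool done = false;
  while (!done) {
    double prod = 1.0;
    for (int i = 0; i < d; ++i) prod *= its[i].value();
    std::cout << prod << '\n';                // use the current combination

    int i = 0;
    for (; i < d; ++i) {
      ++its[i];
      if (its[i]) break;                      // this iterator still has entries
      its[i] = Iter(mats[i], col);            // exhausted: reset and carry
    }
    done = (i == d);
  }
  return 0;
}
```

The reset step simply reassigns a freshly constructed iterator, which is the recreation cost discussed in the thread; a hypothetical "reset to start of the current inner vector" method would avoid it, but was not part of the implemented change.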