Bugzilla – Bug 564

maxCoeff() returns -nan instead of max, while maxCoeff(&maxRow, &maxCol) works

Last modified: 2016-01-28 22:10:28 UTC

I've got code that, due to what I think are errors elsewhere, ends up leaving some "-nan" values in a matrix I'm searching for max values in. But the two maxCoeff(...) calls return different values. I'm not sure which version of Eigen is installed (one of the 3.x releases, I think), since I didn't install it. When I call:

    double mymax;
    ArrayXXd smAry;
    ArrayXXd::Index maxRow, maxCol;
    mymax = smAry.maxCoeff();
    mymax = smAry.maxCoeff(&maxRow, &maxCol);

smAry looks like this (when output with cout << smAry):

       0    4    3 -nan
       4 -nan    5    3
       3    5    0    5
       0    3    5 -nan

Output in mymax from the first call: -nan
Output in mymax from the second call: 5

I agree that they should definitely return the same value, and we should specify how the reduction deals with NaNs. Keep in mind that "the biggest" does not suffice as a specification, since both (5 > -nan) and (-nan > 5) are false under IEEE 754. What's definitely wrong, though, is:

    Eigen::ArrayXd arr(6);
    arr << 2, 2, 0.0/0.0, 0.0/0.0, 1, 1;
    std::cout << arr.transpose() << "\nmaxCoeff() = " << arr.maxCoeff();

Output (at least when compiled with vectorization enabled):

    2 2 nan nan 1 1
    maxCoeff() = 1

The problem is that maxps and maxpd always propagate the source operand if at least one operand is NaN.

I don't really know how to fix that, except by documenting that the result of min/max is undefined in the presence of NaN. Vectorization is not the only problem here. For instance, look at this example:

    std::cout << std::max(std::max(2., nan), 1.0) << "\n";
    std::cout << std::max(std::max(nan, 2.), 1.0) << "\n";
    std::cout << std::max(1.0, std::max(nan, 2.)) << "\n";

The result is:

    2
    nan
    1

Well, it seems OK for std::max to return an arbitrary value, because it's only a binary operator, but in our case I admit that's not good. However, I don't see how to fix this without introducing a large overhead.

So std::max seems to propagate the first operand if one operand is NaN. If the SSE intrinsics for maxpX always propagate the same value, a solution would be to always use the current max values as the source operand and the newly read values as the destination operand. This would give defined behavior at the cost of a minor overhead (which could be disabled if some fast_math define is present). I'm not sure whether compilers are allowed to swap the operands of SSE intrinsics if they are "almost" commutative. Pseudo code for a VectorXd v (std::max can be replaced by the appropriate SSE intrinsics):

    Scalar maxval = v(0);
    for(int i=1; i<v.rows(); ++i) {
      // defined behavior, NaNs are ignored:
      maxval = std::max(v(i), maxval);
      // spurious behavior:
      // maxval = std::max(maxval, v(i));
    }
    return maxval;

I did some testing, and I must admit I was a bit too enthusiastic in my last post (it only works if the first entry/packet is not a NaN). Otherwise, a test that introduces overhead does in fact need to be implemented. Something like this (pseudo code):

    stable_max(a,b) = a>b || !(b==b) ? a : b;  // always propagates non-NaNs
    stable_max(a,b) = a>b || !(a==a) ? a : b;  // always propagates NaNs

So maybe documenting it as undefined if NaNs are present is fine (Karl might disagree there).

Yes, I was also thinking about documenting it as undefined, because the purely sequential approach you described would also kill performance with respect to pipelining, probably more than your stable_max would. One can also easily call vec.redux(stable_max) with one's favorite stable_max version, or can we find an elegant API to offer these variants? I also think that we really should add a convenient and easily accessible way to check for the presence of NaN, like adding:

    bool hasNaN() const { return !((this->array()==this->array()).all()); }

So, my biggest issue was more that it wasn't documented than that it didn't work, though I'd certainly prefer it to work as I'd expect. I'm not precisely sure where the overhead you're talking about for making a stable version comes from, but the code guru in my research group threw together a templated max function for sparse matrices for me, which uses the <limits> header to get more stable behavior:

    #include <limits>
    // (plus some other includes for the sample code, and Eigen)

    template<typename M>
    typename M::Scalar sparse_max(const M& m) {
      typedef typename M::Scalar        scalar;
      typedef typename M::Index         index;
      typedef typename M::InnerIterator iterator;
      scalar res = std::numeric_limits<scalar>::min();
      for(index k = 0; k < m.outerSize(); ++k)
        for(iterator it(m, k); it; ++it)
          res = std::max(res, it.value());
      return res;
    }

Perhaps using a limit like this would introduce only a very minimal overhead? This function might also be useful to include in general for your sparse matrices. It could also be an overload on the min/max functions that allows a "global minimum" for the first comparison; then make sure you are using the more stable version (with the NaN as second operand). Something like:

    maxCoeff(&maxRow, &maxCol, <scalar type (double, etc.)> <value max must be greater than>)

I realize it's nearly identical to (mat > <value>).any(), but still.

I also think it is important to specifically document this behavior, since there might be people who want to know whether a NaN is present, or what the minimum value is (there's not really an easy way right now without a self-defined redux), which seems like a reasonably common thing to want to check for.

We could add stable_max/stable_min methods, maybe with a template argument that decides whether NaNs shall be suppressed or propagated. Or directly add maxCoeff<Eigen::SuppressNaN>(), etc. As for hasNaN(), this would certainly come in handy from time to time, maybe along with an isFinite() method.

@Karl, comment 7: Basically, that was what I had in mind with comment 4 (I mixed up the operand order of max, though ...). Two minor problems: if your matrix consists only of NaNs or -Inf, numeric_limits::min() will still be returned; and it would require the redux function to have some kind of initial value (which again would introduce some overhead, especially for small matrices/vectors).

I would also say that I actually expect NaNs to propagate through max/min reductions (i.e., if a mistake happened once, I don't want it to be silently ignored).

Step 1, documentation: https://bitbucket.org/eigen/eigen/commits/cd15b09304ff/

    Changeset: cd15b09304ff
    User:      ggael
    Date:      2013-04-09 11:27:54
    Summary:   Bug 564: document the fact that minCoeff/maxCoeff members have undefined behavior if the matrix contains NaN.

Created attachment 327 [details]: Add hasNaN and isFinite members.

Here is a patch adding hasNaN and isFinite members. Probably not the most optimal implementation for native types, but it should not be too bad. Since we already had a beta, I'd like someone to approve it before pushing.

I think the patch is alright. Maybe some more tests could be added asserting that floating point operations on "sane values" lead to finite results? I've filed Bug 585, which suggests improving the all()/any() reductions. Furthermore, hasNaN could be optimized by using the _mm_cmpunord_pX intrinsic on two consecutive packets, thus halving the number of comparisons. But I don't think that's worth blocking 3.2 for.

Another thing: what shall hasNaN() and isFinite() return for types which do not support special values (such as integers)? Simply always return false or true, respectively? That would already be supported by the current implementation (not in the most efficient way; however, compilers are pretty good at optimizing integer code, so it should reduce to a NOP on most compilers).

I checked with GCC and clang, and indeed, for integers isFinite and hasNaN boil down to a NOP. Using _mm_cmpord_pX (not _mm_cmpunord_pX) is a nice trick; however, I'm afraid it is not easily integrable into our general framework.

https://bitbucket.org/eigen/eigen/commits/8ad4e281a3ce/

    Changeset: 8ad4e281a3ce
    User:      ggael
    Date:      2013-04-16 15:10:40
    Summary:   Bug 564: add hasNaN and isFinite members

The "stable" versions still remain -> for 3.3.

(In reply to comment #14)
> Using _mm_cmpord_pX (not _mm_cmpunord_pX) is a nice trick, however I'm afraid
> this is not easily integrable within our general framework.

Yes, I guess that would require the "meta package" approach which has been vaguely discussed for some time (is there a bz entry for that?). And I agree that it is not really worth the effort for now, since hasNaN will most likely hardly ever be performance critical, and even if it were, I think memory throughput is the limiting factor most of the time.

(In reply to comment #16)
> I think memory throughput is the limiting factor most of the time.

Very good point. Definitely not worth the effort!

For the record, isFinite has been renamed allFinite to avoid a naming collision with a coefficient-wise isFinite method.