In src/Cholesky/LDLT.h there is the line:
RealScalar tolerance = RealScalar(1) / NumTraits<RealScalar>::highest();
This causes a floating-point underflow exception on the CPU. Applications normally do not enable this exception, so it is silently ignored. But we run with:
feenableexcept(FE_DIVBYZERO | FE_OVERFLOW | FE_INVALID | FE_UNDERFLOW);
Using min() seems to work fine for us:
RealScalar tolerance = std::numeric_limits<RealScalar>::min();
I think that change should be fine. std::numeric_limits::min() is also used in many places in the SVD and EigenSolver modules.
Unfortunately the tests fail with the min() change, since the <cmath> include has a min() define that leaks into this file when it is used in the tests. So some reordering and adding of includes in some test files is probably also required.
Created attachment 835 [details]
Top notch hackery ;)
Seems to work; tests pass.
Created attachment 836 [details]
same patch with comment cleanup
add comment about macro expansion
remove comments about historic implementations
-- GitLab Migration Automatic Message --
This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity.
You can subscribe and participate further through the new issue via this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/1528.