Floating point underflow exception in tolerance computation
Submitted by Anders Sundman
Assigned to Nobody
Link to original bugzilla bug (#1528)
Version: 3.4 (development)
Operating system: Linux
Description
In src/Cholesky/LDLT.h there is the line:
RealScalar tolerance = RealScalar(1) / NumTraits<RealScalar>::highest();
This division underflows: 1 / highest() is smaller than the smallest normalized value of RealScalar, so the result is subnormal and the CPU raises the floating point underflow exception. Applications normally leave this exception masked, so it is silently ignored. But we run with:
feenableexcept(FE_DIVBYZERO | FE_OVERFLOW | FE_INVALID | FE_UNDERFLOW);
Using min() (the smallest positive normalized value, which avoids the division and its subnormal result entirely) works fine for us:
RealScalar tolerance = std::numeric_limits<RealScalar>::min();