Another step towards enabling high precision Real
EDIT: since everything works now, I will make this description a lot shorter and move the relevant parts directly into the documentation :) If you want I can paste it back (I have a backup of the original text ;), but it was mostly describing the status as of a month ago.
I have added high-precision support for the `Real` type based on the following types:
| type | bits | decimal places | notes |
|---|---|---|---|
| `float` | 32 | 6 | hardware accelerated |
| `double` | 64 | 15 | hardware accelerated |
| `long double` | 80 | 18 | hardware accelerated |
| `boost::multiprecision::float128` | 128 | 33 | depending on the processor type it can be hardware accelerated |
| `boost::multiprecision::mpfr` | Nbit | Nbit·log10(2) ≈ Nbit/3.32 | uses the external MPFR library |
| `boost::multiprecision::cpp_bin_float` | Nbit | Nbit·log10(2) ≈ Nbit/3.32 | uses Boost only, but is slower |
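The "decimal places" column corresponds to `std::numeric_limits<T>::digits10`. A minimal sanity check, assuming Boost.Multiprecision is available (the `float128` backend additionally needs libquadmath on GCC):

```cpp
// Sanity check of the "decimal places" column via std::numeric_limits;
// exact values may differ between platforms (e.g. long double on non-x86).
#include <boost/multiprecision/float128.hpp> // needs libquadmath on GCC
#include <iostream>
#include <limits>

int main()
{
	std::cout << std::numeric_limits<float>::digits10 << "\n";       // 6
	std::cout << std::numeric_limits<double>::digits10 << "\n";      // 15
	std::cout << std::numeric_limits<long double>::digits10 << "\n"; // 18 on x86
	std::cout << std::numeric_limits<boost::multiprecision::float128>::digits10 << "\n"; // 33
	return 0;
}
```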
It all works nicely in !383 (merged), which should be merged last. This merge request aims to minimally modify files that already exist in master and focuses on adding the new files necessary to make it all work, so the only file modified significantly is `lib/base/Math.hpp`; this is also part of the work on #97. There are a lot of commits here because it was a long journey, and for historical reasons I would rather not squash them. The commits can also be cherry-picked between here and minieigen-real (@bchareyre and @gladk, you have write access there): the files in the `lib/high-precision` and `py` directories there are exactly the same as here. Note that minieigen-real cannot be a separate library because binary compatibility is necessary for this to work.
During compilation you can provide the following extra arguments (which affect the resulting binaries); a sketch of how they could drive the backend selection follows this list:

- `REAL_PRECISION_BITS` or `REAL_DECIMAL_PLACES` (numbers according to the table above); they default to `double` precision if not specified.
- `ENABLE_MPFR`, in case you want the additional dependency and to use MPFR.
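For illustration only, a hypothetical, simplified sketch of how a compile-time define such as `REAL_DECIMAL_PLACES` could select the backing type; the actual selection logic lives in `lib/high-precision` and may differ (e.g. it can also pick `float128` or the MPFR backend):

```cpp
// Hypothetical, simplified sketch of selecting the Real backend from
// REAL_DECIMAL_PLACES; the real logic in lib/high-precision may differ.
#include <boost/multiprecision/cpp_bin_float.hpp>

#ifndef REAL_DECIMAL_PLACES
#define REAL_DECIMAL_PLACES 15 // default: double
#endif

#if REAL_DECIMAL_PLACES <= 6
using Real = float;
#elif REAL_DECIMAL_PLACES <= 15
using Real = double;
#elif REAL_DECIMAL_PLACES <= 18
using Real = long double;
#else
// arbitrary precision: the number of decimal digits is fixed at compile time
using Real = boost::multiprecision::number<
        boost::multiprecision::cpp_bin_float<REAL_DECIMAL_PLACES>>;
#endif
```

The define would be passed at configure time and forwarded into the preprocessor by the build system (I am only assuming that wiring here).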
This MR also adds vectorization (`make_SSE`) to the pipeline. It was accepted by Anton last month in !365 (merged). In fact, Anton, I have used your test scripts and extended them even further.
When you use high precision, `Vector3r` becomes, for example, `Vector3<boost::float128_t>`. This works great with Eigen and CGAL.
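As a rough illustration (not the actual YADE typedefs), assuming Eigen plus Boost.Multiprecision's Eigen interoperability header (`boost/multiprecision/eigen.hpp`, available in recent Boost releases):

```cpp
// Sketch: a fixed-size Eigen vector with a quad-precision scalar.
#include <Eigen/Dense>
#include <boost/multiprecision/float128.hpp>
#include <boost/multiprecision/eigen.hpp> // NumTraits for multiprecision scalars

using Real     = boost::multiprecision::float128;
using Vector3r = Eigen::Matrix<Real, 3, 1>;

int main()
{
	Vector3r a(Real(1), Real(2), Real(3));
	Vector3r b(Real(4), Real(5), Real(6));
	Real     d = a.dot(b);   // usual Eigen API, now at ~33 decimal digits
	Vector3r c = a.cross(b);
	(void)d;
	(void)c;
	return 0;
}
```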
To make it all work I need to modify lots of other files to call the `math::` functions correctly (this name was agreed upon). I have split these changes into many smaller merge requests for easier review; a short sketch of the idea follows.
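To give a feel for what those call-site changes look like, here is a hypothetical, simplified wrapper (the real one in `lib/high-precision` is more complete); the point is that `math::sqrt` resolves to the right overload for whatever `Real` is, instead of hard-coding `std::sqrt`:

```cpp
// Hypothetical, simplified illustration of the math:: wrapper idea.
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <cmath>

using Real = boost::multiprecision::cpp_bin_float_50;

namespace math {
// For fundamental types this forwards to std::; for multiprecision types
// the correct overload is found via argument-dependent lookup.
template <typename T> inline T sqrt(const T& a)
{
	using std::sqrt;
	return sqrt(a);
}
template <typename T> inline T pow(const T& a, const T& b)
{
	using std::pow;
	return pow(a, b);
}
} // namespace math

int main()
{
	Real r = math::sqrt(Real(2)); // call sites use math:: instead of std::
	Real p = math::pow(Real(2), Real(10));
	(void)r;
	(void)p;
	return 0;
}
```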