Commit 66f07efe authored by Rasmus Munk Larsen

Revert the specialization for scalar_logistic_op<float> introduced in:

https://bitbucket.org/eigen/eigen/commits/77b447c24e3344e43ff64eb932d4bb35a2db01ce


While this specialization provided a 50% speedup on Haswell+ processors, its large relative error outside [-18, 18] causes problems, e.g., when computing gradients of activation functions such as softplus in neural networks.
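A minimal sketch of why relative error in the tails matters (illustrative only; `logistic_clamped` is a hypothetical stand-in for an approximation that degrades outside [-18, 18], not the reverted Eigen code): the gradient of softplus(x) = log(1 + exp(x)) is the logistic function itself, so a value that is accurate in absolute terms but off by orders of magnitude in relative terms corrupts gradient-based training for strongly negative inputs.

```cpp
#include <cmath>
#include <cstdio>

// Accurate logistic: sigma(x) = 1 / (1 + exp(-x)).
float logistic_accurate(float x) {
  return 1.0f / (1.0f + std::exp(-x));
}

// Hypothetical approximation that clamps inputs to [-18, 18];
// a stand-in for any method whose relative error blows up
// outside that interval.
float logistic_clamped(float x) {
  float xc = std::fmax(-18.0f, std::fmin(18.0f, x));
  return 1.0f / (1.0f + std::exp(-xc));
}

int main() {
  // Gradient of softplus(x) = log(1 + exp(x)) is logistic(x).
  // For x = -30 the true gradient is ~9.4e-14: tiny, but nonzero.
  float x = -30.0f;
  std::printf("accurate: %g\n", logistic_accurate(x));  // ~9.4e-14
  std::printf("clamped : %g\n", logistic_clamped(x));   // ~1.5e-08
  // The absolute error is negligible, but the relative error is
  // several orders of magnitude, which is what causes trouble when
  // such values enter the chain rule during backpropagation.
  return 0;
}
```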
parent 3b15373b