Currently, many math functions return slightly different results depending on whether a coefficient is evaluated through the SIMD path or the scalar path. This includes exp, log, logistic_function, etc.
Since the SIMD implementations of those functions can also be called on scalar inputs, we could easily resolve this inconsistency by plugging the respective functor call into the generic SIMD path.
Shall we do that unconditionally or only if vectorization is enabled?
Created attachment 960 [details]
Patch to make the logistic function scalar-SIMD consistent
Here is a first patch for the logistic/sigmoid function.
See also: https://github.com/tensorflow/tensorflow/issues/33878
I agree, we should always call the same implementation. Having (even slightly) different results for the same input is not good.
For odd-sized inputs we should actually think about loading/storing partial packets (cf. Bug 692 comment 15), instead of doing a scalar loop at the end.
In Bug 1687 I suggested something like EIGEN_USE_SIMD_MATH_FUNCTIONS (but dropping EIGEN_FAST_MATH).
-- GitLab Migration Automatic Message --
This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity.
You can subscribe and participate further in the new issue via this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/1777.