Assuming these reductions are applied to the result of SSE comparisons, it is most likely faster to bit_and/bit_or several consecutive results, then _mm_movemask_pX the combined result to an integer and compare that against 0x0, 0x3 or 0xF. This should reduce latency and the number of branches.
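A minimal sketch of that idea for an all()-style reduction, assuming 4-wide float SSE lanes (function and variable names here are illustrative, not Eigen's):

```c
#include <xmmintrin.h>  /* SSE */

/* AND several comparison results together first, then extract a
   4-bit mask once with movemask. 0xF means all four lanes compared
   true, so there is a single branch at the very end. */
int all_less(const float* a, const float* b) {
    __m128 c0 = _mm_cmplt_ps(_mm_loadu_ps(a),     _mm_loadu_ps(b));
    __m128 c1 = _mm_cmplt_ps(_mm_loadu_ps(a + 4), _mm_loadu_ps(b + 4));
    __m128 acc = _mm_and_ps(c0, c1);     /* combine before movemask */
    return _mm_movemask_ps(acc) == 0xF;  /* one scalar compare */
}
```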
I'm not sure if this is related to bug 65.
Bug 65 is on vectorizing vertical reductions on row-major matrices, so not related.
_mm_movemask_pX is pretty expensive, but its overhead is probably compensated by allowing vectorization on the input expression. However, we should first vectorize comparisons...
movemask can be expensive indeed (depending on the processor), so sufficiently many comparison results should be and/or-ed before using that.
For SSE4.1 there is a more efficient alternative: PTEST.
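A sketch of the PTEST variant, assuming SSE4.1 is available (the target attribute enables it per-function under GCC/Clang; names are illustrative):

```c
#include <smmintrin.h>  /* SSE4.1 */

/* _mm_testz_si128 (PTEST) returns 1 iff all bits of the mask are
   zero, which answers "did no lane compare true?" directly, without
   the movemask + scalar-compare pair. */
__attribute__((target("sse4.1")))
int any_less(const float* a, const float* b) {
    __m128 c = _mm_cmplt_ps(_mm_loadu_ps(a), _mm_loadu_ps(b));
    __m128i m = _mm_castps_si128(c);
    return !_mm_testz_si128(m, m);  /* any all-ones lane -> nonzero */
}
```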
count() could also be improved. On SSE it would simply be subtracting the comparison masks from a counting register using integer arithmetic, because a true comparison result is all-ones, i.e. int('true') == -1, on SSE.
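A sketch of that count() trick with SSE2 integer arithmetic (assumed helper names, and n assumed to be a multiple of 4 for brevity):

```c
#include <emmintrin.h>  /* SSE2 */

/* A true comparison lane is all-ones, i.e. -1 as a 32-bit integer,
   so subtracting the mask from an integer accumulator adds 1 per
   true lane. One horizontal sum at the very end. */
int count_less(const float* a, const float* b, int n /* multiple of 4 */) {
    __m128i acc = _mm_setzero_si128();
    for (int i = 0; i < n; i += 4) {
        __m128 c = _mm_cmplt_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i));
        acc = _mm_sub_epi32(acc, _mm_castps_si128(c)); /* acc -= -1 per true lane */
    }
    int lanes[4];
    _mm_storeu_si128((__m128i*)lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}
```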
-- GitLab Migration Automatic Message --
This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity.
You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/585.