User:Cantonios/3.4
From Eigen
- New support for bfloat16
  The 16-bit Brain floating point format[1] is now available as Eigen::bfloat16. The constructor must be called explicitly, but it can otherwise be used like any other scalar type. To convert back and forth to uint16_t and extract the bit representation, use Eigen::numext::bit_cast.
  bfloat16 s(0.25);                                 // explicit construction
  uint16_t s_bits = numext::bit_cast<uint16_t>(s);  // bit representation

  using MatrixBf16 = Matrix<bfloat16, Dynamic, Dynamic>;
  MatrixBf16 X = s * MatrixBf16::Random(3, 3);
- Improved support for half
  - Native support for ARM __fp16, CUDA/HIP __half, Clang F16C.
  - Better vectorization support, various bug fixes.
- Improved support for custom types
- More custom types work out-of-the-box (see #2201[2])
- Improved Geometry Module
  - Transform::computeRotationScaling() and Transform::computeScalingRotation() are now more continuous across degeneracies (!349[3]).
  - New minimal vectorization support.
- Backend-specific improvements
  - SSE/AVX/AVX512
    - Enable AVX512 instructions by default if available.
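In practice this means no Eigen-specific macro is needed: if the compiler is allowed to emit AVX512 instructions, Eigen 3.4 uses its AVX512 kernels. A hypothetical compile line (compiler, flags, and include path are illustrative):

```shell
# The compiler flag enables AVX512; Eigen then picks it up by default.
g++ -O2 -mavx512f -I /path/to/eigen prog.cpp -o prog
```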