User:Cantonios/3.4

Revision as of 19:32, 17 August 2021

  • New support for bfloat16

The 16-bit Brain floating point format (https://en.wikipedia.org/wiki/Bfloat16_floating-point_format) is now available as Eigen::bfloat16. The constructor must be called explicitly, but the type can otherwise be used like any other scalar type. To convert to and from uint16_t and access the raw bit representation, use Eigen::numext::bit_cast.

 #include <cstdint>
 #include <Eigen/Core>
 using namespace Eigen;
 
 bfloat16 s(0.25);                                 // explicit construction
 uint16_t s_bits = numext::bit_cast<uint16_t>(s);  // bit representation
 
 using MatrixBf16 = Matrix<bfloat16, Dynamic, Dynamic>;
 MatrixBf16 X = s * MatrixBf16::Random(3, 3);
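
The conversion works in the reverse direction as well: a bfloat16 can be reconstructed from a raw bit pattern with the same helper. Continuing the snippet above:

 bfloat16 t = numext::bit_cast<bfloat16>(s_bits);  // rebuild the value from its bit pattern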
  • Improved support for half
    • Native support for ARM __fp16, CUDA/HIP __half, and Clang F16C
    • Better vectorization support and various bug fixes (see the sketch below).
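
Eigen::half follows the same usage pattern as bfloat16. A minimal sketch mirroring the example above (values are arbitrary; the same includes and using-declarations are assumed):

 half h(0.5f);                                     // explicit construction from float
 uint16_t h_bits = numext::bit_cast<uint16_t>(h);  // IEEE fp16 bit representation
 
 using MatrixH = Matrix<half, Dynamic, Dynamic>;
 MatrixH Y = h * MatrixH::Random(3, 3);            // mixes with matrix expressions as usual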
  • Improved support for custom types
    • More custom types work out-of-the-box (see #2201[2]); the basic recipe for a custom scalar type is sketched below.
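
For context, the recipe for plugging a custom scalar type into Eigen is unchanged: provide the arithmetic operators and specialize Eigen::NumTraits, and dense expressions work with the new type. A minimal sketch of that standard pattern (MyReal is a hypothetical wrapper around double, used only for illustration; it is not part of Eigen):

 #include <iostream>
 #include <Eigen/Core>
 
 // Hypothetical custom scalar: a thin wrapper around double (illustration only).
 struct MyReal {
   double v;
   MyReal() : v(0) {}
   explicit MyReal(double x) : v(x) {}
 };
 
 // Arithmetic and comparison operators that Eigen's dense expressions rely on.
 inline MyReal operator+(MyReal a, MyReal b) { return MyReal(a.v + b.v); }
 inline MyReal operator-(MyReal a, MyReal b) { return MyReal(a.v - b.v); }
 inline MyReal operator-(MyReal a)           { return MyReal(-a.v); }
 inline MyReal operator*(MyReal a, MyReal b) { return MyReal(a.v * b.v); }
 inline MyReal operator/(MyReal a, MyReal b) { return MyReal(a.v / b.v); }
 inline MyReal& operator+=(MyReal& a, MyReal b) { a.v += b.v; return a; }
 inline bool operator==(MyReal a, MyReal b) { return a.v == b.v; }
 inline bool operator<(MyReal a, MyReal b)  { return a.v < b.v; }
 inline std::ostream& operator<<(std::ostream& os, MyReal a) { return os << a.v; }
 
 // Register the type with Eigen; inheriting NumTraits<double> reuses epsilon() etc.
 namespace Eigen {
 template<> struct NumTraits<MyReal> : NumTraits<double> {
   typedef MyReal Real;
   typedef MyReal NonInteger;
   typedef MyReal Nested;
   enum {
     IsComplex = 0, IsInteger = 0, IsSigned = 1,
     RequireInitialization = 1, ReadCost = 1, AddCost = 3, MulCost = 3
   };
 };
 }
 
 int main() {
   Eigen::Matrix<MyReal, 2, 2> A, B;
   A << MyReal(1), MyReal(2), MyReal(3), MyReal(4);
   B << MyReal(5), MyReal(6), MyReal(7), MyReal(8);
   std::cout << A * B << std::endl;  // dense products now work with the custom scalar
 }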
  • Improved Geometry Module
    • Transform::computeRotationScaling() and Transform::computeScalingRotation() are now more continuous across degeneracies (!349[3]).
    • New minimal vectorization support.
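
For reference, computeRotationScaling() fills its outputs so that linear() == rotation * scaling, and computeScalingRotation() so that linear() == scaling * rotation. A short usage sketch with arbitrary values:

 #include <Eigen/Geometry>
 using namespace Eigen;
 
 Affine2d T = Affine2d::Identity();
 T.translate(Vector2d(1.0, 2.0));
 T.rotate(0.5);                      // 2D rotation by an angle in radians
 T.scale(Vector2d(2.0, 3.0));        // non-uniform scaling
 
 Matrix2d R, S;
 T.computeRotationScaling(&R, &S);   // T.linear() == R * S (up to rounding), R is a rotation
 T.computeScalingRotation(&S, &R);   // T.linear() == S * R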
  • Backend-specific improvements
    • SSE/AVX/AVX512
      • Enable AVX512 instructions by default if available (a quick build check is sketched after this list)
      • std::complex, half, bfloat16 vectorization support.
      • Many missing packet functions added.
    • CUDA
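
To see which of these SIMD paths a given build actually enables (for example, whether AVX512 was picked up by default), Eigen can report the instruction sets in use at run time; a minimal check:

 #include <iostream>
 #include <Eigen/Core>
 
 int main() {
   // Prints the SIMD instruction sets this build of Eigen was compiled with.
   std::cout << Eigen::SimdInstructionSetsInUse() << std::endl;
 #ifdef EIGEN_VECTORIZE_AVX512
   std::cout << "AVX512 packets are enabled" << std::endl;
 #endif
 }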