  • New support for bfloat16

The 16-bit Brain floating point format[1] is now available as Eigen::bfloat16. The constructor must be called explicitly, but bfloat16 can otherwise be used like any other scalar type. To convert back and forth between bfloat16 and uint16_t (e.g. to extract the bit representation), use Eigen::numext::bit_cast.

 #include <Eigen/Core>
 using namespace Eigen;
 
 bfloat16 s(0.25);                                 // explicit construction
 uint16_t s_bits = numext::bit_cast<uint16_t>(s);  // bit representation
 using MatrixBf16 = Matrix<bfloat16, Dynamic, Dynamic>;
 MatrixBf16 X = s * MatrixBf16::Random(3, 3);
  • New backends
    • HIP: added support for AMD's ROCm HIP platform, unified with the existing CUDA code into a generic GPU backend.
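
A minimal sketch of what the unified backend enables, assuming a CUDA or HIP compiler (the kernel itself is illustrative, not from the release notes):

 #include <Eigen/Core>
 
 // The same kernel source compiles under both nvcc and hipcc, since
 // fixed-size Eigen types and expressions are usable in device code.
 __global__ void scale_kernel(Eigen::Vector3f* v, float s) {
   *v = s * (*v);
 }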
  • Improvements/Cleanups to Core modules
    • Improved support for half
      • Native support for ARM __fp16, CUDA/HIP __half, and Clang F16C conversions.
      • Better vectorization support across backends.
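
A minimal sketch of half usage, mirroring the bfloat16 example above (sizes and values are arbitrary):

 #include <Eigen/Core>
 using namespace Eigen;
 
 half h(0.5f);                                     // explicit construction
 Matrix<half, Dynamic, Dynamic> H = Matrix<half, Dynamic, Dynamic>::Random(4, 4);
 Matrix<half, Dynamic, Dynamic> Hh = h * H;        // ordinary scalar arithmetic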
    • Improved support for custom types
      • More custom types work out-of-the-box (see #2201[2]).
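
A minimal sketch of a custom scalar type, using the documented NumTraits extension point; MyScalar and its cost constants are illustrative:

 #include <Eigen/Core>
 
 struct MyScalar {                                  // hypothetical user-defined scalar
   double v;
   MyScalar() : v(0) {}
   explicit MyScalar(double x) : v(x) {}
   MyScalar operator+(MyScalar o) const { return MyScalar(v + o.v); }
   MyScalar operator*(MyScalar o) const { return MyScalar(v * o.v); }
 };
 
 namespace Eigen {
 template<> struct NumTraits<MyScalar> : NumTraits<double> {
   typedef MyScalar Real;
   typedef MyScalar NonInteger;
   typedef MyScalar Nested;
   enum { IsComplex = 0, IsInteger = 0, IsSigned = 1, RequireInitialization = 1,
          ReadCost = 1, AddCost = 3, MulCost = 3 };
 };
 }
 
 Eigen::Matrix<MyScalar, 2, 2> A, B;
 Eigen::Matrix<MyScalar, 2, 2> C = A + B;           // coefficient-wise ops work as usual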
    • Improved Geometry Module
      • Transform::computeRotationScaling() and Transform::computeScalingRotation() are now more continuous across degeneracies (see !349[3]); usage is sketched below.
      • New minimal vectorization support.
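
A minimal sketch of the decomposition API mentioned above (both functions take output pointers; the transform is assumed given):

 #include <Eigen/Geometry>
 using namespace Eigen;
 
 void decompose(const Affine3d& T) {
   Matrix3d R, S;
   T.computeRotationScaling(&R, &S);  // T.linear() == R * S, with R a rotation
   T.computeScalingRotation(&S, &R);  // T.linear() == S * R
 }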
  • Backend-specific improvements
    • SSE/AVX/AVX512
      • AVX512 instructions are now enabled by default when available (a compile-time check is sketched below).
      • New vectorization support for std::complex, half, and bfloat16.
      • Many missing packet functions added.
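
A minimal sketch for checking which SIMD instruction sets Eigen picked up at compile time, via the EIGEN_VECTORIZE_AVX512 macro and Eigen::SimdInstructionSetsInUse():

 #include <Eigen/Core>
 #include <iostream>
 
 int main() {
 #ifdef EIGEN_VECTORIZE_AVX512
   std::cout << "AVX512 packets enabled" << std::endl;
 #endif
   // Prints the SIMD instruction sets compiled into this binary.
   std::cout << Eigen::SimdInstructionSetsInUse() << std::endl;
 }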
    • GPU (CUDA and HIP)
      • Several optimized math functions added, better support for std::complex.
      • Option to disable CUDA entirely by defining EIGEN_NO_CUDA.
      • Many more functions can now be used in device code (e.g. comparisons, matrix inversion).
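
A minimal sketch of device-side usage, assuming a CUDA or HIP compiler; invert3 is an illustrative name:

 #include <Eigen/Dense>
 
 // Fixed-size operations such as inverse() can now be called from device code.
 __device__ Eigen::Matrix3f invert3(const Eigen::Matrix3f& M) {
   return M.inverse();
 }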
    • SYCL
      • Redesigned SYCL implementation for use with the Tensor[4] module; enable it by defining EIGEN_USE_SYCL (see the sketch below).
      • New generic memory model used by TensorDeviceSycl.
      • Better integration with OpenCL devices.
      • Added many math function specializations.
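
A minimal sketch of enabling the SYCL backend, following the setup pattern of Eigen's SYCL tests; the helper names come from the unsupported Tensor module and may differ between versions:

 #define EIGEN_USE_SYCL
 #include <unsupported/Eigen/CXX11/Tensor>
 
 auto devices = Eigen::get_sycl_supported_devices();
 Eigen::QueueInterface queue_interface(devices[0]);  // wraps a SYCL queue
 Eigen::SyclDevice sycl_device(&queue_interface);
 // Tensor expressions are then evaluated on the device, e.g.:
 //   result.device(sycl_device) = a + b;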