User:Cantonios/3.4

Revision as of 19:58, 17 August 2021

  • New support for bfloat16

The 16-bit Brain floating-point format is now available as Eigen::bfloat16. The constructor must be called explicitly, but it can otherwise be used like any other scalar type. To convert to and from uint16_t, for example to extract the bit representation, use Eigen::numext::bit_cast.

 bfloat16 s(0.25);                                 // explicit construction
 uint16_t s_bits = numext::bit_cast<uint16_t>(s);  // bit representation
 
 using MatrixBf16 = Matrix<bfloat16, Dynamic, Dynamic>;
 MatrixBf16 X = s * MatrixBf16::Random(3, 3);
  • New backends
    • HIP: added support for AMD ROCm HIP, unified with the previously existing CUDA code for a generic GPU backend.
  • Improvements/Cleanups to Core modules
    • Improved support for half (usage sketch after this list)
      • Native support for ARM __fp16, CUDA/HIP __half, Clang F16C.
      • Better vectorization support across backends.
    • Improved support for custom types
      • More custom types work out-of-the-box (see #2201, https://gitlab.com/libeigen/eigen/-/issues/2201).
    • Improved Geometry Module
      • Transform::computeRotationScaling() and Transform::computeScalingRotation() are now more continuous across degeneracies (see !349, https://gitlab.com/libeigen/eigen/-/merge_requests/349); a usage sketch follows this list.
      • New minimal vectorization support.
  • Backend-specific improvements
    • SSE/AVX/AVX512
      • Enable AVX512 instructions by default if available.
      • New std::complex, half, bfloat16 vectorization support.
      • Many missing packet functions added.
    • GPU (CUDA and HIP)
      • Several optimized math functions added, better support for std::complex.
      • Option to disable CUDA entirely by defining EIGEN_NO_CUDA (configuration sketch after this list).
      • Many more functions can now be used in device code (e.g. comparisons, matrix inversion).
    • SYCL
      • Redesigned SYCL implementation for use with the Tensor module (https://eigen.tuxfamily.org/dox/unsupported/eigen_tensors.html).
      • Implementation guarded by EIGEN_USE_SYCL.
      • New generic memory model used by TensorDeviceSycl.
      • Better integration with OpenCL devices.
      • Math function specializations.
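
The improved half support mirrors the bfloat16 example above. Below is a minimal sketch assuming only the standard Eigen headers; the MatrixXh alias is illustrative, not an Eigen typedef:

 #include <Eigen/Dense>
 #include <cstdint>
 #include <iostream>
 using namespace Eigen;
 
 int main() {
   half h(0.25f);                                    // explicit construction, as for bfloat16
   uint16_t h_bits = numext::bit_cast<uint16_t>(h);  // bit representation
 
   using MatrixXh = Matrix<half, Dynamic, Dynamic>;  // illustrative alias
   MatrixXh A = MatrixXh::Random(3, 3);
   MatrixXh B = h * A;                               // scalar * matrix works as for float
   std::cout << B << "\n" << h_bits << "\n";
 }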
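
For the Geometry improvement, a minimal sketch of Transform::computeRotationScaling(), which splits the linear part of an affine transform into a rotation and a scaling; the zero scale below is only one example of a degenerate input, not taken from !349:

 #include <Eigen/Geometry>
 #include <iostream>
 using namespace Eigen;
 
 int main() {
   Affine3d t = Affine3d::Identity();
   t.rotate(AngleAxisd(0.5, Vector3d::UnitZ()));
   t.scale(Vector3d(2.0, 1.0, 0.0));               // zero scale along z: a degenerate case
 
   Matrix3d rotation, scaling;
   t.computeRotationScaling(&rotation, &scaling);  // t.linear() == rotation * scaling
 
   std::cout << "rotation:\n" << rotation << "\nscaling:\n" << scaling << "\n";
 }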
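
The backend-control macros above are plain preprocessor defines that must be visible before any Eigen header is included (or passed on the compiler command line, e.g. -DEIGEN_NO_CUDA). A sketch of the two configurations, assuming the standard header layout:

 // Opt out of Eigen's CUDA code paths entirely, even when compiling with nvcc.
 #define EIGEN_NO_CUDA
 #include <Eigen/Dense>

and, in a separate translation unit, the SYCL backend of the unsupported Tensor module:

 // Opt in to the SYCL implementation of the Tensor module.
 #define EIGEN_USE_SYCL
 #include <unsupported/Eigen/CXX11/Tensor>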