Since version 3.1, Eigen users can benefit from built-in Intel MKL optimizations with an installed copy of Intel MKL 10.3 (or later). Intel MKL provides highly optimized multi-threaded mathematical routines for x86-compatible architectures, and is available on Linux, Mac, and Windows for both Intel64 and IA32 architectures.
Using Intel MKL through Eigen is easy:
1. define the `EIGEN_USE_MKL_ALL` macro before including any Eigen header
2. link your program against the MKL libraries
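A minimal sketch of this setup (the matrix sizes are illustrative; compiling and running it requires an installed and linked copy of Intel MKL):

```cpp
// EIGEN_USE_MKL_ALL must be defined before the first Eigen header is
// included; otherwise Eigen is configured without the MKL substitutions.
#define EIGEN_USE_MKL_ALL
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Large dynamic-size matrices, so the product qualifies for substitution.
    Eigen::MatrixXd a = Eigen::MatrixXd::Random(1024, 1024);
    Eigen::MatrixXd b = Eigen::MatrixXd::Random(1024, 1024);
    Eigen::MatrixXd c = a * b;  // forwarded to MKL's dgemm
    std::cout << c(0, 0) << '\n';
}
```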
When doing so, a number of Eigen's algorithms are silently substituted with calls to Intel MKL routines. These substitutions apply only for Dynamic or large enough objects with one of the following four standard scalar types: `float`, `double`, `complex<float>`, and `complex<double>`. Operations on other scalar types or mixing reals and complexes will continue to use the built-in algorithms.
In addition, you can coarsely select which parts will be substituted by defining one or multiple of the following macros:
|Macro|Effect|
|---|---|
|`EIGEN_USE_BLAS`|Enables the use of external BLAS level 2 and 3 routines (currently works with Intel MKL only)|
|`EIGEN_USE_LAPACKE`|Enables the use of external LAPACK routines via the Intel Lapacke C interface to LAPACK (currently works with Intel MKL only)|
|`EIGEN_USE_LAPACKE_STRICT`|Same as `EIGEN_USE_LAPACKE`, but algorithms of lower numerical robustness are disabled; this currently concerns only `JacobiSVD`, which would otherwise be replaced by `?gesvd`, which is less robust than Jacobi rotations|
|`EIGEN_USE_MKL_VML`|Enables the use of Intel VML (vector operations)|
|`EIGEN_USE_MKL_ALL`|Defines `EIGEN_USE_BLAS`, `EIGEN_USE_LAPACKE`, and `EIGEN_USE_MKL_VML`|
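For instance, a sketch of a finer-grained configuration that offloads only BLAS and LAPACK calls while keeping Eigen's built-in vector math (again assuming MKL is installed and linked):

```cpp
// Substitute only BLAS level 2/3 and LAPACK routines; Eigen's own
// vectorized array math remains in use because EIGEN_USE_MKL_VML
// is deliberately left undefined.
#define EIGEN_USE_BLAS
#define EIGEN_USE_LAPACKE
#include <Eigen/Dense>

int main() {
    Eigen::MatrixXd m = Eigen::MatrixXd::Random(512, 512);
    Eigen::VectorXd v = Eigen::VectorXd::Random(512);
    Eigen::VectorXd x = m.lu().solve(v);   // LAPACK LU via the Lapacke interface
    Eigen::ArrayXd  s = v.array().sin();   // still Eigen's built-in implementation
    return 0;
}
```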
Finally, the PARDISO sparse solver shipped with Intel MKL can be used through the PardisoLU, PardisoLLT and PardisoLDLT classes of the PardisoSupport module.
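A sketch of solving a sparse system through PARDISO via the PardisoSupport module (the tridiagonal system here is purely illustrative; requires an installed, linked Intel MKL):

```cpp
#define EIGEN_USE_MKL_ALL
#include <Eigen/Dense>
#include <Eigen/SparseCore>
#include <Eigen/PardisoSupport>
#include <vector>

int main() {
    const int n = 1000;
    // Build a simple symmetric positive-definite tridiagonal matrix.
    std::vector<Eigen::Triplet<double>> triplets;
    for (int i = 0; i < n; ++i) {
        triplets.emplace_back(i, i, 2.0);
        if (i + 1 < n) {
            triplets.emplace_back(i, i + 1, -1.0);
            triplets.emplace_back(i + 1, i, -1.0);
        }
    }
    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(triplets.begin(), triplets.end());

    Eigen::VectorXd b = Eigen::VectorXd::Ones(n);
    Eigen::PardisoLDLT<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);                      // factorization done by MKL PARDISO
    Eigen::VectorXd x = solver.solve(b);
    return 0;
}
```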
The breadth of Eigen functionality covered by Intel MKL is listed in the table below.
|Functional domain|Code example|MKL routines|
|---|---|---|
|Matrix-matrix operations|`m1*m2.transpose(); m1.selfadjointView<Lower>()*m2; m1*m2.triangularView<Upper>();`|`?gemm`, `?symm`/`?hemm`, `?trmm`|
|Matrix-vector operations|`m1.adjoint()*v1; m1.selfadjointView<Lower>()*v1; m1.triangularView<Upper>()*v1;`|`?gemv`, `?symv`/`?hemv`, `?trmv`|
|LU decomposition|`v1 = m1.lu().solve(v2);`|`?getrf`|
|Cholesky decomposition|`v1 = m2.selfadjointView<Upper>().llt().solve(v2);`|`?potrf`|
|QR decomposition|`m1.householderQr(); m1.colPivHouseholderQr();`|`?geqrf`, `?geqp3`|
|Singular value decomposition|`JacobiSVD<MatrixXd> svd; svd.compute(m1, ComputeThinV);`|`?gesvd`|
|Eigen-value decompositions|`EigenSolver<MatrixXd> es(m1); SelfAdjointEigenSolver<MatrixXd> saes(m1+m1.transpose());`|`?geev`, `?syev`/`?heev`|
|Schur decomposition|`RealSchur<MatrixXd> schurR(m1); ComplexSchur<MatrixXcd> schurC(m1.cast<complex<double> >());`|`?gees`|
|Vector Math|`v2 = v1.array().sin();` and similar element-wise functions|`v?Sin`, etc.|
In the examples, m1 and m2 are dense matrices and v1 and v2 are dense vectors.
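Putting a few of the table's rows together, a sketch of a program whose decompositions would be forwarded to MKL under `EIGEN_USE_MKL_ALL` (sizes and values are illustrative; requires an installed, linked Intel MKL):

```cpp
#define EIGEN_USE_MKL_ALL
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd m1 = Eigen::MatrixXd::Random(300, 300);
    Eigen::MatrixXd m2 = Eigen::MatrixXd::Random(300, 300);
    Eigen::VectorXd v2 = Eigen::VectorXd::Random(300);

    // LU-based solve (?getrf under the hood).
    Eigen::VectorXd lu_sol = m1.lu().solve(v2);

    // Cholesky-based solve on a symmetric positive-definite matrix (?potrf).
    Eigen::MatrixXd spd = m2 * m2.transpose()
                        + 300.0 * Eigen::MatrixXd::Identity(300, 300);
    Eigen::VectorXd llt_sol = spd.selfadjointView<Eigen::Upper>().llt().solve(v2);

    // Singular value decomposition (?gesvd in strict Lapacke mode).
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(m1, Eigen::ComputeThinU | Eigen::ComputeThinV);

    std::cout << lu_sol.norm() << ' ' << llt_sol.norm() << '\n';
    return 0;
}
```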