▼Dense matrix and array manipulation | |
The Matrix class | In Eigen, all matrices and vectors are objects of the Matrix template class. Vectors are just a special case of matrices, with either 1 row or 1 column |
Matrix and vector arithmetic | This page aims to provide an overview and some details on how to perform arithmetic between matrices, vectors and scalars with Eigen |
The Array class and coefficient-wise operations | This page aims to provide an overview and explanations on how to use Eigen's Array class |
Block operations | This page explains the essentials of block operations. A block is a rectangular part of a matrix or array. Block expressions can be used both as rvalues and as lvalues. As usual with Eigen expressions, this abstraction has zero runtime cost provided that you let your compiler optimize |
Advanced initialization | This page discusses several advanced methods for initializing matrices. It gives more details on the comma-initializer, which was introduced before. It also explains how to get special matrices such as the identity matrix and the zero matrix |
Reductions, visitors and broadcasting | This page explains Eigen's reductions, visitors and broadcasting and how they are used with matrices and arrays |
Interfacing with raw buffers: the Map class | This page explains how to work with "raw" C/C++ arrays. This can be useful in a variety of contexts, particularly when "importing" vectors and matrices from other libraries into Eigen |
Reshape and Slicing | Eigen does not expose convenient methods to take slices or to reshape a matrix yet. Nonetheless, such features can easily be emulated using the Map class |
Aliasing | In Eigen, aliasing refers to an assignment statement in which the same matrix (or array or vector) appears on both the left and the right of the assignment operator. Statements like mat = 2 * mat; or mat = mat.transpose(); exhibit aliasing. The aliasing in the first example is harmless, but the aliasing in the second example leads to unexpected results. This page explains what aliasing is, when it is harmful, and what to do about it |
Storage orders | There are two different storage orders for matrices and two-dimensional arrays: column-major and row-major. This page explains these storage orders and how to specify which one should be used |
▼Alignment issues | |
Explanation of the assertion on unaligned arrays | Hello! You are seeing this webpage because your program terminated on an assertion failure like this one: |
Fixed-size vectorizable Eigen objects | The goal of this page is to explain what we mean by "fixed-size vectorizable" |
Structures Having Eigen Members | |
Using STL Containers with Eigen | |
Passing Eigen objects by value to functions | Passing objects by value is almost always a very bad idea in C++, as it incurs useless copies; one should pass them by reference instead |
Compiler making a wrong assumption on stack alignment | |
▼Reference | |
▼Core module | This is the main module of Eigen providing dense matrix and vector support (both fixed and dynamic size) with all the features corresponding to a BLAS library and much more |
Global array typedefs | |
Global matrix typedefs | |
Flags | |
Enumerations | |
Jacobi module | This module provides Jacobi and Givens rotations |
Householder module | This module provides Householder transformations |
Catalog of coefficient-wise math functions | This table presents a catalog of the coefficient-wise math functions supported by Eigen. In this table, a and b refer to Array objects or expressions, and m refers to a linear algebra Matrix/Vector object. Standard scalar types are abbreviated as follows: |
Quick reference guide | |
▼Dense linear problems and decompositions | |
Linear algebra and decompositions | This page explains how to solve linear systems, compute various decompositions such as LU, QR, SVD, eigendecompositions... After reading this page, don't miss our catalogue of dense matrix decompositions |
Catalogue of dense decompositions | This page presents a catalogue of the dense matrix decompositions offered by Eigen. For an introduction on linear solvers and decompositions, check this page. To get an overview of the true relative speed of the different decompositions, check this benchmark |
Solving linear least squares systems | This page describes how to solve linear least squares systems using Eigen. An overdetermined system of equations, say Ax = b, generally has no solution. In this case, it makes sense to search for the vector x that is closest to being a solution, in the sense that the difference Ax - b is as small as possible. This x is called the least squares solution (if the Euclidean norm is used) |
Inplace matrix decompositions | Starting from Eigen 3.3, the LU, Cholesky, and QR decompositions can operate inplace, that is, directly within the given input matrix. This feature is especially useful when dealing with huge matrices, and/or when the available memory is very limited (embedded systems) |
Benchmark of dense decompositions | This page presents a speed comparison of the dense matrix decompositions offered by Eigen for a wide range of square matrices and overconstrained problems |
▼Reference | |
Cholesky module | This module provides two variants of the Cholesky decomposition for selfadjoint (hermitian) matrices. Those decompositions are also accessible via the following methods: |
LU module | This module includes LU decomposition and related notions such as matrix inversion and determinant. This module defines the following MatrixBase methods: |
QR module | This module provides various QR decompositions. This module also provides some MatrixBase methods, including: |
SVD module | This module provides SVD decomposition for matrices (both real and complex). Two decomposition algorithms are provided: |
Eigenvalues module | This module mainly provides various eigenvalue solvers. This module also provides some MatrixBase methods, including: |
▼Sparse linear algebra | |
Sparse matrix manipulations | Manipulating and solving sparse problems involves various modules which are summarized below: |
Solving Sparse Linear Systems | In Eigen, there are several methods available to solve linear systems when the coefficient matrix is sparse. Because of the special representation of this class of matrices, special care should be taken in order to get good performance. See Sparse matrix manipulations for a detailed introduction about sparse matrices in Eigen. This page lists the sparse solvers available in Eigen. The main steps common to all these linear solvers are introduced as well. Depending on the properties of the matrix and the desired accuracy, the end user can tune those steps in order to improve the performance of their code. Note that it is not required to know in depth what is hiding behind these steps: the last section presents a benchmark routine that can easily be used to get an insight into the performance of all the available solvers |
Matrix-free solvers | Iterative solvers such as ConjugateGradient and BiCGSTAB can be used in a matrix free context. To this end, user must provide a wrapper class inheriting EigenBase<> and implementing the following methods: |
▼Reference | |
SparseCore module | This module provides a sparse matrix representation, and basic associated matrix manipulations and operations |
OrderingMethods module | This module is currently for internal use only |
SparseCholesky module | This module currently provides two variants of the direct sparse Cholesky decomposition for selfadjoint (hermitian) matrices. Those decompositions are accessible via the following classes: |
SparseLU module | This module defines a supernodal factorization of general sparse matrices. The code is fully optimized for supernode-panel updates with specialized kernels. Please see the documentation of the SparseLU class for more details |
SparseQR module | Provides QR decomposition for sparse matrices |
IterativeLinearSolvers module | This module currently provides iterative methods to solve problems of the form Ax = b, where A is a square matrix, usually very large and sparse. Those solvers are accessible via the following classes: |
Sparse meta-module | Meta-module including all related modules: |
▼Support modules | Category of modules which add support for external libraries |
CholmodSupport module | This module provides an interface to the Cholmod library which is part of the suitesparse package. It provides the two following main factorization classes: |
MetisSupport module | |
PardisoSupport module | This module brings support for the Intel(R) MKL PARDISO direct sparse solvers |
PaStiXSupport module | This module provides an interface to the PaStiX library. PaStiX is a general supernodal, parallel and open-source sparse solver. It provides the two following main factorization classes: |
SuiteSparseQR module | This module provides an interface to the SPQR library, which is part of the suitesparse package |
SuperLUSupport module | This module provides an interface to the SuperLU library. It provides the following factorization class: |
UmfPackSupport module | This module provides an interface to the UmfPack library which is part of the suitesparse package. It provides the following factorization class: |
Quick reference guide for sparse matrices | |
▼Geometry | |
Space transformations | This page introduces the many possibilities offered by the geometry module to deal with 2D and 3D rotations and projective or affine transformations |
▼Reference | |
▼Geometry module | This module provides support for: |
Global aligned box typedefs | |
Splines module | |