Eigen  3.4.90 (git rev 67eeba6e720c5745abc77ae6c92ce0a44aa7b7ae)
Eigen::SelfAdjointEigenSolver< MatrixType_ > Class Template Reference

Detailed Description

template<typename MatrixType_> class Eigen::SelfAdjointEigenSolver< MatrixType_ >

Computes eigenvalues and eigenvectors of selfadjoint matrices.

This is defined in the Eigenvalues module.

#include <Eigen/Eigenvalues>
Template Parameters
 MatrixType_ the type of the matrix of which we are computing the eigendecomposition; this is expected to be an instantiation of the Matrix class template.

A matrix $$A$$ is selfadjoint if it equals its adjoint. For real matrices, this means that the matrix is symmetric: it equals its transpose. This class computes the eigenvalues and eigenvectors of a selfadjoint matrix. These are the scalars $$\lambda$$ and vectors $$v$$ such that $$Av = \lambda v$$. The eigenvalues of a selfadjoint matrix are always real. If $$D$$ is a diagonal matrix with the eigenvalues on the diagonal, and $$V$$ is a matrix with the eigenvectors as its columns, then $$A = V D V^{-1}$$. This is called the eigendecomposition.

For a selfadjoint matrix, $$V$$ is unitary, meaning its inverse is equal to its adjoint, $$V^{-1} = V^{\dagger}$$. If $$A$$ is real, then $$V$$ is also real and therefore orthogonal, meaning its inverse is equal to its transpose, $$V^{-1} = V^T$$.

The algorithm exploits the fact that the matrix is selfadjoint, making it faster and more accurate than the general purpose eigenvalue algorithms implemented in EigenSolver and ComplexEigenSolver.

Only the lower triangular part of the input matrix is referenced.

Call the function compute() to compute the eigenvalues and eigenvectors of a given matrix. Alternatively, you can use the SelfAdjointEigenSolver(const MatrixType&, int) constructor which computes the eigenvalues and eigenvectors at construction time. Once the eigenvalues and eigenvectors are computed, they can be retrieved with the eigenvalues() and eigenvectors() functions.

The documentation for SelfAdjointEigenSolver(const MatrixType&, int) contains an example of the typical use of this class.

To solve the generalized eigenvalue problem $$Av = \lambda Bv$$ and related problems, see the class GeneralizedSelfAdjointEigenSolver.

See also
MatrixBase::eigenvalues(), class EigenSolver, class ComplexEigenSolver
Inheritance diagram for Eigen::SelfAdjointEigenSolver< MatrixType_ >:

Public Types

typedef Eigen::Index Index

typedef NumTraits< Scalar >::Real RealScalar
Real scalar type for MatrixType_. More...

typedef internal::plain_col_type< MatrixType, RealScalar >::type RealVectorType
Type for vector of eigenvalues as returned by eigenvalues(). More...

typedef MatrixType::Scalar Scalar
Scalar type for matrices of type MatrixType_.

Public Member Functions

template<typename InputType >
SelfAdjointEigenSolver & compute (const EigenBase< InputType > &matrix, int options=ComputeEigenvectors)
Computes eigendecomposition of given matrix. More...

SelfAdjointEigenSolver & computeDirect (const MatrixType &matrix, int options=ComputeEigenvectors)
Computes eigendecomposition of given matrix using a closed-form algorithm. More...

SelfAdjointEigenSolver & computeFromTridiagonal (const RealVectorType &diag, const SubDiagonalType &subdiag, int options=ComputeEigenvectors)
Computes the eigen decomposition from a tridiagonal symmetric matrix. More...

const RealVectorType & eigenvalues () const
Returns the eigenvalues of given matrix. More...

const EigenvectorsType & eigenvectors () const
Returns the eigenvectors of given matrix. More...

ComputationInfo info () const
Reports whether previous computation was successful. More...

MatrixType operatorInverseSqrt () const
Computes the inverse square root of the matrix. More...

MatrixType operatorSqrt () const
Computes the positive-definite square root of the matrix. More...

SelfAdjointEigenSolver ()
Default constructor for fixed-size matrices. More...

template<typename InputType >
SelfAdjointEigenSolver (const EigenBase< InputType > &matrix, int options=ComputeEigenvectors)
Constructor; computes eigendecomposition of given matrix. More...

SelfAdjointEigenSolver (Index size)
Constructor, pre-allocates memory for dynamic-size matrices. More...

Static Public Attributes

static const int m_maxIterations
Maximum number of iterations. More...

◆ Index

template<typename MatrixType_ >
typedef Eigen::Index Eigen::SelfAdjointEigenSolver< MatrixType_ >::Index

Deprecated:
since Eigen 3.3

◆ RealScalar

template<typename MatrixType_ >

Real scalar type for MatrixType_.

This is just Scalar if Scalar is real (e.g., float or double), and the type of the real part of Scalar if Scalar is complex.

◆ RealVectorType

template<typename MatrixType_ >

Type for vector of eigenvalues as returned by eigenvalues().

This is a column vector with entries of type RealScalar. The length of the vector is the size of MatrixType_.

Constructor & Destructor Documentation

template<typename MatrixType_ >
Eigen::SelfAdjointEigenSolver< MatrixType_ >::SelfAdjointEigenSolver ( )
inline

Default constructor for fixed-size matrices.

The default constructor is useful in cases in which the user intends to perform decompositions via compute(). This constructor can only be used if MatrixType_ is a fixed-size matrix; use SelfAdjointEigenSolver(Index) for dynamic-size matrices.

Example:

SelfAdjointEigenSolver<Matrix4f> es;
Matrix4f X = Matrix4f::Random(4,4);
Matrix4f A = X + X.transpose();
es.compute(A);
cout << "The eigenvalues of A are: " << es.eigenvalues().transpose() << endl;
es.compute(A + Matrix4f::Identity(4,4)); // re-use es to compute eigenvalues of A+I
cout << "The eigenvalues of A+I are: " << es.eigenvalues().transpose() << endl;

Output:

The eigenvalues of A are:  -1.58 -0.473   1.32   2.46
The eigenvalues of A+I are: -0.581  0.527   2.32   3.46


template<typename MatrixType_ >
Eigen::SelfAdjointEigenSolver< MatrixType_ >::SelfAdjointEigenSolver (Index size)
inline explicit

Constructor, pre-allocates memory for dynamic-size matrices.

Parameters
 [in] size Positive integer, size of the matrix whose eigenvalues and eigenvectors will be computed.

This constructor is useful for dynamic-size matrices, when the user intends to perform decompositions via compute(). The size parameter is only used as a hint. It is not an error to give a wrong size, but it may impair performance.

See also
compute() for an example

template<typename MatrixType_ >
template<typename InputType >
 Eigen::SelfAdjointEigenSolver< MatrixType_ >::SelfAdjointEigenSolver ( const EigenBase< InputType > & matrix, int options = ComputeEigenvectors )
inline explicit

Constructor; computes eigendecomposition of given matrix.

Parameters
 [in] matrix Selfadjoint matrix whose eigendecomposition is to be computed. Only the lower triangular part of the matrix is referenced.
 [in] options Can be ComputeEigenvectors (default) or EigenvaluesOnly.

This constructor calls compute(const MatrixType&, int) to compute the eigenvalues of the matrix matrix. The eigenvectors are computed if options equals ComputeEigenvectors.

Example:

MatrixXd X = MatrixXd::Random(5,5);
MatrixXd A = X + X.transpose();
cout << "Here is a random symmetric 5x5 matrix, A:" << endl << A << endl << endl;
SelfAdjointEigenSolver<MatrixXd> es(A);
cout << "The eigenvalues of A are:" << endl << es.eigenvalues() << endl;
cout << "The matrix of eigenvectors, V, is:" << endl << es.eigenvectors() << endl << endl;
double lambda = es.eigenvalues()[0];
cout << "Consider the first eigenvalue, lambda = " << lambda << endl;
VectorXd v = es.eigenvectors().col(0);
cout << "If v is the corresponding eigenvector, then lambda * v = " << endl << lambda * v << endl;
cout << "... and A * v = " << endl << A * v << endl << endl;
MatrixXd D = es.eigenvalues().asDiagonal();
MatrixXd V = es.eigenvectors();
cout << "Finally, V * D * V^(-1) = " << endl << V * D * V.inverse() << endl;

Output:

Here is a random symmetric 5x5 matrix, A:
1.36 -0.816  0.521   1.43 -0.144
-0.816 -0.659  0.794 -0.173 -0.406
0.521  0.794 -0.541  0.461  0.179
1.43 -0.173  0.461  -1.43  0.822
-0.144 -0.406  0.179  0.822  -1.37

The eigenvalues of A are:
-2.65
-1.77
-0.745
0.227
2.29
The matrix of eigenvectors, V, is:
-0.326 -0.0984   0.347 -0.0109   0.874
-0.207  -0.642   0.228   0.662  -0.232
0.0495   0.629  -0.164    0.74   0.164
0.721  -0.397  -0.402   0.115   0.385
-0.573  -0.156  -0.799 -0.0256  0.0858

Consider the first eigenvalue, lambda = -2.65
If v is the corresponding eigenvector, then lambda * v =
0.865
0.55
-0.131
-1.91
1.52
... and A * v =
0.865
0.55
-0.131
-1.91
1.52

Finally, V * D * V^(-1) =
1.36 -0.816  0.521   1.43 -0.144
-0.816 -0.659  0.794 -0.173 -0.406
0.521  0.794 -0.541  0.461  0.179
1.43 -0.173  0.461  -1.43  0.822
-0.144 -0.406  0.179  0.822  -1.37

See also
compute(const MatrixType&, int)

◆ compute()

template<typename MatrixType_ >
template<typename InputType >
 SelfAdjointEigenSolver& Eigen::SelfAdjointEigenSolver< MatrixType_ >::compute ( const EigenBase< InputType > & matrix, int options = ComputeEigenvectors )

Computes eigendecomposition of given matrix.

Parameters
 [in] matrix Selfadjoint matrix whose eigendecomposition is to be computed. Only the lower triangular part of the matrix is referenced.
 [in] options Can be ComputeEigenvectors (default) or EigenvaluesOnly.
Returns
Reference to *this

This function computes the eigenvalues of matrix. The eigenvalues() function can be used to retrieve them. If options equals ComputeEigenvectors, then the eigenvectors are also computed and can be retrieved by calling eigenvectors().

This implementation uses a symmetric QR algorithm. The matrix is first reduced to tridiagonal form using the Tridiagonalization class. The tridiagonal matrix is then brought to diagonal form with implicit symmetric QR steps with Wilkinson shift. Details can be found in Section 8.3 of Golub & Van Loan, Matrix Computations.

The cost of the computation is about $$9n^3$$ if the eigenvectors are required and $$4n^3/3$$ if they are not required.

This method reuses the memory in the SelfAdjointEigenSolver object that was allocated when the object was constructed, if the size of the matrix does not change.

Example:

SelfAdjointEigenSolver<MatrixXf> es(4);
MatrixXf X = MatrixXf::Random(4,4);
MatrixXf A = X + X.transpose();
es.compute(A);
cout << "The eigenvalues of A are: " << es.eigenvalues().transpose() << endl;
es.compute(A + MatrixXf::Identity(4,4)); // re-use es to compute eigenvalues of A+I
cout << "The eigenvalues of A+I are: " << es.eigenvalues().transpose() << endl;

Output:

The eigenvalues of A are:  -1.58 -0.473   1.32   2.46
The eigenvalues of A+I are: -0.581  0.527   2.32   3.46


◆ computeDirect()

template<typename MatrixType >
 SelfAdjointEigenSolver< MatrixType > & Eigen::SelfAdjointEigenSolver< MatrixType >::computeDirect ( const MatrixType & matrix, int options = ComputeEigenvectors )

Computes eigendecomposition of given matrix using a closed-form algorithm.

This is a variant of compute(const MatrixType&, int options) which directly solves the underlying polynomial equation.

Currently only 2x2 and 3x3 matrices for which the sizes are known at compile time are supported (e.g., Matrix3d).

This method is usually significantly faster than the QR iterative algorithm but it might also be less accurate. It is also worth noting that for 3x3 matrices it involves trigonometric operations which are not necessarily available for all scalar types.

For the 3x3 case, we observed the following worst case relative error regarding the eigenvalues:

• double: 1e-8
• float: 1e-3
See also
compute(const MatrixType&, int options)

◆ computeFromTridiagonal()

template<typename MatrixType >
 SelfAdjointEigenSolver< MatrixType > & Eigen::SelfAdjointEigenSolver< MatrixType >::computeFromTridiagonal ( const RealVectorType & diag, const SubDiagonalType & subdiag, int options = ComputeEigenvectors )

Computes the eigen decomposition from a tridiagonal symmetric matrix.

Parameters
 [in] diag The vector containing the diagonal of the matrix.
 [in] subdiag The subdiagonal of the matrix.
 [in] options Can be ComputeEigenvectors (default) or EigenvaluesOnly.
Returns
Reference to *this

This function assumes that the matrix has been reduced to tridiagonal form.

◆ eigenvalues()

template<typename MatrixType_ >
 const RealVectorType& Eigen::SelfAdjointEigenSolver< MatrixType_ >::eigenvalues ( ) const
inline

Returns the eigenvalues of given matrix.

Returns
A const reference to the column vector containing the eigenvalues.
Precondition
The eigenvalues have been computed before.

The eigenvalues are repeated according to their algebraic multiplicity, so there are as many eigenvalues as rows in the matrix. The eigenvalues are sorted in increasing order.

Example:

MatrixXd ones = MatrixXd::Ones(3,3);
SelfAdjointEigenSolver<MatrixXd> es(ones);
cout << "The eigenvalues of the 3x3 matrix of ones are:"
     << endl << es.eigenvalues() << endl;

Output:

The eigenvalues of the 3x3 matrix of ones are:
-3.09e-16
0
3

See also
eigenvectors(), MatrixBase::eigenvalues()

◆ eigenvectors()

template<typename MatrixType_ >
 const EigenvectorsType& Eigen::SelfAdjointEigenSolver< MatrixType_ >::eigenvectors ( ) const
inline

Returns the eigenvectors of given matrix.

Returns
A const reference to the matrix whose columns are the eigenvectors.
Precondition
The eigenvectors have been computed before.

Column $$k$$ of the returned matrix is an eigenvector corresponding to eigenvalue number $$k$$ as returned by eigenvalues(). The eigenvectors are normalized to have (Euclidean) norm equal to one. If this object was used to solve the eigenproblem for the selfadjoint matrix $$A$$, then the matrix returned by this function is the matrix $$V$$ in the eigendecomposition $$A = V D V^{-1}$$.

For a selfadjoint matrix, $$V$$ is unitary, meaning its inverse is equal to its adjoint, $$V^{-1} = V^{\dagger}$$. If $$A$$ is real, then $$V$$ is also real and therefore orthogonal, meaning its inverse is equal to its transpose, $$V^{-1} = V^T$$.

Example:

MatrixXd ones = MatrixXd::Ones(3,3);
SelfAdjointEigenSolver<MatrixXd> es(ones);
cout << "The first eigenvector of the 3x3 matrix of ones is:"
     << endl << es.eigenvectors().col(0) << endl;

Output:

The first eigenvector of the 3x3 matrix of ones is:
-0.816
0.408
0.408

See also
eigenvalues()

◆ info()

template<typename MatrixType_ >
 ComputationInfo Eigen::SelfAdjointEigenSolver< MatrixType_ >::info ( ) const
inline

Reports whether previous computation was successful.

Returns
Success if computation was successful, NoConvergence otherwise.

◆ operatorInverseSqrt()

template<typename MatrixType_ >
 MatrixType Eigen::SelfAdjointEigenSolver< MatrixType_ >::operatorInverseSqrt ( ) const
inline

Computes the inverse square root of the matrix.

Returns
the inverse positive-definite square root of the matrix
Precondition
The eigenvalues and eigenvectors of a positive-definite matrix have been computed before.

This function uses the eigendecomposition $$A = V D V^{-1}$$ to compute the inverse square root as $$V D^{-1/2} V^{-1}$$. This is cheaper than first computing the square root with operatorSqrt() and then its inverse with MatrixBase::inverse().

Example:

MatrixXd X = MatrixXd::Random(4,4);
MatrixXd A = X * X.transpose();
cout << "Here is a random positive-definite matrix, A:" << endl << A << endl << endl;
SelfAdjointEigenSolver<MatrixXd> es(A);
cout << "The inverse square root of A is: " << endl;
cout << es.operatorInverseSqrt() << endl;
cout << "We can also compute it with operatorSqrt() and inverse(). That yields: " << endl;
cout << es.operatorSqrt().inverse() << endl;

Output:

Here is a random positive-definite matrix, A:
1.41 -0.697 -0.111  0.508
-0.697  0.423 0.0991   -0.4
-0.111 0.0991   1.25  0.902
0.508   -0.4  0.902    1.4

The inverse square root of A is:
1.88   2.78 -0.546  0.605
2.78   8.61   -2.3   2.74
-0.546   -2.3   1.92  -1.36
0.605   2.74  -1.36   2.18
We can also compute it with operatorSqrt() and inverse(). That yields:
1.88   2.78 -0.546  0.605
2.78   8.61   -2.3   2.74
-0.546   -2.3   1.92  -1.36
0.605   2.74  -1.36   2.18

See also
operatorSqrt(), MatrixBase::inverse(), MatrixFunctions Module

◆ operatorSqrt()

template<typename MatrixType_ >
 MatrixType Eigen::SelfAdjointEigenSolver< MatrixType_ >::operatorSqrt ( ) const
inline

Computes the positive-definite square root of the matrix.

Returns
the positive-definite square root of the matrix
Precondition
The eigenvalues and eigenvectors of a positive-definite matrix have been computed before.

The square root of a positive-definite matrix $$A$$ is the positive-definite matrix whose square equals $$A$$. This function uses the eigendecomposition $$A = V D V^{-1}$$ to compute the square root as $$A^{1/2} = V D^{1/2} V^{-1}$$.

Example:

MatrixXd X = MatrixXd::Random(4,4);
MatrixXd A = X * X.transpose();
cout << "Here is a random positive-definite matrix, A:" << endl << A << endl << endl;
SelfAdjointEigenSolver<MatrixXd> es(A);
MatrixXd sqrtA = es.operatorSqrt();
cout << "The square root of A is: " << endl << sqrtA << endl;
cout << "If we square this, we get: " << endl << sqrtA*sqrtA << endl;

Output:

Here is a random positive-definite matrix, A:
1.41 -0.697 -0.111  0.508
-0.697  0.423 0.0991   -0.4
-0.111 0.0991   1.25  0.902
0.508   -0.4  0.902    1.4

The square root of A is:
1.09  -0.432 -0.0685     0.2
-0.432   0.379   0.141  -0.269
-0.0685   0.141       1   0.468
0.2  -0.269   0.468    1.04
If we square this, we get:
1.41 -0.697 -0.111  0.508
-0.697  0.423 0.0991   -0.4
-0.111 0.0991   1.25  0.902
0.508   -0.4  0.902    1.4

See also
operatorInverseSqrt(), MatrixFunctions Module

◆ m_maxIterations

template<typename MatrixType_ >
static const int Eigen::SelfAdjointEigenSolver< MatrixType_ >::m_maxIterations
static

Maximum number of iterations.

The algorithm terminates if it does not converge within m_maxIterations * n iterations, where n denotes the size of the matrix.