Eigen
3.4.90 (git rev 67eeba6e720c5745abc77ae6c92ce0a44aa7b7ae)

This page explains Eigen's reductions, visitors and broadcasting, and how they are used with matrices and arrays.
In Eigen, a reduction is a function taking a matrix or array and returning a single scalar value. One of the most used reductions is .sum(), returning the sum of all the coefficients inside a given matrix or array.
Example:  Output: 

#include <iostream>
#include <Eigen/Dense>
using namespace std;
int main()
{
Eigen::Matrix2d mat;
mat << 1, 2,
3, 4;
cout << "Here is mat.sum(): " << mat.sum() << endl;
cout << "Here is mat.prod(): " << mat.prod() << endl;
cout << "Here is mat.mean(): " << mat.mean() << endl;
cout << "Here is mat.minCoeff(): " << mat.minCoeff() << endl;
cout << "Here is mat.maxCoeff(): " << mat.maxCoeff() << endl;
cout << "Here is mat.trace(): " << mat.trace() << endl;
}
Here is mat.sum(): 10
Here is mat.prod(): 24
Here is mat.mean(): 2.5
Here is mat.minCoeff(): 1
Here is mat.maxCoeff(): 4
Here is mat.trace(): 5
The trace of a matrix, as returned by the function trace(), is the sum of the diagonal coefficients and can equivalently be computed as a.diagonal().sum().
The (Euclidean, a.k.a. \(\ell^2\)) squared norm of a vector can be obtained with squaredNorm(). It is equal to the dot product of the vector with itself, and equivalently to the sum of squared absolute values of its coefficients.
Eigen also provides the norm() method, which returns the square root of squaredNorm().
These operations can also operate on matrices; in that case, an n-by-p matrix is seen as a vector of size (n*p), so for example the norm() method returns the "Frobenius" or "Hilbert-Schmidt" norm. We refrain from speaking of the \(\ell^2\) norm of a matrix because that can mean different things.
If you want other coefficient-wise \(\ell^p\) norms, use the lpNorm<p>() method. The template parameter p can take the special value Infinity if you want the \(\ell^\infty\) norm, which is the maximum of the absolute values of the coefficients.
The following example demonstrates these methods.
Example:  Output: 

#include <Eigen/Dense>
#include <iostream>
int main()
{
Eigen::VectorXf v(2);
Eigen::MatrixXf m(2,2), n(2,2);
v << 1,
2;
m << 1,2,
3,4;
std::cout << "v.squaredNorm() = " << v.squaredNorm() << std::endl;
std::cout << "v.norm() = " << v.norm() << std::endl;
std::cout << "v.lpNorm<1>() = " << v.lpNorm<1>() << std::endl;
std::cout << "v.lpNorm<Infinity>() = " << v.lpNorm<Eigen::Infinity>() << std::endl;
std::cout << std::endl;
std::cout << "m.squaredNorm() = " << m.squaredNorm() << std::endl;
std::cout << "m.norm() = " << m.norm() << std::endl;
std::cout << "m.lpNorm<1>() = " << m.lpNorm<1>() << std::endl;
std::cout << "m.lpNorm<Infinity>() = " << m.lpNorm<Eigen::Infinity>() << std::endl;
}
v.squaredNorm() = 5
v.norm() = 2.23607
v.lpNorm<1>() = 3
v.lpNorm<Infinity>() = 2

m.squaredNorm() = 30
m.norm() = 5.47723
m.lpNorm<1>() = 10
m.lpNorm<Infinity>() = 4
Operator norm: The 1-norm and \(\infty\)-norm matrix operator norms can easily be computed as follows:
Example:  Output: 

#include <Eigen/Dense>
#include <iostream>
int main()
{
Eigen::MatrixXf m(2,2);
m << 1,2,
3,4;
std::cout << "1norm(m) = " << m.cwiseAbs().colwise().sum().maxCoeff()
<< " == " << m.colwise().lpNorm<1>().maxCoeff() << std::endl;
std::cout << "inftynorm(m) = " << m.cwiseAbs().rowwise().sum().maxCoeff()
<< " == " << m.rowwise().lpNorm<1>().maxCoeff() << std::endl;
}
1-norm(m)     = 6 == 6
infty-norm(m) = 7 == 7
See below for more explanations on the syntax of these expressions.
The following reductions operate on boolean values:
- all() returns true if all of the coefficients in a given Matrix or Array evaluate to true.
- any() returns true if at least one of the coefficients in a given Matrix or Array evaluates to true.
- count() returns the number of coefficients in a given Matrix or Array that evaluate to true.
These are typically used in conjunction with the coefficient-wise comparison and equality operators provided by Array. For instance, array > 0 is an Array of the same size as array, with true at those positions where the corresponding coefficient of array is positive. Thus, (array > 0).all() tests whether all coefficients of array are positive. This can be seen in the following example:
Example:  Output: 

#include <Eigen/Dense>
#include <iostream>
int main()
{
Eigen::ArrayXXf a(2,2);
a << 1,2,
3,4;
std::cout << "(a > 0).all() = " << (a > 0).all() << std::endl;
std::cout << "(a > 0).any() = " << (a > 0).any() << std::endl;
std::cout << "(a > 0).count() = " << (a > 0).count() << std::endl;
std::cout << std::endl;
std::cout << "(a > 2).all() = " << (a > 2).all() << std::endl;
std::cout << "(a > 2).any() = " << (a > 2).any() << std::endl;
std::cout << "(a > 2).count() = " << (a > 2).count() << std::endl;
}
(a > 0).all() = 1
(a > 0).any() = 1
(a > 0).count() = 4

(a > 2).all() = 0
(a > 2).any() = 1
(a > 2).count() = 2
TODO
In the meantime you can have a look at the DenseBase::redux() function.
Visitors are useful when one wants to obtain the location of a coefficient inside a Matrix or Array. The simplest examples are maxCoeff(&x,&y) and minCoeff(&x,&y), which can be used to find the location of the greatest or smallest coefficient in a Matrix or Array.
The arguments passed to a visitor are pointers to the variables where the row and column position are to be stored. These variables should be of type Index, as shown below:
Example:  Output: 

#include <iostream>
#include <Eigen/Dense>
int main()
{
Eigen::MatrixXf m(2,2);
m << 1, 2,
3, 4;
//get location of maximum
Eigen::Index maxRow, maxCol;
float max = m.maxCoeff(&maxRow, &maxCol);
//get location of minimum
Eigen::Index minRow, minCol;
float min = m.minCoeff(&minRow, &minCol);
std::cout << "Max: " << max << ", at: " <<
maxRow << "," << maxCol << std::endl;
std::cout << "Min: " << min << ", at: " <<
minRow << "," << minCol << std::endl;
}
Max: 4, at: 1,1
Min: 1, at: 0,0
Both functions also return the value of the minimum or maximum coefficient.
Partial reductions are reductions that can operate column- or row-wise on a Matrix or Array, applying the reduction operation on each column or row and returning a column or row vector with the corresponding values. Partial reductions are applied with colwise() or rowwise().
A simple example is obtaining the maximum of the elements in each column in a given matrix, storing the result in a row vector:
Example:  Output: 

#include <iostream>
#include <Eigen/Dense>
using namespace std;
int main()
{
Eigen::MatrixXf mat(2,4);
mat << 1, 2, 6, 9,
3, 1, 7, 2;
std::cout << "Column's maximum: " << std::endl
}
Column's maximum:
3 2 7 9
The same operation can be performed row-wise:
Example:  Output: 

#include <iostream>
#include <Eigen/Dense>
using namespace std;
int main()
{
Eigen::MatrixXf mat(2,4);
mat << 1, 2, 6, 9,
3, 1, 7, 2;
std::cout << "Row's maximum: " << std::endl
}
Row's maximum:
9
7
Note that column-wise operations return a row vector, while row-wise operations return a column vector.
It is also possible to use the result of a partial reduction to do further processing. Here is another example that finds the column whose sum of elements is the maximum within a matrix. With column-wise partial reductions this can be coded as:
Example:  Output: 

#include <iostream>
#include <Eigen/Dense>
int main()
{
Eigen::MatrixXf mat(2,4);
mat << 1, 2, 6, 9,
3, 1, 7, 2;
Eigen::Index maxIndex;
float maxNorm = mat.colwise().sum().maxCoeff(&maxIndex);
std::cout << "Maximum sum at position " << maxIndex << std::endl;
std::cout << "The corresponding vector is: " << std::endl;
std::cout << mat.col( maxIndex ) << std::endl;
std::cout << "And its sum is is: " << maxNorm << std::endl;
}
Maximum sum at position 2
The corresponding vector is:
6
7
And its sum is: 13
The previous example applies the sum() reduction on each column through the colwise() visitor, obtaining a new matrix whose size is 1x4.
Therefore, if
\[ \mbox{m} = \begin{bmatrix} 1 & 2 & 6 & 9 \\ 3 & 1 & 7 & 2 \end{bmatrix} \]
then
\[ \mbox{m.colwise().sum()} = \begin{bmatrix} 4 & 3 & 13 & 11 \end{bmatrix} \]
The maxCoeff() reduction is finally applied to obtain the column index where the maximum sum is found, which is the column index 2 (third column) in this case.
The concept behind broadcasting is similar to partial reductions, with the difference that broadcasting constructs an expression where a vector (column or row) is interpreted as a matrix by replicating it in one direction.
A simple example is to add a certain column vector to each column in a matrix. This can be accomplished with:
Example:  Output: 

#include <iostream>
#include <Eigen/Dense>
using namespace std;
int main()
{
Eigen::MatrixXf mat(2,4);
Eigen::VectorXf v(2);
mat << 1, 2, 6, 9,
3, 1, 7, 2;
v << 0,
1;
//add v to each column of m
mat.colwise() += v;
std::cout << "Broadcasting result: " << std::endl;
std::cout << mat << std::endl;
}
 Broadcasting result: 1 2 6 9 4 2 8 3 
We can interpret the instruction mat.colwise() += v in two equivalent ways. It adds the vector v to every column of the matrix. Alternatively, it can be interpreted as repeating the vector v four times to form a 2-by-4 matrix which is then added to mat:
\[ \begin{bmatrix} 1 & 2 & 6 & 9 \\ 3 & 1 & 7 & 2 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 6 & 9 \\ 4 & 2 & 8 & 3 \end{bmatrix}. \]
The operators -=, + and - can also be used column-wise and row-wise. On arrays, we can also use the operators *=, /=, * and / to perform coefficient-wise multiplication and division column-wise or row-wise. These operators are not available on matrices because it is not clear what they would do. If you want to multiply column 0 of a matrix mat with v(0), column 1 with v(1), and so on, then use mat = mat * v.asDiagonal().
It is important to point out that the vector to be added column-wise or row-wise must be of type Vector, and cannot be a Matrix. If this is not met then you will get a compile-time error. This also means that broadcasting operations can only be applied with an object of type Vector, when operating with a Matrix. The same applies for the Array class, where the equivalent for VectorXf is ArrayXf. As always, you should not mix arrays and matrices in the same expression.
To perform the same operation rowwise we can do:
Example:  Output: 

#include <iostream>
#include <Eigen/Dense>
using namespace std;
int main()
{
Eigen::MatrixXf mat(2,4);
Eigen::VectorXf v(4);
mat << 1, 2, 6, 9,
3, 1, 7, 2;
v << 0,1,2,3;
//add v to each row of m
mat.rowwise() += v.transpose();
std::cout << "Broadcasting result: " << std::endl;
std::cout << mat << std::endl;
}
Broadcasting result:
1 3 8 12
3 2 9 5
Broadcasting can also be combined with other operations, such as Matrix or Array operations, reductions and partial reductions.
Now that broadcasting, reductions and partial reductions have been introduced, we can dive into a more advanced example that finds the nearest neighbour of a vector v within the columns of matrix m. The Euclidean distance will be used in this example, computing the squared Euclidean distance with the partial reduction named squaredNorm():
Example:  Output: 

#include <iostream>
#include <Eigen/Dense>
int main()
{
Eigen::MatrixXf m(2,4);
Eigen::VectorXf v(2);
m << 1, 23, 6, 9,
3, 11, 7, 2;
v << 2,
3;
Eigen::Index index;
// find nearest neighbour
(m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
std::cout << "Nearest neighbour is column " << index << ":" << std::endl;
std::cout << m.col(index) << std::endl;
}
Nearest neighbour is column 0:
1
3
The line that does the job is (m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
We will go step by step to understand what is happening:
m.colwise() - v is a broadcasting operation, subtracting v from each column in m. The result of this operation is a new matrix whose size is the same as matrix m:
\[ \mbox{m.colwise() - v} = \begin{bmatrix} -1 & 21 & 4 & 7 \\ 0 & 8 & 4 & -1 \end{bmatrix} \]
(m.colwise() - v).colwise().squaredNorm() is a partial reduction, computing the squared norm column-wise. The result of this operation is a row vector where each coefficient is the squared Euclidean distance between each column in m and v:
\[ \mbox{(m.colwise() - v).colwise().squaredNorm()} = \begin{bmatrix} 1 & 505 & 32 & 50 \end{bmatrix} \]
Finally, minCoeff(&index) is used to obtain the index of the column in m that is closest to v in terms of Euclidean distance.