On the documentation page about norms (http://eigen.tuxfamily.org/dox/group__TutorialReductionsVisitorsBroadcasting.html#title1) it is written that: "The template parameter p can take the special value Infinity if you want the infinity norm, which is the maximum of the absolute values of the coefficients." The way the norm is computed reflects this definition (see Core/Dot.h, line 193), but the definition holds true only for vectors. The infinity norm of a matrix is not the maximum of the absolute values of the coefficients, but rather the maximum absolute row sum of the matrix (see Wikipedia, https://en.wikipedia.org/wiki/Matrix_norm).

How to reproduce:

Eigen::MatrixXd eX = Eigen::MatrixXd(3,3);
eX << -3,5,7, 2,6,4, 0,2,8;
std::cout << eX.lpNorm<Eigen::Infinity>() << std::endl; // Should be 15, is 8 instead

Our norm and lpNorm implementations have always been treated as vector norms or "entry-wise norms" (i.e., an m x n matrix is interpreted as an (m*n) x 1 vector), not as operator norms (otherwise, all the other norms would be wrong, too). We could probably make this clearer in the documentation -- and if somebody really needs/wants to implement this, we could provide these methods: operatorNorm<int norm>(); operatorNorm<int start, int end>(); (or matrixNorm, instead of operatorNorm)

Oh, now I see where it is written... But I still think it should be pointed out more clearly. I would suggest something like: "If you want other $\ell^p$ norms, use the lpNorm<p>() method. The template parameter p can take the special value Infinity if you want the $\ell^\infty$ norm, which is the maximum of the absolute values of the coefficients. Keep in mind that lpNorm applied to a matrix treats the matrix as a vector, and **does not** return the operator norm of the matrix." Also, I'm new to Eigen, but I imagine that implementing a matrixNorm<int norm>() function should be fairly easy. This would also make it possible to be more explicit when computing the Frobenius norm: m.norm() --> m.matrixNorm<Frobenius>()

For the doc: https://bitbucket.org/eigen/eigen/commits/dbfa22e9d1ff/

Changeset: dbfa22e9d1ff
User: ggael
Date: 2015-09-28 09:55:36+00:00
Summary: Bug 1071: improve doc on lpNorm and add example for some operator norms

Do we want to add shortcuts for matrixNorm methods? It would currently be limited to 1 and Infinity. Maybe it's simpler to let users do their own cooking, as in the examples I added in the doc?

The 2-norm would be possible as well, using an SVD decomposition. For norms above 2 and below Infinity, I'm not sure whether there is a closed-form solution, or whether they have any practical value. Having m.matrixNorm<Frobenius>() == m.norm() and m.matrixNorm<MaxNorm>() == m.lpNorm<Infinity>() might be nicer to read, but I'm not sure we really need this either.

I think that having a function called m.lpNorm will continue to confuse users, because it's not clear exactly what we are taking the norm of. Before sifting through the docs, I see three equally valid interpretations: calculate the norm of each row and return the largest; the same, but with column norms; or take the induced matrix norm.

However, the current m.lpNorm doesn't actually calculate a matrix norm (hence the misleading name). All matrix norms must satisfy the constraint norm(A*B) <= norm(A)*norm(B). Let A = [ 1, -1; -1, 1 ] and B = [ 1, 1; -1, -1 ]; then (if my math is working) (A*B).lpNorm<Infinity>() = 2 while (A).lpNorm<Infinity>() = 1 and (B).lpNorm<Infinity>() = 1, so the constraint would require 2 <= 1, which fails.

I think it's a bad idea to call the function m.lpNorm for matrices. A method such as m.colPNorm(), returning the vector of column-wise norms, or m.maxColPNorm(), returning the result of the current m.lpNorm, would emphasize that we aren't calculating a matrix norm with this method.

Regarding the matrix lp norm, only the 1-norm, the 2-norm (aka spectral norm), and the infinity norm are practical to calculate (among the induced p-norms). In general I prefer using the Frobenius norm, as mentioned before: it's relatively simple and cheap, and has all the properties of a matrix norm.

We will not change the behavior of lpNorm for matrices, since this is more likely to break code than to be beneficial. The Frobenius norm is currently the default norm for matrices (as it is essentially the entry-wise 2-norm). You can already calculate column- or row-wise norms using M.colwise().norm(). This returns a vector which can be further reduced to a scalar using maxCoeff(), sum(), norm(), ...

Okay, thanks for the reply. As long as we are using Frobenius by default under the hood, I'm satisfied; it's my favorite. Since we already have M.colwise().norm() and maxCoeff(), that takes care of the 1-norm and the infinity norm. The 2-norm is the largest singular value, which is already supported via SVD (although there is probably a smarter algorithm). Any other induced p-norm is impossible/infeasible to calculate. Perhaps it would be beneficial to put all of these norms next to each other in the documentation, and call them something like m.induced_{one,two,inf}_norm, since those are the only three induced p-norms possible?

-- GitLab Migration Automatic Message -- This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/1071.