Bug 279 - Be more flexible regarding mixing types
Summary: Be more flexible regarding mixing types
Status: RESOLVED FIXED
Alias: None
Product: Eigen
Classification: Unclassified
Component: Core - general
Version: 3.0
Hardware: All
OS: All
Importance: --- Unknown
Assignee: Nobody
URL:
Whiteboard:
Keywords:
Depends on:
Blocks: 3.3
Reported: 2011-05-26 19:37 UTC by Gael Guennebaud
Modified: 2019-12-04 10:48 UTC
CC: 5 users



Attachments
Work in progress patch (140.22 KB, patch)
2016-06-03 16:40 UTC, Gael Guennebaud

Description Gael Guennebaud 2011-05-26 19:37:21 UTC
Our ability to allow mixed scalar types in operations is currently very limited: basically, we only allow complex<real> * real, including scalar multiples, coefficient-wise products, and matrix products.

However, it would be really desirable to allow, e.g., mixing an autodiff type with a standard scalar type. Casting a constant matrix to a matrix of autodiff variables is a waste of memory and computation.

We should also extend the mixing of types to operations other than products. It makes perfect sense to add a complex to a real, or to subtract scalars and autodiff variables.

All this information could be provided through a single:

template<typename A, typename B> struct PromoteType { typedef XXX ResultType; };

By default, when A!=B, this class would be empty and we could detect that to trigger a static assertion.

So this is more like a generalization of our current scalar_product_traits and of our current static assertions in BinaryOp.
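
For illustration, a minimal sketch of what such a trait could look like (hypothetical names, not existing Eigen API): the primary template is left empty so that any unsupported mix fails at the point where ResultType is requested, which is where a static assertion can fire.

#include <complex>

// Empty by default: mixing A and B is not allowed, and requesting
// PromoteType<A,B>::ResultType from inside a binary functor fails,
// which is the hook for a static assertion.
template<typename A, typename B> struct PromoteType {};

// Same type: trivially allowed.
template<typename T> struct PromoteType<T,T> { typedef T ResultType; };

// Conservative built-in exceptions: real <-> complex only.
template<typename T> struct PromoteType<std::complex<T>, T> { typedef std::complex<T> ResultType; };
template<typename T> struct PromoteType<T, std::complex<T> > { typedef std::complex<T> ResultType; };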


The main questions are:
- is it OK to assume the result scalar type does not depend on the operation?
- is it OK to assume that if mixing two types for a given operation is allowed, then it is allowed for any binary op? In the worst case, if the underlying binary scalar operation is not defined, then the user will get a compilation failure which should still be pretty clear, like:

cannot find atan2(double,my_custom_type)

see also the "overriding scalar_X_op for auto-dif" thread on the ML.
Comment 1 Benoit Jacob 2011-05-26 19:43:57 UTC
I'm a bit wary of automatic type promotion in general because it's often vague what the promoted type should be. For example, PromoteType<int,float> really shouldn't be defined. Now if you tell me that you're going to be conservative and only define PromoteType<A,B> when A is a subset of B (or conversely), so that PromoteType<A,B> is A or B in all cases where it's defined, then I think I'm OK.
Comment 2 Gael Guennebaud 2011-05-26 20:17:54 UTC
yes, PromoteType will be very conservative. As I said, by default PromoteType<A,B> with A!=B will be defined only for std::complex.

Then it is the responsibility of the user of a custom scalar type to properly specialize PromoteType for his/her scalar type.
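
Building on the PromoteType sketch above, the user-side specializations for a custom autodiff scalar would be just a couple of lines (adouble is a placeholder type, not an existing API):

struct adouble { double value, deriv; }; // stand-in for a real autodiff type

template<> struct PromoteType<adouble, double> { typedef adouble ResultType; };
template<> struct PromoteType<double, adouble> { typedef adouble ResultType; };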
Comment 3 Bob Carpenter 2011-05-27 01:22:06 UTC
(I'm the one who requested this feature -- thanks much for taking it up and summarizing.)

The context is that we have a class VAR for automatic differentiation. We've overridden all of the scalar ops in C++, the cmath lib, and Boost's C99 extensions where appropriate to deal with mixing VAR and double (i.e., all double values in the original functions). When used on its own, VAR works fine with Eigen in all the cases we've tested, from multiplications to eigenvalues.

To answer Gael and Benoit's questions:

1. In all cases we're dealing with, if VAR is the auto-dif variable class, (VAR _op_ double) and (double _op_ VAR) should return VAR type for all _op_ types.

2.  Yes, we've overridden all of the binary ops, at least those that are built into C++, cmath and most of Boost's C99 extensions.  That includes atan2(VAR,double), atan2(double,VAR), and atan2(VAR,VAR), so you shouldn't need to do anything to deal with this.

Although not critical (unless it's being used internally), it'd be nice to have (VAR += double) supported, but (double += VAR) doesn't make any sense.

3. In the templated probability density library I've written for the other part of our project, I use boost::math::promote_args, which has the behavior that I need in terms of promotion.  It deals with integer types, too, but it's intended for contexts where int gets cast to a float or double for arithmetic purposes, which is what we need, as we're only dealing with floating-point and VAR matrices and operations.
Comment 4 Bob Carpenter 2011-05-27 01:24:20 UTC
I should've clarified:  we have defined (VAR += double) but it'd be nice if it'd work with Eigen::Matrix<VAR,...> += Eigen::Matrix<double,...>.
Comment 5 Steve Wolter 2011-11-14 20:57:26 UTC
I'd like to add an additional use case here: Boost.Units compatibility.

When working scientifically, Boost.Units is a great boon. However, since Eigen is fairly type-promotion-agnostic, units don't work on multiplication. Given a quantity in meters of type double (MD), the result of MD * MD is a quantity in square meters of type double, but Eigen's multiplication routine expects it to be an MD.
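
To make the scalar-level issue concrete, here is a small standalone sketch (plain Boost.Units, no Eigen involved) of why the result type changes under multiplication; the headers and constants are the Boost.Units SI ones as I remember them:

#include <boost/units/quantity.hpp>
#include <boost/units/systems/si/length.hpp>
#include <boost/units/systems/si/area.hpp>

int main()
{
  using namespace boost::units;

  quantity<si::length, double> d = 2.0 * si::meters; // "MD": meters stored as double
  quantity<si::area, double>   a = d * d;            // MD * MD is square meters, a different type
  // quantity<si::length, double> bad = d * d;       // does not compile: dimensions differ
  (void)a;
  return 0;
}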

Blitz++ solves this problem fairly elegantly by using a specializable class for each basic operation.

Double type promotion, Bob's differentiation, and Boost.Units would be almost the same issue here, with the exception that the return type of unit operations does depend on the units used.

I'd be willing to help and lead in the implementation when I get a few pointers. However, before deciding about any go-ahead, it is probably important to know about Eigen's plans for C++11 - decltype would make any implementation easier by an order of magnitude.
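
(For illustration, the kind of C++11 machinery being alluded to, sketched outside of Eigen: with decltype the mixed result type can be deduced from the expression itself instead of being declared through traits.)

#include <utility> // std::declval

// Sketch only: deduce the result type of a mixed addition from the expression.
template<typename A, typename B>
struct SumResult {
  typedef decltype(std::declval<A>() + std::declval<B>()) type;
};

// e.g. SumResult<double, std::complex<double> >::type is std::complex<double>;
// a product analogue would likewise pick up the "square meters" type of MD*MD.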
Comment 6 Gael Guennebaud 2011-11-25 14:23:09 UTC
Steve,

unfortunately, in order to support as many compilers and platforms as possible, we don't plan to allow any C++11 features, especially in Core.

That said, we already have a template struct to control the returned scalar type of mixed products, e.g.:

namespace Eigen{
namespace internal{
template<typename T> struct scalar_product_traits<std::complex<T>, T>
{
  typedef std::complex<T> ReturnType;
};
}
}
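
A user with a custom scalar can already hook into the same mechanism for products; for example (adouble is a hypothetical autodiff type, and this assumes an Eigen header providing the primary scalar_product_traits template has been included):

struct adouble { double value, deriv; }; // placeholder custom scalar

namespace Eigen {
namespace internal {
// Mixed products with double yield adouble (sketch only).
template<> struct scalar_product_traits<adouble, double>
{
  typedef adouble ReturnType;
};
template<> struct scalar_product_traits<double, adouble>
{
  typedef adouble ReturnType;
};
}
}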
Comment 7 Gael Guennebaud 2016-04-04 09:26:17 UTC
For 3.3 we should really support real+complex and the like.
Comment 8 Gael Guennebaud 2016-06-02 20:36:12 UTC
Finally, some progress for scalar multiple:

https://bitbucket.org/eigen/eigen/commits/d896d06d500d/
Summary:     Implement generic scalar*expr and expr*scalar operator based on scalar_product_traits.
This is especially useful for custom scalar types, e.g., to enable float*expr<multi_prec> without conversion.
Comment 9 Gael Guennebaud 2016-06-03 12:05:54 UTC
In order to go further, I propose to generalize scalar_product_traits<Lhs,Rhs> as follows:

namespace Eigen {

template<typename LeftScalar,typename RightScalar,typename BinaryOp>
struct ScalarBinaryOpTraits {
  static const bool Defined = true/false; // or enum { Defined = 1/0 };
  typedef ... ReturnType;
};

}

In order to be backward compatible with code playing with internal::scalar_product_traits, I propose to only deprecate it, and to make the generic ScalarBinaryOpTraits inherit from scalar_product_traits (regardless of BinaryOp, i.e., not only for BinaryOp="mul").


For the record, here is the current list of binary operators:

*,/,-,+,min,max,pow,zeta,polygamma,atan2
some comparisons: <,<=,>,>=,==,!=
and for bool only: &&,||,^
Comment 10 Christoph Hertzberg 2016-06-03 13:04:07 UTC
Generally that sounds good to me. I'm not sure if it makes sense to let all traits inherit from internal::scalar_product_traits, however.

Furthermore, could you clarify what a user would need to define to allow (e.g.) directly adding double to a custom Float type?

template<>
struct ScalarBinaryOpTraits<double, MyFloat, scalar_add_op> {
  static const bool Defined=true;
  typedef MyFloat ReturnType;
};

Especially, shall the third parameter be scalar_add_op<MyFloat>, or will there be a new scalar_add_op<double, MyFloat> and the third parameter is actually:
 template<typename LeftScalar, typename RightScalar,
          template<typename,typename> class BinaryOp>
 struct BinaryOpTraits {....};


Currently, scalar_product_traits is referenced by scalar_product_op; I'm not sure if we can recursively reference the op from ScalarBinaryOpTraits here (it could be possible, since scalar_product_op is forward declared).

Also, will we have different traits for + and - (or for == and !=, etc)?
Comment 11 Gael Guennebaud 2016-06-03 14:32:17 UTC
My current impl. looks like this:

template<typename T, typename U> struct scalar_product_traits
{
  enum { Defined = 0 };
};

// all other specializations of scalar_product_traits have been removed.

template<typename ScalarA, typename ScalarB, typename BinaryOp>
struct ScalarBinaryOpTraits
#ifndef EIGEN_PARSED_BY_DOXYGEN
  // for backward compatibility, use the hints given by the (deprecated) internal::scalar_product_traits class.
  : internal::scalar_product_traits<ScalarA,ScalarB>
#endif // EIGEN_PARSED_BY_DOXYGEN
{};

template<typename T, typename BinaryOp>
struct ScalarBinaryOpTraits<T,T,BinaryOp>
{
  enum { Defined = 1 };
  typedef T ReturnType;
};

template<typename T, typename BinaryOp>
struct ScalarBinaryOpTraits<T,std::complex<T>,BinaryOp>
{
  enum { Defined = 1 };
  typedef std::complex<T> ReturnType;
};

template<typename T, typename BinaryOp>
struct ScalarBinaryOpTraits<std::complex<T>, T,BinaryOp>
{
  enum { Defined = 1 };
  typedef std::complex<T> ReturnType;
};


Then, scalar_product_op does:

typedef typename ScalarBinaryOpTraits<LhsScalar,RhsScalar>::ReturnType result_type;

and I also generalized scalar_sum_op to take two scalar types and do the same as scalar_product_op.

Finally, EIGEN_CHECK_BINARY_COMPATIBILIY(BINOP,LHS,RHS) boils down to 

ScalarBinaryOpTraits<LHS, RHS,BINOP>::Defined
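
Under this scheme, enabling a custom scalar to mix with double for every op would boil down to something like the following sketch (adouble is a placeholder, and the final shape of the trait may still change):

struct adouble { double value, deriv; }; // placeholder custom scalar

namespace Eigen {
// Mixing adouble and double is allowed for any BinaryOp and yields adouble.
template<typename BinaryOp>
struct ScalarBinaryOpTraits<adouble, double, BinaryOp>
{
  enum { Defined = 1 };
  typedef adouble ReturnType;
};
template<typename BinaryOp>
struct ScalarBinaryOpTraits<double, adouble, BinaryOp>
{
  enum { Defined = 1 };
  typedef adouble ReturnType;
};
}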

As you probably noticed, so far I did not bother about the binary op, as it is still unclear to me what should be passed. So the first question to answer is whether we really need to allow specializations based on the underlying op.

If no, then let's just remove the third template parameter of ScalarBinaryOpTraits.

If yes, then we have to find an elegant solution, because it would be too much trouble to specialize based on the exact binary functors; there are too many variants. For products alone:

scalar_product_op
scalar_conj_product_op
mul_assign_op
scalar_multiple2_op


Perhaps we could add generic overloads of ScalarBinaryOpTraits for all of our functors that would inherit from some generic tag, e.g.:

template<typename A, typename B> struct ScalarBinaryOpTraits<A,B,scalar_conj_product_op<A,B> > : ScalarBinaryOpTraits<A,B,product_tag> {};

Same for the comparisons.

This way we can introduce some kind of hierarchy which can be fine-tuned at any level...
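
Spelled out as a self-contained toy (all names hypothetical, independent of the actual Eigen functors), the tag idea could look like this:

#include <complex>

struct product_tag {};                                       // generic "this is a product" tag

template<typename A, typename B> struct product_op {};       // stand-ins for the concrete
template<typename A, typename B> struct conj_product_op {};  // product-like functors

// Primary: mixing not defined.
template<typename A, typename B, typename Op>
struct BinOpTraits { enum { Defined = 0 }; };

// Every concrete product-like functor forwards to the product_tag entry...
template<typename A, typename B>
struct BinOpTraits<A, B, product_op<A,B> >      : BinOpTraits<A, B, product_tag> {};
template<typename A, typename B>
struct BinOpTraits<A, B, conj_product_op<A,B> > : BinOpTraits<A, B, product_tag> {};

// ...so a single user specialization covers all product variants at once.
template<>
struct BinOpTraits<double, std::complex<double>, product_tag>
{ enum { Defined = 1 }; typedef std::complex<double> ReturnType; };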
Comment 12 Christoph Hertzberg 2016-06-03 15:41:07 UTC
Wouldn't it be possible to make default implementations of (at least) scalar_multiple_op and mul_assign_op based on scalar_product_op?

Generally, they should have the same Defined and PacketAccess, and the Cost should just differ by perhaps a ReadCost. For the OP_assign_op<T1,T2> we must also check if is_same<T1,scalar_OP_op<T1,T2>::ReturnType> is true, i.e., adding a CustomFloat to an ordinary float usually should be impossible without prior casting.

ScalarBinaryOpTraits could then simply check whether the corresponding operator is defined (with some SFINAE magic, perhaps; or we make default implementations for every operator which just contain enum { Defined = is_same<T1,T2>::value };).
And EIGEN_CHECK_BINARY_COMPATIBILIY(BINOP,LHS,RHS) is simply:
  ScalarBinaryOpTraits<BINOP<LHS,RHS> >::Defined
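
One way the SFINAE detection could be spelled out in C++98-style terms (illustrative only, not how Eigen would actually implement the check):

#include <complex>

namespace detect {

typedef char yes_type;              // sizeof == 1
struct no_type { char pad[2]; };    // sizeof == 2

// Fallback: anything converts to any_t, so this operator+ is viable for every
// pair of types, but it is only picked when no better (real) operator+ exists.
struct any_t { template<typename T> any_t(const T&); };
no_type operator+(const any_t&, const any_t&);

no_type check(no_type);                        // chosen when the fallback fired
template<typename T> yes_type check(const T&); // chosen for any real result type

template<typename A, typename B>
struct has_plus
{
  static A make_a(); // declared only; used in an unevaluated context
  static B make_b();
  enum { value = sizeof(check(make_a() + make_b())) == sizeof(yes_type) };
};

} // namespace detect

// detect::has_plus<double, std::complex<double> >::value == 1
// detect::has_plus<double, void*>::value == 0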

Comparison operators may need some special handling, if they shall remain in the same scalar_cmp_op (which itself makes sense, and which will make implementing packet-comparison easier later).

If we want to simplify defining custom type mixing, we can provide a set of macros which implement specializations of a certain set of operators each. (We can declare the macros as experimental for 3.3.0, in case we are not sure about the API).
Comment 13 Gael Guennebaud 2016-06-03 16:31:09 UTC
I think that we should really be able to say that "A op B" is fine regardless of op, which is not possible with only ScalarBinaryOpTraits<BINOP<LHS,RHS> >.
Comment 14 Gael Guennebaud 2016-06-03 16:40:37 UTC
Created attachment 713 [details]
Work in progress patch

For the record, and to help discussion, here is the current status of my refactoring.

Recursively calling ScalarBinaryOpTraits<A,B,OP> from OP<A,B> seems to be OK.

It should also be noted that if someone enables "A op B" for any op but some ops are not supported, then the user will still get a quite nice error message. For instance, with the attached patch, it is OK for Eigen to perform real_array = complex_array because we enabled "real op complex" for any op, but the compiler still generates:

AssignmentFunctors.h:24:102: error: assigning to 'double' from incompatible type 'const std::__1::complex<double>'

Which is probably better than what we could do on our own...
Comment 15 Christoph Hertzberg 2016-06-03 16:54:03 UTC
OK, if that means that as long as there are specializations for A OP B (or func(A,B)) everything works as expected, and otherwise a meaningful error is generated, then we can't get much better.

I hope I can have a closer look at your patch over the weekend ...
Comment 16 Gael Guennebaud 2016-06-03 21:34:04 UTC
For completeness, here is the current error message for: real_array.pow(complex_array)

(yes, this is a bad example, as this configuration is supported by the standard, and we thus should support it as well)


../eigen/Eigen/src/Core/functors/BinaryFunctors.h:254:83: error: no matching function for call to 'pow'
  inline Scalar operator() (const Scalar& a, const OtherScalar& b) const { return numext::pow(a, b); }
                                                                                  ^~~~~~~~~~~
../eigen/Eigen/src/Core/CoreEvaluators.h:503:12: note: in instantiation of member function 'Eigen::internal::scalar_binary_pow_op<double, std::__1::complex<double>
      >::operator()' requested here
    return m_functor(m_lhsImpl.coeff(index), m_rhsImpl.coeff(index));

[... very long list of template instantiations...]

If we add:

template<typename T>
struct ScalarBinaryOpTraits<T,std::complex<T>, internal::scalar_binary_pow_op<T, std::complex<T> > > {
  enum { Defined=0 };
};

then we get the same error message preceded by another one, much shorter:

../eigen/Eigen/src/Core/CwiseBinaryOp.h:106:7: error: no member named
      'YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY' in 'Eigen::internal::static_assertion<false>'
      EIGEN_CHECK_BINARY_COMPATIBILIY(BinaryOp,typename Lhs::Scalar,typename Rhs::Scalar);
      ^
../eigen/Eigen/src/Core/util/XprHelper.h:659:5: note: expanded from macro 'EIGEN_CHECK_BINARY_COMPATIBILIY'
    YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY)
    ^
../eigen/Eigen/src/Core/../plugins/ArrayCwiseBinaryOps.h:89:10: note: in instantiation of member function
      'Eigen::CwiseBinaryOp<Eigen::internal::scalar_binary_pow_op<double, std::__1::complex<double> >, const Eigen::Array<double, -1, 1, 0, -1, 1>, const
      Eigen::Array<std::__1::complex<double>, -1, 1, 0, -1, 1> >::CwiseBinaryOp' requested here
  return CwiseBinaryOp<internal::scalar_binary_pow_op<Scalar, typename ExponentDerived::Scalar>, const Derived, const ExponentDerived>(
         ^
mixing.cpp:21:12: note: in instantiation of function template specialization 'Eigen::ArrayBase<Eigen::Array<double, -1, 1, 0, -1, 1>
      >::pow<Eigen::Array<std::__1::complex<double>, -1, 1, 0, -1, 1> >' requested here
  acd = ad.pow(acd);


So it might still be a good idea to explicitly disable the very few remaining incompatible configurations.
Comment 17 Gael Guennebaud 2016-06-14 13:35:21 UTC
I think that the set of required changes is ready to be merged; see this PR for details: https://bitbucket.org/eigen/eigen/pull-requests/194
Comment 18 Gael Guennebaud 2016-06-23 13:32:25 UTC
The PR has been merged, thus closing this entry.
Comment 19 Nobody 2019-12-04 10:48:33 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/279.
