polygamma computes the n-th derivative of the digamma (psi) function evaluated at position x. In all tools I checked, the parameter order is polygamma(n,x) (both n and x can be arrays), and the first parameter is sometimes optional (defaulting to n=0). It is thus natural to follow the same convention for the free function: Eigen::polygamma(n,x). For the method-based API, however, it feels odd to write:

(1) n.polygamma(x);

I'd rather reverse the parameters and write:

(2) x.polygamma(n);

but the first version is already implemented and used in Tensor. So for now, while fixing polygamma in Core (it was defined as a unary operator), I disabled the method-based API and kept only the free function. Perhaps we can keep it that way, and also remove the other ambiguous method-based binary operators introduced in 3.3, like zeta. What do you think?
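For reference, the convention under discussion: psi(x) is the derivative of ln Gamma(x), and polygamma(n,x) is the n-th derivative of psi at x, with n coming first. A rough numerical sketch (not Eigen code; the helper name is made up, and it just applies central differences to std::lgamma) illustrating that ordering:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper, for illustration only: approximate
// polygamma(n, x) by repeated central differences of std::lgamma.
// Accuracy degrades quickly as n grows.
double polygamma_fd(int n, double x, double h = 1e-4) {
    if (n == 0)  // digamma: psi(x) = d/dx log(Gamma(x))
        return (std::lgamma(x + h) - std::lgamma(x - h)) / (2 * h);
    // polygamma(n, x) = d/dx polygamma(n-1, x)
    return (polygamma_fd(n - 1, x + h, h)
          - polygamma_fd(n - 1, x - h, h)) / (2 * h);
}
```

The free function keeps the same n-first order, Eigen::polygamma(n, x), which matches psi(0,x) == digamma(x) when n is zero.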

Regarding polygamma(n,x) vs polygamma(x,n): indeed, only the first seems to be used (sometimes it is called psi(n,x) or psi[n](x), depending on what syntax the language allows).

On the more general question of free functions vs the method-based API: I also think disabling the method-based API would simplify things a lot. In fact, the definition of a new function could then be completely wrapped into a single macro (assuming a scalar implementation of the function exists):

#define EIGEN_DEFINE_BINARY_FUNCTION(name, func) \
namespace Eigen { namespace internal { \
  template<typename Scalar> struct name ## _op { /* implementation using func */ }; \
} \
template<typename Derived1, typename Derived2> \
inline const Eigen::CwiseBinaryOp< \
    internal::name ## _op<typename Derived1::Scalar>, \
    const Derived1, const Derived2> \
name(const Eigen::ArrayBase<Derived1>& a, \
     const Eigen::ArrayBase<Derived2>& b) \
{ return Eigen::CwiseBinaryOp<.....>(a.derived(), b.derived()); } \
}

If we make this macro public (also for unary and ternary functors), it would be very easy for users to implement their own Array functions:

#include <boost/math/special_functions/beta.hpp>
EIGEN_DEFINE_BINARY_FUNCTION(betainc, boost::math::ibeta)

With some thought, we may even be able to encapsulate the packet_traits<Scalar>::HasFunction flags into the corresponding function_op structures (adding more and more enums to packet_traits is not really future-proof, I think).
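To see the code-generation idea in isolation, here is a self-contained stand-in (the macro name and the std::vector scaffolding are invented for this sketch; they are not the proposed Eigen macro, which would build a CwiseBinaryOp expression over ArrayBase instead):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Simplified stand-in for the proposed EIGEN_DEFINE_BINARY_FUNCTION:
// given a name and a scalar binary function, stamp out an elementwise
// free function. Eigen's version would return a lazy expression
// rather than an eagerly evaluated vector.
#define DEFINE_ELEMENTWISE_BINARY(name, func)                 \
    template <typename Scalar>                                \
    std::vector<Scalar> name(const std::vector<Scalar>& a,    \
                             const std::vector<Scalar>& b) {  \
        std::vector<Scalar> r(a.size());                      \
        for (std::size_t i = 0; i < a.size(); ++i)            \
            r[i] = func(a[i], b[i]);                          \
        return r;                                             \
    }

// One line per new function, exactly as envisioned for betainc:
DEFINE_ELEMENTWISE_BINARY(elem_pow, std::pow)
DEFINE_ELEMENTWISE_BINARY(elem_atan2, std::atan2)
```

The user-facing pattern is the same as in the betainc example above: pick a scalar implementation, invoke the macro once, and the array-level free function exists.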

If we removed all method-based implementations of the special functions and left only the free functions (i.e., also for the unary functions), we could easily (and cleanly) move them into a separate module. I guess they are barely used, and the implementation has gotten quite big (if the new beta_inc is included, sloccount counts 881 SLOCs in SpecialFunctions.h, whereas, e.g., the entire (dense) Cholesky module has 766 SLOCs). And honestly, I would prefer having them in an unsupported module for now, as I don't really trust the implementations (nor do we have extensive unit tests at the moment).

Re: moving into a separate module. Sounds like it would need to be done in the following order:

1. remove the method-based implementations of lgamma, igamma*, zeta, polygamma, etc.
2. change all unit tests to use the free functions instead
3. split off SpecialFunctions into its own module (how would I do this? no clue here...). Would this also include splitting up SpecialFunctions.h into multiple subfiles?

Step 0: Decide if we want this, and if we want it in unsupported at first. Step 3 won't be too complicated then.

yes, it's probably safer to move them into a separate unsupported module first. We should also decide where to draw the line: shall we (a) move everything, or (b) keep the ones that are simple wrappers on top of C++11's math functions? I would go with (b). Eventually, in the longer term, I'm wondering whether it would not make more sense to simply wrap boost::math. If CUDA compatibility is the issue, perhaps it would then make more sense to propose a patch to boost::math?

Simply wrapping existing functions (such as std-math or boost::math) would have the big advantage that we would not be responsible for errors in their implementations. Even if we just copied an existing implementation, I'd feel responsible for fixing errors in it. A disadvantage is that there would be no easy mechanism to provide packet implementations of these functions (however, some of these implementations are so highly iterative that I doubt efficient SIMD implementations can be made anyway). Regarding (a) or (b), I'm not sure. I think they are somehow closely related, which would justify putting all of them into a single module. On the other hand, the wrappers around the std functions are so simple that we could keep them in Core. I guess I'd be ok with both.
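For a concrete picture of what such a "simple wrapper on top of std-math" looks like (the functor below is an illustrative sketch, not Eigen's actual internals): the scalar side is a one-line functor, and the trade-off mentioned above is that without a separate packetOp() there is no SIMD path, so evaluation stays element by element.

```cpp
#include <cassert>
#include <cmath>

// Illustrative unary functor wrapping std::lgamma (available in
// <cmath> since C++11). An Eigen-style wrapper would plug this into
// a CwiseUnaryOp; lacking a packet implementation, the expression
// evaluator falls back to this scalar call per coefficient.
template <typename Scalar>
struct lgamma_op {
    Scalar operator()(const Scalar& x) const { return std::lgamma(x); }
};
```

This is the kind of thin wrapper option (b) would keep in Core, since there is essentially nothing in it to maintain.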

Another advantage of providing our own implementations is that we could more easily provide specializations for mixed (int,double) / (double,int) arguments (several special functions become rather trivial if at least one argument is an integer). Also, if that is ever needed, specializations for complex arguments would be possible (which I think boost does not provide).
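One example of the integer-argument simplification (a hedged sketch; the function name is mine): for a positive integer n, digamma collapses to a harmonic sum, psi(n) = -gamma + sum_{k=1}^{n-1} 1/k, so an int specialization can skip the general series/asymptotic machinery entirely.

```cpp
#include <cassert>
#include <cmath>

// digamma at a positive integer: psi(n) = -euler_gamma + H_{n-1},
// where H_m is the m-th harmonic number. A mixed (int, double)
// specialization could dispatch to this trivial closed form.
double digamma_int(int n) {
    const double euler_gamma = 0.5772156649015328606;
    double h = 0.0;
    for (int k = 1; k < n; ++k) h += 1.0 / k;  // H_{n-1}
    return h - euler_gamma;
}
```

For instance psi(1) = -euler_gamma and psi(2) = 1 - euler_gamma, with no iteration to speak of.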

My reasoning for (b) was to support the "standard" by default, but with this reasoning Core might become extremely large as the STL grows. So ok to move the whole SpecialFunctions file.

I've created a PR for the refactoring: https://bitbucket.org/eigen/eigen/pull-requests/210

-- GitLab Migration Automatic Message -- This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/1232.