Working notes - Tensor module

From Eigen
Latest revision as of 14:20, 7 January 2022

See also: Tensor support for a general introduction

IMPORTANT: A lot of the stuff on this page is now obsolete due to work by Benoit Steiner. This page will be updated soon. In the meantime, take a look at this mailing list thread.

General TODO list

Let's start with a general TODO / wishlist. Some of these things might be very difficult to implement, so the first goal would be to start with the lower-hanging fruit on this list.

  • test the whole thing with different compilers (MSVC etc.) and fix potential compatibility problems in this regard
  • simple expressions such as adding / subtracting two tensors and scalar multiplication
  • subtensors / generalized blocks
  • TensorMap analogous to Map
  • Ref<Tensor> or (if that's not possible) TensorRef<Tensor>
  • reshaping (analogous to the to-be-implemented version in matrices)
  • .asMatrix() for tensors (or tensor expressions) with two indices
  • .asVector() for tensors (or tensor expressions) with one index
  • similarly, somehow implement a .asTensor() expression for the standard matrices/vectors/array (if we can do that in a way that doesn't force people to use a C++11 compiler if they don't want to use tensors...)
  • tensor "arrays", so that element-wise multiplication (or sine or ...) is possible
  • efficient full and partial transpositions (full transposition: reverse order of all indices, partial: reverse only a few of them)
  • contractions
  • tensor product of two matrices that generates a tensor of rank 4
  • generic expressions with index placeholders, such as result(_1, _2, _3) = tensor1(_1, _, _) * tensor2(_, _2, _3) or similar (we need to define a nice API for this)
    • in that vein, use efficient GEMM kernels for this when possible (might actually make sense to first partially transpose, then use GEMM kernel and then partially transpose back for some expressions, but not for others -> need some way of determining cost of these expressions)
    • see this mailing list posting for some ideas and references to other C++ tensor libraries
  • nice user interface to write loops over multiple indices (don't have to write 5 for-loops)
  • comma-initializer for tensors? if tensor has 4 indices, how do we handle this?
  • fixed-size tensors
  • sparse tensors

Class Hierarchy

Currently, there is no class hierarchy, and the Tensor class template only stores values. Ideally, we want to have a class hierarchy in the same way the rest of Eigen does, because people have put in a lot of thought into that.

See also: Eigen Documentation: Class Hierarchy

Proposal for naming the new class templates (objects)

Class for matrices   Proposed name for tensors
EigenBase            EigenTensorBase
DenseCoeffsBase      DenseCoeffsTensorBase
DenseBase            DenseTensorBase
MatrixBase           TensorBase
PlainObjectBase      PlainObjectTensorBase
Matrix               Tensor
ArrayBase            TensorArrayBase (to be implemented later)
Array                TensorArray (to be implemented later)
sparse stuff         let's cross that bridge when we get there...

Packet access functions

Most of the code in the above classes can be adjusted quite mechanically to fit the tensor structure (or just be left out for now because it's not immediately needed). However, there are exceptions with the packet access functions:

  • packet() (see e.g. PlainObjectBase) is fine, since it accepts one or two indices as arguments and just returns a PacketScalar. It can easily be changed to support variadic arguments / std::array<Index, N> in order to map to tensor indices
  • writePacket() is problematic because it has two types of arguments: first one or two indices which determine the start of the packet and then a PacketScalar that defines the value to be stored. For matrices/vectors this argument order is completely fine, but if one wants to make the indices variadic, then this fails, since variadic arguments have to be the last set of arguments of a function
    • i.e. template<int loadMode, typename... IndexTypes> void writePacket(IndexTypes..., PacketScalar value) won't work in that argument order
    • (this problem does not apply to coeffRef() since it returns a reference to a coefficient)
    • maybe we are overthinking this? packet() / writePacket() seem to be called only from within Eigen itself, never externally, so maybe we just force the argument to be either a single index (for linear access) or a std::array<Index, N> and don't accept variadic arguments at this level?

Expression template names

In Eigen there are a lot of expression templates such as CwiseBinaryOp etc. For tensors, there has to be a completely new implementation. However, it would be nice to reuse the name and not have to create names like TensorCwiseBinaryOp for each and every one of them.

Issues:

  • is it possible to specialize CwiseBinaryOp<BinOp, Lhs, Rhs> for Lhs and Rhs being of type EigenTensorBase?
  • if so, is it also possible to specialize internal::traits<CwiseBinaryOp> in the same manner? or should we stay away from internal::traits anyway (currently used minimally by the Tensor class itself) and introduce internal::tensor_traits?

What is the best solution here? Or should we just stick with TensorCwiseBinaryOp?