| Summary: | EigenTensor evalGemm to use MKL batched gemm if MKL on | | |
|---|---|---|---|
| Product: | Eigen | Reporter: | william.tambellini |
| Component: | Tensor | Assignee: | Nobody <eigen.nobody> |
| Status: | NEW --- | | |
| Severity: | Feature Request | CC: | benoit.steiner.goog, chtz, gael.guennebaud, rmlarsen, william.tambellini |
| Priority: | Normal | | |
| Version: | 3.3 (current stable) | | |
| Hardware: | x86 - general | | |
| OS: | Linux | | |
| Whiteboard: | | | |
Description

william.tambellini
2018-08-27 23:10:16 UTC

Batched GEMM using MKL seems to be implemented in TensorFlow in ~/repos/tensorflow/tensorflow/core/kernels/mkl_batch_matmul_op.cc: method `MklCblasGemmBatch()` in `class BatchMatMulMkl : public OpKernel`. Would someone know why it has not been implemented directly in EigenTensor?

Kind regards

Comment from Christoph Hertzberg (chtz):

I don't know the details of the Tensor internals, but it looks like it is not easily possible to simply call `internal::general_matrix_matrix_product`, because the TensorContraction would in some cases need a matrix-matrix product with inner strides. Maybe it is worth falling back to a GEMM call when the inner strides allow it. I also don't know if we want to have a batched GEMM implementation in Eigen -- I guess it could reduce overhead if lots of same-sized products are to be evaluated (and it could exploit multi-threading even for smaller matrices). Does TensorFlow actually use batched GEMM for contraction? You just referred to the place where they wrap the corresponding MKL function.

Comment from william.tambellini:

Hi Christoph, thanks.

- Calling `internal::general_matrix_matrix_product` in the tensor evalGemm would not take advantage of MKL's batched matmul anyway, because I suppose `general_matrix_matrix_product` does not handle multiple matmuls at a time and does not seem to call `cblas_?gemm_batch`.
- I am not proposing a general, generic batched GEMM for the whole of Eigen. I am just thinking of taking advantage of MKL in TensorContraction's evalGemm. It could be similar to `evalGemmXSMM()`, could be called `evalGemmMKL(...)`, and be capable of using MKL's batched GEMM. Would limiting the implementation to TensorContraction be acceptable (no change to Eigen Core)?
- The perf gain of MKL's batched GEMM over non-batched GEMM is given here: https://software.intel.com/en-us/articles/introducing-batch-gemm-operations
- TF, when built with MKL, seems to use batched GEMM if the operator (`BatchMatMulMkl`) is of course used/called:

```
// This file uses MKL CBLAS batched xGEMM for acceleration of TF Batch
// Matrix-Matrix Multiplication (MatMul) operations.
// We currently register this kernel only for MKL supported data
// types (float, double, complex64, complex128). The macro INTEL_MKL is defined
// by the build system only when MKL is chosen as an option at configure stage
// and when it is undefined at build time, this file becomes an empty
// compilation unit
```

Kind regards

-- GitLab Migration Automatic Message --

This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/1591.
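For reference, the batched-GEMM contract discussed in this thread (what MKL's `cblas_?gemm_batch` computes for each batch entry, C[i] = alpha * A[i] * B[i] + beta * C[i]) can be sketched as a naive reference loop. This is only an illustration of the semantics, not the MKL call itself; the function name `batched_gemm_ref` and the row-major `std::vector` storage are choices made for this sketch. MKL's advantage is that it dispatches the whole batch in one call, so the products can share threads and packing overhead.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Reference semantics of a batched GEMM such as MKL's cblas_?gemm_batch:
// for each b in the batch, C[b] = alpha * A[b] * B[b] + beta * C[b].
// Matrices are stored row-major: A[b] is m x k, B[b] is k x n, C[b] is m x n.
// This naive triple loop only illustrates the per-entry contract; a real
// batched GEMM runs all products in a single optimized, multi-threaded call.
void batched_gemm_ref(std::size_t batch, std::size_t m, std::size_t n,
                      std::size_t k, float alpha,
                      const std::vector<std::vector<float>>& A,
                      const std::vector<std::vector<float>>& B,
                      float beta, std::vector<std::vector<float>>& C) {
  for (std::size_t b = 0; b < batch; ++b)
    for (std::size_t i = 0; i < m; ++i)
      for (std::size_t j = 0; j < n; ++j) {
        float acc = 0.f;
        for (std::size_t p = 0; p < k; ++p)
          acc += A[b][i * k + p] * B[b][p * n + j];
        C[b][i * n + j] = alpha * acc + beta * C[b][i * n + j];
      }
}
```

A hypothetical `evalGemmMKL` would gather the per-batch pointers and dimensions from the TensorContraction evaluator and hand them to `cblas_?gemm_batch` instead of looping like this.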