Summary: Use aligned loads/stores whenever possible to speed up the GEBP kernel
Product: Eigen
Reporter: Benoit Steiner <benoit.steiner.goog>
Component: Core - matrix products
Assignee: Nobody <eigen.nobody>
Description Benoit Steiner 2014-01-08 18:29:44 UTC
Created attachment 413 [details] Patch against the latest version of the code

Using unaligned SSE loads/stores is slower than using the corresponding aligned instructions, even when the underlying address is aligned. The attached patch attempts to use aligned loads/stores as much as possible in the gebp_kernel. This yields a performance gain of a few percent on the Eigen matrix-matrix benchmark, as shown in the attached before/after plots (run on Sandy Bridge with gcc 4.6).
Comment 1 Benoit Steiner 2014-01-08 18:30:39 UTC
Created attachment 414 [details] Results of the Matrix-Matrix benchmark before applying the patch
Comment 2 Benoit Steiner 2014-01-08 18:31:05 UTC
Created attachment 415 [details] Results of the Matrix-Matrix benchmark after applying the patch
Comment 3 Gael Guennebaud 2014-03-07 08:40:59 UTC
The drawback of this patch is that it has a non-negligible impact on both compilation time and binary code size, since it duplicates each instantiation of a matrix-matrix product. Since the speedup seems rather small, I'm not sure it's a reasonable change.
Comment 4 Gael Guennebaud 2014-03-07 09:00:17 UTC
Created attachment 423 [details] merged benchmark plots
Comment 5 Gael Guennebaud 2014-03-07 09:02:30 UTC
Looking at this old benchmark (http://download.tuxfamily.org/eigen/btl-results-110323/matrix_matrix.pdf) it seems that MKL was (and perhaps still is) doing something similar, since it exhibits strong spikes at sizes that allow aligned accesses.
Comment 6 Gael Guennebaud 2014-03-07 10:09:36 UTC
Two more remarks:
- conditional versions of pstore/pload already exist; they are called pstoret and ploadt
- to avoid binary code duplication, a dynamic branch in the kernel might have a negligible impact
Comment 7 Christoph Hertzberg 2014-03-07 11:22:14 UTC
(In reply to comment #4)
> Created attachment 423 [details] merged benchmark plots

The impact of that patch indeed looks very minor (there are even some outliers for which performance slightly drops). If it means doubling the binary size and compile time, I would rather not do it. Related to the discussion you started in bug 721: what would the impact be on other architectures?

(In reply to comment #6)
> to avoid binary code duplication, maybe a dynamic branching in the kernel would have a negligible impact

Have you checked the impact of that idea (on performance and binary size)?
Comment 8 Gael Guennebaud 2014-03-07 19:42:50 UTC
(In reply to comment #7)
> Related to the discussion you started in bug 721: How would the impact be on other architectures?

We have to check, especially with AVX and NEON.

> Have you checked the impact of that idea (on performance and binary size)?

No, that was more of a todo item, but my guess is that the overhead of this approach should be negligible.
Comment 10 Benoit Steiner 2014-03-19 01:28:12 UTC
I have attached an updated version of the patch: it relies on the existing ploadt and pstoret primitives instead of introducing new ones that do the same thing. I also updated the code to choose between aligned and unaligned memory accesses in the gebp_kernel itself, which reduces the size of the binary.
Comment 11 Benoit Steiner 2014-03-19 02:29:01 UTC
I have rerun the matrix-matrix benchmarks on Sandy Bridge with gcc 4.8.3 and the updated patch: the code is now slightly faster for matrices smaller than 2328x2328 and slightly slower for larger ones. At this point it looks like a wash; I'll revisit this patch once the AVX code is merged.
Comment 12 Nobody 2019-12-04 12:55:53 UTC
-- GitLab Migration Automatic Message -- This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new issue on our GitLab instance: https://gitlab.com/libeigen/eigen/issues/724.