Bug 1717 - Small change in FFT size results in 100x slower FFT using default backend
Summary: Small change in FFT size results in 100x slower FFT using default backend
Alias: None
Product: Eigen
Classification: Unclassified
Component: Unsupported modules
Version: 3.3 (current stable)
Hardware: All
OS: All
Importance: Normal Performance Problem
Assignee: Nobody
Depends on:
Reported: 2019-05-17 21:02 UTC by John Abedor
Modified: 2019-12-04 18:39 UTC
CC: 2 users


Description John Abedor 2019-05-17 21:02:42 UTC
Time to compute the FFT of a VectorXd of length 100000: ~0.1 sec.
Time to compute the FFT of a VectorXd of length 100001: ~10 sec.

Steps to reproduce:
Compile and run the following code; it runs in less than 0.1 sec on my machine.
Then change the line defining "length" to increase it by one.
Recompile and run; it now takes about 10 sec on my machine.

#include <Eigen/Core>
#include <unsupported/Eigen/FFT>
#include <cstdio>

int main() {
    Eigen::FFT<double> fft;
    int const length = 100000;
    //int const length = 100001;
    Eigen::VectorXcd x = Eigen::VectorXcd::Zero( length );
    Eigen::VectorXcd fft_of_x;
    fft.fwd( fft_of_x, x );
    printf( "%g\n", fft_of_x[ 0 ].real() );
    return 0;
}
Eigen Version:

Additional information:
I observed similar degradation with and without optimization (-O2).
The issue disappears when I use the FFTW backend.
Comment 1 Christoph Hertzberg 2019-05-17 21:58:21 UTC
I'm pretty sure this is mostly because 100000 nicely factorizes into numbers <=5 but 100001 does not. Very likely FFTW does much more sophisticated things to work on "odd-sized" inputs (i.e., sizes with large factors).

If anyone knows how to implement this for the default backend (without entirely copying FFTW), patches are welcome.

An obvious workaround is of course to use the FFTW-backend, if necessary.
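For reference, selecting the FFTW backend is a build-time change: the unsupported FFT module checks the EIGEN_FFTW_DEFAULT macro before the header is included (this requires FFTW3 to be installed and linked; exact link flags depend on which scalar types you instantiate):

```cpp
// Define EIGEN_FFTW_DEFAULT before including the FFT header to select
// the FFTW backend instead of the bundled kissfft-derived default.
#define EIGEN_FFTW_DEFAULT
#include <unsupported/Eigen/FFT>
```

Then link against FFTW when compiling, e.g. `-lfftw3` for double (and `-lfftw3f` if you also use float).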
Comment 2 Nobody 2019-12-04 18:39:17 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to the project's GitLab instance and has been closed from further activity.

You can subscribe and participate further in the new bug through this link to our GitLab instance:
