Conversation

@rohanbabbar04 (Contributor) commented Aug 30, 2025

Closes #386

  • Add new engine=mkl_fft option for Fourier transforms (FFT) which uses the mkl_fft package.
  • Add tests and an example using this engine.

@mrava87 mrava87 requested review from mrava87 and cako and removed request for mrava87 August 31, 2025 08:36
@mrava87 (Collaborator) commented Aug 31, 2025

Nice @rohanbabbar04 😄 I'll take a look soon, @cako would be great if you can do the same.

PS: one thing I noticed quickly skimming through the files changed is that we should also modify https://github.com/PyLops/pylops/blob/dev/docs/source/installation.rst to include mkl_fft.

@mrava87 (Collaborator) left a review comment:

@rohanbabbar04 I do not seem to be able to install mkl_fft on macOS. Seeing your change in the requirements file, I suspect it is only available on Linux platforms?

Does that make sense? If so, I will review this on my other laptop, but we should state that clearly in the installation.rst file and should not run tests on macOS when "engine": "mkl_fft", as otherwise it will just run the tests twice with the numpy engine...

Add check to skip mkl_fft on darwin

Minor update
@@ -1,3 +1,6 @@
+--index-url https://software.repos.intel.com/python/pypi
@mrava87 (Collaborator) commented Sep 1, 2025:

Mmh, I think this may not work as you expect: it would always favor the Intel repo over the official PyPI for every package in the file (which I am not in favor of). What we did for torch (after realizing that) was to make an additional requirements-torch file where we switch the index just to install torch, and then we do (e.g., taken from the Makefile):

pip install -r requirements-dev.txt && pip install -r requirements-torch.txt && $(PIP) install -e .

We could probably do something similar here, and maybe provide make targets for Intel where we install an Intel-flavored environment (see below for more on this)?

@mrava87 (Collaborator) commented Sep 1, 2025:

The only thing I am a bit worried about / don't fully understand is this comment in the README of mkl_fft:

> If command above installs NumPy package from the PyPI, please use following command to install Intel optimized NumPy wheel package from Intel PyPI Cloud

So does this mean we need their NumPy, i.e., that it won't work with the one from the official PyPI?

If so, I think we need a more targeted strategy where we provide files to create an MKL environment using the Intel registry, and we can also add a build GA to test it. I do not want our CI to use the NumPy distributed by Intel by default: 99% of our users are likely to use the one from PyPI, so that is the one we should run tests against 😉

@rohanbabbar04 (Contributor, author) replied:

> Mmh, I think this may not work as you expect: it would always favor the Intel repo over the official PyPI for every package in the file (which I am not in favor of). What we did for torch (after realizing that) was to make an additional requirements-torch file where we switch the index just to install torch, and then we do (e.g., taken from the Makefile):
>
> pip install -r requirements-dev.txt && pip install -r requirements-torch.txt && $(PIP) install -e .
>
> We could probably do something similar here, and maybe provide make targets for Intel where we install an Intel-flavored environment (see below for more on this)?

Agreed, it’s better to create another requirements file, which I just added in my latest commits: requirements-intel-mkl.txt. Even better, we can define targets that call pip, for example:

pip install -r requirements-intel-mkl.txt && pip install -r requirements-dev.txt && pip install -r requirements-torch.txt && pip install .

Similarly, for conda, it’s better to create a new file environment-dev-intel-mkl.yml, giving priority to software.repos.intel.com. This way, Python, NumPy, SciPy, and MKL can be installed directly from the Intel repository for maximum performance.
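A sketch of what such a conda environment file could look like (channel URL and pins are illustrative, not the actual file):

```yaml
# environment-dev-intel-mkl.yml (illustrative sketch)
name: pylops-intel-mkl
channels:
  - https://software.repos.intel.com/python/conda/  # Intel channel first, so it takes priority
  - conda-forge
dependencies:
  - python=3.12
  - numpy   # resolved from the Intel channel -> MKL-backed build
  - scipy
  - mkl_fft
```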

@rohanbabbar04 (Contributor, author) replied:

> The only thing I am a bit worried about / don't fully understand is this comment in the README of mkl_fft:
>
> > If command above installs NumPy package from the PyPI, please use following command to install Intel optimized NumPy wheel package from Intel PyPI Cloud
>
> So does this mean we need their NumPy, i.e., that it won't work with the one from the official PyPI?
>
> If so, I think we need a more targeted strategy where we provide files to create an MKL environment using the Intel registry, and we can also add a build GA to test it. I do not want our CI to use the NumPy distributed by Intel by default: 99% of our users are likely to use the one from PyPI, so that is the one we should run tests against 😉

By default, NumPy is built with OpenBLAS, not MKL. For maximum performance, it’s better to ensure NumPy is configured with MKL (although it will still work with OpenBLAS, just not at the best performance).
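One way to see which BLAS backend a given NumPy build uses is to inspect its build configuration. A best-effort, stdlib-only sketch (the helper name is ours, not a pylops API):

```python
import contextlib
import io

import numpy as np


def numpy_blas_hint() -> str:
    """Best-effort guess of NumPy's BLAS backend from its build config."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        np.show_config()  # prints the libraries NumPy was linked against
    text = buf.getvalue().lower()
    if "mkl" in text:
        return "mkl"
    if "openblas" in text:
        return "openblas"
    return "unknown"


print(numpy_blas_hint())
```

On an Intel-repo installation this should report "mkl"; on a default PyPI wheel, typically "openblas".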

As you suggested, the best approach would be to create a new GA workflow specifically for Intel MKL. In this workflow, we can set up an Intel Python environment and install all required libraries inside it. Ideally, we should create a conda environment with Python 3.10, 3.11, and 3.12 (from Intel’s software repos), install the necessary packages there, and run the pytest suite.

I’ve already implemented this in my latest commits by adding a new workflow file, build-mkl.yml, which performs these steps.
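The workflow could look roughly like this (names, versions, and action pins are illustrative; the actual file is build-mkl.yml in the PR):

```yaml
# .github/workflows/build-mkl.yml (illustrative sketch)
name: PyLops with Intel MKL
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest   # the Intel software repos do not support macOS
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12"]
    defaults:
      run:
        shell: bash -el {0}  # login shell so conda activation works
    steps:
      - uses: actions/checkout@v4
      - uses: conda-incubator/setup-miniconda@v3
        with:
          python-version: ${{ matrix.python-version }}
          channels: https://software.repos.intel.com/python/conda/,conda-forge
      - name: Install and test
        run: |
          conda install -y numpy scipy mkl_fft
          pip install -e .
          pytest
```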

@mrava87 (Collaborator) commented Sep 1, 2025

@rohanbabbar04 thanks for the explanation and changes!

I left a few more comments, as I am not sure exactly how invasive the use of the Intel conda channel and Intel PyPI cloud is... hopefully you have a better grasp of this than me and can comment.

Regarding the code, I see that you switched from mkl_fft.interfaces.scipy_fft to mkl_fft.interfaces.numpy_fft, and that led to removing a lot of the type casting. Can you explain why?


@pytest.mark.parametrize("par", pars_fft_small_real)
def test_FFT_small_real(par):
    if par["engine"] == "mkl_fft" and sys.platform == "darwin":
@mrava87 (Collaborator) commented:

I think it should be "Darwin" as in

if platform.system() == "Darwin":

I tried switching to "darwin" but it won't go into the condition... in fact on my macOS if I do

In [1]: import platform

In [2]: platform.system()
Out[2]: 'Darwin'
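For reference, the stdlib exposes two spellings that always agree about macOS: sys.platform is lowercase ("darwin"), while platform.system() is capitalized ("Darwin"). A minimal check:

```python
import platform
import sys

# sys.platform: "darwin" on macOS, "linux" on Linux, "win32" on Windows.
# platform.system(): "Darwin", "Linux", "Windows" respectively.
on_macos = sys.platform == "darwin"

# The two spellings always agree about whether we are on macOS.
assert on_macos == (platform.system() == "Darwin")
```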

@rohanbabbar04 (Contributor, author) commented Sep 9, 2025:

Since we now have a new GA, it’s probably better to run it on ubuntu-latest (the Intel software repositories do not support macOS). We could also skip the tests when mkl_fft is not enabled. For the normal GA checks, I think it’s fine to just skip mkl_fft. Do you have any suggestions for a better way to skip these tests in our normal GA, other than the approach I’ve taken in the tests?

@mrava87 (Collaborator) replied:

No, I think what you did is the best way; since we have the mkl_fft_enabled flag, it is a good idea to use it 😄
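A sketch of the skip logic being discussed, built around a module-level availability flag (names are illustrative, not the exact pylops code):

```python
import importlib.util
import sys

# mkl_fft counts as usable only if the package is importable
# and we are not on macOS (where the Intel repos are unsupported).
mkl_fft_enabled = (
    importlib.util.find_spec("mkl_fft") is not None and sys.platform != "darwin"
)


def engines_to_test():
    """FFT engines to parametrize the tests over on this platform."""
    engines = ["numpy", "scipy", "fftw"]
    if mkl_fft_enabled:
        engines.append("mkl_fft")
    return engines
```

Building the parameter list this way avoids per-test platform checks: unavailable engines are never parametrized in the first place.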

Add new requirements

Update code and tests

Minor change

Fix-Pass login shell as default

Add support for python3.10
@rohanbabbar04 (Contributor, author) commented Sep 9, 2025

One more thing to note is pyFFTW, which sometimes does not behave correctly when installed with pip alongside NumPy + MKL. I came across this PR, which seems to provide a solution. Another good option is to use the conda-forge build, which is up to date and works reliably with NumPy + MKL. I’ve mentioned this in installation.rst so that users are aware of it.

@rohanbabbar04 rohanbabbar04 changed the title Add Intel mkl_fft support for fft module Add Intel mkl_fft support for fft module and GitHub Action with Intel Python Sep 11, 2025
@mrava87 (Collaborator) commented Sep 13, 2025

@rohanbabbar04 thanks, this already looks much better and less intrusive to the basic NumPy+SciPy installations that we
want to support as default 🚀

I'll take a more detailed look during the week and provide any further feedback. In the meantime, @cako, would you be able to review this PR as well? You have done more work than me on the FFT* operators, so it would be good to get your approval on these updates 😄

@rohanbabbar04 (Contributor, author) commented Sep 26, 2025

@mrava87 @cako Just checking if you got some time to review this PR 😅

@mrava87 (Collaborator) commented Sep 26, 2025

@rohanbabbar04 sorry not yet on my side… will try to do it at the weekend 😀

But I won’t merge this until I hear from @cako as well…

@mrava87 (Collaborator) left a review:

@rohanbabbar04 I finally had time to look at your PR in more detail, and I think it looks great 😄 Everything seems to be aligned with the other FFT implementations, and the changes to the environment/requirements files and the new GA are perfect.

I'll wait a couple of days; if I do not hear from @cako, I'll go ahead and merge.

@cako (Collaborator) commented Sep 28, 2025

Sorry guys, was on holiday and then back from a conference and then ill. I will take a look at this today!

@cako (Collaborator) left a review:

This looks great, thanks @rohanbabbar04!! I have a few questions:

  1. This was mentioned in the new docs, but just to confirm: my FFTW tests are not passing when I install requirements-intel-mkl.txt.
  2. I added a "Supported backends" section to the plot_fft.py example, please make sure it's correct!
  3. My understanding is that installing intel-numpy/intel-scipy redirects the numpy.fft and scipy.fft libraries to the MKL FFTs instead. Therefore, the way the installation currently works, both intel-numpy and intel-scipy will be installed alongside mkl_fft, meaning that every backend except FFTW (which will incidentally also be broken) will actually be using MKL. This is mentioned under the BLAS section of the Advanced Installation, but I think it bears repeating under the MKL-FFT section.
  4. If we want the MKL installation to install only mkl_fft and not intel-numpy/intel-scipy, then we need to make two changes: 1) install the dev requirements before the MKL requirements, and 2) use numpy>=2.0.0,<=2.2.5 in the dev requirements, because otherwise mkl_fft will downgrade NumPy to 2.2.5 and, in the process, replace it with the Intel version. I don't mind either solution, but I think it needs to be made very clear in the docs what's going on.
  5. I made some small annotation changes, please review them!
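The pin and install order mentioned in point 4 would look something like this (illustrative excerpt, not the actual files):

```text
# requirements-dev.txt (illustrative excerpt)
numpy>=2.0.0,<=2.2.5   # upper bound so installing mkl_fft cannot downgrade/replace NumPy
scipy

# install order matters: dev requirements first, then the MKL extras
#   pip install -r requirements-dev.txt
#   pip install -r requirements-intel-mkl.txt
```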

@mrava87 (Collaborator) commented Sep 29, 2025

Thanks @cako!

For point 1, I see that pyfftw is in that requirements file too, but in the action it is installed first with conda… I think we should avoid too many exceptions, to avoid confusion (we ourselves will forget these little details with time). If the Make target dev-install_intel_mkl leads to the pyfftw tests not running / passing, we should either say that for intel_mkl we provide only a conda recipe, or avoid installing pyfftw and let FFT fall back to numpy.

For 3 and 4, I'll let @rohanbabbar04 reply, but in general I agree with you… repeating always helps 😝

Successfully merging this pull request may close these issues.

Support Intel MKL FFT
3 participants