Add Intel mkl_fft support for fft module and GitHub Action with Intel Python #694
base: dev
Conversation
Nice @rohanbabbar04 😄 I'll take a look soon, @cako would be great if you can do the same. PS: one thing I noticed quickly skimming through the files changed is that we should also modify https://github.com/PyLops/pylops/blob/dev/docs/source/installation.rst to include …
@rohanbabbar04 I do not seem to be able to install mkl_fft on macOS? Seeing also your change in the requirements file, it makes me suspect it is only available on Linux platforms? Does that make sense? If so, I will review this on my other laptop, but we should state that clearly in the installation.rst file and should not run tests on macOS when "engine": "mkl_fft", as otherwise it will just run the tests twice with the numpy engine...
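For illustration, here is a minimal sketch of the kind of platform guard being discussed (the helper name is hypothetical; the parameter layout follows the test snippet shown later in this thread):

```python
# Hypothetical helper: skip mkl_fft-engined tests on macOS, where mkl_fft
# wheels are not available. Names are illustrative only.
import platform

import pytest


def maybe_skip_mkl_fft(par):
    # platform.system() returns "Darwin" on macOS
    if par.get("engine") == "mkl_fft" and platform.system() == "Darwin":
        pytest.skip("mkl_fft is not available on macOS")
```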
Commits "Add check to skip mkl_fft on darwin" and "Minor update"; branch force-pushed from 7e42ed9 to 1fe6b34.
requirements-dev.txt (outdated diff)
@@ -1,3 +1,6 @@
--index-url https://software.repos.intel.com/python/pypi
mmh I think this may not work as you may expect, as it would always favor the Intel repo over the official PyPI for every package in the file (which I am not in favor of)... what we did for torch (after realizing that) was to make an additional requirements-torch file where we switch the index just to install torch, and then we do (e.g. taken from the Makefile):
pip install -r requirements-dev.txt && pip install -r requirements-torch.txt && $(PIP) install -e .
We could probably do something similar here and maybe provide make targets for Intel where we install an Intel-flavored environment (see below for more on this)?
The only thing I am a bit worried about / don't fully understand is this comment in the README of mkl_fft:
> If command above installs NumPy package from the PyPI, please use following command to install Intel optimized NumPy wheel package from Intel PyPI Cloud
So does this mean we need their NumPy, as it won't work with the one from the official PyPI?
If so, I think we need a more targeted strategy where we provide files to create an environment with mkl, where we use the Intel registry, and we can also add a build GA to test it... I do not want our CI to use the NumPy distributed by Intel as default, as 99% of our users are more likely to use the one from PyPI, so I want that to be the one we run tests against 😉
> mmh I think this may not work as you may expect, as it would always favor the Intel repo over the official PyPI for every package in the file (which I am not in favor of)... what we did for torch (after realizing that) was to make an additional requirements-torch file where we switch the index just to install torch, and then we do (e.g. taken from the Makefile): pip install -r requirements-dev.txt && pip install -r requirements-torch.txt && $(PIP) install -e .
> We could probably do something similar here and maybe provide make targets for Intel where we install an Intel-flavored environment (see below for more on this)?
Agreed, it's better to create another requirements file, which I just added in my latest commits as requirements-intel-mkl.txt. Even better, we can define targets that call pip, for example:
pip install -r requirements-intel-mkl.txt && pip install -r requirements-dev.txt && pip install -r requirements-torch.txt && pip install .
Similarly, for conda it's better to create a new file, environment-dev-intel-mkl.yml, giving priority to software.repos.intel.com. This way, Python, NumPy, SciPy, and MKL can be installed directly from the Intel repository for maximum performance.
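For context, a minimal sketch of what such a dedicated requirements file could look like (illustrative only; the exact contents committed in this PR may differ, and only the index URL is taken from the diff above):

```text
# requirements-intel-mkl.txt (sketch, not the committed file)
# Keep the Intel index confined to this file so that the main requirements
# still resolve against the official PyPI.
--index-url https://software.repos.intel.com/python/pypi
mkl_fft
```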
> The only thing I am a bit worried about / don't fully understand is this comment in the README of mkl_fft: "If command above installs NumPy package from the PyPI, please use following command to install Intel optimized NumPy wheel package from Intel PyPI Cloud". So does this mean we need their NumPy, as it won't work with the one from the official PyPI?
> If so, I think we need a more targeted strategy where we provide files to create an environment with mkl where we use the Intel registry, and we can also add a build GA to test it... I do not want our CI to use the NumPy distributed by Intel as default, as 99% of our users are more likely to use the one from PyPI, so I want that to be the one we run tests against 😉
By default, NumPy is built with OpenBLAS, not MKL. For maximum performance, it's better to ensure NumPy is configured with MKL (although it will still work with OpenBLAS, just not at the best performance).
As you suggested, the best approach would be to create a new GA workflow specifically for Intel MKL. In this workflow, we can set up an Intel Python environment and install all required libraries inside it. Ideally, we should create a conda environment with Python 3.10, 3.11, and 3.12 (from Intel's software repos), install the necessary packages there, and run the pytest suite.
I've already implemented this in my latest commits by adding a new workflow file, build-mkl.yml, which performs these steps.
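As a rough illustration of the kind of workflow described above, a hypothetical sketch follows (this is not the build-mkl.yml committed in the PR; the action versions, shell setup, and step names are assumptions):

```yaml
# Hypothetical CI sketch for testing an Intel-flavored environment on Linux.
name: PyLops-mkl
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest  # macOS is not supported by the Intel repositories
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: conda-incubator/setup-miniconda@v3
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        shell: bash -l {0}  # login shell so the conda environment is activated
        run: |
          pip install -r requirements-intel-mkl.txt
          pip install -r requirements-dev.txt
          pip install .
      - name: Run tests
        shell: bash -l {0}
        run: pytest
```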
@rohanbabbar04 thanks for the explanation and changes! I left a few more comments, as I am not sure exactly how invasive the use of the Intel conda channel and Intel PyPI Cloud is... hopefully you have a better grasp of this than me and can comment. Regarding the code, I see that you switched from …
pytests/test_ffts.py (outdated diff)
@pytest.mark.parametrize("par", pars_fft_small_real)
def test_FFT_small_real(par):
    if par["engine"] == "mkl_fft" and sys.platform == "darwin":
I think it should be "Darwin", as in pylops/pytests/test_torchoperator.py, line 35 (cbd7d1b):
if platform.system() == "Darwin":
I tried switching to "darwin" but it won't go into the condition... in fact on my macOS if I do
In [1]: import platform
In [2]: platform.system()
Out[2]: 'Darwin'
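As a side note that may explain the mismatch (standard-library behaviour, not code from this PR), the two checks being compared return differently cased strings on macOS:

```python
# sys.platform and platform.system() report macOS differently, so the string
# being compared must match the function being used.
import platform
import sys

print(sys.platform)       # "darwin" (lowercase) on macOS
print(platform.system())  # "Darwin" (capitalized) on macOS
```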
Since we now have a new GA, it's probably better to run it on ubuntu-latest (since macOS is not supported by the Intel software repositories). We could also skip the tests when mkl_fft is not enabled. For the normal GA checks, I think it's fine to just skip mkl_fft. Do you have any suggestions for a better way to skip these tests in our normal GA, other than the approach I've taken in the tests?
No, I think what you did is the best way; since we have the mkl_fft_enabled flag, it is a good idea to use it 😄
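For reference, a flag like this is typically defined with an import guard; a minimal sketch follows (the actual pylops implementation may look different):

```python
# Illustrative availability flag along the lines of mkl_fft_enabled: try the
# import once at module load and expose the result as a boolean.
try:
    import mkl_fft  # noqa: F401

    mkl_fft_enabled = True
except ImportError:
    mkl_fft_enabled = False
```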
Commits "Add new requirements", "Update code and tests", "Minor change", "Fix-Pass login shell as default", and "Add support for python3.10"; branch force-pushed from ee0c911 to 7f38d45.
One more thing to note is …
Title changed: "mkl_fft support for fft module" → "mkl_fft support for fft module and GitHub Action with Intel Python"
@rohanbabbar04 thanks, this already looks much better and less intrusive to the basic NumPy+SciPy installations. I'll take a more detailed look during the week and provide any further feedback. In the meantime, @cako, would you be able to review this PR as well, since you have actually done more work than me in the …
@rohanbabbar04 sorry not yet on my side… will try to do it at the weekend 😀 But I won't merge this until I hear from @cako as well…
@rohanbabbar04 I finally had time to look at your PR in more detail, and I think it looks great 😄 Everything seems to be aligned with the other FFT implementations, and the changes to the environment/requirements files and the new GA are perfect.
I'll wait a couple of days; if I do not hear from @cako, I'll go ahead and merge.
Sorry guys, was on holiday and then back from a conference and then ill. I will take a look at this today!
This looks great, thanks @rohanbabbar04!! I have a few questions:
1. This was mentioned in the new docs, but just to confirm, my FFTW tests are not passing when I install requirements-intel-mkl.txt.
2. I added a "Supported backends" section to the plot_fft.py example, please make sure it's correct!
3. My understanding is that installing intel-numpy/intel-scipy hijacks the numpy.fft and scipy.fft libraries to use the MKL FFTs instead. Therefore, the way the installation currently works is that both intel-numpy and intel-scipy will be installed alongside mkl_fft, which means that every backend except FFTW (which will incidentally also be broken) will actually be using MKL. This is mentioned under the BLAS section of the Advanced Installation, but I think it bears repeating under the MKL-FFT section (a quick way to check which stack is in use is sketched after this list).
4. If we want to make the MKL installation only install mkl_fft and not intel-numpy/intel-scipy, then we need to make 3 changes: 1) install dev requirements before mkl requirements, 2) use numpy>=2.0.0,<=2.2.5 in the dev requirements, because otherwise mkl_fft will downgrade to numpy==2.2.5 and, in the process, replace it with the Intel version. I don't mind either solution, but I think it needs to be made very clear in the docs what's going on.
5. I made some small annotation changes, please review them!
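As referenced in point 3, here is a minimal diagnostic sketch (assuming only standard NumPy and a plain import check) to see which BLAS/FFT stack an environment actually ended up with:

```python
# Quick diagnostic: report whether NumPy is built against MKL or OpenBLAS and
# whether mkl_fft is importable in the current environment.
import numpy as np

np.show_config()  # prints the BLAS/LAPACK libraries NumPy was built with

try:
    import mkl_fft  # noqa: F401

    print("mkl_fft is importable")
except ImportError:
    print("mkl_fft is not installed")
```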
Thanks @cako! For point 1, I see that pyfftw is in that requirements file too, but I see that in the action it is installed first with conda… but I think we should avoid too many exceptions, to avoid confusion (we, ourselves, will forget these little details with time). If the Make target dev-install_intel_mkl leads to the pyfftw tests not running/passing, we should either say that for intel_mkl we provide only a conda recipe, or avoid installing pyfftw and maybe let FFT fall back to numpy? For 3 and 4, I'll let @rohanbabbar04 reply, but in general I agree with you… repeating always helps 😝
Closes #386
Adds the engine=mkl_fft option for Fourier transforms (FFT), which uses the mkl_fft package.
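For illustration, a usage sketch of the new option (argument names follow the existing pylops FFT API; the sizes here are arbitrary examples):

```python
# Sketch: build a pylops FFT operator with the new mkl_fft engine and apply it.
import numpy as np
import pylops

x = np.random.randn(64)
Fop = pylops.signalprocessing.FFT(dims=64, engine="mkl_fft")
y = Fop @ x       # forward transform computed through mkl_fft
xadj = Fop.H @ y  # adjoint/inverse transform (up to normalization)
```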