
Add support for building llvmlite wheels on Windows ARM64#1387

Open
MugundanMCW wants to merge 5 commits into numba:main from MugundanMCW:support_winarm64

Conversation

@MugundanMCW

PR Description:

  • The adoption of Windows on ARM (WoA) devices is steadily increasing, yet many Python wheels are still not available for this platform.
  • GitHub Actions now offers native CI runners for Windows on ARM devices (windows-11-arm), enabling automated builds and testing.
  • Currently, official llvmlite Python wheels are not provided for Windows ARM64, and users and developers face difficulties using the popular llvmlite library natively.
  • This PR introduces support for building llvmlite wheels on Windows ARM64, improving accessibility for developers and end users on this emerging platform.

Changes proposed:

  • The current Windows x64 CI pipeline depends on a Conda environment for building and testing llvmlite. Windows ARM64 support in Conda is still under development and not yet available.
  • To overcome this limitation, this PR uses the native GitHub Windows ARM64 environment.
  • Added CI workflow [llvmlite_win-arm64_wheel_builder.yml] for building llvmlite wheels on Windows ARM64.
  • Added Windows ARM64 configuration in llvmdev_evaluate.py & the llvmdev_build workflow.
  • Modified validate_win-64_wheel.py to conditionally include VCRUNTIME140_1.dll during the CI validation process.
  • Added build script [bld_llvm_winarm64.bat] for compiling LLVM on Windows ARM64.
  • Modified ffi/build.py to detect the target platform based on the system architecture.
  • Added missing Windows ARM64 configs in test_binding.py to fix the test failures.
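The architecture-detection change in ffi/build.py can be sketched roughly as follows. This is a minimal illustration only: the function name and mapping are hypothetical, not the actual code in the PR, and it assumes `platform.machine()` reports `ARM64` on the windows-11-arm runners.

```python
import platform
import sys


def detect_target(os_name: str, machine: str) -> str:
    """Map an OS name and CPU architecture to a build-target tag.

    Hypothetical helper sketching architecture-based target
    detection; the real ffi/build.py logic may differ.
    """
    machine = machine.lower()
    if os_name == "win32":
        # GitHub's windows-11-arm runners report 'ARM64' for the machine
        return "win-arm64" if machine in ("arm64", "aarch64") else "win-64"
    return f"{os_name}-{machine}"


# At build time, classify the current host:
host_target = detect_target(sys.platform, platform.machine())
```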

@gmarkall
Member

gmarkall commented Jan 7, 2026

Thanks for the PR! Have you been able to get it to successfully run the workflow in a repo / fork of your own?

@esc
Member

esc commented Jan 7, 2026

@MugundanMCW thank you for submitting this, and for demonstrating that windows-on-arm builds will be possible for llvmlite.

Unfortunately, however, this is incompatible with our current support-tier policy, listed here:

https://numba.readthedocs.io/en/stable/reference/support_tiers.html?highlight=tier

I had mentioned previously that the approach using vcpkg was never going to be acceptable:

#1243 (comment)
#1243 (comment)

@seibert
Contributor

seibert commented Jan 7, 2026

Although we can't yet support Windows ARM as a tier 1 platform, I think we can discuss how to merge this patch as a Tier 2b platform (where we currently put Windows ARM) so that we can upgrade the status of windows ARM when the other external requirements line up.

@esc added the discussion (An issue requiring discussion) label on Jan 7, 2026
@esc
Member

esc commented Jan 7, 2026

Although we can't yet support Windows ARM as a tier 1 platform, I think we can discuss how to merge this patch as a Tier 2b platform (where we currently put Windows ARM) so that we can upgrade the status of windows ARM when the other external requirements line up.

Yes, let's discuss next week during the maintainer meeting.

@esc
Member

esc commented Jan 7, 2026

@MugundanMCW in the meantime, could you please fix the flake8 issue that was detected?

llvmlite/tests/test_binding.py:2221:13: E128 continuation line under-indented for visual indent
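For reference, E128 fires when a continuation line does not align with the visual indent set by the opening delimiter. A schematic before/after, illustrative only and not the actual test_binding.py code:

```python
# flake8 E128 would flag this: the second line is not aligned
# with the opening parenthesis:
#
#     triples = ("arm64-pc-windows-msvc",
#            "aarch64-pc-windows-msvc")
#
# Fixed: continuation aligned with the visual indent of the '('
triples = ("arm64-pc-windows-msvc",
           "aarch64-pc-windows-msvc")
```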

@MugundanMCW
Author

Thanks for the PR! Have you been able to get it to successfully run the workflow in a repo / fork of your own?

Hi @gmarkall
Yes, I have run the workflow in my fork and all the jobs pass as expected. You can find the complete dry run of the workflow here.

@MugundanMCW
Author

llvmlite/tests/test_binding.py:2221:13: E128 continuation line under-indented for visual indent

Hi @esc
Thanks for the review; I have fixed the indentation for the Windows ARM64 case.

@esc
Member

esc commented Jan 8, 2026

@MugundanMCW I did start the llvmlite wheel builds today, but it seems like they are failing. Can you take a look?

@esc
Member

esc commented Jan 8, 2026

Although we can't yet support Windows ARM as a tier 1 platform, I think we can discuss how to merge this patch as a Tier 2b platform (where we currently put Windows ARM) so that we can upgrade the status of windows ARM when the other external requirements line up.

Yes, let's discuss next week during the maintainer meeting.

I took another look at the calendar. The next Numba Dev meeting is actually on the 20th of Jan. I will add this to the agenda and hopefully it will be discussed then.

@MugundanMCW
Author

@MugundanMCW I did start the llvmlite wheel builds today, but it seems like they are failing. Can you take a look?

Sure @esc, I will take a look at the failing jobs.

@MugundanMCW
Author

MugundanMCW commented Jan 14, 2026

Hi @esc,

The llvmlite win-arm64 builds are currently failing due to the unavailability of a conda llvmdev package for win-arm64. Without a fallback llvmdev package, the workflow ends up picking up the LLVM installation present in the GitHub runner environment, which leads to build failures. As a result, building llvmlite wheels for win-arm64 requires explicitly specifying an llvmdev workflow run ID.
This could also be automated by adding a fallback that pulls the run ID from the latest Windows ARM64 llvmdev GitHub Actions workflow.
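Such a fallback could, for instance, query GitHub's "list workflow runs" REST endpoint and pick the newest successful run. A rough sketch of the selection logic, using only the `id` and `conclusion` fields of that response (illustrative only; not part of the PR):

```python
import json


def latest_successful_run_id(runs_json: str):
    """Return the ID of the newest successful workflow run from a
    GitHub Actions 'list workflow runs' response (runs are returned
    newest first), or None if there is no successful run."""
    payload = json.loads(runs_json)
    for run in payload.get("workflow_runs", []):
        if run.get("conclusion") == "success":
            return run["id"]
    return None


# Example response, trimmed to the fields used above
sample = json.dumps({"workflow_runs": [
    {"id": 20987446447, "conclusion": "success"},
    {"id": 20987000000, "conclusion": "failure"},
]})
run_id = latest_successful_run_id(sample)  # -> 20987446447
```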

@esc
Member

esc commented Jan 14, 2026

OK, it seems like the llvmdev workflow has issues too. Does this not need to complete in order for llvmlite to pass:

[Screenshot 2026-01-14 at 08 25 04]

How did you get it to pass on the workflow in your fork?

@esc
Member

esc commented Jan 14, 2026

so it looks like the llvmdev_for_wheel_win-arm64 did produce an artifact

https://github.com/numba/llvmlite/actions/runs/20987446447/job/60332743138?pr=1387

@MugundanMCW
Author

so it looks like the llvmdev_for_wheel_win-arm64 did produce an artifact

https://github.com/numba/llvmlite/actions/runs/20987446447/job/60332743138?pr=1387

Yes @esc, now we can go ahead and trigger the wheel builder for win-arm64 using this run ID.

@esc
Member

esc commented Jan 14, 2026

so it looks like the llvmdev_for_wheel_win-arm64 did produce an artifact
https://github.com/numba/llvmlite/actions/runs/20987446447/job/60332743138?pr=1387

Yes @esc, now we can go ahead and trigger the wheel builder for win-arm64 using this run ID.

Yeah, I tried that, but I am getting an error here:

esc@artemis [numba_3.13] [llvmlite:support_winarm64:★★] ~/git/llvmlite gh workflow run .github/workflows/llvmlite_win-arm64_wheel_builder.yml -f llvmdev_run_id=20987446447
could not create workflow dispatch event: HTTP 422: Workflow does not have 'workflow_dispatch' trigger (https://api.github.com/repos/numba/llvmlite/actions/workflows/221472279/dispatches)

@esc
Member

esc commented Jan 14, 2026

so it looks like the llvmdev_for_wheel_win-arm64 did produce an artifact
https://github.com/numba/llvmlite/actions/runs/20987446447/job/60332743138?pr=1387

Yes @esc, now we can go ahead and trigger the wheel builder for win-arm64 using this run ID.

Yeah, I tried that, but I am getting an error here:

esc@artemis [numba_3.13] [llvmlite:support_winarm64:★★] ~/git/llvmlite gh workflow run .github/workflows/llvmlite_win-arm64_wheel_builder.yml -f llvmdev_run_id=20987446447
could not create workflow dispatch event: HTTP 422: Workflow does not have 'workflow_dispatch' trigger (https://api.github.com/repos/numba/llvmlite/actions/workflows/221472279/dispatches)

Is that because the workflow isn't merged to main yet?

@MugundanMCW
Author

MugundanMCW commented Jan 14, 2026

Is that because the workflow isn't merged to main yet?

Yes, that’s the reason. Since the workflow file isn’t merged into main yet, it doesn’t exist on the default branch, so gh workflow run cannot dispatch it from there. I was able to trigger the workflow successfully from the GitHub Actions UI by explicitly selecting the branch from the workflow and then running it with the llvmdev_run_id input.
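For context: GitHub only accepts `workflow_dispatch` events for workflow files that exist on the repository's default branch, which is why the CLI returned HTTP 422 here. The trigger in question would look something like this (a sketch of the shape, not the exact contents of llvmlite_win-arm64_wheel_builder.yml):

```yaml
on:
  workflow_dispatch:
    inputs:
      llvmdev_run_id:
        description: 'llvmdev workflow run ID to fetch the LLVM artifact from'
        required: true
        type: string
```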

@esc
Member

esc commented Jan 14, 2026

Is that because the workflow isn't merged to main yet?

Yes, that’s likely the reason. Since the workflow file isn’t merged into main yet, it doesn’t exist on the default branch, so gh workflow run cannot dispatch it from there. I was able to trigger the workflow successfully from the GitHub Actions UI by explicitly selecting the branch from the workflow and then running it with the llvmdev_run_id input.

Yeah, I think for that to work it needs to be on the repository's default branch:

[Screenshot 2026-01-14 at 15 04 53]

@esc
Member

esc commented Jan 20, 2026

@MugundanMCW @khmyznikov -- we discussed this extensively during the developer meeting and came to the decision that the changes to our CI/build infrastructure are too strong. We don't want to introduce builds based on vcpkg for now and will instead wait for conda and a miniconda distribution to become available for the win-on-arm platform.

We do appreciate the demonstration of llvmlite's feasibility on windows-on-arm; thank you for showing that it will be possible to get the Numba/llvmlite stack running there.

@khmyznikov

@esc Let me share some extra thoughts. Python is aiming to move out of experimental status on WoA this fall, and before that happens, we’re working to get as many essential libraries ready as possible. Missing the llvmlite/numba stack could seriously disrupt this effort.

Conda support for WoA is coming, we’ve had positive signals, but it’s a big collaboration and might take another year to fully land.

With new powerful Arm hardware launching soon (more announcements coming) and Python progressing in parallel, we have a great chance to support the platform at the right time. I'd ask you to reconsider; our partners are ready to help finish and maintain this change. It might not be the perfect solution, but the vcpkg approach proved future-proof when we enabled scipy for WoA.

@esc
Member

esc commented Jan 21, 2026

@khmyznikov the short answer is: no, not at this point. For a longer answer please continue to read.

This was discussed extensively during our weekly maintainer meeting, and the decision not to accept the proposed solution at this time was unanimous. This does not mean that a) we don't want to support win-on-arm in general or b) we will not reverse our decision about vcpkg in future. If there is a hair-on-fire need for Numba/llvmlite on win-arm64, with several of our users asking for this feature, that is a different story -- but we don't see an immediate need to go out of our way to support this platform. Note that reversing this decision will require a discussion at the maintainer meeting.

For more context: we spent 2025 revamping our CI system and porting everything to GitHub actions using conda and pip. The CI setup is very consistent now and I don't want to add a special-case because conda isn't yet supported on an experimental platform. We as a team are very familiar with conda and conda-build but not with vcpkg and so we don't feel comfortable with this solution right now.

For the time being, I would encourage the following:

a) Leave this PR open as it may serve a future use, either fully or partially.
b) Maybe try to get a CI build running for Numba too; you shouldn't need to use vcpkg there. That would also show that Numba can run on win-arm64. This is the true test: the llvmlite test suite is quite minimal, and the real check of whether win-arm64 works will be whether the Numba test suite passes. If you encounter test failures there, we would love to hear about them.
c) Produce your own packages for Numba/llvmlite on win-arm64 and disseminate them that way. That would be the best way to start testing the stack and getting it into the hands of early adopters.

Lastly, I would like to emphasize that this decision was not taken lightly and that we are sorry to have to potentially disrupt your efforts here. Also, thank you for your continued patience on this matter 🙏 the support for this platform will materialize in time.

@khmyznikov

@esc Thanks for the answer and consideration. We’re essentially building the market, and it’s a classic chicken-and-egg problem. End users face a lack of support, leading them to switch to emulation or avoid the platform altogether. That’s why we had to be proactive and anticipate user needs several steps ahead.

@MugundanMCW could you also investigate Numba? We can spin up an llvmlite build in this repo.

@esc
Member

esc commented Jan 21, 2026

@esc Thanks for the answer and consideration. We’re essentially building the market, and it’s a classic chicken-and-egg problem. End users face a lack of support, leading them to switch to emulation or avoid the platform altogether. That’s why we had to be proactive and anticipate user needs several steps ahead.

Yes, I understand. As soon as conda on WoA becomes available, we can hook up our CI and start testing and making wheel packages. Also note that Numba and llvmlite are often a good stress test for new Python versions and new platforms. Maybe, to make things easier, we could test or use early versions of conda on WoA. Numba and llvmlite essentially have only three major dependencies: LLVM, Python and NumPy (which has its own dependencies) -- so as soon as there is any kind of conda and/or miniconda on the horizon for early adopters, we can test.
