fix(fabric): raise on CPU tensor passed to all_reduce in non-CPU setup (#21530)#21573
Draft
GudeJunge wants to merge 2 commits into Lightning-AI:master
What does this PR do?
Fixes #21530
Motivation & Context:
As reported in #21530, passing a CPU tensor to `Fabric.all_reduce` while running on a multi-GPU setup (or even a single-GPU setup) results in unintuitive behavior, perceived as a "silent failure". Technically, the backend silently moves a copy of the tensor to the device for reduction, leaving the original CPU tensor completely untouched. This leads to silent logic errors in user code, especially in CPU-offloading workflows.

Changes introduced:
- `Fabric.all_reduce`: if the current `fabric.device` is not CPU, we explicitly verify that the passed tensors are not on the CPU; if they are, we raise a `RuntimeError` advising the user to move the tensor to `fabric.device` first.
- Added a test to `test_fabric.py` to ensure the validation logic works correctly.

Breaking Changes:
This is a bugfix, but the behavior changes: user scripts that previously passed CPU tensors to `all_reduce` on non-CPU devices kept running because of the silent failure. These scripts will now fail explicitly with a `RuntimeError`.

Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:
Reviewer checklist
📚 Documentation preview 📚: https://pytorch-lightning--21573.org.readthedocs.build/en/21573/