Rudimentary support for weight averaging (EMA) with FSDP#21414
Open
senarvi wants to merge 1 commit into Lightning-AI:master from
Conversation
❌ 1 test failed (1 flaky test).
Contributor
Thanks for leading this effort, @senarvi! I'm sure this feature will be useful to the PyTorch community in the coming years. After glancing through the code changes, they look good to me. As long as the revised EMA unit test passes, it should be good to go.
What does this PR do?
The `WeightAveraging` callback doesn't support sharded models. The reason is that either the averaged model would have to be sharded too, or the full model parameters are needed when creating and updating the averaged model. There was a lot of interest in using EMA with FSDP, but it was left out of the original PR because it's not obvious how to implement it. @amorehead noticed that SimpleFold uses Lightning, `AveragedModel`, and FSDP: it simply summons the full parameters before updating the averaged model. That's what this PR does.

The full parameters are also needed when creating the averaged model and when swapping the current and the averaged model for validation. I call `pl_module.configure_model()` in `setup()`, meaning that the full parameters are initialized in CPU memory. SimpleFold doesn't define `configure_model()` at all, so I believe the result is the same. When updating the averaged model, SimpleFold doesn't use `offload_to_cpu`, so I don't use it either. If the entire model doesn't fit in GPU memory, you'll run out of memory at this point.

This is probably the best we can do without massive changes. Is this good enough? I don't know; I've never used FSDP. Maybe someone who has an actual use case could check if this is useful. Tagging people who asked about this in the original PR: @amorehead @kzrpg @npuichigo
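For context, here is a conceptual sketch (not the PR's actual code) of the per-parameter EMA update that `torch.optim.swa_utils.AveragedModel` applies with an EMA averaging function. Under FSDP each rank holds only a shard of every parameter, which is why the approach described above gathers the full parameters first; this sketch uses plain Python floats instead of tensors to stay self-contained, and the `ema_update` helper is hypothetical.

```python
def ema_update(averaged, current, decay=0.999):
    """Hypothetical helper illustrating the EMA rule:
    new_avg = decay * avg + (1 - decay) * current, applied elementwise.
    AveragedModel performs the equivalent update on parameter tensors."""
    return [decay * a + (1.0 - decay) * c for a, c in zip(averaged, current)]

# With decay=0.9, the averaged weights move 10% of the way toward the
# current weights on each update.
avg = [1.0, 2.0]   # averaged (EMA) parameters
cur = [0.0, 0.0]   # current model parameters
avg = ema_update(avg, cur, decay=0.9)
print(avg)  # [0.9, 1.8]
```

The update only makes sense when `averaged` and `current` are the full (unsharded) parameters, which is why summoning full parameters before each update is the key step when combining this with FSDP.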
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines.
Reviewer checklist
📚 Documentation preview 📚: https://pytorch-lightning--21414.org.readthedocs.build/en/21414/