Refactor restart of training #464
base: 410-refactor-of-model-initialisation-ie-weight-loading-model-freezing-transfer-learning
Conversation
Refactor restart of training [warm start, forked runs, restarts] Closes #458
Great stuff! Do you plan to have config / hydra options, so we can start a model and then specify exactly where the checkpoint should come from? Or would this be out of scope?
Yes exactly, so it's much easier to say "hey I have this checkpoint in an s3 bucket" or similar.
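Following up on the config question above, a fully config-driven warm start could look roughly like the sketch below. The key names and the helper function are illustrative assumptions, not the actual Anemoi/Hydra schema:

```python
# Hypothetical config fragment; key names are illustrative, not Anemoi's real schema.
config = {
    "training": {
        "checkpoint": {
            # Where to fetch the checkpoint from (local path, s3://, https://, gs://, abfs://).
            "source": "s3://example-bucket/run-001/last.ckpt",
            # How to apply it: warm start, weights-only, or transfer learning.
            "strategy": "warm_start",
        }
    }
}

def checkpoint_source(cfg: dict) -> str:
    """Return the configured checkpoint location (helper is hypothetical)."""
    return cfg["training"]["checkpoint"]["source"]
```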
…e loading system
- Add test suite for multi-source checkpoint loading (local, S3, HTTP, GCS, Azure)
- Add test suite for model loading strategies (standard, transfer learning, weights-only)
- Test error handling for network failures and missing files
- Test registry pattern functionality for both loaders
- Add extensive documentation explaining test organization and principles

The tests ensure robustness of the extensible checkpoint loading system across different sources and loading strategies, with proper error handling and validation.
@JesperDramsch thanks for the contribution. Is there an active use-case for the GCS loading? There is an active user-base for Azure, so I think implementing the Azure one would be a nice addition, but I wonder if you could save yourself some work by leaving GCS for when a use-case arises?
Description
This PR introduces a comprehensive checkpoint loading system that supports loading model checkpoints from various sources including local files, S3, HTTP/HTTPS, Google Cloud Storage, and Azure Blob Storage. The implementation provides a modular, extensible architecture that separates checkpoint retrieval from model loading strategies and includes robust error handling and validation.
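One common way to structure such a separation is to dispatch on the URI scheme of the checkpoint path. The sketch below is illustrative only; the function name and the exact scheme mapping are assumptions, not the PR's actual code:

```python
from urllib.parse import urlparse

def select_loader(path: str) -> str:
    """Pick a checkpoint loader name based on the URI scheme (illustrative sketch)."""
    scheme = urlparse(path).scheme
    if scheme == "s3":
        return "s3"
    if scheme in ("http", "https"):
        return "http"
    if scheme == "gs":
        return "gcs"
    if scheme in ("abfs", "az"):
        return "azure"
    # Paths without a scheme (e.g. /data/run/last.ckpt) fall back to the filesystem.
    return "local"
```

A real implementation would return loader objects rather than names, but the dispatch logic is the same idea.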
What problem does this change solve?
This change addresses several key problems in the current checkpoint loading workflow:
- New Feature: multi-source checkpoint loading, with support for local files, S3, HTTP/HTTPS, Google Cloud Storage, and Azure Blob Storage
- Enhanced Model Loading Strategies: standard loading, transfer learning, and weights-only loading
- Architecture Improvements: checkpoint retrieval is decoupled from model loading strategies, with pluggable loader implementations
What issue or task does this change relate to?
Closes #458
Should be updated and merged after #410 / #422
Additional notes
Implementation Details
The PR introduces two main components:
CheckpointLoaders (checkpoint_loaders.py):
- CheckpointLoader with pluggable implementations
- LocalCheckpointLoader for filesystem access
- RemoteCheckpointLoader with multi-cloud support

ModelLoading (model_loading.py):
- ModelLoader for different loading strategies

Usage Examples
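Since the tests mention a registry pattern for both loaders, here is a minimal sketch of how such a registry might be wired up. The class and method names are hypothetical, not the PR's actual API:

```python
class CheckpointLoaderRegistry:
    """Minimal sketch of a registry mapping URI schemes to loader classes (names hypothetical)."""

    def __init__(self):
        self._loaders = {}

    def register(self, scheme: str, loader_cls: type) -> None:
        """Associate a URI scheme (e.g. 's3', 'local') with a loader class."""
        self._loaders[scheme] = loader_cls

    def get(self, scheme: str) -> type:
        """Look up the loader class for a scheme, failing loudly if none is registered."""
        if scheme not in self._loaders:
            raise KeyError(f"No checkpoint loader registered for scheme '{scheme}'")
        return self._loaders[scheme]
```

Concrete loaders (local filesystem, S3, Azure, ...) would then be registered once at import time and looked up by scheme when a checkpoint path is resolved.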
Testing and Quality Assurance
Compatibility and Dependencies
Future Extensions
This architecture makes it easy to add:
As a contributor to the Anemoi framework, please ensure that your changes include unit tests, updates to any affected dependencies and documentation, and have been tested in a parallel setting (i.e., with multiple GPUs). As a reviewer, you are also responsible for verifying these aspects and requesting changes if they are not adequately addressed. For guidelines, please refer to https://anemoi.readthedocs.io/en/latest/
By opening this pull request, I affirm that all authors agree to the Contributor License Agreement.
📚 Documentation preview 📚: https://anemoi-training--464.org.readthedocs.build/en/464/