Adds retries to checkpointing. #1213
base: main
Conversation
Summary of Changes

Hello @finbarrtimbers, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed. This pull request improves the robustness of the checkpointing process in the training pipeline: it refactors the existing checkpoint-saving logic into a dedicated function that automatically retries failed save operations with exponential backoff, so that checkpoints are saved reliably in the face of transient failures, safeguarding training progress and reducing disruptions.
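For reviewers unfamiliar with the backoff library, the pattern described above looks roughly like the sketch below. The helper name matches the PR's new function (which, per the review comments further down, appears to live in open_instruct.grpo_fast), but the argument list and body here are placeholders, not the actual implementation.

    import backoff


    # Sketch only: argument names and body are placeholders, not the PR's code.
    @backoff.on_exception(backoff.expo, Exception, max_tries=3)
    def maybe_save_checkpoint(args, policy_group, training_step):
        # On failure, backoff re-invokes this function after an exponentially
        # growing, jittered delay; after the third failed attempt the last
        # exception is re-raised to the caller.
        ...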
Diff context (tail of the new helper):

    )
    save_time = timer.duration
    return save_time
Bug: Backoff retry may launch duplicate Ray remote calls
The maybe_save_checkpoint function uses @backoff.on_exception to retry on any exception, but the function contains ray_get_with_progress calls, which are Ray remote operations. If any exception occurs during these operations, backoff retries the entire function and launches new remote calls, which can result in multiple concurrent save attempts against the same checkpoint, causing file conflicts or race conditions. Either the backoff decorator should wrap only an operation that is safe to re-run in its entirety, or the exception handling should distinguish between transient and non-transient failures.
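One way to address part of this concern, sketched under the assumption that the checkpoint can be staged to a temporary path before being published (all names here are illustrative, not from the PR), is to pair a giveup predicate with an atomic publish. Cancelling or awaiting in-flight Ray tasks before retrying would still need to be handled inside the helper itself.

    import os
    import pickle

    import backoff


    def _non_transient(exc: Exception) -> bool:
        # Programming errors fail identically on every retry, so give up at once.
        return isinstance(exc, (TypeError, AttributeError, KeyError))


    @backoff.on_exception(backoff.expo, Exception, max_tries=3, giveup=_non_transient)
    def save_checkpoint_once(state: dict, checkpoint_path: str) -> None:
        # Stage the write to a temp file, then publish atomically, so a retried
        # attempt never leaves a half-written checkpoint at the final path.
        tmp_path = checkpoint_path + ".tmp"
        with open(tmp_path, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp_path, checkpoint_path)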
Code Review
This pull request introduces a retry mechanism for checkpointing using the backoff library, which is a great addition for improving the robustness of the training process against transient failures. The checkpointing logic has also been refactored into a separate function, which improves code clarity. I have a couple of suggestions to make the implementation even more robust and maintainable.
Diff context:

    wandb.log(metrics, step=episode)
    ...
    @backoff.on_exception(backoff.expo, Exception, max_tries=3)
Using backoff.on_exception with the broad Exception class can mask underlying bugs by retrying on non-transient errors like TypeError or AttributeError. It's better to specify only the exceptions you expect to be transient, such as I/O or network-related errors. This makes the retry logic more robust and prevents hiding actual code issues.
Consider using a more specific set of exceptions. For example, you could catch IOError, OSError, and Ray-specific task errors. You might need to add import ray.exceptions for this.
Example:

    @backoff.on_exception(backoff.expo, (IOError, OSError, ray.exceptions.RayTaskError), max_tries=3)
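Spelled out with its imports, that narrowing might look like the sketch below; the body is elided, and the exact set of Ray exceptions worth retrying is a judgment call.

    import backoff
    import ray.exceptions

    # Retry only errors that plausibly come from I/O or the Ray control plane.
    TRANSIENT_ERRORS = (IOError, OSError, ray.exceptions.RayTaskError)


    @backoff.on_exception(backoff.expo, TRANSIENT_ERRORS, max_tries=3)
    def maybe_save_checkpoint(*args, **kwargs):
        # Body elided; only the retried exception tuple differs from the PR's
        # current @backoff.on_exception(backoff.expo, Exception, max_tries=3).
        ...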
💡 Codex Review
Here are some automated review suggestions for this pull request.
Diff context (pyproject.toml):

    dependencies = [
        "accelerate>=1.10.1",
        "antlr4-python3-runtime==4.11",
        "backoff>=2.2.1",
        "bitsandbytes>=0.44.1; platform_system != 'Darwin'",
        "datasets>=4.0.0",
Regenerate requirements.txt after adding backoff
The new checkpoint retry logic imports backoff, and this commit adds the dependency in pyproject.toml. However, requirements.txt (the pinned export used for pip install -r requirements.txt) was not updated. Any environment that relies on the requirements file instead of uv will miss backoff, and importing open_instruct.grpo_fast will fail with ModuleNotFoundError. Please regenerate requirements.txt so that backoff is included.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Not sure how to test checkpointing, which is a recurring theme. I believe @saurabh111233212 is planning to do this next week.
Runs:
Note
Adds a retry-wrapped maybe_save_checkpoint helper (with backoff) to save checkpoints and replaces the inline save block; adds the backoff dependency.
- Defines maybe_save_checkpoint(...) and decorates it with @backoff.on_exception(backoff.expo, Exception, max_tries=3) to retry failures.
- Replaces the inline save block in one_training_step with a call to maybe_save_checkpoint (behavior preserved, including the optional eval job launch).
- Adds backoff to pyproject.toml and pins it in requirements.txt / uv.lock.

Written by Cursor Bugbot for commit 72e748f. This will update automatically on new commits.