Conversation

@finbarrtimbers (Collaborator) commented Nov 19, 2025

Not sure how to test checkpointing, which is a recurring theme. I believe @saurabh111233212 is planning to do this next week.

Runs:


Note

Adds a retry-wrapped maybe_save_checkpoint helper (with backoff) to save checkpoints and replaces the inline save block; adds the backoff dependency.

  • Training/GRPO:
  • Extracts checkpoint saving into maybe_save_checkpoint(...) and decorates it with @backoff.on_exception(backoff.expo, Exception, max_tries=3) to retry failures (a minimal sketch follows this list).
    • Replaces the inline save block in one_training_step with a call to maybe_save_checkpoint (behavior preserved, including optional eval job launch).
  • Dependencies:
    • Adds backoff to pyproject.toml and pins it in requirements.txt/uv.lock.
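
For reference, a minimal sketch of what such a retry-wrapped helper can look like. The decorator and max_tries=3 come from the PR description; the signature, the save_fn callback, and the timing logic are illustrative assumptions, not the actual grpo_fast code.

import time

import backoff


@backoff.on_exception(backoff.expo, Exception, max_tries=3)
def maybe_save_checkpoint(save_fn, training_step: int, checkpoint_frequency: int) -> float:
    """Save a checkpoint, retrying up to 3 times with exponential backoff.

    save_fn stands in for the real Ray-based save (and optional eval-job launch).
    """
    if checkpoint_frequency <= 0 or training_step % checkpoint_frequency != 0:
        return 0.0
    start = time.monotonic()
    save_fn(training_step)
    return time.monotonic() - start

With backoff.expo the waits grow exponentially (roughly 1, 2, 4 seconds before jitter), so a transient filesystem or network hiccup gets a couple of quick retries before the exception propagates.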

Written by Cursor Bugbot for commit 72e748f. This will update automatically on new commits. Configure here.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @finbarrtimbers, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the robustness of the checkpointing process within the training pipeline. It refactors the existing checkpoint saving logic into a new, dedicated function that automatically retries failed save operations using an exponential backoff strategy. This change aims to ensure that model checkpoints are reliably saved, even in the face of temporary issues, thereby safeguarding training progress and reducing potential disruptions.

Highlights

  • Robust Checkpointing: Implemented a retry mechanism for checkpoint saving operations to enhance reliability and prevent transient failures from disrupting the training process.
  • Code Refactoring: Extracted the checkpoint saving logic into a new dedicated function, maybe_save_checkpoint, improving modularity and readability.
  • Dependency Update: Added the backoff library as a new dependency to facilitate the retry functionality.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Diff context for the comment below:

)
save_time = timer.duration

return save_time

Bug: Backoff retry may launch duplicate Ray remote calls

The maybe_save_checkpoint function uses @backoff.on_exception to retry on any exception, but the function contains ray_get_with_progress calls that are Ray remote operations. If any exception occurs during these operations, backoff will retry the entire function, which will launch new remote calls. This can result in multiple concurrent save attempts to the same checkpoint, causing file conflicts or race conditions. The backoff decorator should either wrap only work that is safe to re-run from scratch, or the exception handling should distinguish between transient and non-transient failures.
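
A minimal sketch of the second option, distinguishing transient from non-transient failures with backoff's giveup hook; the _give_up predicate and its exception classification are illustrative assumptions, not code from this PR:

import backoff


def _give_up(exc: Exception) -> bool:
    # Stop retrying on anything that does not look transient, so programming
    # errors surface immediately instead of re-launching the Ray save tasks.
    # Treating only OSError/TimeoutError as transient is an assumption.
    return not isinstance(exc, (OSError, TimeoutError))


@backoff.on_exception(backoff.expo, Exception, max_tries=3, giveup=_give_up)
def maybe_save_checkpoint(*args, **kwargs):
    ...  # unchanged save logic would go here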


@gemini-code-assist (Contributor) left a comment

Code Review

This pull request introduces a retry mechanism for checkpointing using the backoff library, which is a great addition for improving the robustness of the training process against transient failures. The checkpointing logic has also been refactored into a separate function, which improves code clarity. I have a couple of suggestions to make the implementation even more robust and maintainable.

wandb.log(metrics, step=episode)


@backoff.on_exception(backoff.expo, Exception, max_tries=3)

Severity: medium

Using backoff.on_exception with the broad Exception class can mask underlying bugs by retrying on non-transient errors like TypeError or AttributeError. It's better to specify only the exceptions you expect to be transient, such as I/O or network-related errors. This makes the retry logic more robust and prevents hiding actual code issues.

Consider using a more specific set of exceptions. For example, you could catch IOError, OSError, and Ray-specific task errors. You might need to add import ray.exceptions for this.

Example:

@backoff.on_exception(backoff.expo, (IOError, OSError, ray.exceptions.RayTaskError), max_tries=3)
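
Spelled out with the import mentioned above, the suggestion amounts to something like the following; the exact exception tuple is still a judgment call for this codebase, and the function body is elided:

import backoff
import ray.exceptions


@backoff.on_exception(
    backoff.expo,
    (IOError, OSError, ray.exceptions.RayTaskError),
    max_tries=3,
)
def maybe_save_checkpoint(*args, **kwargs):
    ...  # unchanged save logic would go here

Note that in Python 3 IOError is an alias of OSError, so in practice this tuple retries OS-level I/O failures and Ray task errors only.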

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines 7 to 12 of pyproject.toml
dependencies = [
"accelerate>=1.10.1",
"antlr4-python3-runtime==4.11",
"backoff>=2.2.1",
"bitsandbytes>=0.44.1; platform_system != 'Darwin'",
"datasets>=4.0.0",

P1: Regenerate requirements.txt after adding backoff

The new checkpoint retry logic imports backoff, and this commit adds the dependency in pyproject.toml. However, requirements.txt (the pinned export used for pip install -r requirements.txt) was not updated. Any environment that relies on the requirements file instead of uv will miss backoff, and importing open_instruct.grpo_fast will fail with ModuleNotFoundError. Please regenerate requirements.txt so that backoff is included.
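
(For uv-managed projects, the pinned export is typically refreshed with a command along the lines of uv export --format requirements-txt -o requirements.txt; the exact invocation is an assumption here, so whatever regeneration step this repo already standardizes on should be used.)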

finbarrtimbers and others added 2 commits November 19, 2025 09:47
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>