
[trl] TRL v0.28 (and above) RL fixes #4156

Open
Datta0 wants to merge 11 commits into unslothai:main from Datta0:trl_028_rl_fixes

Conversation

@Datta0
Collaborator

@Datta0 Datta0 commented Mar 4, 2026

TRL restructured its RL pipelines. It now has a file called vllm_generation.py, which takes care of creating the vLLM instance and handling inference for RL. We need to patch it, as we have been doing so far for the RL trainer classes.
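The class-level patching this describes can be sketched as follows. The VLLMGeneration and _init_vllm names mirror TRL's, but the stand-in class and the unsloth_existing_engine attribute below are purely illustrative, not TRL's or Unsloth's actual implementation:

```python
# Illustrative stand-in for TRL's VLLMGeneration class (not the real API).
class VLLMGeneration:
    def _init_vllm(self):
        # In the real library this constructs a fresh, expensive vLLM engine.
        return "new vLLM engine"

# Keep a handle to the original method before rebinding it.
_original_init_vllm = VLLMGeneration._init_vllm

def _patched_init_vllm(self):
    # Reuse an engine that fast inference already created, if one exists.
    existing = getattr(self, "unsloth_existing_engine", None)
    if existing is not None:
        return existing
    return _original_init_vllm(self)

# Patch the class in place, the same style used for the RL trainer classes.
VLLMGeneration._init_vllm = _patched_init_vllm
```

Because the method is replaced on the class, every instance TRL creates afterwards picks up the patched behaviour without changing TRL's own source files.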

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates Unsloth's reinforcement learning integration to align with recent changes in the TRL library, specifically addressing the restructuring of their RL pipelines in versions 0.28 and higher. The primary goal is to ensure seamless operation and performance optimization by adapting to TRL's new vllm_generation.py module, which centralizes vLLM instance creation and inference. The changes focus on preventing redundant vLLM instantiations and optimizing weight synchronization, thereby improving efficiency when Unsloth's fast inference is utilized.

Highlights

  • TRL v0.28+ Compatibility: Adapted Unsloth's RL pipeline to support TRL versions 0.28 and above, which introduced a new vllm_generation.py file for vLLM instance management and inference.
  • Optimized vLLM Instantiation: Implemented dynamic patching for TRL's vllm_generation module to prevent redundant vLLM instance creation when Unsloth's fast inference LoRA is already active, reusing existing vLLM engines.
  • Reduced Weight Synchronization Overhead: Patched TRL's sync_weights method and removed collective_rpc('reload_weights') calls to avoid unnecessary weight synchronization, as Unsloth's fast inference LoRA already handles shared weights efficiently.
  • Loss Computation Alignment: Updated the grpo_compute_loss_slow and grpo_accumulated_loss functions to correctly handle new return values and parameters, ensuring compatibility with recent TRL loss calculation changes.
Changelog
  • unsloth/models/rl.py
    • Fixed minor whitespace inconsistencies in the __init__ method of Unsloth{RLConfig_name}.
    • Added a new function patch_trl_vllm_generation to encapsulate patching logic for TRL's vLLM generation components.
    • Integrated the call to patch_trl_vllm_generation within the main PatchFastRL function to ensure new TRL versions are correctly handled.
  • unsloth/models/rl_replacements.py
    • Imported the linecache module to facilitate dynamic source code patching.
    • Enhanced grpo_trainer__generate_single_turn to remove the sync_weights() block from TRL's source code, optimizing for Unsloth's fast inference.
    • Modified the calls to grpo_compute_loss_slow and grpo_accumulated_loss to correctly pass completion_mask and attention_mask respectively, and accommodate additional return values.
    • Introduced vllm_generation_init_patch to dynamically patch TRL's vllm_generation module, specifically targeting _init_vllm to reuse existing vLLM engines, sync_weights to skip redundant calls, and generate to bypass reload_weights RPCs.
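The source-rewriting technique these changelog items describe, a regex over a function's source plus a linecache registration so tracebacks still show the patched code, can be sketched like this. The toy function body, the regex pattern, and the fake filename are placeholders, not Unsloth's actual patch:

```python
import linecache
import re

# Source of a toy function whose body contains a call we want to strip
# (stands in for _generate_single_turn and its sync_weights() block).
source = (
    "def generate(x, y):\n"
    "    sync_weights()  # per-step sync we want to remove\n"
    "    return x + y\n"
)

# 1. Rewrite the source with a regex (this pattern is illustrative).
patched = re.sub(r"\n\s*sync_weights\(\).*", "", source)

# 2. Register the patched source under a fake filename so tracebacks and
#    debuggers can still display the code that is actually running.
filename = "<unsloth_patched_generate>"
linecache.cache[filename] = (
    len(patched), None, patched.splitlines(True), filename,
)

# 3. Compile and exec the rewritten source, then rebind the function.
namespace = {}
exec(compile(patched, filename, "exec"), namespace)
generate = namespace["generate"]
```

The linecache step is what makes dynamically patched functions debuggable: without it, any traceback through the patched code would show empty source lines.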

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c75d8aa4af

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

f"{indent}# Skipping per-step vLLM sync_weights().\n"
)

function = sync_weights_block.sub(remove_sync_weights_block, function)


P1: Keep vLLM weight sync for non-fast-inference runs

This change removes the self.vllm_generation.sync_weights() call from _generate_single_turn for all GRPO executions, not just when Unsloth fast-inference LoRA is active. In the non-fast-inference path (where _init_vllm creates a separate LLM(...) instance), that sync is what keeps vLLM generation weights aligned with the training model each step; skipping it causes rollouts to be generated from stale parameters, which can corrupt training signals. The later unsloth_fast_inference_lora guard in VLLMGeneration.sync_weights does not mitigate this because the call site is removed entirely.
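One way to preserve that behaviour, sketched here with hypothetical class and attribute names, is to guard the call site rather than delete it, so the plain-vLLM path still syncs every step:

```python
# Hypothetical trainer skeleton: the sync is skipped only when Unsloth's
# fast-inference LoRA already shares weights with the vLLM engine.
class TrainerSketch:
    def __init__(self, fast_inference_lora: bool):
        self.unsloth_fast_inference_lora = fast_inference_lora
        self.sync_calls = 0

    def sync_weights(self):
        self.sync_calls += 1  # real code pushes training weights into vLLM

    def _generate_single_turn(self):
        if not getattr(self, "unsloth_fast_inference_lora", False):
            # Separate LLM(...) instance: weights must be pushed each step,
            # otherwise rollouts are generated from stale parameters.
            self.sync_weights()
        return "completions"
```

With this guard, fast-inference runs avoid the redundant sync while plain-vLLM runs keep their weight alignment.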

Useful? React with 👍 / 👎.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates Unsloth to be compatible with TRL v0.28 and above by patching the new trl.generation.vllm_generation module. The changes correctly reuse existing vLLM instances and prevent redundant weight synchronization, following the established patching patterns in the codebase. The implementation is robust, utilizing regular expressions for source code transformation and linecache for better debugging of patched code. I've provided a couple of suggestions to refine exception handling to be more specific, which will improve code maintainability. Overall, the changes are well-executed and effectively address the compatibility requirements.

if Version(importlib_version("trl")) < Version("0.28.0"):
    return

try:


medium

The exception handling (ImportError, NameError, Exception) is redundant because Exception is a base class for both ImportError and NameError. It's better to be more specific about the exceptions you expect to catch. In this case, ImportError is the most likely exception if the module path is incorrect or the trl version is not as expected. Catching the broad Exception can mask other unexpected issues during the import process.

Suggested change
      try:
-     except (ImportError, NameError, Exception) as e:
+     except ImportError as e:

logger.info(f"Unsloth: Could not find VLLMGeneration.{method_name}")
return False

try:


medium

Catching the broad Exception class can hide unexpected errors. The inspect.getsource() function is documented to raise TypeError for unsupported object types and OSError if the source file cannot be retrieved. It is better practice to catch these specific exceptions to make the error handling more precise and robust.

Suggested change
      try:
-     except Exception as e:
+     except (TypeError, OSError) as e:
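For the inspect.getsource case, the narrower handler the review suggests could look like this sketch (the wrapper function name is hypothetical):

```python
import inspect

def safe_getsource(obj):
    # inspect.getsource documents TypeError for unsupported object types
    # and OSError when the source file cannot be retrieved; catch only
    # those, so unrelated failures still surface.
    try:
        return inspect.getsource(obj)
    except (TypeError, OSError):
        return None
```

Built-ins such as len have no Python source, so they take the TypeError branch, while ordinary functions return their source text.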


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3fd2994dbf


function,
)

<<<<<<< HEAD


P0: Remove unresolved merge markers in rl_replacements

The file contains raw conflict markers (<<<<<<<, |||||||, >>>>>>>) inside grpo_trainer__generate_single_turn, which makes this module invalid Python and raises a SyntaxError as soon as unsloth.models.rl_replacements is imported. In environments that load RL patching (including PatchFastRL), this is a hard failure that prevents startup entirely.

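A lightweight safeguard against shipping such markers, sketched here as a hypothetical pre-import check rather than anything in this PR, is to scan the source string before executing it:

```python
import re

# Unresolved Git conflict markers make a module invalid Python, so refuse
# to exec/import any source that still contains one at a line start.
_CONFLICT_MARKER = re.compile(r"^(<{7}|\|{7}|={7}|>{7})", re.MULTILINE)

def has_conflict_markers(source: str) -> bool:
    return _CONFLICT_MARKER.search(source) is not None
```

Running a check like this in CI (or just before a dynamic exec of patched source) turns the SyntaxError-at-import failure mode into an explicit, early error.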

@Datta0 force-pushed the trl_028_rl_fixes branch from 3fd2994 to 0af28ae on March 5, 2026, 14:39
@pluesclues
Collaborator

One comment I have is that my PRs need to be merged first:

Unsloth: #4140
Unsloth-zoo: unslothai/unsloth-zoo#528
