
Conversation

@jacklanchantin
Contributor

What does this PR do? Please describe:
A summary of the change or the issue that is fixed.

Fixes #{issue number}

Does your PR introduce any breaking changes? If yes, please list them:
List of all backwards-incompatible changes.

Check list:

  • Was the content of this PR discussed and approved via a GitHub issue? (no need for typos or documentation improvements)
  • Did you read the contributor guideline?
  • Did you make sure that your PR does only one thing instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (no need for typos, documentation, or minor internal changes)

Ilia Kulikov and others added 30 commits February 22, 2025 01:36
* Add Skywork Reward Model

* reorder

* working checkpoint with skywork

* unify VllmReward class

* cleanup wrap text

* actually not quite working. needs DP gather and scatter

* add gather scatter for vllm rm

* working checkpoint

* working checkpoint

* updates

* Instantiate metric recorders on rank 0 only (#1072)

* fix gather bug

* cleanup

* comment

* add grpo (runnable, but not increasing rewards yet)

* log outputs

* cleanup

* rename

* merge

* fixes

* testing online dpo after merge

* bug fix

* fixing merge errors

* fix bugs

* merged with online_training

* update grpo

* cleanup

* fix grpo bug

* cleanup

* cleanup

* isort/black

* move vllm generate_rewards to its own function

* refactor how reward models use prompt_batch

* remove breakpoint

* working chkpt

* remove irrelevant skywork stuff

---------

Co-authored-by: jacklanchantin user <[email protected]>
Co-authored-by: Can Balioglu <[email protected]>
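The "add gather scatter for vllm rm" commits describe splitting prompts across data-parallel ranks so each rank scores its shard with the vLLM reward model, then gathering the scores back into the original order. A minimal sketch of that pattern, with illustrative names (`scatter_prompts`, `gather_rewards` are assumptions, not this PR's actual API):

```python
# Hypothetical sketch of the DP scatter/gather pattern these commits add:
# prompts are sharded round-robin across ranks, each rank scores its shard,
# and the per-rank reward lists are interleaved back into prompt order.

def scatter_prompts(prompts, world_size):
    """Split a flat list of prompts into one shard per DP rank."""
    return [prompts[rank::world_size] for rank in range(world_size)]

def gather_rewards(shard_rewards, world_size):
    """Interleave per-rank reward lists back into the original prompt order."""
    total = sum(len(shard) for shard in shard_rewards)
    out = [None] * total
    for rank, shard in enumerate(shard_rewards):
        for i, reward in enumerate(shard):
            out[rank + i * world_size] = reward
    return out
```

In a real trainer the scatter/gather would go through the distributed backend (e.g. collective ops) rather than plain lists; this only shows the indexing that keeps rewards aligned with their prompts.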
Ilia Kulikov and others added 26 commits June 11, 2025 01:22
* Pairwise J1 prompt

* Adding Pairwise-J1 support

* Minor changes

* Simplifying

* Add logging back in

* Add generation prompt.

* removing debug statements

* More logging for scores out of range

* Adding reward name as a reward class attribute

* Cleaning up generative judges with extractor classes

* Typing

* logger label

---------

Co-authored-by: swarna <[email protected]>
Co-authored-by: Ilia Kulikov <[email protected]>
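The "Cleaning up generative judges with extractor classes" and "More logging for scores out of range" commits suggest a small parser that pulls a numeric score out of a judge model's free-form output. A sketch of that idea, assuming a boxed-score format and a clamped range (the tag format, class name, and range are illustrative, not the PR's actual code):

```python
import re

class BoxedScoreExtractor:
    """Extract a score like '\\boxed{7}' from judge output, clamped to [lo, hi]."""

    def __init__(self, lo: float = 0.0, hi: float = 10.0):
        self.lo, self.hi = lo, hi

    def __call__(self, judge_output: str):
        match = re.search(r"\\boxed\{(-?\d+(?:\.\d+)?)\}", judge_output)
        if match is None:
            return None  # unparsable judge output
        score = float(match.group(1))
        # the commits mention logging scores out of range; here we just clamp
        return max(self.lo, min(self.hi, score))
```

Keeping the extraction in its own class lets different judge prompt formats (pairwise, pointwise) swap in different extractors without touching the reward-model code.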
* add gener_verifier for hf backend

* working? with HF backend

* fix bug

* comments

* working with vllm now

* add generative_judge

* merge

* remove GeneralVerifier

* move wrap_text to extractor

* rename wrap_text --> format_prompt

* string

* remove comment

* comment

* comments

* comment
* bug fix

* new parser

* fix

* fix parsing

* dont parse ref answer

* remove unused
* add new args for reward_handler.create

* add qwen25_3b

* increase max seq len

* revert yaml
* add new args for reward_handler.create

* add qwen25_3b

* increase max seq len

* revert online_dpo
…ard is binary (#1262)

* parallel vllm worker init, online dpo ref score fix when bs>1 and reward is binary

* comments clean

* addressing feedback

---------

Co-authored-by: Ilia Kulikov <[email protected]>
@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Sep 30, 2025
@jacklanchantin jacklanchantin changed the title RL Midtraining Online Training Sep 30, 2025
jacklanchantin and others added 2 commits October 27, 2025 15:51
* drgrpo

* get vllm logps

* Update _wandb.py

* remove beta check

* format

* revert

* add importance sampling correction

* dont run ref model forward if beta==0

* add tis ratio clamp = 2

* clean up

* configs

* clean up

* default

* var name

* var name

* only use tis_imp_ratio_cap

* revert unrelated files

* clean up

* fix type hint

* black/isort

* Allow batched inputs for get_vllm_logprobs

* allow batch_sz > 1

* Modify condition for reference log probabilities

* fix batch>1, microbatching

---------

Co-authored-by: Jack Lanchantin <[email protected]>
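The commits above ("add importance sampling correction", "add tis ratio clamp = 2", "dont run ref model forward if beta==0") outline two optimizations: a truncated importance sampling (TIS) weight that corrects for the gap between the policy's log-probs and vLLM's sampling log-probs, capped at `tis_imp_ratio_cap`, and skipping the reference-model term entirely when the KL coefficient beta is 0. A minimal per-token sketch under assumed names and a simplified loss form (the real implementation operates on batched tensors):

```python
import math

def tis_weight(policy_logprob: float, vllm_logprob: float, cap: float = 2.0) -> float:
    """Clamp the importance ratio exp(logp_policy - logp_sampler) at `cap`."""
    return min(math.exp(policy_logprob - vllm_logprob), cap)

def grpo_token_loss(policy_logprob, vllm_logprob, ref_logprob, advantage,
                    beta=0.0, tis_imp_ratio_cap=2.0):
    # REINFORCE-style term, reweighted by the capped TIS ratio
    loss = -tis_weight(policy_logprob, vllm_logprob, tis_imp_ratio_cap) \
        * advantage * policy_logprob
    if beta > 0.0:
        # only pay for a reference-model forward when the KL term is active
        loss += beta * (policy_logprob - ref_logprob)
    return loss
```

Capping the ratio bounds the variance introduced by the correction, and gating on `beta > 0.0` saves a full reference-model forward pass in the common beta==0 configuration.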
…dd a comment for calling newer vllm generate function (#1396)

* set VLLM_ALLOW_INSECURE_SERIALIZATION=1 for newer vllm versions

* update generate function to align with newer vllm version
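Newer vLLM versions refuse to pickle arbitrary objects to workers unless `VLLM_ALLOW_INSECURE_SERIALIZATION` is set, which is what the first commit enables. A sketch of the setup, assuming the variable must be in the environment before vLLM workers are spawned (where exactly this project sets it is not shown here):

```python
import os

# Must be set before vLLM worker processes are created; setdefault
# preserves any value the user already exported.
os.environ.setdefault("VLLM_ALLOW_INSECURE_SERIALIZATION", "1")
```

As the name warns, this opts into insecure (pickle-based) serialization, so it should only be used in trusted environments.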
