
Conversation


@ansschh commented Oct 3, 2025

Adds unit tests for generate.py backend selection and argument parsing; a minimal test sketch follows the list below.

Tests Added:

  • Argument parsing with all options
  • Torch/triton/vllm backend initialization
  • Checkpoint path validation
  • Generation flow with/without limits
  • Tokenizer integration
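
As an illustration of the checkpoint-validation case, here is a minimal pytest sketch. It assumes gpt_oss.generate.main(args) raises FileNotFoundError for a missing local checkpoint before touching any other arguments (as the diff further down shows); the other Namespace field names are assumptions for illustration, not the module's actual flags.

import argparse

import pytest

from gpt_oss.generate import main


def test_main_rejects_missing_checkpoint(tmp_path):
    # Point args.checkpoint at a path that does not exist on disk.
    args = argparse.Namespace(
        checkpoint=str(tmp_path / "no-such-checkpoint"),
        backend="torch",  # assumed field name for backend selection
        limit=None,       # assumed field for the generation limit
    )
    # main() should fail fast, before any backend is initialized.
    with pytest.raises(FileNotFoundError):
        main(args)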

ansschh added 2 commits August 9, 2025 11:25
- Add return type hint to get_tokenizer()
- Add type hints and checkpoint validation to generate.py main()
- Add parameter type hints to suppress_output() in torch/utils.py

Improves IDE support and catches potential bugs early.
- Test argument parsing with all options
- Test torch/triton/vllm backend initialization
- Test checkpoint path validation
- Test generation flow with/without limits
- Test tokenizer integration

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +13 to +17
def main(args: argparse.Namespace) -> None:
    # Validate checkpoint path exists
    checkpoint_path = Path(args.checkpoint)
    if not checkpoint_path.exists():
        raise FileNotFoundError(f"Checkpoint path does not exist: {args.checkpoint}")


P1: Block remote vLLM checkpoints with unconditional filesystem check

The new guard raises FileNotFoundError whenever args.checkpoint is not an existing local path before any backend-specific logic runs. The vLLM backend previously accepted Hugging Face model IDs or other remote identifiers and let VLLMGenerator download the weights itself. With this change, calling python -m gpt_oss.generate meta-llama/Meta-Llama-3-8B-Instruct -b vllm now fails immediately even though the backend could handle it. Consider limiting the existence check to backends that truly require local files (torch/triton) or skipping it for identifiers that are not paths.
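
A sketch of the scoped check the review suggests, assuming main() dispatches on an args.backend value of "torch", "triton", or "vllm"; the set name and the surrounding structure are assumptions for illustration, not the repository's actual code.

import argparse
from pathlib import Path

# Backends that load weights from a local directory; vLLM can resolve
# remote identifiers (e.g. Hugging Face model IDs) on its own.
LOCAL_BACKENDS = {"torch", "triton"}


def main(args: argparse.Namespace) -> None:
    # Only require an existing local path for backends that need one.
    if args.backend in LOCAL_BACKENDS:
        checkpoint_path = Path(args.checkpoint)
        if not checkpoint_path.exists():
            raise FileNotFoundError(
                f"Checkpoint path does not exist: {args.checkpoint}"
            )
    # ... backend-specific initialization continues here ...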



Done
