
Integrated VLM benchmark code for Eagle2 #3698


Open · chohk88 wants to merge 9 commits into main

Conversation

chohk88 (Collaborator) commented on Jul 21, 2025

Description

Closing the previous pull request (#3652) due to rebase difficulties with the main branch. This new PR resubmits the same changes for the VLM benchmark framework—now cleanly rebased on the latest main branch—and incorporates all feedback from the original review.

  1. Integrated VLM benchmark framework
    • Currently supports Eagle2 and Qwen2.5-VL
    • Planned support: PaliGemma, etc.
  2. Added a custom token-generation function for multi-modal (MM) models (see the sketch below)
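
A minimal sketch of what such a custom generation step might look like, assuming a Hugging Face-style multi-modal model that accepts `pixel_values` alongside `input_ids`; the function name and signature here are illustrative, not the PR's actual API:

```python
import torch

@torch.inference_mode()
def generate_mm(model, input_ids, pixel_values, max_new_tokens=128, eos_token_id=None):
    """Greedy decoding that threads the image features through every step."""
    generated = input_ids
    for _ in range(max_new_tokens):
        # Forward pass over the full sequence; a KV-cache variant would only feed the last token.
        outputs = model(input_ids=generated, pixel_values=pixel_values)
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=-1)
        if eos_token_id is not None and (next_token == eos_token_id).all():
            break
    return generated
```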

Type of change

Please delete options that are not relevant and/or add your own.

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that the relevant reviewers are notified

@chohk88 chohk88 requested review from peri044 and zewenli98 July 21, 2025 16:27
@chohk88 chohk88 self-assigned this Jul 21, 2025
@chohk88 chohk88 added the component: conversion and component: dynamo labels Jul 21, 2025
peri044 (Collaborator) commented on Aug 6, 2025

Qwen model: the command I used:

```bash
python run_vlm.py
```

Error:

```
File "/work/TensorRT/tools/llm/run_vlm.py", line 448, in <module>
    inputs = load_inputs(args, processor, DEVICE)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/work/TensorRT/tools/llm/run_vlm.py", line 188, in load_inputs
    from qwen_vl_utils import process_vision_info
ModuleNotFoundError: No module named 'qwen_vl_utils'
```
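
Note: the missing helper is published on PyPI as `qwen-vl-utils`, so installing it should resolve the import (version not pinned here):

```bash
pip install qwen-vl-utils
```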

peri044 (Collaborator) commented on Aug 6, 2025

When I tried the Eagle2 model, it showed:

```
Traceback (most recent call last):
  File "/work/TensorRT/tools/llm/run_vlm.py", line 443, in <module>
    model, processor, emb_layer = load_model(args.model, DEVICE, dtype)
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/work/TensorRT/tools/llm/run_vlm.py", line 141, in load_model
    return _load_eagle2(device, torch_dtype)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/work/TensorRT/tools/llm/run_vlm.py", line 101, in _load_eagle2
    AutoModel.from_pretrained(
  File "/root/.pyenv/versions/3.11.13/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.pyenv/versions/3.11.13/lib/python3.11/site-packages/transformers/modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/root/.pyenv/versions/3.11.13/lib/python3.11/site-packages/transformers/modeling_utils.py", line 4336, in from_pretrained
    config = cls._autoset_attn_implementation(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.pyenv/versions/3.11.13/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2109, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/root/.pyenv/versions/3.11.13/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2252, in _check_and_enable_flash_attn_2
    raise ImportError(f"{preface} the package flash_attn seems to be not installed. {install_message}")
ImportError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: the package flash_attn seems to be not installed. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2.
```
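
As noted later in this review, installing flash-attn clears this error even though the benchmark ultimately runs attention through SDPA. A possible install command, assuming a CUDA toolchain that matches the local PyTorch build (the `--no-build-isolation` flag is a commonly suggested workaround for flash-attn's build requirements, not something mandated by this PR):

```bash
pip install flash-attn==2.7.1.post4 --no-build-isolation
```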

peri044 (Collaborator) left a review comment:
Please update docs and add these models to the list of supported models.


chohk88 (Collaborator, Author) left a comment:
Thank you for your useful comments! I have addressed every comment!

chohk88 (Collaborator, Author) commented on Aug 11, 2025

> Qwen model: the command I used: `python run_vlm.py`
>
> Error:
>
> ```
> File "/work/TensorRT/tools/llm/run_vlm.py", line 448, in <module>
>     inputs = load_inputs(args, processor, DEVICE)
>              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "/work/TensorRT/tools/llm/run_vlm.py", line 188, in load_inputs
>     from qwen_vl_utils import process_vision_info
> ModuleNotFoundError: No module named 'qwen_vl_utils'
> ```

I have added the installation instructions (for both FlashAttention2 and qwen_vl_utils) to the README and tutorial, and also included a helpful message to guide users on installation if the package is not found when running the script.

#### Vision Language Models: `run_vlm.py`

```bash
python run_vlm.py --model Qwen/Qwen2.5-VL-3B-Instruct --precision FP16 --num_tokens 128 --cache static_v1 --enable_pytorch_run --benchmark
```
A collaborator commented on this snippet:
Let's use the Eagle2 model command here, since that is the fully optimized path.
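
For reference, the corresponding Eagle2 invocation would presumably mirror the Qwen one; the Hugging Face model id below (`nvidia/Eagle2-2B`) is an assumption and should be whichever Eagle2 checkpoint the script actually supports:

```bash
python run_vlm.py --model nvidia/Eagle2-2B --precision FP16 --num_tokens 128 --cache static_v1 --enable_pytorch_run --benchmark
```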

peri044 (Collaborator) left a comment:
Installing flash-attn 2.7.1.post4 works. Let's mention this in the README under limitations, and make clear that although we install this version, we don't actually use flash-attn; instead we modify the model to use SDPA.
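
A minimal sketch of the "install flash-attn but run with SDPA" idea, assuming the standard `transformers` `attn_implementation` argument is the mechanism used (whether the PR passes it at load time or patches the config afterwards is an assumption):

```python
import torch
from transformers import AutoModel

# flash_attn must be importable so the remote Eagle2 code loads without the ImportError above,
# but attention is forced to PyTorch SDPA, so flash-attn kernels are never actually used.
model = AutoModel.from_pretrained(
    "nvidia/Eagle2-2B",          # assumed model id, for illustration only
    trust_remote_code=True,
    torch_dtype=torch.float16,
    attn_implementation="sdpa",  # override the checkpoint's FlashAttention2 default
)
```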
