Conversation

leejuyuu

What does this PR do?

Trtllm backend improvements

  • feat: add new finish reasons
  • fix: fix prometheus_port CLI short arg conflict
  • fix: fix segfault when canceling request
  • feat: add stop sequence support
  • feat: catch broader exception
  • feat: check existence of config files

Fixes #3205

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please
    add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@mfuntowicz

Add new finish reasons introduced in TensorRT-LLM v0.16.0.
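
For illustration, a minimal sketch of how the Rust side might map the
executor's finish reasons; the variant names and discriminants below are
assumptions about the v0.16.0 enum, not the exact code:

```rust
// Hypothetical mapping of the C++ enum tensorrt_llm::executor::FinishReason
// as it might arrive through the FFI as a raw discriminant.
fn map_finish_reason(raw: u32) -> &'static str {
    match raw {
        0 => "not_finished",
        1 => "end_id",     // generated the end-of-sequence token
        2 => "stop_words", // matched a stop sequence
        3 => "length",     // reached the token limit
        4 => "timed_out",  // assumed to be new in v0.16.0
        5 => "cancelled",  // assumed to be new in v0.16.0
        _ => "unknown",
    }
}
```
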
The short arg of `prometheus_port` (`-p`) conflicts with that of
`port`. Remove the short arg variant.

Fixes huggingface#3205
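
A minimal sketch of the fix in clap's derive style, assuming the CLI is
built with `clap` (field names and defaults are illustrative). With
`short` on both fields, clap would derive `-p` twice and panic at
startup:

```rust
use clap::Parser;

#[derive(Parser)]
struct Args {
    /// Keeps its short flag `-p`.
    #[clap(long, short, default_value = "3000")]
    port: u16,

    /// `short` removed so it no longer derives a conflicting `-p`;
    /// now reachable only as `--prometheus-port`.
    #[clap(long, default_value = "9000")]
    prometheus_port: u16,
}
```
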
When a request is cancelled, the `tensorrt_llm::executor::Result`
contains `outputTokenIds` with size 1, but `outputTokenIds[0]` has size
0. This causes `as_generation_step` to segfault.

Check the size of `outputTokenIds` and `logProbs` before attempting to
access the inner vector. The `finishReasons` can be skipped because it
has only one dimension and the minimum beam size is 1.
Because cxx has not added Option support yet, include two boolean flags
to denote whether each value is valid.
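
A sketch of the idea on the Rust side of the cxx bridge; the struct and
field names are illustrative, not the exact ones in the codebase:

```rust
#[cxx::bridge]
mod ffi {
    // Shared struct visible to both Rust and C++. cxx cannot yet
    // express Option<T> in shared structs, so validity travels as
    // booleans that the C++ side sets after checking the vector sizes.
    struct GenerationStep {
        request_id: u64,
        token_id: u32,
        log_prob: f32,
        /// false when outputTokenIds[0] was empty (e.g. a cancelled request)
        has_token: bool,
        /// false when logProbs was not populated for this step
        has_log_prob: bool,
        is_final: bool,
    }
}
```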

Change the log level to debug when a request is cancelled.

Support per-request stop sequences.
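
An illustrative sketch of the Rust side, assuming stop sequences arrive
as strings and are tokenized before crossing the FFI boundary (the
helper and its role are assumptions):

```rust
// Encode each stop sequence to token ids so the executor can match
// them; the C++ side would forward these as the request's stop words.
fn encode_stop_sequences(
    tokenizer: &tokenizers::Tokenizer,
    stop_sequences: &[String],
) -> Vec<Vec<u32>> {
    stop_sequences
        .iter()
        .filter_map(|s| tokenizer.encode(s.as_str(), false).ok())
        .map(|encoding| encoding.get_ids().to_vec())
        .collect()
}
```
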
The try/catch only uses the `what()` method, which means we can catch
the broader `std::exception` instead. This is beneficial because
nlohmann/json also throws exceptions derived from `std::exception`.
When the required config files are not present, nlohmann/json throws a
parsing error, which does not help much in identifying what went wrong.
Check for the existence of these files early and return specific error
messages.
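
The check itself lives on the C++ side next to the nlohmann/json calls,
but the pattern is simple; a Rust sketch of the same idea (the file
names are illustrative):

```rust
use std::path::Path;

// Fail fast with a specific message instead of letting the JSON parser
// error out on a missing file.
fn check_config_files(engine_dir: &Path) -> Result<(), String> {
    for name in ["config.json", "generation_config.json"] {
        let path = engine_dir.join(name);
        if !path.exists() {
            return Err(format!("required config file not found: {}", path.display()));
        }
    }
    Ok(())
}
```
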
Currently, the `do_sample` option is ignored and the executor always
samples. Set `top_k` to 1 if `do_sample` is false.
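
A minimal sketch of the mapping when building sampling parameters
(struct and field names are assumed):

```rust
struct SamplingParams {
    top_k: u32,
    top_p: f32,
    temperature: f32,
}

// Forcing top_k = 1 makes the executor always pick the most likely
// token, i.e. greedy decoding, which is what do_sample = false means.
fn build_sampling_params(do_sample: bool, top_k: u32, top_p: f32, temperature: f32) -> SamplingParams {
    SamplingParams {
        top_k: if do_sample { top_k } else { 1 },
        top_p,
        temperature,
    }
}
```
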
Get a more accurate inference start time from the trtllm response.
Because `Instant` does not expose an absolute value, create reference
points on both sides and return the duration relative to the reference
point instead.
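
A sketch of the reference-point trick, assuming the C++ side reports the
start time as nanoseconds elapsed since a steady-clock reference
captured at roughly the same moment as the Rust one (names are
assumptions):

```rust
use std::time::{Duration, Instant};

struct ClockAnchor {
    rust_ref: Instant,
}

impl ClockAnchor {
    // Capture the Rust reference point; the C++ side is assumed to
    // capture its own steady-clock reference at the same time.
    fn new() -> Self {
        Self { rust_ref: Instant::now() }
    }

    /// Convert "nanoseconds since the shared reference point", as
    /// reported in the trtllm response, back into a Rust `Instant`.
    fn to_instant(&self, nanos_since_ref: u64) -> Instant {
        self.rust_ref + Duration::from_nanos(nanos_since_ref)
    }
}
```
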
The executor_status_looper runs a spin loop even if there are no active
requests, which makes the service constantly waste a CPU core.

Make the loop block on receiving requests when none are running, to
reduce CPU usage when idle.
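
A sketch of the idle/busy receive pattern with a std mpsc channel (the
real code may use a different channel type):

```rust
use std::sync::mpsc::{Receiver, TryRecvError};

struct Request; // placeholder for the real request type

fn submit(_request: Request) { /* hand off to the executor (elided) */ }

fn executor_status_looper(receiver: Receiver<Request>) {
    let mut in_flight: usize = 0;
    loop {
        let next = if in_flight == 0 {
            // Idle: block until a request arrives instead of spinning.
            match receiver.recv() {
                Ok(r) => Some(r),
                Err(_) => break, // channel closed: shut down
            }
        } else {
            // Busy: non-blocking check so token polling continues below.
            match receiver.try_recv() {
                Ok(r) => Some(r),
                Err(TryRecvError::Empty) => None,
                Err(TryRecvError::Disconnected) => break,
            }
        };
        if let Some(request) = next {
            submit(request);
            in_flight += 1;
        }
        // ... poll the executor for tokens and decrement in_flight as
        // requests finish (elided) ...
    }
}
```
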
Make `tensorrt_llm_backend_t` interior-mutable by marking the `inner_`
struct as a `mutable` field, so the methods can be `const`.

This makes the pointer accessible from multiple threads on the Rust
side without wrapping it in a Mutex. The underlying
`tensorrt_llm::executor::Executor` already contains a mutex.
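
On the Rust side of the cxx bridge, the visible effect is that the
opaque type's methods can bind to `&self` instead of `Pin<&mut Self>`;
a sketch with assumed method names:

```rust
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        type TensorRtLlmBackendImpl;

        // Because the C++ methods are now const, cxx lets them take a
        // shared reference, so no &mut (and no Rust-side Mutex) is
        // needed to call them.
        fn num_tokens_ready(&self) -> usize;
        fn cancel(&self, request_id: u64);
    }
}
```
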
The executor_status_looper spends CPU time polling for the number of
tokens. Because the function is internally protected by a mutex, this
also interferes with the Executor.

Now that TensorRtLlmBackendImpl is interior-mutable, we can mark it as
`Send` and share it across multiple threads. The loop can therefore be
split into request and response parts, and we can await tokens instead
of constantly polling.
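
A sketch of the marker and the split; the `unsafe impl` encodes the
assumption stated above that the C++ executor synchronizes internally
(the stand-in type and task names are illustrative):

```rust
use std::sync::Arc;

// Stand-in for the opaque cxx handle that owns the C++ backend.
struct TensorRtLlmBackendImpl {
    raw: *mut core::ffi::c_void,
}

// SAFETY (assumption of this sketch): the underlying
// tensorrt_llm::executor::Executor guards its state with an internal
// mutex, so the handle may be moved to and shared across threads.
unsafe impl Send for TensorRtLlmBackendImpl {}
unsafe impl Sync for TensorRtLlmBackendImpl {}

async fn request_loop(_backend: Arc<TensorRtLlmBackendImpl>) {
    // receive incoming requests and submit them to the executor (elided)
}

async fn response_loop(_backend: Arc<TensorRtLlmBackendImpl>) {
    // await responses and stream tokens out, with no polling (elided)
}
```
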
The type of `eos_token_id` in `transformers.GenerationConfig` is
`Union[int, list[int]]` (as of transformers 4.57.0).

The original code only parses this field when the value is an array, so
`stop_words` is not populated for some models. Add code to handle the
`int` case as well.
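
A minimal serde sketch of accepting both shapes (everything except the
`eos_token_id` field name is illustrative):

```rust
use serde::Deserialize;

// Accepts both `"eos_token_id": 2` and `"eos_token_id": [2, 32000]`.
#[derive(Deserialize)]
#[serde(untagged)]
enum EosTokenId {
    Single(u32),
    Multiple(Vec<u32>),
}

impl EosTokenId {
    fn into_vec(self) -> Vec<u32> {
        match self {
            EosTokenId::Single(id) => vec![id],
            EosTokenId::Multiple(ids) => ids,
        }
    }
}

#[derive(Deserialize)]
struct GenerationConfig {
    eos_token_id: Option<EosTokenId>,
}
```
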
TGI already accepts a grammar for guided decoding through its HTTP API;
however, this feature has been disabled for the trtllm backend.

To enable this feature:
- Replace the hard-coded disable of the grammar support with the
  `disable_grammar_support` arg present in the v3 backend.
- Pass tokenizer information when constructing the trtllm Executor and
  enable guided decoding by default.
- Pass the validated grammar type and value from requests to the
  Executor.
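
A hedged sketch of the request-side plumbing, assuming the validated
grammar arrives as a typed enum and crosses the FFI as a (type, value)
string pair (the enum, variants, and tag strings are assumptions, not
the exact TGI or TensorRT-LLM API):

```rust
// Hypothetical mirror of TGI's validated grammar variants.
enum ValidGrammar {
    Json(String),  // a JSON schema, serialized as a string
    Regex(String), // a regular expression
}

// Flatten to a (type, value) pair that can cross the cxx bridge; the
// C++ side would build the executor's guided-decoding parameters from it.
fn grammar_to_ffi(grammar: Option<ValidGrammar>) -> (String, String) {
    match grammar {
        Some(ValidGrammar::Json(schema)) => ("json_schema".into(), schema),
        Some(ValidGrammar::Regex(regex)) => ("regex".into(), regex),
        None => (String::new(), String::new()),
    }
}
```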