KeyError: invalid tool call parser 'openai' when using --tool-call-parser openai #51

@wanghuacan

Description:
When I launch vllm serve with the --tool-call-parser openai option, the server crashes with a KeyError.

Steps to Reproduce:
Run the following command:

vllm serve openai/gpt-oss-120b  \
  --tensor-parallel-size 8 \
  --max-model-len 131072 \
  --max-num-batched-tokens 10240 \
  --max-num-seqs 128 \
  --tool-call-parser openai \
  --enable-auto-tool-choice \
  --gpu-memory-utilization 0.85 \
  --no-enable-prefix-caching

Actual Behavior:
The server raises the following error:

(APIServer pid=966592)     raise KeyError(f"invalid tool call parser: {args.tool_call_parser} "
(APIServer pid=966592) KeyError: 'invalid tool call parser: openai (chose from { deepseek_v3, glm45, granite-20b-fc, granite, hermes, hunyuan_a13b, internlm, jamba, kimi_k2, llama4_pythonic, llama4_json, llama3_json, minimax, mistral, phi4_mini_json, pythonic, qwen3_coder, step3, xlam })'
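
For context, the crash is just a failed dictionary lookup: the server validates the --tool-call-parser value against a registry of parser names, and "openai" is not among them. The sketch below is illustrative only (the registry dict and helper are hypothetical, not vLLM's actual code); it reproduces the shape of the error above.

```python
# Hypothetical registry mirroring the parser names listed in the error message.
# In vLLM the real registry is populated by the installed tool-parser plugins.
TOOL_PARSERS = {
    name: object() for name in (
        "deepseek_v3", "glm45", "granite-20b-fc", "granite", "hermes",
        "hunyuan_a13b", "internlm", "jamba", "kimi_k2", "llama4_pythonic",
        "llama4_json", "llama3_json", "minimax", "mistral",
        "phi4_mini_json", "pythonic", "qwen3_coder", "step3", "xlam",
    )
}

def get_tool_parser(name: str):
    """Look up a parser by name; on a miss, raise KeyError listing valid choices."""
    try:
        return TOOL_PARSERS[name]
    except KeyError:
        raise KeyError(
            f"invalid tool call parser: {name} "
            f"(chose from {{ {', '.join(TOOL_PARSERS)} }})"
        ) from None
```

So any name missing from the registry (because it is not implemented in the installed version, or its plugin is not loaded) fails the same way, regardless of whether the name is valid in a newer release.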

Expected Behavior:
Either openai should be a valid option for --tool-call-parser, or the documentation should clarify which parsers are supported.

Environment:

  • vLLM version: 0.10.1
  • GPU type and count: 8× H100

Question:

  • Is the openai parser supported in the latest version?
  • If not, which parser should be used for OpenAI-style tool calls?
