Problem
`LLMTestCase.input` only accepts `str`, making it impossible to include PDF documents as part of the agent input. We pass PDF files to the model as described in OpenAI's file-inputs guide: https://developers.openai.com/api/docs/guides/file-inputs/
Proposed Solution
Extend multimodal support to PDFs, following the existing MLLMImage pattern:
```python
from deepeval.test_case import LLMTestCase, MLLMDocument

pdf = MLLMDocument(url="./contract.pdf", local=True)
test_case = LLMTestCase(
    input=f"What does this contract say about payment terms? {pdf}",
    tools_called=[...],
)
```