
Conversation

Rahul2512Chauhan
Contributor

This PR fixes misleading documentation for the LLMContextPrecisionWithoutReference metric.

Old docs suggested that reference context is required.
Updated docs correctly reflect that only retrieved contexts (retrieved_contexts) and user_input are required, matching the actual implementation.

Closes #1981

@dosubot bot added the label size:XS (This PR changes 0-9 lines, ignoring generated files) on Aug 30, 2025
Contributor

@anistark left a comment


Thanks for the PR @Rahul2512Chauhan
Please take a look at the comments left.

@@ -56,7 +56,7 @@ Ragas is a library that provides tools to supercharge the evaluation of Large La

<div class="toggle-list"><span class="arrow">→</span> How can I make evaluation results more explainable?</div>
<div style="display: none;">
- The best way is to trace and log your evaluation, then inspect the results using LLM traces. You can follow a detailed example of this process <a href="/howtos/customizations/metrics/tracing/">here</a>.
+ The best way is to trace and log your evaluation, then inspect the results using LLM traces. You can follow a detailed example of this process <a href="https://docs.ragas.io/en/stable/howtos/customizations/metrics/tracing/">here</a>.


Please remove this. Not relevant to this PR.

@@ -17,7 +17,8 @@ The following metrics uses LLM to identify if a retrieved context is relevant or

### Context Precision without reference

- `LLMContextPrecisionWithoutReference` metric can be used when you have both retrieved contexts and also reference answer associated with a `user_input`. To estimate if a retrieved contexts is relevant or not this method uses the LLM to compare each of the retrieved context or chunk present in `retrieved_contexts` with `response`.
+ `LLMContextPrecisionWithoutReference` metric can be used when you have retrieved contexts (`retrieved_contexts`) associated with a `user_input`. This metric does not require a reference answer. To estimate if a retrieved context is relevant, this method uses the LLM to compare each of the retrieved contexts or chunks in `retrieved_contexts` with the response.


Suggested change
- `LLMContextPrecisionWithoutReference` metric can be used when you have retrieved contexts (`retrieved_contexts`) associated with a `user_input`. This metric does not require a reference answer. To estimate if a retrieved context is relevant, this method uses the LLM to compare each of the retrieved contexts or chunks in `retrieved_contexts` with the response.
+ `LLMContextPrecisionWithoutReference` metric can be used when you have retrieved contexts (`retrieved_contexts`) associated with a `user_input`. This metric does not require a reference answer. To estimate if a retrieved context is relevant, this method uses the LLM to compare each of the retrieved contexts or chunks in `retrieved_contexts` with the `response`.
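For background on the doc text under discussion: context precision metrics of this kind typically work by having the LLM emit a binary relevance verdict for each retrieved chunk, then aggregating the verdicts into a rank-weighted average. The sketch below shows that standard aggregation step only; the function name and the plain 0/1 verdict list are illustrative assumptions, not the actual Ragas internals.

```python
def context_precision(verdicts):
    """Rank-weighted context precision from binary relevance verdicts.

    verdicts: list of 0/1 flags, one per retrieved chunk, in rank order
    (1 = the LLM judged the chunk relevant to the response).
    """
    total_relevant = sum(verdicts)
    if total_relevant == 0:
        return 0.0
    score = 0.0
    hits = 0
    for k, v in enumerate(verdicts, start=1):
        hits += v
        # precision@k counts only at ranks where the chunk is relevant
        score += (hits / k) * v
    return score / total_relevant

# [1, 0, 1]: precision@1 = 1, precision@3 = 2/3 -> (1 + 2/3) / 2 = 5/6
print(context_precision([1, 0, 1]))
```

Because the weighting rewards relevant chunks ranked early, `[1, 0]` scores higher than `[0, 1]` even though both contain one relevant chunk.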

Successfully merging this pull request may close these issues.

Misleading documentation for "LLM Based Context Precision without reference"