docs: fix misleading description for LLMContextPrecisionWithoutReference #2239
Conversation
Thanks for the PR @Rahul2512Chauhan
Please take a look at the comments left.
```diff
@@ -56,7 +56,7 @@ Ragas is a library that provides tools to supercharge the evaluation of Large La

     <div class="toggle-list"><span class="arrow">→</span> How can I make evaluation results more explainable?</div>
     <div style="display: none;">
-    The best way is to trace and log your evaluation, then inspect the results using LLM traces. You can follow a detailed example of this process <a href="/howtos/customizations/metrics/tracing/">here</a>.
+    The best way is to trace and log your evaluation, then inspect the results using LLM traces. You can follow a detailed example of this process <a href="https://docs.ragas.io/en/stable/howtos/customizations/metrics/tracing/">here</a>.
```
Please remove this. Not relevant to this PR.
```diff
@@ -17,7 +17,8 @@ The following metrics uses LLM to identify if a retrieved context is relevant or

 ### Context Precision without reference

-`LLMContextPrecisionWithoutReference` metric can be used when you have both retrieved contexts and also reference answer associated with a `user_input`. To estimate if a retrieved contexts is relevant or not this method uses the LLM to compare each of the retrieved context or chunk present in `retrieved_contexts` with `response`.
+`LLMContextPrecisionWithoutReference` metric can be used when you have retrieved contexts (`retrieved_contexts`) associated with a `user_input`. This metric does not require a reference answer. To estimate if a retrieved context is relevant, this method uses the LLM to compare each of the retrieved contexts or chunks in `retrieved_contexts` with the response.
```
Suggested change:

```diff
-`LLMContextPrecisionWithoutReference` metric can be used when you have retrieved contexts (`retrieved_contexts`) associated with a `user_input`. This metric does not require a reference answer. To estimate if a retrieved context is relevant, this method uses the LLM to compare each of the retrieved contexts or chunks in `retrieved_contexts` with the response.
+`LLMContextPrecisionWithoutReference` metric can be used when you have retrieved contexts (`retrieved_contexts`) associated with a `user_input`. This metric does not require a reference answer. To estimate if a retrieved context is relevant, this method uses the LLM to compare each of the retrieved contexts or chunks in `retrieved_contexts` with the `response`.
```
This PR fixes misleading documentation for the `LLMContextPrecisionWithoutReference` metric.

The old docs suggested that a reference context is required. The updated docs correctly reflect that only retrieved contexts (`retrieved_contexts`) and `user_input` are required, matching the actual implementation.

Closes #1981