
Commit 3fe2162

sadra-barikbin authored and ahgraber committed

Fix a couple of typos in docs/getstarted/evals.md (explodinggradients#2081)

Hi there! To fix a couple of typos in `docs/getstarted/evals.md`

1 parent 5658efb commit 3fe2162

File tree: 1 file changed (+4 −4 lines)

docs/getstarted/evals.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -62,7 +62,7 @@ choose_evaluator_llm.md
 **Evaluation**
 
 
-Here we will use [AspectCritic](../concepts/metrics/available_metrics/aspect_critic.md), which an LLM based metric that outputs pass/fail given the evaluation criteria.
+Here we will use [AspectCritic](../concepts/metrics/available_metrics/aspect_critic.md), which is an LLM based metric that outputs pass/fail given the evaluation criteria.
 
 
 ```python
@@ -148,8 +148,8 @@ Output
 {'summary_accuracy': 0.84}
 ```
 
-This score shows that out of all the samples in our test data, only 84% of summaries passes the given evaluation criteria. Now, **It
-s important to see why is this the case**.
+This score shows that out of all the samples in our test data, only 84% of summaries passes the given evaluation criteria. Now, **It's
+important to see why is this the case**.
 
 Export the sample level scores to pandas dataframe
 
@@ -187,4 +187,4 @@ If you want help with improving and scaling up your AI application using evals.
 
 ## Up Next
 
-- [Evaluate a simple RAG application](rag_eval.md)
+- [Evaluate a simple RAG application](rag_eval.md)
````
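For context on the doc being patched: the `summary_accuracy` score of 0.84 shown in the diff is simply the mean of per-sample pass/fail (1/0) results from the metric. A minimal sketch of that aggregation, using a hypothetical list of binary sample scores (not the actual ragas API or test data):

```python
# Hypothetical sketch: a pass/fail metric such as AspectCritic yields a
# 0 or 1 score per sample; the aggregate score is the mean of those values.
# The per_sample list below is made up for illustration (21 passes / 25).
import pandas as pd

per_sample = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1,
              1, 0, 1, 1, 1, 1, 0, 1, 1, 1,
              1, 1, 1, 1, 1]

# Export the sample-level scores to a pandas DataFrame, as the doc suggests,
# then aggregate to the single reported number.
df = pd.DataFrame({"summary_accuracy": per_sample})
print({"summary_accuracy": df["summary_accuracy"].mean()})  # {'summary_accuracy': 0.84}
```

Inspecting `df` row by row (rather than only the mean) is what lets you see *why* roughly 16% of summaries failed the criterion.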

Comments (0)