fix: preserve expected_outcome in conversational golden conversion #2598

Open
aerosta wants to merge 1 commit into confident-ai:main from
aerosta:fix/conversational-golden-drops-expected-outcome

Conversation


aerosta (Contributor) commented Apr 4, 2026

Summary

ConversationalGolden.expected_outcome was being dropped when goldens were converted to ConversationalTestCase. As a result, conversational metrics that require expected_outcome could fail validation or silently skip evaluation after conversion.

This change preserves expected_outcome in convert_convo_goldens_to_convo_test_cases(), so converted conversational test cases keep the same evaluation inputs as the source golden.

Changes

  • preserve expected_outcome when converting ConversationalGolden to ConversationalTestCase
  • add a regression test covering the conversion path
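The change described above can be sketched as follows. This is a minimal, self-contained illustration of the conversion fix, not deepeval's actual source: the dataclass fields and the `Turn` type are simplified stand-ins, and only the function name `convert_convo_goldens_to_convo_test_cases` is taken from the PR description.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Simplified stand-ins for deepeval's multi-turn types (illustrative shapes only).
@dataclass
class Turn:
    role: str
    content: str

@dataclass
class ConversationalGolden:
    scenario: str
    turns: List[Turn] = field(default_factory=list)
    expected_outcome: Optional[str] = None

@dataclass
class ConversationalTestCase:
    scenario: str
    turns: List[Turn] = field(default_factory=list)
    expected_outcome: Optional[str] = None

def convert_convo_goldens_to_convo_test_cases(
    goldens: List[ConversationalGolden],
) -> List[ConversationalTestCase]:
    return [
        ConversationalTestCase(
            scenario=golden.scenario,
            turns=list(golden.turns),
            # The fix: carry expected_outcome over instead of dropping it,
            # so metrics that depend on it still see the golden's value.
            expected_outcome=golden.expected_outcome,
        )
        for golden in goldens
    ]
```

The regression test then reduces to converting a golden that has an `expected_outcome` set and asserting the resulting test case carries the same value.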

Test plan

```shell
DEEPEVAL_TELEMETRY_OPT_OUT=1 python -m pytest tests/test_core/test_datasets/test_dataset.py -q --tb=short
DEEPEVAL_TELEMETRY_OPT_OUT=1 python -m pytest tests/test_core/test_test_case/test_multi_turn/test_conversational_test_case.py -q --tb=short
```


vercel bot commented Apr 4, 2026

@aerosta is attempting to deploy a commit to the Confident AI Team on Vercel.

A member of the Team first needs to authorize it.
