
Conversation

@nagkumar91 (Member) commented Aug 12, 2025

Description

Please add an informative description that covers the changes made by the pull request, and link all relevant issues.

If an SDK is being regenerated based on a new API spec, a link to the pull request containing these API spec changes should be included above.

All SDK Contribution checklist:

  • The pull request does not introduce breaking changes.
  • CHANGELOG is updated for new features, bug fixes or other significant changes.
  • I have read the contribution guidelines.

General Guidelines and Best Practices

  • Title of the pull request is clear and informative.
  • There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.

Testing Guidelines

  • Pull request includes test coverage for the included changes.

@Copilot Copilot AI review requested due to automatic review settings August 12, 2025 15:40
@nagkumar91 nagkumar91 requested a review from a team as a code owner August 12, 2025 15:40
@github-actions github-actions bot added the Evaluation Issues related to the client library for Azure AI Evaluation label Aug 12, 2025
Copilot's earlier review comment was marked as outdated.

github-actions bot commented Aug 12, 2025

API Change Check

APIView identified API-level changes in this PR and created the following API reviews:

azure-ai-evaluation

Copilot AI (Contributor) left a comment


Pull Request Overview

This PR adds support for reasoning models to evaluators by introducing an is_reasoning_model keyword parameter. When set, this parameter updates the evaluator configuration appropriately for reasoning models, enabling better integration with Azure OpenAI's reasoning capabilities.

Key Changes:

  • Added is_reasoning_model parameter to all evaluators' constructors (see the usage sketch after this list)
  • Updated QAEvaluator to propagate this parameter to child evaluators
  • Added defensive parameter checking in GroundednessEvaluator for backward compatibility
  • Updated documentation across evaluators to describe the new parameter
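A rough usage sketch of the new keyword, not taken from the diff (endpoint, key, and deployment values are placeholders; the model_config shape follows the library's existing convention):

```python
from azure.ai.evaluation import GroundednessEvaluator, QAEvaluator

# Standard azure-ai-evaluation model configuration; values are placeholders.
model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "api_key": "<your-api-key>",
    "azure_deployment": "<your-reasoning-model-deployment>",
}

# Per this PR, is_reasoning_model adjusts the evaluator's prompty configuration
# for reasoning models.
groundedness = GroundednessEvaluator(model_config, is_reasoning_model=True)

# QAEvaluator propagates the flag to its child evaluators.
qa = QAEvaluator(model_config, is_reasoning_model=True)
```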

Reviewed Changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 2 comments.

Summary per file:

  • _similarity/_similarity.py: Added is_reasoning_model parameter and updated docstrings
  • _retrieval/_retrieval.py: Added is_reasoning_model parameter support
  • _response_completeness/_response_completeness.py: Added is_reasoning_model parameter and improved formatting
  • _relevance/_relevance.py: Added is_reasoning_model parameter support
  • _qa/_qa.py: Updated to propagate is_reasoning_model to child evaluators
  • _groundedness/_groundedness.py: Added parameter support with backward compatibility checks
  • _fluency/_fluency.py: Added is_reasoning_model parameter and updated docstrings
  • _base_prompty_eval.py: Updated to pass is_reasoning_model to AsyncPrompty.load (see the sketch after this list)
  • _base_multi_eval.py: Minor import formatting improvement
  • _coherence/_coherence.py: Added is_reasoning_model parameter and updated docstrings
  • CHANGELOG.md: Documented the new feature and bug fix
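The _base_prompty_eval.py change itself is not shown in this view; the following is a speculative sketch of the general pattern only (the helper name, the "parameters" key handling, and the dropped sampling fields are assumptions, not the PR's actual code):

```python
from promptflow.core import AsyncPrompty

# Hypothetical helper illustrating how a base evaluator might branch on
# is_reasoning_model before loading its prompty. Not the PR's implementation.
def _load_prompty(prompty_path: str, prompty_model_config: dict, is_reasoning_model: bool):
    if is_reasoning_model:
        # Reasoning deployments typically reject sampling parameters such as
        # temperature/top_p, so a reasoning-aware config would drop or replace them.
        params = dict(prompty_model_config.get("parameters", {}))
        params.pop("temperature", None)
        params.pop("top_p", None)
        prompty_model_config = {**prompty_model_config, "parameters": params}
    return AsyncPrompty.load(source=prompty_path, model=prompty_model_config)
```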

You can also share your feedback on Copilot code review for a chance to win a $100 gift card. Take the survey.

return "pf_client"

assert False, "This should be impossible"
# Defensive default
return "run_submitter"
A reviewer (Contributor) commented on this change:

Should we let it fail like before?
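For reference, "fail like before" refers to the removed assert. An explicit fail-fast variant (a sketch, not code from this PR) would raise an exception, which, unlike assert, is not stripped when Python runs with -O:

```python
# Sketch of an explicit fail-fast alternative to the defensive fallback.
raise RuntimeError("Unreachable evaluator-client branch; this should be impossible.")
```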
