Commit 2fb0628 (1 parent: 3dd6d2c)

fix: resolve ruff linting issues

- Remove unused imports (pytest, configure_eval_set_run_span, configure_evaluation_span)
- Rename unused loop variable evaluator_id to _evaluator_id

4 files changed (+43 −116 lines)

samples/calculator/pyproject.toml

Lines changed: 11 additions & 6 deletions
@@ -3,12 +3,17 @@ name = "calculator-agent"
 version = "0.0.1"
 description = "calculator-agent"
 authors = [{ name = "John Doe", email = "john.doe@myemail.com" }]
-dependencies = [
-    "uipath>=2.4.0, <2.5.0",
-]
 requires-python = ">=3.11"
 
-[dependency-groups]
-dev = [
-    "uipath-dev>=0.0.15",
+dependencies = [
+    "uipath==2.8.9.dev1012914676",
 ]
+
+[[tool.uv.index]]
+name = "testpypi"
+url = "https://test.pypi.org/simple/"
+publish-url = "https://test.pypi.org/legacy/"
+explicit = true
+
+[tool.uv.sources]
+uipath = { index = "testpypi" }

samples/calculator/uv.lock

Lines changed: 31 additions & 106 deletions
Some generated files are not rendered by default.

tests/cli/eval/test_eval_span_utils.py

Lines changed: 0 additions & 3 deletions
@@ -4,15 +4,12 @@
 from typing import Any, Dict, Optional
 from unittest.mock import MagicMock
 
-import pytest
 from opentelemetry.trace import Status, StatusCode
 
 from uipath._cli._evals._span_utils import (
     EvalSetRunOutput,
     EvaluationOutput,
     EvaluationOutputSpanOutput,
-    configure_eval_set_run_span,
-    configure_evaluation_span,
     extract_evaluator_scores,
     normalize_score_to_100,
     set_eval_set_run_output_and_metadata,

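The imports that survive the cleanup name the utilities under test. As a purely hypothetical sketch of what a helper like `normalize_score_to_100` might do (the real implementation in `uipath._cli._evals._span_utils` may well differ), a 0.0–1.0 score could be clamped and scaled:

```python
def normalize_score_to_100(score: float) -> float:
    """Hypothetical sketch only: clamp a 0.0-1.0 score and scale it to 0-100.

    This is an illustration of the idea, not the library's actual code.
    """
    return max(0.0, min(1.0, score)) * 100.0
```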
tests/cli/eval/test_eval_tracing_integration.py

Lines changed: 1 addition & 1 deletion
@@ -792,7 +792,7 @@ async def test_evaluation_set_run_span_has_output_attribute(
     assert "scores" in output_data
     assert isinstance(output_data["scores"], dict)
     # Verify that scores are in 0-100 range
-    for evaluator_id, score in output_data["scores"].items():
+    for _evaluator_id, score in output_data["scores"].items():
         assert isinstance(score, (int, float))
         assert 0 <= score <= 100
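The rename above follows the standard lint convention that ruff enforces (rule B007, unused loop control variable): prefixing a bound-but-unused loop variable with an underscore marks it as intentionally ignored. A self-contained sketch of the pattern, with made-up sample data:

```python
# Sample scores keyed by evaluator id (illustrative data, not from the repo).
scores = {"exact-match": 100.0, "similarity": 87.5}

# `_evaluator_id` is bound on each iteration but never read; the leading
# underscore tells linters such as ruff (B007) that this is deliberate.
for _evaluator_id, score in scores.items():
    assert isinstance(score, (int, float))
    assert 0 <= score <= 100
```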

0 comments