Commit b78b498 (1 parent: 97d3b00)

feat: add support for job attachment evaluation (#1489)

Authored by AAgnihotry and claude
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>

File tree: 18 files changed (+2746, -9 lines)
packages/uipath/pyproject.toml (1 addition, 1 deletion)
```diff
@@ -1,6 +1,6 @@
 [project]
 name = "uipath"
-version = "2.10.32"
+version = "2.10.33"
 description = "Python SDK and CLI for UiPath Platform, enabling programmatic interaction with automation services, process management, and deployment tools."
 readme = { file = "README.md", content-type = "text/markdown" }
 requires-python = ">=3.11"
```
New file: 162 additions, 0 deletions
# Quick Start Guide

Get up and running with the attachment evaluation sample in 5 minutes!

## Step 1: Install Dependencies

```bash
cd samples/attachment_evaluation_test
uv sync
```

## Step 2: Set Up UiPath Credentials

Choose one of these methods:

### Option A: Environment Variables

```bash
export UIPATH_URL="https://your-tenant.uipath.com"
export UIPATH_ACCESS_TOKEN="your-access-token"
```

### Option B: Interactive Auth

```bash
uv run uipath auth
```

Follow the prompts to authenticate.

## Step 3: Run the Agent

```bash
# Generate a sales report
uv run uipath run main '{"task": "Generate sales report"}'
```

**Expected output:**

```json
{
  "report": "urn:uipath:cas:file:orchestrator:abc12345-...",
  "task": "Generate sales report",
  "status": "completed"
}
```

The `report` field contains the attachment URI!
## Step 4: Run Evaluations

```bash
uv run uipath eval main evaluations/eval-sets/default.json --workers 1
```

**Expected output:**

```
Running evaluations...
✓ Test sales report exact match - PASSED (2/2 evaluators)
✓ Test inventory report contains - PASSED (1/1 evaluator)
✓ Test employee report line-by-line - PASSED (1/1 evaluator)
✓ Test generic report exact match - PASSED (2/2 evaluators)
✓ Test partial match with contains - PASSED (1/1 evaluator)

Summary: 5/5 tests passed
```

## What's Happening?

1. **Agent runs** and generates report content
2. **Content is uploaded** as a job attachment to UiPath
3. **Attachment URI** is returned in agent output
4. **Evaluators detect** the URI automatically
5. **Attachment is downloaded** and content is evaluated
6. **Results are displayed** showing pass/fail
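Steps 4 and 5 can be sketched end to end. This is not the SDK's actual evaluator code; `download_attachment` is a stand-in for whatever fetches the attachment body from Orchestrator:

```python
ATTACHMENT_PREFIX = "urn:uipath:cas:file:orchestrator:"

def resolve_output(output: dict, download_attachment) -> dict:
    """Replace attachment URIs in an agent output with their downloaded content.

    `download_attachment` is a caller-supplied function (URI -> str); the real
    framework would call the platform here before running evaluators.
    """
    resolved = {}
    for key, value in output.items():
        if isinstance(value, str) and value.startswith(ATTACHMENT_PREFIX):
            # Step 4: URI detected; step 5: download so content can be evaluated.
            resolved[key] = download_attachment(value)
        else:
            resolved[key] = value
    return resolved

# Fake downloader standing in for the real service call:
fake_store = {"urn:uipath:cas:file:orchestrator:abc": "Sales report content"}
resolved = resolve_output(
    {"report": "urn:uipath:cas:file:orchestrator:abc", "status": "completed"},
    fake_store.get,
)
print(resolved["report"])  # -> Sales report content
```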
## Try Different Reports

```bash
# Inventory report
uv run uipath run main '{"task": "Generate inventory report"}'

# Employee report
uv run uipath run main '{"task": "Generate employee report"}'

# Generic report
uv run uipath run main '{"task": "Complete project review"}'
```

## View Attachment Content

After running the agent, you can manually download the attachment to see its content:

```bash
# Extract the UUID from the output
# urn:uipath:cas:file:orchestrator:YOUR-UUID-HERE

# Or check in UiPath Orchestrator UI:
# Orchestrator > Jobs > Job Details > Attachments
```

## Customize

### Add New Report Types

Edit `main.py` and add a new condition:

```python
elif "financial" in task.lower():
    content = """Your report content here"""
```
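For context, a branch like that slots into whatever keyword dispatch `main.py` uses on the task string. A hedged sketch (the function name and report strings are illustrative, not the sample's actual code):

```python
def generate_report_content(task: str) -> str:
    # Illustrative keyword dispatch; the real main.py may differ.
    t = task.lower()
    if "sales" in t:
        return "Sales report: ..."
    elif "inventory" in t:
        return "Inventory report: ..."
    elif "employee" in t:
        return "Employee report: ..."
    elif "financial" in t:  # the new branch added above
        return "Your report content here"
    return "Generic report: ..."  # fallback for tasks like "Complete project review"

print(generate_report_content("Generate financial report"))  # -> Your report content here
```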
### Add New Evaluators

Create a new evaluator config in `evaluations/evaluators/`:

```json
{
  "version": "1.0",
  "evaluatorTypeId": "uipath-json-similarity",
  "evaluatorConfig": {
    "name": "MyEvaluator",
    "targetOutputKey": "report"
  }
}
```
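Before wiring a new evaluator in, it can help to sanity-check the config file. A small stdlib-only sketch; the required keys are inferred from the example above, not from a published schema:

```python
import json

def check_evaluator_config(text: str) -> str:
    """Parse an evaluator config and return its name, raising on missing keys."""
    cfg = json.loads(text)
    for key in ("version", "evaluatorTypeId", "evaluatorConfig"):
        if key not in cfg:
            raise ValueError(f"missing top-level key: {key}")
    if "name" not in cfg["evaluatorConfig"]:
        raise ValueError("evaluatorConfig must have a name")
    return cfg["evaluatorConfig"]["name"]

example = """
{
  "version": "1.0",
  "evaluatorTypeId": "uipath-json-similarity",
  "evaluatorConfig": {"name": "MyEvaluator", "targetOutputKey": "report"}
}
"""
print(check_evaluator_config(example))  # -> MyEvaluator
```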
### Add New Test Cases

Edit `evaluations/eval-sets/default.json` and add:

```json
{
  "name": "My test case",
  "input": {"task": "..."},
  "evaluationCriteria": {
    "MyEvaluator": {...}
  }
}
```
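Building the case programmatically keeps the JSON valid. A sketch that assumes the eval set keeps its cases under an `evaluations` list; that key, and the criteria payload, are assumptions about `default.json`'s layout, not documented fact:

```python
import json

def add_test_case(eval_set: dict, name: str, task: str, criteria: dict) -> dict:
    """Append a test case; 'evaluations' as the list key is an assumption."""
    case = {"name": name, "input": {"task": task}, "evaluationCriteria": criteria}
    eval_set.setdefault("evaluations", []).append(case)
    return eval_set

updated = add_test_case({}, "My test case", "Generate financial report",
                        {"MyEvaluator": {"expected": "..."}})  # illustrative criteria
print(json.dumps(updated, indent=2))
```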
## Troubleshooting

### "Attachment not found"
- Check that your credentials point to the correct tenant
- Verify the attachment wasn't deleted

### "Permission denied"
- Ensure your access token has attachment read/write permissions

### "Module not found"
- Run `uv sync` to install dependencies

## Next Steps

- Read the full [README.md](./README.md) for detailed documentation
- Check [../../src/uipath/_resources/eval.md](../../src/uipath/_resources/eval.md) for evaluation framework details
- See [../line_by_line_test/](../line_by_line_test/) for line-by-line evaluation examples

## Need Help?

- [UiPath Python SDK Documentation](https://docs.uipath.com/)
- [GitHub Issues](https://github.com/UiPath/uipath-python-sdk/issues)
