
Commit a5f3687

AdamW authored and committed
Add version for Llamaindex with FastApi and Auth0
1 parent 62c249f commit a5f3687


88 files changed, +12061 -2 lines changed

py-llamaindex/README.md

Lines changed: 93 additions & 2 deletions
@@ -1,3 +1,94 @@
-# Assistant0: An AI Personal Assistant Secured with Auth0 - LlamaIndex Python Version
+# Assistant0: An AI Personal Assistant Secured with Auth0 - LlamaIndex Python/FastAPI Version

Assistant0 is an AI personal assistant that consolidates your digital life by dynamically accessing multiple tools to help you stay organized and efficient.

## About the template

This template scaffolds an Auth0 + LlamaIndex + FastAPI starter app. It mainly uses the following libraries:

- [LlamaIndex's Python framework](https://docs.llamaindex.ai/en/stable/#introduction)
- The [Auth0 AI SDK](https://github.com/auth0-lab/auth0-ai-python) and [Auth0 FastAPI SDK](https://github.com/auth0/auth0-fastapi) to secure the application and call third-party APIs.
- [Auth0 FGA](https://auth0.com/fine-grained-authorization) to define fine-grained access control policies for your tools and RAG pipelines.
## 🚀 Getting Started

First, clone this repo and download it locally.

```bash
git clone https://github.com/auth0-samples/auth0-assistant0.git
cd auth0-assistant0/py-llamaindex
```

The project is divided into two parts:

- `backend/` contains the backend code for the Web app and API, written in Python using FastAPI.
- `frontend/` contains the frontend code for the Web app, written in React as a Vite SPA.

### Setup the backend

```bash
cd backend
```

Next, you'll need to set up environment variables in your repo's `.env` file. Copy the `.env.example` file to `.env`.

To start with the basic examples, you'll just need to add:

- Your OpenAI API key and the Auth0 credentials for the Web app. You can set up a new Auth0 tenant with an Auth0 Web App and Token Vault by following the Prerequisites instructions [here](https://auth0.com/ai/docs/call-others-apis-on-users-behalf).
- An Auth0 FGA account, which you can create [here](https://dashboard.fga.dev). Add the FGA store ID, client ID, client secret, and API URL to the `.env` file.
Next, install the required packages using your preferred package manager, e.g. uv:

```bash
uv sync
```

Now you're ready to start the database:

```bash
# start the postgres database
docker compose up -d
```

Initialize FGA store:

```bash
source .venv/bin/activate
python -m app.core.fga_init
```
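For reference, here is a minimal sketch of the kind of relationship check the initialized FGA store enables, using the `openfga-sdk` client directly. The environment variable names mirror `backend/.env.example`, but the `user:`/`doc:` type names are assumptions about the FGA model, and the app's own `authorization_manager` in `app/core/fga.py` may wrap this differently.

```python
import asyncio
import os

from openfga_sdk.client import ClientConfiguration, OpenFgaClient
from openfga_sdk.client.models import ClientCheckRequest
from openfga_sdk.credentials import CredentialConfiguration, Credentials

# Build a client from the same variables defined in backend/.env.example.
configuration = ClientConfiguration(
    api_url=os.environ["FGA_API_URL"],
    store_id=os.environ["FGA_STORE_ID"],
    credentials=Credentials(
        method="client_credentials",
        configuration=CredentialConfiguration(
            api_issuer="auth.fga.dev",
            api_audience=os.environ["FGA_API_AUDIENCE"],
            client_id=os.environ["FGA_CLIENT_ID"],
            client_secret=os.environ["FGA_CLIENT_SECRET"],
        ),
    ),
)

async def can_view(user_email: str, doc_id: str) -> bool:
    # Ask FGA whether the user holds the "can_view" relation on the document.
    # The "user:"/"doc:" prefixes are assumed type names, not the repo's model.
    async with OpenFgaClient(configuration) as fga:
        response = await fga.check(
            ClientCheckRequest(
                user=f"user:{user_email}",
                relation="can_view",
                object=f"doc:{doc_id}",
            )
        )
        return bool(response.allowed)

if __name__ == "__main__":
    print(asyncio.run(can_view("user@example.com", "example-doc-id")))
```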
Now you're ready to run the development server:

```bash
source .venv/bin/activate
uv run uvicorn app.main:app --reload
```
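Once the server is up, a quick smoke test can confirm the API is reachable. This assumes FastAPI's default OpenAPI route hasn't been disabled; the printed paths are whatever `app.main` actually registers.

```python
import httpx

# Fetch the auto-generated OpenAPI schema from the local dev server
# and print the registered API paths.
response = httpx.get("http://localhost:8000/openapi.json")
response.raise_for_status()
print(sorted(response.json()["paths"].keys()))
```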
### Start the frontend server

Copy the `.env.example` file to `.env` in the `frontend` directory.

Finally, you can start the frontend server in another terminal:

```bash
cd frontend
cp .env.example .env # copy the `.env.example` file to `.env`
npm install
npm run dev
```

This will start a React Vite dev server on port 5173.

![A streaming conversation between the user and the AI](./public/images/home-page.png)

Agent configuration lives in `backend/app/agents/assistant0.py`. From here, you can change the prompt and model, or add other tools and logic.
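For example, a new tool can be wrapped with `FunctionTool` and appended to the agent's tool list. This is a sketch only; the `get_current_time` helper below is illustrative and not part of the repo.

```python
from datetime import datetime, timezone

from llama_index.core.tools import FunctionTool

def get_current_time() -> str:
    """Return the current UTC time as an ISO 8601 string."""
    return datetime.now(timezone.utc).isoformat()

# Wrap the plain function as a LlamaIndex tool, then add it to the `tools`
# list in the agent module so the ReAct agent can call it.
get_current_time_tool = FunctionTool.from_defaults(
    name="get_current_time",
    description="Get the current UTC time.",
    fn=get_current_time,
)
```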
## License

This project is open-sourced under the MIT License - see the [LICENSE](LICENSE) file for details.

## Author

This project is built by [Adam W.](https://github.com/AdamWozniewski).

-Coming soon

py-llamaindex/backend/.env.example

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
APP_BASE_URL='http://localhost:8000'
API_PREFIX=/api

AUTH0_SECRET='use [openssl rand -hex 32] to generate a 32 bytes value'
AUTH0_DOMAIN=''
AUTH0_CLIENT_ID=''
AUTH0_CLIENT_SECRET=''

OPENAI_API_KEY=''
PROVIDER=openai

# Database
DATABASE_URL="postgresql+psycopg://postgres:postgres@localhost:5432/ai_documents_db"

# Auth0 FGA
FGA_STORE_ID=<your-fga-store-id>
FGA_CLIENT_ID=<your-fga-store-client-id>
FGA_CLIENT_SECRET=<your-fga-store-client-secret>
FGA_API_URL=https://api.xxx.fga.dev
FGA_API_AUDIENCE=https://api.xxx.fga.dev/

# Shop API URL (Optional)
# SHOP_API_URL="http://localhost:3001/api/shop"
# SHOP_API_AUDIENCE="https://api.shop-online-demo.com"
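These variables are typically loaded into typed settings at application startup. Below is a minimal sketch of how they could be read with `pydantic-settings`; it is illustrative only, and the repo's actual configuration module under `app/core` may look different.

```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    """Illustrative settings loader for the variables in .env.example."""

    # Env var names match field names case-insensitively by default.
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    app_base_url: str = "http://localhost:8000"
    api_prefix: str = "/api"

    auth0_secret: str
    auth0_domain: str
    auth0_client_id: str
    auth0_client_secret: str

    openai_api_key: str
    provider: str = "openai"

    database_url: str

    fga_store_id: str
    fga_client_id: str
    fga_client_secret: str
    fga_api_url: str
    fga_api_audience: str

settings = Settings()
```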

py-llamaindex/backend/.gitignore

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
__pycache__
app.egg-info
*.pyc
.mypy_cache
.coverage
htmlcov
.cache
.venv
.env
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
3.13

py-llamaindex/backend/README.md

Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
# Setup the backend

```bash
cd backend
```

You'll need to set up environment variables in your repo's `.env` file. Copy the `.env.example` file to `.env`.

To start with the basic examples, you'll just need to add:

- Your OpenAI API key and the Auth0 credentials for the Web app. You can set up a new Auth0 tenant with an Auth0 Web App and Token Vault by following the Prerequisites instructions [here](https://auth0.com/ai/docs/call-others-apis-on-users-behalf).
- An Auth0 FGA account, which you can create [here](https://dashboard.fga.dev). Add the FGA store ID, client ID, client secret, and API URL to the `.env` file.

Next, install the required packages using your preferred package manager, e.g. uv:

```bash
uv sync
```

Now you're ready to start and migrate the database:

```bash
# start the postgres database
docker compose up -d
```

Initialize FGA store:

```bash
source .venv/bin/activate
python -m app.core.fga_init
```

Now you're ready to run the development server:

```bash
source .venv/bin/activate
uv run uvicorn app.main:app --reload
```

py-llamaindex/backend/app/__init__.py

Whitespace-only changes.

py-llamaindex/backend/app/agents/__init__.py

Whitespace-only changes.
Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
from __future__ import annotations
from llama_index.core.agent import ReActAgentWorker
from llama_index.llms.openai import OpenAI

from app.agents.tools.user_info_li import get_user_info_li
from app.agents.tools.google_calendar_li import list_upcoming_events_li
from app.agents.tools.shop_online_li import shop_online_li
from app.agents.tools.context_docs_li import get_context_docs_li

# LLM used by the assistant agent.
llm = OpenAI(model="gpt-4.1-mini", temperature=0.2)

# Tools the ReAct agent is allowed to call.
tools = [
    get_user_info_li,
    list_upcoming_events_li,
    shop_online_li,
    get_context_docs_li,
]

agent = ReActAgentWorker.from_tools(
    tools=tools,
    llm=llm,
    verbose=True,
    system_prompt=(
        "You are a personal assistant named Assistant0. "
        "Use tools when helpful; prefer get_context_docs for knowledge base queries. "
        "Render email-like bodies as markdown (no code fences)."
    ),
).as_agent()
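As a hypothetical sketch, the resulting agent runner could be exposed from a FastAPI route roughly as follows; the module path, route, and request model are assumptions rather than the repo's actual `app.main` wiring.

```python
from fastapi import FastAPI
from pydantic import BaseModel

from app.agents.assistant0 import agent  # assumed module path for the agent defined above

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.post("/api/chat")  # hypothetical route; the real app may differ
async def chat(request: ChatRequest) -> dict:
    # AgentRunner.achat runs the ReAct loop asynchronously and returns an
    # AgentChatResponse; str() yields the final text answer.
    response = await agent.achat(request.message)
    return {"reply": str(response)}
```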

py-llamaindex/backend/app/agents/tools/__init__.py

Whitespace-only changes.
Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
from __future__ import annotations
from typing import List

from llama_index.core.tools import FunctionTool
from pydantic import BaseModel

from app.core.rag_li import retrieve_nodes
from app.core.fga import authorization_manager


class GetContextDocsSchema(BaseModel):
    question: str

async def get_context_docs_li_fn(question: str, user_email: str) -> str:
    # Retrieve candidate nodes from the index, then keep only content from
    # documents the user is allowed to view according to FGA.
    nodes = retrieve_nodes(question, top_k=12)

    allowed: List[str] = []
    for n in nodes:
        doc_id = n.metadata.get("document_id")
        if not doc_id:
            continue

        can_view = await authorization_manager.check_relation(
            user=user_email, doc_id=doc_id, relation="can_view"
        )
        if can_view:
            allowed.append(n.get_content(metadata_mode="none"))

    if not allowed:
        return "I couldn't find any documents you are allowed to view."
    return "\n\n".join(allowed)


get_context_docs_li = FunctionTool.from_defaults(
    name="get_context_docs",
    description="Retrieve documents from the knowledge base (LlamaIndex + FGA).",
    fn=get_context_docs_li_fn,
)
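For illustration, the tool function can also be exercised directly outside the agent loop. This sketch assumes the import path shown above, a running Postgres-backed index, and made-up question/email values.

```python
import asyncio

from app.agents.tools.context_docs_li import get_context_docs_li_fn  # assumed module path

async def main() -> None:
    # Only content from documents the user holds a "can_view" FGA relation on
    # is returned; everything else is filtered out.
    context = await get_context_docs_li_fn(
        question="What does our travel policy say about booking flights?",
        user_email="user@example.com",
    )
    print(context)

if __name__ == "__main__":
    asyncio.run(main())
```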

0 commit comments
