The easiest way to get started with [LlamaIndex](https://www.llamaindex.ai/) is by using `create-llama`. This CLI tool enables you to quickly start building a new LlamaIndex application, with everything set up for you.
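Assuming a recent Node.js toolchain is installed, the CLI can be run directly with `npx` (as shown in the questionnaire transcript below):

```shell
# Scaffold a new LlamaIndex application interactively;
# npx downloads the latest create-llama package on demand
npx create-llama@latest
```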
Run the development server to start the app. You can then visit [http://localhost:3000](http://localhost:3000).
## What you'll get
- A Next.js-powered front-end using components from [shadcn/ui](https://ui.shadcn.com/). The app is set up as a chat interface that can answer questions about your data or interact with your agent
- Your choice of 3 back-ends:
  - **Next.js**: if you select this option, you’ll have a full-stack Next.js application that you can deploy to a host like [Vercel](https://vercel.com/) in just a few clicks. This uses [LlamaIndex.TS](https://www.npmjs.com/package/llamaindex), our TypeScript library.
  - **Express**: if you want a more traditional Node.js application, you can generate an Express backend. This also uses LlamaIndex.TS.
  - **Python FastAPI**: if you select this option, you’ll get a backend powered by the [llama-index Python package](https://pypi.org/project/llama-index/), which you can deploy to a service like Render or fly.io.
- The back-end has two endpoints (one streaming, one non-streaming) that allow you to send the state of your chat and receive additional responses.
- You can add arbitrary data sources to your chat, like local files, websites, or data retrieved from a database.
- Turn your chat into an AI agent by adding tools (functions called by the LLM).
- The app uses OpenAI by default, so you'll need an OpenAI API key, or you can customize it to use any of the dozens of LLMs we support.
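As a minimal sketch of talking to the generated back-end from your own code: the route (`/api/chat`) and the payload shape below are assumptions based on the chat interface described above, so check your generated app for the actual endpoint and message format.

```typescript
// Shape of a single chat message (assumed; verify against your generated app).
type Message = { role: "user" | "assistant"; content: string };

interface ChatRequest {
  method: string;
  headers: Record<string, string>;
  body: string;
}

// Build a POST request carrying the full chat state (all prior messages).
function buildChatRequest(messages: Message[]): ChatRequest {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  };
}

// Send one user question to the (assumed) non-streaming chat endpoint.
async function ask(question: string) {
  const history: Message[] = [{ role: "user", content: question }];
  const res = await fetch("http://localhost:3000/api/chat", buildChatRequest(history));
  return res.json();
}
```

Because the back-end receives the whole message history on every call, the client stays stateless and you can replay or branch conversations freely.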
You can supply your own data; the app will index it and answer questions. Your generated app will have a folder called `data` (if you're using Express or Python and generate a frontend, it will be `./backend/data`).
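For example, to supply your own data you can copy files into that folder before indexing (the project name and file path here are illustrative):

```shell
# Place your own documents in the generated app's data folder
cp ~/documents/annual-report.pdf my-app/data/
```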
```
Need to install the following packages:
create-llama@latest
Ok to proceed? (y) y
✔ What is your project named? … my-app
✔ Which template would you like to use? › Agentic RAG (single agent)
✔ Which framework would you like to use? › NextJS
✔ Would you like to set up observability? › No
✔ Please provide your OpenAI API key (leave blank to skip): …
✔ Which data source would you like to use? › Use an example PDF
✔ Would you like to add another data source? › No
✔ Would you like to use LlamaParse (improved parser for RAG - requires API key)? … no / yes
✔ Would you like to use a vector database? › No, just store the data in the file system
✔ Would you like to build an agent using tools? If so, select the tools here, otherwise just press enter › Weather
? How would you like to proceed? › - Use arrow-keys. Return to submit.
```
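Once the questionnaire finishes, the usual next step for the Next.js template looks like the following sketch (the exact commands depend on the choices you made, so consult the generated app's own README):

```shell
cd my-app     # the project name entered above
npm run dev   # start the development server on http://localhost:3000
```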