A text translation application built with LangChain Expression Language (LCEL) that translates English text into other languages. It uses the Groq API with the Llama 3.1 8B Instant model for fast inference and includes a FastAPI backend with a Streamlit frontend.
```
User Input (text, language)
          |
          v
+--------------------+     +-------------------+     +------------------+
| ChatPromptTemplate |---->| ChatGroq (LLM)    |---->| StrOutputParser  |
| (system + user)    |     | llama-3.1-8b      |     | (plain text)     |
+--------------------+     +-------------------+     +------------------+
                                                              |
                                                              v
                                                      Translated Text

LCEL Chain: prompt | model | parser
```
Stack:
| Component | Technology |
|---|---|
| LLM Framework | LangChain + LCEL |
| Language Model | Groq (Llama 3.1 8B Instant) |
| Backend | FastAPI + LangServe |
| Frontend | Streamlit |
| Server | Uvicorn |
```
Simple-LLM-with-LCEL/
├── serve.py              # FastAPI backend server with LCEL chain
├── client.py             # Streamlit frontend client
├── simplellmLCEL.ipynb   # Jupyter notebook tutorial
├── requirements.txt      # Python dependencies
├── .env                  # Environment variables (not committed)
└── .gitignore
```
- Python 3.10+
- A Groq API key (free tier available)
1. Clone the repository

   ```bash
   git clone https://github.com/sothulthorn/Simple-LLM-with-LCEL.git
   cd Simple-LLM-with-LCEL
   ```

2. Create and activate a virtual environment

   ```bash
   python -m venv .venv
   # Windows
   .venv\Scripts\activate
   # macOS/Linux
   source .venv/bin/activate
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Configure environment variables

   Create a `.env` file in the project root:

   ```
   GROQ_API_KEY=your_groq_api_key_here
   ```
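The backend needs this key at startup. A minimal sketch of how it might be read, assuming the variable has already been exported or loaded by a tool such as python-dotenv (`require_groq_key` is a hypothetical helper, not part of the project's code):

```python
import os

def require_groq_key() -> str:
    """Fetch GROQ_API_KEY from the environment or fail fast with a clear error."""
    key = os.getenv("GROQ_API_KEY")
    if not key:
        raise RuntimeError("GROQ_API_KEY is not set; add it to your .env file")
    return key
```

Failing fast here gives a clearer error than letting the first Groq API call reject an empty key.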
Start the FastAPI backend:
```bash
python serve.py
```

The server starts at http://127.0.0.1:8000. API docs are available at http://127.0.0.1:8000/docs.
In a separate terminal, start the Streamlit frontend:
```bash
streamlit run client.py
```

The app opens at http://localhost:8501. Enter text to translate it to French.
Endpoint: POST /chain/invoke
Request:
```json
{
  "input": {
    "language": "French",
    "text": "Hello, how are you?"
  },
  "config": {},
  "kwargs": {}
}
```

Response:
```json
{
  "output": "Bonjour, comment allez-vous ?"
}
```

cURL example:
```bash
curl -X POST "http://127.0.0.1:8000/chain/invoke" \
  -H "Content-Type: application/json" \
  -d '{"input": {"language": "French", "text": "Hello, how are you?"}, "config": {}, "kwargs": {}}'
```

The `simplellmLCEL.ipynb` notebook walks through the core concepts step by step:
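The same request can be made from Python with the standard library alone. This is an illustrative sketch, not project code: `build_invoke_payload` and `translate` are hypothetical helpers, and the base URL assumes the default serve.py address shown above.

```python
import json
import urllib.request

def build_invoke_payload(language: str, text: str) -> dict:
    # Matches the /chain/invoke request schema shown above.
    return {"input": {"language": language, "text": text}, "config": {}, "kwargs": {}}

def translate(language: str, text: str, base_url: str = "http://127.0.0.1:8000") -> str:
    # Hypothetical helper: POST to the LangServe invoke endpoint and
    # return the "output" field from the JSON response.
    body = json.dumps(build_invoke_payload(language, text)).encode()
    req = urllib.request.Request(
        f"{base_url}/chain/invoke",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["output"]
```

LangServe also ships a `RemoteRunnable` client that wraps this endpoint, but a plain HTTP call is enough to show the wire format.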
- Initializing a language model with `ChatGroq`
- Using `SystemMessage` and `HumanMessage`
- Parsing output with `StrOutputParser`
- Chaining components with the LCEL pipe operator (`|`)
- Building reusable prompt templates with `ChatPromptTemplate`
- Composing the full chain: `prompt | model | parser`
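The mechanics of the pipe operator can be illustrated without LangChain. The sketch below uses a toy `Runnable` stand-in (not the real LangChain class) whose `__or__` feeds one step's output into the next:

```python
# Toy stand-in for LCEL composition -- illustrative only, not the LangChain API.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` yields a new Runnable that runs a, then b on a's output.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Mock prompt, model, and parser with the same shape as the real chain.
prompt = Runnable(lambda d: f"Translate the following into {d['language']}: {d['text']}")
model = Runnable(lambda p: {"content": "[mock translation of] " + p})
parser = Runnable(lambda r: r["content"])

chain = prompt | model | parser
print(chain.invoke({"language": "French", "text": "Hello"}))
```

Real LangChain `Runnable`s work the same way at this level: `|` builds a composite runnable whose `invoke` threads the input through each stage in order.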
The application uses three LangChain components chained together with LCEL:
- `ChatPromptTemplate` - Formats the system instruction (`"Translate the following into {language}:"`) and the user text into a structured prompt.
- `ChatGroq` - Sends the prompt to the Groq API running Llama 3.1 8B Instant and returns the model's response.
- `StrOutputParser` - Extracts the plain text content from the model's response object.

These are composed into a single chain using the pipe operator:

```python
chain = prompt_template | model | parser
```

The chain is then exposed as a REST API using LangServe's `add_routes`, which automatically handles serialization and deserialization and provides an interactive playground at `/chain/playground`.
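What the prompt template contributes can be shown without any dependencies. This sketch renders the same system instruction into (role, content) pairs, where the tuples stand in for LangChain message objects (`format_prompt` is a hypothetical helper, not part of the project):

```python
SYSTEM_TEMPLATE = "Translate the following into {language}:"

def format_prompt(language: str, text: str) -> list[tuple[str, str]]:
    # Stand-in for ChatPromptTemplate: fills the system instruction and
    # pairs it with the user's text as (role, content) messages.
    return [
        ("system", SYSTEM_TEMPLATE.format(language=language)),
        ("user", text),
    ]
```

The real `ChatPromptTemplate` produces `SystemMessage`/`HumanMessage` objects rather than tuples, but the data flow is the same: template in, structured chat messages out.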