# LLMText

LLMText is a Python library for asynchronous interaction with large language models (LLMs). It provides text generation, streaming responses, structured data extraction, agentic workflows with tools, and prompt optimization utilities.

## Features
- **Async Text Generation**: Generate text asynchronously from messages or plain text.
- **Streaming Responses**: Stream LLM responses in real time.
- **Structured Extraction**: Extract typed data (Pydantic models) from text or messages using instructor.
- **Agentic Workflows**: Build agents that can call tools, evaluate responses, and maintain conversation state.
- **Prompt Optimization**: Automatically generate and optimize prompts from example inputs/outputs with scoring functions.
## Installation

```bash
pip install llmtext
```

Or install from source with Poetry:

```bash
poetry install
```

## Configuration

Create a `.env` file in the root directory with the following variables:

```env
OPENAI_API_KEY=your_api_key_here
OPENAI_BASE_URL=https://api.openai.com/v1  # Optional, for custom endpoints
OPENAI_MODEL=gpt-4o-mini                   # Optional, defaults to gpt-4o-mini
```
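The package reads these variables from the process environment (its `__init__.py` loads `.env` via python-dotenv on import). A quick stdlib-only sanity check of what will be picked up, mirroring the defaults documented above:

```python
import os

# Mirror the documented defaults; llmtext itself reads these variables on import.
api_key = os.environ.get("OPENAI_API_KEY")  # required; None means .env was not loaded
base_url = os.environ.get("OPENAI_BASE_URL") or "https://api.openai.com/v1"
model = os.environ.get("OPENAI_MODEL") or "gpt-4o-mini"

print(f"endpoint={base_url} model={model} key_set={api_key is not None}")
```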
## Usage

### Text Generation

```python
import asyncio

from llmtext.messages_fns import agenerate
from llmtext.types import Message


async def main():
    text = await agenerate(
        messages=[Message(role="user", content="What's the weather today?")]
    )
    print(text)


asyncio.run(main())
```
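`Message` is a TypedDict (see Project Structure below), so a message is just a typed dict with `role` and `content` keys. A simplified stand-in showing the shape without importing llmtext — the real type may carry additional fields:

```python
from typing import TypedDict


class Message(TypedDict):
    # simplified stand-in for llmtext.types.Message
    role: str      # e.g. "system", "user", "assistant"
    content: str


# a short conversation, oldest message first
conversation: list[Message] = [
    Message(role="system", content="You are a concise assistant."),
    Message(role="user", content="What's the weather today?"),
]
print(conversation[-1]["content"])  # -> What's the weather today?
```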
### Streaming Responses

```python
import asyncio

from llmtext.messages_fns import astream_generate
from llmtext.types import Message


async def main():
    stream = astream_generate(
        messages=[Message(role="user", content="Tell me a story")]
    )
    async for chunk in stream:
        print(chunk, end="", flush=True)


asyncio.run(main())
```
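`astream_generate` yields string chunks through an async iterator, so consumption is ordinary `async for`. A self-contained sketch with a stubbed stream standing in for the real LLM call:

```python
import asyncio
from typing import AsyncIterator


async def fake_stream() -> AsyncIterator[str]:
    # stand-in for astream_generate: yields text chunks as they "arrive"
    for chunk in ["Once", " upon", " a", " time"]:
        await asyncio.sleep(0)  # yield control, as a network read would
        yield chunk


async def main() -> str:
    parts: list[str] = []
    async for chunk in fake_stream():
        parts.append(chunk)  # accumulate instead of printing
    return "".join(parts)


print(asyncio.run(main()))  # -> Once upon a time
```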
### Structured Extraction

```python
import asyncio

from pydantic import BaseModel

from llmtext.messages_fns import astructured_extraction
from llmtext.types import Message


class WeatherResponse(BaseModel):
    city: str
    temperature: str
    condition: str


async def main():
    result = await astructured_extraction(
        messages=[Message(role="user", content="What's the weather in Tokyo?")],
        output_class=WeatherResponse,
    )
    print(result.city, result.temperature, result.condition)


asyncio.run(main())
```
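Under the hood, instructor coerces the model's JSON output into the given Pydantic class, so the result is a validated instance. A sketch of the validation step only (not llmtext's actual internals — the example data here is made up):

```python
from pydantic import BaseModel, ValidationError


class WeatherResponse(BaseModel):
    city: str
    temperature: str
    condition: str


# a well-formed model response validates into a typed object
result = WeatherResponse(city="Tokyo", temperature="22C", condition="sunny")
print(result.city)  # -> Tokyo

# malformed output raises instead of silently passing through
try:
    WeatherResponse(city="Tokyo")
except ValidationError as e:
    print("missing fields:", len(e.errors()))  # -> missing fields: 2
```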
### Agentic Workflows

```python
import asyncio
from typing import Annotated

from pydantic import Field

from llmtext.agent import Agent
from llmtext.types import Message, RunnableTool


class SearchInternetTool(RunnableTool):
    """Tool to search the internet"""

    query: Annotated[str, Field(description="search query")]

    async def _arun(self) -> str:
        return f"No results found for: {self.query}"


async def main():
    agent = Agent(
        messages=[Message(role="user", content="What's the weather today?")],
        tools=[SearchInternetTool],
        max_steps=3,
    )
    async for event in agent.astream_events():
        print(f"Event: {event['type']}, Step: {event['step']}")


asyncio.run(main())
```
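Conceptually, the agent alternates between model turns and tool turns for at most `max_steps` iterations: the model either answers or requests a tool, the tool's output is appended to the conversation state, and the loop repeats. A toy, LLM-free dispatch loop illustrating that control flow (all names below are made up, not llmtext's internals):

```python
import asyncio


async def search_internet(query: str) -> str:
    # toy tool; a real RunnableTool would do its I/O in _arun
    return f"No results found for: {query}"


TOOLS = {"search_internet": search_internet}

# pretend tool requests a model might emit on successive steps
SCRIPTED_CALLS = [("search_internet", "weather today")]


async def run_agent(max_steps: int = 3) -> list[str]:
    conversation = ["user: What's the weather today?"]
    for step, (tool_name, arg) in enumerate(SCRIPTED_CALLS[:max_steps]):
        result = await TOOLS[tool_name](arg)            # dispatch to the requested tool
        conversation.append(f"tool[{step}]: {result}")  # feed the result back into state
    conversation.append("assistant: I couldn't find live weather data.")
    return conversation


history = asyncio.run(run_agent())
print(len(history))  # -> 3
```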
### Prompt Optimization

```python
import asyncio

from llmtext.prompt_optimizer import agenerate_prompt_and_optimize


async def scoring_fn(inputs, outputs, reference) -> float:
    correct = sum(1 for o, r in zip(outputs, reference) if o == r)
    return correct / len(inputs)


async def main():
    best_prompt = await agenerate_prompt_and_optimize(
        example_inputs=["Hello in Korean", "Goodbye in Korean"],
        example_outputs=["안녕하세요", "안녕히 가세요"],
        scoring_fn=scoring_fn,
        parallel_count=5,
    )
    print(f"Optimized prompt: {best_prompt}")


asyncio.run(main())
```
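The scoring function's contract is simple: it receives the inputs, the candidate prompt's outputs, and the reference outputs, and returns a float (here exact-match accuracy, higher is better). Checking the arithmetic on a toy case:

```python
import asyncio


async def scoring_fn(inputs, outputs, reference) -> float:
    # exact-match accuracy, as in the example above
    correct = sum(1 for o, r in zip(outputs, reference) if o == r)
    return correct / len(inputs)


score = asyncio.run(
    scoring_fn(
        inputs=["Hello in Korean", "Goodbye in Korean"],
        outputs=["안녕하세요", "잘 가"],          # one of two matches the reference
        reference=["안녕하세요", "안녕히 가세요"],
    )
)
print(score)  # -> 0.5
```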
## Testing

```bash
pytest -v -rP -x
```

Or use the Poetry script:

```bash
poetry run test
```

## Project Structure

```
llmtext/
├── __init__.py           # Package init, loads .env
├── llm/                  # Core LLM class (OpenAI + instructor)
│   └── __init__.py
├── agent/                # Agent class for tool-augmented workflows
│   └── __init__.py
├── types/                # TypedDict types (Message, Event, ToolCall, etc.)
│   └── __init__.py
├── messages_fns/         # Async functions for message-based LLM calls
│   └── __init__.py
├── texts_fns/            # Async functions for text-based LLM calls
│   └── __init__.py
├── utils_fns/            # Utilities (message conversion, tool selector)
│   └── __init__.py
└── prompt_optimizer/     # Prompt generation and optimization
    └── __init__.py
tests/
├── test_llm.py               # Tests for LLM class
└── test_prompt_optimizer.py  # Tests for prompt optimization
scripts/
├── start.py    # Start script
├── lint.py     # Linting (ruff + black)
├── test.py     # Test runner
└── publish.py  # Poetry publish
```
## Dependencies

- python: ^3.10
- python-dotenv: Environment variable loading
- instructor: Structured output extraction from LLMs
- black: Code formatting
- pytest: Testing framework
- ruff: Linting
- pytest-asyncio: Async test support
## Contributing

1. Fork the repository
2. Create a new branch for your feature or bug fix
3. Commit your changes
4. Push your branch to your fork
5. Submit a pull request
## License

MIT License. See LICENSE for details.
