# LLM Agent

## Overview
LLM Agent is a modular, extensible framework for building autonomous AI agents powered by Large Language Models (LLMs) such as OpenAI GPT, Anthropic Claude, and local open-weight models. It provides a clean architecture for integrating tools, APIs, memory, and decision loops to create production-grade intelligent agents.
## Features
- Multi-LLM support (OpenAI, Anthropic, Hugging Face, Ollama)
- Modular agent architecture
- Tool integration (search, calculator, file I/O, API calls)
- Memory and context management
- Chain-of-thought reasoning
- Configurable prompts and workflows
- Extendable plugin system
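The modular architecture and plugin system described above can be sketched as a minimal tool-dispatch loop. All names below (`MiniAgent`, `register_tool`, `dispatch`) are illustrative assumptions, not the framework's actual API:

```python
# Minimal sketch of a modular agent: tools are registered by name,
# and a model-chosen action is routed to the matching tool.
from typing import Callable, Dict

class MiniAgent:
    def __init__(self) -> None:
        # Registry mapping tool names to callables (the "plugin system").
        self.tools: Dict[str, Callable[[str], str]] = {}

    def register_tool(self, name: str, fn: Callable[[str], str]) -> None:
        """Plug in a tool under a given name."""
        self.tools[name] = fn

    def dispatch(self, action: str, argument: str) -> str:
        """Route an action proposed by the LLM to the matching tool."""
        if action not in self.tools:
            raise KeyError(f"unknown tool: {action}")
        return self.tools[action](argument)

agent = MiniAgent()
agent.register_tool("search", lambda query: f"results for: {query}")
```

In the real framework, the loop would feed each tool result back to the LLM until it produces a final answer; this sketch only shows the registration and routing step.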
## Project Structure

```text
llm-agent/
├── src/
│   ├── agent_core/           # Core agent logic and architecture
│   ├── tools/                # External tool integrations (API, search, etc.)
│   ├── memory/               # Long-term and short-term memory management
│   ├── workflows/            # Custom agent workflows and actions
│   └── main.py               # Entry point for the agent
├── configs/
│   └── agent_config.yaml     # Model and system settings
├── requirements.txt          # Python dependencies
├── README.md                 # Project documentation
└── LICENSE                   # License file
```
## Installation

**1. Clone the repository**

```bash
git clone https://github.com/Shank312/llm-agent.git
cd llm-agent
```

**2. Create a virtual environment**

```bash
python -m venv venv
venv\Scripts\activate       # On Windows
source venv/bin/activate    # On macOS/Linux
```

**3. Install dependencies**

```bash
pip install -r requirements.txt
```
## Usage

To start the agent:

```bash
python src/main.py
```
You can modify the configuration (model, tools, memory) in `configs/agent_config.yaml`.
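A configuration for this setup might look like the sketch below. The field names and layout are assumptions for illustration, not the framework's confirmed schema; adjust them to match the actual `agent_config.yaml`:

```yaml
# Illustrative agent_config.yaml (hypothetical schema)
model:
  provider: openai          # openai | anthropic | huggingface | ollama
  name: gpt-4
  temperature: 0.2

memory:
  enabled: true
  backend: local_vector_db

tools:
  - search
  - calculator
  - file_reader
```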
## Example Workflow

Example: Question Answering Agent

```python
from src.agent_core import LLMAgent

agent = LLMAgent(model="gpt-4", memory=True)
response = agent.run("Summarize the main ideas from 'Designing Data-Intensive Applications'")
print(response)
```
## Tool Integrations

| Tool | Description |
|---|---|
| Search | Uses the DuckDuckGo or Bing API for live web context |
| Calculator | Handles math operations via safe evaluation |
| FileReader | Reads and processes text or JSON files |
| Memory | Stores conversation context in a local vector DB |
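The Calculator tool's "safe evaluation" could be implemented along these lines using Python's `ast` module. This is a sketch of one common approach, not the repository's actual code:

```python
import ast
import operator

# Whitelisted binary operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}

def safe_eval(expression: str) -> float:
    """Evaluate basic arithmetic without calling eval()."""
    def _walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return _walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_walk(node.left), _walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_walk(node.operand)
        raise ValueError(f"unsupported expression: {expression!r}")
    return _walk(ast.parse(expression, mode="eval"))
```

Walking the parsed AST and allowing only whitelisted node types is what makes this safer than `eval()`: function calls, attribute access, and imports are rejected outright.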
## Dependencies

Common dependencies (add these to your `requirements.txt`):

```text
openai
langchain
huggingface_hub
python-dotenv
tiktoken
requests
pydantic
```
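Most of these providers expect an API key in the environment. `python-dotenv` (listed above) can populate `os.environ` from a local `.env` file via `load_dotenv()`; the helper below (a hypothetical name, not part of this repo) then reads the key:

```python
import os

def get_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment.

    Call python-dotenv's load_dotenv() beforehand if the key
    lives in a .env file rather than the shell environment.
    """
    key = os.getenv(var)
    if key is None:
        raise RuntimeError(f"{var} is not set; add it to your .env file")
    return key
```

Failing fast with a clear message here is friendlier than letting a provider SDK raise an opaque authentication error later in the agent loop.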
## Roadmap

- Add LangGraph / CrewAI support
- Implement tool routing with LangChain Agents
- Integrate local LLMs (Llama 3 / Mistral)
- Web UI dashboard for agent logs
- Add Docker support
## Contributing
Contributions are welcome!
1. Fork the repo
2. Create your feature branch (`git checkout -b feature-name`)
3. Commit your changes (`git commit -m 'Add new feature'`)
4. Push to the branch (`git push origin feature-name`)
5. Open a Pull Request
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Author

Shankar Kumar

Building next-gen AI systems | Open Source Contributor | Machine Learning Engineer