Agentify is built around three main concepts:

```
┌─────────────┐
│    Agent    │ ◄── Executes tasks using tools and memory
└─────────────┘
      │
      ├── Memory Service ◄── Manages conversation history
      │
      └── Tools ◄── Extends agent capabilities
```
The agent is the fundamental unit of work in Agentify. An agent:
- Receives user input
- Processes it using an LLM
- Can call tools to perform actions
- Maintains conversation history via memory
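The loop an agent runs can be sketched in plain Python. This is a toy illustration with a stubbed LLM, not Agentify's internals:

```python
def run_agent(user_input, llm, tools, history):
    """Toy agent loop: record input, call the LLM, dispatch tool calls."""
    history.append({"role": "user", "content": user_input})
    while True:
        reply = llm(history)  # stubbed LLM returns a dict
        if "tool" in reply:
            # The model asked for a tool: execute it and feed the result back
            result = tools[reply["tool"]](**reply["args"])
            history.append({"role": "tool", "content": str(result)})
        else:
            history.append({"role": "assistant", "content": reply["text"]})
            return reply["text"]

# Stub LLM: first requests the `add` tool, then produces a final answer
replies = iter([{"tool": "add", "args": {"a": 2, "b": 3}}, {"text": "2 + 3 = 5"}])
history = []
answer = run_agent("What is 2 + 3?", lambda h: next(replies),
                   {"add": lambda a, b: a + b}, history)
```

In the real library, construction and memory handling are done for you, as shown next.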
```python
from agentify import BaseAgent, AgentConfig

agent = BaseAgent(
    config=AgentConfig(
        name="MyAgent",
        system_prompt="You are a helpful assistant",
        provider="provider",
        model_name="model_name"
    ),
    memory=memory_service,
    memory_address=addr
)
```

AgentConfig is the configuration class for agents:
| Parameter | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Agent identifier | Required |
| `system_prompt` | `str` | System instructions | Required |
| `provider` | `str` | LLM provider | Required |
| `model_name` | `str` | Model to use | Required |
| `temperature` | `float` | Creativity (0-1) | 0.7 |
| `max_tool_iter` | `int \| None` | Max tool calls | 10 |
| `stream` | `bool` | Enable streaming | False |
| `timeout` | `int` | Request timeout (s) | 60 |
| `reasoning_effort` | `str \| None` | For reasoning models | None |
```
MemoryService
├── ConversationStore (Backend)
│   ├── InMemoryStore
│   ├── RedisStore
│   ├── SQLiteStore
│   └── ElasticsearchStore
│
└── MemoryPolicy (Rules)
    ├── Message limit
    ├── TTL
    └── Token budget
```
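To make the backend/policy split concrete, a toy backend that keys messages by a (user, conversation, agent) address might look like this. It is hypothetical, not the library's code:

```python
class ToyStore:
    """Minimal conversation backend: message lists keyed by an address tuple."""
    def __init__(self):
        self._data = {}

    def append(self, addr, message):
        self._data.setdefault(addr, []).append(message)

    def get(self, addr):
        return self._data.get(addr, [])

    def delete_conversation(self, addr):
        self._data.pop(addr, None)

addr = ("user_123", "chat_456", "agent_007")
store = ToyStore()
store.append(addr, {"role": "user", "content": "hi"})
```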
A `MemoryAddress` identifies where a conversation is stored:

```python
from agentify.memory import MemoryAddress

addr = MemoryAddress(
    user_id="user_123",
    conversation_id="chat_456",
    agent_id="agent_007"
)
```

`InMemoryStore` - for development/testing:
```python
from agentify.memory.stores import InMemoryStore

store = InMemoryStore()
```

`RedisStore` - for production:
```python
from agentify.memory.stores import RedisStore

store = RedisStore(url="redis://localhost:6379/0")

# Delete a conversation
store.delete_conversation(addr)
```

`SQLiteStore` - for zero-dependency persistence:
```python
from agentify.memory.stores import SQLiteStore

# Persistent (single file)
store = SQLiteStore(db_path="agentify.db")

# In-memory (transient), the default
store = SQLiteStore(db_path=":memory:")
```

`ElasticsearchStore` - for advanced search and durability:
```python
from agentify.memory.stores import ElasticsearchStore

store = ElasticsearchStore(url="http://localhost:9200", index_name="agentify-memory")

# Delete a conversation
store.delete_conversation(addr)
```

A `MemoryPolicy` controls memory behavior:
```python
from agentify.memory import MemoryPolicy

policy = MemoryPolicy(
    store=store,
    ttl_seconds=3600,       # Expire after 1 hour
    max_user_msgs=10,       # Keep last 10 user messages
    max_assistant_msgs=10,  # Keep last 10 assistant messages
)
```

Tools extend agent capabilities. Tool arguments are validated against their JSON Schema before execution.
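Schema validation before execution can be sketched as a minimal check of required keys and basic JSON types. This is a simplification; the library's actual validation may be stricter:

```python
def validate_args(schema, args):
    """Check required keys and basic JSON types before running a tool."""
    type_map = {"string": str, "number": (int, float), "integer": int,
                "boolean": bool, "array": list, "object": dict}
    params = schema["parameters"]
    for key in params.get("required", []):
        if key not in args:
            raise ValueError(f"missing required argument: {key}")
    for key, value in args.items():
        expected = params["properties"][key]["type"]
        if not isinstance(value, type_map[expected]):
            raise TypeError(f"{key} should be {expected}")

schema = {"name": "calculate",
          "parameters": {"type": "object",
                         "properties": {"expression": {"type": "string"}},
                         "required": ["expression"]}}
validate_args(schema, {"expression": "2 + 2"})  # passes silently
```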
```python
from agentify.extensions.tools import (
    TimeTool,        # Get current date/time
    CalculatorTool,  # Math calculations
    WeatherTool,     # Weather info
    TodoTool,        # Task planning
    ListDirTool,     # List files
    ReadFileTool,    # Read files (supports max_bytes)
    WriteFileTool,   # Write files
)

agent = BaseAgent(
    config=config,
    memory=memory,
    tools=[
        TimeTool(),
        CalculatorTool(),
        TodoTool()
    ]
)
```

Agentify offers two ways to create tools: the `@tool` decorator (recommended) or subclassing `Tool`.
The `@tool` decorator expects a Google-style docstring and automatically generates the JSON Schema from your function signature:

```python
from agentify import tool

@tool
def get_current_time() -> dict:
    """Returns the current date and time in ISO 8601 format."""
    import datetime
    return {"current_time": datetime.datetime.now().isoformat()}
```

With parameters:
```python
@tool
def calculate(expression: str) -> dict:
    """Evaluates a mathematical expression.

    Args:
        expression: The math expression to evaluate (e.g., '2 + 2').
    """
    import ast
    # ... calculation logic ...
    return {"result": result}
```

Note: The `Returns:` section is purely for documentation and does NOT affect the generated JSON Schema.
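For illustration, the schema generated for `calculate` would plausibly look like the following. The exact structure is an assumption about the decorator's output, derived from the signature and docstring:

```python
# Hypothetical JSON Schema derived from calculate's signature and docstring
calculate_schema = {
    "name": "calculate",
    "description": "Evaluates a mathematical expression.",
    "parameters": {
        "type": "object",
        "properties": {
            "expression": {
                "type": "string",
                "description": "The math expression to evaluate (e.g., '2 + 2')."
            }
        },
        "required": ["expression"]
    }
}
```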
Using decorator tools:

```python
from agentify import BaseAgent, AgentConfig, tool

@tool
def my_tool(param: str) -> dict:
    """My custom tool."""
    return {"result": param}

agent = BaseAgent(
    config=config,
    memory=memory,
    tools=[my_tool]  # Use directly, no need to instantiate
)
```

For complex tools with state or initialization logic, subclass `Tool`:
```python
from agentify import Tool

class CustomTool(Tool):
    def __init__(self):
        schema = {
            "name": "my_custom_tool",
            "description": "What this tool does",
            "parameters": {
                "type": "object",
                "properties": {
                    "param1": {
                        "type": "string",
                        "description": "First parameter"
                    }
                },
                "required": ["param1"]
            }
        }
        super().__init__(schema, self._execute)

    def _execute(self, param1: str) -> dict:
        # Your logic here
        return {"result": f"Processed: {param1}"}

# Use it
agent = BaseAgent(
    config=config,
    memory=memory,
    tools=[CustomTool()]  # Instantiate for class-based tools
)
```

Agentify supports MCP (Model Context Protocol), an open standard for connecting AI agents to external tools and data sources.
| Transport | Use Case | Factory Method | Arguments |
|---|---|---|---|
| StdIO | Local servers (scripts, CLIs) | `MCPConnection.stdio(...)` | `command`, `args`, `env` |
| SSE | Remote HTTP servers | `MCPConnection.sse(...)` | `url`, `headers`, `timeout`, `sse_read_timeout` |
```python
from agentify.mcp import MCPConnection

async with MCPConnection.stdio(command="uvx", args=["mcp-server-fetch"]) as mcp:
    tools = await mcp.get_tools()
    agent = BaseAgent(config=config, memory=memory, tools=tools, ...)
    await agent.arun("Fetch https://example.com")
```

```python
from agentify.mcp import MCPConnection

async with MCPConnection.sse(url="http://localhost:8080/sse", headers={"Authorization": "Bearer token"}) as mcp:
    tools = await mcp.get_tools()
    agent = BaseAgent(config=config, memory=memory, tools=tools, ...)
    await agent.arun("Use the remote tools")
```

Note: The `mcp` package must be installed: `pip install mcp`
Monitor agent behavior with callbacks:

```python
from agentify.core.callbacks import AgentCallbackHandler

class MyCallback(AgentCallbackHandler):
    def on_agent_start(self, agent_id: str, input_text: str):
        print(f"Agent {agent_id} starting with: {input_text}")

    def on_tool_start(self, tool_name: str, arguments: dict):
        print(f"Calling tool {tool_name}: {arguments}")

    def on_agent_finish(self, agent_id: str, output: str):
        print(f"Agent {agent_id} finished: {output}")

agent = BaseAgent(
    config=AgentConfig(
        name="CallbackAgent",
        callbacks=[MyCallback()]
    ),
    memory=memory
)
```

Security note: The default `LoggingCallbackHandler` (enabled when `verbose=True`) automatically redacts sensitive keys like `password`, `api_key`, or `token` from tool arguments in the logs.
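A redaction pass like the one described can be sketched as follows. This is a simplified stand-in, not the library's implementation:

```python
SENSITIVE_KEYS = {"password", "api_key", "token"}

def redact(arguments):
    """Replace values of sensitive keys before logging tool arguments."""
    return {k: "***REDACTED***" if k.lower() in SENSITIVE_KEYS else v
            for k, v in arguments.items()}

safe = redact({"url": "https://example.com", "api_key": "sk-secret"})
```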
Agentify supports multiple LLM providers:

```python
# OpenAI
config = AgentConfig(
    provider="openai",
    model_name="gpt-4.1-mini"
)
```

```python
# DeepSeek
config = AgentConfig(
    provider="deepseek",
    model_name="deepseek-chat"
)
```

```python
# Gemini
config = AgentConfig(
    provider="gemini",
    model_name="gemini-2.5-flash"
)
```

```python
# Azure
config = AgentConfig(
    provider="azure",
    model_name="model",
    client_config_override={
        "api_version": "2024-02-15-preview"
    }
)
```

```python
# Local (OpenAI-compatible endpoint)
config = AgentConfig(
    provider="local",
    model_name="google/gemma-4-e4b",  # Example
    # Optional override if not using env vars
    client_config_override={
        "base_url": "http://localhost:1234/v1"
    }
)
```