Hands-on workshop covering AI agent frameworks, Model Context Protocol, and AWS AgentCore.
# Section 1: Agent Frameworks
Build agents using LangChain, LangGraph, and CrewAI.
- Theory: Framework comparison, when to use each
- Labs: Build 3 different agents with increasing complexity

# Section 2: Model Context Protocol
Connect AI agents to data sources using MCP.
- Theory: MCP architecture and benefits
- Labs: Build an MCP server and integrate it with Claude

# Section 3: AWS AgentCore
Deploy production-ready agents on AWS.
- Theory: AgentCore services and features
- Labs: Deploy agents with runtime, memory, and observability
# Required
- AWS account with Bedrock access enabled
- Python 3.9+
- AWS CLI configured
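Before setup, it can save time to confirm the interpreter actually meets the version requirement; a minimal check:

```python
import sys

# The labs require Python 3.9 or newer
assert sys.version_info >= (3, 9), f"Python 3.9+ required, found {sys.version.split()[0]}"
print("Python", sys.version.split()[0], "OK")
```

Bedrock access and AWS CLI configuration are verified in the Troubleshooting section below.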
# Setup
```bash
git clone https://github.com/aitechnav/ai_agents_aws.git
cd ai_agents_aws
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
# Edit .env with your AWS credentials
```

# Framework Comparison
- LangChain: Simple chains, prototyping → Use for basic agents
- LangGraph: Complex workflows, state management → Use for multi-step processes
- CrewAI: Multi-agent teams → Use for specialized collaboration
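The first lab registers a `calculator` tool whose body is `eval(expression)`; `eval` executes arbitrary Python, so for anything beyond a demo a restricted evaluator is safer. A minimal stdlib sketch (the `safe_eval` name and operator whitelist are illustrative, not part of the workshop code):

```python
import ast
import operator

# Whitelisted operators; anything outside this table raises ValueError
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate arithmetic expressions without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval"))

print(safe_eval("157 * 23"))  # → 3611
```

Walking the AST and refusing any node type that is not a number or a whitelisted operator means `safe_eval("__import__('os')")` raises instead of executing.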
# Lab 1: LangChain
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_aws import ChatBedrock

@tool
def calculator(expression: str) -> str:
    """Evaluate mathematical expressions"""
    # NOTE: eval() executes arbitrary code; fine for a demo, unsafe for untrusted input
    return str(eval(expression))

llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

# create_tool_calling_agent needs a prompt with an agent_scratchpad placeholder
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, [calculator], prompt)
executor = AgentExecutor(agent=agent, tools=[calculator])
result = executor.invoke({"input": "What is 157 * 23?"})
```

# Lab 2: LangGraph
```python
from langgraph.graph import StateGraph, END

# State, classify_query, process_query, and route_query are defined in the lab
workflow = StateGraph(State)
workflow.add_node("classify", classify_query)
workflow.add_node("process", process_query)
workflow.set_entry_point("classify")  # required: where execution starts
workflow.add_conditional_edges("classify", route_query)
workflow.add_edge("process", END)
app = workflow.compile()
result = app.invoke({"query": "user question"})
```

# Lab 3: CrewAI
```python
from crewai import Agent, Task, Crew

# CrewAI agents require a backstory; the task definitions here are illustrative
researcher = Agent(role='Researcher', goal='Research AWS services',
                   backstory='An AWS solutions specialist')
writer = Agent(role='Writer', goal='Write documentation',
               backstory='A technical writer')

research_task = Task(description='Research AWS services for the doc',
                     expected_output='Bullet-point findings', agent=researcher)
write_task = Task(description='Turn the findings into documentation',
                  expected_output='A short markdown document', agent=writer)

crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
result = crew.kickoff()
```

# Model Context Protocol (MCP)
An open standard for connecting AI models to data sources. Think USB-C for AI.
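Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages over stdio or HTTP. A `tools/list` round trip looks roughly like this (the tool shown is illustrative):

```python
import json

# Client -> server: ask the server which tools it offers (JSON-RPC 2.0)
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server -> client: each tool advertises a name, description, and JSON Schema input
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "list_buckets",
                "description": "List S3 buckets in the account",
                "inputSchema": {"type": "object", "properties": {}},
            }
        ]
    },
}

print(json.dumps(request))
```

The SDK used in the lab below generates these messages for you; the decorators map directly onto protocol methods like `tools/list` and `tools/call`.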
# Lab 1: Build an MCP Server
```python
import boto3
from mcp.server import Server
from mcp.types import Resource, Tool

s3 = boto3.client("s3")
app = Server("aws-mcp-server")

@app.list_resources()
async def list_resources():
    return [Resource(uri="aws://s3/buckets", name="S3 Buckets")]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "list_buckets":
        return s3.list_buckets()
```

# Lab 2: Claude Desktop Integration
Configure Claude Desktop with your MCP server in `~/.config/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "aws": {
      "command": "python",
      "args": ["path/to/aws_mcp_server.py"]
    }
  }
}
```

# AWS AgentCore
- Runtime: Deploy and scale agents securely
- Memory: Persistent context across sessions
- Gateway: Transform APIs into agent tools
- Observability: Monitor with CloudWatch and X-Ray
# Lab 1: Runtime
```python
from strands import Agent
from strands.runtime import AgentCoreRuntime

# llm, search_docs, and calculate_cost are defined earlier in the lab
agent = Agent(
    name="aws-assistant",
    instructions="You are an AWS expert assistant",
    model=llm,
    tools=[search_docs, calculate_cost],
)

runtime = AgentCoreRuntime(region="us-east-1")
deployed_agent = runtime.create_agent(agent, name="prod-agent")

# Invoke
response = deployed_agent.invoke(
    input="How much does Lambda cost?",
    session_id="user-123",
)
```

# Lab 2: Memory
```python
from strands.memory import AgentCoreMemory

memory = AgentCoreMemory(memory_id="support-agent-memory")
agent = Agent(name="support-agent", model=llm, memory=memory)

# Memory persists across sessions; `message` holds the user's latest input
response = agent.run(input=message, user_id="user-123")
```

# Lab 3: Observability
```python
from strands.observability import AgentCoreObservability

observability = AgentCoreObservability(
    log_group="/aws/agentcore/agent",
    enable_xray=True,
)
runtime = AgentCoreRuntime(observability=observability)
# All invocations are now logged to CloudWatch and X-Ray
```

# Testing
```bash
# Test basic agent
python examples/basic_agent.py

# Test agent with tools
python examples/agent_with_tools.py

# Run interactive mode (choose option 2 for interactive chat)
python examples/basic_agent.py
```

# Resources
- LangChain: https://python.langchain.com/docs/
- LangGraph: https://langchain-ai.github.io/langgraph/
- CrewAI: https://docs.crewai.com/
- MCP: https://spec.modelcontextprotocol.io/
- AgentCore: https://aws.amazon.com/bedrock/agentcore/
# Estimated Cost
Roughly $3–5 total for the full workshop:
- Bedrock (Claude Sonnet): ~$2
- AgentCore Runtime: ~$1
- Other services: ~$1
# Troubleshooting
Bedrock Access Denied: Go to AWS Console → Bedrock → Model Access → Enable Claude models

Dependencies Failed:
```bash
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```

AWS Credentials:
```bash
aws configure
aws sts get-caller-identity
```

# Repository Structure
```
ai_agents_aws/
├── section-01-frameworks/
│   ├── lab-1-langchain/
│   ├── lab-2-langgraph/
│   └── lab-3-crewai/
├── section-02-mcp/
│   ├── lab-1-server/
│   └── lab-2-client/
├── section-03-agentcore/
│   ├── lab-1-runtime/
│   ├── lab-2-memory/
│   └── lab-3-observability/
├── examples/
├── requirements.txt
├── .env.example
└── README.md
```
Maintainer: Anuj Tyagi
License: MIT