A Python SDK for conducting deep research using Google's Gemini models with built-in search grounding. No external search APIs required.
## Installation

```bash
pip install deep-researcher-sdk
```

Or with uv:

```bash
uv add deep-researcher-sdk
```

## Quick Start

```python
from deep_research import research

result = research("What are the top trends in B2B SaaS marketing in 2025?")
print(result.report)
```

Set your Gemini API key as an environment variable:

```bash
export GEMINI_API_KEY="your-api-key"
```

Get a free API key from Google AI Studio.
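If you want to fail fast when the key is missing, a minimal check you can run before calling the SDK (illustrative helper, not part of the SDK):

```python
import os

def require_gemini_key() -> str:
    """Return GEMINI_API_KEY, raising a clear error if it is unset."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set; get a key from Google AI Studio "
            'and run: export GEMINI_API_KEY="your-api-key"'
        )
    return key
```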
## Usage

### Basic Research

```python
from deep_research import research

result = research("Your research query here")

# Access the results
print(result.plan)       # Research plan
print(result.learnings)  # List of learnings from searches
print(result.report)     # Final synthesized report
```

### Saving Results to Disk

```python
from deep_research import research

result = research(
    "Your research query here",
    output_dir="./research_output",
)
# Saves: plan.md, learning_1.md, ..., learnings.md, report.md
```

### Choosing Models

```python
from deep_research import research

result = research(
    "Your research query here",
    thinking_model="gemini-2.5-pro",  # For planning and synthesis
    task_model="gemini-2.5-flash",    # For search tasks
)
```

### Step-by-Step Control

```python
from deep_research import DeepResearcher

researcher = DeepResearcher(
    thinking_model="gemini-2.5-pro",
    task_model="gemini-2.5-flash",
    api_key="your-api-key",  # Optional, defaults to GEMINI_API_KEY env var
)

# Run full research
result = researcher.research("Your query", output_dir="./output")

# Or run individual steps
plan = researcher.write_plan("Your query")
queries = researcher.generate_search_queries(plan)
learnings = [researcher.search_and_learn(q.query, q.research_goal) for q in queries]
report = researcher.write_report(plan, learnings)
```

## How It Works

The SDK orchestrates a multi-step research flow:
1. **Plan Generation** - Uses the thinking model to create a structured research plan
2. **Query Generation** - Generates 3-5 targeted search queries based on the plan
3. **Search & Learn** - Executes each query using Gemini's built-in search grounding, extracting key learnings
4. **Report Synthesis** - Combines all learnings into a comprehensive final report
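The four stages reduce to a simple data pipeline in which each stage's output feeds the next. An illustrative stub with plain functions (not the SDK's internals):

```python
from typing import Callable

def run_flow(
    query: str,
    write_plan: Callable[[str], str],
    generate_queries: Callable[[str], list[str]],
    search_and_learn: Callable[[str], str],
    write_report: Callable[[str, list[str]], str],
) -> str:
    """Sketch of the plan -> queries -> learnings -> report pipeline."""
    plan = write_plan(query)                            # 1. Plan Generation
    queries = generate_queries(plan)                    # 2. Query Generation
    learnings = [search_and_learn(q) for q in queries]  # 3. Search & Learn
    return write_report(plan, learnings)                # 4. Report Synthesis
```

In the real SDK the first, second, and fourth stages run on the thinking model and the third on the task model.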
The SDK uses two models optimized for different tasks:

- **Thinking Model** (`gemini-2.5-pro`): Used for planning, query generation, and final report synthesis. Optimized for reasoning and complex synthesis.
- **Task Model** (`gemini-2.5-flash`): Used for search tasks with grounding enabled. Optimized for speed and web search integration.
## Output Files

When using `output_dir`, the SDK saves:

```
output_dir/
├── plan.md          # Research plan
├── learning_1.md    # First search result
├── learning_2.md    # Second search result
├── ...
├── learnings.md     # All learnings combined
└── report.md        # Final synthesized report
```
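Since the numbered files follow the `learning_<n>.md` pattern shown above, you can read them back in numeric order with a small helper (illustrative, not part of the SDK):

```python
from pathlib import Path

def load_learnings(output_dir: str) -> list[str]:
    """Read learning_1.md, learning_2.md, ... back in numeric order."""
    files = sorted(
        Path(output_dir).glob("learning_*.md"),
        key=lambda p: int(p.stem.split("_")[1]),  # sort by <n>, not lexically
    )
    return [f.read_text() for f in files]
```

Numeric sorting matters once a run produces ten or more learnings, where lexicographic order would put `learning_10.md` before `learning_2.md`.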
## Using with Letta

Add the SDK to your Letta image:

```dockerfile
FROM letta/letta:latest
RUN /app/.venv/bin/python3 -m pip install deep-researcher-sdk
```

Make sure `GEMINI_API_KEY` is set in your Letta environment.
Define the research tool:

```python
def deep_research(query: str) -> str:
    """
    Conduct deep research on a topic and return a comprehensive report.

    Use this tool when you need to research a topic thoroughly before
    making recommendations or answering complex questions. The query
    should be specific and well-defined.

    Args:
        query (str): The research topic or question to investigate

    Returns:
        str: A comprehensive markdown report with findings and sources
    """
    from deep_research import research

    result = research(query)
    return result.report
```

Then register it with your agent:

```python
from letta_client import Letta

client = Letta(base_url="http://localhost:8283")

# Create tool from file
tool = client.tools.create_from_file(filepath="deep_research_tool.py")

# Attach to agent
client.tools.attach_to_agent(agent_id="your-agent-id", tool_id=tool.id)
```

## Requirements

- Python 3.12+
- Google Gemini API key
## License

MIT