An intelligent, LLM-powered test generation agent that automatically creates, executes, and fixes tests for your software projects.
- AI-Powered Test Generation: Uses advanced language models to generate comprehensive test suites
- Automatic Test Fixing: Intelligently fixes failing tests using built-in tools
- Multi-Language Support: Currently supports Python and Go projects
- Multiple LLM Providers: Works with Claude, OpenAI, DeepSeek, and Gemini
- Smart Caching: Efficient caching system to avoid redundant processing
- Project Structure Detection: Automatically detects and adapts to your project's testing patterns
- Tool Integration: Built-in tools for installing dependencies, fixing imports, and creating mocks
```bash
git clone <repository-url>
cd test-agent
pip install -e .
```

Or install directly with pip:

```bash
pip install test-agent
```
Generate tests for a Python project:

```bash
test-agent /path/to/your/project --provider claude
```

Generate tests for a Go project:

```bash
test-agent /path/to/your/go/project --language go --provider openai
```

Interactive setup (will prompt for provider and API key):

```bash
test-agent /path/to/your/project
```
```bash
test-agent <project_directory> [options]
```

- `--language, -l`: Programming language (auto-detected if not specified)
- `--provider, -p`: LLM provider (claude, openai, deepseek, gemini)
- `--model, -m`: Specific model to use (optional)
- `--test-dir, -t`: Custom test directory (optional)
- `--api-key, -k`: API key for the LLM provider
- `--exclude-dir, -e`: Directory to exclude (can be used multiple times)
- `--exclude-file, -x`: File to exclude (can be used multiple times)
- `--files, -f`: Specific files to process
- `--no-cache`: Disable caching
- `--clear-cache`: Clear cache before running
- `--clear-all`: Clear all caches and settings
- `--verbose, -v`: Enable verbose output
- `--quiet, -q`: Minimize output
- `--log-file`: Path to save log file
- `--log-level`: Logging level (debug, info, warning, error)
- `--list-languages`: List supported languages
- `--list-providers`: List supported LLM providers
- `--save-key`: Save API key for a provider
```bash
# Generate tests with specific provider
test-agent ./my-project --provider claude --verbose

# Skip certain directories
test-agent ./my-project --exclude-dir venv --exclude-dir .git

# Use custom test directory
test-agent ./my-project --test-dir ./custom-tests

# Clear cache and regenerate
test-agent ./my-project --clear-cache --provider openai

# List available providers
test-agent --list-providers

# Save API key for future use
test-agent --save-key --provider claude
```

- Python: Full support with pytest and unittest frameworks
- Go: Full support with standard Go testing
| Provider | Models | API Key Required |
|---|---|---|
| Claude | claude-3-5-sonnet, claude-3-opus, claude-3-haiku | ANTHROPIC_API_KEY |
| OpenAI | gpt-4o, gpt-4-turbo, gpt-3.5-turbo | OPENAI_API_KEY |
| DeepSeek | deepseek-chat, deepseek-coder | DEEPSEEK_API_KEY |
| Gemini | gemini-1.5-pro, gemini-1.5-flash | GOOGLE_API_KEY |
API keys can be provided in several ways:
- Environment variables:

  ```bash
  export ANTHROPIC_API_KEY="your-key"
  export OPENAI_API_KEY="your-key"
  export DEEPSEEK_API_KEY="your-key"
  export GOOGLE_API_KEY="your-key"
  ```

- Command line:

  ```bash
  test-agent ./project --api-key your-key
  ```

- Saved configuration:

  ```bash
  test-agent --save-key --provider claude
  ```
Settings are stored in `~/.test_agent/config.json`:

```json
{
  "api_keys": {
    "claude": "your-key",
    "openai": "your-key"
  },
  "last_provider": "claude"
}
```

The test agent follows a comprehensive workflow:
- Project Analysis: Detects language, analyzes project structure, and identifies testing patterns
- File Discovery: Finds source files and determines which need tests
- Test Generation: Uses LLMs to generate comprehensive test suites
- Test Execution: Runs generated tests to verify they work
- Intelligent Fixing: Automatically fixes failing tests using built-in tools
- Validation: Ensures all tests pass before completion
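The generate → execute → fix loop described by these steps can be sketched as a simple pipeline. All helper functions below are illustrative stand-ins, not the actual test_agent internals (which are orchestrated with LangGraph):

```python
def generate_test(source_file: str) -> str:
    # Stand-in for the LLM call that writes a test for source_file.
    return f"def test_{source_file.replace('.', '_')}(): assert True"

def run_test(test_code: str) -> bool:
    # Stand-in for executing the generated test and reporting pass/fail.
    return "assert True" in test_code

def fix_test(test_code: str) -> str:
    # Stand-in for the built-in fixing tools (imports, mocks, packages).
    return test_code.replace("assert False", "assert True")

def process(source_files, max_fix_attempts=3):
    """Generate a test per source file, then retry fixing until it passes."""
    stats = {"generated": 0, "passed": 0, "fixed": 0}
    for src in source_files:
        code = generate_test(src)
        stats["generated"] += 1
        for attempt in range(max_fix_attempts):
            if run_test(code):
                stats["passed"] += 1
                if attempt > 0:
                    stats["fixed"] += 1
                break
            code = fix_test(code)
    return stats
```

The key design point is the bounded retry loop: a failing test goes back through the fixing tools a limited number of times before being reported as failed.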
The agent includes several tools for fixing tests:
- Package Installation: Automatically installs missing Python packages
- Import Fixing: Analyzes and fixes import statements
- Mock Creation: Creates mocks for unavailable dependencies
- Test Execution: Runs tests and captures detailed output
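As an illustration of how such a tool can work, here is a hedged sketch of the detection step behind package installation: spotting a missing module in captured pytest output. The function name and regex are illustrative, not the actual test_agent implementation:

```python
import re
from typing import Optional

def find_missing_module(test_output: str) -> Optional[str]:
    """Return the module named in a ModuleNotFoundError, or None if absent."""
    match = re.search(
        r"ModuleNotFoundError: No module named '([^']+)'", test_output
    )
    return match.group(1) if match else None

# A real fixing tool would then install the package (or create a mock for
# it) and re-run the test; only the detection step is shown here.
```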
```
test_agent/
├── cli.py          # Command-line interface
├── main.py         # Main entry point and API
├── config.py       # Configuration management
├── language/       # Language-specific adapters
│   ├── python/     # Python language support
│   └── go/         # Go language support
├── llm/            # LLM provider integrations
│   ├── claude.py   # Claude/Anthropic integration
│   ├── openai.py   # OpenAI integration
│   ├── deepseek.py # DeepSeek integration
│   └── gemini.py   # Google Gemini integration
├── workflow/       # Workflow orchestration
│   ├── graph.py    # LangGraph workflow definition
│   ├── state.py    # Workflow state management
│   └── nodes/      # Individual workflow nodes
├── tools/          # Built-in tools for test fixing
├── memory/         # Caching and persistence
└── utils/          # Utility functions
```
You can also use the test agent programmatically:

```python
from test_agent import generate_tests

# Generate tests for a project
result = generate_tests(
    project_directory="/path/to/project",
    language="python",
    llm_provider="claude",
    api_key="your-api-key",
    verbose=True
)

print(f"Generated {result['tests_generated']} tests")
print(f"Success rate: {result['tests_passed']}/{result['tests_generated']}")
```

```python
from test_agent import TestAgent

# Create agent instance
agent = TestAgent(
    project_directory="/path/to/project",
    llm_provider="claude",
    excluded_dirs=["venv", ".git"],
    cache_enabled=True
)

# Run test generation
result = agent.run_sync()
```

- Caching: Intelligent caching system avoids re-analyzing unchanged files
- Parallel Processing: Concurrent analysis and test execution where possible
- Smart Batching: Processes files in batches to optimize LLM API usage
- Incremental Updates: Only processes changed files on subsequent runs
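The incremental behaviour above can be approximated by fingerprinting file contents and skipping files whose hash matches the cache. This is a minimal sketch with illustrative names, not the agent's actual cache format:

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Content hash used to decide whether a file changed since the last run."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def files_to_process(paths, cache):
    """Return only files whose content changed; update the cache in place."""
    changed = []
    for path in paths:
        digest = fingerprint(path)
        if cache.get(str(path)) != digest:
            changed.append(path)
            cache[str(path)] = digest
    return changed
```

Hashing content rather than comparing modification times means a `git checkout` or `touch` that leaves the bytes identical will not trigger reprocessing.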
The agent provides comprehensive output including:
- Summary Statistics: Number of tests generated, passed, failed, and fixed
- Individual Test Results: Status of each generated test file
- Error Analysis: Detailed error information for debugging
- Tool Usage: Summary of tools used during test fixing
- Performance Metrics: Execution time and caching statistics
Example output:

```
=== Test Generation Complete ===
Status: success
Source files analyzed: 25
Tests generated: 23
Tests passed: 20
Tests failed: 2
Tests fixed: 1
Time taken: 45.32 seconds

Generated test files:
✅ tests/test_api_utils.py
✅ tests/test_database.py
✅ tests/test_models.py
❌ tests/test_complex_logic.py
```
This project is licensed under the MIT License - see the LICENSE file for details.
- Python 3.8+
- Required packages are automatically installed via `requirements.txt`
- API key for at least one supported LLM provider