Thank you for your interest in contributing! This document provides guidelines for contributing to the project.
## Getting Started

- Fork the repository
- Clone your fork:

  ```bash
  git clone https://github.com/reaatech/agent-eval-harness.git
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Create a branch:

  ```bash
  git checkout -b feature/your-feature
  ```
## Development

```bash
# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

# Lint
npm run lint

# Typecheck
npm run typecheck
```

## Code Style

- Use TypeScript strict mode
- Follow existing code patterns
- Use single quotes for strings
- Include trailing commas in objects
- 2-space indentation
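Concretely, code following these conventions might look like the sketch below. The interface and function are hypothetical, shown only to illustrate single quotes, trailing commas, and 2-space indentation:

```typescript
// Hypothetical types and values, for illustration of the style rules only.
interface EvalResult {
  score: number;
  verdict: string;
}

function summarize(result: EvalResult): string {
  return `${result.verdict} (score: ${result.score})`;
}

const result: EvalResult = {
  score: 0.9,
  verdict: 'pass',
};
```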
## Testing

- Write tests for new features
- Maintain 80%+ code coverage
- Run tests before submitting a PR:

  ```bash
  npm test
  ```
## Commit Messages

Follow Conventional Commits:

- `feat:` - New features
- `fix:` - Bug fixes
- `docs:` - Documentation changes
- `refactor:` - Code refactoring
- `test:` - Test additions/changes
- `chore:` - Build/config changes

Example: `feat(judge): add consensus voting for multi-judge evaluation`
## Pull Requests

- Update documentation as needed
- Add or update tests
- Ensure all tests pass
- Update CHANGELOG.md with your changes
- Request review from maintainers
## Areas for Contribution

- **New evaluation metrics** - add new ways to evaluate agent quality
- **Provider integrations** - support additional LLM providers
- **CI integrations** - support additional CI platforms
- **Documentation** - improve guides and examples
- **Bug fixes** - fix any issues found
- **Performance** - optimize evaluation speed
## Reporting Bugs

- Use the GitHub issue template
- Include reproduction steps
- Describe expected vs. actual behavior
- Include relevant logs/output
## License

By contributing, you agree that your contributions will be licensed under the MIT License.