Data Validate is a robust multilingual spreadsheet validator and processor developed specifically to automate integrity and structure validation of data files for the AdaptaBrasil climate adaptation platform. It is especially useful for projects requiring standardization and rigorous validation of tabular data, such as scientific research, environmental databases, and indicator systems.
- Features
- Architecture
- Installation
- Usage
- Implemented Validations
- Project Structure
- Testing
- Development
- Documentation
- Contributing
- License
Data Validate implements the detailed specification defined in the validation protocol version 1.13, which establishes clear rules for the structure and content of spreadsheets used in the AdaptaBrasil platform.
- Structural Validation: Verifies spreadsheet structure, column names, and organization
- Content Validation: Applies specific business rules for each spreadsheet type
- Spell Checking: Multilingual spell correction system with custom dictionaries
- Hierarchical Validation: Validates indicator relationships and tree structures
- Detailed Reports: Generates detailed HTML, PDF, and validation logs
- Multilingual Support: Internationalization support in Portuguese and English
- Logging System: Detailed logging for auditing and debugging
- Python 3.12+: Main language
- Pandas: Data manipulation and analysis
- PyEnchant: Spell checking
- Calamine: Excel file reading
- Babel: Internationalization
- PDFKit: PDF report generation
- Poetry: Dependency management
The project follows a modular architecture based on clean design patterns:
```
data_validate/
├── controllers/   # Orchestration and flow control
├── models/        # Data models for spreadsheets
├── validators/    # Validation logic
├── helpers/       # Utilities and helper functions
├── config/        # Global configurations
├── middleware/    # Initialization layer
└── static/        # Static resources (templates, dictionaries, i18n)
```
- Initialization: Bootstrap configures environment and dependencies
- Loading: Reading and preprocessing spreadsheets
- Validation: Sequential execution of specialized validators
- Aggregation: Collection and organization of errors and warnings
- Reporting: Generation of detailed output reports
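These stages could be sketched, in much-simplified form, as a sequential pipeline. All names below are illustrative, not the project's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class ValidationReport:
    """Accumulates errors and warnings across validators (illustrative)."""
    errors: list = field(default_factory=list)
    warnings: list = field(default_factory=list)


def run_pipeline(sheets: dict, validators: list) -> ValidationReport:
    """Sequentially apply each validator and aggregate its findings."""
    report = ValidationReport()
    for validate in validators:           # Validation: sequential execution
        errors, warnings = validate(sheets)
        report.errors.extend(errors)      # Aggregation: collect errors
        report.warnings.extend(warnings)  # ... and warnings
    return report


# Example validator: flag missing required sheets
def check_required_sheets(sheets):
    required = {"descricao", "valores"}
    missing = required - sheets.keys()
    return [f"Missing sheet: {m}" for m in sorted(missing)], []


report = run_pipeline({"descricao": None}, [check_required_sheets])
print(report.errors)  # ['Missing sheet: valores']
```

The real implementation wraps these steps in controllers and a middleware bootstrap; this sketch only shows the overall flow.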
- Python 3.12 or higher
- Poetry for dependency management
- wkhtmltopdf (for PDF generation)
Ensure python-dev and wkhtmltopdf are installed:

```bash
# Install dependencies
sudo apt install python3-dev wkhtmltopdf
```

On Windows, download the wkhtmltopdf installer from the official website: https://wkhtmltopdf.org/downloads.html

Or using Chocolatey:

```bash
choco install -y wkhtmltopdf
```

```bash
# 1.0 Create and activate a virtual environment
python -m venv .venv

# Activate the virtual environment
source .venv/bin/activate   # On Linux/MacOS
.venv\Scripts\activate      # On Windows

# Install from PyPI
pip install canoa-data-validate

# Run the validator
canoa-data-validate --input_folder data/input --output_folder data/output --locale pt_BR --debug
```

```bash
# 1.1 Clone the repository
git clone https://github.com/AdaptaBrasil/data_validate.git
cd data_validate

# 1.2 Create and activate a virtual environment (optional but recommended)
python -m venv .venv

# 1.3 Activate the virtual environment
source .venv/bin/activate   # On Linux/MacOS
.venv\Scripts\activate      # On Windows

# 2. Install Poetry (if needed)
pip install poetry

# 3. Install dependencies
poetry install

# 4. Activate the virtual environment
eval $(poetry env activate)
```

```bash
python -m data_validate.main \
  --input_folder data/input \
  --output_folder data/output \
  --locale pt_BR \
  --debug

# More concise command using abbreviations
python -m data_validate.main --i data/input --o data/output --l pt_BR --d

# Full pipeline execution
bash scripts/run_main_pipeline.sh

# With debug active and detailed logs
python -m data_validate.main --input_folder data/input --debug

# Without time or version in the report
python -m data_validate.main \
  --input_folder data/input \
  --output_folder data/output \
  --no-time \
  --no-version

# For quick executions, skipping spell check and title length warnings
python -m data_validate.main \
  --input_folder data/input \
  --no-spellchecker \
  --no-warning-titles-length
```

| Parameter | Abbreviation | Type | Description | Default | Required |
|---|---|---|---|---|---|
| `--input_folder` | `--i` | str | Path to input folder with spreadsheets | - | ✅ |
| `--output_folder` | `--o` | str | Path to output folder for reports | `output_data/` | ❌ |
| `--locale` | `-l` | str | Interface language (pt_BR or en_US) | `pt_BR` | ❌ |
| Parameter | Abbreviation | Type | Description | Default |
|---|---|---|---|---|
| `--debug` | `--d` | flag | Activates debug mode with detailed logs | False |
| `--no-time` | | flag | Hides execution time information | False |
| `--no-version` | | flag | Hides script version in final report | False |
| `--no-spellchecker` | | flag | Disables spell checking | False |
| `--no-warning-titles-length` | | flag | Disables title length warnings | False |
| Parameter | Type | Description | Default |
|---|---|---|---|
| `--sector` | str | Strategic sector name for report | None |
| `--protocol` | str | Protocol name for report | None |
| `--user` | str | User name for report | None |
| `--file` | str | Specific file name to analyze | None |
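The tables above could be mirrored with a minimal `argparse` setup. This is an illustrative sketch of the documented interface, not the project's actual parser (only a subset of flags is shown):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Mirrors the CLI documented above (sketch only, not the real implementation)
    p = argparse.ArgumentParser(prog="data_validate")
    p.add_argument("--input_folder", "--i", required=True,
                   help="Path to input folder with spreadsheets")
    p.add_argument("--output_folder", "--o", default="output_data/",
                   help="Path to output folder for reports")
    p.add_argument("--locale", "-l", default="pt_BR",
                   choices=["pt_BR", "en_US"], help="Interface language")
    p.add_argument("--debug", "--d", action="store_true",
                   help="Activate debug mode with detailed logs")
    p.add_argument("--no-spellchecker", action="store_true",
                   help="Disable spell checking")
    return p


args = build_parser().parse_args(
    ["--input_folder", "data/input", "--locale", "en_US", "--debug"]
)
print(args.locale, args.debug)  # en_US True
```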
Place your Excel spreadsheets (.xlsx) in the input folder. The system processes:
- descricao.xlsx: Indicator descriptions and metadata
- valores.xlsx: Indicator values
- cenarios.xlsx: Analysis scenarios
- referencia_temporal.xlsx: Temporal references
- composicao.xlsx: Hierarchical compositions
- proporcionalidades.xlsx: Proportions and relationships
- legenda.xlsx: Legends and categories
- dicionario.xlsx: Dictionaries and vocabularies
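As a sketch of how the loader might read and sanity-check one of these files, the snippet below stands in an in-memory DataFrame for `descricao.xlsx`; the expected column names here are hypothetical:

```python
import pandas as pd

EXPECTED = ["codigo", "nome", "desc_simples"]  # hypothetical column set


def check_columns(df: pd.DataFrame, expected: list) -> list:
    """Return structural errors for columns missing from the sheet."""
    missing = [c for c in expected if c not in df.columns]
    return [f"Missing column: {c}" for c in missing]


# In the real pipeline the sheet would be read with the Calamine engine
# (pandas >= 2.2 with python-calamine installed), e.g.:
#   df = pd.read_excel("data/input/descricao.xlsx", engine="calamine")
# Here we use an in-memory frame for illustration.
df = pd.DataFrame({"codigo": [1, 2], "nome": ["A", "B"]})
print(check_columns(df, EXPECTED))  # ['Missing column: desc_simples']
```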
The system generates:
- HTML Reports: Interactive visualization of results
- PDF Reports: Report generation in PDF format
- Detailed Logs: Execution and error logs
- ✅ Verification of required file existence
- ✅ Validation of column names and order
- ✅ Checking expected data types
- ✅ Sequential codes: Verification of numeric sequence (1, 2, 3...)
- ✅ Unique values: Detection of duplicates in key fields
- ✅ Relationships: Referential integrity validation between spreadsheets
- ✅ Hierarchical levels: Verification of tree structures
- ✅ Scenarios and temporality: Validation of valid combinations
- ✅ Capitalization: Text standardization maintaining acronyms
- ✅ Punctuation: Verification of specific punctuation rules
- ✅ Special characters: Detection of CR/LF and invalid characters
- ✅ Text length: Validation of character limits
- ✅ HTML: Detection of non-permitted HTML tags
- ✅ Multiple languages: Support for pt_BR and en_US
- ✅ Custom dictionaries: Technical and domain-specific terms
- ✅ Correction suggestions: Automatic recommendations
- ✅ Numeric values: Type and range verification
- ✅ Decimal places: Numeric precision validation
- ✅ Required data: Verification of non-empty fields
- ✅ Valid combinations: Validation of data relationships
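Two of the checks above, sequential codes and duplicate detection, can be sketched with pandas. The column name `codigo` is illustrative, and these are simplified stand-ins for the project's validators:

```python
import pandas as pd


def check_sequential_codes(codes: pd.Series) -> list:
    """Verify codes form the sequence 1, 2, 3, ... with no gaps."""
    expected = list(range(1, len(codes) + 1))
    if sorted(codes.tolist()) != expected:
        return ["Codes are not a 1..n sequence"]
    return []


def check_unique(series: pd.Series, field: str) -> list:
    """Report duplicated values in a key field."""
    dups = series[series.duplicated()].unique()
    return [f"Duplicate {field}: {v}" for v in dups]


df = pd.DataFrame({"codigo": [1, 2, 2, 4], "nome": ["A", "B", "B", "C"]})
print(check_sequential_codes(df["codigo"]))  # code 3 is missing
print(check_unique(df["codigo"], "codigo"))  # code 2 appears twice
```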
```
data_validate/
├── assets/                  # Badges and visual resources
├── data/                    # Input and output data
│   ├── input/               # Spreadsheets for validation
│   └── output/              # Generated reports and logs
├── data_validate/           # Main source code
│   ├── config/              # Global configurations
│   ├── controllers/         # Orchestration and control
│   │   ├── context/         # Data contexts
│   │   └── report/          # Report generation
│   ├── helpers/             # Utilities and helper functions
│   │   ├── base/            # Base classes
│   │   ├── common/          # Common functions
│   │   └── tools/           # Specialized tools
│   ├── middleware/          # Initialization and bootstrap
│   ├── models/              # Spreadsheet data models
│   ├── static/              # Static resources
│   │   ├── dictionaries/    # Spell check dictionaries
│   │   ├── locales/         # Translation files
│   │   └── report/          # Report templates
│   └── validators/          # Specialized validators
│       ├── spell/           # Spell checking
│       ├── spreadsheets/    # Spreadsheet validation
│       └── structure/       # Structural validation
├── docs/                    # Generated documentation
├── tests/                   # Unit tests
├── scripts/                 # Automation scripts
├── pyproject.toml           # Configuration files
├── Makefile
└── TESTING.md
```
The project uses pytest for unit testing with complete coverage.
```bash
# Run all tests with coverage
make test

# Fast tests (stops on first error)
make test-fast

# Tests with short output
make test-short

# Clean test artifacts
make test-clean

# See all available commands
make help
```

- Current coverage: 45%
- Minimum threshold: 4%
- Modules with 100% coverage: text and number formatting
```bash
# Test specific modules
pytest tests/unit/helpers/common/formatting/ -v
pytest tests/unit/helpers/base/ -v
```

```bash
# Install development dependencies
poetry install

# Configure pre-commit hooks
pre-commit install

# Format code with black
make black

# Lint with ruff
make ruff

# Run all linting
make lint
```

| Command | Description |
|---|---|
| `make test` | Run all tests with coverage |
| `make test-fast` | Fast tests (stops on first error) |
| `make test-short` | Tests with short output |
| `make test-clean` | Remove test artifacts |
| `make badges` | Generate coverage and test badges |
| `make clean` | Remove temporary files |
| `make black` | Format code with Black |
| `make ruff` | Lint code with Ruff |
| `make lint` | Run all linting tools |
| `make docs` | Generate documentation |
| `make help` | Show all commands |
```
tests/
└── unit/
    └── helpers/
        ├── base/            # Base utilities tests
        ├── common/          # Common utilities tests
        │   ├── formatting/  # Formatting tests
        │   ├── generation/  # Generation tests
        │   ├── processing/  # Processing tests
        │   └── validation/  # Validation tests
        └── tools/           # Tools tests
```
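A unit test in this tree might look like the following sketch. The helper under test is hypothetical, defined inline here so the example is self-contained; real tests import from `data_validate.helpers`:

```python
# tests/unit/helpers/common/formatting/test_number_formatting.py (illustrative)

def format_decimal(value: float, places: int = 2) -> str:
    """Hypothetical stand-in for a formatting helper in data_validate.helpers."""
    return f"{value:.{places}f}"


def test_format_decimal_rounds_to_requested_places():
    assert format_decimal(3.14159, 2) == "3.14"
    assert format_decimal(2.0, 3) == "2.000"
```

Run with `pytest tests/unit/helpers/common/formatting/ -v` as shown earlier.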
```bash
# Generate documentation with pdoc
make docs
```

- HOW_IT_WORKS.md: Detailed system architecture
- TESTING.md: Complete testing and coverage guide
- CODE_OF_CONDUCT.md: Development guidelines
- CHANGELOG.md: Version history
- pandas (>=2.2.3): Data manipulation
- chardet (>=5.2.0): Encoding detection
- calamine (>=0.5.3): Excel file reading
- pyenchant (>=3.2.2): Spell checking
- pdfkit (>=1.0.0): PDF generation
- babel (>=2.17.0): Internationalization
- pytest (^8.4.1): Testing framework
- pytest-cov (^6.2.1): Code coverage
- pytest-mock (^3.15.0): Mocking support
- ruff (^0.12.11): Fast linting
- black (^25.1.0): Code formatting
- pre-commit (^4.3.0): Pre-commit hooks
```bash
# Minimal validation (only the input folder is required)
python -m data_validate.main --input_folder data/input

# Validation with specific folders and debug
python -m data_validate.main \
  --input_folder /path/to/spreadsheets \
  --output_folder /path/to/reports \
  --debug
```

```bash
# Interface in Portuguese (default)
python -m data_validate.main --input_folder data/input --locale pt_BR

# Interface in English
python -m data_validate.main --input_folder data/input --locale en_US
```

```bash
# Full execution with all arguments
python -m data_validate.main \
  --input_folder data/input \
  --output_folder data/output \
  --locale pt_BR \
  --debug \
  --sector "Biodiversidade" \
  --protocol "Protocolo v2.1" \
  --user "Pesquisador"
```

```bash
# Fast execution without spell checking and length warnings
python -m data_validate.main \
  --input_folder data/input \
  --no-spellchecker \
  --no-warning-titles-length \
  --no-time \
  --no-version

# More concise command using abbreviations
python -m data_validate.main --i data/input --o data/output --l pt_BR --d

# Execute the full pipeline with logs
bash scripts/run_main_pipeline.sh
```

| Spreadsheet | Description | Main Validations |
|---|---|---|
| sp_description | Indicator descriptions | Sequential codes, hierarchical levels, formatting |
| sp_value | Indicator values | Referential integrity, numeric types, decimal places |
| sp_scenario | Analysis scenarios | Unique values, punctuation, relationships |
| sp_temporal_reference | Temporal references | Temporal sequence, unique symbols |
| sp_composition | Hierarchical compositions | Tree structure, parent-child relationships |
| sp_proportionality | Proportions | Mathematical validation, consistency |
| sp_legend | Legends and categories | Categorical consistency, valid values |
| sp_dictionary | Dictionaries | Vocabulary integrity |
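The referential-integrity check between `sp_description` and `sp_value` (every value row must reference an existing indicator code) could look like this sketch; the column names are assumptions:

```python
import pandas as pd

# Stand-ins for the loaded sp_description and sp_value sheets
description = pd.DataFrame({"codigo": [1, 2, 3]})
values = pd.DataFrame({"codigo": [1, 2, 9], "valor": [0.1, 0.5, 0.7]})

# Codes referenced in the values sheet but absent from the descriptions sheet
orphans = values.loc[~values["codigo"].isin(description["codigo"]), "codigo"]
errors = [f"Value references unknown indicator code: {c}" for c in orphans]
print(errors)  # code 9 has no matching description row
```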
- Efficient processing: Optimized use of pandas for large datasets
- Parallel validation: Simultaneous execution of independent validations
- Smart caching: Reuse of loaded data
- Structured logging: Optimized logging system for performance
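The "smart caching" point could be as simple as memoizing the loader so each spreadsheet is parsed only once per run. A minimal stdlib sketch (the real implementation may differ):

```python
from functools import lru_cache

load_count = 0  # track how many times parsing actually happens


@lru_cache(maxsize=None)
def load_sheet(path: str):
    """Parse a spreadsheet once; repeated calls reuse the cached result."""
    global load_count
    load_count += 1
    return f"parsed:{path}"  # stands in for a DataFrame


load_sheet("data/input/valores.xlsx")
load_sheet("data/input/valores.xlsx")  # cache hit, no re-parse
print(load_count)  # 1
```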
- Test Coverage: Automatically generated with genbadge
- Test Status: Updated with each execution
- Version: Synchronized with pyproject.toml
- Minimum code coverage: 4%
- Automated tests with pytest
- Linting with ruff
- Automatic formatting with black
- Fork the repository
- Clone your fork locally
- Create a branch for your feature (`git checkout -b feature/new-feature`)
- Implement your changes with tests
- Run the tests (`make test`)
- Commit following the guidelines
- Push to your branch (`git push origin feature/new-feature`)
- Open a Pull Request
- Follow PEP 8 standard
- Maintain test coverage >= 50%
- Use type hints
- Document public functions
- Run `make black` before committing
- Complete code refactoring to improve modularity
- Creation of detailed documentation for each module and function in PDOC style
- Full deployment flow via PyPI to facilitate installation and use
- Improvements in the CI/CD Pipeline to include integration tests
- Implementation of integration tests to validate the complete system flow
- Optimization of the logging system for better performance and readability
- Addition of more specific validations for each type of spreadsheet
- Creation of a detailed contribution guide for new collaborators
This project is licensed under the MIT License - see the LICENSE file for details.
- Pedro Andrade - Coordinator - MAIL and GitHub
- MΓ‘rio de AraΓΊjo Carvalho - Contributor and Developer - GitHub
- Mauro Assis - Contributor - GitHub
- Miguel Gastelumendi - Contributor - GitHub
- Homepage: AdaptaBrasil GitHub
- Documentation: Docs
- Issues: Bug Tracker
- Changelog: Version History
```bash
pip uninstall canoa-data-validate
```

```bash
# Error: "argument --input_folder is required"
# Solution: always specify the input folder
python -m data_validate.main --input_folder data/input
```

```bash
# For faster execution, disable slow validations
python -m data_validate.main \
  --input_folder data/input \
  --no-spellchecker \
  --no-warning-titles-length

# To reduce console output
python -m data_validate.main \
  --input_folder data/input \
  --no-time \
  --no-version
```

The system automatically detects file encoding with chardet; for problematic files, verify they are saved in UTF-8.

```bash
# Install complete dependencies
poetry install

# For pdfkit issues on Linux
sudo apt-get install wkhtmltopdf

# For pyenchant issues
sudo apt-get install libenchant-2-2
```

Developed with ❤️ by the AdaptaBrasil team