Home
TritonParse is a comprehensive visualization and analysis tool for Triton IR files, designed to help developers analyze, debug, and understand Triton kernel compilation processes.
- Installation - Complete setup instructions
- Quick Start Tutorial - Your first TritonParse experience
- System Requirements - Prerequisites and compatibility
- Usage Guide - Generate traces and analyze kernels
- Web Interface Guide - Master the visualization interface
- File Diff View - Compare kernels across different traces
- Reproducer - Generate standalone kernel scripts
- File Formats - Understanding input/output formats
- Troubleshooting - Common issues and solutions
- Architecture Overview - System design and components
- Contributing - Development setup and guidelines
- Code Formatting - Formatting standards and tools
- Testing - Test structure and running tests
- IR Code View - Side-by-side IR viewing with mapping
- Source Mapping - IR stage mapping explained
- Environment Variables - Configuration options
- Performance Optimization - Tips for large traces
- FAQ - Frequently asked questions
- Basic Examples - Simple usage scenarios
- Advanced Examples - Complex use cases
- Interactive Kernel Explorer - Browse kernel information and stack traces
- Multi-format IR Support - View TTGIR, TTIR, LLIR, PTX, and AMDGCN
- IR Code View - Side-by-side IR viewing with synchronized highlighting and line mapping
- Interactive Code Views - Click-to-highlight corresponding lines across IR stages
- Launch Diff Analysis - Compare kernel launch events
- File Diff View - Compare kernels across different trace files side-by-side
- Compilation Tracing - Capture detailed Triton compilation events
- Launch Tracing - Capture detailed kernel launch events
- Stack Trace Integration - Full Python stack traces for debugging
- Metadata Extraction - Comprehensive kernel metadata and statistics
- NDJSON Output - Structured logging format for easy processing
- Standalone Scripts - Generate self-contained Python scripts to reproduce kernels
- Tensor Reconstruction - Rebuild tensors from statistical data or saved blobs
- Template System - Customize reproducer output with flexible templates
- Minimal Dependencies - Scripts run independently for debugging and testing
- GitHub Pages - Ready-to-use online interface
- Local Development - Full development environment
- Standalone HTML - Self-contained deployments
```bash
# Clone the repository
git clone https://github.com/meta-pytorch/tritonparse.git
cd tritonparse

# Install dependencies
pip install -e .
```
```python
import tritonparse.structured_logging
import tritonparse.utils

# Initialize structured logging and enable launch tracing
tritonparse.structured_logging.init("./logs/", enable_trace_launch=True)

# Your Triton/PyTorch code here
...

# Parse the raw logs into trace files
tritonparse.utils.unified_parse(source="./logs/", out="./parsed_output")
```
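The placeholder in the snippet above can be any workload that compiles Triton kernels. The snippet below is a minimal sketch, assuming a CUDA-capable GPU and a PyTorch build whose `torch.compile` backend (Inductor) lowers the function to Triton:

```python
import torch

def add_relu(a, b):
    return torch.relu(a + b)

# Compiling and running this on GPU tensors triggers Triton kernel
# compilation, which the structured logging initialized above records.
compiled = torch.compile(add_relu)
x = torch.randn(1024, 1024, device="cuda")
y = torch.randn(1024, 1024, device="cuda")
compiled(x, y)
```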
Visit https://meta-pytorch.org/tritonparse/ and load your trace files!
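If you prefer to inspect a trace programmatically, each line of the parsed NDJSON output is an independent JSON object, so the standard library is enough to skim it. The snippet below is a minimal sketch; the file name and the `event_type` key are illustrative assumptions, so check your parsed output for the actual names and schema.

```python
import json

# Sketch: count events per type in a parsed NDJSON trace.
# The file name and the "event_type" key are assumptions; check the
# files in ./parsed_output for the actual names and schema.
counts = {}
with open("./parsed_output/trace.ndjson") as f:
    for line in f:
        event = json.loads(line)
        kind = event.get("event_type", "unknown")
        counts[kind] = counts.get(kind, 0) + 1

print(counts)
```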
```bash
# Generate standalone reproducer script
tritonparse reproduce ./parsed_output/trace.ndjson --line 1 --out-dir repro_output
```
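Because the generated scripts are self-contained with minimal dependencies, the resulting file in repro_output can be run directly with Python to re-trigger the kernel for debugging; the exact file name depends on the traced kernel.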
- Live Tool: https://meta-pytorch.org/tritonparse/
- GitHub Repository: https://github.com/meta-pytorch/tritonparse
- Issues: GitHub Issues
- Discussions: GitHub Discussions
We welcome contributions! Please see our Contributing Guide for details on:
- Development setup and prerequisites
- Code formatting standards (Code Formatting Guide)
- Pull request and code review process
- Issue reporting guidelines
This project is licensed under the BSD-3-Clause License. See the LICENSE file for details.
Note: This tool is designed for developers working with Triton kernels and GPU computing. Basic familiarity with GPU programming concepts (CUDA for NVIDIA or ROCm/HIP for AMD) and with the Triton language is recommended for effective use.