A high-performance, PyTorch-like tensor library for Rust with support for multiple computational backends.
The most up-to-date documentation can be found here: docs
- 🚀 Multiple Backends: CPU (Rayon), WGPU, and CUDA support
- 🔄 Automatic Backend Selection: Falls back to best available backend
- 📐 Full Broadcasting: NumPy/PyTorch-style automatic broadcasting for all arithmetic operations
- 🎯 Type Safety: memory safety guaranteed by Rust's ownership and type system
- ⚡ Zero-Copy Operations: efficient memory management that avoids unnecessary copies
- 🎛️ Feature Flags: Optional dependencies for different backends
Add this to your `Cargo.toml`:
```toml
[dependencies]
tensor_frame = "0.0.3-alpha"

# Or, with GPU support enabled:
# tensor_frame = { version = "0.0.3-alpha", features = ["wgpu"] }
```
Basic usage:
```rust
use tensor_frame::Tensor;

// Assumes tensor_frame's error type implements std::error::Error.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create tensors (the best available backend is selected automatically)
    let a = Tensor::from_vec(vec![1.0, 2.0, 3.0, 4.0], vec![2, 2])?;
    let b = Tensor::from_vec(vec![10.0, 20.0], vec![2, 1])?;

    // All arithmetic operators broadcast: +, -, *, /
    let c = (a + b.clone())?; // [2, 2] + [2, 1] -> [2, 2]; b is reused below,
                              // so clone it (clones are assumed cheap)
    let d = (c * b)?;         // element-wise multiply with broadcasting

    let sum = d.sum(None)?;   // reduce over all elements
    println!("Result: {:?}", sum.to_vec()?);
    Ok(())
}
```
CPU backend (default):
- Uses Rayon for parallel computation
- Always available
- Good for small to medium tensors
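Because the CPU backend runs on Rayon, thread count can usually be tuned through Rayon's standard controls. A minimal sketch, assuming tensor_frame uses Rayon's global thread pool (add `rayon` to your own dependencies to configure it directly):

```rust
fn main() {
    // Rayon also honors the RAYON_NUM_THREADS environment variable;
    // this configures the global pool explicitly instead.
    rayon::ThreadPoolBuilder::new()
        .num_threads(4)
        .build_global()
        .expect("the global Rayon pool can only be initialized once");

    // ... create tensors and run CPU-backend operations as usual ...
}
```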
WGPU backend:
- Cross-platform GPU compute
- Supports Metal, Vulkan, DX12, and OpenGL
- Enable with `features = ["wgpu"]`
CUDA backend:
- NVIDIA GPU acceleration
- Enable with `features = ["cuda"]`
- Requires the CUDA toolkit
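Since Cargo features are additive, both GPU backends can be compiled in at once, letting the automatic fallback described above pick the best one available at runtime. A sketch of the combined configuration, assuming only standard Cargo feature semantics:

```toml
[dependencies]
# Compile in both GPU backends; tensor_frame falls back to the best
# backend available on the host at runtime.
tensor_frame = { version = "0.0.3-alpha", features = ["wgpu", "cuda"] }
```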
- 📖 Complete Guide - Comprehensive documentation with tutorials
- 🚀 Getting Started - Quick start guide
- 📚 API Reference - Detailed API documentation
- 💡 Examples - Practical examples and tutorials
- ⚡ Performance Guide - Optimization tips and benchmarks
- 🔧 Backend Guides - CPU, WGPU, and CUDA backend details
See the examples directory for more detailed usage.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.