A high-performance binary differential patching library for Node.js, built with Rust and NAPI-RS. It provides an optimized bsdiff/bspatch implementation with performance features such as zstd compression and memory-mapped file I/O.
- Extreme Performance: Rust implementation with memory mapping and parallel processing
- Memory Safety: Rust guarantees memory safety and thread safety
- Binary Patches: Efficient generation and application of binary file patches
- Smart Compression: Built-in zstd compression with optimal compression ratio configuration
- Dual API: Provides both synchronous and asynchronous APIs for different use cases
- Integrity Verification: Built-in patch file integrity verification mechanism
- Detailed Analysis: Compression ratio analysis and file size statistics
- Smart Checking: Automatic file existence and access permission verification
- Cross-platform: Supports Windows, macOS, Linux platforms
- Modern Bindings: High-performance Node.js bindings based on napi-rs
pnpm install @bsdiff-rust/node

const bsdiff = require('@bsdiff-rust/node')
// Synchronous API - suitable for simple scenarios
bsdiff.diffSync('old-file.zip', 'new-file.zip', 'patch.bin')
bsdiff.patchSync('old-file.zip', 'generated-file.zip', 'patch.bin')
// Asynchronous API - suitable for large files and production environments
await bsdiff.diff('old-file.zip', 'new-file.zip', 'patch.bin')
await bsdiff.patch('old-file.zip', 'generated-file.zip', 'patch.bin')

import {
diff,
diffSync,
patch,
patchSync,
verifyPatch,
verifyPatchSync,
getPatchInfoSync,
getFileSizeSync,
checkFileAccessSync,
getCompressionRatioSync,
type PatchInfoJs,
type CompressionRatioJs,
} from '@bsdiff-rust/node'
// Generate and apply patches
await diff('old-file.zip', 'new-file.zip', 'patch.bin')
await patch('old-file.zip', 'generated-file.zip', 'patch.bin')

diffSync(oldFile: string, newFile: string, patchFile: string): void

Generate a patch file from the differences between two files.

patchSync(oldFile: string, newFile: string, patchFile: string): void

Apply a patch to an old file to generate a new file.

diff(oldFile: string, newFile: string, patchFile: string): Promise<void>

Asynchronously generate a patch file; suitable for large files.

patch(oldFile: string, newFile: string, patchFile: string): Promise<void>

Asynchronously apply a patch; suitable for large files.

verifyPatchSync(oldFile: string, newFile: string, patchFile: string): boolean

verifyPatch(oldFile: string, newFile: string, patchFile: string): Promise<boolean>

Verify the integrity and correctness of a patch file.
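Conceptually, one way to verify a patch is to apply it to a temporary output and compare the result's content hash against the expected new file. The helper below is a library-independent sketch of that hash-comparison step; `sameContent` is a hypothetical name, not part of this package's API, and the library's internal verification mechanism may differ.

```typescript
import { createHash } from 'node:crypto'
import { readFileSync } from 'node:fs'

// Hypothetical helper: returns true when two files have identical contents,
// judged by comparing SHA-256 digests of their bytes.
function sameContent(fileA: string, fileB: string): boolean {
  const digest = (p: string) =>
    createHash('sha256').update(readFileSync(p)).digest('hex')
  return digest(fileA) === digest(fileB)
}
```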
getPatchInfoSync(patchFile: string): PatchInfoJs

Get detailed information about a patch file.

getCompressionRatioSync(oldFile: string, newFile: string, patchFile: string): CompressionRatioJs

Calculate and analyze compression-ratio information.

getFileSizeSync(filePath: string): number

Get the file size in bytes.

checkFileAccessSync(filePath: string): void

Check that a file exists and is readable; throws an exception if either condition is not met.
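As a plain-Node illustration of what these two utilities guard (existence, read permission, and byte size), the sketch below uses `fs` equivalents. The function names are hypothetical stand-ins; the library's actual error types and messages may differ.

```typescript
import { accessSync, constants, statSync } from 'node:fs'

// Hypothetical stand-in for checkFileAccessSync, built on Node's fs module.
// Throws if the file does not exist or is not readable.
function checkFileAccess(filePath: string): void {
  accessSync(filePath, constants.F_OK | constants.R_OK)
}

// Hypothetical stand-in for getFileSizeSync: size of the file in bytes.
function getFileSize(filePath: string): number {
  return statSync(filePath).size
}
```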
interface PatchInfoJs {
size: number // Patch file size in bytes
compressed: boolean // Whether compression is used (always true)
}
interface CompressionRatioJs {
oldSize: number // Old file size in bytes
newSize: number // New file size in bytes
patchSize: number // Patch file size in bytes
ratio: number // Compression ratio (percentage)
}

- Memory Mapping (mmap): Zero-copy file reading, significantly improving large file processing performance
- zstd Compression: Uses high-performance zstd algorithm, balancing compression ratio and speed
- Smart Temporary Directory: Automatically selects the fastest temporary storage location (RAM disk priority)
- Parallel Processing: Utilizes Rust's rayon library for parallel computation
- Buffer Optimization: 64KB buffer optimization for I/O performance
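To make the 64 KB buffering concrete, here is a plain-Node sketch of a chunked file copy using that buffer size. This is an illustration only: the library's actual I/O path is implemented in Rust, and `copyWith64KBuffer` is a hypothetical name.

```typescript
import { closeSync, openSync, readSync, writeSync } from 'node:fs'

// Illustrative only: copy a file in 64 KB chunks, mirroring the buffer
// size noted above.
function copyWith64KBuffer(src: string, dest: string): void {
  const BUF_SIZE = 64 * 1024
  const buf = Buffer.alloc(BUF_SIZE)
  const inFd = openSync(src, 'r')
  const outFd = openSync(dest, 'w')
  try {
    let bytesRead: number
    // Read sequentially from the current position until EOF.
    while ((bytesRead = readSync(inFd, buf, 0, BUF_SIZE, null)) > 0) {
      writeSync(outFd, buf, 0, bytesRead)
    }
  } finally {
    closeSync(inFd)
    closeSync(outFd)
  }
}
```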
# Run complete test suite
pnpm test
# Run performance benchmarks
pnpm bench

- Functional Testing: Synchronous/asynchronous API integrity testing
- Error Handling: File not found, permission errors, and other exception scenarios
- Performance Testing: Using real large files (React version files)
- API Compatibility: Verification of all exported function availability
- Data Integrity: File consistency verification after patch application
- Utility Methods: File size, access permissions, compression ratio calculations
- zstd Compression: High-performance compression algorithm balancing speed and compression ratio
- Memory Mapping (mmap): Zero-copy file reading, significantly improving large file processing performance
- Rust Implementation: Memory safety and high-performance guarantees
Compared to traditional implementations, this library shows significant improvements across all metrics:
- Diff Performance: 32.7% improvement
- Patch Performance: 93.0% improvement
- Memory Usage: 75.0% reduction
- Multi-file Size Testing: Different scale files from 1KB to 10MB
- Change Ratio Testing: Different change degrees from 1% to 50%
- Real Scenario Testing: Using actual project files
- Utility Method Performance: Verification, information retrieval, and other auxiliary functions
- Cross-platform Performance: Performance across different operating systems
- Node.js: >= 16 (Latest LTS recommended)
- Rust: >= 1.70
- Package Manager: npm or pnpm
# Install dependencies
pnpm install
# Build release version
pnpm build
# Build debug version
pnpm build:debug
# Build for specific platform
pnpm build:arm64

# Code formatting
pnpm format
# Code linting
pnpm lint
# Run tests
pnpm test
# Performance testing
pnpm bench

bsdiff-rust/
├── src/
│   ├── lib.rs              # NAPI binding entry
│   └── bsdiff_rust.rs      # Core Rust implementation
├── benchmark/
│   └── benchmark.ts        # TypeScript benchmarks
├── test/
│   ├── index.ts            # Functional tests
│   └── resources/          # Test resource files
├── index.js                # Node.js entry point
├── index.d.ts              # TypeScript type definitions
├── Cargo.toml              # Rust project configuration
└── package.json            # Node.js project configuration
- macOS: ARM64 (Apple Silicon) and x64 (Intel)
- Linux: ARM64 and x64 (GNU and musl)
- Windows: ARM64 and x64 (MSVC)
This project uses napi-rs's multi-package strategy, automatically downloading precompiled binaries for the corresponding platform during installation:
npm/
├── @bsdiff-rust/darwin-arm64/      # macOS ARM64
├── @bsdiff-rust/darwin-x64/        # macOS x64
├── @bsdiff-rust/linux-arm64-gnu/   # Linux ARM64 glibc
├── @bsdiff-rust/linux-x64-gnu/     # Linux x64 glibc
├── @bsdiff-rust/linux-arm64-musl/  # Linux ARM64 musl
├── @bsdiff-rust/linux-x64-musl/    # Linux x64 musl
└── ...
Advantages:
- Fast Installation: No compilation needed; precompiled binaries are downloaded directly
- On-demand Download: Only the files needed for the current platform are downloaded
- Stable and Reliable: Avoids installation failures caused by compilation-environment issues
| Feature | bsdiff-rust (napi-rs) | Traditional (node-gyp) |
|---|---|---|
| Installation Speed | Installs in seconds | Requires compilation |
| Environment Dependencies | No build toolchain needed | Requires Python + C++ toolchain |
| Performance | Optimized Rust | Standard C/C++ |
| Memory Safety | Guaranteed by Rust | Manual memory management |
| Maintainability | Modern Rust code | Traditional C code |
If you're migrating from other bsdiff libraries:
// Old API calls
const bsdiff = require('bsdiff-node')
bsdiff.diff(oldFile, newFile, patchFile, callback)
// New API calls
const bsdiff = require('@bsdiff-rust/node')
// Synchronous approach
bsdiff.diffSync(oldFile, newFile, patchFile)
// Or asynchronous approach
await bsdiff.diff(oldFile, newFile, patchFile)

Uses the memmap2 library for zero-copy file reading:
let old_mmap = unsafe { MmapOptions::new().map(&old_file_handle)? };
let new_mmap = unsafe { MmapOptions::new().map(&new_file_handle)? };

Automatically selects the fastest temporary storage:
- Linux: Prioritizes `/dev/shm` (memory-backed tmpfs)
- macOS: Detects RAM disks
- General: Falls back to the system temporary directory
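The selection order above can be sketched in plain-Node terms. This is an illustration only: the real logic lives in the Rust layer, and `pickTempDir` is a hypothetical name.

```typescript
import { existsSync } from 'node:fs'
import { tmpdir } from 'node:os'

// Illustrative selection order: RAM-backed storage first, then the
// system temporary directory.
function pickTempDir(): string {
  if (process.platform === 'linux' && existsSync('/dev/shm')) {
    return '/dev/shm' // tmpfs: memory-backed, fastest for scratch files
  }
  return tmpdir()
}
```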
Uses tuned zstd compression parameters:
compression_level: 3, // Balances speed and compression ratio
buffer_size: 64 * 1024, // 64KB buffer

Want smaller patch files and better performance? Check out our detailed optimization guide:
Patch Size Optimization Guide
- Use Normalized TAR Workflow - Can reduce patch size by 30-60%
- Fixed Build Parameters - Ensure reproducible builds
- Choose Appropriate Compression Level - Balance speed and size
- Preprocess Files - Remove irrelevant data
Based on real test data, compared to traditional implementations:
- Diff Performance: 32.7% improvement
- Patch Performance: 93.0% improvement
- Memory Usage: 75.0% reduction
- Patch Size: Can be reduced by 30-60% through optimization
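For reference, the `ratio` field of `CompressionRatioJs` can be recomputed by hand from the three sizes. The formula below (patch size expressed as a percentage of the new file size) is an assumption for illustration, and `computeRatio` is a hypothetical helper, not part of the package.

```typescript
// Hypothetical helper mirroring the CompressionRatioJs shape.
interface CompressionRatio {
  oldSize: number
  newSize: number
  patchSize: number
  ratio: number
}

// Assumed definition: the patch's size as a percentage of the new file's size.
function computeRatio(oldSize: number, newSize: number, patchSize: number): CompressionRatio {
  return { oldSize, newSize, patchSize, ratio: (patchSize / newSize) * 100 }
}
```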
- Fork the project
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Create a Pull Request
- Rust Code: Use `cargo fmt` for formatting
- JavaScript/TypeScript: Use Prettier for formatting
- Commit Messages: Use clear English descriptions
- bsdiff Original Algorithm - Colin Percival's original implementation
- NAPI-RS Documentation - Node.js binding framework
- Rust Official Documentation - Rust programming language
- zstd Compression Algorithm - Facebook's open-source compression algorithm
If this project helps you, please give it a star!
Found an issue? Feel free to submit an Issue.
Have suggestions for improvement? Pull Requests are welcome.