A comprehensive benchmark suite comparing semantic version libraries in Node.js, specifically:
- node-semver - The standard semver parser used by npm
- @vltpkg/semver - High-performance semver library by the vlt team
This benchmark suite tests the most common semver operations:
- Parsing - Converting version strings to version objects
- Comparison - Comparing two versions (compare, gt, lt, eq)
- Satisfies - Checking if a version satisfies a range
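To make the three operations concrete, here is a simplified, self-contained sketch of what "parse", "compare", and "satisfies" mean for semver strings. This is not the code of either benchmarked library; it implements SemVer precedence directly, and the `satisfies` helper handles only plain comparator ranges.

```javascript
// Simplified sketch of the three benchmarked operations (not library code).
const RE = /^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*))?(?:\+[0-9A-Za-z.-]+)?$/;

// Parsing: version string -> structured object. Build metadata is matched
// but discarded, since it is ignored for precedence per the SemVer spec.
function parse(v) {
  const m = RE.exec(v);
  if (!m) return null;
  return {
    major: +m[1],
    minor: +m[2],
    patch: +m[3],
    prerelease: m[4] ? m[4].split('.') : [],
  };
}

// Comparison: returns -1, 0, or 1, following SemVer precedence rules.
function compare(a, b) {
  const pa = parse(a), pb = parse(b);
  for (const k of ['major', 'minor', 'patch']) {
    if (pa[k] !== pb[k]) return pa[k] < pb[k] ? -1 : 1;
  }
  // A prerelease version has lower precedence than the normal version.
  if (pa.prerelease.length && !pb.prerelease.length) return -1;
  if (!pa.prerelease.length && pb.prerelease.length) return 1;
  const n = Math.max(pa.prerelease.length, pb.prerelease.length);
  for (let i = 0; i < n; i++) {
    const x = pa.prerelease[i], y = pb.prerelease[i];
    if (x === undefined) return -1; // shorter prerelease sorts first
    if (y === undefined) return 1;
    const nx = /^\d+$/.test(x), ny = /^\d+$/.test(y);
    if (nx && ny) { if (+x !== +y) return +x < +y ? -1 : 1; }
    else if (nx !== ny) return nx ? -1 : 1; // numeric < alphanumeric
    else if (x !== y) return x < y ? -1 : 1;
  }
  return 0;
}

// Satisfies: plain comparator ranges only (e.g. ">=1.2.7").
function satisfies(v, range) {
  const [, op, target] = /^(>=|<=|>|<|=)?\s*(.+)$/.exec(range);
  const c = compare(v, target);
  switch (op) {
    case '>':  return c > 0;
    case '>=': return c >= 0;
    case '<':  return c < 0;
    case '<=': return c <= 0;
    default:   return c === 0;
  }
}

console.log(compare('1.0.0-alpha', '1.0.0'));  // → -1
console.log(satisfies('1.3.0', '>=1.2.7'));    // → true
```

Both libraries expose equivalents of these operations; the benchmarks below time how fast each library performs them.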
Prerequisites:

- Node.js 22+ (required by @vltpkg/semver)
- vlt package manager
- hyperfine (optional, for detailed benchmarking)
```bash
# Install dependencies using vlt
vlt install

# Or use the package script
vlr install-deps
```
First, verify that both libraries produce consistent results:
```bash
vlr test
```
```bash
# Run the comprehensive benchmark suite
vlr benchmark

# Run individual benchmark categories
vlr benchmark:parsing
vlr benchmark:comparison
vlr benchmark:satisfies
```
The chart generation feature automatically creates performance tables and charts:

```bash
# Run benchmarks and generate charts/tables in README
vlr benchmark:generate

# Just collect benchmark data (saves to benchmark-results.json)
vlr benchmark:collect

# Generate charts from existing results
vlr benchmark:charts
```
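The ASCII bar charts shown later in this README can be produced from the collected ops/sec figures with a few lines of code. This is a hypothetical sketch of that rendering step; the actual `benchmark:charts` script may differ:

```javascript
// Render an ASCII bar chart from ops/sec results (illustrative sketch).
// The fastest entry gets a full-width bar; others are scaled proportionally.
function renderChart(title, results, width = 40) {
  const max = Math.max(...results.map((r) => r.opsPerSec));
  const lines = [`${title} │`];
  for (const { name, opsPerSec } of results) {
    const bar = '█'.repeat(Math.round((opsPerSec / max) * width));
    lines.push(`${name.padEnd(12)}│${bar} ${opsPerSec.toLocaleString()}`);
  }
  return lines.join('\n');
}

console.log(renderChart('comparison', [
  { name: 'node-semver', opsPerSec: 122921 },
  { name: '@vltpkg', opsPerSec: 225656 },
]));
```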
For more detailed performance analysis with statistical data:
```bash
# Quick hyperfine benchmark
vlr benchmark:quick

# Full hyperfine benchmark suite (exports JSON and Markdown results)
vlr benchmark:hyperfine

# Run all benchmarks (tests + charts + hyperfine)
vlr benchmark:all
```
The hyperfine benchmarks will generate result files:

- results-parsing.json/md - Parsing operation results
- results-comparison.json/md - Comparison operation results
- results-satisfies.json/md - Satisfies operation results
- results-combined.json/md - Full benchmark suite results
Based on the vlt team's documentation, @vltpkg/semver should show:
- 40-50% faster at parsing versions
- 15-20% faster at parsing ranges
- 60-70% faster at testing versions against ranges
In our measurements, @vltpkg/semver wins overall, taking two of the three categories:
- Parsing: node-semver is 60% faster
- Comparison: @vltpkg/semver is 84% faster
- Satisfies: @vltpkg/semver is 10% faster
| Operation | node-semver | @vltpkg/semver | Winner | Improvement |
|---|---|---|---|---|
| Parsing | 131,968 ops/sec | 82,468 ops/sec | 🏆 node-semver | 60% |
| Comparison | 122,921 ops/sec | 225,656 ops/sec | 🏆 @vltpkg/semver | 84% |
| Satisfies | 17,860 ops/sec | 19,685 ops/sec | 🏆 @vltpkg/semver | 10% |
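The Improvement column can be read as the winner's throughput gain relative to the loser (an assumption about the table's methodology, but it is consistent with every row above):

```javascript
// Relative throughput gain of the faster library over the slower one,
// rounded to a whole percent.
function improvement(a, b) {
  const [fast, slow] = a > b ? [a, b] : [b, a];
  return Math.round(((fast - slow) / slow) * 100);
}

console.log(improvement(131968, 82468));  // parsing    → 60
console.log(improvement(225656, 122921)); // comparison → 84
console.log(improvement(19685, 17860));   // satisfies  → 10
```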
```
parsing     │
node-semver │████████████████████████████████████████ 131,968
@vltpkg     │█████████████████████████ 82,468
            │
comparison  │
node-semver │██████████████████████ 122,921
@vltpkg     │████████████████████████████████████████ 225,656
            │
satisfies   │
node-semver │████████████████████████████████████ 17,860
@vltpkg     │████████████████████████████████████████ 19,685
```
Parsing:

| Metric | node-semver | @vltpkg/semver |
|---|---|---|
| Operations/sec | 131,968 | 82,468 |
| Mean time (ms) | 0.008 | 0.012 |
| Margin of error | ±0ms | ±0ms |
| Relative margin | ±0.97% | ±0.41% |
| Sample runs | 85 | 96 |
Comparison:

| Metric | node-semver | @vltpkg/semver |
|---|---|---|
| Operations/sec | 122,921 | 225,656 |
| Mean time (ms) | 0.008 | 0.004 |
| Margin of error | ±0ms | ±0ms |
| Relative margin | ±0.11% | ±0.13% |
| Sample runs | 98 | 97 |
Satisfies:

| Metric | node-semver | @vltpkg/semver |
|---|---|---|
| Operations/sec | 17,860 | 19,685 |
| Mean time (ms) | 0.056 | 0.051 |
| Margin of error | ±0ms | ±0.001ms |
| Relative margin | ±0.23% | ±1.37% |
| Sample runs | 101 | 95 |
- Node.js: v24.1.0
- Platform: darwin (arm64)
- node-semver: v7.7.2
- @vltpkg/semver: v0.0.0-22
- Test Date: 2025-08-20, 2:10:09 p.m.
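The Node.js and platform details above can be captured at runtime from the standard `process` object; a small sketch (library versions would come from each package's own package.json, which is omitted here):

```javascript
// Collect runtime environment details for a benchmark report.
const env = {
  node: process.version,                           // e.g. "v24.1.0"
  platform: `${process.platform} (${process.arch})`, // e.g. "darwin (arm64)"
  testDate: new Date().toLocaleString(),
};

console.log(env);
```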
The benchmarks use a comprehensive set of test cases including:
Versions:
- Standard versions (1.0.0, 2.1.3)
- Prerelease versions (1.0.0-alpha.1, 1.2.3-beta.4)
- Build metadata (1.2.3+build.5)
- Complex prerelease strings
- Edge cases
Ranges:
- Caret ranges (^1.0.0)
- Tilde ranges (~1.2.3)
- Comparison ranges (>=1.2.7 <1.3.0)
- Hyphen ranges (1.2.3 - 2.3.4)
- Complex logical operators
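The caret and tilde ranges above are sugar for comparator pairs, which is the range semantics the satisfies benchmarks exercise. A sketch of the desugaring (an illustration of the semantics, not either library's parsing code):

```javascript
// Desugar caret/tilde ranges into plain comparator ranges (illustrative).
function desugar(range) {
  const m = /^([\^~])(\d+)\.(\d+)\.(\d+)$/.exec(range);
  if (!m) return range; // already a plain comparator range
  const [, sugar, major, minor, patch] = m;
  const lower = `>=${major}.${minor}.${patch}`;
  if (sugar === '^') {
    // ^ allows changes that do not modify the leftmost non-zero field.
    if (+major > 0) return `${lower} <${+major + 1}.0.0`;
    if (+minor > 0) return `${lower} <0.${+minor + 1}.0`;
    return `${lower} <0.${minor}.${+patch + 1}`;
  }
  // ~ allows patch-level changes only.
  return `${lower} <${major}.${+minor + 1}.0`;
}

console.log(desugar('^1.0.0')); // → ">=1.0.0 <2.0.0"
console.log(desugar('~1.2.3')); // → ">=1.2.3 <1.3.0"
```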
```
semver-benchmarks/
├── benchmarks/
│   ├── index.js      # Main benchmark suite
│   ├── parsing.js    # Parsing benchmarks
│   ├── comparison.js # Comparison benchmarks
│   └── satisfies.js  # Satisfies benchmarks
├── test/
│   └── test.js       # Compatibility tests
├── run-hyperfine.sh  # Hyperfine benchmark script
├── package.json      # Project configuration
└── README.md         # This file
```
Feel free to add more test cases, additional libraries, or improve the benchmark methodology.