A Python 3-based network stress-testing tool that orchestrates multiple parallel iperf3 client processes with configurable parameters. Ideal for testing network performance, identifying bottlenecks, and validating infrastructure under various load conditions.
- Multi-client orchestration: Run N parallel iperf3 clients simultaneously
- Flexible configuration: JSON-based config for easy test definition
- Protocol support: Both TCP and UDP with protocol-specific options
- Staggered starts: Avoid perfect synchronization with configurable offsets
- Detailed logging: Each client saves full JSON output with metadata
- Automated analysis: Generates CSV summary with key performance metrics
- Visualization tools: Script that renders detailed graphs from test results
- Docker support: Run tests in containers with automatic visualization
- Ready-to-use scenarios: Pre-configured example test scenario included
The generated visualization shows a comprehensive analysis including:
- Total Network Throughput: Aggregate upload vs download over time
- Per-Client Throughput: Individual client performance tracking
- TCP Retransmits: Network congestion indicators
- TCP RTT Variance: Latency variability measurements
- TCP Round Trip Time: Connection latency over time
The Docker container automatically runs the stress test and generates visualizations!
# Build the image
docker build -t network-stress-test .
# Copy and customize the example scenario
cp scenario/scenario_speedtest.json.example scenario/my_scenario.json
# Edit scenario/my_scenario.json with your iperf3 server IP
# Run the test (includes automatic visualization)
docker run --rm --network host \
-v $(pwd)/results:/app/results \
-v $(pwd)/scenario/my_scenario.json:/app/my_scenario.json:ro \
network-stress-test -o /app/results my_scenario.json
# Results saved in ./results/ directory including:
# - iperf3-client-*.json (raw data)
# - results_summary.csv (summary)
# - network_stress_test_detailed.png (visualization)

Or use Docker Compose:
# 1. Copy and customize the example scenario
cp scenario/scenario_speedtest.json.example scenario/scenario_speedtest.json
# Edit scenario/scenario_speedtest.json with your iperf3 server IP
# 2. Update docker-compose.yml to reference your scenario file
# 3. Run the test
docker-compose up --build
# Results saved in ./results/ directory with visualization

Skip visualization if desired:
docker run --rm --network host \
-v $(pwd)/results:/app/results \
-v $(pwd)/scenario/my_scenario.json:/app/my_scenario.json:ro \
network-stress-test --no-visualize -o /app/results my_scenario.json

Prerequisites:
- Python 3.6+
- iperf3 installed (see Installation below)
- matplotlib for visualization (optional):
pip install -r requirements.txt
Run a test:
# Copy and customize the example scenario
cp scenario/scenario_speedtest.json.example scenario/my_scenario.json
# Edit scenario/my_scenario.json with your iperf3 server IP
# Run the test with automatic visualization
./main.py scenario/my_scenario.json
# Or run just the stress test
python3 utilities/network_stress_test.py scenario/my_scenario.json
# Or specify custom output directory
./main.py scenario/my_scenario.json --output-dir ./my_results

Only required if running directly (not needed for Docker):
# Ubuntu/Debian
sudo apt-get install iperf3
# macOS
brew install iperf3
# RHEL/CentOS
sudo yum install iperf3
# Fedora
sudo dnf install iperf3

# Install visualization dependencies (optional)
pip install -r requirements.txt

Test configurations are defined in JSON files. See scenario/scenario_speedtest.json.example for a complete example.
{
"test_name": "My Speed Test",
"description": "Test description",
"clients": [
{
"server_ip": "192.168.1.100",
"server_port": 5201,
"protocol": "tcp",
"duration": 30,
"parallel_streams": 4,
"interval": 0.5,
"stagger_offset": 0,
"title": "Download Test"
},
{
"server_ip": "192.168.1.100",
"server_port": 5201,
"protocol": "tcp",
"duration": 30,
"parallel_streams": 4,
"interval": 0.5,
"stagger_offset": 35,
"reverse": true,
"title": "Upload Test"
}
]
}

Required parameters:
- `server_ip` (string): Target iperf3 server IP address
- `protocol` (string): "tcp" or "udp"
- `duration` (integer): Test duration in seconds

Optional parameters:
- `server_port` (integer): Server port (default: 5201)
- `stagger_offset` (float): Delay before starting this client, in seconds (default: 0)
- `interval` (float): Seconds between periodic throughput reports (default: 1.0)
- `title` (string): Descriptive title for this test (used in visualizations)
- `reverse` (boolean): Run in reverse mode - server sends, client receives (default: false)
- `parallel_streams` (integer): Number of parallel TCP streams (default: 1); maps to the iperf3 `-P` flag - higher values can saturate links faster
- `bandwidth` (string): Target bandwidth (default: "10M"); maps to the iperf3 `-b` flag
  - Examples: "1M", "100K", "1G", "500K"
  - Supports K (Kbits/s), M (Mbits/s), G (Gbits/s)
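For example, `stagger_offset` lets tests run back to back instead of all at once. A small sketch (the file path is a placeholder) that prints the timeline a scenario implies:

```python
import json

# Print when each client starts and finishes, per stagger_offset + duration.
with open("scenario/my_scenario.json") as f:  # placeholder path
    cfg = json.load(f)

for i, client in enumerate(cfg["clients"]):
    start = client.get("stagger_offset", 0)
    end = start + client["duration"]
    print(f"client {i} ({client.get('title', 'untitled')}): t={start}s .. t={end}s")
```

With the example configuration above, client 1 starts at t=35s, five seconds after client 0 finishes its 30-second run, so the download and upload tests never overlap.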
Each client produces a file named iperf3-client-<id>.json containing:
{
"client_id": 0,
"start_timestamp": "2025-10-09T10:30:45.123456",
"command": "iperf3 -c 192.168.1.100 -p 5201 -t 30 --json -P 4",
"return_code": 0,
"iperf3_output": {
... full iperf3 JSON output ...
},
"error": null
}

A results_summary.csv file is generated with the following columns:
| Column | Description | Units |
|---|---|---|
| `client_id` | Client identifier | - |
| `start_timestamp` | Test start time | ISO 8601 |
| `protocol` | tcp or udp | - |
| `bits_per_second` | Throughput | bits/s |
| `mbits_per_second` | Throughput (readable) | Mbits/s |
| `retransmits` | TCP retransmit count | count |
| `jitter_ms` | UDP jitter | milliseconds |
| `packet_loss_percent` | UDP packet loss | percentage |
| `return_code` | Exit code (0 = success) | - |
| `error` | Error message if failed | - |
| `command` | Full iperf3 command run | - |
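The awk one-liners later in this README address these columns by position; in Python you can use the header names directly. A minimal sketch, assuming the default `results/results_summary.csv` location:

```python
import csv

# Sum throughput and collect failed clients from the summary CSV.
with open("results/results_summary.csv", newline="") as f:
    rows = list(csv.DictReader(f))

total_mbps = sum(float(r["mbits_per_second"]) for r in rows if r["mbits_per_second"])
failed = [r["client_id"] for r in rows if r["return_code"] != "0"]
print(f"Total: {total_mbps:.2f} Mbps; failed clients: {failed or 'none'}")
```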
When using main.py or Docker, a network_stress_test_detailed.png file is automatically generated showing:
- Aggregate throughput over time (total network usage)
- Per-client throughput graphs
- TCP retransmits over time
- TCP RTT (Round Trip Time) measurements
- TCP RTT variance (latency variability)
- UDP packet loss (if UDP tests were run)
For TCP tests:
- bits_per_second / mbits_per_second: Achieved throughput
- Compare against expected link capacity
- Multiple clients should aggregate near total capacity
- retransmits: Number of TCP retransmissions
- Low values (< 1% of packets): Normal
- High values: Indicates congestion or packet loss
- Zero retransmits: Ideal conditions
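To pull the retransmit count out of a saved result file, read the per-client JSON described above. A minimal sketch; the `end.sum_sent` layout is typical for TCP sender output but may vary across iperf3 versions:

```python
import json

# File name follows the iperf3-client-<id>.json convention described earlier.
with open("results/iperf3-client-0.json") as f:
    data = json.load(f)

end = data["iperf3_output"]["end"]
retrans = end.get("sum_sent", {}).get("retransmits", 0)  # field presence may vary
mbps = end["sum"]["bits_per_second"] / 1e6
print(f"{mbps:.1f} Mbps with {retrans} retransmits")
```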
For UDP tests:
- bits_per_second: Achieved throughput (should match target bandwidth if not limited)
- jitter_ms: Variation in packet arrival times
- < 10ms: Excellent
- 10-30ms: Good for most applications
- \> 30ms: May affect real-time applications
- packet_loss_percent: Percentage of lost packets
- < 1%: Excellent
- 1-5%: Acceptable for most applications
- \> 5%: Problematic, especially for VoIP/video
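These rules of thumb are easy to encode for automated pass/fail checks. A small helper with the thresholds exactly as listed above:

```python
# Classify UDP results against the jitter and packet-loss thresholds above.
def rate_jitter(jitter_ms: float) -> str:
    if jitter_ms < 10:
        return "excellent"
    if jitter_ms <= 30:
        return "good for most applications"
    return "may affect real-time applications"

def rate_loss(loss_percent: float) -> str:
    if loss_percent < 1:
        return "excellent"
    if loss_percent <= 5:
        return "acceptable for most applications"
    return "problematic, especially for VoIP/video"

print(rate_jitter(4.2), "/", rate_loss(0.3))  # excellent / excellent
```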
The script prints a summary after all tests complete:
================================================================================
TEST SUMMARY
================================================================================
Total Clients: 2
Successful: 2
Failed: 0
TCP Streams: 2
Total Throughput: 945.23 Mbps
Average per Stream: 472.62 Mbps
Total Retransmits: 234
================================================================================
The included scenario/scenario_speedtest.json.example demonstrates a comprehensive speed test:
Purpose: Measure both download and upload speeds with detailed metrics
Configuration:
- First test: Download test (30 parallel streams, 45 seconds)
- Second test: Upload test (30 parallel streams, 45 seconds, reverse mode)
- Staggered start with 50-second offset between tests
- 0.5-second reporting intervals for detailed metrics
Use cases:
- Customer bandwidth verification
- ISP speed testing
- Network performance baseline establishment
- Before/after comparison for network changes
To use:
# Copy the example
cp scenario/scenario_speedtest.json.example scenario/my_speedtest.json
# Edit the server IP (change 192.168.0.100 to your iperf3 server)
# In scenario/my_speedtest.json, update both "server_ip" values
# Run with Docker
docker run --rm --network host \
-v $(pwd)/results:/app/results \
-v $(pwd)/scenario/my_speedtest.json:/app/my_speedtest.json:ro \
network-stress-test -o /app/results my_speedtest.json
# Or run directly
./main.py scenario/my_speedtest.json

You can easily create your own test scenarios:
Simple single-stream test:
{
"test_name": "Single Stream Test",
"description": "Basic throughput test",
"clients": [
{
"server_ip": "YOUR_SERVER_IP",
"server_port": 5201,
"protocol": "tcp",
"duration": 30,
"parallel_streams": 1
}
]
}

Multi-stream saturation test:
{
"test_name": "Saturation Test",
"description": "Saturate link with 50 parallel streams",
"clients": [
{
"server_ip": "YOUR_SERVER_IP",
"protocol": "tcp",
"duration": 30,
"parallel_streams": 50,
"interval": 0.5
}
]
}

UDP latency test:
{
"test_name": "UDP Latency Test",
"description": "Test UDP jitter and packet loss",
"clients": [
{
"server_ip": "YOUR_SERVER_IP",
"protocol": "udp",
"duration": 60,
"bandwidth": "10M"
}
]
}

Running in Docker provides several advantages:
- No local dependencies: iperf3, Python, and matplotlib are pre-installed
- Automatic visualization: Test results are automatically visualized
- Consistent environment: Same behavior across different host systems
- Easy cleanup: No residual files on host system
- Host networking: Uses the host network stack for direct access to iperf3 servers
Basic test with visualization:
docker run --rm --network host \
-v $(pwd)/results:/app/results \
-v $(pwd)/scenario/my_scenario.json:/app/my_scenario.json:ro \
network-stress-test -o /app/results my_scenario.json
# Check results/network_stress_test_detailed.png for visualization

Skip visualization for faster results:
docker run --rm --network host \
-v $(pwd)/results:/app/results \
-v $(pwd)/scenario/my_scenario.json:/app/my_scenario.json:ro \
network-stress-test --no-visualize -o /app/results my_scenario.json

Run visualization manually on existing results:
docker run --rm \
-v $(pwd)/results:/app/results \
--entrypoint python3 \
network-stress-test utilities/visualize_detailed.py /app/results --save

Interactive shell for debugging:
docker run --rm -it --network host \
-v $(pwd)/results:/app/results \
--entrypoint /bin/bash \
network-stress-test

The docker-compose.yml file provides an easier way to run tests:
Setup:
# 1. Copy and customize the example scenario
cp scenario/scenario_speedtest.json.example scenario/scenario_speedtest.json
# 2. Edit scenario/scenario_speedtest.json with your iperf3 server IP
# 3. Update docker-compose.yml volumes section to mount your scenario file

Run:
docker-compose up --build
# Results saved to ./results/ including visualization PNG

Run in detached mode:
docker-compose up -d
docker-compose logs -f

View visualization after test completes:
open results/network_stress_test_detailed.png # macOS
xdg-open results/network_stress_test_detailed.png # Linux
start results/network_stress_test_detailed.png # Windows

The script uses --network host by default to ensure direct access to the network for accurate testing:
- host: Container shares host network stack (recommended for accuracy)
- bridge: Container gets its own network namespace (may affect performance)
For most use cases, stick with host mode for the most accurate network testing.
# With automatic visualization
./main.py scenario/my_scenario.json
# Just the stress test
python3 utilities/network_stress_test.py scenario/my_scenario.json
# Custom output directory
./main.py scenario/my_scenario.json --output-dir ./my_results

Or with Docker:

docker run --rm --network host \
-v $(pwd)/results:/app/results \
-v $(pwd)/scenario/my_scenario.json:/app/my_scenario.json:ro \
network-stress-test -o /app/results my_scenario.json

You can target different iperf3 servers in the same test:
{
"test_name": "Multi-Server Test",
"description": "Testing multiple network paths",
"clients": [
{
"server_ip": "192.168.1.100",
"protocol": "tcp",
"duration": 30,
"parallel_streams": 4,
"title": "Server 1"
},
{
"server_ip": "192.168.2.100",
"protocol": "tcp",
"duration": 30,
"parallel_streams": 4,
"title": "Server 2"
}
]
}

After running a test, examine the CSV:
cat results/results_summary.csv

Or use command-line tools:
# Calculate total throughput
awk -F',' 'NR>1 {sum+=$5} END {print "Total Mbps:", sum}' results/results_summary.csv
# Check for failures
awk -F',' 'NR>1 && $9!=0 {print "Client", $1, "failed:", $10}' results/results_summary.csv
# Average UDP jitter
awk -F',' 'NR>1 && $7!="" {sum+=$7; count++} END {print "Avg jitter:", sum/count, "ms"}' results/results_summary.csv

# Pretty-print JSON output for client 0
python3 -m json.tool results/iperf3-client-0.json
# Extract specific metrics
python3 -c "
import json
with open('results/iperf3-client-0.json') as f:
data = json.load(f)
if 'iperf3_output' in data and data['iperf3_output']:
print('Throughput:', data['iperf3_output']['end']['sum']['bits_per_second'])
"# Using direct Python
python3 utilities/visualize_detailed.py results/ --save
# Using Docker
docker run --rm \
-v $(pwd)/results:/app/results \
--entrypoint python3 \
network-stress-test utilities/visualize_detailed.py /app/results --save

iperf3 not found:
- Install iperf3 (see Requirements section above) or use Docker.
Cannot connect to the server:
- Verify the iperf3 server is running: `iperf3 -s`
- Check the server is accessible: `ping <server_ip>`
- Verify firewall rules allow traffic on the specified port
- Verify the server IP address is correct in your scenario JSON
- Test basic connectivity: `ping <server_ip>`
- Test iperf3 manually: `iperf3 -c <server_ip> -t 5`
- Verify the iperf3 server is running and accessible
UDP packet loss:
- Normal on busy systems
- UDP is lossy by design
- Reduce bandwidth or increase buffer sizes

High TCP retransmits:
- Expected under saturation conditions
- If persistent with low load, investigate:
  - Network congestion
  - MTU mismatches
  - Faulty hardware
Test hangs or processes linger:
- Check for zombie iperf3 processes: `ps aux | grep iperf3`
- Kill if needed: `pkill iperf3`
- Ensure sufficient timeout in the test duration
Visualization not generated:
- Ensure matplotlib is installed: `pip install matplotlib`
- Check for errors in console output
- Verify JSON result files exist in the output directory
- Try running visualization manually: `python3 utilities/visualize_detailed.py results/ --save`
- Start with short tests: Use 10-30 second durations initially
- Increase load gradually: Start with few streams, add more
- Monitor both ends: Watch server CPU/memory during tests
- Use stagger offsets: Avoid a thundering herd with `stagger_offset` (see the generator sketch after this list)
- Test at different times: Network conditions vary throughout the day
- Verify baseline: Run single-stream test first to establish baseline
- Clean between tests: Remove old JSON files or use different output directories
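To apply the "start small, ramp up, stagger" advice without hand-editing JSON, scenarios can be generated programmatically. A minimal sketch using the configuration keys documented above; the server IP and output path are placeholders:

```python
import json

SERVER = "192.168.1.100"  # placeholder: your iperf3 server

# One client per step, each with more streams, staggered so tests run back to back.
clients = [
    {
        "server_ip": SERVER,
        "protocol": "tcp",
        "duration": 30,
        "parallel_streams": streams,
        "stagger_offset": step * 35,  # 30s test plus a 5s gap before the next
        "title": f"{streams} streams",
    }
    for step, streams in enumerate([1, 4, 16])
]

scenario = {
    "test_name": "Ramp-Up Test",
    "description": "Gradually increasing parallel streams",
    "clients": clients,
}

with open("scenario/ramp_up.json", "w") as f:  # placeholder output path
    json.dump(scenario, f, indent=2)
```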
The stress test can also be driven from Python:

import sys
sys.path.append('utilities')
from network_stress_test import run_stress_test, load_config

config = load_config('my_test.json')
run_stress_test(config)

The script is modular and can be extended:
- Modify `IperfClient.build_command()` in `utilities/network_stress_test.py` to add iperf3 flags
- Extend `parse_results()` to extract additional metrics
- Add custom summary calculations in `generate_summary_csv()`
- Modify `utilities/visualize_detailed.py` to add more graphs
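For instance, one way to add a flag without editing the core file is to subclass. This is a hypothetical sketch: it assumes `IperfClient.build_command()` returns the argv list for the iperf3 process, which may not match the actual implementation:

```python
import sys
sys.path.append('utilities')
from network_stress_test import IperfClient  # class named in the extension notes above

class ZeroCopyClient(IperfClient):
    """Hypothetical subclass adding iperf3's --zerocopy flag to every command."""

    def build_command(self):
        cmd = super().build_command()  # assumed to return a list of argv tokens
        cmd.append("--zerocopy")       # real iperf3 flag: use sendfile() to cut CPU cost
        return cmd
```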
.
├── main.py # Main wrapper with auto-visualization
├── utilities/
│ ├── network_stress_test.py # Core stress test orchestrator
│ └── visualize_detailed.py # Visualization generator
├── scenario/
│ └── scenario_speedtest.json.example # Example speed test scenario
├── Dockerfile # Docker container definition
├── docker-compose.yml # Docker Compose configuration
├── requirements.txt # Python dependencies
└── README.md # This file
- CPU usage: Each client spawns a thread; 10-50 clients should be fine on modern systems
- Memory: Minimal, mostly for JSON storage
- Disk I/O: Each client writes one JSON file (typically < 100KB)
- Network: This is the bottleneck you're testing!
This script is provided as-is for network testing and diagnostic purposes.
Feel free to extend this script for your specific use cases. The code is modular and well-documented.
- iperf3 documentation
- iperf3 GitHub
- RFC 2544: Benchmarking Methodology for Network Interconnect Devices