feat(drivers/net): Add loopback network driver #1834


Merged: 2 commits into hermit-os:main from loopback on Jul 24, 2025

Conversation

Gelbpunkt (Member)

The idea is that in the immediate future, this driver will be the fallback network driver. Once we support multiple interfaces, it should always be the driver for the loopback interface.

@Gelbpunkt force-pushed the loopback branch 2 times, most recently from 4077a95 to 0c282d4 on July 15, 2025 15:39
@mkroening self-assigned this Jul 15, 2025
@mkroening self-requested a review July 15, 2025 15:40
github-actions bot (Contributor) left a comment

Benchmark Results

| Benchmark | Current: 30da148 | Previous: 9804a49 | Performance Ratio |
|---|---|---|---|
| startup_benchmark Build Time | 80.02 s | 100.57 s | 0.80 |
| startup_benchmark File Size | 0.88 MB | 0.88 MB | 1.00 |
| Startup Time - 1 core | 0.83 s (±0.02 s) | 0.88 s (±0.02 s) | 0.94 |
| Startup Time - 2 cores | 0.83 s (±0.02 s) | 0.91 s (±0.03 s) | 0.92 |
| Startup Time - 4 cores | 0.85 s (±0.02 s) | 0.90 s (±0.01 s) | 0.95 |
| multithreaded_benchmark Build Time | 81.88 s | 100.03 s | 0.82 |
| multithreaded_benchmark File Size | 0.99 MB | 0.99 MB | 1.00 |
| Multithreaded Pi Efficiency - 2 Threads | 88.16 % (±7.89 %) | 73.59 % (±5.70 %) | 1.20 |
| Multithreaded Pi Efficiency - 4 Threads | 43.19 % (±3.12 %) | 42.77 % (±2.22 %) | 1.01 |
| Multithreaded Pi Efficiency - 8 Threads | 25.17 % (±1.51 %) | 20.96 % (±1.21 %) | 1.20 |
| micro_benchmarks Build Time | 94.48 s | 83.21 s | 1.14 |
| micro_benchmarks File Size | 1.00 MB | 1.00 MB | 1.00 |
| Scheduling time - 1 thread | 69.13 ticks (±4.40 ticks) | 61.31 ticks (±1.80 ticks) | 1.13 |
| Scheduling time - 2 threads | 41.28 ticks (±6.11 ticks) | 36.22 ticks (±6.00 ticks) | 1.14 |
| Micro - Time for syscall (getpid) | 14.87 ticks (±1.25 ticks) | 13.79 ticks (±1.32 ticks) | 1.08 |
| Memcpy speed - (built_in) block size 4096 | 74366.80 MByte/s (±51464.05 MByte/s) | 85845.79 MByte/s (±59111.97 MByte/s) | 0.87 |
| Memcpy speed - (built_in) block size 1048576 | 41323.76 MByte/s (±28686.30 MByte/s) | 43653.64 MByte/s (±30175.59 MByte/s) | 0.95 |
| Memcpy speed - (built_in) block size 16777216 | 28016.49 MByte/s (±23060.87 MByte/s) | 29568.32 MByte/s (±24301.55 MByte/s) | 0.95 |
| Memset speed - (built_in) block size 4096 | 74416.97 MByte/s (±51505.52 MByte/s) | 86109.96 MByte/s (±59291.06 MByte/s) | 0.86 |
| Memset speed - (built_in) block size 1048576 | 41562.50 MByte/s (±28857.51 MByte/s) | 43871.88 MByte/s (±30328.99 MByte/s) | 0.95 |
| Memset speed - (built_in) block size 16777216 | 28763.15 MByte/s (±23502.07 MByte/s) | 30356.23 MByte/s (±24771.41 MByte/s) | 0.95 |
| Memcpy speed - (rust) block size 4096 | 65914.58 MByte/s (±46283.77 MByte/s) | 74211.27 MByte/s (±51810.52 MByte/s) | 0.89 |
| Memcpy speed - (rust) block size 1048576 | 41327.41 MByte/s (±28692.52 MByte/s) | 43566.94 MByte/s (±30117.37 MByte/s) | 0.95 |
| Memcpy speed - (rust) block size 16777216 | 27659.69 MByte/s (±22742.56 MByte/s) | 29614.42 MByte/s (±24341.50 MByte/s) | 0.93 |
| Memset speed - (rust) block size 4096 | 66604.64 MByte/s (±46702.29 MByte/s) | 74451.73 MByte/s (±51948.34 MByte/s) | 0.89 |
| Memset speed - (rust) block size 1048576 | 41593.23 MByte/s (±28871.14 MByte/s) | 43781.02 MByte/s (±30267.28 MByte/s) | 0.95 |
| Memset speed - (rust) block size 16777216 | 28358.06 MByte/s (±23141.39 MByte/s) | 30404.63 MByte/s (±24812.07 MByte/s) | 0.93 |
| alloc_benchmarks Build Time | 93.04 s | 81.65 s | 1.14 |
| alloc_benchmarks File Size | 0.95 MB | 0.95 MB | 1.00 |
| Allocations - Allocation success | 100.00 % | 100.00 % | 1 |
| Allocations - Deallocation success | 69.97 % (±0.25 %) | 69.98 % (±0.25 %) | 1.00 |
| Allocations - Pre-fail Allocations | 100.00 % | 100.00 % | 1 |
| Allocations - Average Allocation time | 10138.83 Ticks (±235.56 Ticks) | 9654.00 Ticks (±228.02 Ticks) | 1.05 |
| Allocations - Average Allocation time (no fail) | 10138.83 Ticks (±235.56 Ticks) | 9654.00 Ticks (±228.02 Ticks) | 1.05 |
| Allocations - Average Deallocation time | 702.76 Ticks (±13.99 Ticks) | 660.78 Ticks (±8.94 Ticks) | 1.06 |
| mutex_benchmark Build Time | 92.08 s | 83.44 s | 1.10 |
| mutex_benchmark File Size | 1.00 MB | 1.00 MB | 1.00 |
| Mutex Stress Test Average Time per Iteration - 1 Threads | 13.98 ns (±0.68 ns) | 12.02 ns (±0.47 ns) | 1.16 |
| Mutex Stress Test Average Time per Iteration - 2 Threads | 16.56 ns (±1.54 ns) | 13.72 ns (±0.87 ns) | 1.21 |

This comment was automatically generated by workflow using github-action-benchmark.

mkroening (Member) left a comment

Thanks! This looks good.

Can you also add a simple example to hermit-rs that we can run in CI to test this?

The idea is that in the immediate future, this driver will be the
fallback network driver. Once we support multiple interfaces, it should
always be the driver for the loopback interface.

Signed-off-by: Jens Reidel <[email protected]>
Gelbpunkt (Member, Author)

> Can you also add a simple example to hermit-rs that we can run in CI to test this?

Can do, but that would depend on this PR getting merged, and vice versa, I think.

@mkroening added this pull request to the merge queue Jul 24, 2025
Merged via the queue into hermit-os:main with commit 7dd73eb Jul 24, 2025
30 checks passed
mkroening (Member)

> Can do, but that would depend on this PR getting merged and vice-versa, I think

Yes, please! :)
