Arm backend: Minimal example of pruning #15851
base: main
Conversation
Explain how to prune a NN and the associated uplift in performance when running on the Ethos-U NPU. Change-Id: Ib68513e5b4cb7ceef280b6fe089985e9948a8140
🔗 Helpful Links 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15851
Note: Links to docs will display an error until the docs builds have been completed. ✅ You can merge normally! (1 Unrelated Failure) As of commit 1f012a9 with merge base 8e33788. FLAKY - The following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Pull Request Overview
This PR introduces a comprehensive tutorial on neural network pruning for the Arm Ethos-U NPU, demonstrating how to prune a model and measure the resulting performance improvements. The example uses a simple MNIST classifier and shows the complete workflow from training to deployment.
Key changes:
- Adds a Jupyter notebook tutorial demonstrating pruning workflow with PyTorch and ExecuTorch
- Updates performance monitoring to track MAC and Weight Decoder activity for Ethos-U85
- Shows 3x+ inference speedup and significant memory reduction through pruning
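The pruning workflow the tutorial walks through can be sketched with PyTorch's built-in pruning utilities. This is a minimal illustration, not the notebook's exact code; the layer size here is made up:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)  # fix the seed for reproducible results

# A stand-in Linear layer; the tutorial's MNIST model is larger.
layer = nn.Linear(16, 8)

# Zero the 50% of weights with the smallest L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.5)
# Fold the pruning mask into the weight tensor permanently.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.2f}")  # 0.50
```

After `prune.remove`, the zeroed weights are plain zeros in the tensor, which is what lets the Ethos-U compiler's weight encoder compress them.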
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| examples/arm/pruning_minimal_example.ipynb | Complete tutorial notebook showing how to prune a neural network, quantize it, and deploy to Ethos-U NPU with performance analysis |
| examples/arm/executor_runner/arm_perf_monitor.cpp | Adds MAC_ACTIVE and WD_ACTIVE PMU counter tracking for Ethos-U85 to support detailed performance analysis |
Comments suppressed due to low confidence (1)
examples/arm/pruning_minimal_example.ipynb:1
- The flag '--debug-force-regor' appears to contain a typo. Verify if this should be '--debug-force-regor' or if it's a misspelling of a valid flag like '--debug-force-regen' or '--debug-force-reorder'.
| "id": "a4750eaf",
| "metadata": {},
| "source": [
| "Let's instantiate the model and train it. In order to get reproduceable results, we will fix the seed."
Copilot (AI) · Nov 17, 2025
Corrected spelling of 'reproduceable' to 'reproducible'.
| - "Let's instantiate the model and train it. In order to get reproduceable results, we will fix the seed."
| + "Let's instantiate the model and train it. In order to get reproducible results, we will fix the seed."
| "id": "9837d9ba",
| "metadata": {},
| "source": [
| "We obtian 96% top1 accuracy for the FP32 model.\n",
Corrected spelling of 'obtian' to 'obtain'.
| - "We obtian 96% top1 accuracy for the FP32 model.\n",
| + "We obtain 96% top1 accuracy for the FP32 model.\n",
| "Original Weights Size 522.50 KiB\n",
| "NPU Encoded Weights Size 507.44 KiB\n",
| "```\n",
| "In other words, the original Weights are 522KB and after compilation and encoding by the compiler, we get 507KB of weights that will be read by the NPU at runtime. Remembmer this is for the case when we've not applied pruning or clustering. This will generate original_model.pte file that we will deploy on device later on. \n",
Corrected spelling of 'Remembmer' to 'Remember'.
| - "In other words, the original Weights are 522KB and after compilation and encoding by the compiler, we get 507KB of weights that will be read by the NPU at runtime. Remembmer this is for the case when we've not applied pruning or clustering. This will generate original_model.pte file that we will deploy on device later on. \n",
| + "In other words, the original Weights are 522KB and after compilation and encoding by the compiler, we get 507KB of weights that will be read by the NPU at runtime. Remember this is for the case when we've not applied pruning or clustering. This will generate original_model.pte file that we will deploy on device later on. \n",
| "from torch import nn\n",
| "import torch.nn.utils.prune as prune\n",
| "import torch.nn.functional as F\n",
| "from torch.utils.data import DataLoader, Subset\n",
The import 'DataLoader' is duplicated on lines 54 and 58. Remove the duplicate import from line 58.
| - "from torch.utils.data import DataLoader, Subset\n",
| + "from torch.utils.data import Subset\n",
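For context, `Subset` is typically combined with `DataLoader` to carve out a small split of the training set, e.g. for post-training quantization calibration. A standalone sketch; the dataset here is a random stand-in, not the notebook's MNIST loader:

```python
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# Hypothetical stand-in for the notebook's MNIST training set.
images = torch.randn(100, 1, 28, 28)
labels = torch.randint(0, 10, (100,))
full_set = TensorDataset(images, labels)

# Subset selects a handful of samples as a calibration split.
calib_set = Subset(full_set, indices=list(range(10)))
calib_loader = DataLoader(calib_set, batch_size=5)

batch_sizes = [x.shape[0] for x, _ in calib_loader]
print(batch_sizes)  # [5, 5]
```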
| "metadata": {},
| "source": [
| "On the pruned model, the inference completes in 22k NPU cycles. The NPU still performs 8k MACs, but this time the number of cycles when the weight decoder is active has dropped to to 17k cycles. \n",
| "It's also worth noting that the size of the pte file has been reducded significantly - from 518 KB of the original model to 57KB of the pruned workload. "
Corrected spelling of 'reducded' to 'reduced'.
| - "It's also worth noting that the size of the pte file has been reducded significantly - from 518 KB of the original model to 57KB of the pruned workload. "
| + "It's also worth noting that the size of the pte file has been reduced significantly - from 518 KB of the original model to 57KB of the pruned workload. "
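A quick sanity check on the size reduction quoted in the notebook text; the figures are taken directly from the tutorial and the ratio is just arithmetic:

```python
# Sizes quoted in the text: 518 KB original .pte vs 57 KB pruned .pte.
orig_pte_kb = 518
pruned_pte_kb = 57
compression = orig_pte_kb / pruned_pte_kb
print(f"pruned .pte is {compression:.1f}x smaller")  # 9.1x
```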
| "# Conclusion\n",
| "We defined a simple model to solve the MNIST dataset. The model is using Linear layers and is heavily memory-bound on the external memory. We pruned the model and obtain similar int8 accuracy between the original workload and the pruned counterpart. Let us put the results from the runtime in a table and draw a few conclusions: \n",
| "\n",
| "| Model |NPU_ACTIVE cycles | NPU Encoded Weight Size | Weight Decoder Active Cycles | External memory beats read | Size of the pte file |\n",
The table header uses 'beats'. This should be 'bytes' if referring to data size, or kept as 'beats' if intentionally referring to memory bus beats.
| - "| Model |NPU_ACTIVE cycles | NPU Encoded Weight Size | Weight Decoder Active Cycles | External memory beats read | Size of the pte file |\n",
| + "| Model |NPU_ACTIVE cycles | NPU Encoded Weight Size | Weight Decoder Active Cycles | External memory bytes read | Size of the pte file |\n",
| ETHOSU_PMU_Set_EVTYPER(drv, 4, ETHOSU_PMU_NPU_IDLE);
| ETHOSU_PMU_Set_EVTYPER(drv, 5, ETHOSU_PMU_MAC_ACTIVE);
| ETHOSU_PMU_Set_EVTYPER(drv, 6, ETHOSU_PMU_WD_ACTIVE);
| // Enable the 5 counters
Comment states 'Enable the 5 counters' but the code now enables 7 counters (CNT1 through CNT7). Update the comment to say 'Enable the 7 counters'.
| - // Enable the 5 counters
| + // Enable the 7 counters
cc @freddan80 @per @zingo @oscarandersson8218 @digantdesai