Conversation

@gggekov gggekov commented Nov 17, 2025

Explain how to prune a NN and the associated uplift in performance when running on the Ethos-U NPU.

cc @freddan80 @per @zingo @oscarandersson8218 @digantdesai


Change-Id: Ib68513e5b4cb7ceef280b6fe089985e9948a8140
@gggekov gggekov requested a review from digantdesai as a code owner November 17, 2025 16:43

pytorch-bot bot commented Nov 17, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15851

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 1f012a9 with merge base 8e33788:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Nov 17, 2025
@gggekov gggekov added partner: arm For backend delegation, kernels, demo, etc. from the 3rd-party partner, Arm ciflow/trunk topic: not user facing and removed CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. labels Nov 17, 2025
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Copilot AI (Contributor) left a comment

Pull Request Overview

This PR introduces a comprehensive tutorial on neural network pruning for the Arm Ethos-U NPU, demonstrating how to prune a model and measure the resulting performance improvements. The example uses a simple MNIST classifier and shows the complete workflow from training to deployment.

Key changes:

  • Adds a Jupyter notebook tutorial demonstrating pruning workflow with PyTorch and ExecuTorch
  • Updates performance monitoring to track MAC and Weight Decoder activity for Ethos-U85
  • Shows 3x+ inference speedup and significant memory reduction through pruning
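
The pruning workflow the tutorial demonstrates can be sketched with `torch.nn.utils.prune`. The model below is a hypothetical stand-in for the tutorial's MNIST classifier (the real architecture and training loop live in examples/arm/pruning_minimal_example.ipynb), and the 80% sparsity level is an illustrative assumption, not the notebook's exact setting:

```python
import torch
from torch import nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# Hypothetical MNIST classifier: Linear layers only, so the workload is
# dominated by weight traffic (memory-bound), which is where pruning helps.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Zero out the 80% smallest-magnitude weights in every Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# Fold the pruning mask into the weight tensor so the zeros are permanent
# and the model can be exported like any ordinary nn.Module.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

total = sum(m.weight.numel() for m in model.modules() if isinstance(m, nn.Linear))
zeros = sum((m.weight == 0).sum().item() for m in model.modules() if isinstance(m, nn.Linear))
print(f"weight sparsity: {zeros / total:.2f}")
```

In practice the pruning is followed by fine-tuning to recover accuracy, then quantization and export to a .pte file, as the notebook walks through.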

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.

File Description

  • examples/arm/pruning_minimal_example.ipynb — Complete tutorial notebook showing how to prune a neural network, quantize it, and deploy to Ethos-U NPU with performance analysis
  • examples/arm/executor_runner/arm_perf_monitor.cpp — Adds MAC_ACTIVE and WD_ACTIVE PMU counter tracking for Ethos-U85 to support detailed performance analysis
Comments suppressed due to low confidence (1)

examples/arm/pruning_minimal_example.ipynb:1

  • The flag '--debug-force-regor' may contain a typo. Verify whether it is a valid compiler flag or a misspelling of a flag like '--debug-force-regen' or '--debug-force-reorder'.
{


"id": "a4750eaf",
"metadata": {},
"source": [
"Let's instantiate the model and train it. In order to get reproduceable results, we will fix the seed."
Copilot AI Nov 17, 2025

Corrected spelling of 'reproduceable' to 'reproducible'.

Suggested change
"Let's instantiate the model and train it. In order to get reproduceable results, we will fix the seed."
"Let's instantiate the model and train it. In order to get reproducible results, we will fix the seed."

"id": "9837d9ba",
"metadata": {},
"source": [
"We obtian 96% top1 accuracy for the FP32 model.\n",
Copilot AI Nov 17, 2025

Corrected spelling of 'obtian' to 'obtain'.

Suggested change
"We obtian 96% top1 accuracy for the FP32 model.\n",
"We obtain 96% top1 accuracy for the FP32 model.\n",

"Original Weights Size 522.50 KiB\n",
"NPU Encoded Weights Size 507.44 KiB\n",
"```\n",
"In other words, the original Weights are 522KB and after compilation and encoding by the compiler, we get 507KB of weights that will be read by the NPU at runtime. Remembmer this is for the case when we've not applied pruning or clustering. This will generate original_model.pte file that we will deploy on device later on. \n",
Copilot AI Nov 17, 2025

Corrected spelling of 'Remembmer' to 'Remember'.

Suggested change
"In other words, the original Weights are 522KB and after compilation and encoding by the compiler, we get 507KB of weights that will be read by the NPU at runtime. Remembmer this is for the case when we've not applied pruning or clustering. This will generate original_model.pte file that we will deploy on device later on. \n",
"In other words, the original Weights are 522KB and after compilation and encoding by the compiler, we get 507KB of weights that will be read by the NPU at runtime. Remember this is for the case when we've not applied pruning or clustering. This will generate original_model.pte file that we will deploy on device later on. \n",

"from torch import nn\n",
"import torch.nn.utils.prune as prune\n",
"import torch.nn.functional as F\n",
"from torch.utils.data import DataLoader, Subset\n",
Copilot AI Nov 17, 2025

The import 'DataLoader' is duplicated on lines 54 and 58. Remove the duplicate import from line 58.

Suggested change
"from torch.utils.data import DataLoader, Subset\n",
"from torch.utils.data import Subset\n",

"metadata": {},
"source": [
"On the pruned model, the inference completes in 22k NPU cycles. The NPU still performs 8k MACs, but this time the number of cycles when the weight decoder is active has dropped to 17k cycles. \n",
"It's also worth noting that the size of the pte file has been reducded significantly - from 518 KB of the original model to 57KB of the pruned workload. "
Copilot AI Nov 17, 2025

Corrected spelling of 'reducded' to 'reduced'.

Suggested change
"It's also worth noting that the size of the pte file has been reducded significantly - from 518 KB of the original model to 57KB of the pruned workload. "
"It's also worth noting that the size of the pte file has been reduced significantly - from 518 KB of the original model to 57KB of the pruned workload. "

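One way to build intuition for the large drop in encoded weight size and pte size discussed above: weight tensors full of zeros are highly compressible. The sketch below is illustrative only — the Ethos-U compiler uses its own weight encoder, not zlib, and the matrix here is synthetic rather than the tutorial's model:

```python
import zlib
import torch

torch.manual_seed(0)

# A dense random weight matrix vs. the same matrix with ~80% of its
# entries zeroed, mimicking the effect of magnitude pruning.
dense = torch.randn(256, 256)
mask = torch.rand_like(dense) < 0.8
pruned = dense.masked_fill(mask, 0.0)

def compressed_kib(t: torch.Tensor) -> float:
    """Size of the tensor's raw bytes after zlib compression, in KiB."""
    return len(zlib.compress(t.numpy().tobytes())) / 1024

# Random float data barely compresses; the pruned copy shrinks sharply
# because the zero entries are cheap to encode.
print(f"dense:  {compressed_kib(dense):.1f} KiB")
print(f"pruned: {compressed_kib(pruned):.1f} KiB")
```

The same effect, with a far more specialized encoder, is what takes the notebook's weight stream from roughly 507 KB down to tens of KB after pruning.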
"# Conclusion\n",
"We defined a simple model to solve the MNIST dataset. The model uses Linear layers and is heavily memory-bound on the external memory. We pruned the model and obtained similar int8 accuracy between the original workload and its pruned counterpart. Let us put the results from the runtime in a table and draw a few conclusions: \n",
"\n",
"| Model |NPU_ACTIVE cycles | NPU Encoded Weight Size | Weight Decoder Active Cycles | External memory beats read | Size of the pte file |\n",
Copilot AI Nov 17, 2025

The table header uses 'beats'. This should be 'bytes' if referring to data size, or kept as 'beats' if intentionally referring to memory bus beats; please clarify which is meant.

Suggested change
"| Model |NPU_ACTIVE cycles | NPU Encoded Weight Size | Weight Decoder Active Cycles | External memory beats read | Size of the pte file |\n",
"| Model |NPU_ACTIVE cycles | NPU Encoded Weight Size | Weight Decoder Active Cycles | External memory bytes read | Size of the pte file |\n",

ETHOSU_PMU_Set_EVTYPER(drv, 4, ETHOSU_PMU_NPU_IDLE);
ETHOSU_PMU_Set_EVTYPER(drv, 5, ETHOSU_PMU_MAC_ACTIVE);
ETHOSU_PMU_Set_EVTYPER(drv, 6, ETHOSU_PMU_WD_ACTIVE);
// Enable the 5 counters
Copilot AI Nov 17, 2025

Comment states 'Enable the 5 counters' but the code now enables 7 counters (CNT1 through CNT7). Update the comment to say 'Enable the 7 counters'.

Suggested change
// Enable the 5 counters
// Enable the 7 counters

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Nov 17, 2025