[reland] Fix coreml to edge transform and lower #12629
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12629
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
✅ No Failures as of commit 2579c85 with merge base 6d86fa9.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
exir/program/_program.py (Outdated)

    # Edge will complain if there are view ops requested for preservation, so we replace them with view_copy
    program = _replace_view_with_view_copy(program)
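For context, a minimal sketch of what such a replacement could look like. The `_VIEW_TO_COPY` table, the op-name strings, and the helper name below are illustrative assumptions, not the actual ExecuTorch implementation of `_replace_view_with_view_copy`:

```python
# Illustrative sketch only: map view ops requested for preservation to their
# copy variants, in the spirit of _replace_view_with_view_copy. The table and
# op names here are hypothetical, not ExecuTorch's real implementation.
_VIEW_TO_COPY = {
    "aten.view": "aten.view_copy",
    "aten.permute": "aten.permute_copy",
    "aten.transpose": "aten.transpose_copy",
}

def replace_views_with_copies(ops_to_preserve):
    """Swap each view op in the preservation list for its copy variant."""
    return [_VIEW_TO_COPY.get(op, op) for op in ops_to_preserve]
```

For example, `replace_views_with_copies(["aten.view", "aten.add"])` would yield `["aten.view_copy", "aten.add"]`: view ops are rewritten, everything else passes through untouched.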
Do backends treat these the same at to_backend?
They do not necessarily (CoreML does for many of them), but this replacement is already applied in _generate_edge_program after the verifier and before to_backend is called, so it doesn't change the behavior that exists today.
Some other options are:
- Always filter out view ops from preservation
- Query curr_partitioner.ops_to_not_decompose for support of the view op's copy variant, and only preserve the view op if the copy variant is present.
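A rough sketch of that second option. The function name and the shape of the `view_to_copy` table are hypothetical, not real ExecuTorch APIs; it just illustrates the filtering rule:

```python
# Illustrative sketch of option 2: only preserve a view op when the
# partitioner's ops_to_not_decompose also contains its copy variant.
# All names here are hypothetical.
def filter_view_ops(ops_to_not_decompose, view_to_copy):
    supported = set(ops_to_not_decompose)
    kept = []
    for op in ops_to_not_decompose:
        copy_variant = view_to_copy.get(op)
        # Keep non-view ops unconditionally; keep a view op only when its
        # copy variant is also in the partitioner's supported set.
        if copy_variant is None or copy_variant in supported:
            kept.append(op)
    return kept
```

With `view_to_copy = {"aten.view": "aten.view_copy"}`, a list containing `aten.view` but not `aten.view_copy` would have the view op filtered out, while a list containing both would keep it.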
I can also remove this line. It shouldn't be needed after you land D78535519
Can we test the model export and make sure last Friday's issue does not appear again with this diff?
So this will fix the numerical lowering issues?
When combined with #12665, I think it should (that PR switches to using torchao APIs for quantization).
I will test to_edge_transform_and_lower on the ANE static model in the examples/apple/coreml/llama folder.
Force-pushed from 80db2ac to 2f452c0
Force-pushed from 2f452c0 to e4176ce
Force-pushed from e4176ce to 2579c85
@metascroy has imported this pull request. If you are a Meta employee, you can view this in D78751401.
…uantization APIs (#12665)

This switches the ANE model to use to_edge_transform_and_lower and torchao quantization APIs.

To use to_edge_transform_and_lower, we first need to land #12629. To use torchao quant APIs, we first need to land #12648 and #12664.

This PR contains all of the changes from those PRs because it is rebased on them. I will rebase on main once those PRs land to make this easier to review.
Re-land of: #12564
Previous attempt had conflict with #12306 that caused CI failure.
The current design of using EDGE_DO_NOT_DECOMP to prevent decomposition has long-standing issues and often fails lowering when certain ops are requested for preservation. This shows up most notably in the CoreML backend, where most ops are requested for preservation.
As a band-aid, we introduced _remove_invalid_ops_for_not_decompose to cover certain kinds of ops. But when an op is encountered that we do not have an exception for, lowering still fails.
We also recently found another bug that shows up for SDPA related to contiguous views (https://fb.workplace.com/groups/pytorch.edge.users/permalink/1796069037930048/) that we still do not fully understand the root cause of.
EDGE_DO_NOT_DECOMP is actually only used to support the "check_op_support" argument in the partitioner; ops_to_not_decompose only modifies the default decomposition table.
In CoreML's case, "check_op_support" is not used, and the issues with EDGE_DO_NOT_DECOMP's design cause lots of lowering issues that are hard to keep up with. This PR enables a new path that bypasses EDGE_DO_NOT_DECOMP when possible (_can_skip_using_EDGE_DO_NOT_DECOMP).
Long term, we need to address the buggy design of EDGE_DO_NOT_DECOMP. There are some ideas here: https://fb.workplace.com/groups/pytorch.edge2.team/permalink/1241898747065975/
cc @kimishpatel @YifanShenSZ @cymbalrush
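A sketch of the bypass decision described above, under the stated assumption that EDGE_DO_NOT_DECOMP exists only to serve the partitioner's "check_op_support" argument. The function name mirrors _can_skip_using_EDGE_DO_NOT_DECOMP but this is not the actual ExecuTorch code:

```python
# Illustrative sketch: since EDGE_DO_NOT_DECOMP only exists to support the
# partitioner's "check_op_support" callback, the marker-based path can be
# skipped whenever no partitioner supplies one. Hypothetical, not the real
# _can_skip_using_EDGE_DO_NOT_DECOMP implementation.
def can_skip_edge_do_not_decomp(partitioners):
    return all(
        getattr(p, "check_op_support", None) is None for p in partitioners
    )
```

This matches the CoreML situation described above: its partitioner does not use "check_op_support", so the check returns True and the problematic EDGE_DO_NOT_DECOMP machinery never runs.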