[reland] Fix coreml to edge transform and lower #12629
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12629
Note: Links to docs will display an error until the docs builds have been completed.
❌ 7 New Failures as of commit 3f67eef with merge base 413dee4.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
exir/program/_program.py (Outdated)
# Edge will complain if there are view ops requested for preservation, so we replace them with view_copy
program = _replace_view_with_view_copy(program)
Do backends treat these the same at to_backend?
They do not necessarily (CoreML does for many of them), but this replacement is already applied in _generate_edge_program after the verifier and before to_backend is called, so it doesn't change the behavior that exists today.
Some other options are:
- Always filter out view ops from preservation
- Query curr_partitioner.ops_to_not_decompose for the view op's copy variant, and only preserve the view op if the copy variant is present (see the sketch below).
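A minimal sketch of that second option, assuming a hypothetical _get_copy_variant helper that maps a view op to its *_copy overload; the helper name and the (ops, check_op_support) return shape of ops_to_not_decompose are assumptions here, not code from this PR:

```python
# Illustrative only: preserve a view op for non-decomposition only if the
# partitioner also lists its *_copy variant in ops_to_not_decompose.
def _should_preserve_view_op(view_op, curr_partitioner, exported_program) -> bool:
    # Hypothetical helper, e.g. aten.view.default -> aten.view_copy.default
    copy_variant = _get_copy_variant(view_op)
    if copy_variant is None:
        return False
    # Assumes ops_to_not_decompose returns (ops, check_op_support)
    ops_to_not_decompose, _check_op_support = curr_partitioner.ops_to_not_decompose(
        exported_program
    )
    return copy_variant in ops_to_not_decompose
```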
I can also remove this line. It shouldn't be needed after you land D78535519
Can we test the model export and make sure last Friday's issue does not appear again with this diff?
So this will fix the numerical lowering issues?
When combined with #12665, I think it should (this switches to using torchao APIs for quantization).
I will test to_edge_transform_and_lower on the ANE static model in the examples/apple/coreml/llama folder.
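A rough sketch of what that test could look like; the model builder and example inputs below are placeholders, not the actual script in examples/apple/coreml/llama, and the CoreMLPartitioner import path is an assumption:

```python
import torch
from executorch.exir import to_edge_transform_and_lower
from executorch.backends.apple.coreml.partition import CoreMLPartitioner  # assumed path

# Placeholder for the ANE static llama model from examples/apple/coreml/llama
model = build_ane_static_llama().eval()                   # hypothetical builder
example_inputs = (torch.zeros(1, 8, dtype=torch.long),)   # placeholder shapes

ep = torch.export.export(model, example_inputs)
edge = to_edge_transform_and_lower(ep, partitioner=[CoreMLPartitioner()])
et_program = edge.to_executorch()
# Compare eager vs. lowered outputs to confirm the numerical issues are gone.
```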
Re-land of: #12564
The previous attempt had a conflict with #12306 that caused a CI failure.
The current design of using EDGE_DO_NOT_DECOMP to prevent decomposition has long-standing issues, and lowering often fails when certain ops are requested for preservation. This shows up most notably in the CoreML backend, where most ops are requested for preservation.
As a band-aid, we introduced _remove_invalid_ops_for_not_decompose to cover certain kinds of ops. But when an op is encountered that we do not have an exception for, lowering still fails.
We also recently found another bug that shows up for SDPA related to contiguous views (https://fb.workplace.com/groups/pytorch.edge.users/permalink/1796069037930048/), whose root cause we still do not fully understand.
EDGE_DO_NOT_DECOMP is actually only used to support the "check_op_support" argument in the partitioner; ops_to_not_decompose only modifies the default decomposition table.
In CoreML's case, "check_op_support" is not used, and the issues with EDGE_DO_NOT_DECOMP's design cause many lowering issues that are hard to keep up with. This PR enables a new path that bypasses EDGE_DO_NOT_DECOMP when possible (_can_skip_using_EDGE_DO_NOT_DECOMP).
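To illustrate the condition this path relies on, here is a simplified sketch; it is not the actual _can_skip_using_EDGE_DO_NOT_DECOMP implementation, and the function name below is hypothetical:

```python
def _can_skip_tagging_illustration(partitioners, exported_program) -> bool:
    # Illustrative only: EDGE_DO_NOT_DECOMP tagging exists to support the optional
    # check_op_support callback. If no partitioner supplies one, the preserved ops
    # can simply be kept out of the decomposition table instead of being tagged.
    for partitioner in partitioners:
        _ops, check_op_support = partitioner.ops_to_not_decompose(exported_program)
        if check_op_support is not None:
            return False
    return True
```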
Long term, we need to address the buggy design of EDGE_DO_NOT_DECOMP. There are some ideas here: https://fb.workplace.com/groups/pytorch.edge2.team/permalink/1241898747065975/
cc @kimishpatel @YifanShenSZ @cymbalrush