[ET Device Support] Schema changes: device info on Tensor and buffer-level device array #17533
Gasoonjia wants to merge 2 commits into gh/gasoonjia/122/base
Conversation
…level device array

This diff adds device placement information to the ExecuTorch schema to support representing tensor-level device type information, which is a prerequisite for the upcoming tensor_parser updates. This is part of the Phase 1 implementation to make ET device type work E2E without user-specified device placement.

Design doc: https://docs.google.com/document/d/1lwd9BlohmwkN5EEvRulO_b-XnZBwv1nMb5l2K3jfuwA/edit?tab=t.0#heading=h.o6anuvkix4bu

Differential Revision: [D93635657](https://our.internmc.facebook.com/intern/diff/D93635657/)

[ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17533
Note: Links to docs will display an error until the docs builds have been completed.
❌ 35 New Failures, 15 Unrelated Failures
As of commit 9584c1c with merge base fc3239c.
NEW FAILURES - The following jobs have failed:
BROKEN TRUNK - The following jobs failed but were present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
…and buffer-level device array"

This diff adds device placement information to the ExecuTorch schema to support representing tensor-level device type information, which is a prerequisite for the upcoming tensor_parser updates. This is part of the Phase 1 implementation to make ET device type work E2E without user-specified device placement.

Design doc: https://docs.google.com/document/d/1lwd9BlohmwkN5EEvRulO_b-XnZBwv1nMb5l2K3jfuwA/edit?tab=t.0#heading=h.o6anuvkix4bu

Differential Revision: [D93635657](https://our.internmc.facebook.com/intern/diff/D93635657/)

[ghstack-poisoned]
device_type: DeviceType = DeviceType.CPU
# Device index for multi-device scenarios (e.g., cuda:0, cuda:1).
# A value of -1 indicates the default device.
device_index: int = -1
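For context, a minimal, self-contained sketch of how these new fields might sit in the schema's Python dataclasses. Only `device_type` and `device_index` come from the diff above; the `DeviceType` members and the surrounding `Tensor` fields are assumptions for illustration:

```python
from dataclasses import dataclass
from enum import IntEnum


class DeviceType(IntEnum):
    # Assumed members for illustration; the actual schema enum may list
    # additional backends.
    CPU = 0
    CUDA = 1


@dataclass
class Tensor:
    # ... existing tensor fields (sizes, scalar_type, etc.) omitted ...
    # Device on which this tensor is expected to be placed at load time.
    device_type: DeviceType = DeviceType.CPU
    # Device index for multi-device scenarios (e.g., cuda:0, cuda:1).
    # A value of -1 indicates the default device.
    device_index: int = -1
```

With this layout, a tensor serialized without explicit placement defaults to `DeviceType.CPU` with `device_index = -1`, matching the default-device behavior described in the diff comments.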
nit: is it worth adding a 'DeviceInfo' dataclass/flatbuffers table if we may expect more device-related data?
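To illustrate the nit, a grouped layout might look like the following sketch. This is hypothetical: the diff as posted keeps the two fields inline on `Tensor`, and the `DeviceInfo` name and members here are assumptions:

```python
from dataclasses import dataclass, field
from enum import IntEnum


class DeviceType(IntEnum):
    CPU = 0
    CUDA = 1


@dataclass
class DeviceInfo:
    # Grouping device-related data keeps future additions (e.g., stream or
    # memory-pool hints) out of Tensor itself.
    device_type: DeviceType = DeviceType.CPU
    device_index: int = -1


@dataclass
class Tensor:
    # ... existing tensor fields omitted ...
    device_info: DeviceInfo = field(default_factory=DeviceInfo)
```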
JacobSzwejbka left a comment:
Review automatically exported from Phabricator review in Meta.
Stack from ghstack (oldest at bottom):
This diff adds device placement information to the ExecuTorch schema to support representing tensor-level device type information, which is a prerequisite for the upcoming tensor_parser updates.
This is part of the Phase 1 implementation to make ET device type work E2E without user-specified device placement.
Design doc: https://docs.google.com/document/d/1lwd9BlohmwkN5EEvRulO_b-XnZBwv1nMb5l2K3jfuwA/edit?tab=t.0#heading=h.o6anuvkix4bu
Differential Revision: D93635657
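As a rough sketch of the buffer-level device array named in the title (the concrete table and field names below are assumptions, not the schema as committed), the idea is a program-level list of device entries that data buffers can be associated with:

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import List


class DeviceType(IntEnum):
    CPU = 0
    CUDA = 1


@dataclass
class DeviceSpec:
    # Hypothetical entry in the buffer-level device array.
    device_type: DeviceType = DeviceType.CPU
    device_index: int = -1


@dataclass
class Program:
    # ... existing program fields omitted ...
    # One entry per data buffer, describing where that buffer's payload is
    # intended to be placed at load time.
    buffer_devices: List[DeviceSpec] = field(default_factory=list)
```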