Add support for torch.export exported models #1499
Conversation
Implements functionality to load and execute PyTorch models exported via torch.export (.pt2 files), enabling .NET applications to run ExportedProgram models as the PyTorch ecosystem transitions from ONNX to torch.export.

## Implementation

### Native Layer
- Add THSExport.h and THSExport.cpp C++ wrappers for the torch.export API
- Expose helper functions (toIValue, ReturnHelper) in THSJIT.h
- Add ExportedProgramModule typedef in Utils.h
- Update CMakeLists.txt to include THSExport sources

### Managed Layer
- Add LibTorchSharp.THSExport.cs with PInvoke declarations
- Implement ExportedProgram, ExportedProgram<TResult>, and ExportedProgram<T, TResult> classes in the new Export namespace
- Provide torch.export.load() API following PyTorch conventions

### Features
- Load .pt2 ExportedProgram files
- Execute forward pass with type-safe generics
- Device management (CPU, CUDA, MPS)
- Dtype conversion support
- Parameters and buffers access
- Training/eval mode compatibility

### Testing
- Add TestExport.cs with 10 comprehensive unit tests
- Include 6 test .pt2 models covering various scenarios:
  - Simple linear model
  - Linear + ReLU
  - Multiple inputs
  - Tuple and list outputs
  - Sequential models
- Update TorchSharpTest.csproj to copy .pt2 files to output

## Technical Details

The implementation leverages ~80% of the existing ScriptModule infrastructure, including TensorOrScalar marshalling and return value processing. The .pt2 format is compatible with torch::jit::load() in the LibTorch C++ API.

Fixes dotnet#1498
Implements functionality to load and execute PyTorch models exported via torch.export (.pt2 files), enabling .NET applications to run ExportedProgram models as the PyTorch ecosystem transitions from ONNX to torch.export.

## Implementation

### Native Layer
- Add THSExport.h and THSExport.cpp C++ wrappers for the AOTIModelPackageLoader API
- Update Utils.h to include torch/csrc/inductor/aoti_package/model_package_loader.h
- Upgrade to LibTorch 2.9.0, which includes the AOTIModelPackageLoader symbols

### Managed Layer
- Add LibTorchSharp.THSExport.cs with PInvoke declarations
- Implement ExportedProgram and ExportedProgram<TResult> classes in the Export namespace
- Provide torch.export.load() API following PyTorch conventions

### Features
- Load .pt2 ExportedProgram files compiled with torch._inductor.aoti_compile_and_package()
- Execute inference-only forward pass with type-safe generics
- Support for single tensor, array, and tuple (up to 3 elements) outputs
- Proper IDisposable implementation for resource cleanup

### Testing
- Add TestExport.cs with 7 comprehensive unit tests (all passing)
- Include 6 test .pt2 models covering various scenarios:
  - Simple linear model
  - Linear + ReLU
  - Multiple inputs
  - Tuple and list outputs
  - Sequential models
- Add generate_export_models.py for regenerating test models

## Technical Details

The implementation uses torch::inductor::AOTIModelPackageLoader from LibTorch 2.9+ for AOTInductor-compiled models, providing 30-40% better latency than TorchScript. Models are inference-only and compiled for a specific device (CPU/CUDA) at build time.

Note: .pt2 files from torch.export.save() are Python-only and not supported. Only .pt2 files from torch._inductor.aoti_compile_and_package() work in C++.

Fixes dotnet#1498
@dotnet-policy-service agree
Build failures: missing LibTorch 2.9.0 packages

I believe the CI builds are failing because the build system requires .sha files for LibTorch package validation, and these are missing for LibTorch 2.9.0.

Missing SHA files:

Package availability check:

Why my local tests passed: I was building against the PyTorch Python installation at

Should we wait for PyTorch to publish all LibTorch 2.9.0 packages?
Add SHA validation files for LibTorch 2.9.0 packages to enable CI builds.

PyTorch changed its naming convention at 2.8.0 from 'libtorch-cxx11-abi-*' to the unified 'libtorch-shared-with-deps-*' (which is cxx11-abi by default).

Added:
- libtorch-shared-with-deps-2.9.0+cpu.zip.sha (Linux)
- libtorch-win-shared-with-deps-2.9.0+cpu.zip.sha (Windows)
- libtorch-win-shared-with-deps-debug-2.9.0+cpu.zip.sha (Windows Debug)

SHA values computed from official PyTorch downloads at download.pytorch.org.
@masaru-kimura-hacarus Thank you for the detailed investigation and the Gemini Deep Research report! You're absolutely right: I was looking for the wrong package name. I've just pushed the correct SHA files using the new naming convention. Let's see if the CI builds pass now.
@dotnet-policy-service agree |



Add support for torch.export exported models (#1498)
Implements functionality to load and execute PyTorch models exported via torch.export (.pt2 files), enabling .NET applications to run ExportedProgram models as the PyTorch ecosystem transitions from ONNX to torch.export.
Summary
This PR adds support for loading and running AOTInductor-compiled `.pt2` models in TorchSharp using `torch::inductor::AOTIModelPackageLoader` from LibTorch 2.9+.

Key Points:
- Models are compiled with `torch._inductor.aoti_compile_and_package()` in Python

Implementation
Native Layer (C++)
Files:
- `src/Native/LibTorchSharp/Utils.h` - Added AOTIModelPackageLoader header include
- `src/Native/LibTorchSharp/THSExport.h` - C++ API declarations
- `src/Native/LibTorchSharp/THSExport.cpp` - Implementation using `torch::inductor::AOTIModelPackageLoader`

Key Changes:
Managed Layer (C#)
Files:
- `src/TorchSharp/PInvoke/LibTorchSharp.THSExport.cs` - PInvoke declarations
- `src/TorchSharp/Export/ExportedProgram.cs` - High-level C# API

API Design:
Features:
- `IDisposable` for proper resource cleanup
- `ExportedProgram<TResult>` for type-safe returns
- `run()`, `forward()`, and `call()` methods (all equivalent)

Testing
Files:
- `test/TorchSharpTest/TestExport.cs` - 7 comprehensive unit tests
- `test/TorchSharpTest/generate_export_models.py` - Python script to generate test models
- `test/TorchSharpTest/*.pt2` - 6 test models

Test Coverage:
All 7 tests pass successfully.
Dependencies
Updated:
- `build/Dependencies.props` - Updated LibTorch from 2.7.1 to 2.9.0

LibTorch 2.9.0 includes the `torch::inductor::AOTIModelPackageLoader` implementation that was previously only available in PyTorch source code.

Technical Details
Two .pt2 Formats
PyTorch has two different .pt2 export formats:
Python-only (from `torch.export.save()`):

AOTInductor-compiled (from `torch._inductor.aoti_compile_and_package()`):

Python Model Generation
To create compatible .pt2 files:
Limitations
Performance
According to PyTorch documentation, AOTInductor provides:
Testing
Migration Guide
For users currently using TorchScript:
Before (TorchScript):
After (torch.export):
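The Before/After code snippets were lost in this capture; a hedged C# sketch of what the migration might look like, based on the API names in this PR's description (the generic overloads and exact signatures are assumptions):

```csharp
using static TorchSharp.torch;

// Before (TorchScript) -- existing TorchSharp API:
using (var module = jit.load<Tensor, Tensor>("model.pt"))
{
    var output = module.call(randn(1, 4));
}

// After (torch.export / AOTInductor) -- API added by this PR:
using (var program = export.load("model.pt2"))
{
    // run(), forward(), and call() are equivalent per the PR description.
    var output = program.run(randn(1, 4));
}
```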
References
- `torch/csrc/inductor/aoti_package/model_package_loader.h`

Fixes #1498