Merged

Commits (44)
a2bdfdb
test initial attempt libtorch upgrade
alinpahontu2912 Jul 7, 2025
2b27ed2
enable testing
alinpahontu2912 Jul 7, 2025
abc1ec6
retrigger
alinpahontu2912 Jul 7, 2025
bef4ed8
run build cuda packages
alinpahontu2912 Jul 8, 2025
38036ff
Merge branch 'main' of https://github.com/alinpahontu2912/TorchSharp
alinpahontu2912 Jul 8, 2025
03705b8
enable all workflows
alinpahontu2912 Jul 8, 2025
27e2da7
retrigger
alinpahontu2912 Jul 8, 2025
c0e8863
retrigger
alinpahontu2912 Jul 8, 2025
2f49628
revert to original azure pipelines
alinpahontu2912 Jul 8, 2025
af70eec
build libtorch cuda workflow test
alinpahontu2912 Jul 8, 2025
d351d20
undo pipelines changes
alinpahontu2912 Jul 8, 2025
dda1f74
disable fork pull request check
alinpahontu2912 Jul 8, 2025
61e98a1
use correct cuda version in azure pipelines
alinpahontu2912 Jul 8, 2025
8c5526c
test conditions
alinpahontu2912 Jul 8, 2025
78216cf
set buildlibtorchpackages to true
alinpahontu2912 Jul 10, 2025
5756844
test build and sign
alinpahontu2912 Jul 10, 2025
31fc2a3
fix conditions
alinpahontu2912 Jul 10, 2025
a857f27
change condition
alinpahontu2912 Jul 10, 2025
b934907
test
alinpahontu2912 Jul 10, 2025
b286d78
undo comment
alinpahontu2912 Jul 10, 2025
3b07255
test
alinpahontu2912 Jul 10, 2025
50f81a6
add debug to pack.proj
alinpahontu2912 Jul 11, 2025
72e656f
extra debugging
alinpahontu2912 Jul 11, 2025
405e479
test
alinpahontu2912 Jul 11, 2025
1ef86ff
update names
alinpahontu2912 Jul 11, 2025
bba4f59
split linux cuda packages
alinpahontu2912 Jul 14, 2025
a1201f8
attempt restitch windows packages
alinpahontu2912 Jul 14, 2025
be72ff5
disable parallel windows cuda and linux cuda jobs
alinpahontu2912 Jul 14, 2025
ca39c83
fix condition
alinpahontu2912 Jul 14, 2025
35b39a0
update packages project files
alinpahontu2912 Jul 14, 2025
b71b98b
minimize jobs
alinpahontu2912 Jul 14, 2025
1fe0996
change jobs dependencies
alinpahontu2912 Jul 14, 2025
def759f
change dependencies
alinpahontu2912 Jul 14, 2025
acdc68c
test new approach
alinpahontu2912 Jul 15, 2025
1b18547
fix yaml styling
alinpahontu2912 Jul 15, 2025
4e064f6
restitch windows cuda packages
alinpahontu2912 Jul 15, 2025
c985303
update cuda nuggets splits
alinpahontu2912 Jul 15, 2025
9f6d291
add missing windows cuda dll
alinpahontu2912 Jul 17, 2025
80449e9
update cuda load dll
alinpahontu2912 Jul 17, 2025
b8e5280
fix references
alinpahontu2912 Jul 23, 2025
4f2dc30
try load multiple dlls
alinpahontu2912 Jul 23, 2025
077a325
fix project package
alinpahontu2912 Jul 25, 2025
473ac33
remove debug prints and update devguide and releasenotes
alinpahontu2912 Jul 29, 2025
2244f1a
keep track of current size of libraries
alinpahontu2912 Jul 29, 2025
4 changes: 2 additions & 2 deletions Directory.Build.props
@@ -20,7 +20,7 @@
<SourceDir>$(RepoRoot)src/</SourceDir>
<PkgDir>$(RepoRoot)pkg/</PkgDir>

-<LibTorchPackageVersion>2.5.1.0</LibTorchPackageVersion>
+<LibTorchPackageVersion>2.7.1.0</LibTorchPackageVersion>
<LibTorchPackageVersion Condition="'$(TargetOS)' == 'mac' and '$(TargetArchitecture)' == 'x64'">2.2.2.0</LibTorchPackageVersion>

<!-- when building on local machines the massive downloads get placed up one directory -->
@@ -86,7 +86,7 @@
<!-- use stable versions for libtorch packages based on LibTorch version number scheme-->
<!-- we manually update these -->
<PropertyGroup Condition="'$(MSBuildProjectName.IndexOf(`libtorch-`))' != '-1'">
-<LibTorchPackageVersion>2.5.1.0</LibTorchPackageVersion>
+<LibTorchPackageVersion>2.7.1.0</LibTorchPackageVersion>
<LibTorchPackageVersion Condition="'$(TargetOS)' == 'mac' and '$(TargetArchitecture)' == 'x64'">2.2.2.0</LibTorchPackageVersion>
<EnablePackageValidation>false</EnablePackageValidation>
<VersionPrefix>$(LibTorchPackageVersion)</VersionPrefix>
4 changes: 2 additions & 2 deletions Directory.Build.targets
@@ -39,7 +39,7 @@
<NativeAssemblyReference Include="cudnn_ops64_9" Variant="cuda\" />
<NativeAssemblyReference Include="cufft64_11" Variant="cuda\" />
<NativeAssemblyReference Include="cufftw64_11" Variant="cuda\" />
<NativeAssemblyReference Include="cupti64_2023.1.1" Variant="cuda\" />
<NativeAssemblyReference Include="cupti64_2025.1.0" Variant="cuda\" />
<NativeAssemblyReference Include="curand64_10" Variant="cuda\" />
<NativeAssemblyReference Include="cusolver64_11" Variant="cuda\" />
<NativeAssemblyReference Include="cusolverMg64_11" Variant="cuda\" />
@@ -49,7 +49,7 @@
<NativeAssemblyReference Include="libiompstubs5md" Variant="cuda\" />
<NativeAssemblyReference Include="nvJitLink_120_0" Variant="cuda\" />
<NativeAssemblyReference Include="nvToolsExt64_1" Variant="cuda\" />
<NativeAssemblyReference Include="nvrtc-builtins64_121" Variant="cuda\" />
<NativeAssemblyReference Include="nvrtc-builtins64_128" Variant="cuda\" />
<NativeAssemblyReference Include="nvrtc64_120_0" Variant="cuda\" />
<NativeAssemblyReference Include="torch" Variant="cuda\" />
<NativeAssemblyReference Include="torch_cpu" Variant="cuda\" />
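Reviewer note, not part of the diff: the Include values above must match the versioned DLL file names shipped in the CUDA 12.8 redist exactly, which is all these two renames do. Below is a minimal C++ sketch of that coupling at load time on Windows, assuming the DLL names follow the Include values above and sit on the loader's search path; it is an illustration, not code from this PR.

// Minimal sketch, not part of the PR: versioned CUDA DLL names must be
// probed exactly as shipped, which is why a libtorch upgrade forces renames
// like cupti64_2023.1.1 -> cupti64_2025.1.0 in the item list above.
#include <windows.h>
#include <cstdio>

int main() {
    const char* dlls[] = { "cupti64_2025.1.0.dll", "nvrtc-builtins64_128.dll" };
    for (const char* name : dlls) {
        HMODULE h = LoadLibraryA(name); // nullptr if the name/version is wrong
        std::printf("%s: %s\n", name, h ? "loaded" : "not found");
        if (h) FreeLibrary(h);
    }
    return 0;
}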
6 changes: 3 additions & 3 deletions azure-pipelines.yml
@@ -117,7 +117,7 @@ jobs:
condition: eq('${{ parameters.BuildLibTorchPackages }}', true)
displayName: Download libtorch native binaries

-- script: dotnet build -c $(BuildConfig) src/Redist/libtorch-cuda-12.1/libtorch-cuda-12.1.proj /p:UpdateSHA=true /p:SkipTests=true /p:TargetOS=linux /t:Build /p:IncludeLibTorchCudaPackages=true
+- script: dotnet build -c $(BuildConfig) src/Redist/libtorch-cuda-12.8/libtorch-cuda-12.8.proj /p:UpdateSHA=true /p:SkipTests=true /p:TargetOS=linux /t:Build /p:IncludeLibTorchCudaPackages=true
condition: eq('${{ parameters.BuildLibTorchPackages }}', true)
displayName: Download libtorch native CUDA binaries

@@ -157,7 +157,7 @@
- script: dotnet build -c $(BuildConfig) src/Redist/libtorch-cpu/libtorch-cpu.proj /p:UpdateSHA=true /p:SkipTests=true /p:TargetOS=windows /t:Build /p:IncludeLibTorchCpuPackages=true
displayName: Download libtorch native binaries

-- script: dotnet build -c $(BuildConfig) src/Redist/libtorch-cuda-12.1/libtorch-cuda-12.1.proj /p:UpdateSHA=true /p:SkipTests=true /p:TargetOS=windows /t:Build /p:IncludeLibTorchCudaPackages=true
+- script: dotnet build -c $(BuildConfig) src/Redist/libtorch-cuda-12.8/libtorch-cuda-12.8.proj /p:UpdateSHA=true /p:SkipTests=true /p:TargetOS=windows /t:Build /p:IncludeLibTorchCudaPackages=true
condition: eq('${{ parameters.BuildLibTorchPackages }}', true)
displayName: Download libtorch native CUDA binaries

@@ -859,4 +859,4 @@ jobs:
publishVstsFeed: 'TorchSharp/TestPackages'
allowPackageConflicts: true
# often fails - try but ignore the error until we sort it out
-continueOnError: true
\ No newline at end of file
+continueOnError: true
4 changes: 2 additions & 2 deletions build/BranchInfo.props
@@ -2,7 +2,7 @@
<PropertyGroup>
<MajorVersion>0</MajorVersion>
<MinorVersion>105</MinorVersion>
-<PatchVersion>1</PatchVersion>
-<PreviousPackageVersion>0.105.0</PreviousPackageVersion>
+<PatchVersion>2</PatchVersion>
+<PreviousPackageVersion>0.105.1</PreviousPackageVersion>
</PropertyGroup>
</Project>
6 changes: 3 additions & 3 deletions build/Dependencies.props
@@ -7,10 +7,10 @@

<!-- Other/Non-Core Product Dependencies -->
<PropertyGroup>
-<LibTorchVersion>2.5.1</LibTorchVersion>
+<LibTorchVersion>2.7.1</LibTorchVersion>
<LibTorchVersion Condition="'$(TargetArchitecture)' == 'x64' and '$(TargetOS)' == 'mac'">2.2.2</LibTorchVersion>
-<CudaVersionDot>12.1</CudaVersionDot>
-<CudaVersionNoDot>121</CudaVersionNoDot>
+<CudaVersionDot>12.8</CudaVersionDot>
+<CudaVersionNoDot>128</CudaVersionNoDot>
<MklDnnVersion>2019.0.5.20190502</MklDnnVersion>
</PropertyGroup>

2 changes: 1 addition & 1 deletion build/ci/job-template.yml
@@ -8,7 +8,7 @@ parameters:

jobs:
- job: ${{ parameters.name }}
-condition: ne(variables['build.sourcebranchname'], 'main')
+# condition: ne(variables['build.sourcebranchname'], 'main')
variables:
_prepScript: ${{ parameters.prepScript }}
_testScript: ${{ parameters.testScript }}
66 changes: 33 additions & 33 deletions src/Native/LibTorchSharp/THSLinearAlgebra.cpp
@@ -6,7 +6,7 @@

Tensor THSLinalg_cholesky(const Tensor tensor)
{
-CATCH_TENSOR(torch::linalg::cholesky(*tensor))
+CATCH_TENSOR(torch::linalg_cholesky(*tensor))
}

Tensor THSLinalg_cholesky_ex(const Tensor tensor, bool check_errors, Tensor* info)
@@ -44,7 +44,7 @@ Tensor THSLinalg_cross(const Tensor input, const Tensor other, const int64_t dim

Tensor THSLinalg_det(const Tensor tensor)
{
-CATCH_TENSOR(torch::linalg::det(*tensor))
+CATCH_TENSOR(torch::linalg_det(*tensor))
}

Tensor THSTensor_logdet(const Tensor tensor)
@@ -55,15 +55,15 @@ Tensor THSTensor_logdet(const Tensor tensor)
Tensor THSLinalg_slogdet(const Tensor tensor, Tensor* logabsdet)
{
std::tuple<at::Tensor, at::Tensor> res;
-CATCH(res = torch::linalg::slogdet(*tensor);)
+CATCH(res = torch::linalg_slogdet(*tensor);)
*logabsdet = ResultTensor(std::get<1>(res));
return ResultTensor(std::get<0>(res));
}

Tensor THSLinalg_eig(const Tensor tensor, Tensor* eigenvectors)
{
std::tuple<at::Tensor, at::Tensor> res;
-CATCH(res = torch::linalg::eig(*tensor););
+CATCH(res = torch::linalg_eig(*tensor););
*eigenvectors = ResultTensor(std::get<1>(res));
return ResultTensor(std::get<0>(res));
}
@@ -93,31 +93,31 @@ Tensor THSLinalg_eigh(const Tensor tensor, const char UPLO, Tensor* eigenvectors
std::string _uplo;
_uplo.push_back(UPLO);
std::tuple<at::Tensor, at::Tensor> res;
-CATCH(res = torch::linalg::eigh(*tensor, _uplo););
+CATCH(res = torch::linalg_eigh(*tensor, _uplo););
*eigenvectors = ResultTensor(std::get<1>(res));
return ResultTensor(std::get<0>(res));
}

Tensor THSLinalg_eigvals(const Tensor tensor)
{
-CATCH_TENSOR(torch::linalg::eigvals(*tensor))
+CATCH_TENSOR(torch::linalg_eigvals(*tensor))
}

Tensor THSLinalg_eigvalsh(const Tensor tensor, const char UPLO)
{
std::string _uplo;
_uplo.push_back(UPLO);
-CATCH_TENSOR(torch::linalg::eigvalsh(*tensor, _uplo))
+CATCH_TENSOR(torch::linalg_eigvalsh(*tensor, _uplo))
}

Tensor THSLinalg_householder_product(const Tensor tensor, const Tensor tau)
{
-CATCH_TENSOR(torch::linalg::householder_product(*tensor, *tau))
+CATCH_TENSOR(torch::linalg_householder_product(*tensor, *tau))
}

Tensor THSLinalg_inv(const Tensor tensor)
{
-CATCH_TENSOR(torch::linalg::inv(*tensor))
+CATCH_TENSOR(torch::linalg_inv(*tensor))
}

Tensor THSLinalg_inv_ex(const Tensor tensor, bool check_errors, Tensor* info)
@@ -131,7 +131,7 @@ Tensor THSLinalg_inv_ex(const Tensor tensor, bool check_errors, Tensor* info)
Tensor THSLinalg_lstsq_none(const Tensor A, const Tensor B, Tensor* residuals, Tensor* rank, Tensor* singular_values)
{
std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor> res;
-CATCH(res = torch::linalg::lstsq(*A, *B, c10::nullopt, c10::nullopt);)
+CATCH(res = torch::linalg_lstsq(*A, *B, c10::nullopt, c10::nullopt);)
*residuals = ResultTensor(std::get<1>(res));
*rank = ResultTensor(std::get<2>(res));
*singular_values = ResultTensor(std::get<3>(res));
Expand All @@ -141,7 +141,7 @@ Tensor THSLinalg_lstsq_none(const Tensor A, const Tensor B, Tensor* residuals, T
Tensor THSLinalg_lstsq_rcond(const Tensor A, const Tensor B, const double rcond, Tensor* residuals, Tensor* rank, Tensor* singular_values)
{
std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor> res;
-CATCH(res = torch::linalg::lstsq(*A, *B, rcond, c10::nullopt);)
+CATCH(res = torch::linalg_lstsq(*A, *B, rcond, c10::nullopt);)
*residuals = ResultTensor(std::get<1>(res));
*rank = ResultTensor(std::get<2>(res));
*singular_values = ResultTensor(std::get<3>(res));
Expand All @@ -151,7 +151,7 @@ Tensor THSLinalg_lstsq_rcond(const Tensor A, const Tensor B, const double rcond,
Tensor THSLinalg_lu(const Tensor A, const bool pivot, Tensor* L, Tensor* U)
{
std::tuple<at::Tensor, at::Tensor, at::Tensor> res;
-CATCH(res = torch::linalg::lu(*A, pivot);)
+CATCH(res = torch::linalg_lu(*A, pivot);)
*L = ResultTensor(std::get<1>(res));
*U = ResultTensor(std::get<2>(res));
return ResultTensor(std::get<0>(res));
Expand All @@ -160,7 +160,7 @@ Tensor THSLinalg_lu(const Tensor A, const bool pivot, Tensor* L, Tensor* U)
Tensor THSLinalg_lu_factor(const Tensor A, const bool pivot, Tensor* pivots)
{
std::tuple<at::Tensor, at::Tensor> res;
-CATCH(res = torch::linalg::lu_factor(*A, pivot);)
+CATCH(res = torch::linalg_lu_factor(*A, pivot);)
*pivots = ResultTensor(std::get<1>(res));
return ResultTensor(std::get<0>(res));
}
@@ -190,69 +190,69 @@ Tensor THSLinalg_ldl_solve(const Tensor LD, const Tensor pivots, const Tensor B,
Tensor THSLinalg_matrix_norm(const Tensor tensor, const Scalar ord, const int64_t* dim, const int dim_length, const bool keepdim)
{
auto dims = c10::ArrayRef<int64_t>(dim, dim_length);
-CATCH_TENSOR(torch::linalg::matrix_norm(*tensor, *ord, dims, keepdim, c10::nullopt))
+CATCH_TENSOR(torch::linalg_matrix_norm(*tensor, *ord, dims, keepdim, c10::nullopt))
}

Tensor THSLinalg_matrix_norm_fronuc(const Tensor tensor, const int8_t fronuc, const int64_t* dim, const int dim_length, const bool keepdim)
{
auto dims = c10::ArrayRef<int64_t>(dim, dim_length);
-CATCH_TENSOR(torch::linalg::matrix_norm(*tensor, (fronuc == 0) ? "fro" : "nuc", dims, keepdim, c10::nullopt))
+CATCH_TENSOR(torch::linalg_matrix_norm(*tensor, (fronuc == 0) ? "fro" : "nuc", dims, keepdim, c10::nullopt))
}

Tensor THSLinalg_vector_norm(const Tensor tensor, const Scalar ord, const int64_t* dim, const int dim_length, const bool keepdim)
{
auto dims = c10::ArrayRef<int64_t>(dim, dim_length);
-CATCH_TENSOR(torch::linalg::vector_norm(*tensor, *ord, dims, keepdim, c10::nullopt))
+CATCH_TENSOR(torch::linalg_vector_norm(*tensor, *ord, dims, keepdim, c10::nullopt))
}

Tensor THSLinalg_matrix_rank(const Tensor tensor, const double atol, const bool has_atol, const double rtol, const bool has_rtol, const bool hermitian)
{
auto atol_ = has_atol ? atol : c10::optional<double>();
auto rtol_ = has_rtol ? rtol : c10::optional<double>();

-CATCH_TENSOR(torch::linalg::matrix_rank(*tensor, atol_, rtol_, hermitian))
+CATCH_TENSOR(torch::linalg_matrix_rank(*tensor, atol_, rtol_, hermitian))
}

Tensor THSLinalg_matrix_rank_tensor(const Tensor tensor, const Tensor atol, const Tensor rtol, const bool hermitian)
{
const c10::optional<at::Tensor> atol_ = atol != nullptr ? *atol : c10::optional<at::Tensor>();
const c10::optional<at::Tensor> rtol_ = rtol != nullptr ? *rtol : c10::optional<at::Tensor>();

-CATCH_TENSOR(torch::linalg::matrix_rank(*tensor, atol_, rtol_, hermitian))
+CATCH_TENSOR(torch::linalg_matrix_rank(*tensor, atol_, rtol_, hermitian))
}

Tensor THSLinalg_matrix_power(const Tensor tensor, const int64_t n)
{
-CATCH_TENSOR(torch::linalg::matrix_power(*tensor, n))
+CATCH_TENSOR(torch::linalg_matrix_power(*tensor, n))
}

Tensor THSLinalg_multi_dot(const Tensor* tensors, const int length)
{
-CATCH_TENSOR(torch::linalg::multi_dot(toTensors<at::Tensor>((torch::Tensor**)tensors, length)))
+CATCH_TENSOR(torch::linalg_multi_dot(toTensors<at::Tensor>((torch::Tensor**)tensors, length)))
}

Tensor THSLinalg_norm_str(const Tensor tensor, const char* p, const int64_t* dim, const int dim_length, const bool keepdim)
{
c10::optional<at::IntArrayRef> dims = (dim == nullptr) ? c10::nullopt : c10::optional<at::IntArrayRef>(at::ArrayRef<int64_t>(dim, dim_length));
-CATCH_TENSOR(torch::linalg::norm(*tensor, p, dims, keepdim, c10::nullopt))
+CATCH_TENSOR(torch::linalg_norm(*tensor, p, dims, keepdim, c10::nullopt))
}

Tensor THSLinalg_norm_float(const Tensor tensor, const double p, const int64_t* dim, const int dim_length, const bool keepdim)
{
c10::optional<at::IntArrayRef> dims = (dim == nullptr) ? c10::nullopt : c10::optional<at::IntArrayRef>(at::ArrayRef<int64_t>(dim, dim_length));
-CATCH_TENSOR(torch::linalg::norm(*tensor, p, dims, keepdim, c10::nullopt))
+CATCH_TENSOR(torch::linalg_norm(*tensor, p, dims, keepdim, c10::nullopt))
}

Tensor THSLinalg_norm_int(const Tensor tensor, const int p, const int64_t* dim, const int dim_length, const bool keepdim)
{
c10::optional<at::IntArrayRef> dims = (dim == nullptr) ? c10::nullopt : c10::optional<at::IntArrayRef>(at::ArrayRef<int64_t>(dim, dim_length));
-CATCH_TENSOR(torch::linalg::norm(*tensor, p, dims, keepdim, c10::nullopt))
+CATCH_TENSOR(torch::linalg_norm(*tensor, p, dims, keepdim, c10::nullopt))
}

Tensor THSLinalg_norm_opt(const Tensor tensor, const int64_t* dim, const int dim_length, const bool keepdim)
{
c10::optional<at::IntArrayRef> dims = (dim == nullptr) ? c10::nullopt : c10::optional<at::IntArrayRef>(at::ArrayRef<int64_t>(dim, dim_length));
-CATCH_TENSOR(torch::linalg::norm(*tensor, c10::nullopt, dims, keepdim, c10::nullopt))
+CATCH_TENSOR(torch::linalg_norm(*tensor, c10::nullopt, dims, keepdim, c10::nullopt))
}

Tensor THSLinalg_pinv(const Tensor tensor, const double atol, const bool has_atol, const double rtol, const bool has_rtol, const bool hermitian)
@@ -273,7 +273,7 @@ Tensor THSLinalg_pinv_tensor(const Tensor tensor, const Tensor atol, const Tenso

Tensor THSLinalg_pinverse(const Tensor tensor, const double rcond, const bool hermitian)
{
-CATCH_TENSOR(torch::linalg::pinv(*tensor, rcond, hermitian))
+CATCH_TENSOR(torch::linalg_pinv(*tensor, rcond, hermitian))
}

Tensor THSLinalg_qr(const Tensor tensor, const char mode, Tensor* R)
@@ -295,50 +295,50 @@ Tensor THSLinalg_qr(const Tensor tensor, const char mode, Tensor* R)

Tensor THSLinalg_solve(const Tensor tensor, Tensor other, bool left)
{
-CATCH_TENSOR(torch::linalg::solve(*tensor, *other, left))
+CATCH_TENSOR(torch::linalg_solve(*tensor, *other, left))
}

Tensor THSLinalg_solve_ex(const Tensor tensor, Tensor other, bool left, bool check_errors, Tensor* S)
{
std::tuple<at::Tensor, at::Tensor> res;
-CATCH(res = torch::linalg::solve_ex(*tensor, *other, left, check_errors););
+CATCH(res = torch::linalg_solve_ex(*tensor, *other, left, check_errors););
*S = ResultTensor(std::get<1>(res));
return ResultTensor(std::get<0>(res));
}

Tensor THSLinalg_solve_triangular(const Tensor tensor, Tensor other, bool upper, bool left, bool unitriangular)
{
-CATCH_TENSOR(torch::linalg::solve_triangular(*tensor, *other, upper, left, unitriangular))
+CATCH_TENSOR(torch::linalg_solve_triangular(*tensor, *other, upper, left, unitriangular))
}

Tensor THSLinalg_solve_triangular_out(const Tensor tensor, Tensor other, bool upper, bool left, bool unitriangular, Tensor result)
{
-CATCH_TENSOR(torch::linalg::solve_triangular_out(*result, *tensor, *other, upper, left, unitriangular))
+CATCH_TENSOR(torch::linalg_solve_triangular_out(*result, *tensor, *other, upper, left, unitriangular))
}

Tensor THSLinalg_svd(const Tensor tensor, const bool full_matrices, Tensor* S, Tensor* Vh)
{
std::tuple<at::Tensor, at::Tensor, at::Tensor> res;
-CATCH(res = torch::linalg::svd(*tensor, full_matrices, c10::nullopt););
+CATCH(res = torch::linalg_svd(*tensor, full_matrices, c10::nullopt););
*S = ResultTensor(std::get<1>(res));
*Vh = ResultTensor(std::get<2>(res));
return ResultTensor(std::get<0>(res));
}

Tensor THSLinalg_svdvals(const Tensor tensor)
{
-CATCH_TENSOR(res = torch::linalg::svdvals(*tensor, c10::nullopt))
+CATCH_TENSOR(res = torch::linalg_svdvals(*tensor, c10::nullopt))
}

Tensor THSLinalg_tensorinv(const Tensor tensor, const int64_t ind)
{
-CATCH_TENSOR(torch::linalg::tensorinv(*tensor, ind))
+CATCH_TENSOR(torch::linalg_tensorinv(*tensor, ind))
}

Tensor THSLinalg_tensorsolve(const Tensor tensor, Tensor other, const int64_t* dim, const int dim_length)
{
c10::optional<at::IntArrayRef> dims = (dim == nullptr) ? c10::nullopt : c10::optional<at::IntArrayRef>(at::ArrayRef<int64_t>(dim, dim_length));
-CATCH_TENSOR(torch::linalg::tensorsolve(*tensor, *other, dims))
+CATCH_TENSOR(torch::linalg_tensorsolve(*tensor, *other, dims))
}

Tensor THSLinalg_vander(const Tensor tensor, const int64_t N)
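Reviewer note, not part of the diff: every change in this file is the same mechanical rename, from the torch::linalg:: namespace wrappers to the equivalent torch::linalg_* free functions. Below is a minimal C++ sketch of the new call pattern, assuming only a linked standalone libtorch (2.7 or later); the tensor values are illustrative and nothing here is code from this PR.

// Minimal sketch, not part of the PR: the function-style linalg API that
// this file now calls, shown for one plain and one tuple-returning case.
#include <torch/torch.h>
#include <iostream>
#include <tuple>

int main() {
    torch::Tensor a = torch::randn({3, 3});
    torch::Tensor spd = torch::mm(a, a.t()) + torch::eye(3); // symmetric positive definite
    torch::Tensor l = torch::linalg_cholesky(spd);           // was torch::linalg::cholesky
    auto res = torch::linalg_slogdet(spd);                   // tuple form, as in THSLinalg_slogdet
    std::cout << l << "\nlogabsdet: " << std::get<1>(res).item<double>() << std::endl;
    return 0;
}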
@@ -0,0 +1 @@
+63D572598C8D532128A335018913E795C1BBB32602CE378896DC8CFBB5590976
1 change: 1 addition & 0 deletions src/Redist/libtorch-cpu/libtorch-macos-arm64-2.7.1.zip.sha
@@ -0,0 +1 @@
+AA89AC85B91C83D0F976F8D135330D51E38AB777B26EC24F312FD58D079314CB
@@ -0,0 +1 @@
+A294845080D67FF579073B7B6E17E7DA1CC856DDE6E63FCA2D71498B482580F8
@@ -0,0 +1 @@
+E8D024BBD35FC007A033ABAA616D3104AE0D6262F66A6245A115E32F4A9FC44A