Dispose implicitly converted TorchSharp.Scalar and torch.Tensor #1496
Draft
hiyuh wants to merge 35 commits into dotnet:main from hacarus:dispose-implicitly-converted-scalar-tensor
Conversation
Otherwise, dotnet build will fail.
In the following case, at least 266 exceptions are observed:
* allowImplicitConversionOperator = false
* dotnet test /p:SkipCuda=true /p:SkipNetFxBuild=true --blame test\TorchSharpTest\TorchSharpTest.csproj -c Release
* Update src/TorchSharp/Scalar.cs.
  + Introduce ScalarLeakDetector.
  + Update Scalar.
    - Use ScalarLeakDetector.ThrowIfImplicitConversionNotAllowed.
In the following case, at least 45 exceptions are observed:
* allowImplicitConversionOperator = false
* dotnet test /p:SkipCuda=true /p:SkipNetFxBuild=true --blame test\TorchSharpTest\TorchSharpTest.csproj -c Release
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Introduce TensorLeakDetector.
  + Update Tensor.
    - Use TensorLeakDetector.ThrowIfImplicitConversionNotAllowed.
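The leak-detector idea in the two commits above can be sketched in plain C#. The names ScalarLeakDetector, ThrowIfImplicitConversionNotAllowed, and allowImplicitConversionOperator follow the commit messages; the class bodies below are a hypothetical, self-contained sketch, not the PR's actual implementation:

```csharp
using System;

// Hypothetical sketch of the guard described above: when implicit
// conversions are disallowed, converting a double to a Scalar-like
// wrapper throws instead of silently allocating an undisposed handle.
public static class ScalarLeakDetector
{
    // Mirrors the allowImplicitConversionOperator switch from the test runs.
    public static bool AllowImplicitConversionOperator = true;

    public static void ThrowIfImplicitConversionNotAllowed()
    {
        if (!AllowImplicitConversionOperator)
            throw new InvalidOperationException(
                "Implicit conversion to Scalar is not allowed; call ToScalar() and dispose it explicitly.");
    }
}

public sealed class Scalar : IDisposable
{
    public double Value { get; }
    public Scalar(double value) => Value = value;

    // The implicit operator is where leaks originate: callers never see
    // the temporary, so they never dispose it. With the guard enabled,
    // each such conversion surfaces as an exception in the test run.
    public static implicit operator Scalar(double value)
    {
        ScalarLeakDetector.ThrowIfImplicitConversionNotAllowed();
        return new Scalar(value);
    }

    public void Dispose() { /* release the native handle here */ }
}
```

Flipping the flag to false is how the exception counts above (266 for Scalar, 45 for Tensor) would be collected during a test run.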
* Update src/TorchSharp/Tensor/Tensor.Operators.cs.
  + Declare TorchSharp.Scalar more explicitly.
    - Use prefix for left or right.
    - Call ToScalar explicitly.
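The operator rewrite described above replaces the implicit double-to-Scalar conversion with an explicit ToScalar call and a `using` declaration, so the temporary is disposed deterministically. The following is a hedged, stand-alone sketch of that pattern (the Scalar and Tensor types here are minimal stand-ins, not TorchSharp's real classes):

```csharp
using System;

// Stand-in types illustrating the operator pattern; the real TorchSharp
// types wrap native handles instead of plain doubles.
public sealed class Scalar : IDisposable
{
    public double Value { get; }
    private Scalar(double value) => Value = value;
    public static Scalar ToScalar(double value) => new Scalar(value);
    public void Dispose() { /* release native handle */ }
}

public sealed class Tensor
{
    public double Item;
    public Tensor(double item) => Item = item;

    public Tensor add(Scalar s) => new Tensor(Item + s.Value);

    // After the rewrite: the right operand is converted explicitly,
    // named with a side prefix ("rightScalar"), and disposed when the
    // operator returns, instead of leaking an implicit temporary.
    public static Tensor operator +(Tensor left, double right)
    {
        using Scalar rightScalar = Scalar.ToScalar(right);
        return left.add(rightScalar);
    }
}
```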
* Update src/TorchSharp/Tensor/Tensor.Math.cs.
  + Declare TorchSharp.Scalar explicitly.
  + Add FIXMEs.
* Update src/TorchSharp/Optimizers/Adadelta.cs.
  + Update Adadelta.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0 explicitly.
* Update src/TorchSharp/Optimizers/Adagrad.cs.
  + Update Adagrad.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Add FIXME for possible unused initial_accumulator_value.
    - Cache weight_decay != 0.
* Update src/TorchSharp/Optimizers/Adam.cs.
  + Update Adam.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for possibly undisposed denom.
* Update src/TorchSharp/Optimizers/Adamax.cs.
  + Update Adamax.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for CA1806.
* Update src/TorchSharp/Optimizers/ASGD.cs.
  + Update ASGD.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
* Update src/TorchSharp/Optimizers/NAdam.cs.
  + Update NAdam.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for possibly undisposed denom.
* Update src/TorchSharp/Optimizers/RAdam.cs.
  + Update RAdam.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for possible torch.Tensor.sub_ use.
    - Add FIXME for possibly undisposed torch.Tensor instances:
      bias_corrected_exp_avg, t6, and adaptive_lr with its intermediates and derived tensors.
    - Add FIXME for possibly missing dispose on param.add_ if rho_t > 5.
* Update src/TorchSharp/Optimizers/RMSprop.cs.
  + Update RMSProp.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused momentum_scalar.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache momentum > 0.
    - Cache weight_decay != 0.
    - Add FIXME for possibly undisposed avg.
* Update src/TorchSharp/Optimizers/SGD.cs.
  + Update SGD.step.
    - Declare TorchSharp.Scalar explicitly.
    - Cache momentum != 0.
    - Cache dampening != 1.
    - Cache weight_decay != 0.
    - Omit unused TorchSharp.Scalar construction.
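The optimizer commits above all share one pattern: construct each TorchSharp.Scalar once per step, cache cheap comparisons such as `weight_decay != 0` outside the parameter loop, and skip constructing scalars that would go unused. A hypothetical sketch of that shape, using plain-double stand-ins rather than real TorchSharp types:

```csharp
using System;
using System.Collections.Generic;

// Stand-in Scalar; the real one wraps a native handle.
public sealed class Scalar : IDisposable
{
    public double Value;
    public Scalar(double v) => Value = v;
    public void Dispose() { }
}

public static class SgdStepSketch
{
    // Illustrative step(): one Scalar per hyperparameter per call,
    // a cached flag instead of re-testing weight_decay in the loop,
    // and no Scalar at all when weight decay is disabled.
    public static void Step(IList<double[]> parameters, double lr, double weightDecay)
    {
        bool hasWeightDecay = weightDecay != 0;     // cached once
        using Scalar lrScalar = new Scalar(lr);     // constructed once per step
        using Scalar wdScalar = hasWeightDecay
            ? new Scalar(weightDecay)
            : null;                                 // omit unused construction

        foreach (var param in parameters)
        {
            for (int i = 0; i < param.Length; i++)
            {
                double grad = param[i];             // stand-in for the real gradient
                if (hasWeightDecay)
                    grad += wdScalar.Value * param[i];
                param[i] -= lrScalar.Value * grad;
            }
        }
    }
}
```

Before this pattern, each loop iteration could implicitly convert the same double hyperparameter into a fresh, never-disposed Scalar.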
Force-pushed from 4a7cbd2 to 64520bc.
* Update src/TorchAudio/Functional.cs.
  + Update griffinlim.
    - Declare TorchSharp.Scalar explicitly.
    - Introduce eps_scalar.
* Update src/TorchSharp/Optimizers/AdamW.cs.
  + Update AdamW.step.
    - Declare TorchSharp.Scalar explicitly.
    - Dispose denom explicitly.
* Update src/TorchSharp/NN/Normalization/BatchNorm.cs.
  + Update BatchNorm.forward.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for cache over training.
* Update src/TorchSharp/Tensor/Factories/Tensor.Factories.cs.
  + Update torch.normal.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchVision/Utils.cs.
  + Update torchvision.utils.save_image.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible torch.Tensor.round_ use.
    - Add FIXME for no torch.min_int_value.
* Update src/TorchSharp/Optimizers/Rprop.cs.
  + Update Rprop.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for unused lr.
    - Add FIXME for possible torch.Tensor.sign_ use.
    - Cache eta{minus,plus} and 1 as torch.Tensor.
* Update src/TorchVision/Ops/StochasticDepth.cs.
  + Update torchvision.ops.stochastic_depth.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchVision/Utils.cs.
  + Update torchvision.utils.make_grid.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchSharp/Distributions/ExpRelaxedCategorical.cs.
  + Update TorchSharp.Modules.ExpRelaxedCategorical.log_prob.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible inplace ops use.
* Update src/TorchVision/Functional.cs.
  + Update torchvision.transforms.functional.convert_image_dtype.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchAudio/Functional.cs.
  + Update torchaudio.functional.griffinlim.
    - Declare TorchSharp.Scalar explicitly.
    - Cache momentum > 0.0.
    - Add FIXME for possible inplace ops use.
    - Simplify angles initialization; use torch.ones instead.
* Update src/TorchAudio/Functional.cs.
  + Update torchaudio.functional._get_sinc_resample_kernel.
    - Declare TorchSharp.Scalar explicitly.
* Update src/Native/LibTorchSharp/THSTensor.h.
  + Declare THSTensor_square{,_}.
* Update src/Native/LibTorchSharp/THSTensorMath.cpp.
  + Implement THSTensor_square{,_}.
* Update src/TorchSharp/PInvoke/LibTorchSharp.THSTensor.cs.
  + Declare THSTensor_square{,_}.
* Update src/TorchSharp/Tensor/Tensor.Math.cs.
  + Update torch.Tensor.square.
    - Use THSTensor_square{,_}.
    - Improves compatibility with libtorch.
    - Removes the extra cost of a TorchSharp.Scalar.
* Update src/TorchSharp/Tensor/torch.PointwiseOps.cs.
  + Introduce torch.square_.
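The "extra cost" removed by a dedicated square entry point can be illustrated with stand-in types: `pow(x, 2)` routes the exponent through a Scalar wrapper (one allocation plus one Dispose per call), whereas `square()` takes no Scalar argument at all. This is a hypothetical illustration with counter instrumentation, not TorchSharp's real implementation:

```csharp
using System;

// Stand-in Scalar that counts constructions so the saved cost is visible.
public sealed class Scalar : IDisposable
{
    public static int Constructed;
    public double Value;
    public Scalar(double v) { Value = v; Constructed++; }
    public void Dispose() { }
}

public sealed class Tensor
{
    public double Item;
    public Tensor(double item) => Item = item;

    // pow must wrap its exponent in a Scalar before crossing into native code.
    public Tensor pow(double exponent)
    {
        using var e = new Scalar(exponent);
        return new Tensor(Math.Pow(Item, e.Value));
    }

    // square maps to a native entry point (THSTensor_square in the PR)
    // that takes only the tensor handle: no Scalar is constructed.
    public Tensor square() => new Tensor(Item * Item);
}
```

Besides skipping the wrapper, dispatching to libtorch's own square kernel keeps behavior aligned with PyTorch rather than emulating it via pow.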
* Update src/TorchAudio/Functional.cs.
  + Update torchaudio.functional.spectrogram.
    - Use torch.Tensor.square.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible torch.Tensor.square use.
* Update src/TorchAudio/Functional.cs.
  + Update torchaudio.functional.inverse_spectrogram.
    - Use torch.Tensor.square.
* Update src/TorchAudio/Transforms/InverseMelScale.cs.
  + Update InverseMelScale.forward.
    - Declare TorchSharp.Scalar explicitly.
    - Use torch.Tensor.square.
These changes are limited to unit tests where, to my understanding, the use of torch.Tensor.pow had no special meaning.
Related: Scalars implicitly created in Tensor operators #1434. TorchSharp.{Scalar,Tensor}LeakDetector can throw exceptions on implicit conversion. Work toward explicit {TorchSharp.Scalar,torch.Tensor}.Dispose calls throughout TorchSharp is ongoing.