@hiyuh hiyuh commented Sep 10, 2025

  • SSIA.
    • Includes commits from "Dispose Scalars implicitly created in Tensor operators" #1434.
    • The newly introduced TorchSharp.{Scalar,Tensor}LeakDetector can throw exceptions on implicit conversion.
      • If enabled, it shows a stack trace for a possibly missing {TorchSharp.Scalar,torch.Tensor}.Dispose.
      • I'm not sure whether these leak detectors need to be retained once all fixes in TorchSharp have been made, though.
    • The real fix for the missing {TorchSharp.Scalar,torch.Tensor}.Dispose calls in TorchSharp is ongoing.
  • CC: @ds5678
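To make the idea concrete, here is a minimal, self-contained sketch of what a throw-on-implicit-conversion leak detector looks like. The names mirror the PR's description (ScalarLeakDetector, ThrowIfImplicitConversionNotAllowed), but the types here are toy stand-ins, not TorchSharp's real API, which wraps native libtorch handles:

```csharp
using System;

// Hypothetical sketch of the leak-detector idea: when enabled, the
// implicit conversion operator throws, and the resulting stack trace
// exposes the call site that silently created a Scalar nobody Disposes.
static class ScalarLeakDetector
{
    // Opt-in flag; the default keeps implicit conversions working.
    public static bool AllowImplicitConversionOperator { get; set; } = true;

    public static void ThrowIfImplicitConversionNotAllowed()
    {
        if (!AllowImplicitConversionOperator)
            throw new InvalidOperationException(
                "Implicit conversion to Scalar is disabled; " +
                "create and Dispose the Scalar explicitly.");
    }
}

sealed class Scalar : IDisposable
{
    public double Value { get; }
    public Scalar(double value) => Value = value;
    public void Dispose() { /* would free the native handle here */ }

    // The guarded implicit conversion.
    public static implicit operator Scalar(double value)
    {
        ScalarLeakDetector.ThrowIfImplicitConversionNotAllowed();
        return new Scalar(value);
    }
}

class Demo
{
    static void Main()
    {
        Scalar ok = 1.5;              // allowed by default
        ok.Dispose();

        ScalarLeakDetector.AllowImplicitConversionOperator = false;
        try {
            Scalar leaked = 2.5;      // now throws, pointing at this line
        } catch (InvalidOperationException e) {
            Console.WriteLine("caught: " + e.Message);
        }
    }
}
```

Running the test suite with the flag disabled is what surfaced the exception counts quoted in the commit messages below.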

Masaru Kimura and others added 16 commits September 10, 2025 15:18
Otherwise, dotnet build will fail.
In the following case, at least 266 exceptions are observed.
* allowImplicitConversionOperator = false
* dotnet test /p:SkipCuda=true /p:SkipNetFxBuild=true --blame test\TorchSharpTest\TorchSharpTest.csproj -c Release

* Update src/TorchSharp/Scalar.cs.
  + Introduce ScalarLeakDetector.
  + Update Scalar.
    - Use ScalarLeakDetector.ThrowIfImplicitConversionNotAllowed.
In the following case, at least 45 exceptions are observed.
* allowImplicitConversionOperator = false
* dotnet test /p:SkipCuda=true /p:SkipNetFxBuild=true --blame test\TorchSharpTest\TorchSharpTest.csproj -c Release

* Update src/TorchSharp/Tensor/Tensor.cs
  + Introduce TensorLeakDetector.
  + Update Tensor.
    - Use TensorLeakDetector.ThrowIfImplicitConversionNotAllowed.
* Update src/TorchSharp/Tensor/Tensor.Operators.cs.
  + Declare TorchSharp.Scalar more explicitly.
    - Use prefix for left or right.
    - Call ToScalar explicitly.
* Update src/TorchSharp/Tensor/Tensor.Math.cs.
  + Declare TorchSharp.Scalar explicitly.
  + Add FIXMEs.
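The "declare TorchSharp.Scalar explicitly" refactoring applied throughout Tensor.Operators.cs and Tensor.Math.cs follows one pattern: replace an implicit double-to-Scalar conversion (whose result nobody owns) with an explicitly scoped Scalar. A toy sketch of the before/after, with stand-in types rather than TorchSharp's real ones:

```csharp
using System;

// Toy stand-ins to illustrate the refactoring pattern only; the real
// TorchSharp types wrap native libtorch handles.
sealed class Scalar : IDisposable
{
    public double Value;
    public Scalar(double v) => Value = v;
    public static implicit operator Scalar(double v) => new Scalar(v); // the leaky path
    public void Dispose() => Console.WriteLine($"disposed Scalar({Value})");
}

sealed class Tensor
{
    public double Data;
    public Tensor(double d) => Data = d;
    public Tensor add(Scalar s) => new Tensor(Data + s.Value);
}

class Demo
{
    static void Main()
    {
        var t = new Tensor(1.0);

        // Before: the double is implicitly converted to a Scalar that
        // nobody owns, so its native handle would only be reclaimed by
        // the finalizer, if at all.
        var a = t.add(2.0);

        // After (the PR's pattern): declare the Scalar explicitly and
        // scope its lifetime with `using`.
        using (Scalar two = 2.0) {
            var b = t.add(two);
            Console.WriteLine(b.Data);
        }
    }
}
```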
* Update src/TorchSharp/Optimizers/Adadelta.cs.
  + Update Adadelta.step.
    - Declare TorchSharp.Scalar explicitly.
      - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0 explicitly.
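The same two-part pattern recurs across the optimizer commits below (Adadelta, Adagrad, Adam, ..., SGD): hoist the `weight_decay != 0` test out of the per-parameter loop, and construct the guarded Scalar only once (or not at all, when unused). A hedged sketch with toy types, not TorchSharp's:

```csharp
using System;

// Stand-in for TorchSharp.Scalar; illustrative only.
sealed class Scalar : IDisposable
{
    public double Value;
    public Scalar(double v) => Value = v;
    public void Dispose() { /* would free the native handle here */ }
}

class Optimizer
{
    double weight_decay = 0.01;
    double[] parameters = { 1.0, 2.0, 3.0 };

    public void Step()
    {
        // Tested once, instead of re-testing (and re-creating a Scalar)
        // for every parameter; the Scalar is skipped entirely when the
        // condition is false, and disposed when Step ends.
        bool hasWeightDecay = weight_decay != 0;
        using Scalar? wd = hasWeightDecay ? new Scalar(weight_decay) : null;

        for (int i = 0; i < parameters.Length; i++) {
            if (hasWeightDecay)
                parameters[i] -= wd!.Value * parameters[i]; // decay step
        }
        Console.WriteLine($"decayed {parameters.Length} parameters");
    }
}

class Demo
{
    static void Main()
    {
        new Optimizer().Step();
        Console.WriteLine("step done");
    }
}
```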
* Update src/TorchSharp/Optimizers/Adagrad.cs.
  + Update Adagrad.step.
    + Declare TorchSharp.Scalar explicitly.
      - Add FIXME for possible unused weight_decay_scalar.
    + Add FIXME for possible unused initial_accumulator_value.
    + Cache weight_decay != 0.
* Update src/TorchSharp/Optimizers/Adam.cs.
  + Update Adam.step.
    - Declare TorchSharp.Scalar explicitly.
      - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for possible no denom disposing.
* Update src/TorchSharp/Optimizers/Adamax.cs.
  + Update Adamax.step.
    - Declare TorchSharp.Scalar explicitly.
      - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for CA1806.
* Update src/TorchSharp/Optimizers/ASGD.cs.
  + Update ASGD.step.
    - Declare TorchSharp.Scalar explicitly.
      - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
* Update src/TorchSharp/Optimizers/NAdam.cs.
  + Update NAdam.step.
    - Declare TorchSharp.Scalar explicitly.
      - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for possible no denom disposing.
* Update src/TorchSharp/Optimizers/RAdam.cs.
  + Update RAdam.step.
    - Declare TorchSharp.Scalar explicitly.
      - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for possible torch.Tensor.sub_ use.
    - Add FIXME for possible no dispose for torch.Tensor.
      - bias_corrected_exp_avg
      - t6
      - adaptive_lr and its intermediates and derives
    - Add FIXME for possible no dispose on param.add_ if rho_t > 5.
* Update src/TorchSharp/Optimizers/RMSprop.cs.
  + Update RMSProp.step.
    - Declare TorchSharp.Scalar explicitly.
      - Add FIXME for possible unused momentum_scalar.
      - Add FIXME for possible unused weight_decay_scalar.
    - Cache momentum > 0.
    - Cache weight_decay != 0.
    - Add FIXME for possible no avg dispose.
* Update src/TorchSharp/Optimizers/SGD.cs.
  + Update SGD.step.
    - Declare TorchSharp.Scalar explicitly.
    - Cache momentum != 0.
    - Cache dampening != 1.
    - Cache weight_decay != 0.
    - Omit unused TorchSharp.Scalar construction.
@hiyuh force-pushed the dispose-implicitly-converted-scalar-tensor branch from 4a7cbd2 to 64520bc on September 11, 2025 09:04
hiyuh commented Sep 11, 2025

  • Nuked torch.Tensor.add(torch.Tensor, Scalar) w/ implicit conversion to TorchSharp.Scalar.
  • I'll continue the same kind of fix, based on "Find All References" in Visual Studio.

* Update src/TorchAudio/Functional.cs.
  + Update griffinlim.
    - Declare TorchSharp.Scalar explicitly.
      - Introduce eps_scalar.
hiyuh commented Sep 11, 2025

  • Nuked torch.Tensor.add(Scalar) w/ implicit conversion to TorchSharp.Scalar.
    • Intentionally, I won't touch any unit tests if possible, since leaking due to incorrect test code is not critical as long as CI/CD passes.

* Update src/TorchSharp/Optimizers/AdamW.cs.
  + Update AdamW.step.
    - Declare TorchSharp.Scalar explicitly.
    - Dispose denom explicitly.
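"Dispose denom explicitly" in AdamW.step illustrates a third recurring fix: intermediate tensors produced mid-expression need owners too. A toy sketch (the Tensor here is a stand-in, and `denom`/`expAvgSq` merely echo the AdamW variable names):

```csharp
using System;

// Stand-in for torch.Tensor; illustrative only.
sealed class Tensor : IDisposable
{
    public double Data;
    public Tensor(double d) => Data = d;
    public Tensor sqrt() => new Tensor(Math.Sqrt(Data));
    public Tensor add(double v) => new Tensor(Data + v);
    public void Dispose() => Console.WriteLine("disposed");
}

class Demo
{
    static void Main()
    {
        using var expAvgSq = new Tensor(4.0);

        // Before: `var denom = expAvgSq.sqrt().add(eps);` leaks both the
        // sqrt() intermediate and denom itself.
        // After: every intermediate gets an owner via `using`.
        using var sq = expAvgSq.sqrt();
        using var denom = sq.add(1e-8);

        Console.WriteLine(denom.Data > 2.0 ? "ok" : "bad");
    }
}
```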
hiyuh commented Sep 11, 2025

  • Ditto for torch.Tensor.add_(torch.Tensor, Scalar).

Masaru Kimura added 3 commits September 12, 2025 11:38
* Update src/TorchSharp/NN/Normalization/BatchNorm.cs.
  + Update BatchNorm.forward.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for cache over training.
* Update src/TorchSharp/Tensor/Factories/Tensor.Factories.cs.
  + Update torch.normal.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchVision/Utils.cs.
  + Update torchvision.utils.save_image.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible torch.Tensor.round_ use.
    - Add FIXME for no torch.min_int_value.
hiyuh commented Sep 12, 2025

  • Ditto for torch.Tensor.add_(Scalar).
  • The CI/CD errors on Azure DevOps are irrelevant to this PR.
    • Windows_x64_NetFX Release_Build fails on "Run Tests" because the network connection was closed by the remote host.
    • MacOS_arm64 Release_Build fails on "Build" because the cmake_minimum_required version is not met.

* Update src/TorchSharp/Optimizers/Rprop.cs.
  + Update Rprop.step.
    - Declare TorchSharp.Scalar explicitly.
      - Add FIXME for unused lr.
    - Add FIXME for possible torch.Tensor.sign_ use.
    - Cache eta{minus,plus} and 1 as torch.Tensor.
hiyuh commented Sep 12, 2025

  • Ditto for torch.Tensor.addcmul_(Tensor, Tensor, Scalar).

Masaru Kimura added 2 commits September 12, 2025 15:00
* Update src/TorchVision/Ops/StochasticDepth.cs.
  + Update torchvision.ops.stochastic_depth.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchVision/Utils.cs.
  + Update torchvision.utils.make_grid.
    - Declare TorchSharp.Scalar explicitly.
hiyuh commented Sep 12, 2025

  • Ditto for torch.Tensor.div_(Scalar target, RoundingMode).

Masaru Kimura added 2 commits September 12, 2025 15:35
* Update src/TorchSharp/Distributions/ExpRelaxedCategorical.cs.
  + Update TorchSharp.Modules.ExpRelaxedCategorical.log_prob.
    - Declare TorchSharp.Scalar explicitly.
      - Add FIXME for possible inplace ops use.
* Update src/TorchVision/Functional.cs.
  + Update torchvision.transforms.functional.convert_image_dtype.
    - Declare TorchSharp.Scalar explicitly.
hiyuh commented Sep 12, 2025

  • Ditto for torch.Tensor.mul(Scalar).

Masaru Kimura added 2 commits September 12, 2025 16:15
* Update src/TorchAudio/Functional.cs.
  + Update torchaudio.functional.griffinlim.
    - Declare TorchSharp.Scalar explicitly.
    - Cache momentum > 0.0.
    - Add FIXME for possible inplace ops use.
    - Simplify angles initialization.
      - Use torch.ones instead.
* Update src/TorchAudio/Functional.cs.
  + Update torchaudio.functional._get_sinc_resample_kernel.
    - Declare TorchSharp.Scalar explicitly.
hiyuh commented Sep 12, 2025

  • Ditto for torch.Tensor.mul_(Scalar).

Masaru Kimura added 7 commits September 12, 2025 17:18
* Update src/Native/LibTorchSharp/THSTensor.h.
  + Declare THSTensor_square{,_}.
* Update src/Native/LibTorchSharp/THSTensorMath.cpp.
  + Implement THSTensor_square{,_}.
* Update src/TorchSharp/PInvoke/LibTorchSharp.THSTensor.cs.
  + Declare THSTensor_square{,_}.
* Update src/TorchSharp/Tensor/Tensor.Math.cs.
  + Update torch.Tensor.square.
    - Use THSTensor_square{,_}.
      - Improves compatibility with libtorch.
      - Removes the extra cost of a TorchSharp.Scalar.
* Update src/TorchSharp/Tensor/torch.PointwiseOps.cs.
  + Introduce torch.square_.
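The commits above route torch.Tensor.square through a dedicated native entry point instead of pow(2) with a temporary Scalar. A hedged sketch of the managed side of such a binding; the exact native signatures are assumptions inferred from the commit messages (real TorchSharp routes handles through SafeHandle wrappers and error callbacks), and the native library is deliberately never invoked here:

```csharp
using System;
using System.Runtime.InteropServices;

// Assumed shape of the new P/Invoke declarations for THSTensor_square{,_}.
static class NativeMethods
{
    // square(input) -> handle of a new tensor
    [DllImport("LibTorchSharp")]
    internal static extern IntPtr THSTensor_square(IntPtr tensor);

    // in-place square_ on the input tensor
    [DllImport("LibTorchSharp")]
    internal static extern IntPtr THSTensor_square_(IntPtr tensor);
}

class Demo
{
    // A wrapper like torch.Tensor.square would call the native entry
    // point directly, which both matches libtorch's own dispatch and
    // avoids constructing a temporary TorchSharp.Scalar for pow(2).
    static void Main()
    {
        // Not invoked: the native library is not present in this
        // illustration; the declarations above only show the binding's shape.
        Console.WriteLine("binding sketch only");
    }
}
```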
* Update src/TorchAudio/Functional.cs.
  + Update torchaudio.functional.spectrogram.
    - Use torch.Tensor.square.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible torch.Tensor.square use.
* Update src/TorchAudio/Functional.cs.
  + Update torchaudio.functional.inverse_spectrogram.
    - Use torch.Tensor.square.
* Update src/TorchAudio/Transforms/InverseMelScale.cs.
  + Update InverseMelScale.forward.
    - Declare TorchSharp.Scalar explicitly.
    - Use torch.Tensor.square.
These are exceptional changes to unit tests; based on my understanding,
their use of torch.Tensor.pow had no special meaning.
hiyuh commented Sep 12, 2025

  • Prepared for nuking torch.Tensor.pow(Scalar) by migrating to torch.Tensor.square where possible.
