Added @torch.inference_mode() to HACA3 harmonize and combine_images#10

Open
inikishev wants to merge 1 commit intolianruizuo:mainfrom
inikishev:main

Conversation

@inikishev

Currently, inference runs with autograd enabled, which requires a lot of extra VRAM. For example, I was getting "CUDA out of memory" errors on my 4 GB GPU during the fusion stage. With the inference-mode decorator I am able to run it on my laptop without any issues.
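For context, a minimal sketch of what this change does. The class and method names below are illustrative stand-ins for the HACA3 code, not the actual implementation; the key point is that decorating a forward-only method with `@torch.inference_mode()` stops autograd from recording the computation graph, so intermediate activations are freed immediately instead of being held for a backward pass:

```python
import torch

class TinyModel(torch.nn.Module):
    """Illustrative stand-in for the HACA3 model."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    @torch.inference_mode()
    def harmonize(self, x):
        # Inside inference_mode, no autograd graph is built, so
        # activations are not retained and VRAM usage stays low.
        return self.linear(x)

model = TinyModel()
out = model.harmonize(torch.randn(4, 8))
print(out.requires_grad)  # False: output carries no grad history
```

`torch.inference_mode()` is stricter (and slightly faster) than `torch.no_grad()`: tensors created inside it can never be used in autograd later, which is fine for pure inference paths like `harmonize` and `combine_images`.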
