
16-bit Support and Dynamic Loss Scaling #360

Open

jasonkena wants to merge 10 commits into dbolya:master from jasonkena:amp

Conversation


jasonkena commented Feb 27, 2020

This uses Apex's AMP to enable 16-bit (mixed-precision) computation, which increases performance and lowers GPU memory consumption (it saved me 1 GB at batch size 4). Unfortunately, it doesn't work with torch.jit.

The dynamic loss scaling will potentially fix #359, #340, #318, #316, #222, #186, and #56.
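For context, the dynamic loss scaling referenced above works roughly as follows: the loss is multiplied by a scale factor before backpropagation so that small fp16 gradients don't underflow; when an inf/NaN gradient is detected, the step is skipped and the scale is reduced, and after a run of successful steps the scale is grown again. Below is a minimal pure-Python sketch of that idea (illustrative only, not Apex's actual implementation; the class name and parameters are hypothetical):

```python
import math


class DynamicLossScaler:
    """Sketch of dynamic loss scaling (illustrative; not Apex's code).

    The loss is multiplied by `scale` before backprop. On overflow
    (inf/NaN in the gradients) the optimizer step is skipped and the
    scale is halved; after `growth_interval` consecutive good steps
    the scale is doubled to probe for a larger usable range.
    """

    def __init__(self, init_scale=2.0 ** 16, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def scale_loss(self, loss):
        # Multiply the loss before backward() so gradients are scaled too.
        return loss * self.scale

    def update(self, grads):
        """Inspect gradients; return True if the step should be taken."""
        overflow = any(math.isinf(g) or math.isnan(g) for g in grads)
        if overflow:
            self.scale /= 2.0      # back off after an overflow
            self._good_steps = 0
            return False           # skip this optimizer step
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            self.scale *= 2.0      # try a larger scale again
            self._good_steps = 0
        return True
```

Skipping the step instead of applying inf gradients is what prevents the "Moving average ignored a value of inf" warnings from poisoning training.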

I also patched dcn_v2.py to support it, so that YOLACT++ will work.

I'm very sorry for the awful commit history; the Black code formatter reformatted essentially all of the code.

Cheers

breznak (Contributor) commented May 16, 2020

Thanks @jasonkena! The issue description is interesting; I'd especially welcome the fp16 support. But it is quite impossible to review with all the (false-positive) code changes caused by your editor's reformatting. Can you avoid that and submit a PR with only your changes (no unnecessary whitespace)?



Development

Successfully merging this pull request may close these issues:

- Warning: Moving average ignored a value of inf

2 participants