I have a question about GPU memory usage during model training. I'm using a V100 32 GB GPU, but I'm hitting "CUDA out of memory" errors when training the first stage with the default settings. This happens even with gradient_accumulation_steps set to 1. How much VRAM is actually needed for training? I'm not sure whether something is wrong with my setup, since your paper mentions that you also trained on V100 GPUs.
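For context, here is the back-of-the-envelope lower bound I used: with a standard fp32 Adam setup, the optimizer keeps the weights, the gradients, and two moment buffers resident, i.e. roughly four fp32 copies of the parameters, before any activation memory. The parameter count below is hypothetical, since I don't know the exact model size; this is only a sketch, not the repo's actual footprint:

```python
def training_vram_gib(num_params: int, bytes_per_param: int = 4, copies: int = 4) -> float:
    """Lower-bound VRAM estimate in GiB for fp32 Adam training.

    copies = weights + gradients + Adam first/second moments.
    Excludes activations, CUDA context, and framework overhead.
    """
    return num_params * bytes_per_param * copies / 2**30

# Hypothetical 1B-parameter model: ~14.9 GiB before activations,
# so activations alone could push a 32 GB card over the edge.
print(round(training_vram_gib(1_000_000_000), 1))
```

If the activation memory at the default batch size exceeds the remaining headroom, gradient_accumulation_steps=1 alone would not help, since it does not shrink the per-step batch.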