I greatly appreciate your work, both for its simplicity of use and for your commitment. I may be wrong, but the library is very slow compared to other packages that do the same job.
I checked that all tensor operations are performed on the GPU (a GTX 1070). TQDM estimates one iteration every two seconds during training, which works out to roughly 2 hours per epoch; with other libraries the same model takes about 15 minutes per epoch. I can confirm that both the mask and the CRF layer run on the GPU.
I also tried forcing placement explicitly with `.to(device)`, but nothing changed:

```python
self.crflayer = CRF(hparams.num_classes, pad_idx=0).to(device)
self.model.crflayer.forward(outputs, goldLabels, mask).to(device)
```
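To narrow down where the 2 seconds per iteration go, it may help to time the CRF forward pass separately from the rest of the training step. Below is a minimal, library-agnostic timing sketch; the commented usage lines assume the names from the snippet above (`model`, `outputs`, `goldLabels`, `mask`) and are hypothetical. Note that CUDA kernel launches are asynchronous, so for GPU work the timed callable should end with `torch.cuda.synchronize()` to measure real execution time.

```python
import time

def mean_seconds(fn, repeats=20):
    """Return the mean wall-clock seconds per call of fn.

    When timing CUDA work, fn should call torch.cuda.synchronize()
    at the end; otherwise only the (cheap) kernel launch is measured.
    """
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

# Hypothetical usage against the model above (names are assumptions):
#   crf_time  = mean_seconds(
#       lambda: model.crflayer.forward(outputs, goldLabels, mask))
#   step_time = mean_seconds(lambda: training_step(batch))
# If crf_time is close to step_time, the CRF layer is the bottleneck;
# if not, the slowdown is elsewhere (e.g. data loading or the encoder).
```

Comparing the two measurements should show whether the CRF layer itself accounts for the per-iteration cost or whether the time is spent elsewhere in the loop.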