From @manonreau on graphprot:
When you store loss values, do not forget to detach them from the computation graph (which is kept alive for backpropagation), or you can run into memory issues! :)
ex:
tot_loss += loss                  # DON'T DO THAT: the loss tensor keeps the whole graph alive
tot_loss += loss.detach().item()  # DO THAT :D (a plain Python float is stored instead)
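For context, here is a minimal self-contained sketch of the pattern in a training loop (the model, data, and hyperparameters are placeholders for illustration, not GraphProt code):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

tot_loss = 0.0
for _ in range(100):
    x = torch.randn(32, 10)           # dummy batch
    y = torch.randn(32, 1)            # dummy targets
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    # tot_loss += loss                # DON'T: every iteration's graph stays referenced
    tot_loss += loss.detach().item()  # DO: only a plain Python float is accumulated

print(tot_loss / 100)                 # average training loss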
DeepRank is using:
running_loss += loss.data.item()
in NeuralNet.py. Since .data.item() already returns a plain Python float, this does not keep the graph alive, but .data is a legacy attribute and .detach() is the recommended way to take a tensor out of the graph. Maybe we want to correct it.
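A possible one-line fix there (keeping the variable names from that snippet):

running_loss += loss.detach().item()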