Outputs for classification model are logits or y_pred? #310
-
| I am a little confused about the outputs of a forward pass. In Section 4 (Neural Network Classification) it was mentioned that for classification, the forward pass will output logits. These logits are then passed through activation functions to turn them into `y_pred`. However, in the computer vision section, the code is as follows:

```python
# Training loop
for epoch in tqdm(range(epochs)):
    print(f"Epoch: {epoch}\n-------")
    train_loss = 0
    # Loop through training batches
    for batch, (X, y) in enumerate(train_dataloader):
        model_0.train()
        # 1. Forward pass
        y_pred = model_0(X)

        # 2. Calculate loss (per batch)
        loss = loss_fn(y_pred, y)
        train_loss += loss  # accumulatively add up the loss per epoch
```

Can someone explain why the output of this forward pass is named `y_pred` and not logits? See the full code here: https://www.learnpytorch.io/03_pytorch_computer_vision/#33-creating-a-training-loop-and-training-a-model-on-batches-of-data |
Replies: 2 comments 2 replies
-
| Can someone please help me further understand this? | 
-
Hi @pusapatiakhilraju,
This looks like it may be a naming issue.
E.g.
`y_pred` in the code you're sharing (see here in the book: https://www.learnpytorch.io/03_pytorch_computer_vision/#33-creating-a-training-loop-and-training-a-model-on-batches-of-data) could also be called `y_logits`.

Because the loss function is `loss_fn = nn.CrossEntropyLoss()`, it can take raw logits (the raw output of the model, which is called `y_pred` in this case) directly.

In the code above, if you named `y_pred` as `y_logits`, you would get the same results.
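
To make this concrete, here is a minimal, self-contained sketch (not from the book; the tensor values and seed are made up for illustration) showing both points: `nn.CrossEntropyLoss` applies log-softmax internally, so it expects raw logits, and the predicted class labels are identical whether you argmax the logits or the softmax probabilities.

```python
import torch
import torch.nn as nn

torch.manual_seed(42)

# Pretend these are the raw outputs of a model's forward pass:
# 4 samples, 3 classes. You could name this y_pred or y_logits.
logits = torch.randn(4, 3)
y = torch.tensor([0, 2, 1, 0])  # ground-truth class indices

# nn.CrossEntropyLoss takes raw logits directly...
loss_from_logits = nn.CrossEntropyLoss()(logits, y)

# ...because it is equivalent to log-softmax + negative log-likelihood.
manual_loss = nn.NLLLoss()(torch.log_softmax(logits, dim=1), y)
assert torch.isclose(loss_from_logits, manual_loss)

# Softmax is monotonic per row, so argmax of logits and argmax of
# probabilities give the same predicted labels.
pred_from_logits = logits.argmax(dim=1)
pred_from_probs = torch.softmax(logits, dim=1).argmax(dim=1)
assert torch.equal(pred_from_logits, pred_from_probs)
```

So the name is purely a labeling choice: the tensor is logits either way, and the loss function is the piece that handles the conversion to probabilities internally.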