In lib/losses3D/BaseClass.py, I encountered an error on line 56:
assert input.size() == target.size(), "'input' and 'target' must have the same shape"

Upon inspection, I found that the shapes of input and target are different:
input.shape: torch.Size([4, 4, 128, 128, 48])
target.shape: torch.Size([4, 1, 128, 128, 48])
The mismatch occurs in dimension 1 (the channels dimension).
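The mismatch can be reproduced in isolation with dummy tensors of the reported shapes (a minimal sketch; the tensor contents are placeholders, only the sizes matter):

```python
import torch

# Dummy tensors with the shapes from the report
prediction = torch.randn(4, 4, 128, 128, 48)  # network output: 4 channels
target = torch.zeros(4, 1, 128, 128, 48)      # label volume: 1 channel

# The assert on line 56 compares the full sizes, so the channel
# mismatch in dimension 1 (4 vs 1) is what triggers the AssertionError
same_shape = prediction.size() == target.size()
print(same_shape)  # False: dimension 1 differs
```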
After debugging, I traced the source of input and target to the prepare_input function in lib/utils/general.py. At this stage:

- The function parameters are modalities == 3 and channels == 3.
- At this point, target.shape is already torch.Size([4, 1, 128, 128, 48]).
- However, after passing through the convolutional layers, input_tensor goes from torch.Size([4, 3, 128, 128, 48]) to torch.Size([4, 4, 128, 128, 48]), which produces the shape mismatch.
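Assuming the 4 output channels are per-class scores and the single target channel holds integer class labels, one common resolution is to expand the target to a one-hot encoding before computing the loss. The helper below is a sketch of that idea, not the repository's own function:

```python
import torch

def expand_target_as_one_hot(target: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Convert a (N, 1, D, H, W) label volume into a (N, C, D, H, W) one-hot volume."""
    target = target.long().squeeze(1)                           # (N, D, H, W)
    one_hot = torch.nn.functional.one_hot(target, num_classes)  # (N, D, H, W, C)
    return one_hot.permute(0, 4, 1, 2, 3).float()               # (N, C, D, H, W)

# With the reported shapes, the expanded target now matches the network output
target = torch.zeros(4, 1, 128, 128, 48, dtype=torch.long)
expanded = expand_target_as_one_hot(target, num_classes=4)
print(expanded.shape)  # torch.Size([4, 4, 128, 128, 48])
```

With this expansion applied before the loss, the assert in lib/losses3D/BaseClass.py would see two tensors of identical shape.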