Target tensor size differs from input tensor size
Hi there! I am currently trying to test on the Pix3D data with the pretrained models provided. I have altered the config and the data is being loaded correctly; I am also using the Pix2Vox++-F branch. However, I get the following error on Colab after running:
!python3 runner.py --test --weights='/content/drive/MyDrive/Pix2Vox/pretrained/Pix2Vox++-F-ShapeNet.pth'
Traceback (most recent call last):
  File "runner.py", line 86, in <module>
    main()
  File "runner.py", line 75, in main
    test_net(cfg)
  File "/content/drive/Pix2Vox/core/test.py", line 113, in test_net
    encoder_loss = bce_loss(generated_volume, ground_truth_volume) * 10
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py", line 612, in forward
    return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 3058, in binary_cross_entropy
    "Please ensure they have the same size.".format(target.size(), input.size())
ValueError: Using a target size (torch.Size([1, 256, 256, 256])) that is different to the input size (torch.Size([1, 32, 32, 32])) is deprecated. Please ensure they have the same size.
Any pointers on solving this issue? Thanks
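From the traceback, the network outputs a 32x32x32 volume while the Pix3D ground-truth voxel grid is 256x256x256. One workaround I am considering is downsampling the ground truth to the model's resolution before computing the loss. A minimal sketch (variable names are mine, and max-pooling as the downsampling choice is my assumption, not necessarily what the repo intends):

```python
import torch
import torch.nn.functional as F

# Shapes taken from the traceback: 256^3 ground truth vs. 32^3 prediction.
ground_truth_volume = (torch.rand(1, 256, 256, 256) > 0.5).float()

# max_pool3d expects a channel dimension, so add one, pool 256 -> 32
# (a factor of 8), then drop the channel dimension again. Max-pooling
# keeps a voxel occupied if any of the 8x8x8 source voxels is occupied.
ground_truth_32 = F.max_pool3d(
    ground_truth_volume.unsqueeze(1), kernel_size=8
).squeeze(1)

print(ground_truth_32.shape)  # torch.Size([1, 32, 32, 32])
```

With both tensors at [1, 32, 32, 32], `binary_cross_entropy` no longer complains about mismatched sizes, though I am not sure this is how the authors intended Pix3D to be evaluated.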