When I train DiscoNet using early_fusion, the teacher weights don't match
```
(cobevflow) aitest7@833f376856e4:~/wynne/CoBEVFlow$ python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/dair-v2x/npj/dair_disconet.yaml
Dataset Building ...
ASync dataset with 5 time delay initialized! 4650 samples totally!
ASync dataset with 5 time delay initialized! 1717 samples totally!
/public/home/aitest7/anaconda3/envs/cobevflow/lib/python3.7/site-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 16 worker processes in total. Our suggested max number of worker in current system is 8, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  cpuset_checked))
=== Time consumed: 0.0 minutes. ===
Creating Model ...
device: cuda
full path is: /public/home/aitest7/wynne/CoBEVFlow/logs/dair_npj_disconet_w_2023_11_07_17_20_37
=== Time consumed: 0.1 minutes. ===
Training start!
Traceback (most recent call last):
  File "opencood/tools/train.py", line 327, in <module>
    main()
  File "opencood/tools/train.py", line 185, in main
    teacher_model.load_state_dict(torch.load(teacher_checkpoint_path), strict=False)
  File "/public/home/aitest7/anaconda3/envs/cobevflow/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1483, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for PointPillarDiscoNetTeacher:
	size mismatch for cls_head.weight: copying a param with shape torch.Size([2, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 384, 1, 1]).
	size mismatch for reg_head.weight: copying a param with shape torch.Size([14, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([14, 384, 1, 1]).
```
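For context on why `strict=False` does not help here: in PyTorch, `strict=False` only tolerates *missing* or *unexpected* keys; a key that exists in both the checkpoint and the model but with a different tensor shape still raises a size-mismatch `RuntimeError`. The usual fix is to make the model's head input channels match the checkpoint (256 vs. 384, likely a feature-dimension setting in the YAML, though that is an assumption). A common diagnostic/workaround is to drop shape-mismatched entries before loading. The sketch below is torch-free for illustration, using plain shape tuples in place of tensors; the parameter names mirror the error above:

```python
# Sketch: strict=False skips missing/unexpected keys, but NOT shape mismatches.
# One workaround is to filter out entries whose shape differs from the model's
# before calling load_state_dict. Shapes here are plain tuples standing in for
# tensor shapes; with real tensors, compare `ckpt[key].shape == model[key].shape`.

checkpoint = {
    "cls_head.weight": (2, 256, 1, 1),      # checkpoint trained with 256-channel features
    "reg_head.weight": (14, 256, 1, 1),
    "backbone.conv.weight": (64, 64, 3, 3),  # hypothetical matching entry
}
model_state = {
    "cls_head.weight": (2, 384, 1, 1),      # current model expects 384 channels
    "reg_head.weight": (14, 384, 1, 1),
    "backbone.conv.weight": (64, 64, 3, 3),
}

def filter_matching(ckpt, model):
    """Keep only checkpoint entries whose key exists in the model with the same shape."""
    kept, dropped = {}, []
    for key, shape in ckpt.items():
        if key in model and model[key] == shape:
            kept[key] = shape
        else:
            dropped.append(key)
    return kept, dropped

kept, dropped = filter_matching(checkpoint, model_state)
print("dropped (mismatched):", dropped)
```

Note that dropping the heads leaves them randomly initialized, so the cleaner fix is still to align the model config with the checkpoint's feature dimension.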