Hi, I note that all your models are trained with a two-stage training strategy, where the first stage trains on a larger additional dataset. However, in Table 1 of your paper, the models you compare against are trained only on DAVIS-2016. This is not a fair comparison for the other models, so I wonder: what are the J mean and F mean of your models when trained only on DAVIS-2016?
Please refer to Table 5 in our paper for the results without pre-training.
The DAVIS benchmark does not have a strict rule on the use of additional training data.
Many previous methods (e.g., MaskTrack, Perazzi et al.) also adopt pre-training.
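For anyone else reading this thread: on DAVIS, J is the region similarity (the Jaccard index, i.e., mask IoU) and F is the contour accuracy (a boundary F-measure), each averaged over the annotated frames of a sequence. Below is a minimal sketch of both metrics, assuming binary NumPy masks. Note that `boundary_f` here uses a fixed pixel tolerance as a simplification (the official evaluation scales the tolerance with the image diagonal), and all function names are illustrative, not from the official benchmark code.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def jaccard(pred, gt):
    """Region similarity J: intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return np.logical_and(pred, gt).sum() / union

def boundary(mask):
    """1-pixel-wide boundary: the mask minus its erosion."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def boundary_f(pred, gt, tol=2):
    """Simplified contour accuracy F: precision/recall of boundary pixels
    matched within `tol` pixels of the other mask's boundary."""
    pb, gb = boundary(pred), boundary(gt)
    precision = (pb & binary_dilation(gb, iterations=tol)).sum() / max(pb.sum(), 1)
    recall = (gb & binary_dilation(pb, iterations=tol)).sum() / max(gb.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# J mean / F mean for one sequence: average the per-frame scores.
preds, gts = [], []
for _ in range(3):
    p = np.zeros((64, 64), bool); p[10:40, 10:40] = True
    g = np.zeros((64, 64), bool); g[12:42, 12:42] = True
    preds.append(p); gts.append(g)
print("J mean:", np.mean([jaccard(p, g) for p, g in zip(preds, gts)]))
print("F mean:", np.mean([boundary_f(p, g) for p, g in zip(preds, gts)]))
```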