Hi @9p15p,
For multi-object inference, we first estimate a probability map for each object independently, and then combine them with soft aggregation.
The first softmax (L71) produces the per-object estimate, and the second softmax (L116) implements the soft aggregation operation.
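For concreteness, here is a minimal sketch of the two-step scheme (the helper name `soft_aggregate` and the tensor shapes are my own, not the repository's exact code): each object's probability map comes from the first, per-object softmax; the maps are then converted to logits and merged with a second softmax across objects, so the result is a valid distribution over {background, object 1, ..., object N} at every pixel.

```python
import torch

def soft_aggregate(ps, eps=1e-7):
    """Merge independently estimated object probability maps.

    ps: (num_objects, H, W), where ps[o] is the probability that each
        pixel belongs to object o (output of the first softmax).
    Returns (num_objects + 1, H, W): background + objects, summing
    to 1 over dim 0.
    """
    ps = ps.clamp(eps, 1 - eps)
    bg = torch.prod(1 - ps, dim=0, keepdim=True)  # background: no object claims the pixel
    em = torch.cat([bg, ps], dim=0).clamp(eps, 1 - eps)
    logits = torch.log(em / (1 - em))             # per-map logit transform
    return torch.softmax(logits, dim=0)           # the second softmax: soft aggregation
```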
I'm very interested in your soft aggregation approach for the multi-object task, and I understand the reasoning after reading your paper. I'm wondering how this approach improves performance compared with a winner-take-all approach?
It is because maintaining uncertainty is important during propagation. After soft aggregation, the result is a probability map, whereas with a winner-take-all approach it is a binary map. I think the soft probability map gives the network more information to resolve ambiguities during propagation.
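To illustrate the contrast, here is what a winner-take-all merge would look like (a sketch with assumed shapes, not code from the repository): the argmax freezes every pixel to a hard 0/1 decision, so a near-tie such as 0.55 vs. 0.45 between two objects is propagated as if it were certain.

```python
import torch
import torch.nn.functional as F

# `aggregated`: a (num_objects + 1, H, W) per-pixel distribution, e.g. the
# output of the soft aggregation sketched above.
aggregated = torch.softmax(torch.randn(3, 4, 4), dim=0)

# Winner-take-all: commit every pixel to a hard label.
labels = aggregated.argmax(dim=0)                               # (H, W), int64
binary = F.one_hot(labels, aggregated.shape[0]).permute(2, 0, 1).float()

# Soft aggregation would instead carry `aggregated` itself into the next
# frame; `binary` has thrown away how close each decision was.
```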
At line 71:

```python
msv_E2[sc] = F.softmax(e2[0], dim=1)[:,1].data.cpu()
```

At line 116:

```python
all_E[:,:,f+1] = F.softmax(Variable(all_E[:,:,f+1]), dim=1).data
```

Why should we use `softmax` twice?