Hi, when using images of objects that are not captured from a canonical pose (e.g. the demo images are always taken from a similar elevation relative to the objects, but this is not guaranteed in general), the reconstruction quality is really bad and the objects become essentially flat.
I see that the transformer network uses default camera extrinsics as a conditioning variable, and I was wondering whether you randomised the camera position during training, so that I could use this variable to improve the mesh quality. Or was it kept constant during training?
Yes, the model is trained on random images from random camera poses. You just need to modify the scripts a bit. Check this section to see how we construct the camera conditioning; alternatively, for the gradio demo, see here.
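For readers landing here: the conditioning in question is a camera extrinsics (pose) matrix, so adapting the scripts mostly means building that matrix for your actual viewpoint instead of the default one. The sketch below is illustrative only and is not the repo's actual code: the function name `camera_extrinsics`, the z-up world axis, and the OpenGL-style look-down-`-z` convention are all assumptions you should check against the linked section before use.

```python
import numpy as np

def camera_extrinsics(elevation_deg, azimuth_deg, distance):
    """Build a 4x4 camera-to-world matrix for a camera orbiting the
    origin and looking at the object center.

    NOTE: conventions (z-up world, camera looking down -z) are
    assumptions; match them to the repo's own camera construction.
    """
    elev = np.deg2rad(elevation_deg)
    azim = np.deg2rad(azimuth_deg)

    # Camera position on a sphere around the object.
    pos = distance * np.array([
        np.cos(elev) * np.cos(azim),
        np.cos(elev) * np.sin(azim),
        np.sin(elev),
    ])

    # Look-at basis: forward points from the camera to the origin.
    forward = -pos / np.linalg.norm(pos)
    world_up = np.array([0.0, 0.0, 1.0])
    right = np.cross(forward, world_up)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)

    c2w = np.eye(4)
    c2w[:3, 0] = right
    c2w[:3, 1] = up
    c2w[:3, 2] = -forward  # OpenGL-style: camera looks down -z
    c2w[:3, 3] = pos
    return c2w
```

If your input photo was taken from, say, 10° elevation rather than the demo's default, passing the matching matrix (e.g. `camera_extrinsics(10.0, 0.0, 2.0)`, flattened the way the model expects) as the conditioning is the kind of change the maintainer is describing.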