How to get depth from disparity #3
Do you actually have stereo disparity? Then depth = focal length * baseline / disparity. There are also many data-driven methods for monocular depth estimation, for example monodepth2 and MiDaS.
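The formula above can be sketched in a few lines of NumPy. The focal length and baseline below are made-up illustrative values, not from any particular camera; substitute your own calibration. Zero-disparity pixels are masked out since they correspond to infinite or invalid depth.

```python
import numpy as np

# Hypothetical calibration values for illustration only.
focal_length_px = 721.5  # focal length in pixels
baseline_m = 0.54        # baseline between the stereo cameras in meters

# Toy disparity map in pixels (replace with a real stereo disparity map).
disparity_px = np.array([[96.0, 48.0],
                         [32.0,  0.0]])  # 0 marks an invalid pixel

# depth = focal_length * baseline / disparity,
# guarding against division by zero.
valid = disparity_px > 0
depth_m = np.zeros_like(disparity_px)
depth_m[valid] = focal_length_px * baseline_m / disparity_px[valid]

print(depth_m)
```

Note that the result is metric only if the focal length is in pixels and the baseline is in meters; the disparity units (pixels) cancel out.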
Thank you. Would you please share the source for StereoExp-v2? If not, could you explain its architecture in a bit more detail?
What I mean is: you have these numbers in the inferred "prediction": [[2496.0127 2495.973 2495.9888 ... 855.7698 855.57666 856.0468 ] What are the units of these numbers: m, mm, ft? They clearly aren't disparities, since the images aren't that wide. So what do these numbers represent, and how does one convert this prediction to actual depth given the camera intrinsics?
I must be missing something. Our paper does not provide any model that computes disparity. Where did you get these numbers? From the demo.ipynb file?
I confused your model with MiDaS. These numbers are from the MiDaS prediction.
Suppose we have the disparity and flow maps from your method. How does one get the depth in 3D space monocularly, assuming the camera's intrinsic parameters fx, fy, cx, cy and its distortion vector [r1, r2 ... p1, p2] are known? Do you provide a function for that?
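I don't know whether the repository ships such a function, but once you have a metric depth map, lifting it to 3D points is the standard pinhole back-projection. A minimal sketch, assuming lens distortion has already been removed (e.g. with `cv2.undistortPoints` or by working on rectified images), with made-up intrinsic values for illustration:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a metric depth map of shape (H, W) to 3D points of
    shape (H, W, 3) in the camera frame, pinhole model, no distortion."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)

# Hypothetical intrinsics for illustration; use your own calibration.
fx, fy = 718.856, 718.856
cx, cy = 607.19, 185.22

depth = np.full((2, 3), 5.0)  # toy 2x3 depth map, 5 m everywhere
points = backproject(depth, fx, fy, cx, cy)
print(points.shape)
```

If the images are not undistorted, apply the distortion correction to the pixel coordinates first; the back-projection itself only uses fx, fy, cx, cy.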