
How to get depth from disparity #3

Open
zendevil opened this issue Aug 22, 2020 · 5 comments

@zendevil

zendevil commented Aug 22, 2020

Suppose we have the disparity and flow maps produced by your method. How does one get the depth in 3D space monocularly (assuming the camera intrinsics fx, fy, cx, cy and the distortion vector [r1, r2 ... p1, p2] are known)? Do you provide a function for that?

@zendevil changed the title from "How to get the actual depth" to "How to get depth from disparity" on Aug 22, 2020
@gengshan-y
Owner

Do you actually have stereo disparity? Then depth = focal length * baseline / disparity.

There are also a lot of data-driven methods for monocular depth estimation, for example monodepth2 and Midas.
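As a quick sketch of the formula above (the focal length and baseline values here are made-up placeholders; a real setup would use the calibrated fx in pixels and the stereo baseline in metres):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth (metres).

    depth = focal length (px) * baseline (m) / disparity (px)
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0  # zero disparity corresponds to infinite depth
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical calibration: fx = 721.5 px, baseline = 0.54 m
disp = np.array([[64.0, 32.0],
                 [16.0,  0.0]])
print(depth_from_disparity(disp, 721.5, 0.54))
```

Note this only applies to true stereo disparity; a monocular prediction has no physical baseline to plug in.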

@zendevil
Author

Thank you. Would you please share the source for StereoExp-v2? If not, could you explain its architecture in a bit more detail?

@zendevil
Author

zendevil commented Aug 23, 2020

Do you actually have stereo disparity? Then depth = focal length * baseline / disparity.

There are also a lot of data-driven methods for monocular depth estimation, for example monodepth2 and Midas.

What I mean is: the inferred "prediction" contains these numbers:

[[2496.0127 2495.973 2495.9888 ... 855.7698 855.57666 856.0468 ]
[2495.9575 2495.9158 2495.9329 ... 855.4036 855.20917 855.68256]
[2495.9797 2495.9387 2495.9556 ... 855.55426 855.36035 855.83234]
...
[3245.7551 3245.7756 3245.7664 ... 2852.5774 2852.4922 2852.702 ]
[3245.7275 3245.7478 3245.739 ... 2852.4827 2852.397 2852.6072 ]
[3245.7974 3245.8179 3245.809 ... 2852.7156 2852.6309 2852.8398 ]]

What are the units of these numbers: m, mm, ft? Clearly they aren't disparities (the images aren't that wide). So what do these numbers represent, and how does one convert this prediction to actual depth given the camera intrinsics?
Thanks

@gengshan-y
Owner

gengshan-y commented Aug 23, 2020

I must be missing something. Our paper does not provide any model that computes disparity. Where did you get these numbers? From the demo.ipynb file?

@zendevil
Author

I confused your model with MiDaS. These numbers are from the MiDaS prediction.
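For context on those MiDaS numbers: MiDaS predicts relative inverse depth, valid only up to an unknown scale and shift, so recovering metric depth requires at least two pixels with known ground-truth depth to fit that affine mapping. A minimal sketch, with made-up reference points (the function name and reference values are illustrative, not from either repo):

```python
import numpy as np

def midas_to_metric(pred, ref_pred, ref_depth):
    """Fit depth ~ 1 / (s * pred + t) from reference (prediction, depth) pairs.

    `pred` is assumed to be relative inverse depth (larger = closer),
    so s and t are fit by least squares on inverse depth: 1/depth = s*pred + t.
    """
    ref_pred = np.asarray(ref_pred, dtype=np.float64)
    inv_ref = 1.0 / np.asarray(ref_depth, dtype=np.float64)
    A = np.stack([ref_pred, np.ones_like(ref_pred)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, inv_ref, rcond=None)
    return 1.0 / (s * np.asarray(pred, dtype=np.float64) + t)

# Hypothetical: two pixels whose true depths (2 m and 10 m) are known
pred = np.array([2496.0, 855.8, 3245.8])
print(midas_to_metric(pred, [2496.0, 855.8], [2.0, 10.0]))
```

With only two reference points the fit passes through them exactly; with more, the least-squares fit averages out noise.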
