Using Kinect Xbox 360 for custom data #200

Open · MXNXV opened this issue Feb 13, 2025 · 1 comment

MXNXV commented Feb 13, 2025

Hello Authors,
Thank you for the wonderful work. I want to try out your model on my custom data, which was captured with a Kinect Xbox 360 camera. My dataset looks as follows:
{ "files": [ { "rgb": "converted_data/250cm/rgb/r-1733262273.619000-4267990994.jpg", "depth": "converted_data/250cm/depth/d-1733262273.567000-4264455720.png", "cam_in": [ 586.34, 586.34, -6.77, -4.19 ], "depth_scale": 1000.0 },
I have captured depth maps and RGB images of a box at known distances, and I plan to use the pretrained weights to evaluate the model's performance on my dataset.
The range of a Kinect camera's depth sensor is 1.2 m to 4.5 m. My question is: should I use the default values in data_basic (mono/configs/HourglassDecoder/vit.raft5.small.py)?
```python
data_basic = dict(
    canonical_space=dict(
        # img_size=(540, 960),
        focal_length=586.34,
    ),
    depth_range=(0, 1),
    depth_normalize=(0.1, max_value),
    crop_size=(616, 1064),  # % 28 = 0
    clip_depth_range=(0.1, 200),
    vit_size=(616, 1064),
)
```
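As a sanity check before running the model, I verify the depth_scale on one capture of the box at a known distance (here the 250 cm set). The OpenCV/NumPy snippet below is just my own check, not part of Metric3D:

```python
import cv2
import numpy as np

# Path taken from the annotation snippet above.
depth_path = "converted_data/250cm/depth/d-1733262273.567000-4264455720.png"
depth_raw = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED)   # 16-bit depth PNG
depth_m = depth_raw.astype(np.float32) / 1000.0            # depth_scale = 1000.0
valid = depth_m > 0                                        # Kinect reports 0 where depth is missing
# Pixels on the box should read close to 2.5 m for this capture.
print("median valid depth (m):", np.median(depth_m[valid]))
```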

YvanYin (Owner) commented Feb 21, 2025

You can use the default config, but you can adjust 'clip_depth_range' to (0.1, 10) and 'depth_normalize' to (0.1, 10).
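For concreteness, a sketch of the data_basic block with only those two values changed; the remaining entries are carried over from the snippet in the question, not verified against the repository defaults:

```python
data_basic = dict(
    canonical_space=dict(
        # img_size=(540, 960),
        focal_length=586.34,
    ),
    depth_range=(0, 1),
    depth_normalize=(0.1, 10),   # adjusted: covers the Kinect's 1.2-4.5 m range
    crop_size=(616, 1064),       # each side divisible by 28
    clip_depth_range=(0.1, 10),  # adjusted as suggested
    vit_size=(616, 1064),
)
```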
