
How to use this process? #7

Open
renjiahao0928 opened this issue Aug 23, 2021 · 2 comments

@renjiahao0928

Thanks for your efforts.
After configuring the dependencies, I found that I could not run this code with a RealSense D435i. Can you tell me how to use it? How should I modify the configuration file? Thanks.

@MaverickPeter
Member

Hi renjiahao.
To use the D435i, you first need to change the topic names in the launch file:

<arg name="camera_topic" default="/$(arg robot_name)/image_rect"/>
<arg name="lidar_topic" default="/$(arg robot_name)/pointcloud"/>
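For a D435i, these defaults would be changed to the topics your camera driver actually publishes. As a sketch, assuming the stock `realsense2_camera` driver with its pointcloud filter enabled (topic names are the driver's defaults, but verify with `rostopic list` on your setup):

```xml
<!-- Hypothetical D435i remap: realsense2_camera typically publishes the
     color image on /camera/color/image_raw and the depth point cloud on
     /camera/depth/color/points when the pointcloud filter is enabled. -->
<arg name="camera_topic" default="/camera/color/image_raw"/>
<arg name="lidar_topic" default="/camera/depth/color/points"/>
```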
Then update the frame_id settings in the configuration file:
map_frame_id: "/robot0/map"
sensor_frame_id: "/PandarQT"
robot_base_frame_id: "/PandarQT"
track_point_frame_id: "/PandarQT"
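The `/PandarQT` entries refer to the original lidar's frame, so they would be replaced with the frames from your own TF tree. A sketch, assuming the default `realsense2_camera` device frame (`camera_link`) — check your actual frame names with `rosrun rqt_tf_tree rqt_tf_tree`:

```yaml
# Hypothetical frame ids for a D435i; the real names depend on your TF tree.
map_frame_id: "/robot0/map"
sensor_frame_id: "/camera_link"       # D435i device frame (realsense2_camera default)
robot_base_frame_id: "/camera_link"   # or your robot's base frame, if it has one
track_point_frame_id: "/camera_link"
```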
You also need to create a camera parameter file and link it here:
camera_params_yaml: "/home/mav-lab/Projects/git_backup/GEM_ws/src/elevation_mapping/yq_intrinsic.yaml"
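The exact schema of this intrinsics file is defined by this package, so treat the following as a rough sketch only: a typical pinhole-camera YAML for a D435i color stream, with placeholder numbers rather than a real calibration (you can read the factory intrinsics with `rs-enumerate-devices -c` or calibrate with the `camera_calibration` package):

```yaml
# Placeholder intrinsics for a 640x480 D435i color stream -- replace every
# value with your own camera's calibration before use.
image_width: 640
image_height: 480
camera_matrix:
  rows: 3
  cols: 3
  data: [615.0,   0.0, 320.0,
           0.0, 615.0, 240.0,
           0.0,   0.0,   1.0]
distortion_coefficients: [0.0, 0.0, 0.0, 0.0, 0.0]
```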

Once you have made the changes above, you should be able to run it.

@MaverickPeter
Member

MaverickPeter commented Aug 23, 2021

One more thing: you should first have an odometry node running that provides a transform in the TF tree. Then, in the configuration, modify the map frame id and sensor frame id to match that odometry.
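For instance, if your odometry publishes the standard transform `odom -> base_link` (per ROS coordinate-frame conventions), the config frame ids would need to line up with it. All names below are assumptions about your setup, not values from this package:

```yaml
# Assuming an odometry node broadcasting TF odom -> base_link,
# with the D435i mounted under base_link as camera_link:
map_frame_id: "/odom"              # fixed frame provided by the odometry
sensor_frame_id: "/camera_link"    # D435i frame, child of base_link
robot_base_frame_id: "/base_link"
track_point_frame_id: "/base_link"
```

You can verify the chain exists before launching with `rosrun tf tf_echo odom camera_link`.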
