
rtabmap with real scenes #1415

Open
xxxy12 opened this issue Dec 20, 2024 · 5 comments

Comments

xxxy12 commented Dec 20, 2024

When I run rtabmap with EuRoC, it performs well. But when I run it with my stereo camera in real scenes, the quality of the point cloud is not satisfactory. I have set the camera to its maximum resolution and avoided large movements. Do you have any ideas?


matlabbe (Member) commented

Actually, it looks relatively good for a stereo camera. What kind of camera is used? Is depth computed on the camera? It looks like the disparity approach used may be interpolating textureless surfaces too much. With the EuRoC dataset, we compute depth using OpenCV's cv::StereoBM, which gives sparse depth only where there is enough texture (in your example, the ground would be invisible).

Note that if your goal is to generate a very accurate point cloud, you may consider using/adding a ToF camera or LiDAR instead of stereo.

xxxy12 (Author) commented Dec 24, 2024

Thank you for your response. I use a RealSense D435. I tried some post-processing, but the details of the point cloud still seem unsatisfactory. Following your answer, I'll change the environment and keep looking for the cause.
[images attached]

matlabbe (Member) commented

With rtabmap-databaseViewer, you can open the 3D view to inspect every single image/cloud. The noise you are seeing may be coming from the depth image itself. If you need tips on how to filter some of the noise in post-processing, you can share your database so we can take a look.
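As an example of the kind of post-processing filtering mentioned here, a minimal statistical-outlier-removal sketch in plain NumPy (the k and std_ratio values are arbitrary; rtabmap and PCL ship their own, much faster implementations of this filter):

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than std_ratio standard deviations above the cloud-wide average.
    Brute-force O(N^2): fine for a single frame's cloud, not a whole map."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                    # ignore self-distance
    mean_knn = np.sort(d, axis=1)[:, :k].mean(axis=1)
    keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]
```

The same idea is what PCL's StatisticalOutlierRemoval (exposed in rtabmap's export/post-processing options) applies with a k-d tree instead of a full distance matrix.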

xxxy12 (Author) commented Dec 27, 2024

Here is the database.
I recorded it with the following commands. I think the depth should be computed from the stereo images.

roslaunch realsense2_camera rs_camera.launch enable_infra1:=true enable_infra2:=true infra_width:=1280 infra_height:=800
roslaunch rtabmap_launch rtabmap.launch \
  args:="--delete_db_on_start Odom/Strategy 9 OdomVINS/ConfigPath /data/Workspace/my/slam/realsense_config/stereo_config.yaml" \
  left_image_topic:=/camera/infra1/image_rect_raw \
  right_image_topic:=/camera/infra2/image_rect_raw \
  left_camera_info_topic:=/camera/infra1/camera_info \
  right_camera_info_topic:=/camera/infra2/camera_info

It is confusing that the table and chair are not textureless targets, yet the quality of their point cloud is poor (despite my multiple careful scans of the targets from different angles).
By the way, when I run rtabmap on a chair, it works well in RGB-D mode, but in stereo mode the point cloud captures only the basic outlines:
[images attached]

matlabbe (Member) commented

RGB-D mode would use the IR emitter and would be able to estimate disparity on textureless surfaces.

Stereo mode would only estimate edges.

Couple of observations on the database:

  1. Is it taken from a robot? If so, you could add a TF frame base_link -> camera_link with roll and z values to make the ground aligned with the xy plane in rviz (and use frame_id:=base_link instead of camera_link). Otherwise, the occupancy grid may not make sense:
    [animation attached]

  2. There is also some drift in Z in the overall trajectory. If you have an IMU, you could feed it in (use a Madgwick or complementary filter node if needed). Otherwise, you could set Reg/Force3DoF to true so that the ground is perfectly flat (assuming the robot only moves in 2D). However, the correct camera rotation should still be set in TF, as in point 1.
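A minimal sketch of point 1, assuming a hypothetical mounting 0.3 m above the ground with zero roll/pitch/yaw (replace the six values with your robot's actual camera extrinsics):

```shell
# Publishes a static base_link -> camera_link transform (x y z yaw pitch roll)
# so the ground aligns with the xy plane of base_link in rviz.
rosrun tf2_ros static_transform_publisher 0 0 0.3 0 0 0 base_link camera_link
```

With this running, pass frame_id:=base_link to rtabmap.launch instead of the default camera frame.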
