
We obtained incorrect results. #98

Open
ME-GAO opened this issue Jan 15, 2025 · 5 comments

Comments

@ME-GAO

ME-GAO commented Jan 15, 2025

System information (version)

  • Operating System / Platform => ❔
  • OpenCV => ❔
  • Ceres => ❔
  • Boost => ❔
  • C++ => ❔
  • Compiler => ❔

Vision system

  • Number of cameras => 3
  • Types of cameras => perspective (perspective, fisheye, hybrid)
  • Multicamera configurations => overlapping (overlapping, non-overlapping, converging)
  • Configuration file => ❔ (i.e. *.yml)
  • Image sequences => ❔
    • number of images per camera
    • if relevant and possible, please share image sequences

Describe the issue / bug

Please provide a clear and concise description of what the issue / bug is.

Hello,

We are attempting to calibrate three industrial cameras. All three are the same model (resolution: 1224×1024, sensor size: 2/3 inch), each equipped with a 50mm prime lens. The cameras are arranged in a circular configuration 1 meter away from the calibration board (as shown in the first image). The calibration board has a total size of 10×10 cm, and the cameras capture images synchronously (the second, third, and fourth images show pictures taken by the three cameras in the same frame). However, the output results were incorrect (the log file is attached), and the saved detection images failed to detect all corner points in most of the images (as shown in the fifth image).

We also simulated this setup in Blender, but the results were still incorrect, with the same corner-detection failures in most images (as shown in the sixth image). However, when we modified the properties of the four cameras in Blender (resolution: 1824×1376, sensor horizontal size: 32mm, focal length: 50mm), kept the circular arrangement, and placed the cameras 4 meters away from the calibration board (now sized 1×1 meter, as shown in the seventh image), the simulated images were calibrated correctly.
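As a rough check on this geometry, the numbers above imply a very narrow field of view, which in turn limits how much motion diversity is achievable. A back-of-envelope sketch (assuming a 2/3-inch sensor is about 8.8 mm wide; exact dimensions vary by vendor):

```python
import math

# Assumed: a 2/3" sensor is roughly 8.8 mm wide; 50 mm lens.
sensor_w_mm = 8.8
focal_mm = 50.0

half_fov = math.atan(sensor_w_mm / (2 * focal_mm))
fov_deg = 2 * math.degrees(half_fov)
view_w_at_1m = 2 * 1.0 * math.tan(half_fov)  # metres covered at 1 m

print(f"horizontal FOV ~{fov_deg:.1f} deg")              # → ~10.1 deg
print(f"frame width at 1 m ~{view_w_at_1m*100:.0f} cm")  # → ~18 cm
```

So the 10×10 cm board fills over half of the frame width at 1 m, and the angular field of view is only about 10 degrees, leaving very little room for diverse board poses.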

Could you please help us understand the reason behind the incorrect results?

The cases of the three scenarios mentioned above have been uploaded to GitHub.
https://github.com/ME-GAO/Calibration

(Seven images attached, referenced above as the first through seventh images.)

@rameau-fr
Owner

rameau-fr commented Jan 15, 2025

Thank you for using MC-Calib and for sharing your data. Please let me try to address the issues you faced and provide some recommendations.

Detection Issues

I tested MC-Calib on your data and was able to detect the checkerboard successfully. However, we recently identified compatibility problems with the latest OpenCV versions that may be causing the issues you are facing. These have now been fixed, so please download the newest version of MC-Calib and try again.

Blender Data

Thank you for providing the simulated Blender data. Unfortunately, when I ran MC-Calib on this dataset, the calibration failed. The sequence appears too limited for proper calibration, as there is insufficient motion diversity. This causes quasi-degenerate cases, probably due to the ambiguity between the long focal length and the translation of the camera, and such a scenario can lead to parameter drift during optimization. To resolve this, provide a sequence with more images and greater motion diversity; I believe it should then work well.

Real Images

The real images exhibit challenges similar to the synthetic data, primarily due to the limited motion diversity inherent to very long focal length configurations. As you have probably experienced, this leads to parameter drift during optimization.
To address this, I strongly recommend pre-calibrating the intrinsic parameters of each camera individually, then using the options fix_intrinsic and cam_params_path to fix these parameters during refinement.
For example, I assumed a focal length of 8000 px, a principal point centered in the image, and zero distortion, which resulted in a calibration with a 0.5 px average reprojection error (which is already quite good)!
I have attached these results to this response (note that I do not know the actual size of your markers, so the scale of my solution is arbitrary). An accurate intrinsic calibration of these cameras would be much better than the approximate parameters I used.
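A minimal sketch of the configuration entries involved (only fix_intrinsic and cam_params_path are named above; the file path is a placeholder for your own intrinsics file):

```yaml
%YAML:1.0
---
# Fix pre-calibrated intrinsics during refinement (sketch;
# point cam_params_path at your own intrinsic calibration file).
fix_intrinsic: 1
cam_params_path: "path/to/my_intrinsics.yml"
```

The intrinsics file follows the same per-camera layout as the calibration result shown below (camera_matrix, distortion_vector, etc.).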

(Screenshot of the calibration results, 2025-01-16.)

Final Recommendations

While I managed to calibrate your system with these images, I believe that it could benefit from better data. Here are a few extra recommendations:

  • Acquire more images with greater motion diversity.
  • Avoid motion blur by reducing exposure time or ensuring both the camera and target remain static during capture.
  • For cameras without global shutters, ensure everything is static while capturing images.
  • Pre-calibrate the intrinsic parameters and fix them during refinement as described earlier.

I hope these suggestions help resolve the issues. Let me know if you have further questions or need additional assistance!

Calibration results with my approximated fixed intrinsic:

%YAML:1.0
---
nb_camera: 3
camera_0:
   camera_matrix: !!opencv-matrix
      rows: 3
      cols: 3
      dt: d
      data: [ 8000., 0., 612., 0., 8000., 512., 0., 0., 1. ]
   distortion_vector: !!opencv-matrix
      rows: 1
      cols: 5
      dt: d
      data: [ 0., 0., 0., 0., 0. ]
   distortion_type: 0
   camera_group: 0
   img_width: 1224
   img_height: 1024
   camera_pose_matrix: !!opencv-matrix
      rows: 4
      cols: 4
      dt: d
      data: [ 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0.,
          1. ]
camera_1:
   camera_matrix: !!opencv-matrix
      rows: 3
      cols: 3
      dt: d
      data: [ 8000., 0., 612., 0., 8000., 512., 0., 0., 1. ]
   distortion_vector: !!opencv-matrix
      rows: 1
      cols: 5
      dt: d
      data: [ 0., 0., 0., 0., 0. ]
   distortion_type: 0
   camera_group: 0
   img_width: 1224
   img_height: 1024
   camera_pose_matrix: !!opencv-matrix
      rows: 4
      cols: 4
      dt: d
      data: [ 0.93828269881410542, 0.03911257238008968,
          0.34365067115885417, -4.7690497965723484,
          -0.035492059875522279, 0.99922837346944893,
          -0.016821751971536351, 0.12327355431463358,
          -0.3440434431752929, 0.0035866886416011539,
          0.93894688075134347, -0.25510729471450028, 0., 0., 0., 1. ]
camera_2:
   camera_matrix: !!opencv-matrix
      rows: 3
      cols: 3
      dt: d
      data: [ 8000., 0., 612., 0., 8000., 512., 0., 0., 1. ]
   distortion_vector: !!opencv-matrix
      rows: 1
      cols: 5
      dt: d
      data: [ 0., 0., 0., 0., 0. ]
   distortion_type: 0
   camera_group: 0
   img_width: 1224
   img_height: 1024
   camera_pose_matrix: !!opencv-matrix
      rows: 4
      cols: 4
      dt: d
      data: [ 0.70731484253569099, 0.016181148159041991,
          0.7067134383701541, -9.2612492433496243,
          -0.0015703342598520373, 0.99977148180487219,
          -0.021319432919347528, 0.25917328455427968,
          -0.70689691439348001, 0.013969775014128032, 0.7071786180358054,
          3.4051854496935317, 0., 0., 0., 1. ]

@ME-GAO
Author

ME-GAO commented Jan 20, 2025

Thank you very much for your professional response.
However, I am still encountering some issues:
Issue 1: I am now using the latest version you released, with OpenCV 4.11.0, but there are still corner-detection issues. Even with high-quality images generated from Blender, some images still fail to detect all the inner corners (as shown in the first image). Could this be due to the OpenCV version, or is it caused by the long focal length (detection works fine on the experimental data you provided)?

Image

Issue 2: I followed your suggestion to fix the camera intrinsic parameters and achieved good calibration results in the actual experiment. I then performed a new experiment with six cameras arranged in a converging configuration. The cameras are the same model (resolution: 1224×1024, sensor size: 2/3 inch, all with 50mm fixed focal length lenses), but I got different average reprojection errors when I processed the same dataset twice (as shown in the second and third images), even though the extrinsic parameters from both runs are nearly identical. Why does the reprojection error in the third image suddenly increase? Could this also be due to corner detection issues?

Image

Image

Issue 3: Our final multi-camera configuration consists of multiple cameras arranged in a converging configuration. We further simulated our final multi-camera configuration using 8 cameras in Blender, while still fixing the intrinsic parameters, but the calibration results were completely incorrect. When we used the actual experimental data for the Converging Vision System that you provided, we obtained excellent calibration results. The camera arrangement is quite similar to yours, but since we are using long focal length and there is insufficient motion diversity, could it be that your calibration method cannot provide good results for this setup?
The data for the three issues mentioned above has been uploaded to GitHub.
https://github.com/ME-GAO/cali

@rameau-fr
Owner

Thank you very much for your feedback—it is extremely valuable to us for improving MC-Calib.

I have a few additional comments, but it would be helpful if you could share your configuration files so that I can assist you further. I believe most of the issues you are encountering can be resolved by adjusting the configuration settings.

Issue 1: Incomplete Point Detection

Not detecting every single point is not necessarily a problem, as long as you have a sufficient number of images (it is better not to use images or points if they are not reliable). Given that you are using long focal length cameras, one potential cause could be related to your RANSAC threshold. You might want to increase it slightly to avoid rejecting potential inliers. What is your current threshold value? You could try setting ransac_threshold: 10 or ransac_threshold: 20 to see if it improves results.
Additionally, you can try deactivating keypoint refinement, as it might be removing points deemed unreliable.
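To see why a larger threshold is reasonable here, a back-of-envelope sketch (assumed numbers: the 8000 px focal length guess from earlier and a 4 m working distance) shows how small geometric errors turn into large pixel residuals:

```python
# Pinhole relation: a lateral error dX at depth Z moves the projected
# point by du = f * dX / Z pixels (assumed f = 8000 px, Z = 4 m).
f_px = 8000.0
depth_m = 4.0

for err_mm in (1, 5, 10):
    shift_px = f_px * (err_mm / 1000.0) / depth_m
    print(f"{err_mm} mm lateral error -> {shift_px:.0f} px shift")
    # → 2 px, 10 px, 20 px respectively
```

Even millimetre-level pose or board error thus shows up as several pixels of reprojection residual, which is why ransac_threshold values of 10-20 can be appropriate for this configuration.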

Issue 2: Unreliable Calibration

Upon reviewing your reprojection errors, it appears that some images contain very few keypoints, which could affect the calibration; boards with a very limited number of points seem to be causing the issue. Try setting min_perc_pts: 0.7; it is completely fine for certain images to be excluded if they are not reliable enough.
Even if the cameras are identical, make sure that their intrinsic parameters are calibrated individually.
/!\ Also, be aware that some boards are never detected, which suggests a potential issue with the configuration file or the board generation process.
Depending on the motion, shape, and number of boards in your setup, your calibration might be resolving to a non-overlapping system (especially considering that some boards are never detected). This could lead the toolbox to run a non-overlapping calibration and suffer from poor initialization. Some users have reported a potential bug: LINK Please try modifying the indicated line, recompile, and check whether it improves your results. However, before doing so, ensure that all boards can be detected.
Finally, you may want to make the board larger on your 3D object to improve detection further.

Issue 3: Limited Motion Diversity

After briefly checking your images, the motion diversity indeed seems quite limited, and only a small portion of the images currently contain keypoints. Try diversifying the motion further: it is fine if some images do not overlap between cameras, and it is not a problem if the object is not visible in all cameras for certain frames.
Additionally, having more images where two calibration boards are visible in a single frame would likely help the calibration process.

I hope these suggestions help! Please feel free to share your configuration files, and I’d be happy to assist you further.

@ME-GAO
Author

ME-GAO commented Jan 25, 2025

Thank you for your patience and assistance.
I checked the Detection output file, and as you mentioned, some calibration boards were not detected. I have provided my configuration file and calibration board images on GitHub. I used your calibration board configuration file without any modifications. However, I did not understand what boards_index means; following your suggestion, I set it to [ ]. I would appreciate it if you could explain boards_index further.
Additionally, I conducted a new experiment using 8 cameras arranged in a converging configuration. The 8 cameras are the same model (resolution: 1224×1024, sensor size: 2/3 inch, all with 50mm fixed focal length lenses), and I calibrated each camera's intrinsic parameters using the MATLAB toolbox. However, I encountered the same issue: some calibration boards still could not be detected. The calibration board used is shown in the image below; I set square_size to 1.1 (default unit: cm).

Image

The data has been uploaded to GitHub.
https://github.com/ME-GAO/configs/blob/main/README.md

@rameau-fr
Owner

rameau-fr commented Jan 25, 2025

Thank you very much for all the extra information. Here are some additional details. Now, I think you should be able to calibrate your system.

The boards

I figured out why some boards were not detected: you basically printed 6 boards but used only 4 of them. Each board has an index from 0 to 5. We have designed the toolbox so that you can specify the board indices to use. For instance, if you only want to use two boards [0, 2], you can specify that to avoid wasting time detecting unused boards.
If you leave this field empty but set the number of boards to 4, it will only consider the first four boards with indexes [0,1,2,3], meaning boards [4,5] will be ignored. To resolve this issue, I simply set number_board: 6.
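As a configuration sketch, here are the two ways to make all six printed boards available (key names as used in the discussion above):

```yaml
%YAML:1.0
---
# Option 1: consider all six printed boards (boards_index left empty).
number_board: 6
boards_index: []

# Option 2: explicitly list the board indices actually in use,
# e.g. only boards 0 and 2:
# number_board: 2
# boards_index: [0, 2]
```

With an empty boards_index, the first number_board indices are used, which is exactly why boards 4 and 5 were silently ignored before.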

Wrong initial values from MATLAB

The most critical issue is that the distortion parameters obtained from MATLAB seem quite off. This could be due to a different distortion model, but after calibrating the cameras individually with MC-Calib I also ended up with similar values. I am fairly sure the very high values arise because all the keypoints lie near the center of the image, so those parameters are heavily overfitted and unreliable. I noticed a similar problem with your MATLAB sequence: very high distortion parameters such as 32 or 50 are generally bad news, and they can strongly affect the other camera parameters as well. So I would recommend capturing new sequences for intrinsic calibration, ensuring that the keypoints cover the entire sensor.
Edit: to avoid this problem, I set all the distortion coefficients to zero!

Other parameters

  • I also set keypoints_path: "None" to ensure the detection stage runs.
  • For the 8-camera setup, to ensure only the best images are used, I set min_perc_pts: 0.7 or 0.9.
  • I initially increased the RANSAC threshold to 100 to avoid removing too many points, but a more reasonable setting would be 15 or 20.
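Collected as a configuration sketch (values as discussed above; tune them per dataset):

```yaml
%YAML:1.0
---
keypoints_path: "None"   # force the detection stage to run
min_perc_pts: 0.7        # drop boards with too few detected points (or 0.9)
ransac_threshold: 20     # in pixels; 15-20 is reasonable for this setup
```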

Results

With both the 6-camera and 8-camera sequences, I achieved a 0.4–0.5 px reprojection error. However, for the 8-camera sequence, it took a few trials to determine the best parameters.
