We obtained incorrect results. #98
Comments
Thank you very much for your professional response.

Issue 2: I followed your suggestion to fix the camera intrinsic parameters and achieved good calibration results in the actual experiment. I then performed a new experiment with six cameras arranged in a converging configuration. The cameras are of the same model (resolution: 1224x1024, sensor size: 2/3 inch, all with 50 mm fixed focal length lenses), but I got different average reprojection errors when I processed the same dataset twice (as shown in the second and third images), even though the extrinsic parameters from both runs are nearly identical. Why does the reprojection error in the third image suddenly increase? Could this still be due to corner detection issues?

Issue 3: Our final multi-camera configuration consists of multiple cameras arranged in a converging configuration. We further simulated this configuration with 8 cameras in Blender, again fixing the intrinsic parameters, but the calibration results were completely incorrect. When we used the actual experimental data for the Converging Vision System that you provided, we obtained excellent calibration results. The camera arrangement is quite similar to yours, but since we are using long focal lengths and there is insufficient motion diversity, could it be that your calibration method cannot provide good results for this setup?
Thank you very much for your feedback; it is extremely valuable to us for improving MC-Calib. I have a few additional comments, but it would be helpful if you could share your configuration files so that I can assist you further. I believe most of the issues you are encountering can be resolved by adjusting the configuration settings.

Issue 1: Incomplete Point Detection
Not detecting every single point is not necessarily a problem, as long as you have a sufficient number of images (it is better not to use images or points that are unreliable). Given that you are using long focal length cameras, one potential cause could be your RANSAC threshold. You might want to increase it slightly to avoid rejecting potential inliers. What is your current threshold value? You could try setting

Issue 2: Unreliable Calibration
Upon reviewing your reprojection errors, it appears that some images contain very few keypoints, which could affect the calibration. Some boards with a very limited number of points seem to be causing the issue. Try setting

Issue 3: Limited Motion Diversity
After briefly checking your images, the motion diversity does indeed seem quite limited. Only a small portion of the images currently contain keypoints, so try diversifying the motion further. It is fine if some images do not overlap between cameras; even if the object is not visible in all cameras for certain images, that is not a problem.

I hope these suggestions help! Please feel free to share your configuration files, and I'd be happy to assist you further.
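To illustrate the RANSAC-threshold point above: with a long focal length, even small calibration or detection errors translate into large pixel residuals, so a tight pixel threshold can reject genuine corners. The sketch below shows the basic inlier test with made-up numbers; `ransac_inlier_mask` is a hypothetical helper for illustration, not part of the MC-Calib API.

```python
import numpy as np

def ransac_inlier_mask(detected, reprojected, threshold_px):
    """Keep keypoints whose reprojection residual is below the pixel
    threshold (hypothetical helper, not MC-Calib's implementation)."""
    residuals = np.linalg.norm(detected - reprojected, axis=1)
    return residuals < threshold_px

# Synthetic example: five detected corners vs. their reprojections.
detected = np.array([[100.0, 100.0], [200.0, 150.0], [300.0, 200.0],
                     [400.0, 250.0], [500.0, 300.0]])
reprojected = detected + np.array([[0.2, 0.1], [0.5, 0.4], [1.2, 0.9],
                                   [2.5, 1.8], [0.1, 0.1]])
print(ransac_inlier_mask(detected, reprojected, 1.0))  # tight: two corners rejected
print(ransac_inlier_mask(detected, reprojected, 4.0))  # looser: all kept
```

With the tight 1 px threshold, two otherwise-reasonable corners are discarded; relaxing the threshold keeps them, which is the effect the advice above is after.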
Thank you for your patience and assistance. The data has been uploaded to GitHub.
Thank you very much for all the extra information. Here are some additional details; with these, I think you should be able to calibrate your system.

The boards
I figured out why some boards were not detected: you printed 6 boards but used only 4 of them. Each board has an index from 0 to 5. We designed the toolbox so that you can specify which board indices to use. For instance, if you only want to use two boards [0, 2], you can specify that to avoid wasting time detecting unused boards.

Wrong initial values from MATLAB
The most critical issue is that the distortion parameters obtained from MATLAB seem quite off. It could be due to a different distortion model, but after calibrating the cameras individually with MC-Calib I also ended up with similar values. I am pretty sure the very high values are due to the fact that all the keypoints lie around the center of the image, so those parameters are overfitted and unreliable. I noticed a similar problem with your MATLAB sequence; getting very high distortion parameters such as 32 or 50 is generally bad news, and it can very much affect the other camera parameters as well. So I would recommend capturing new sequences for intrinsic calibration. Try to ensure that keypoints cover the entire sensor.

Other parameters
Results
With both the 6-camera and 8-camera sequences, I achieved a 0.4–0.5 px reprojection error. However, for the 8-camera sequence, it took a few trials to determine the best parameters.
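The recommendation above, that keypoints should cover the entire sensor for intrinsic calibration, can be checked with a quick heuristic: divide the image into a grid and count the fraction of cells that contain at least one detected corner. This is a rough sanity check written for this thread, not a function of MC-Calib; the sensor size matches the cameras discussed here, and the keypoint cloud is synthetic.

```python
import numpy as np

def keypoint_coverage(points, width, height, grid=8):
    """Fraction of grid cells containing at least one keypoint.
    Low coverage means distortion is only constrained near the
    image centre, which leads to the overfitted values described above."""
    cols = np.clip((points[:, 0] / width * grid).astype(int), 0, grid - 1)
    rows = np.clip((points[:, 1] / height * grid).astype(int), 0, grid - 1)
    occupied = np.zeros((grid, grid), dtype=bool)
    occupied[rows, cols] = True
    return occupied.mean()

# Synthetic keypoints clustered near the centre of a 1224x1024 sensor:
rng = np.random.default_rng(0)
centre = rng.normal([612.0, 512.0], 60.0, size=(500, 2))
cov = keypoint_coverage(centre, 1224, 1024)
print(f"coverage: {cov:.0%}")  # well under half the sensor is covered
```

A sequence whose coverage stays this low is a warning sign that the resulting distortion coefficients should not be trusted.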
System information (version)
Vision system (*.yml)
Describe the issue / bug
Hello,
We are attempting to calibrate three industrial cameras. All three cameras are of the same model (resolution: 1224x1024, sensor size: 2/3 inch), each equipped with a 50 mm prime lens. The cameras are arranged in a circular configuration, 1 meter away from the calibration board (as shown in the first image). The calibration board has a total size of 10x10 cm, and the cameras capture images synchronously (the second, third, and fourth images show pictures taken by the three cameras in the same frame). However, the output results were incorrect (the log file is attached), and in most of the saved detection images not all corner points were detected (as shown in the fifth image). We also simulated the above setup in Blender, but the results were still incorrect, and again most detection images failed to detect all corner points (as shown in the sixth image). However, when we modified the properties of the four cameras in Blender (resolution: 1824x1376, sensor horizontal size: 32 mm, focal length: 50 mm), kept the circular arrangement, and positioned the cameras 4 meters away from a 1x1 m calibration board (as shown in the seventh image), the simulated images were calibrated correctly.
Could you please help us understand the reason behind the incorrect results?
The cases of the three scenarios mentioned above have been uploaded to GitHub.
https://github.com/ME-GAO/Calibration
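For setups like the one described above, it can help to know what reprojection error to expect from a well-calibrated camera. The sketch below projects a 10x10 cm board at 1 m through a simple pinhole model and perturbs the detections by a typical sub-pixel detection noise. All numbers (the focal length in pixels for a 50 mm lens on a 2/3 inch sensor, the 0.3 px noise) are illustrative assumptions, not the poster's actual calibration output.

```python
import numpy as np

def project(pts, K, t):
    """Project 3-D board points through a pinhole camera at translation t
    (identity rotation assumed for simplicity)."""
    cam = pts + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

board = np.array([[x * 0.0125, y * 0.0125, 0.0]   # 10x10 cm board, 1.25 cm squares
                  for y in range(5) for x in range(5)])
K = np.array([[7500.0, 0.0, 612.0],   # focal length in px: rough assumption
              [0.0, 7500.0, 512.0],   # for a 50 mm lens on a 2/3" sensor
              [0.0, 0.0, 1.0]])
t = np.array([0.0, 0.0, 1.0])         # board ~1 m from the camera

projected = project(board, K, t)
detected = projected + 0.3            # pretend 0.3 px detection noise per axis
err = np.linalg.norm(detected - projected, axis=1).mean()
print(f"mean reprojection error: {err:.2f} px")  # ~0.42 px
```

Errors in this sub-pixel range are what the maintainer reports above for a healthy calibration; values far larger suggest a detection or initialization problem rather than noise.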