The following describes the basic calibration workflow.
As a first step, the calibration pattern has to be created.
To do this, adjust the parameters in the marked box and click "Generate Pattern". The pattern is then shown in the pattern preview.
If you are satisfied with the pattern, save it by clicking "Save Pattern".
Now print the saved pattern and capture images with your multi-camera setup. To obtain a good calibration, the position and rotation of the calibration pattern should vary widely across the whole dataset. For further processing, the tool assumes filenames following the naming scheme $CameraID_$Timestamp.$Filetype, e.g. rgb0_000.png, rgb0_001.png, ... , kinect27_000.png, kinect27_001.png.
For batch renaming files you might also want to have a look at my File Renaming Tool.
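If you want to check your filenames programmatically, the naming scheme can be split with a small regular expression. This sketch assumes the camera ID is everything before the last underscore:

```python
# Sketch: splitting calibration image filenames of the form
# $CameraID_$Timestamp.$Filetype (e.g. rgb0_000.png) into their parts.
# Assumption: the camera ID is everything before the last underscore.
import re

FILENAME_RE = re.compile(r"^(?P<camera>.+)_(?P<timestamp>[^_.]+)\.(?P<ext>[^.]+)$")

def parse_image_name(name):
    """Return (camera_id, timestamp, filetype) for a calibration image name."""
    match = FILENAME_RE.match(name)
    if match is None:
        raise ValueError(f"'{name}' does not follow $CameraID_$Timestamp.$Filetype")
    return match.group("camera"), match.group("timestamp"), match.group("ext")

camera, timestamp, ext = parse_image_name("kinect27_000.png")
```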
In order to detect the patterns in the captured images, first select the directory and then click "Detect Correspondences" in the red marked area.
Based on the filenames, the tool will automatically generate a list of cameras, which can be viewed in the "Camera Parameters" box.
Furthermore, the number of detected correspondences per image will be printed to the logger, and once the process has finished, the tool will ask you to save the correspondences.
If you already detected correspondences in the images in a previous calibration session, you can also load those by clicking "Load correspondences". In this case, it is not necessary to specify the calibration image directory. Important: If you detect the correspondences instead of loading them, make sure that the correct pattern configuration is loaded. In particular, the "Pattern Square Size" should equal the real edge length of a checkerboard square on the printed ChArUco board.
In the next step, initial intrinsic parameters for all detected cameras are estimated based on the correspondences. To start this calculation, simply press "Calculate Intrinsics".
After finishing the estimation, the tool will ask you to save the estimated intrinsics as well as the estimated transforms between the cameras and the calibration patterns. Since this estimation can be time-consuming, I advise saving the results so they can be reused later.
If you have saved intrinsics from a previous session, you can load them by clicking "Load Intrinsics". In addition, you can load the previously calculated transforms via "Load Pattern Positions" or recalculate them after loading the intrinsics by clicking "Calculate Pattern Positions".
The loaded or estimated intrinsics can now be viewed (and modified) in the camera parameters box.
Finally, the correspondences, initial intrinsics and pattern transforms are used to optimize the camera poses (relative to the first camera in the list) and refine the camera parameters. To start the Levenberg-Marquardt optimization, set the LM termination parameters and click "Start Optimization".
The extrinsic results as well as the RMSE will be printed to the logger as shown here.
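To make the role of the LM termination parameters concrete, here is a minimal Levenberg-Marquardt loop on a toy least-squares problem (fitting y = exp(a*x + b)); the function names, toy model, and damping schedule are mine, not the tool's internals.

```python
# Sketch: a minimal Levenberg-Marquardt loop, shown on a toy problem.
# max_iter and eps play the same role as the LM termination settings.
import numpy as np

def lm_fit(x, y, params, max_iter=50, eps=1e-12):
    lam = 1e-3                                    # damping factor
    for _ in range(max_iter):
        a, b = params
        model = np.exp(a * x + b)
        r = model - y                             # residuals
        J = np.stack([x * model, model], axis=1)  # Jacobian of residuals
        H = J.T @ J + lam * np.eye(2)             # damped normal equations
        step = np.linalg.solve(H, J.T @ r)
        new = params - step
        if np.sum((np.exp(new[0] * x + new[1]) - y) ** 2) < np.sum(r ** 2):
            params, lam = new, lam * 0.5          # accept step, trust more
            if np.linalg.norm(step) < eps:        # termination criterion
                break
        else:
            lam *= 10.0                           # reject step, damp harder
    return params

x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * x + 0.3)                         # ground truth a=0.7, b=0.3
a_est, b_est = lm_fit(x, y, np.array([0.0, 0.0]))
rmse = np.sqrt(np.mean((np.exp(a_est * x + b_est) - y) ** 2))
```

In the real optimization the residuals are reprojection errors over all cameras and pattern positions, and the reported RMSE is computed from them in the same way.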
(6.) Some further options are listed in the following:
Cameras can be excluded from the intrinsic parameter refinement in the final optimization. To exclude all cameras, set the "Refine Intrinsics" drop-down menu to "of no camera". To exclude single cameras, select "according to GUI setting" and uncheck "Refine intrinsics of this camera" in the camera parameter view for the cameras to be excluded.
In the same way, cameras can be excluded from the extrinsics optimization by using the corresponding "Optimize Extrinsics" drop-down menu and the checkboxes "Optimize extrinsics of this camera" in the camera parameter view.
By unchecking "Calculate intrinsics of this camera", a camera will be excluded from the intrinsics calculation in step 4. This can be desirable if a good intrinsic calibration is already known (and loaded).
The initial extrinsic parameters for the LM optimization can either be computed from the transforms estimated in step 4 or taken from the parameters set in the GUI. To switch between these options, use the "Initial Extrinsics" drop-down menu.
If the cameras are mounted on a two-axis rig like this, you can also calibrate the rig axes. To do this, capture images of the pattern from different rig positions and add a rig position file containing a line
$Timestamp $x-steps $y-steps
for every timestamp of the images. Here I added an example.
Then check the "Use rig position file" checkbox in step 3 as well as the "Refine Rig Axes" checkbox in step 5 and proceed as usual.
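A rig position file in this format can be read with a few lines of Python. This sketch assumes whitespace-separated columns and integer step counts; the sample values are made up for illustration:

```python
# Sketch: reading a rig position file with one
# "$Timestamp $x-steps $y-steps" line per image timestamp.
# Assumptions: whitespace-separated columns, integer step counts,
# sample values are invented.
def parse_rig_positions(text):
    """Map each timestamp to its (x_steps, y_steps) rig position."""
    positions = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        timestamp, x_steps, y_steps = line.split()
        positions[timestamp] = (int(x_steps), int(y_steps))
    return positions

sample = """\
000 0 0
001 1500 0
002 1500 800
"""
rig = parse_rig_positions(sample)
```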
In theory, the tool can also use random patterns instead of ChArUco boards by selecting the corresponding "Pattern Type" in step 1. However, the recognition of these patterns proved to be unreliable and heavily dependent on the level of detail. I therefore currently advise against using random patterns until I have found and fixed the issue.