Abstracts Track 2024


Area 1 - Computer Vision

Nr: 46
Title:

Automation of Camera Calibration Using Display with Virtual Patterns

Authors:

Yuki Naganawa, Haruki Nogami, Yamato Kanetaka and Norishige Fukushima

Abstract: Camera calibration is a foundation of computer vision: it determines the correspondence between the 3D world and the 2D image. Calibration usually requires correspondences between points captured by a camera and points of known 3D geometry. In current practice, a 2D planar pattern is used, and its position must be changed many times; this movement of equipment is time-consuming and labor-intensive. This paper proposes an automated photogrammetric calibration method using a display that actively changes its screen content, maintaining accuracy while keeping the equipment fixed in place. Zhang's calibration method, currently the most widely used, captures multiple images of a planar pattern and computes the camera parameters from the correspondence between known 2D pattern points and their projections in the image. Since planar patterns can be prepared inexpensively, they are widely preferred over conventional 3D patterns for convenience. The 2D pattern method yields two equations for each pattern image; however, images captured from the same relative pose provide no new information, so the camera or the pattern must be repositioned for every shot to recover all parameters, which increases the capturing cost. We therefore eliminate pattern movement by rendering virtual patterns on a display. Our method assumes a camera position and projects a virtually generated pattern toward that position. We construct the projection model by treating the relationship between the camera and the screen as an unknown extrinsic parameter and incorporating the known relationship between the virtual pattern and the screen into the equations. By including the camera's position information in the calculation, we obtain the information needed to estimate the parameters even when the camera and display keep the same positional relationship. There is thus no need to capture images from different positions, and the method yields two equations per image, as in Zhang's method. The proposed approach inherently involves errors in the assumed camera position, which we evaluate in our experiments. We conducted experiments in computer simulation, measuring the root-mean-square error (RMSE) over 100 trials with a randomly generated pattern in each trial; the actual camera position and angle were perturbed by a small random error from the assumed pose. The results show a maximum re-projection error of about 0.5 pixels, and 80% of the results fall within 0.05 pixels; that is, if the error in the camera position is small enough, the re-projection error is sufficiently small. In conclusion, our method calibrates cameras with sufficient accuracy as long as the assumed position is reasonably accurate. Its limitations are the sensitivity to camera installation error and the difficulty of placing the camera at the assumed position, which we will address in future work.
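
As background for the "two equations for each pattern image" above, Zhang's formulation is standard: each image of the planar pattern determines a homography H up to scale, and the orthonormality of the first two rotation columns yields two constraints on the image of the absolute conic. In LaTeX notation:

    H = [\, h_1 \;\; h_2 \;\; h_3 \,] = \lambda\, K \, [\, r_1 \;\; r_2 \;\; t \,], \qquad \omega = K^{-\top} K^{-1},

    h_1^{\top} \omega\, h_2 = 0, \qquad h_1^{\top} \omega\, h_1 = h_2^{\top} \omega\, h_2 .

Three or more views in general position therefore suffice to solve for the five intrinsic parameters; the proposed method keeps the same two-constraints-per-image structure, but the additional views come from re-rendered virtual patterns rather than from physical motion.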
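
A minimal simulation sketch of the evaluation described above (not the authors' implementation; all numeric values, noise levels, and helper names are illustrative assumptions): the display is modelled as the plane z = 0, virtual patterns are "rendered" by intersecting rays from the assumed camera centre with that plane, the real camera pose is perturbed by a small random installation error, and the RMS re-projection error of a standard Zhang-style calibration (cv2.calibrateCamera in OpenCV) is collected over 100 trials.

import numpy as np
import cv2

rng = np.random.default_rng(0)

# Ground-truth intrinsics of the simulated camera (illustrative values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)                       # no lens distortion in this sketch
image_size = (640, 480)

# 9x6 planar pattern, 30 mm pitch, z = 0 in the pattern frame, centred.
gx, gy = np.meshgrid(np.arange(9, dtype=np.float32),
                     np.arange(6, dtype=np.float32))
pattern = np.stack([gx.ravel() * 30.0, gy.ravel() * 30.0,
                    np.zeros(54, np.float32)], axis=1)
pattern -= pattern.mean(axis=0)

def rot(rvec):
    """Rotation vector -> rotation matrix."""
    return cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))[0]

# Assumed camera pose w.r.t. the display; the display plane is z = 0.
R_a = rot([0.10, -0.05, 0.0])
t_a = np.array([0.0, 0.0, 600.0])
C = -R_a.T @ t_a                         # camera centre in display coordinates

errors = []
for _ in range(100):                     # 100 random trials, as in the abstract
    # Actual pose = assumed pose + a small random installation error.
    R_t = rot(rng.normal(0.0, 1e-3, 3)) @ R_a
    t_t = t_a + rng.normal(0.0, 0.5, 3)

    obj_list, img_list = [], []
    for _ in range(10):                  # 10 virtual pattern poses per trial
        # Random virtual pose of the pattern in the *assumed* camera frame.
        R_k = rot(rng.uniform(-0.3, 0.3, 3))
        t_k = np.array([rng.uniform(-60, 60), rng.uniform(-30, 30),
                        rng.uniform(600, 900)])
        X_cam = pattern @ R_k.T + t_k    # virtual 3D points (assumed frame)
        X_disp = (X_cam - t_a) @ R_a     # same points in display coordinates

        # Render: intersect rays from the assumed centre C through the
        # virtual points with the display plane z = 0 (display extent is
        # not enforced in this sketch).
        s = C[2] / (C[2] - X_disp[:, 2])
        D = C + s[:, None] * (X_disp - C)

        # The real (perturbed) camera photographs the rendered display.
        rvec_t, _ = cv2.Rodrigues(R_t)
        img, _ = cv2.projectPoints(D.reshape(-1, 1, 3), rvec_t, t_t, K, dist)
        obj_list.append(pattern)
        img_list.append(img.astype(np.float32))

    # Zhang-style calibration on the virtual correspondences; the first
    # return value of cv2.calibrateCamera is the RMS re-projection error.
    rms, K_est, dist_est, _, _ = cv2.calibrateCamera(
        obj_list, img_list, image_size, None, None)
    errors.append(rms)

errors = np.array(errors)
print(f"max RMS error: {errors.max():.3f} px, "
      f"80th percentile: {np.percentile(errors, 80):.3f} px")

If the pose perturbation is set to zero, the generated correspondences are exactly those of physical planar patterns posed at (R_k, t_k), so the calibration recovers K exactly and the RMS error vanishes, consistent with the abstract's claim that accuracy hinges on how well the assumed camera position matches the real one.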