Yijun Xiao, Robert B. Fisher

We have developed a camera calibration process that estimates the intrinsic parameters of a camera, including the radial and tangential lens distortion parameters. There are two novel contributions:

- An ellipse extraction algorithm that gives more accurate estimates of the ellipse center.
- A correction for the printing drift that occurs on laser-printer calibration charts.

This research is based on calibration charts in which the centers of mass of the features serve as the calibration points. Many people use checkerboard grids. In our opinion, circular feature grids are better: the corner points of a checkerboard grid can be estimated to about 0.1 pixel accuracy, whereas the center of a circular feature can be estimated to about 0.01 pixel accuracy.

This webpage explains how we calibrate a camera using a planar calibration chart. The chart we use for calibration looks something like this:

The chart is made by attaching a laser-printed paper to a planar surface. The circles on the paper provide known positions in 3D. When a camera looks at this chart from a few different angles, the camera's intrinsic and extrinsic parameters can be calculated from the image observations. The idea of calibrating a camera using a mono-plane chart has been explored by several researchers [1-3] and the theory has been well laid out. What we improve here is the practice. We solve two problems: 1) how to accurately extract the projected circles from images of the calibration chart; 2) how to compensate for the printing shift of the circles on the calibration chart.
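The plane-based theory in [1-3] rests on the fact that a planar chart maps to the image by a 3x3 homography. As a minimal illustration of that mapping (a NumPy sketch of the standard DLT estimate, not the authors' Matlab code), given four or more chart-to-image point correspondences:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H with
    dst ~ H * src, from N >= 4 point correspondences (Nx2 arrays)."""
    rows = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With one such homography per view of the chart, the intrinsic and extrinsic parameters follow by the methods of [1-3].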

**Accurate Ellipse Extraction**: The task looks simple, since there are already many ellipse detection methods in the computer vision literature. However, when accuracy matters, we have to be very careful about the method we choose. Here we propose a method that achieves 0.01 pixel accuracy on synthetic data. The idea is to optimize an analytical ellipse on the image plane that maximizes the intensity difference between the inside and outside of the ellipse. The process is illustrated by the following figure. (Click to enlarge. The blue line is the fitted ellipse.)

The process starts from a rough estimate of the position, orientation and shape parameters (semi-axes) of the ellipse. It calculates the mean values of the intensities in the outer and inner belt areas of the ellipse. A belt is the area between the ellipse and another concentric ellipse (one that shares the same center, eccentricity and orientation). As illustrated above, the blue curve is the ellipse, the area between the red and blue curves is its outer belt, and the area between the green and blue curves is its inner belt. The ellipse that maximizes the difference between the mean intensity values of the two belts is considered to be the optimal ellipse that can be estimated from the image data. For a more technical description of the method, please refer to our paper [1]. The Matlab code is here.
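As a hedged sketch of this objective (NumPy rather than the released Matlab code; the belt half-width `w` is an assumed parameter), a candidate ellipse can be scored as follows, with the optimizer then searching over the five ellipse parameters:

```python
import numpy as np

def belt_contrast(img, cx, cy, a, b, theta, w=0.2):
    """Belt-based ellipse objective: mean intensity of the outer belt
    minus mean intensity of the inner belt.  The belts are bounded by
    concentric ellipses scaled by (1 - w) and (1 + w), which share the
    same center, eccentricity and orientation as the candidate."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    dx, dy = xs - cx, ys - cy
    c, s = np.cos(theta), np.sin(theta)
    u = (c * dx + s * dy) / a       # coordinates in the ellipse frame,
    v = (-s * dx + c * dy) / b      # normalised so the ellipse is d == 1
    d = np.sqrt(u**2 + v**2)
    inner = (d >= 1 - w) & (d < 1)
    outer = (d >= 1) & (d < 1 + w)
    return img[outer].mean() - img[inner].mean()
```

For a dark dot on a light background, this contrast peaks when the candidate ellipse sits exactly on the dot boundary, which is what makes sub-pixel localization possible.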

One question about applying this method is how to initialize the ellipse. For this, we first extract the black dots (illustrated in the figures above) using blob detection techniques and use the centroids of the black dots as a first estimate of the ellipse centers to calibrate the camera. An accurate extraction of the black dots is not required at this stage; the estimated dot centroids can be a few pixels off. The next step is to project the circles on the calibration chart back to the image plane, giving a coarse estimate of the positions, orientations and shape parameters of the corresponding ellipses. These estimated ellipse parameters initialize the optimization that maximizes the inside/outside intensity difference for each ellipse.
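The coarse seed can be very simple. For instance (a NumPy sketch under the assumption of grey values in [0, 1]; the real pipeline uses standard blob detection), thresholding a window around a detected blob and taking the centroid of its dark pixels is accurate to well within the few-pixel tolerance this stage needs:

```python
import numpy as np

def coarse_dot_centroid(window, thresh=0.5):
    """Rough (few-pixel) centroid of the dark dot in an image window.
    This only seeds the back-projection that initialises the ellipse
    optimisation, so high accuracy is not needed here."""
    ys, xs = np.nonzero(window < thresh)
    return xs.mean(), ys.mean()
```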

**Printing Shift Compensation**: People often assume that patterns printed on paper by a laser printer are error-free. We found this is not true. For the calibration chart illustrated above, there is a small vertical shift for each row of circles. We calculated the shifts of the circles for all the rows and compared the shifts derived from four different cameras. Surprisingly, the shifts are quite consistent across cameras, which implies a systematic error that propagates into the camera calibration. See the figure below:

In the Matlab code, we provide a function

`dy = find_shift_m()`

that calculates the shifts of the circles on the calibration chart using data from the four cameras. The calibration data files included with the code were produced using the ellipse extraction method described above.
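The computation behind such a function amounts to averaging, per printed row, the vertical residual between the observed circle centers and their nominal grid positions. A minimal NumPy sketch (hypothetical argument names, not the signature of the released Matlab code):

```python
import numpy as np

def per_row_shift(nominal_y, observed_y, row_ids):
    """Mean vertical residual (observed - nominal) for each row of
    circles: an estimate of the printing drift of that row, which can
    then be subtracted from the nominal control-point positions."""
    nominal_y = np.asarray(nominal_y, float)
    observed_y = np.asarray(observed_y, float)
    row_ids = np.asarray(row_ids)
    return {int(r): float(np.mean(observed_y[row_ids == r] -
                                  nominal_y[row_ids == r]))
            for r in np.unique(row_ids)}
```

Once estimated, the per-row shifts correct the control points before the calibration is re-run.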

This work was done in the ChiRoPing project, funded by the EC's IST programme, STREP project 215370, in the ICT Challenge 2: "Cognitive Systems, Interaction, Robotics".

We are grateful to the authors at the University of Oulu who provided the camera calibration Matlab functions used in this research.

[1] Y. Xiao and R. B. Fisher, "Accurate Feature Extraction and Control Point Correction for Camera Calibration with a Mono-Plane Target", Proc. 2010 Int. Conf. on 3D Processing, Visualization and Transmission, Paris, 2010.

[2] J. Heikkilä, "Geometric Camera Calibration Using Circular Control Points", IEEE Trans. Pattern Analysis and Machine Intelligence, 22(10), 2000.

[3] B. Triggs, "Autocalibration from Planar Scenes", Proc. European Conf. on Computer Vision (ECCV), 1998.
