Tsai Camera Calibration

 

Camera calibration and pose estimation are central problems in computer vision, since they underpin many tasks such as stereo vision, structure from motion, robot navigation and change detection [Tsai86, Faugeras93, Fitzgibbon98, Kumar94, Wilson94, Heikkila97, Pollefeys00, Zhang00, Debevec01, Kurazume02].

Camera calibration consists of estimating a model for an uncalibrated camera. The objective is to find the external parameters (position and orientation relative to a world co-ordinate system) and the internal parameters of the camera (principal point or image centre, focal length and distortion coefficients). One of the most widely used calibration techniques is the one proposed by Tsai [Tsai86]. It requires corresponding 3D point co-ordinates and 2D pixel co-ordinates in the image, and uses a two-stage technique to compute, first, the position and orientation and, second, the internal parameters of the camera.
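
As an illustration of the data involved, the Python sketch below calibrates a camera from such 3D/2D correspondences using OpenCV. Note that OpenCV's calibrateCamera implements a Zhang-style technique [Zhang00] rather than Tsai's two-stage algorithm, but the inputs (corresponding 3D points and 2D pixels) and outputs (internal and external parameters) are of the same kind; the chessboard geometry and the image file names are placeholders.

    import numpy as np
    import cv2  # OpenCV: Zhang-style calibration, not Tsai's two-stage method

    # One entry per view: the 3D target points (world frame) and the detected 2D pixels.
    object_points = []   # list of (N, 3) float32 arrays
    image_points = []    # list of (N, 1, 2) float32 arrays

    # Hypothetical 7x6 planar chessboard target with 25 mm squares (Z = 0 on the target plane).
    board = np.zeros((7 * 6, 3), np.float32)
    board[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2) * 25.0

    for fname in ["view1.png", "view2.png", "view3.png"]:   # placeholder image names
        img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(img, (7, 6))
        if found:
            object_points.append(board)
            image_points.append(corners)                    # (N, 1, 2) pixel positions

    # Joint estimation of the internal parameters (camera matrix K, distortion coefficients)
    # and, for every view, the external parameters (rotation and translation).
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, img.shape[::-1], None, None)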

Many computer vision applications assume an already calibrated camera, i.e. a model of the internal camera parameters is available. This model can be provided by the manufacturer or computed using a known target (usually a planar chessboard) [Heikkila97, Zhang00]. In this situation, called pose estimation, only the six parameters describing the position and orientation of the camera need to be recovered. Some methods for pose estimation, as well as a sensitivity analysis, can be found in [Kumar94].

The Tsai camera calibration method is one of the most popular. It is suitable for a wide range of applications since it can deal with both coplanar and non-coplanar points, and it allows the internal and external parameters to be calibrated separately. This option is particularly useful because the internal parameters of the camera, when known, can be fixed and only pose estimation carried out.
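
For instance, with the internal parameters fixed, the pose (R and T) can be recovered from a handful of 3D/2D correspondences. The Python sketch below uses OpenCV's solvePnP, a generic pose-estimation routine rather than Tsai's own formulation; the camera matrix, the 3D points and the reference pose used to synthesise the observations are assumed values for illustration only.

    import numpy as np
    import cv2

    # Known internal parameters (assumed values, for illustration only).
    f, cx, cy = 800.0, 320.0, 240.0                      # focal length and image centre, in pixels
    K = np.array([[f, 0.0, cx],
                  [0.0, f, cy],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)                                   # no lens distortion in this example

    # Six non-coplanar 3D points in the world co-ordinate system.
    pts_3d = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0],
                       [0, 1, 0], [0, 0, 1], [1, 0, 1]], dtype=np.float64)

    # Synthesise the observed pixels by projecting with a reference pose ...
    rvec_true = np.array([0.1, -0.2, 0.05])
    tvec_true = np.array([0.3, -0.1, 5.0])
    pts_2d, _ = cv2.projectPoints(pts_3d, rvec_true, tvec_true, K, dist)

    # ... and recover only the six external parameters from the correspondences.
    ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist)
    R, _ = cv2.Rodrigues(rvec)                           # 3x3 rotation matrix
    print("R =\n", R)
    print("T =", tvec.ravel())                           # close to tvec_true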

An implementation of the Tsai calibration algorithm by Reg Willson is available online.

 

Tsai Camera Model

The Tsai model is based on the pinhole perspective projection model, and the following eleven parameters are to be estimated:

f - Focal length of camera,

k - Radial lens distortion coefficient,

Cx, Cy - Co-ordinates of centre of radial lens distortion,

Sx - Scale factor to account for any uncertainty due to imperfections in hardware timing for scanning and digitisation,

Rx, Ry, Rz - Rotation angles for the transformation between the world and camera co-ordinates,

Tx, Ty, Tz - Translation components for the transformation between the world and camera co-ordinates.
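
One possible way of grouping these eleven parameters in code is sketched below (Python). The field names follow the notation above, while the split into internal and external parameters and the choice of units are our own.

    from dataclasses import dataclass

    @dataclass
    class TsaiParameters:
        """The eleven parameters of the Tsai camera model (names follow the text)."""
        # Internal parameters
        f: float    # focal length
        k: float    # radial lens distortion coefficient
        Cx: float   # x co-ordinate of the centre of radial lens distortion (pixels)
        Cy: float   # y co-ordinate of the centre of radial lens distortion (pixels)
        Sx: float   # horizontal scale factor (scanning/digitisation uncertainty)
        # External parameters
        Rx: float   # rotation angle about the X axis (radians)
        Ry: float   # rotation angle about the Y axis (radians)
        Rz: float   # rotation angle about the Z axis (radians)
        Tx: float   # translation along X
        Ty: float   # translation along Y
        Tz: float   # translation along Z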

 

Figure 1: Tsai Camera re-projection model with perspective projection and radial distortion.

 

The transformation from world co-ordinates (Xw,Yw,Zw) to camera (image) co-ordinates (Xi,Yi,Zi) involves the extrinsic parameters of the camera (translation T and rotation R) and is given by:

(Xi, Yi, Zi)^T = R (Xw, Yw, Zw)^T + T

where R and T characterize the 3D transformation from the world to the camera co-ordinate system and are defined as follows:

R = R(Rx, Ry, Rz)  (3x3 rotation matrix),   T = (Tx, Ty, Tz)^T  (3x1 translation vector)

with

(Rx,Ry,Rz) the Euler angles of the rotation around the three axes.

(Tx,Ty,Tz) the 3D translation parameters from world to image co-ordinates.
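
The following Python sketch builds R from the three Euler angles and applies the rigid transformation above. The composition order of the elementary rotations (here Rz·Ry·Rx) is a convention and may differ between implementations, so it should be checked against the calibration code actually used.

    import numpy as np

    def rotation_from_euler(rx, ry, rz):
        """3x3 rotation matrix from the Euler angles (Rx, Ry, Rz), in radians.
        The order R = Rz(rz) @ Ry(ry) @ Rx(rx) is an assumed convention."""
        ca, sa = np.cos(rx), np.sin(rx)
        cb, sb = np.cos(ry), np.sin(ry)
        cg, sg = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
        Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
        Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def world_to_camera(p_world, rx, ry, rz, t):
        """Apply the rigid transform (Xi, Yi, Zi)^T = R (Xw, Yw, Zw)^T + T."""
        R = rotation_from_euler(rx, ry, rz)
        return R @ np.asarray(p_world, dtype=float) + np.asarray(t, dtype=float)

    # Example: a world point one metre in front of a camera translated along Z.
    print(world_to_camera([0.0, 0.0, 1.0], 0.0, 0.0, 0.0, [0.0, 0.0, 2.0]))  # -> [0. 0. 3.]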

The transformation from a 3D position in the image (camera) co-ordinate frame to the image plane is then computed through the following steps (see Figure 1; the complete chain is sketched in code after these steps):

Transformation from the 3D co-ordinates in the image (camera) frame (Xi,Yi,Zi) to undistorted image plane co-ordinates (Xu,Yu):

Xu = f * Xi / Zi,   Yu = f * Yi / Zi
Transformation from undistorted (Xu,Yu) to distorted (Xd,Yd) image co-ordinates:

Xd * (1 + k * r^2) = Xu,   Yd * (1 + k * r^2) = Yu

where r = sqrt(Xd^2 + Yd^2), and k is the lens distortion coefficient.

Transformation from distorted co-ordinates in the image plane (Xd,Yd) to the final image co-ordinates (Xf,Yf):

Xf = Sx * Xd / dx + Cx,   Yf = Yd / dy + Cy
with

(dx,dy): distance between adjacent sensor elements in the X and Y directions. dx and dy are fixed parameters of the camera; they depend only on the CCD size and the image resolution. (Xf,Yf) is the final pixel position in the image.
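
Putting the three steps together, the Python sketch below projects a 3D point given in the image (camera) co-ordinate frame to final pixel co-ordinates. Since the model expresses Xu in terms of the distorted co-ordinates, the undistorted-to-distorted step is solved here with a simple fixed-point iteration; this is an implementation choice, not part of the Tsai model, and the numerical values in the example are arbitrary.

    def tsai_project(p_cam, f, k, Cx, Cy, Sx, dx, dy, n_iter=20):
        """Project a point (Xi, Yi, Zi) in the image (camera) co-ordinate frame
        to final pixel co-ordinates (Xf, Yf), following the three steps above."""
        Xi, Yi, Zi = p_cam

        # Step 1: perspective projection onto the undistorted image plane.
        Xu = f * Xi / Zi
        Yu = f * Yi / Zi

        # Step 2: radial distortion.  Solve Xd (1 + k r^2) = Xu with r^2 = Xd^2 + Yd^2
        # by iterating Xd <- Xu / (1 + k r^2), starting from the undistorted values.
        Xd, Yd = Xu, Yu
        for _ in range(n_iter):
            r2 = Xd * Xd + Yd * Yd
            Xd = Xu / (1.0 + k * r2)
            Yd = Yu / (1.0 + k * r2)

        # Step 3: sensor-plane to pixel co-ordinates.
        Xf = Sx * Xd / dx + Cx
        Yf = Yd / dy + Cy
        return Xf, Yf

    # Example with assumed values: f in mm, dx/dy in mm per pixel, k in mm^-2.
    print(tsai_project((0.1, -0.05, 2.0), f=8.0, k=0.002,
                       Cx=320.0, Cy=240.0, Sx=1.0, dx=0.01, dy=0.01))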

References

[Debevec01] P. Debevec,
Reconstructing and Augmenting Architecture with Image-Based Modelling, Rendering and Lighting.
Proceedings of the International Symposium on Virtual Architecture (VAA’01), pp. 1-10, Dublin 21-22 June 2001.
[Faugeras93] O. Faugeras,
Three-Dimensional Computer Vision.
MIT Press, 1993.
[Fitzgibbon98] A.W. Fitzgibbon, A. Zisserman,
Automatic 3D Model Acquisition and Generation of New Images from Video Sequences.
In Proceedings of the European Signal Processing Conference (EUSIPCO '98), Rhodes, Greece, pp. 1261-1269, 1998.
[Heikkila97] J. Heikkila, O. Silven,
A Four-Step Camera Calibration Procedure with Implicit Image Correction.
In Proc. of IEEE Computer Vision and Pattern Recognition, pp. 1106-1112, 1997.
[Kumar94] R. Kumar, A. Hanson,
Robust Methods for Estimating Pose and a Sensitivity Analysis.
CVGIP: Image Understanding, Vol. 60, No. 3, pp. 313-342, 1994.
[Kurazume02] R. Kurazume, K. Nishino, Z. Zhang, and K. Ikeuchi,
Simultaneous 2D images and 3D geometric model registration for texture mapping utilizing reflectance attribute.
Proc. of Fifth Asian Conference on Computer Vision (ACCV), Vol. I, pp. 99-106, January 2002.
[Pollefeys00] M. Pollefeys,
3D Modelling from Images,
Tutorial notes, in conjunction with ECCV 2000, Dublin, Ireland, June 2000.
[Tsai86] R.Y. Tsai,
An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision.
Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, FL, pp. 364-374, 1986.
[Tsai87] R.Y. Tsai,
A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses.
IEEE Journal of Robotics and Automation, Vol. 3, No. 4, pp. 323-344, August 1987.
[Wilson94] R.G. Willson,
Modeling and Calibration of Automated Zoom Lenses.
Ph.D. thesis, Department of Electrical and Computer Engineering, Carnegie Mellon University, January 1994.
[Zhang00] Z. Zhang,
A Flexible New Technique for Camera Calibration.
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, pp. 1330-1334, 2000.

Author: Paulo Dias at IEETA/Universidade de Aveiro, Portugal - 05/11/2003