Next: Stratification Up: Uncalibrated Euclidean Reconstruction Previous: The reconstruction problem


# Autocalibration

In many practical cases the intrinsic parameters are unknown, and the only information that can be extracted from a sequence is a set of point correspondences, from which a set of fundamental matrices can be computed. Autocalibration consists in computing the intrinsic parameters, or, more generally, Euclidean information, starting from the fundamental matrices (or, equivalently, from the point correspondences). In this section we will see which constraints are available for autocalibration.

## Two-views constraints

In the uncalibrated case, the fundamental matrix can be computed from point correspondences. As we saw in Section 2.1, the seven parameters of the fundamental matrix are available to describe the geometric relationship between the two views (the epipolar geometry), while the five parameters of the essential matrix are needed to describe the rigid displacement; thus at most two independent constraints are available for the computation of the intrinsic parameters from the fundamental matrix.

These two constraints must come from the characterization of essential matrices given by Theorem 2.1. Indeed, the condition that the matrix E has a zero singular value and two non-zero equal singular values is equivalent to the following conditions, found by Huang and Faugeras [25]:

\[ \det E = 0 , \qquad \bigl(\mathrm{tr}(E E^\top)\bigr)^2 - 2\,\mathrm{tr}\bigl((E E^\top)^2\bigr) = 0 \qquad (20) \]

The first condition is automatically satisfied, since det F = 0 implies det E = 0; the second condition can be decomposed [29] into two independent polynomial relations, which are equivalent to the two equations found by Trivedi [47].
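As a minimal numerical sketch (the rotation R and translation t below are arbitrary choices, not taken from the text), one can verify that a matrix of the form E = [t]x R satisfies both Huang-Faugeras conditions:

```python
import numpy as np

# Check the Huang-Faugeras conditions (20) on a synthetic essential matrix
# E = [t]x R: det E = 0 and the trace condition vanishes, because E has
# one zero and two equal non-zero singular values.
def skew(t):
    """Cross-product matrix [t]x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, c, -s],
              [0.0, s, c]])          # rotation about the x-axis
t = np.array([1.0, 0.5, -0.2])
E = skew(t) @ R

EEt = E @ E.T
residual = np.trace(EEt) ** 2 - 2.0 * np.trace(EEt @ EEt)
print(abs(np.linalg.det(E)) < 1e-9, abs(residual) < 1e-9)  # → True True
```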

This is an algebraic interpretation of the so-called rigidity constraint, namely the fact that for any fundamental matrix F there exist two intrinsic parameter matrices A and A' and a rigid motion, represented by a rotation R and a translation t, such that

\[ F = A'^{-\top} [\,t\,]_{\times} R \, A^{-1} . \]
By exploiting this constraint, Hartley [13] devised an algorithm to factorize the fundamental matrix that yields the five motion parameters and the two different focal lengths. He also pointed out that no more information can be extracted from the fundamental matrix without making additional assumptions (for example, that the intrinsic parameters are constant).

A geometric interpretation of the rigidity constraint leads to the Kruppa equations, which will be briefly reviewed in Section 4.3.

## N-views constraints

The case of three views is not a straightforward generalization of the two-view case. The epipolar geometry can be described using the canonical decomposition introduced by Luong [30] or the trifocal tensor, both of which use the minimal number of parameters, which turns out to be 18. The rigid displacement is described by 11 parameters: 6 for the two rotations, 4 for the two directions of translation, and 1 for the ratio of translation norms. Therefore, three views give seven constraints on the intrinsic parameters. If these are constant, three views are sufficient to recover all five intrinsic parameters.

In the general case of n views, Luong demonstrated that at least 11n - 15 parameters are needed to describe the epipolar geometry, using his canonical decomposition. The rigid displacement is described by 6n - 7 parameters: 3(n - 1) for the rotations, 2(n - 1) for the directions of translation, and n - 2 for the ratios of translation norms. There are, thus, 5n - 8 constraints available for computing the intrinsic parameters.
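The counting above can be checked with a few lines (a trivial sketch; the formulas are those quoted from Luong):

```python
# Constraint counting from the text: the epipolar geometry of n views has
# 11n - 15 parameters, the rigid displacement 6n - 7; the difference is
# the number of constraints available on the intrinsic parameters.
def intrinsic_constraints(n):
    epipolar = 11 * n - 15           # Luong's canonical decomposition
    displacement = 6 * n - 7         # rotations, translation directions, ratios
    return epipolar - displacement   # = 5n - 8

print([intrinsic_constraints(n) for n in (2, 3, 4)])  # → [2, 7, 12]
```

Note that n = 2 recovers the two constraints of the two-view case, and n = 3 the seven constraints discussed above.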

As pointed out in [30], the n(n - 1)/2 fundamental matrices are not independent; hence the n(n - 1) constraints like (20) that can be derived from them are not independent either. Nevertheless, they can all be used for computing the intrinsic parameters, since redundancy improves stability.

## Kruppa equations

With a minimum of three displacements, we can obtain the internal parameters of the camera using a system of polynomial equations due to Kruppa [26], derived from a geometric interpretation of the rigidity constraint [31,8].

The unknown in the Kruppa equations is the matrix K = A A^T, called the Kruppa coefficient matrix, which represents the dual of the image of the absolute conic (see [6] for details). From K one can easily obtain the intrinsic parameters by means of a Cholesky factorization (K is symmetric and positive definite), or in closed form:

\[ K = \begin{bmatrix} k_1 & k_2 & k_3 \\ k_2 & k_4 & k_5 \\ k_3 & k_5 & 1 \end{bmatrix}, \qquad u_0 = k_3, \quad v_0 = k_5, \quad \alpha_v = \sqrt{k_4 - v_0^2}, \quad \gamma = \frac{k_2 - u_0 v_0}{\alpha_v}, \quad \alpha_u = \sqrt{k_1 - \gamma^2 - u_0^2} \qquad (21) \]
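As a sketch (with an arbitrary upper-triangular intrinsic matrix A, my own test values), the closed form can be checked by building K = A A^T and inverting it back to the intrinsic parameters:

```python
import numpy as np

# Recover the intrinsic parameters from the Kruppa coefficient matrix
# K = A A^T; k1..k5 name the entries of K as in the text.
A_true = np.array([[800.0, 2.0, 320.0],
                   [0.0, 780.0, 240.0],
                   [0.0, 0.0, 1.0]])   # arbitrary test intrinsics
K = A_true @ A_true.T
k1, k2, k3 = K[0, 0], K[0, 1], K[0, 2]
k4, k5 = K[1, 1], K[1, 2]

u0, v0 = k3, k5                        # principal point
alpha_v = np.sqrt(k4 - v0 ** 2)        # vertical focal length
gamma = (k2 - u0 * v0) / alpha_v       # skew
alpha_u = np.sqrt(k1 - gamma ** 2 - u0 ** 2)   # horizontal focal length

A_rec = np.array([[alpha_u, gamma, u0],
                  [0.0, alpha_v, v0],
                  [0.0, 0.0, 1.0]])
print(np.allclose(A_rec, A_true))  # → True: the closed form inverts K = A A^T
```

Note that setting the skew gamma to zero in the expression for k2 immediately gives the constraint k2 = k3 k5 used below.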

The Kruppa equations were rediscovered and derived by Maybank and Faugeras [31]. Recently, Hartley [16] provided a simpler form, based on the Singular Value Decomposition of the fundamental matrix. Let F be written as F = U D V^T (its SVD), with D = diag(r, s, 0), and let u_i and v_i denote the columns of U and V, respectively.

Then the Kruppa equations read (the derivation can be found in [16]):

\[ \frac{v_2^\top K v_2}{r^2\, u_1^\top K u_1} \;=\; \frac{-\,v_2^\top K v_1}{r s\, u_1^\top K u_2} \;=\; \frac{v_1^\top K v_1}{s^2\, u_2^\top K u_2} \qquad (22) \]

From (22) one obtains two independent quadratic equations in the five parameters of K for each fundamental matrix (i.e., for each displacement). Moreover, assuming that the skew is zero, which is a good approximation for usual cameras, one has the additional constraint k3 k5 = k2 [28]. There are basically two classes of methods for solving the resulting system of equations (assuming that more than three views are available) [28,50]:

• Partition the equation set into groups of five and solve each group with a globally convergent technique for systems of polynomial equations, like homotopy continuation methods [35,42]. Each system will give a set of solutions, and the solution common to all of them is chosen. This method, presented in [28], has the great advantage of global convergence, but is computationally expensive. Moreover, the number of systems to be solved increases rapidly with the number of displacements.
• The over-constrained system of equations is solved with a non-linear least-squares technique (Levenberg-Marquardt [12], or the Iterated Extended Kalman Filter [32]). The problem with non-linear least squares is that a starting point close to the solution is needed. This can be obtained by applying globally convergent methods to subsets of equations (as in the previous case), or by making the additional assumption that the principal point (u0, v0) is at the center of the image, thereby obtaining (from just one fundamental matrix) two quadratic equations in the two variables k1 and k4, which can be solved analytically [16]. This technique is used in [50].
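Whatever solver is used, the quantities driven to zero are the differences between the three members of (22). A sketch (synthetic camera and motion, arbitrary values of my own) checking that these residuals vanish at the true Kruppa matrix K = A A^T:

```python
import numpy as np

# Build a fundamental matrix from known intrinsics A and motion (R, t),
# then evaluate the three members of the SVD form of the Kruppa equations
# at the true K = A A^T; they must coincide.
def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

A = np.array([[700.0, 0.0, 300.0],
              [0.0, 700.0, 200.0],
              [0.0, 0.0, 1.0]])             # constant intrinsics, both views
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q       # force a proper rotation
t = np.array([0.7, -0.2, 1.0])
F = np.linalg.inv(A).T @ skew(t) @ R @ np.linalg.inv(A)   # F = A^-T E A^-1

U, d, Vt = np.linalg.svd(F)
r, s = d[0], d[1]                           # third singular value is ~0
u1, u2 = U[:, 0], U[:, 1]
v1, v2 = Vt[0], Vt[1]                       # rows of Vt are columns of V
K = A @ A.T                                 # Kruppa coefficient matrix

lhs = (v2 @ K @ v2) / (r ** 2 * (u1 @ K @ u1))
mid = -(v2 @ K @ v1) / (r * s * (u1 @ K @ u2))
rhs = (v1 @ K @ v1) / (s ** 2 * (u2 @ K @ u2))
print(np.isclose(lhs, mid) and np.isclose(mid, rhs))
```

In a least-squares method, lhs - mid and mid - rhs (for each fundamental matrix) would form the residual vector, with K parameterized by k1..k5.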

In [28] the authors compare three solving methods: the homotopy continuation method, Levenberg-Marquardt, and the Iterated Extended Kalman Filter. From the simulations reported, it appears that all the methods give comparable results. However, the homotopy continuation method is suitable for the case of few displacements, as it would be difficult to use all the constraints provided by a long sequence, and its computational cost would be too high. Iterative approaches (Levenberg-Marquardt and the Iterated Extended Kalman Filter) are well suited to the case where more displacements are available. The main limitation of all these methods is their sensitivity to noise in the localization of points.

## Mendonça and Cipolla

The Mendonça and Cipolla method for autocalibration [33] is based on the exploitation of the rigidity constraint (via Theorem 2.1). A cost function is designed which takes the intrinsic parameters as arguments and the fundamental matrices as parameters, and returns a positive value proportional to the difference between the two non-zero singular values of the essential matrix. Let F_ij be the fundamental matrix relating views i and j, and let A_i and A_j be the respective intrinsic parameter matrices. Let sigma_1 >= sigma_2 be the non-zero singular values of E_ij = A_i^T F_ij A_j. The cost function is

\[ C(A_1, \dots, A_n) \;=\; \sum_{ij} w_{ij}\, \frac{\sigma_1(E_{ij}) - \sigma_2(E_{ij})}{\sigma_2(E_{ij})} \qquad (23) \]

where wij are normalized weight factors.
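A sketch of this cost for a single pair of views (synthetic A, R, t of my own choosing; all weights w_ij set to 1): at the true intrinsics the two non-zero singular values of E_ij coincide, so the cost vanishes.

```python
import numpy as np

# Cost (23), up to the weights w_ij (taken as 1 here): for each available
# fundamental matrix, form E_ij = A_i^T F_ij A_j and accumulate the
# relative gap between its two largest singular values.
def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def cost(A_list, F_dict):
    total = 0.0
    for (i, j), F in F_dict.items():
        E = A_list[i].T @ F @ A_list[j]
        sigma = np.linalg.svd(E, compute_uv=False)   # sorted, descending
        total += (sigma[0] - sigma[1]) / sigma[1]
    return total

A = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([1.0, 0.0, 0.3])
F01 = np.linalg.inv(A).T @ skew(t) @ R @ np.linalg.inv(A)

print(cost([A, A], {(0, 1): F01}) < 1e-9)  # → True at the true intrinsics
```

Minimizing this cost over the entries of the A_i (with a generic non-linear optimizer) is what drives the estimated essential matrices toward satisfying Theorem 2.1.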

The algorithm shows excellent convergence properties. Remarkably, convergence is achieved in 90% of the cases, even when the true values are perturbed with a relative error of 200%. On the other hand, the intrinsic parameters are recovered with fair, but not excellent, accuracy (5% with 5 views and image noise with a standard deviation of 1.0 pixel).

Andrea Fusiello
2000-03-16