
Explicit Computation

To derive explicit expressions for the matrices D and D' in terms of the fundamental matrix F, let us reconsider the above argument. Let $F=UWV^{t}$ be the Singular Value Decomposition of F. Here, U and V are orthogonal, and W is a diagonal matrix with diagonal values r, s, 0. We can write this as follows:

\begin{displaymath}
F = U
\left(
\begin{array}{ccc}
r & & \\
& s & \\
& & 1
\end{array}
\right)
\left(
\begin{array}{ccc}
0 & -1 & 0\\
1 & 0 & 0\\
0 & 0 & 0
\end{array}
\right)
\left(
\begin{array}{ccc}
0 & 1 & 0\\
-1 & 0 & 0\\
0 & 0 & 1
\end{array}
\right)
V^{t}
\end{displaymath}
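Indeed, the product of the two constant factors is

\begin{displaymath}
\left(
\begin{array}{ccc}
0 & -1 & 0\\
1 & 0 & 0\\
0 & 0 & 0
\end{array}
\right)
\left(
\begin{array}{ccc}
0 & 1 & 0\\
-1 & 0 & 0\\
0 & 0 & 1
\end{array}
\right)
=
\left(
\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 0
\end{array}
\right)
\end{displaymath}

so the right hand side reduces to $U\,\mathrm{diag}(r,s,0)\,V^{t} = UWV^{t} = F$, as required.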

Define

\begin{displaymath}
A'^{t} = U
\left(
\begin{array}{ccc}
r & & \\
& s & \\
& & 1
\end{array}
\right),
\qquad
A =
\left(
\begin{array}{ccc}
0 & 1 & 0\\
-1 & 0 & 0\\
0 & 0 & 1
\end{array}
\right)
V^{t}
\end{displaymath}

Hence $F = A'^{t}F'A$ with A, A' non-singular and F' having the desired canonical form. Applying the transformations $p\to A p$, $p'\to A' p'$ and $F\to F'=A'^{-t}FA^{-1}$, we see that $p'^{t}Fp$ is unchanged and F becomes canonical, so A, A' are the required rectifying transformations. These transformations take $P\to A P$, $P'\to A' P'$ and hence $K\to AK$, $K\to A'K$, so the DIAC $C=KK^{t}$ becomes respectively $D=ACA^{t}$ and $D'=A'CA'^{t}$.
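
For concreteness, the construction can be checked numerically. The following sketch (Python with NumPy; the function name rectifying_transforms and the example values are ours, purely illustrative) builds A and A' from the SVD of a given fundamental matrix, verifies that $A'^{t}F'A$ reproduces F, and then forms D and D' from a candidate calibration K.

\begin{verbatim}
import numpy as np

# Canonical fundamental matrix F' and the constant rotation used above.
F_CANON = np.array([[0., -1., 0.],
                    [1.,  0., 0.],
                    [0.,  0., 0.]])
S = np.array([[ 0., 1., 0.],
              [-1., 0., 0.],
              [ 0., 0., 1.]])

def rectifying_transforms(F):
    """Return (A, A') with F = A'^t F' A, built from the SVD of F."""
    U, w, Vt = np.linalg.svd(F)              # F = U diag(r, s, 0) V^t
    r, s = w[0], w[1]
    A_prime = (U @ np.diag([r, s, 1.0])).T   # A'^t = U diag(r, s, 1)
    A = S @ Vt                               # A    = S V^t
    return A, A_prime

# Synthetic rank-2 fundamental matrix for the check.
rng = np.random.default_rng(0)
U0, w0, Vt0 = np.linalg.svd(rng.standard_normal((3, 3)))
F = U0 @ np.diag([w0[0], w0[1], 0.0]) @ Vt0

A, A_prime = rectifying_transforms(F)
assert np.allclose(A_prime.T @ F_CANON @ A, F)   # F = A'^t F' A

# The DIAC C = K K^t transforms to D = A C A^t and D' = A' C A'^t.
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0.,   0.,   1.]])
C = K @ K.T
D, D_prime = A @ C @ A.T, A_prime @ C @ A_prime.T
\end{verbatim}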

We now explicitly compute the $d_{ij}$ in order to use equation (5.5). Decompose A and A' into rows:

\begin{displaymath}
A =
\left(
\begin{array}{c}
\mathbf{a}_{1}^{t}\\
\mathbf{a}_{2}^{t}\\
\mathbf{a}_{3}^{t}
\end{array}
\right),
\qquad
A' =
\left(
\begin{array}{c}
{\mathbf{a}'}_{1}^{t}\\
{\mathbf{a}'}_{2}^{t}\\
{\mathbf{a}'}_{3}^{t}
\end{array}
\right)
\end{displaymath}

Then $D= ACA^{t}$ implies $d_{ij}=\mathbf{a}_{i}^{t}C\,\mathbf{a}_{j}$, and similarly $d'_{ij}={\mathbf{a}'}_{i}^{t}C\,{\mathbf{a}'}_{j}$, so we have the following explicit form for the Kruppa equations:

\begin{displaymath}
\frac{\mathbf{a}_{1}^{t}C\,\mathbf{a}_{1}}{{\mathbf{a}'}_{1}^{t}C\,{\mathbf{a}'}_{1}}
=
\frac{\mathbf{a}_{1}^{t}C\,\mathbf{a}_{2}}{{\mathbf{a}'}_{1}^{t}C\,{\mathbf{a}'}_{2}}
=
\frac{\mathbf{a}_{2}^{t}C\,\mathbf{a}_{2}}{{\mathbf{a}'}_{2}^{t}C\,{\mathbf{a}'}_{2}}
\end{displaymath} (5.6)

We can write these equations directly in terms of the SVD of the fundamental matrix.

\begin{displaymath}
A' =
\left(
\begin{array}{c}
{\mathbf{a}'}_{1}^{t}\\
{\mathbf{a}'}_{2}^{t}\\
{\mathbf{a}'}_{3}^{t}
\end{array}
\right)
=
\left(
\begin{array}{c}
r\,\mathbf{u}_{1}^{t}\\
s\,\mathbf{u}_{2}^{t}\\
\mathbf{u}_{3}^{t}
\end{array}
\right)
\end{displaymath}

where $\mathbf{u}_{i}$ is the i-th column of U. Similarly,

\begin{displaymath}
A =
\left(
\begin{array}{c}
\mathbf{a}_{1}^{t}\\
\mathbf{a}_{2}^{t}\\
\mathbf{a}_{3}^{t}
\end{array}
\right)
=
\left(
\begin{array}{c}
\mathbf{v}_{2}^{t}\\
-\mathbf{v}_{1}^{t}\\
\mathbf{v}_{3}^{t}
\end{array}
\right)
\end{displaymath}

where $\mathbf{v}_{i}$ is the i-th column of V. From (5.6) we obtain

\begin{displaymath}
\frac{\mathbf{v}_{2}^{t}C\,\mathbf{v}_{2}}{r^{2}\cdot\mathbf{u}_{1}^{t}C\,\mathbf{u}_{1}}
=
\frac{-\mathbf{v}_{2}^{t}C\,\mathbf{v}_{1}}{r s\cdot\mathbf{u}_{1}^{t}C\,\mathbf{u}_{2}}
=
\frac{\mathbf{v}_{1}^{t}C\,\mathbf{v}_{1}}{s^{2}\cdot\mathbf{u}_{2}^{t}C\,\mathbf{u}_{2}}
\end{displaymath}

Our problem has five degrees of freedom. Each pair of images provides two independent constraints, so from three images we can form three pairs which together provide six constraints. This is enough to solve for all the variables in C. However, note that all of the equations are multivariable quadratics in the coefficients of C, which makes the problem quite painful to solve in practice.
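
One pragmatic workaround, shown below as a rough sketch rather than a recipe from the text, is to bypass the algebra and minimize the stacked constraints numerically: parametrize K by its five unknowns, collect the two residuals from each fundamental matrix with the kruppa_residuals function above, and hand the system to a nonlinear least-squares solver such as scipy.optimize.least_squares. The helper names and the suggested initial guess are illustrative assumptions.

\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares
# kruppa_residuals(F, C) as sketched after the previous equation.

def K_from_params(p):
    fx, fy, skew, cx, cy = p
    return np.array([[fx, skew, cx],
                     [0.,   fy, cy],
                     [0.,   0., 1.]])

def solve_calibration(F_list, p0):
    """Estimate the five intrinsic parameters from >= 3 fundamental matrices."""
    def residuals(p):
        K = K_from_params(p)
        C = K @ K.T
        return np.concatenate([kruppa_residuals(F, C) for F in F_list])
    return K_from_params(least_squares(residuals, p0).x)

# Typical initial guess: square pixels, zero skew, principal point at the
# image centre, e.g. p0 = [1000., 1000., 0., 320., 240.].
\end{verbatim}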

The difficulty of such purely algebraic approaches explains why alternative methods have been explored for self calibration. [10] provides one such alternative. In any case, an algebraic solution can only ever provide the essential first step for a more refined bundle adjustment (error minimization) process.

