Next: Binary Images Up: Computer Vision IT412 Previous: Lecture 2


Measurements and Noise

No imaging system is perfect. Noise is introduced into the imaging process by real lenses and cameras, which differ in operation from the pinhole camera model. Moreover, lighting and atmospheric conditions can also affect the resulting image. In addition, digital images suffer deviations in image values introduced by sampling. Thus, measurements are affected by fluctuations in the signal being measured, and these fluctuations are described by some probability distribution, p(x).

Since p(x) is a probability distribution, it always satisfies

\begin{displaymath}
p(x)\geq 0, \mbox{for all $x$} \end{displaymath}

and

\begin{displaymath}
\int_{-\infty}^{\infty} p(x)dx = 1.\end{displaymath}

The mean, or first moment, of the distribution, $\mu$, is given by

\begin{displaymath}
\mu = \frac{ \int_{-\infty}^{\infty} x p(x) dx}
 { \int_{-\infty}^{\infty} p(x)dx}, \end{displaymath}

and since

\begin{displaymath}
\int_{-\infty}^{\infty} p(x) dx = 1, \end{displaymath}

this simplifies to

\begin{displaymath}
\mu = \int_{-\infty}^{\infty} x p(x)dx.\end{displaymath}

The spread of the distribution is measured by the variance, the second central moment:

\begin{displaymath}
{\rm variance} = ({\rm std.\ deviation})^{2} = \sigma^2 =
 \int_{-\infty}^{\infty}(x-\mu)^2 p(x)dx.\end{displaymath}
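The moment formulas above can be checked numerically for a simple example density. The sketch below uses the exponential distribution $p(x) = e^{-x}$ on $[0, \infty)$, whose exact mean and variance are both 1; the choice of distribution and the integration step are illustrative, not from the notes.

```python
import math

# Numeric check of the normalisation, mean, and variance formulas for an
# example density: the exponential distribution p(x) = exp(-x) on [0, inf),
# whose exact mean and variance are both 1.  (Illustrative choice.)

def p(x):
    return math.exp(-x)

dx = 1e-3
xs = [i * dx for i in range(int(30.0 / dx))]  # truncate the tail at x = 30

total = sum(p(x) * dx for x in xs)                # normalisation, ~ 1
mu = sum(x * p(x) * dx for x in xs)               # first moment, ~ 1
var = sum((x - mu) ** 2 * p(x) * dx for x in xs)  # second central moment, ~ 1

print(round(total, 2), round(mu, 2), round(var, 2))
```

A simple left Riemann sum suffices here because the density is smooth and decays quickly.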

The cumulative probability distribution

\begin{displaymath}
P(x) = \int_{-\infty}^{x} p(t) dt \end{displaymath}

tells us the probability that the measurement will be less than or equal to x. Thus, the probability density function is just the derivative of the cumulative probability distribution, that is, p(x) = P'(x).
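The relation p(x) = P'(x) can be verified by finite differences for a distribution whose cumulative form is known in closed form. The sketch below uses the exponential distribution, where $P(x) = 1 - e^{-x}$; the distribution and sample points are illustrative choices.

```python
import math

# Finite-difference check of p(x) = P'(x), using the exponential
# distribution where the cumulative distribution has the closed form
# P(x) = 1 - exp(-x).  (Example distribution chosen for illustration.)

def P(x):  # cumulative probability distribution
    return 1.0 - math.exp(-x)

def p(x):  # probability density
    return math.exp(-x)

h = 1e-6
for x in (0.5, 1.0, 2.0):
    deriv = (P(x + h) - P(x - h)) / (2.0 * h)  # central difference ~ P'(x)
    assert abs(deriv - p(x)) < 1e-6
print("P'(x) matches p(x) at the sampled points")
```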

One way to improve accuracy is to average several measurements, on the assumption that the `noise' in them is independent and tends to cancel out. To see why this is the case, consider the following analysis.

Let x = x1 + x2 be the sum of two independent random variables with probability distributions p1(x1) and p2(x2). What can we say about p(x), the probability distribution of x?

We first look at the probability of getting a value between x and $x + \delta x$. If the sum x1 + x2 is to lie between x and $x + \delta x$, and we are given a value of x2, then x1 must lie between x - x2 and $x + \delta x - x_{2}$. The probability of this occurring is $p_{1}(x-x_{2})\delta x$.

But x2 can also take on a range of values. The probability that x2 lies between a particular x2 and $x_{2} + \delta x_{2}$ is

\begin{displaymath}
p_{2}(x_{2})\delta x_{2}.\end{displaymath}

To get the probability that the sum lies between x and $x + \delta x$, we have to integrate the product of these two probabilities over all values of x2. Thus

\begin{displaymath}
p(x) \delta{x} = \int_{-\infty}^{\infty} p_{1}(x - x_{2}) \delta x p_{2}(x_{2}) dx_{2}, \end{displaymath}

or

\begin{displaymath}
p(x) = \int_{-\infty}^{\infty}p_{1}(x - t) p_{2}(t) dt, \end{displaymath}

where t is a dummy variable of integration.

By a similar argument we can show symmetrically that

\begin{displaymath}
p(x) = \int_{-\infty}^{\infty}p_{2}(x - t) p_{1}(t) dt.\end{displaymath}

This is called the convolution of p1 and p2 and is denoted by $ p_{1} \otimes p_{2}$.
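The convolution formula can be sketched numerically. A convenient test case (an illustrative choice, not from the notes) is the sum of two independent uniform variables on [0, 1], whose density is the triangular function: x on [0, 1] and 2 - x on [1, 2].

```python
# Numerical sketch of the convolution formula for the density of a sum:
# p(x) = integral of p1(x - t) p2(t) dt.  Here p1 = p2 = uniform on [0, 1],
# whose convolution is the triangular density (x on [0, 1], 2 - x on [1, 2]).

def uniform01(t):
    return 1.0 if 0.0 <= t <= 1.0 else 0.0

def convolve_at(p1, p2, x, dt=1e-4):
    # p2 is supported on [0, 1], so it is enough to integrate t over [0, 1]
    n = int(1.0 / dt)
    return sum(p1(x - i * dt) * p2(i * dt) * dt for i in range(n))

for x, expected in [(0.5, 0.5), (1.0, 1.0), (1.5, 0.5)]:
    assert abs(convolve_at(uniform01, uniform01, x) - expected) < 0.01
print("numerical convolution matches the triangular density")
```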

We can use this result to prove that the mean of the sum of several random variables is equal to the sum of the means, and the variance of the sum is the sum of the variances.
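A quick Monte Carlo experiment makes this result concrete. The distributions and sample size below are illustrative choices, not part of the notes.

```python
import random
import statistics

# Monte Carlo sketch of the claim that for independent variables the mean
# of the sum is the sum of the means and the variance of the sum is the
# sum of the variances.  (Illustrative distributions and sample size.)

random.seed(2)
n = 200000
x1 = [random.gauss(1.0, 2.0) for _ in range(n)]    # mean 1, variance 4
x2 = [random.uniform(0.0, 6.0) for _ in range(n)]  # mean 3, variance 3
s = [a + b for a, b in zip(x1, x2)]

print(statistics.mean(s))      # close to 1 + 3 = 4
print(statistics.variance(s))  # close to 4 + 3 = 7
```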

Taking multiple measurements of a variable

If we calculate the average of N independent measurements each having mean $\mu$ and standard deviation $\sigma$, then

\begin{displaymath}
\overline{x} = \frac{1}{N} \sum{x_{i}}.\end{displaymath}

So the average $\overline{x}$ will have mean $\frac{1}{N} N \mu = \mu$ and standard deviation $\frac{\sigma}{\sqrt{N}}$.

This latter result follows because the variance of the sum of the N distributions is $N \sigma^{2}$, so the standard deviation of the sum is $\sqrt{N} \sigma$; dividing by N, the distribution of $\overline{x}$ has standard deviation $\frac{\sigma}{\sqrt{N}}$. Hence we obtain a more accurate measurement by taking the average of N independent measurements.
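The $\frac{\sigma}{\sqrt{N}}$ law is easy to confirm by simulation. The sketch below repeatedly averages N noisy measurements and checks the spread of the averages; the parameter values are illustrative.

```python
import random
import statistics

# Monte Carlo sketch of the sqrt(N) law: the average of N independent
# measurements with standard deviation sigma has standard deviation
# sigma / sqrt(N).  mu, sigma, N, and the trial count are illustrative.

random.seed(0)
mu, sigma, N, trials = 5.0, 2.0, 16, 20000

averages = [
    sum(random.gauss(mu, sigma) for _ in range(N)) / N
    for _ in range(trials)
]

print(round(statistics.mean(averages), 1))   # close to mu = 5.0
print(round(statistics.stdev(averages), 1))  # close to sigma/sqrt(N) = 0.5
```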

The usual probability distribution we use to model noise in an image is the normal or Gaussian distribution with mean $\mu$ and standard deviation $\sigma$ given by

\begin{displaymath}
p(x) = \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2}(\frac{x -\mu}{\sigma})^{2}}. \end{displaymath}
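As a sanity check on the $\frac{1}{\sigma\sqrt{2\pi}}$ normalisation, the density should integrate to 1 and have mean $\mu$ and variance $\sigma^{2}$. The values of $\mu$ and $\sigma$ below are arbitrary test choices.

```python
import math

# Numeric sanity check of the Gaussian density with the
# 1/(sigma*sqrt(2*pi)) normalisation: it should integrate to 1 and have
# mean mu and variance sigma^2.  mu and sigma are arbitrary test values.

mu, sigma = 3.0, 2.0

def p(x):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

dx = 1e-3
lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma
xs = [lo + i * dx for i in range(int((hi - lo) / dx))]

total = sum(p(x) * dx for x in xs)
mean = sum(x * p(x) * dx for x in xs)
var = sum((x - mean) ** 2 * p(x) * dx for x in xs)

print(round(total, 3), round(mean, 3), round(var, 3))  # ~ 1, mu, sigma^2
```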

Of course, digital images are sampled versions of continuous or analog images; the sampling is done both spatially and in the luminance values, where it is known as quantization.


Robyn Owens
10/29/1997