A scene can undergo two types of lighting change. First, the intensity of each hue of the emitted light can be modified; we call this an internal change of light. Second, the light source can move within the scene; we call this an external change of light. In this article we consider only internal changes of light; the reader is referred to [3] for the modeling of external changes.
In order to obtain a robust color characterization of an image, we propose to make the feature vector presented above invariant to internal changes of intensity. This first requires a good model of these transformations. Following the recent work of Finlayson [3] on color constancy, we use his diagonal model with an additional translation vector to obtain linear invariance against intensity changes. We obtain:

$$\mathbf{p}' = D\,\mathbf{p} + \mathbf{t} \qquad (4)$$

where $\mathbf{p} = (R, G, B)^T$ is the pixel color, $\mathbf{p}'$ its linear transformation, $D = \mathrm{diag}(a_R, a_G, a_B)$ a diagonal matrix and $\mathbf{t} = (t_R, t_G, t_B)^T$ a vector of translation.
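To make the model concrete, here is a minimal sketch (in Python with NumPy; the function name apply_diagonal_model and the convention of float images in $[0, 1]$ are our own assumptions, not part of the original text) of how such an internal change of light can be simulated:

```python
import numpy as np

def apply_diagonal_model(image, gains, offsets):
    """Apply the diagonal model p' = D p + t channel-wise.

    image   -- H x W x 3 float array (R, G, B)
    gains   -- (a_R, a_G, a_B): the diagonal of D
    offsets -- (t_R, t_G, t_B): the translation vector t
    """
    gains = np.asarray(gains, dtype=np.float64)
    offsets = np.asarray(offsets, dtype=np.float64)
    # Broadcasting scales and shifts each color channel independently.
    return image.astype(np.float64) * gains + offsets

# e.g. simulating the intensity change used in the Lizard example below:
# right = apply_diagonal_model(left, (0.5, 0.4, 0.3), (0.3, 0.2, 0.1))
```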
More complete linear models exist, but we use this diagonal one because it seems to provide the best quality-complexity ratio.
By exploiting this model, there are two ways to make the characterization invariant to a linear change of intensity. The first is to modify our feature vector itself so that it becomes invariant to these transformations. Since the model has six degrees of freedom ($a_R$, $a_G$, $a_B$, $t_R$, $t_G$, $t_B$), our vector of eight invariants would be reduced to a vector of $8 - 6 = 2$ invariants (six invariants must be spent to ``normalize'' the remaining ones). Such a vector is too poor to characterize and match points efficiently, so this solution will not be considered here.
The second approach takes the six parameters of the model into account without losing the richness of our feature vector. It consists in normalizing the images so as to make them independent of the model described in equation (4). The entire feature vector is then computed on these normalized images. Let us consider all the images which can be deduced from a given image $I$ by transformations following our model. We can define an equivalence relation between all these images, and an equivalence class denoted $\bar{I}$. The normalization of an image $I$ then amounts to characterizing the class $\bar{I}$ by the normalized image $I_N$.
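Written out explicitly (this formalization is ours; it is implicit in the text), the relation reads:

$$I_1 \sim I_2 \;\Longleftrightarrow\; \exists\, D, \mathbf{t} \ \text{such that} \ \mathbf{p}_{I_2}(x, y) = D\,\mathbf{p}_{I_1}(x, y) + \mathbf{t} \quad \text{for all pixels } (x, y).$$

Assuming nonzero gains $a_C$, $D$ is invertible, so the relation is reflexive, symmetric and transitive, and the class $\bar{I}$ is well defined.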
Now let us consider two images $I_1$ and $I_2$ and their respective equivalence classes $\bar{I}_1$ and $\bar{I}_2$, characterized by $I_{1N}$ and $I_{2N}$. If the feature vectors computed from $I_{1N}$ and $I_{2N}$ are similar, then the two non-normalized images $I_1$ and $I_2$ belong to the same class $\bar{I}_1 = \bar{I}_2$ and so are equivalent under this model.
Our solution for image normalization is very simple but works efficiently. For each color plane $C \in \{R, G, B\}$, the gray value of each pixel is re-expressed in the interval $[0, 1]$ by a linear transformation using the extrema of all the gray values of the plane under study:

$$C_N(x, y) = \frac{C(x, y) - \min C}{\max C - \min C}$$
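A minimal sketch of this global normalization (Python/NumPy again; the name normalize_global and the small eps guard against constant channels are our own additions):

```python
import numpy as np

def normalize_global(image, eps=1e-12):
    """Rescale each R, G, B channel to [0, 1] using the extrema
    of the whole channel, as in the formula above."""
    image = image.astype(np.float64)
    mins = image.min(axis=(0, 1))  # per-channel minimum over all pixels
    maxs = image.max(axis=(0, 1))  # per-channel maximum over all pixels
    return (image - mins) / (maxs - mins + eps)
```

For positive gains $a_C$, one can verify that normalize_global(image) and normalize_global(apply_diagonal_model(image, gains, offsets)) coincide up to the eps guard, which is exactly the invariance exploited here.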
This normalization makes the image independent of the parameters $a_C$ and $t_C$ of the model. It is applied to each channel, but globally over the whole image through the estimation of the extrema, and is therefore sensitive to any change of the image contents. This can be problematic, for example, when the two images have been taken from different viewpoints. To solve this problem, we propose a more local solution: for each pixel of the processed channel, the extrema are computed locally in a window centered on the pixel. With this solution, the local properties of the pixels are preserved. It requires one parameter, the width of the window to be considered; the more the images differ, the smaller this parameter must be.
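A corresponding sketch of the local variant, using SciPy's sliding minimum/maximum filters (the reflective border handling is SciPy's default and an implementation choice; the paper does not specify how borders are treated):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def normalize_local(image, width=21, eps=1e-12):
    """Rescale each channel to [0, 1] using extrema computed in a
    width x width window centered on every pixel."""
    image = image.astype(np.float64)
    size = (width, width, 1)  # slide over rows and columns, not channels
    mins = minimum_filter(image, size=size)
    maxs = maximum_filter(image, size=size)
    return (image - mins) / (maxs - mins + eps)
```

Here width = 21 matches the window used for figure 6; as noted above, the more the two images differ, the smaller this width should be.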
Figure 3: Two matched Harris points in the left and right Lizard images, differing by a linear change of intensity with: $a_R = 0.5$, $a_G = 0.4$, $a_B = 0.3$, $t_R = 0.3$, $t_G = 0.2$ and $t_B = 0.1$.
Figure 4: Feature vectors of point 1 computed on the Lizard images without normalization. The derivatives are computed using a Gaussian filter with $\sigma = 3$. The eight invariants represented here are not invariant to the diagonal model.
Figure 5: Locally normalized images (left and right, respectively).
Figure 6: Feature vectors of point 1 computed on the Lizard images after local normalization between 0 and 1 (window width = 21).
The invariants computed at the same location after normalization show clearly that this normalization makes the feature vector invariant under the diagonal model.
In this section, we have proposed a set of eight invariants to rotation and to internal changes of light (when the images are normalized) which characterize the points of a color image very simply, since only derivatives up to first order are used, and with very good accuracy. The reader can see and compare the feature vectors in figures 1 and 2 for invariance to rotation, and in figures 3, 4, 5 and 6 for invariance under our diagonal model. We are thus able to match points quite easily between two color images which differ by translation, rotation and a complex change of intensity.