Change of intensity.

 

A scene can be subject to two types of lighting change. First, the intensity of each hue of the emitted light can be modified; we call this an internal change of light. Second, the light source can move within the scene; we call this an external change of light. In this article we consider only internal changes of light; the reader is referred to [3] for the modeling of external changes.
In order to obtain a characterization of an image that is robust with respect to color, we make our feature vector V presented above invariant to internal changes of intensity. This first requires a good model of these transformations. Following the recent work of Finlayson [3] on color constancy, we use his diagonal model with an additional translation vector, which yields linear invariance to intensity. We obtain:

  p' = A p + t     (4)

where p is the pixel color and p' its linear transformation,
A a diagonal matrix and t a translation vector.
More complete linear models exist, but we use this diagonal one because it seems to provide the best quality-complexity trade-off.
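As a concrete illustration, the diagonal model with translation can be sketched in a few lines of NumPy (a minimal sketch; the function and variable names are ours, not the paper's):

```python
import numpy as np

def apply_diagonal_model(image, a, b):
    """Apply p' = A p + t to every pixel, where A = diag(a_R, a_G, a_B)
    and t = (b_R, b_G, b_B): each channel is scaled, then translated."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Broadcasting over the last axis applies a and b channel-wise.
    return image * a + b

# Stand-in for a color image with values in [0, 1], using the
# parameters of the Lizard example shown in the figures below.
img = np.full((2, 2, 3), 0.5)
out = apply_diagonal_model(img, a=(0.5, 0.4, 0.3), b=(0.3, 0.2, 0.1))
```

With six scalar parameters (three gains, three offsets), this is exactly the six-degree-of-freedom transformation discussed next.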
By exploiting the model presented above, there are two ways to make the characterization invariant to linear changes of intensity. The first is to modify our vector V to make it invariant to these transformations. The model proposed here has six degrees of freedom (a_R, a_G, a_B, b_R, b_G, b_B), so our vector of eight invariants would be reduced to a vector of 8 - 6 = 2 invariants (six invariants must be used to ``normalize'' the remaining ones). It then becomes too poor to characterize and match points efficiently, so this solution is not considered here. The second approach takes the six parameters of the model into account without losing the richness of our feature vector V. It consists in normalizing the images so as to make them independent of the model described by equation (4). The entire vector V is then computed on these normalized images. Consider all the images I' that can be deduced from a given image I through transformations of our model. This defines an equivalence relation between these images, with an equivalence class denoted C_I. Normalizing an image I thus amounts to characterizing C_I by the normalized image Î. Now consider two images I_1 and I_2 and their respective equivalence classes C_{I_1} and C_{I_2}, characterized by Î_1 and Î_2.
If the feature vectors V computed on Î_1 and Î_2 are similar, then the two non-normalized images I_1 and I_2 belong to the same class C_{I_1} = C_{I_2} and are therefore equivalent through this model.
Our solution for image normalization is very simple but works efficiently. For each color plane R, G, B, the value of each pixel is mapped to the interval [0, 1] by a linear transformation using the extrema of all the values in the plane under consideration:

  Ĉ(x, y) = ( C(x, y) − min C ) / ( max C − min C ),   C ∈ {R, G, B}

This normalization makes the image independent of the parameters a_C and b_C of the model. It is applied to each channel, but globally over the whole image through the estimation of the extrema, and is therefore sensitive to any change of the image contents. This can be problematic, for instance when the two images have been taken from different viewpoints. To address this, we propose a more local solution: for each pixel of the processed channel, the extrema are computed locally in a window centered on the pixel. With this solution, the local properties of the pixels are preserved. It requires one parameter, the width of the window; the more the images differ, the smaller this parameter must be.
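Both the global and the local min-max normalization can be sketched as follows (a Python/NumPy sketch; the local variant assumes SciPy's ndimage filters are available, and all function names are ours):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def normalize_global(image):
    """Map each channel to [0, 1] using the extrema of the whole plane."""
    mn = image.min(axis=(0, 1), keepdims=True)
    mx = image.max(axis=(0, 1), keepdims=True)
    # Guard against flat channels where max == min.
    return (image - mn) / np.where(mx > mn, mx - mn, 1.0)

def normalize_local(image, width=21):
    """Map each pixel to [0, 1] using the extrema of a width x width
    window centered on it; channels are filtered independently."""
    size = (width, width, 1)  # spatial window only, no mixing of channels
    mn = minimum_filter(image, size=size)
    mx = maximum_filter(image, size=size)
    return (image - mn) / np.where(mx > mn, mx - mn, 1.0)
```

With strictly positive per-channel gains, the extrema transform as a_C·min + b_C and a_C·max + b_C, so both normalizations cancel the diagonal model exactly; this is the invariance exploited above.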

Figure 3: Two matched Harris points in the left and right Lizard images, differing by a linear change of intensity with a_R = 0.5, a_G = 0.4, a_B = 0.3, b_R = 0.3, b_G = 0.2, b_B = 0.1.

Figure 4: Feature vectors of point 1 computed on the Lizard images without normalization. The derivatives are computed using a Gaussian filter with σ = 3. The eight invariants shown here are not invariant to the diagonal model.

Figure 5: Locally normalized images (left and right, respectively).

Figure 6: Feature vectors of point 1 computed on the Lizard images after local normalization between 0 and 1 (window width = 21). The invariants computed at the same locations after normalization clearly show that this normalization makes the feature vector invariant through the diagonal model.

In this section, we have proposed a set of eight invariants to rotation and to internal changes of light (when the images are normalized) which characterize the points of a color image very simply, since only derivatives up to first order are used, and with very good accuracy. The reader can compare feature vectors in figures 1 and 2 for invariance to rotation, and in figures 3, 4, 5 and 6 for invariance through our diagonal model. We are thus able to match points quite easily between two color images that differ by translation, rotation, and complex changes of intensity.



Philippe Montesinos
Wed Jun 2 18:06:30 MET DST 1999