Characteristics of SSC

Study of the Effect of Noise

Study of the Effect of Data Size

Study of Selection Errors

Study of the Performance on Real Range Data

 

To illustrate the characteristics of SSC, it has been evaluated against a number of other model selection criteria. In the following experiments, a model selection criterion is expected to identify, from the model library shown in Table 1, the correct underlying surface model of a set of range data.

For this part of the experiments, 800 synthetic data sets were generated according to the eight surface models in Table 1 by randomly changing the parameters (a, b, …, f) of each model 100 times.
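
As an illustration of this setup, the following minimal sketch (in Python with NumPy) generates one such synthetic patch for the plane model (Model 8 in Table 1) by drawing random parameters and evaluating the surface on a regular grid. The helper name synth_plane_patch and the parameter ranges are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def synth_plane_patch(size=21, noise_sigma=0.0, rng=None):
    """Generate one synthetic range patch from the plane model ax + by + cz = 1
    (Model 8 in Table 1) with randomly drawn parameters (illustrative ranges)."""
    rng = np.random.default_rng() if rng is None else rng
    a, b = rng.uniform(-0.5, 0.5, size=2)
    c = rng.uniform(0.5, 1.5)                      # keep c away from zero so z stays well defined
    xs, ys = np.meshgrid(np.arange(size), np.arange(size))
    x = xs.ravel() / size
    y = ys.ravel() / size
    z = (1.0 - a * x - b * y) / c                  # solve the plane equation for the range value z
    z = z + rng.normal(0.0, noise_sigma, z.shape)  # zero-mean Gaussian measurement noise
    return np.column_stack([x, y, z])              # N x 3 array of (x, y, z) points

# Eight models x 100 random parameter draws give the 800 sets; only Model 8 is sketched here.
plane_sets = [synth_plane_patch(21, noise_sigma=0.1) for _ in range(100)]
```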

Model 1    Partial quadratic surface                         ax + by + cz + dx² + ey² + fz² = 1
Model 2    5-parameter conic (e.g. cylinder in z direction)  ax + by + cx² + dy² + exy = 1
Model 3    5-parameter conic (e.g. cylinder in y direction)  ax + bz + cx² + dz² + exz = 1
Model 4    5-parameter conic (e.g. cylinder in x direction)  az + by + cz² + dy² + fyz = 1
Model 5    4-parameter conic (e.g. cylinder in z direction)  ax + by + cx² + dy² = 1
Model 6    4-parameter conic (e.g. cylinder in y direction)  ax + bz + cx² + dz² = 1
Model 7    4-parameter conic (e.g. cylinder in x direction)  ay + bz + cy² + dz² = 1
Model 8    Plane                                             ax + by + cz = 1

Table 1 - Model library used for range segmentation.
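
A compact way to work with this library is to represent each model by the design matrix of its algebraic form M(x, y, z) · params = 1 and fit it by least squares. The sketch below does this in Python/NumPy; MODEL_LIBRARY and fit_model are hypothetical names, and the column ordering simply follows the equations in Table 1.

```python
import numpy as np

# Design-matrix builders for the algebraic form  M(x, y, z) @ params = 1  of each
# model in Table 1; column order follows the equations as listed above.
MODEL_LIBRARY = {
    1: lambda x, y, z: np.column_stack([x, y, z, x**2, y**2, z**2]),  # partial quadratic surface
    2: lambda x, y, z: np.column_stack([x, y, x**2, y**2, x*y]),      # cylinder in z direction
    3: lambda x, y, z: np.column_stack([x, z, x**2, z**2, x*z]),      # cylinder in y direction
    4: lambda x, y, z: np.column_stack([z, y, z**2, y**2, y*z]),      # cylinder in x direction
    5: lambda x, y, z: np.column_stack([x, y, x**2, y**2]),
    6: lambda x, y, z: np.column_stack([x, z, x**2, z**2]),
    7: lambda x, y, z: np.column_stack([y, z, y**2, z**2]),
    8: lambda x, y, z: np.column_stack([x, y, z]),                    # plane
}

def fit_model(points, model_id):
    """Algebraic (least squares) fit of one library model to an N x 3 point set;
    returns the estimated parameters and the residual vector."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = MODEL_LIBRARY[model_id](x, y, z)
    b = np.ones(len(points))
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params, A @ params - b
```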

Since almost all criteria (except CP and SSC) are derived from the MLE technique, their residuals should also be calculated using MLE. However, during these experiments it was observed that the performance of these criteria deteriorated when the residuals were calculated in this way. This is mainly because the objective functions of the MLE technique are non-linear and the residuals cannot be computed accurately. Therefore, to make the computation feasible, the algebraic (least squares) residuals were used to evaluate all the criteria. The same noise scale was also used for all criteria, computed as σ² = Σ rᵢ² / (N − Ph), where rᵢ are the least squares residuals of fitting the highest-order model, N is the number of data points, and Ph is the number of parameters of the highest-order model in the library. The reason for using the noise scale of the highest-order surface has been explained previously.
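
Assuming this noise scale is the usual unbiased variance estimate built from the least squares residuals of the highest-order model (as the definitions of N and Ph above suggest), a minimal sketch is:

```python
import numpy as np

def noise_scale(residuals_highest, p_highest):
    """Noise variance estimated from the least squares residuals of the
    highest-order model in the library: sigma^2 = sum(r_i^2) / (N - Ph)."""
    n = len(residuals_highest)
    return np.sum(residuals_highest**2) / (n - p_highest)
```

With the library of Table 1, the highest-order model is Model 1, so Ph = 6.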

In this study, the possible outcomes of using a model selection criterion are classified as one of the following:

  1. Correct Prediction
  2. Overestimation (of the number of parameters)
  3. Underestimation (of the number of parameters)
  4. Misclassification (of the model)

Misclassification happens when a criterion correctly determines the number of parameters but chooses the wrong type of model. This can occur because the library contains different models with the same number of parameters.

As a measure of performance, the percentage of success (correct prediction) of each criterion is computed: the number of correct predictions of the underlying model divided by the total number of synthetic data sets used in the evaluation.
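
The outcome taxonomy and the success-rate measure can be expressed directly. In the sketch below, the parameter counts are read off the equations in Table 1, and the helper names classify_outcome and success_rate are illustrative.

```python
# Number of free parameters of each model in Table 1 (counted from the equations).
N_PARAMS = {1: 6, 2: 5, 3: 5, 4: 5, 5: 4, 6: 4, 7: 4, 8: 3}

def classify_outcome(true_model, chosen_model):
    """Classify one selection result into the four outcome categories above."""
    if chosen_model == true_model:
        return "correct"
    if N_PARAMS[chosen_model] > N_PARAMS[true_model]:
        return "overestimation"
    if N_PARAMS[chosen_model] < N_PARAMS[true_model]:
        return "underestimation"
    return "misclassification"  # same parameter count, but the wrong model

def success_rate(outcomes):
    """Percentage of correct predictions over all evaluated data sets."""
    return 100.0 * sum(o == "correct" for o in outcomes) / len(outcomes)
```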

Study of the Effect of Noise

To investigate the effect of the noise level on the performance of each model selection criterion, the data for each experiment was perturbed by adding different amounts of normally distributed noise (with zero mean) while the image size (21 × 21) remained constant. All the model selection criteria were applied to each data set. The success rate (correct prediction) of every criterion in recovering the underlying model of the data is shown in Figure 1. As the figure shows, increasing the amount of noise degrades the performance of every criterion. The reduction in success rate is smallest for SSC, which appears to outperform the rest across the range of noise levels. It should be noted that SSC is not derived from any specific assumption about the noise distribution, which is why it appears to be more robust than the other criteria. To evaluate MCAIC in these experiments, Wi was first set as defined by Boyer et al. [2] and then set to unity; the latter case is labelled MCAIC-1 in the following figures.

An interesting point is that although MCAIC is not very effective for small amounts of noise, its performance improves as noisier data is used.

Figure 1 - Percentage of success of different model selection criteria versus different noise levels for range images of size 21 × 21 pixels.

Study of the Effect of Data Size

To investigate the effect of image size on the performance of each criterion, the experiments were repeated for images of various sizes (from 21 × 21 to 181 × 181 pixels) while each image was perturbed with Gaussian noise (zero mean, variance 0.01). The performance of every criterion is shown in Figure 2, with the image size for each experiment shown along the horizontal axis. Surprisingly, the performance of all criteria except SSC, MCAIC and MCAIC-1 deteriorates as larger images are used. SSC outperforms all the other criteria and maintains a stable performance across the different range image sizes. MCAIC and MCAIC-1 perform poorly for small and medium image sizes; however, for large image sizes their performance improves to be slightly better than SSC and relatively stable.

 

Figure 2 - Percentage of success for different model selection criteria versus the size of range images. The variance of noise is 0.01. Only the relevant area is shown.

Study of Selection Errors

The model library used for the range segmentation experiments is not nested: it includes three models with 5 parameters each and three other models with 4 parameters each. There is therefore scope for misclassification as well as under- or overestimation. The percentages of correct prediction, overestimation, underestimation and misclassification for each criterion are shown in Figure 3. As can be seen from this figure, all criteria except SSC and MCAIC tend to overestimate the number of parameters of the chosen model. However, SSC's tendency to overestimate the number of parameters of the true model is almost equal to its tendency to underestimate it, so it appears to be unbiased. MCAIC and MCAIC-1, which assume a t distribution for the noise, tend to underestimate the number of parameters of the true model.

Furthermore, all model selection criteria except SSC, MCAIC and MCAIC-1 misclassify the true model in about 8% of cases; SSC, MCAIC and MCAIC-1 do not misclassify the underlying surface model. SSC has the highest success rate of all the criteria, and GMDL, GAIC and MCAIC also perform relatively well. Figure 3 shows that the remaining criteria have broadly similar performances.

 

Figure 3 - The bar colours represent the percentage of success, underestimation and overestimation of model dimensions for every criterion. These results are calculated using images of size 21 × 21 pixels and a noise level of 1%.

Study of the Performance on Real Range Data

To provide a more realistic measure of how useful a criterion might be, the different model selection criteria were also examined on real range images. For these experiments, 50 range images (taken from [1]) of various objects containing both quadratic and planar surfaces were chosen. These images were taken by different range scanners and show objects made of different materials (metal, wood, cardboard) with different colours, captured under different illumination conditions. The different model selection criteria were then applied to the whole set. The results of these experiments are shown in Figure 4.

This figure shows that only SSC performs relatively well; the other criteria have much less success in selecting the true surface model of real range data. This can be attributed to the fact that most of the assumptions used to derive these criteria do not hold in practice, and thus their success rates on real data are very low.

Figure 4 - Success rate (percentage of success) of various model selection criteria on real range data.

References

[1] Bab-Hadiashar, A. and Gheissari, N., Model Selection for Range Segmentation of Curved Objects, European Conference on Computer Vision (ECCV'04), to appear, 2004.

[2] Boyer, K. L., Mirza, M. J., and Ganguly, G., The Robust Sequential Estimator: A General Approach and its Application to Surface Organization in Range Data, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, pp. 987-1001, Oct. 1994.


 


    By Niloofar Gheissari and Alireza Bab-Hadiashar

    May 2004