
Introduction

Feature point tracking is a standard task of computer vision with numerous applications in navigation, motion understanding, surveillance, scene monitoring, and video database management. In an image sequence, moving objects are represented by their feature points, detected prior to or during tracking. Feature points may have local image properties assigned to them; however, in dynamic scenes these properties are often unstable. Instead of identifying a (physical) point by its neighborhood pattern, the traditional statement of the feature point tracking problem [11,7,10,9,4] treats the points as indistinguishable, and only kinematic constraints are used to establish the correspondences.
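To make the correspondence idea concrete, the following sketch (not any of the cited algorithms) matches indistinguishable points between two frames using a single kinematic constraint, spatial proximity: the globally closest unmatched pairs are linked greedily, subject to a maximum displacement `max_dist`. The point coordinates and the threshold are illustrative assumptions.

```python
import math

def match_points(frame1, frame2, max_dist=20.0):
    """Greedy two-frame correspondence by spatial proximity.

    Points carry no appearance information; only the kinematic
    constraint (small displacement between frames) links them.
    Returns a list of (index_in_frame1, index_in_frame2) pairs.
    """
    # All candidate pairs within the displacement gate, by distance.
    pairs = sorted(
        (math.dist(p, q), i, j)
        for i, p in enumerate(frame1)
        for j, q in enumerate(frame2)
        if math.dist(p, q) <= max_dist
    )
    used1, used2, matches = set(), set(), []
    for _, i, j in pairs:
        # Link the closest remaining pair; each point is used once.
        if i not in used1 and j not in used2:
            used1.add(i)
            used2.add(j)
            matches.append((i, j))
    return matches
```

For example, with `frame1 = [(0, 0), (10, 0)]` and `frame2 = [(1, 0), (11, 1)]`, the sketch links point 0 to point 0 and point 1 to point 1. Greedy proximity matching of this kind is only a baseline; the algorithms surveyed below add trajectory-level constraints across more than two frames.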

The feature point tracking problem is formulated in section 2. Alternative approaches to local motion estimation are based on optical flow or patch matching. The reader is referred to [1] for a discussion of these approaches. Here, we only deal with feature point based motion tracking.

In real point-tracking tasks, the kinematic constraints are imposed by the particular application considered. The strongest and most commonly used constraint is trajectory smoothness, which stems from the law of inertia and limits the accelerations of the moving objects. When more complex, articulated motion is tracked, the smoothness constraint is partially relaxed and other constraints, e.g., spatial proximity [9], are added.
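A common way to quantify the smoothness constraint is a cost that penalizes changes in direction and speed between consecutive displacement vectors of a trajectory; a smooth, inertial motion yields a low cost. The sketch below follows this general scheme under stated assumptions: the weights `w1`, `w2` and the handling of zero-length steps are illustrative choices, not parameters prescribed by any of the cited algorithms.

```python
import math

def smoothness_cost(p, q, r, w1=0.5, w2=0.5):
    """Deviation of the trajectory segment p -> q -> r from smooth motion.

    The first term penalizes direction change (turning), the second
    penalizes speed change, so straight constant-speed motion costs 0.
    """
    v1 = (q[0] - p[0], q[1] - p[1])   # displacement in the first step
    v2 = (r[0] - q[0], r[1] - q[1])   # displacement in the second step
    d1 = math.hypot(*v1)
    d2 = math.hypot(*v2)
    if d1 == 0.0 or d2 == 0.0:
        # Degenerate step: assign the maximal penalty (an assumption).
        return w1 + w2
    cos_theta = (v1[0] * v2[0] + v1[1] * v2[1]) / (d1 * d2)
    direction_term = 1.0 - cos_theta
    speed_term = 1.0 - 2.0 * math.sqrt(d1 * d2) / (d1 + d2)
    return w1 * direction_term + w2 * speed_term
```

For instance, the collinear, equally spaced points (0, 0), (1, 0), (2, 0) give a cost of 0, while a sharp turn or an abrupt speed-up raises it; a tracker can then choose correspondences that minimize the total cost over all trajectories.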

Several solutions to the general feature point tracking problem have been proposed [11,7,10,9]. The existing feature point tracking algorithms are briefly presented in section 3.

Recently, we have reconsidered the problem and developed a new algorithm called IPAN Tracker [3,4] which extends the set of admissible events. This algorithm uses the idea of the original two-frame matching procedure [2]. The IPAN Tracker is described in detail in section 4.

At the same time, we initiated a comprehensive performance evaluation study of feature point tracking techniques. To the best of our knowledge, no such evaluation had been done before; a reference experimental study does exist for optical flow techniques [6]. Results of our study are summarized in section 5.

In many applications, e.g., surveillance and scene monitoring [5], objects may temporarily disappear, or enter or leave the field of view. In some tasks, such events are of particular interest, while in others they are treated as admissible but disturbing. The character of motion and the measure of tracking quality also vary from task to task. This leads to an intrinsic contradiction, since the approaches proposed as application-independent differ in their view of what the general framework should be. Consequently, comparison of the algorithms is not straightforward. In addition, most of the algorithms were not properly tested even separately, within their own frameworks. A potential user wishing to select a tracking technique for a particular computer vision application is therefore likely to face difficulties.

In section 6, we give some hints for such a selection. The guidelines we offer are general yet pragmatic: without addressing particular applications, we share our experience concerning the algorithmic features, applicability, robustness, and speed of the tracking algorithms. Finally, the application of feature point tracking to the estimation of blood flow velocity is discussed.


Dmitry Chetverikov
1998-11-24