Illustrated Dictionary of Computer Vision: 0
1D
2D
2D coordinate system
2D Fourier transform
2D image
2D input device
2D point
2D point feature
2D pose estimation
2D projection
2.5D image
2.5D sketch
3D
3D coordinate system
3D data
3D data acquisition
3D image
3D interpretation
3D model
3D moment
3D object
3D point
3D point feature
3D pose estimation
3D reconstruction
3D skeleton
3D stratigraphy
3D structure recovery
3D texture
3D vision
4 connectedness
8 connectedness


1D: One dimensional, usually in reference to some structure. Examples include: 1) a signal $ x(t)$ that is a function of time $ t$, 2) the dimensionality of a single property value or 3) one degree of freedom in shape variation or motion.

2D: Two dimensional. A space describable using any pair of orthogonal basis vectors consisting of two elements.

2D coordinate system: A system that uniquely associates two real numbers with each point of a plane. First, two intersecting lines (axes) are chosen on the plane, usually perpendicular to each other. Their point of intersection is the origin of the system. Second, units are fixed on each axis (often the same for both axes) to associate numbers with points. The coordinates $ P_x$ and $ P_y$ of a point P are obtained by projecting P onto each axis in a direction parallel to the other axis and reading the numbers at the intersections:
[Figure: a 2D coordinate system]


2D Fourier transform: A special case of the general Fourier transform often used to find structures in images. [FP:7.3.1]
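As an illustrative sketch, the 2D discrete Fourier transform of an image can be computed with NumPy; the test image below (a vertical sinusoidal grating) and its size are arbitrary choices, used only to show how a periodic structure appears as a peak in the spectrum:

    import numpy as np

    # Assumed test image: a 64x64 vertical sinusoidal grating (4 cycles across).
    x = np.arange(64)
    image = np.ones((64, 1)) * np.sin(2 * np.pi * 4 * x / 64)[None, :]

    # 2D discrete Fourier transform, shifted so the zero frequency is central.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    magnitude = np.abs(spectrum)   # peaks reveal the periodic structure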

2D image: A matrix of data representing samples taken at discrete intervals. The data may be from a variety of sources and sampled in a variety of ways. In computer vision applications the image values are often encoded color or monochrome intensity samples taken by digital cameras, but may also be range data. Some typical intensity values are:
[Figure: a 2D image with sample intensity values]


2D input device: A device for sampling light intensity from the real world into a 2D matrix of measurements. The most popular two dimensional imaging device is the charge-coupled device (CCD) camera. Other common devices are flatbed scanners and X-ray scanners.

2D point: A point in a 2D space, that is, characterized by two coordinates; most often, a point on a plane, for instance an image point in pixel coordinates. Notice, however, that two coordinates do not necessarily imply a plane: a point on a 3D surface can be expressed either in 3D coordinates or by two coordinates given a surface parameterization (see surface patch).

2D point feature: Localized structures in a 2D image, such as interest points, corners and line meeting points (X, Y and T shaped, for example). One detector for these features is the SUSAN corner finder.
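The SUSAN detector is not shown here, but as an illustrative sketch of 2D point feature detection, the Harris corner detector available in OpenCV plays a similar role; the image path, block size, aperture size and thresholds below are assumed values:

    import cv2
    import numpy as np

    # Load a monochrome image (placeholder path) as float32, as cornerHarris expects.
    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # Harris corner response map (block size 2, Sobel aperture 3, k = 0.04).
    response = cv2.cornerHarris(img, 2, 3, 0.04)

    # Keep pixels whose response exceeds 1% of the maximum as corner features.
    corners = np.argwhere(response > 0.01 * response.max())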

2D pose estimation: A fundamental open problem in computer vision where the correspondence between two sets of 2D points is found. The problem is defined as follows: given two sets of points $ \{X_{j}\}$ and $ \{Y_{k}\}$, find the Euclidean transformation $ \{R, t\}$ (the pose) and the match matrix $ \{m_{jk}\}$ (the correspondences) that best relates them. A large number of techniques have been used to address this problem, for example tree-pruning methods, the Hough transform and geometric hashing. A special case of 3D pose estimation.
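One common way to make "best relates" precise (a least-squares formulation, not the only possibility) is to minimize $ \sum_{j,k} m_{jk} \| X_j - (R Y_k + t)\|^2$ over $ R$, $ t$ and $ \{m_{jk}\}$, subject to suitable constraints on the match matrix, for example that each point matches at most one point in the other set.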

2D projection: A transformation mapping a higher dimensional space onto a two dimensional space. The simplest method is to discard the extra coordinates (a parallel projection onto a coordinate plane); more generally, a viewing position is chosen and points are projected with respect to it.
[Figure: a 2D projection]
For example, the main steps for a computer graphics projection are as follows: apply a normalizing transform to the 3D points in world coordinates; clip against the canonical view volume; project onto the projection plane; transform into the viewport in 2D device coordinates for display. Commonly used projection functions are the parallel projection and the perspective projection.
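A minimal sketch of the two projection functions named above, assuming points are given as an Nx3 NumPy array in camera coordinates with the camera looking down the positive $ z$ axis and an assumed focal length f (the normalization, clipping and viewport steps are omitted):

    import numpy as np

    def project_perspective(points_3d, f=1.0):
        # Pinhole perspective projection onto the plane z = f.
        X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
        return np.stack([f * X / Z, f * Y / Z], axis=1)

    def project_parallel(points_3d):
        # Parallel (orthographic) projection: discard the z coordinate.
        return points_3d[:, :2]

    # Example: a point 4 units in front of the camera projects to (0.5, 0.25).
    print(project_perspective(np.array([[2.0, 1.0, 4.0]])))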

2.5D image: A range image obtained by scanning from a single viewpoint. This allows the data to be represented in a single image array, where each pixel value encodes the distance to the observed scene. The reason this is not called a 3D image is to make explicit the fact that the back sides of the scene objects are not represented.
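As a worked note on how such an image encodes geometry, assuming a pinhole camera with focal length $ f$ and principal point $ (c_x, c_y)$, and assuming the value at pixel $ (u, v)$ encodes the depth $ Z$ along the optical axis, the corresponding 3D point is $ (X, Y, Z)$ with $ X = (u - c_x)Z/f$ and $ Y = (v - c_y)Z/f$; if the pixel value instead encodes the Euclidean distance to the sensor, the conversion differs slightly.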

2.5D sketch: Central structure of Marr's theory of vision. An intermediate description of a scene indicating the visible surfaces and their arrangement with respect to the viewer. It is built from several different elements: the contour, texture and shading information coming from the primal sketch, stereo information and motion. The description is theorized to be a kind of buffer where partial resolution of the objects takes place. The name $ 2\frac{1}{2}$D sketch stems from the fact that, although local changes in depth and depth discontinuities are well resolved, the absolute distance to all scene points may remain unknown.

3D: Three dimensional. A space describable using any triple of mutually orthogonal basis vectors consisting of three elements.

3D coordinate system: Same as 2D coordinate system, but in three dimensions.
[Figure: a 3D coordinate system]


3D data: Data described in all three spatial dimensions. See also range data, CAT and NMR. An example of a 3D data set is:
[Figure: an example 3D data set]


3D data acquisition: Sampling data in all three spatial dimensions. There are a variety of ways to perform this sampling, for example using structured light triangulation.
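As an illustrative note on the triangulation involved: in the simplest rectified arrangement, with baseline $ b$ between the light source (or second camera) and the camera, focal length $ f$, and an observed image offset (disparity) $ d$ between where a projected stripe appears and where it would appear at infinite depth, the depth is approximately $ Z = fb/d$; practical structured light systems generalize this geometry.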

3D image: See range image.

3D interpretation: A 3D model, e.g., a solid object, that explains an image or a set of image data. For instance, a certain configuration of image lines can be explained as the perspective projection of a polyhedron; in simpler words, the image lines are the images of some of the polyhedron's edges. See also image interpretation.

3D model: A description of a 3D object that primarily describes its shape. Models of this sort are regularly used as exemplars in model based recognition and 3D computer graphics.

3D moments: A special case of moment where the data comes from a set of 3D points.
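For a set of $ N$ 3D points $ \{(x_i, y_i, z_i)\}$, the moment of order $ p+q+r$ is commonly defined as $ m_{pqr} = \sum_{i=1}^{N} x_i^p y_i^q z_i^r$ (or the corresponding integral over a density for volumetric data); for example, $ m_{100}/m_{000}$ is the $ x$ coordinate of the centroid.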

3D object: A subset of $ \mathbb{R}^3$. In computer vision, often taken to mean a volume in $ \mathbb{R}^3$ that is bounded by a surface. Any solid object around you is an example: table, chairs, books, cups, and you yourself.

3D point: An infinitesimal volume of 3D space.

3D point feature: A point feature on a 3D object or in a 3D environment. For instance, a corner in 3D space.

3D pose estimation: The process of determining the transformation (translation and rotation) of an object in one coordinate frame with respect to another coordinate frame. Generally, only rigid objects are considered, models of those objects exist a priori, and we wish to determine the position of such an object in an image on the basis of matched features. This is a fundamental open problem in computer vision where the correspondence between two sets of 3D points is found. The problem is defined as follows: given two sets of points $ \{X_{j}\}$ and $ \{Y_{k}\}$, find the parameters of a Euclidean transformation $ \{R, t\}$ (the pose) and the match matrix $ \{m_{jk}\}$ (the correspondences) that best relates them. Assuming the points correspond, they should match exactly under this transformation.
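When the correspondences are already known (i.e., the match matrix is given and $ X_i$ matches $ Y_i$), the optimal rotation and translation have a closed-form least-squares solution via the singular value decomposition. A minimal sketch, assuming the two point sets are Nx3 NumPy arrays:

    import numpy as np

    def estimate_pose_3d(X, Y):
        # Least-squares rigid transform (R, t) such that X is approximately R Y + t,
        # assuming X[i] corresponds to Y[i] (the standard SVD/Kabsch solution).
        cX, cY = X.mean(axis=0), Y.mean(axis=0)        # centroids
        H = (Y - cY).T @ (X - cX)                      # 3x3 cross-covariance
        U, S, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                             # proper rotation, det(R) = +1
        t = cX - R @ cY
        return R, t

When the correspondences are unknown, a solution of this kind is typically embedded in an alternating scheme (for example ICP-style matching) that also estimates the match matrix.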

3D reconstruction: A general term referring to the computation of a 3D model from 2D images .

3D skeleton: See skeleton.

3D stratigraphy: A modeling and visualization tool used to display different underground layers. Often used for visualizations of archaeological sites or for detecting different rock and soil structures in geological surveying.

3D structure recovery: See 3D reconstruction.

3D texture: The appearance of texture on a 3D surface when imaged, for instance, the fact that the density of texels varies with distance due to perspective effects. 3D surface properties (e.g., shape, distances, orientation) can be estimated from such effects. See also shape from texture, texture orientation.
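As an illustrative example of such an effect: under perspective projection, the image area of a small surface patch at distance $ Z$ with slant $ \sigma$ scales roughly as $ \cos\sigma / Z^2$, so the apparent texel density grows with both distance and slant; measuring this density gradient is one route to shape from texture.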

3D vision: A branch of computer vision dealing with characterizing data composed of 3D measurements. For example, this may involve segmentation of the data into individual surfaces that are then used to identify the data as one of several models. Reverse engineering is a specialism inside 3D vision.

4 connectedness: A type of image connectedness in which each rectangular pixel is considered to be connected to the four neighboring pixels that share a common crack edge. See also 8 connectedness. This figure shows the four pixels connected to the central pixel (*):
[Figure: the four neighbors of the central pixel]
and the four groups of pixels joined by 4 connectedness:
[Figure: the four pixel groups joined by 4 connectedness]


8 connectedness: A type of image connectedness in which each rectangular pixel is considered to be connected to all eight neighboring pixels. See also 4 connectedness. This figure shows the eight pixels connected to the central pixel (*):
[Figure: the eight neighbors of the central pixel]
and the two groups of pixels joined by 8 connectedness:
[Figure: the two pixel groups joined by 8 connectedness]
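The two connectivities differ only in the set of neighbor offsets used, as this minimal sketch (assuming a binary image stored as a nested list of 0/1 values) makes explicit:

    # Neighbor offsets (row, column) for the two standard connectivities.
    NEIGHBORS_4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    NEIGHBORS_8 = NEIGHBORS_4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

    def connected_pixels(binary, start, neighbors=NEIGHBORS_8):
        # Flood fill: return the set of foreground pixels reachable from 'start'
        # under the chosen connectivity.
        rows, cols = len(binary), len(binary[0])
        stack, seen = [start], {start}
        while stack:
            r, c = stack.pop()
            for dr, dc in neighbors:
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and binary[nr][nc] and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    stack.append((nr, nc))
        return seen

Running this with NEIGHBORS_4 and then NEIGHBORS_8 on the same image shows how diagonally adjacent pixels merge groups only under 8 connectedness.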
