Object recognition – technology in the field of computer vision for finding and identifying objects in an image or video sequence. Humans recognize a multitude of objects in images with little effort, even though the image of an object may vary somewhat from different viewpoints, in many different sizes and scales, or when the object is translated or rotated. Objects can even be recognized when they are partially obstructed from view. This task is still a challenge for computer vision systems. Many approaches to the task have been implemented over multiple decades.
Appearance-based methods
Use example images (called templates or exemplars) of the objects to perform recognition
Objects look different under varying conditions:
Changes in lighting or color
Changes in viewing direction
Changes in size/shape
A single exemplar is unlikely to succeed reliably; however, it is also impossible to represent all appearances of an object.
Edge matching
Uses edge detection techniques, such as Canny edge detection, to find edges.
Changes in lighting and color usually do not have much effect on image edges
Strategy:
Detect edges in template and image
Compare edge images to find the template
Must consider range of possible template positions
Measurements:
Good – count the number of overlapping edges. Not robust to changes in shape
Better – count the number of template edge pixels within some distance of an edge in the search image
Best – determine probability distribution of distance to nearest edge in search image (if template at correct position). Estimate likelihood of each template position generating image
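As a concrete illustration of the "better" measure, here is a minimal chamfer-style sketch. It assumes boolean edge maps of equal size (the template already shifted to the candidate position) and uses SciPy's Euclidean distance transform; the function name and inputs are illustrative, not from the outline.

```python
from scipy.ndimage import distance_transform_edt

def chamfer_score(template_edges, image_edges):
    # For every pixel, distance to the nearest edge pixel of the search image
    # (edges are True, so invert: the transform measures distance to zeros).
    dist_to_edge = distance_transform_edt(~image_edges)
    # Average that distance over the template's edge pixels; lower is better.
    return dist_to_edge[template_edges].mean()
```

Truncating or softly weighting the per-pixel distances moves this toward the probabilistic "best" measure.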
Divide-and-Conquer search
Strategy:
Consider all positions as a set (a cell in the space of positions)
Determine lower bound on score at best position in cell
If bound is too large, prune cell
If bound is not too large, divide cell into subcells and try each subcell recursively
Process stops when cell is “small enough”
Unlike multi-resolution search, this technique is guaranteed to find all matches that meet the criterion (assuming that the lower bound is accurate)
Finding the Bound:
To find a lower bound on the best score, take the score for the template position represented by the center of the cell
Subtract the maximum possible change from that “center” position to any other position in the cell (the maximum occurs at the cell corners)
Complexities arise from determining bounds on the distance
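A minimal sketch of the divide-and-conquer search, assuming a dissimilarity score (lower is better) that can change by at most lam per pixel of motion, so that the score at the cell center minus lam times the center-to-corner distance is a valid lower bound. All names and parameters are illustrative assumptions.

```python
import math

def branch_and_bound(cell, score_at, lam, tau, min_size, matches):
    # cell = (x0, x1, y0, y1): an axis-aligned set of template positions.
    x0, x1, y0, y1 = cell
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    # The position farthest from the cell center is a corner.
    radius = math.hypot((x1 - x0) / 2, (y1 - y0) / 2)
    # Lower bound on the score at the best position in the cell.
    if score_at(cx, cy) - lam * radius > tau:
        return  # bound is too large: prune the whole cell
    if x1 - x0 <= min_size and y1 - y0 <= min_size:
        matches.append((cx, cy))  # cell is "small enough": report a match
        return
    # Otherwise divide the cell into subcells and try each recursively.
    for sub in ((x0, cx, y0, cy), (cx, x1, y0, cy),
                (x0, cx, cy, y1), (cx, x1, cy, y1)):
        branch_and_bound(sub, score_at, lam, tau, min_size, matches)
```

Because the bound never overestimates the true best score in a cell, every position scoring below tau survives pruning, which is the guarantee noted above.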
Greyscale matching
Edges are (mostly) robust to illumination changes; however, they throw away much of the information in the image
Must compute pixel distance as a function of both pixel position and pixel intensity
Can be applied to color also
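One way to read "pixel distance as a function of both position and intensity" concretely: treat every pixel as a 3D point (x, y, lam·intensity) and measure nearest-neighbor distances in that space. The weight lam and the use of a k-d tree are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def greyscale_match_score(template, image_patch, lam=0.1):
    # Each pixel becomes a point whose third coordinate is a scaled intensity,
    # so the distance mixes spatial offset and intensity difference.
    def as_points(img):
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        return np.column_stack(
            [xs.ravel(), ys.ravel(), lam * img.ravel().astype(float)])
    tree = cKDTree(as_points(image_patch))
    dists, _ = tree.query(as_points(template))
    return dists.mean()  # lower is better
```

The same construction extends to color by adding one scaled coordinate per channel.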
Gradient matching
Another way to be robust to illumination changes without throwing away as much information is to compare image gradients
Matching is performed like matching greyscale images
Simple alternative: Use (normalized) correlation
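A minimal sketch of the "simple alternative": normalized correlation computed on gradient fields rather than raw intensities (the helper names are illustrative).

```python
import numpy as np

def gradient_ncc(template, patch):
    # Stack the x- and y-gradients of an image into one vector.
    def grads(img):
        gy, gx = np.gradient(img.astype(float))
        return np.concatenate([gx.ravel(), gy.ravel()])
    t, p = grads(template), grads(patch)
    t -= t.mean()
    p -= p.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(p)
    # 1.0 means perfectly correlated gradients; near 0 means unrelated.
    return float(t @ p / denom) if denom else 0.0
```

Scanning this score over all template positions gives a matcher that tolerates smooth, additive illumination changes.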
Histograms of receptive field responses
Avoids explicit point correspondences
Relations between different image points implicitly coded in the receptive field responses
Swain and Ballard (1991),[2] Schiele and Crowley (2000),[3] Linde and Lindeberg (2004, 2012)[4][5]
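A minimal sketch in the spirit of Swain and Ballard's color indexing: coarse color histograms compared by histogram intersection. The bin count and the normalization by the model histogram follow the common formulation; the details here are illustrative.

```python
import numpy as np

def color_histogram(image, bins=8):
    # image: (H, W, 3) uint8 array; returns a coarse bins^3 RGB histogram.
    h, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h

def histogram_intersection(model_hist, image_hist):
    # Overlap between the two histograms, normalized by the model's mass;
    # 1.0 means the image fully contains the model's color distribution.
    return np.minimum(model_hist, image_hist).sum() / model_hist.sum()
```

No point correspondences are needed: relations between image points are only implicitly encoded in the responses being histogrammed.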
Large modelbases
Modelbases are a collection of geometric models of the objects that should be recognized
One approach to efficiently searching the database for a specific image is to use eigenvectors of the templates (called eigenfaces)
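A minimal sketch of the eigenvector idea: compute the principal components of the stored templates once, then compare images in that low-dimensional space instead of pixel space. The function names and the choice of k are assumptions.

```python
import numpy as np

def build_eigenspace(templates, k=10):
    # templates: (n, h*w) array, one flattened template per row.
    mean = templates.mean(axis=0)
    # SVD of the centered stack yields the principal components ("eigenfaces").
    _, _, vt = np.linalg.svd(templates - mean, full_matrices=False)
    return mean, vt[:k]

def project(image_vec, mean, basis):
    # k-dimensional coordinates of a flattened image in the eigenspace.
    return basis @ (image_vec - mean)
```

Searching the modelbase then reduces to nearest-neighbor lookups among k-dimensional projections, which is far cheaper than comparing raw templates.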
Feature-based methods
A search is used to find feasible matches between object features and image features.
The primary constraint is that a single position of the object must account for all of the feasible matches.
These methods extract features from the objects to be recognized and the images to be searched, such as:
surface patches
corners
linear edges
Interpretation trees
A method for searching for feasible matches is to search through a tree.
Each node in the tree represents a set of matches.
Root node represents empty set
Each other node is the union of the matches in the parent node and one additional match.
A wildcard is used for features with no match
Nodes are “pruned” when the set of matches is infeasible.
A pruned node has no children
Historically significant and still used, but less commonly
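A minimal sketch of the tree search: each level assigns the next model feature to some image feature or to a wildcard, and a user-supplied feasibility test prunes inconsistent nodes (so pruned nodes get no children). The generator form and the predicate interface are assumptions.

```python
def interpretation_tree_search(model_feats, image_feats, feasible, partial=()):
    depth = len(partial)
    if depth == len(model_feats):
        yield partial  # a complete, feasible interpretation
        return
    # Branch: match the next model feature to each image feature,
    # or to None, the wildcard for "no match".
    for img in list(image_feats) + [None]:
        candidate = partial + ((model_feats[depth], img),)
        if feasible(candidate):  # prune infeasible sets of matches
            yield from interpretation_tree_search(
                model_feats, image_feats, feasible, candidate)
```

In practice feasible() encodes geometric constraints, e.g. that distances between matched image features agree with distances between the corresponding model features.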
Hypothesize and test
General Idea:
Hypothesize a correspondence between a collection of image features and a collection of object features
Then use this to generate a hypothesis about the projection from the object coordinate frame to the image frame
Use this projection hypothesis to generate a rendering of the object. This step is usually known as backprojection
Compare the rendering to the image, and, if the two are sufficiently similar, accept the hypothesis
Obtaining Hypotheses:
There are a variety of different ways of generating hypotheses.
When camera intrinsic parameters are known, the hypothesis is equivalent to a hypothetical position and orientation – pose – for the object.
Utilize geometric constraints
Construct a correspondence for small sets of object features to every correctly sized subset of image points. (These are the hypotheses)
Three basic approaches:
Obtaining Hypotheses by Pose Consistency
Obtaining Hypotheses by Pose Clustering
Obtaining Hypotheses by Using Invariants
An expensive search that is also redundant, but it can be improved using randomization and/or grouping
Randomization
Examine small sets of image features until the likelihood of missing the object becomes small
For each set of image features, all possible matching sets of model features must be considered.
Formula:
(1 − W^c)^k = Z
W = the fraction of image points that are “good” (W ≈ m/n)
c = the number of correspondences necessary
k = the number of trials
Z = the probability of every trial using one (or more) incorrect correspondences
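Solving for the number of trials gives k = log Z / log(1 − W^c). As an illustrative example (values chosen here, not from the source): with W = 0.5, c = 3 and a target failure probability Z = 0.05, k = log 0.05 / log 0.875 ≈ 22.4, so about 23 random trials suffice.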
Grouping
If we can determine groups of points that are likely to come from the same object, we can reduce the number of hypotheses that need to be examined
Pose consistency
Also called Alignment, since the object is being aligned to the image
Correspondences between image features and model features are not independent – Geometric constraints
A small number of correspondences yields the object position – the others must be consistent with this
General Idea:
If we hypothesize a match between a sufficiently large group of image features and a sufficiently large group of object features, then we can recover the missing camera parameters from this hypothesis (and so render the rest of the object)
Strategy:
Generate hypotheses using a small number of correspondences (e.g., triples of points for 3D recognition)
Project other model features into image (backproject) and verify additional correspondences
Use the smallest number of correspondences necessary to achieve discrete object poses
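A minimal 2D sketch of alignment: recover an affine transform from three hypothesized correspondences, then backproject the remaining model points and count how many land near an image feature. The affine camera model, the tolerance, and the names are assumptions for illustration.

```python
import numpy as np

def estimate_affine(model_pts, image_pts):
    # Three (x, y) -> (u, v) correspondences give six equations in the six
    # unknowns of a 2D affine transform.
    A, b = [], []
    for (x, y), (u, v) in zip(model_pts, image_pts):
        A.append([x, y, 1, 0, 0, 0]); b.append(u)
        A.append([0, 0, 0, x, y, 1]); b.append(v)
    p = np.linalg.solve(np.array(A, float), np.array(b, float))
    return p.reshape(2, 3)  # [[a11, a12, tx], [a21, a22, ty]]

def count_consistent(T, model_pts, image_pts, tol=2.0):
    # Backproject all model points under the hypothesized transform ...
    m = np.hstack([model_pts, np.ones((len(model_pts), 1))])
    proj = m @ T.T
    # ... and count those within tol pixels of some observed image feature.
    d = np.linalg.norm(proj[:, None, :] - image_pts[None, :, :], axis=2)
    return int((d.min(axis=1) < tol).sum())
```

Hypotheses whose count is high are passed on to full verification; the rest are discarded as inconsistent.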
Pose clustering
For each object, set up an accumulator array that represents pose space – each element in the accumulator array corresponds to a “bucket” in pose space.
Then take each image frame group, and hypothesize a correspondence between it and every frame group on every object
For each of these correspondences, determine pose parameters and make an entry in the accumulator array for the current object at the pose value.
If there are large numbers of votes in any object's accumulator array, this can be interpreted as evidence for the presence of that object at that pose.
The evidence can be checked using a verification method
Note that this method uses sets of correspondences, rather than individual correspondences
Implementation is easier, since each set yields a small number of possible object poses.
Improvement
The noise resistance of this method can be improved by not counting votes for objects at poses where the vote is obviously unreliable
For example, in cases where, if the object were at that pose, the object frame group would be invisible.
These improvements are sufficient to yield working systems
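A minimal sketch of the voting scheme, using a sparse dictionary as the accumulator array. estimate_pose, the pose parameterization, and the bucket sizes are all assumptions supplied by the caller.

```python
def pose_clustering(model_groups, image_groups, estimate_pose, bins, ranges):
    # bins[i] buckets over ranges[i] = (low, high) for each pose parameter.
    acc = {}
    for mg in model_groups:
        for ig in image_groups:
            pose = estimate_pose(mg, ig)  # pose vector, or None if degenerate
            if pose is None:
                continue
            # Quantize the pose into a bucket of the accumulator.
            key = tuple(min(b - 1, int((p - lo) / (hi - lo) * b))
                        for p, (lo, hi), b in zip(pose, ranges, bins))
            acc[key] = acc.get(key, 0) + 1  # one vote for this object pose
    # Buckets with many votes are evidence for the object at that pose
    # and should then be checked with a verification method.
    return acc
```

This is essentially a Hough-transform-style scheme over pose space.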
Scale-invariant feature transform (SIFT)
Keypoints of objects are first extracted from a set of reference images and stored in a database
An object is recognized in a new image by individually comparing each feature from the new image to this database and finding candidate matching features based on Euclidean distance of their feature vectors.
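A minimal sketch of that matching step: brute-force nearest neighbors under Euclidean distance, with Lowe's ratio test (a commonly used refinement, not stated in this outline) to reject ambiguous matches.

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    # query: (q, d) array of new-image descriptors; database: (n, d) array.
    matches = []
    for i, desc in enumerate(query):
        dists = np.linalg.norm(database - desc, axis=1)  # distance to all stored features
        j, k = np.argsort(dists)[:2]  # nearest and second-nearest candidates
        # Keep the match only if the best is clearly better than the runner-up.
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```

Clusters of matches that agree on object pose then serve as candidate detections.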
Genetic algorithms can operate without prior knowledge of a given dataset and can develop recognition procedures without human intervention. A recent project achieved 100 percent accuracy on the benchmark motorbike, face, airplane and car image datasets from Caltech and 99.4 percent accuracy on fish species image datasets.[9][10]
^ Worthington, Philip L.; Hancock, Edwin R. (2001). "Object recognition using shape-from-shading". IEEE Transactions on Pattern Analysis and Machine Intelligence. 23 (5): 535–542.
^ Brown, M.; Lowe, D. G. (2003). "Recognising Panoramas". Ninth IEEE International Conference on Computer Vision (ICCV'03). Vol. 2. Nice, France. p. 1218.
^ Serre, Thomas; Riesenhuber, Maximilian; Louie, Jennifer; Poggio, Tomaso. "On the Role of Object-Specific Features for Real World Object Recognition in Biological Vision". Artificial Intelligence Lab and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Center for Biological and Computational Learning, McGovern Institute for Brain Research, Cambridge, MA, USA.
^ Demant, Christian; Streicher-Abel, Bernd; Waszkewitz, Peter. Industrial Image Processing: Visual Quality Control in Manufacturing.
^ Heikkilä, Janne; Silvén, Olli (2004). "A real-time system for monitoring of cyclists and pedestrians". Image and Vision Computing. 22 (7): 563–570. doi:10.1016/j.imavis.2003.09.010.
^ Jung, Ho Gi; Kim, Dong Suk; Yoon, Pal Joo; Kim, Jaihie (2006). "Structure Analysis Based Parking Slot Marking Recognition for Semi-automatic Parking System". In Yeung, Dit-Yan; Kwok, James T.; Fred, Ana; Roli, Fabio; de Ridder, Dick (eds.). Structural, Syntactic, and Statistical Pattern Recognition. Lecture Notes in Computer Science. Vol. 4109. Berlin, Heidelberg: Springer. pp. 384–393. doi:10.1007/11815921_42. ISBN 978-3-540-37241-7.
^ Liu, F.; Gleicher, M.; Jin, H.; Agarwala, A. (2009). "Content-preserving warps for 3D video stabilization". ACM Transactions on Graphics. 28 (3): 1. CiteSeerX 10.1.1.678.3088. doi:10.1145/1531326.1531350.