An integrated model for evaluating the amount of data required for reliable recognition
Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997-11, Vol. 19 (11), p. 1251-1264 |
---|---|
Main Author: | |
Format: | Article |
Language: | English |
Subjects: | |
Summary: | Many recognition procedures rely on the consistency of a subset of data features with a hypothesis as sufficient evidence for the presence of the corresponding object. We analyze here the performance of such procedures using a probabilistic model, and provide expressions for the sufficient size of such data subsets that, if consistent, guarantee the validity of the hypotheses with arbitrary confidence. We focus on 2D objects and the affine transformation class, and provide, for the first time, an integrated model which takes into account the shape of the objects involved, the accuracy of the data collected, the clutter present in the scene, the class of the transformations involved, the accuracy of the localization, and the confidence we would like to have in our hypotheses. Interestingly, it turns out that most of these factors can be quantified cumulatively by one parameter, denoted "effective similarity", which largely determines the sufficient subset size. The analysis is based on representing the class of instances corresponding to a model object and a group of transformations as members of a metric space, and quantifying the variation of the instances by a metric cover. |
ISSN: | 0162-8828; 1939-3539 |
DOI: | 10.1109/34.632984 |
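
As a rough illustration of the kind of calculation the summary refers to (not the paper's actual expressions; the function name, parameters, and numbers below are assumptions chosen for this example), the following sketch uses a simple union-bound argument: given a per-feature probability that clutter is accidentally consistent with a wrong hypothesis and an effective number of distinguishable hypotheses (for instance, the size of a metric cover of the instance space), it returns a subset size large enough that a consistent subset is non-accidental with the desired confidence.

```python
# Hypothetical illustration only; all names and numbers are assumptions,
# not the expressions derived in the paper.
import math

def sufficient_subset_size(p_accidental, n_hypotheses, confidence):
    """Smallest k with n_hypotheses * p_accidental**k <= 1 - confidence.

    p_accidental : probability that a single clutter feature is accidentally
                   consistent with a fixed (wrong) hypothesis
    n_hypotheses : effective number of distinguishable hypotheses, e.g. the
                   size of a metric cover of the instance space
    confidence   : desired confidence that a consistent subset is not accidental
    """
    delta = 1.0 - confidence
    # Union bound: P(some wrong hypothesis has k consistent clutter features)
    # <= n_hypotheses * p_accidental**k.  Solve for k.
    k = math.log(delta / n_hypotheses) / math.log(p_accidental)
    return math.ceil(k)

if __name__ == "__main__":
    # Example: 1% accidental-consistency probability per feature,
    # ~10^4 cover elements, 99.9% confidence -> prints 4.
    print(sufficient_subset_size(p_accidental=0.01,
                                 n_hypotheses=1e4,
                                 confidence=0.999))
```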