A Framework for Generic Object Recognition with Bayesian Networks
Published in: International journal of computers & applications, 2005-01, Vol. 27 (3), p. 123-138
Main Authors: , ,
Format: Article
Language: English
Summary: Since Biederman introduced to the computer vision community a theory of human image understanding called "recognition-by-components," there has been great interest in using it as a basis for generic object recognition. Inspired by OPTICA, we propose a framework for generic object recognition with multiple Bayesian networks, where the object, primitive, prediction, and face nets are integrated with the graph representation commonly used in computer vision to capture the causal, probabilistic relations among objects, primitives, aspects, faces, and contours. Based on the use of likelihood evidence, the communication mechanism among the nets is simple and efficient, and the four basic recognition behaviours are realized in a single framework. Each net is an autonomous agent that selectively responds to data from the lower level within the context supplied by its parent net, handling the uncertainty and controlling the recognition tasks at its own level. Our contributions in this article are the dynamic feedback control among recognition stages based on Bayesian networks, the attention mechanism using consistency- and discrimination-based value functions, and the unification of incremental grouping, partial matching, and multi-key indexing as an identical process under prediction for hypothesis generation. Our experiments have demonstrated that this new approach is more robust and efficient than the previous one.
ISSN: 1206-212X, 1925-7074
DOI: 10.1080/1206212X.2005.11441764
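The summary notes that the nets communicate through likelihood evidence. As a rough illustration only (the class names, states, and probability tables below are invented for the sketch and are not taken from the paper), the following Python snippet shows how a likelihood vector reported by a lower-level net can be absorbed as virtual (soft) evidence when updating the posterior at the parent level:

```python
import numpy as np

# Minimal sketch of likelihood-evidence propagation between two levels.
# All names and numbers are hypothetical, chosen only to illustrate the idea.

object_states = ["cup", "block"]      # hypothetical object hypotheses
face_states = ["curved", "planar"]    # hypothetical face types

prior = np.array([0.5, 0.5])          # P(object), assumed uniform here
cpt = np.array([[0.8, 0.2],           # P(face | object=cup)
                [0.3, 0.7]])          # P(face | object=block)

# Likelihood message from the lower-level (face) net, e.g. favouring curved faces.
face_likelihood = np.array([0.9, 0.1])

# Virtual evidence at the parent level: P(evidence | object) = sum_f P(f | object) * L(f)
evidence_given_object = cpt @ face_likelihood

# Posterior over objects by Bayes' rule, then normalize.
posterior = prior * evidence_given_object
posterior /= posterior.sum()

for state, p in zip(object_states, posterior):
    print(f"P(object={state} | evidence) = {p:.3f}")
```

In this toy setting the parent net never needs the raw contour or face data, only the likelihood vector, which is what keeps the communication between levels simple.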