
Automatic video-based human motion analyzer for consumer surveillance system

Bibliographic Details
Published in: IEEE Transactions on Consumer Electronics, May 2009, Vol. 55 (2), pp. 591-598
Main Authors: Lao, Weilun; Han, Jungong; De With, P.H.N.
Format: Article
Language:English
Description
Summary: With continuous improvements in video-analysis techniques, automatic low-cost video surveillance is gradually emerging for consumer applications. Video surveillance can contribute to the safety of people in the home and ease the control of home-entrance and equipment-usage functions. In this paper, we study a flexible framework for the semantic analysis of human behavior from monocular surveillance video captured by a consumer camera. Successful trajectory estimation and human-body modeling facilitate the semantic analysis of human activities and events in video sequences. An additional contribution is a 3-D reconstruction scheme for scene understanding, so that the actions of persons can be analyzed from different views. The framework consists of four processing levels: (1) a preprocessing level comprising background modeling and multiple-person detection, (2) an object-based level performing trajectory estimation and posture classification, (3) an event-based level for semantic analysis, and (4) a visualization level including camera calibration and 3-D scene reconstruction. The proposed framework was evaluated and demonstrated both good quality (86% accuracy for posture classification and 90% for event detection) and effectiveness, achieving near real-time performance (6-8 frames/second).
ISSN: 0098-3063, 1558-4127
DOI: 10.1109/TCE.2009.5174427