Learning spatiotemporal representations for human fall detection in surveillance video
Published in: Journal of Visual Communication and Image Representation, 2019-02, Vol. 59, pp. 215–230
Main Authors:
Format: Article
Language: English
Subjects:
Summary:
• An effective background subtraction technique is proposed.
• A novel view-independent CNN classifier is applied.
• High-quality network inputs are obtained at low computational cost.
• A simple voting classifier works fairly well in a multi-camera system.
In this paper, a computer vision based framework is proposed that detects falls from surveillance videos. Firstly, we employ background subtraction and rank pooling to model spatial and temporal representations in videos, respectively. We then introduce a novel three-stream Convolutional Neural Network as an event classifier. Silhouettes and their motion history images serve as input to the first two streams, while dynamic images, whose temporal duration matches that of the motion history images, are used as input to the third stream. Finally, we apply voting to the event classification results to perform multi-camera fall detection. The main novelty of our method over conventional ones is that high-quality spatiotemporal representations at different levels are learned to take full advantage of appearance and motion information. Extensive experiments have been conducted on two widely used fall datasets, and the results demonstrate the effectiveness of the proposed method.
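The abstract names three network inputs (background-subtraction silhouettes, motion history images, and rank-pooled dynamic images) plus a voting step across cameras. Below is a minimal NumPy sketch of those ingredients, assuming grayscale frames; the linear weighting 2t − T − 1 is the common approximate rank-pooling form, and all function names, thresholds, and durations are illustrative rather than taken from the authors' code.

```python
import numpy as np

def silhouette(frame, background, thresh=30):
    """Foreground silhouette via simple background subtraction on uint8 grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8) * 255

def motion_history_image(silhouettes, duration=20):
    """MHI: recently moving pixels are bright, older motion decays linearly over `duration` frames."""
    mhi = np.zeros_like(silhouettes[0], dtype=np.float32)
    for sil in silhouettes[-duration:]:
        mhi = np.where(sil > 0, float(duration), np.maximum(mhi - 1.0, 0.0))
    return (mhi / duration * 255).astype(np.uint8)

def dynamic_image(frames):
    """Approximate rank pooling: collapse a clip into one image with weights 2t - T - 1."""
    T = len(frames)
    weights = np.array([2 * (t + 1) - T - 1 for t in range(T)], dtype=np.float32)
    di = np.tensordot(weights, np.stack(frames).astype(np.float32), axes=1)
    di -= di.min()
    return (255 * di / max(di.max(), 1e-6)).astype(np.uint8)

def multi_camera_vote(per_camera_labels):
    """Majority vote over per-camera event labels (1 = fall, 0 = no fall)."""
    votes = np.asarray(per_camera_labels)
    return int(votes.sum() * 2 >= len(votes))
```

In this sketch each camera's clip would yield a silhouette, an MHI, and a dynamic image as the three stream inputs, and the per-camera classifier outputs are then combined by `multi_camera_vote`.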
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2019.01.024