Semantic Analysis of Field Sports Video using a Petri-Net of Audio-Visual Concepts

Bibliographic Details
Published in: The Computer Journal, 2009-10, Vol. 52 (7), p. 808-823
Main Authors: Bai, Liang, Lao, Songyang, Smeaton, Alan F., O'Connor, Noel E., Sadlier, David, Sinclair, David
Format: Article
Language: English
Description
Summary:The most common approach to automatic summarization and highlight detection in sports video is to train an automatic classifier to detect semantic highlights based on occurrences of low-level features such as action replays, excited commentators or changes in a scoreboard. We propose an alternative approach based on the detection of perception concepts (PCs) and the construction of Petri-Nets, which can be used for both semantic description and event detection within sports videos. Low-level algorithms to detect PCs using visual, aural and motion characteristics are proposed, and a series of Petri-Nets composed of PCs is formally defined to describe video content. We call this a perception concept network-Petri-Net (PCN-PN) model. Using PCN-PNs, personalized high-level semantic descriptions of video highlights can be facilitated and queries on high-level semantics can be achieved. A particular strength of this framework is that we can easily build semantic detectors based on PCN-PNs to search within sports videos and locate interesting events. Experimental results based on recorded sports video data across three types of sports games (soccer, basketball and rugby), and each from multiple broadcasters, are used to illustrate the potential of this framework.
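To make the abstract's framework more concrete, the sketch below is a minimal illustration, not the authors' implementation, of how a Petri-Net over perception-concept (PC) places might fire a transition representing a composite highlight event. The place and transition names (close_up_view, crowd_cheer, scoreboard_change, detect_attack) are hypothetical examples chosen for illustration; the actual PCs and net topologies defined in the paper may differ.

```python
# Minimal sketch (assumed, not the paper's code) of a Petri-Net whose places
# correspond to hypothetical perception concepts detected in a video segment
# and whose transition models a composite highlight event.

from dataclasses import dataclass, field


@dataclass
class PetriNet:
    places: set                                       # PC names, e.g. "crowd_cheer"
    transitions: dict = field(default_factory=dict)   # name -> (input places, output places)
    marking: dict = field(default_factory=dict)       # place -> token count

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (set(inputs), set(outputs))

    def add_tokens(self, detected_pcs):
        """Place one token in each place whose PC detector fired on this segment."""
        for pc in detected_pcs:
            if pc in self.places:
                self.marking[pc] = self.marking.get(pc, 0) + 1

    def enabled(self, name):
        """A transition is enabled when every input place holds at least one token."""
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        """Consume one token from each input place and produce one in each output place."""
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True


# Hypothetical usage: an "attack on goal" event requires three PCs to co-occur.
net = PetriNet(places={"close_up_view", "crowd_cheer", "scoreboard_change",
                       "attack_on_goal"})
net.add_transition("detect_attack",
                   inputs=["close_up_view", "crowd_cheer", "scoreboard_change"],
                   outputs=["attack_on_goal"])

net.add_tokens(["close_up_view", "crowd_cheer", "scoreboard_change"])
if net.fire("detect_attack"):
    print("Highlight event detected:", net.marking.get("attack_on_goal", 0))
```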
ISSN: 0010-4620 (print); 1460-2067 (electronic)
DOI: 10.1093/comjnl/bxn058