Automatic content understanding with cascaded spatial–temporal deep framework for capsule endoscopy videos
Published in: Neurocomputing (Amsterdam), 2017-03, Vol. 229, pp. 77–87
Main Authors: , , ,
Format: Article
Language: English
Summary: Capsule endoscopy (CE) is the first-line diagnostic tool for inspecting gastrointestinal (GI) tract diseases. Examining and managing CE videos is a tremendous task for endoscopists, so a computer-aided diagnosis system is urgently needed. In this paper, a general cascaded spatial–temporal deep framework is proposed to understand the most commonly seen contents of whole GI tract videos. First, noisy contents such as feces, bile, bubbles, and low-power images are detected and removed by a Convolutional Neural Network (CNN) model. The clear images are then classified into entrance, stomach, small intestine, and colon by a second CNN. Finally, the topographic segmentation of the whole video is performed with a global temporal integration strategy using a Hidden Markov Model (HMM). Compared to existing methods, the proposed framework performs noise content detection and topographic segmentation at the same time, which significantly reduces the number of images endoscopists must check and segments images of different organs more accurately. Experiments on a dataset of 630K images from 14 patients demonstrate that the proposed approach achieves promising performance in terms of effectiveness and efficiency.
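As an illustration of the global temporal-integration step described in the summary, the sketch below smooths per-frame organ probabilities (such as a second-stage CNN would produce) with a left-to-right HMM decoded by the Viterbi algorithm. The organ states follow the abstract; the transition matrix, prior, and synthetic frame probabilities are assumptions made for illustration and are not values from the paper.

```python
import numpy as np

# Hypothetical sketch: temporal smoothing of per-frame CNN organ probabilities
# with a left-to-right HMM (entrance -> stomach -> small intestine -> colon),
# decoded by the Viterbi algorithm. All probabilities below are illustrative.

STATES = ["entrance", "stomach", "small_intestine", "colon"]

# Left-to-right transitions: stay in the current organ with high probability,
# otherwise advance to the next one; the GI tract is never traversed backwards.
TRANS = np.array([
    [0.995, 0.005, 0.0,   0.0  ],
    [0.0,   0.995, 0.005, 0.0  ],
    [0.0,   0.0,   0.995, 0.005],
    [0.0,   0.0,   0.0,   1.0  ],
])
PRIOR = np.array([1.0, 0.0, 0.0, 0.0])  # the capsule starts at the entrance


def viterbi(emission_probs):
    """Decode the most likely organ sequence from per-frame softmax outputs.

    emission_probs: (T, 4) array, one row of organ probabilities per clear frame.
    Returns a list of T organ names.
    """
    T, S = emission_probs.shape
    log_e = np.log(emission_probs + 1e-12)
    log_t = np.log(TRANS + 1e-12)

    delta = np.log(PRIOR + 1e-12) + log_e[0]      # best log-score ending in each state
    backptr = np.zeros((T, S), dtype=int)

    for t in range(1, T):
        scores = delta[:, None] + log_t           # (from-state, to-state) candidates
        backptr[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_e[t]

    # Backtrack from the best final state.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic CNN outputs for 12 frames: entrance, then stomach, then small intestine.
    probs = np.vstack([rng.dirichlet([8, 1, 1, 1], size=4),
                       rng.dirichlet([1, 8, 1, 1], size=4),
                       rng.dirichlet([1, 1, 8, 1], size=4)])
    print(viterbi(probs))
```

Because the transition matrix forbids backward moves, isolated misclassified frames cannot flip the decoded organ label, which is the practical benefit of decoding the whole video globally rather than labeling frames independently.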
ISSN: 0925-2312; 1872-8286
DOI: 10.1016/j.neucom.2016.06.077