How Functions Evolve in Deep Convolutional Neural Network
Main Authors: , , ,
Format: Conference Proceeding
Language: English
Summary: Deep Convolutional Neural Networks (CNNs) have been successful in a wide range of visual applications. Unfortunately, how a CNN actually learns and works is still not clearly understood. In this paper, we propose analytics for CNNs from the functional perspective by constructing three simple yet effective measurements on the convolutional filters. These measurements quantify how the filters change during training and can be used to explain the learning mechanisms of different CNN architectures. From experiments on the representative VGGNet and ResNet, we find that 1) the parameters of lower layers change more than those of upper layers; 2) lower layers, being closer to the raw data, wash out redundancy, while higher layers learn more useful information from the data; 3) redundant filters do exist in typical CNNs; and 4) the functional behaviors of VGGNet and ResNet explain the intrinsic difference between plain and residual CNN architectures. Our analytical framework and observations can facilitate future research toward a comprehensive understanding of the learning mechanisms of the wider CNN family.
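To make the summary's idea of "measuring the change of convolutional filters" concrete, here is a minimal sketch of one plausible such measurement: the relative L2 change of a layer's flattened filter weights between two training checkpoints. The function name, the toy checkpoint data, and the specific formula are illustrative assumptions, not the authors' exact definitions from the paper.

```python
# Hedged sketch of a per-layer "change magnitude" measurement (an assumption,
# not the paper's exact metric): relative L2 change of flattened filter
# weights between two training checkpoints.

import math

def layer_change_magnitude(w_before, w_after):
    """Return ||W_after - W_before||_2 / ||W_before||_2 for one layer's weights."""
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(w_after, w_before)))
    base = math.sqrt(sum(b ** 2 for b in w_before))
    return diff / base if base > 0 else 0.0

# Toy checkpoints (hypothetical values): the lower layer "conv1" changes more
# than the upper layer "conv5", mirroring the paper's first observation.
before = {"conv1": [1.0, -2.0, 0.5], "conv5": [0.30, 0.10, -0.40]}
after  = {"conv1": [0.2, -1.1, 1.4], "conv5": [0.31, 0.12, -0.41]}

changes = {name: layer_change_magnitude(before[name], after[name]) for name in before}
assert changes["conv1"] > changes["conv5"]
```

In a real setting, `w_before` and `w_after` would be the flattened filter tensors of the same layer saved at two epochs; comparing the resulting magnitudes across layers is one simple way to test whether lower layers change more than upper ones.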
ISSN: 2164-5221
DOI: 10.1109/ICSP.2018.8652459