
Early stabilizing feature importance for TensorFlow deep neural networks

Bibliographic Details
Main Authors: Heaton, Jeff, McElwee, Steven, Fraley, James, Cannady, James
Format: Conference Proceeding
Language: English
Description
Summary: Feature importance is the process by which the individual elements of a machine learning model's feature vector are ranked by their relative importance to the accuracy of that model. Some feature ranking algorithms are specific to a single model type, such as Garson and Goh's neural network weight-based feature ranking algorithm. Other feature ranking algorithms are model-agnostic, such as Breiman's perturbation feature ranking algorithm. This paper provides implementations of both the neural network weight-based and perturbation feature ranking algorithms for Google's TensorFlow deep learning framework. Additionally, this paper introduces a novel hybrid of these two feature ranking algorithms that produces a stable ranking of features earlier in the training epochs of a deep neural network. Earlier stabilization of feature rank can save considerable compute cycles during model searches and feature engineering, where many representations of the feature vector must be compared. This paper demonstrates all three algorithms by empirically showing that the introduced hybrid weight-perturbation algorithm achieves earlier stability than the established algorithms.
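For context, the model-agnostic perturbation ranking mentioned in the abstract can be sketched roughly as follows: permute one input column at a time and measure how much the model's error grows; features whose permutation hurts the most rank highest. The snippet below is an illustrative reconstruction, not the authors' TensorFlow implementation; the function name, the `metric` callable, and the `n_repeats` parameter are assumptions, and any trained model exposing a `predict(X)` method (e.g., a Keras model) would fit.

```python
# Hypothetical sketch of Breiman-style perturbation (permutation) feature
# importance. Not the paper's code: names and defaults are illustrative.
import numpy as np

def perturbation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Score each feature by the increase in `metric` (an error measure,
    lower is better) when that column of X is randomly permuted."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for col in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perturbed = X.copy()
            # Destroy this feature's relationship to the target.
            X_perturbed[:, col] = rng.permutation(X_perturbed[:, col])
            scores.append(metric(y, model.predict(X_perturbed)))
        importances[col] = np.mean(scores) - baseline  # error increase
    return importances

# Example usage with a trained model and mean squared error (assumed names):
# mse = lambda y_true, y_pred: float(np.mean((y_true - np.ravel(y_pred)) ** 2))
# scores = perturbation_importance(model, X_val, y_val, mse)
# ranking = np.argsort(scores)[::-1]   # most important feature first
```

Because only predictions are needed, this style of ranking applies to any model; the weight-based and hybrid algorithms the paper introduces additionally inspect the network's weights, which is what allows the hybrid ranking to stabilize earlier in training.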
ISSN: 2161-4407
DOI: 10.1109/IJCNN.2017.7966442