
Improving the Performance of Deep Neural Networks Using Two Proposed Activation Functions

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, p. 82249-82271
Main Authors: Alkhouly, Asmaa A.; Mohammed, Ammar; Hefny, Hesham A.
Format: Article
Language: English
Summary: In artificial neural networks, activation functions play a significant role in the learning process. Choosing the proper activation function is a major factor in achieving successful learning performance. Many activation functions suffice as universal approximators, yet their practical learning performance is often lacking. Thus, many efforts have been directed toward activation functions to improve the learning performance of artificial neural networks. The learning process, however, involves many challenges, such as saturation, dying neurons, and exploding and vanishing gradients. The contributions of this work lie along several axes. First, we introduce two novel activation functions: absolute linear units and inverse polynomial linear units. Both activation functions are augmented by an adjustable parameter that controls the slope of the gradient. Second, we present a comprehensive study and a taxonomy of various types of activation functions. Third, we conduct a broad range of experiments on several deep neural architecture models, with consideration of network type and depth. Fourth, we evaluate the proposed activation functions' performance on image and text classification tasks. For this purpose, several public benchmark datasets are used to evaluate and compare the performance of the proposed functions with that of a group of common activation functions. Finally, we analyze in depth the impact of several common activation functions on deep network architectures. Results reveal that the proposed functions outperform most of the popular activation functions on several benchmarks. A statistical study of the overall experiments on both classification categories indicates that the proposed activation functions are robust and superior to all the competing activation functions in terms of average accuracy.
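
Note: the abstract does not give the formulas of the proposed absolute linear units or inverse polynomial linear units. As an illustration only, the short Python/NumPy sketch below shows what "an adjustable parameter that controls the slope of the gradient" means in practice, using a PReLU-style activation of the same general flavor; the names prelu, prelu_grad, and alpha are illustrative and are not taken from the paper.

    import numpy as np

    def prelu(x, alpha=0.1):
        # Identity for non-negative inputs; alpha scales the negative side,
        # so it directly sets the slope (and hence the gradient) there.
        return np.where(x >= 0.0, x, alpha * x)

    def prelu_grad(x, alpha=0.1):
        # Derivative w.r.t. the input: 1 on the positive side, alpha on the negative side.
        return np.where(x >= 0.0, 1.0, alpha)

    # A non-zero alpha keeps gradients flowing for negative pre-activations,
    # which is how a slope parameter mitigates the "dying" problem noted in the abstract.
    x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(prelu(x, alpha=0.25))       # [-0.5   -0.125  0.     0.5    2.   ]
    print(prelu_grad(x, alpha=0.25))  # [ 0.25   0.25   1.     1.     1.   ]
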
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3085855