Optimization of tuning parameters for open node fault regularizer
Published in: Neurocomputing (Amsterdam) 2012-10, Vol. 94, pp. 32–45
Main Authors: , ,
Format: Article
Language: English
Summary: In neural network training, adding a regularization term to the objective function is an effective method to improve generalization ability and fault tolerance. Recently, an open node fault regularizer (ONFR) approach was proposed to train radial basis function (RBF) networks. However, this approach only aims at minimizing the training set error of the trained network under the open node fault situation. This paper studies the generalization ability of faulty RBF networks. We derive a formula to predict the generalization ability of faulty RBF networks. With this formula, we are able to predict the generalization ability of faulty RBF networks without using a test set or generating a large number of potential faulty networks. Based on the formula, we then develop an algorithm to optimize the regularization parameter and RBF width.
ISSN: 0925-2312; 1872-8286
DOI: 10.1016/j.neucom.2012.03.010
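The summary describes training an RBF network with a regularization term in the objective and evaluating the network under open node faults (a hidden node's output forced to zero). The sketch below is not the paper's exact ONFR formulation or its tuning algorithm; it uses a generic weight-decay regularizer with placeholder values for the centers, width, and regularization parameter, simply to make the setting concrete.

```python
import numpy as np

# Toy illustration (NOT the paper's ONFR objective): train a 1-D RBF
# network by regularized least squares, then simulate an open node fault.
rng = np.random.default_rng(0)

# Regression data: y = sin(x) + noise
x = np.linspace(-3.0, 3.0, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

centers = np.linspace(-3.0, 3.0, 10)  # RBF centers (fixed here)
width = 1.0                           # RBF width; the paper tunes this jointly
lam = 0.1                             # regularization parameter (placeholder)

# Design matrix: Phi[i, j] = exp(-(x_i - c_j)^2 / width^2)
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / width**2)

# Regularized least squares: w = (Phi^T Phi + lam * I)^(-1) Phi^T y
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)

# Open node fault: zero out one hidden node's contribution and compare errors.
pred = Phi @ w
Phi_faulty = Phi.copy()
Phi_faulty[:, 3] = 0.0                # node 3 "opens"
pred_faulty = Phi_faulty @ w
mse = float(np.mean((y - pred) ** 2))
mse_faulty = float(np.mean((y - pred_faulty) ** 2))
print(mse, mse_faulty)
```

Larger lam shrinks the output weights, so no single node carries too much of the mapping and the error increase under a fault is smaller; the paper's contribution is a formula that predicts the faulty network's generalization error directly, so lam and the width can be tuned without a test set or fault simulations like the one above.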