
A Refined Fuzzy Min-Max Neural Network With New Learning Procedures for Pattern Classification

Bibliographic Details
Published in:IEEE Transactions on Fuzzy Systems 2020-10, Vol.28 (10), p.2480-2494
Main Authors: Sayaydeh, Osama Nayel Al, Mohammed, Mohammed Falah, Alhroob, Essam, Tao, Hai, Lim, Chee Peng
Format: Article
Language:English
Description
Summary:The fuzzy min-max (FMM) neural network is a useful model for solving pattern classification problems. FMM has many important features, such as online learning and one-pass learning. However, it has certain limitations, especially in its learning algorithm, which consists of the expansion, overlap test, and contraction procedures. This article proposes a refined fuzzy min-max (RFMM) neural network with new procedures for tackling the key limitations of FMM. RFMM makes a number of contributions. First, a new expansion procedure is introduced to overcome the problems of overlap leniency and irregularity of hyperbox expansion. It avoids overlap between hyperboxes from different classes, reducing the number of overlap cases to one (the containment case). Second, a new formula that simplifies the original rules of the overlap test is proposed. It has two important features: (i) it identifies the overlap leniency problem during the expansion procedure; (ii) it activates the contraction procedure to eliminate the containment case. Third, a new contraction procedure is proposed to overcome the data distortion problem and provide more accurate decision boundaries for the contracted hyperboxes. Fourth, a new prediction strategy that combines the membership function with a distance measure is proposed to prevent random decision-making during the test stage. The performance of RFMM is evaluated on UCI benchmark datasets. The results demonstrate the effectiveness of the proposed modifications in making RFMM a useful model for solving pattern classification problems, as compared with other existing FMM and non-FMM classifiers.
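The abstract does not give the RFMM formulas themselves, but the FMM membership function it builds on is well documented. The sketch below assumes Simpson's classic hyperbox membership function and uses an illustrative centroid-distance tie-break in place of the paper's actual distance measure; the names `fmm_membership`, `predict`, and the sensitivity parameter `gamma` are illustrative, not taken from the article.

```python
import numpy as np

def fmm_membership(x, v, w, gamma=1.0):
    """Classic FMM hyperbox membership (Simpson-style) of input x (shape (n,))
    against k hyperboxes with min points v and max points w (each shape (k, n)).
    Returns one membership value in [0, 1] per hyperbox."""
    # Penalty for exceeding the max point and for falling below the min point.
    over = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
    under = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
    return (over + under).sum(axis=1) / (2 * x.shape[0])

def predict(x, v, w, classes, gamma=1.0, tol=1e-6):
    """Hypothetical combined prediction rule: take the class of the hyperbox
    with the highest membership; if hyperboxes from different classes tie,
    fall back to the distance from x to each tied hyperbox's centroid
    instead of deciding at random."""
    b = fmm_membership(x, v, w, gamma)
    tied = np.flatnonzero(b >= b.max() - tol)
    if len(tied) == 1 or len(set(classes[tied])) == 1:
        return classes[tied[0]]
    centroids = (v[tied] + w[tied]) / 2.0   # illustrative distance measure only
    d = np.linalg.norm(centroids - x, axis=1)
    return classes[tied[np.argmin(d)]]

# Toy usage: two hyperboxes in the unit square, one per class; the query point
# is equidistant in membership, so the distance tie-break decides.
v = np.array([[0.1, 0.1], [0.6, 0.6]])
w = np.array([[0.4, 0.4], [0.9, 0.9]])
classes = np.array([0, 1])
print(predict(np.array([0.5, 0.5]), v, w, classes))
```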
ISSN:1063-6706
1941-0034
DOI:10.1109/TFUZZ.2019.2939975