Optimizing Connection Weights in Neural Networks Using Ali Baba and the Forty Thieves Algorithm
Main Authors:
Format: Conference Proceeding
Language: English
Summary: Learning neural networks (NNs) is one of the most challenging problems in machine learning and has lately drawn the attention of many academics. The nonlinear structure of NNs and the unknown optimal set of key governing parameters (biases and weights) make training NNs particularly challenging. The primary drawbacks of conventional training methods are slow convergence and stagnation in local optima. Stochastic optimization techniques are therefore dependable alternatives for addressing these problems. This study presents a new training method based on the recently announced Ali Baba and the Forty Thieves (AFT) algorithm, which has been shown to tackle a variety of optimization problems and to outperform many existing algorithms; this motivated our effort to measure its performance in training NNs. The proposed AFT-based trainer was tested on a set of 15 datasets of varying complexity. To assess the efficiency of this training method, AFT-based NN training was compared with other meta-heuristics proposed for the same purpose. On most datasets, the AFT-based training method converged faster and was more adept at avoiding local optima than rival methods.
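The summary describes the general recipe behind metaheuristic NN trainers: flatten the network's weights and biases into a single candidate vector, score each candidate by its prediction error, and let a population-based search update the candidates. The minimal sketch below illustrates that recipe on a toy XOR task; the tiny MLP, the dataset, and the simple best-guided perturbation rule are all illustrative assumptions, since the abstract does not give AFT's actual update equations.

```python
# A minimal sketch of metaheuristic-based NN training. The one-hidden-layer
# MLP, the XOR data, and the generic best-guided search below are assumed
# stand-ins: the published AFT update rules are NOT reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (XOR) -- a hypothetical stand-in for the paper's 15 datasets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID = 2, 4
DIM = N_IN * N_HID + N_HID + N_HID + 1  # all weights + biases, flattened

def unpack(v):
    """Split a flat candidate vector into the MLP's weights and biases."""
    i = 0
    W1 = v[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = v[i:i + N_HID]; i += N_HID
    W2 = v[i:i + N_HID]; i += N_HID
    b2 = v[i]
    return W1, b1, W2, b2

def mse(v):
    """Fitness: mean squared error of the network encoded by v."""
    W1, b1, W2, b2 = unpack(v)
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return np.mean((out - y) ** 2)

# Generic population-based search: each "thief" perturbs its position
# toward the best-so-far solution, with a shrinking exploration radius.
# This is only where AFT's own update equations would plug in.
POP, ITERS = 30, 500
pop = rng.uniform(-1, 1, size=(POP, DIM))
fit = np.array([mse(p) for p in pop])
best = pop[fit.argmin()].copy()
best_fit = fit.min()

for t in range(ITERS):
    step = 1.0 - t / ITERS  # anneal exploration over the run
    for i in range(POP):
        cand = pop[i] + step * rng.normal(size=DIM) * (best - pop[i] + 0.1)
        f = mse(cand)
        if f < fit[i]:          # greedy replacement of worse candidates
            pop[i], fit[i] = cand, f
            if f < best_fit:
                best, best_fit = cand.copy(), f

print("final MSE:", best_fit)
```

The design point this sketch makes is the one the summary argues: because the search treats the loss as a black box, it needs no gradients and can escape local optima that gradient descent stalls in, at the cost of many more fitness evaluations.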
ISSN: 2831-4948
DOI: 10.1109/ACIT58888.2023.10453886