Outsmarting machine learning – Ensemble methods
Format: Conference Proceeding
Language: English
Summary: Ensemble learning has shown outstanding performance in a variety of machine learning applications by combining the forecasts of multiple base models. This article provides a concise overview of ensemble methods, covering the three primary approaches: bagging, boosting, and stacking. It chronicles their evolution from the earliest formulations to modern cutting-edge algorithms. The study examines popular ensemble techniques, namely AdaBoost (adaptive boosting), gradient boosting, XGBoost (extreme gradient boosting), and random forest. A concerted effort is made to provide a brief exposition of their mathematical and algorithmic representations, filling a vacuum in the current literature that should be useful to both machine learning researchers and practitioners.
ISSN: 0094-243X, 1551-7616
DOI: 10.1063/5.0227605
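To make concrete the core idea the abstract describes (combining forecasts from multiple base models), here is a minimal sketch of bagging in pure Python. The dataset, the decision-stump base learner, and all parameter choices are hypothetical illustrations, not taken from the article itself:

```python
import random
from statistics import mode

# Hypothetical toy dataset: 1-D points, labeled 1 when x > 5.
X = [1, 2, 3, 4, 6, 7, 8, 9]
y = [0, 0, 0, 0, 1, 1, 1, 1]

def fit_stump(xs, ys):
    """Fit a decision stump: pick the threshold t (drawn from the sample's
    own values) minimizing errors under the rule 'predict 1 if x > t'."""
    best_t, best_err = xs[0], len(xs) + 1
    for t in xs:
        err = sum((x > t) != bool(label) for x, label in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bag_stumps(X, y, n_estimators=25, seed=0):
    """Bagging: train each base model on a bootstrap resample of the data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_estimators):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # sample with replacement
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return stumps

def predict(stumps, x):
    """Aggregate the ensemble by majority vote over the stump predictions."""
    return mode(int(x > t) for t in stumps)

stumps = bag_stumps(X, y)
print(predict(stumps, 2), predict(stumps, 8))
```

Boosting differs from this sketch in that base models are trained sequentially, each reweighting or refitting the errors of its predecessors, while stacking trains a second-level model on the base models' outputs.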