One-Stage Tree: end-to-end tree builder and pruner
Published in: Machine Learning, 2022-05, Vol. 111(5), pp. 1959-1985
Main Authors:
Format: Article
Language: English
Summary: Decision trees have favorable properties, including interpretability, high computational efficiency, and the ability to learn from little training data. Learning an optimal decision tree is known to be NP-complete, so researchers have proposed many greedy algorithms, such as CART, to learn approximate solutions. Inspired by the popularity of neural networks, *soft* trees that support end-to-end training with back-propagation have attracted increasing attention. However, existing soft trees either lose interpretability due to continuous relaxation or rely on a two-stage method of end-to-end building followed by pruning. In this paper, we propose One-Stage Tree, which builds and prunes the decision tree jointly through a bilevel optimization problem. Moreover, we leverage the reparameterization trick and proximal iterations to keep the tree discrete during end-to-end training. As a result, One-Stage Tree reduces the performance gap between training and testing and retains the advantage of interpretability. Extensive experiments demonstrate that the proposed One-Stage Tree outperforms CART and existing soft trees on classification and regression tasks.
ISSN: 0885-6125; 1573-0565
DOI: 10.1007/s10994-021-06094-4
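The abstract's central idea, keeping routing decisions discrete while still training the tree end-to-end with back-propagation, can be illustrated with a straight-through reparameterization of a single inner node. The sketch below is an illustrative assumption, not the paper's implementation: the class name `HardRoutingNode`, the linear split, and the temperature `tau` are invented here, and the paper's proximal iterations and bilevel build-and-prune objective are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HardRoutingNode(nn.Module):
    """One inner node of a hypothetical soft decision tree.

    The routing score is a linear split w^T x + b, but the forward pass
    samples a *discrete* left/right decision via the straight-through
    Gumbel-softmax reparameterization, so the learned tree stays
    interpretable while gradients still flow through the relaxation.
    """

    def __init__(self, in_features: int, tau: float = 1.0):
        super().__init__()
        self.split = nn.Linear(in_features, 1)  # learned split (assumed form)
        self.tau = tau                           # relaxation temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logit = self.split(x)                       # (batch, 1)
        logits = torch.cat([logit, -logit], dim=1)  # left vs. right logits
        # hard=True returns one-hot decisions in the forward pass and
        # uses the soft relaxation only in the backward pass.
        decision = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        return decision[:, 0:1]  # 1.0 -> route left, 0.0 -> route right

# Usage: each sample is routed discretely, yet the split is trainable.
node = HardRoutingNode(in_features=4)
x = torch.randn(8, 4)
go_left = node(x)      # hard 0/1 routing decisions
loss = go_left.mean()  # placeholder objective
loss.backward()        # gradients reach node.split via the relaxation
```

In the hard forward pass each sample follows exactly one path, so the resulting tree can be read as a conventional decision tree; the soft relaxation serves only to propagate gradients during training.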