Efficient Additive Models via the Generalized Lasso
| Main Authors | |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Subjects | |
| Summary | We propose a framework for learning generalized additive models at very little additional cost (a small constant) compared to some of the most efficient schemes for learning linear classifiers, such as linear SVMs and regularized logistic regression. We achieve this through a simple feature encoding scheme followed by a novel approach to regularization, which we term the "generalized lasso". Additive models offer an attractive alternative to linear models for many large-scale tasks, as they have significantly higher predictive power while remaining easily interpretable. Furthermore, our regularization approach extends to arbitrary graphs, allowing us, for example, to explicitly incorporate spatial information or similar priors. Traditional approaches for learning additive models, such as backfitting, do not scale to large datasets. Our new formulation of the resulting optimization problem allows us to investigate the use of recent accelerated gradient algorithms and to demonstrate speed comparable to state-of-the-art linear SVM training methods, making additive models suitable for very large problems. In our experiments we find that additive models consistently outperform linear models on various datasets. |
| ISSN | 2375-9232, 2375-9259 |
| DOI | 10.1109/ICDMW.2010.184 |
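
As a rough illustration of the approach described in the summary, the sketch below approximates an additive model by discretizing each feature into quantile bins, one-hot encoding the bins, and fitting an L1-regularized linear classifier. This is not the authors' implementation: the paper's generalized-lasso penalty regularizes differences between related weights (and extends to arbitrary graphs), whereas the plain L1 penalty used here via scikit-learn is only a stand-in, and the dataset, bin counts, and other parameters are illustrative assumptions.

```python
# Hedged sketch: approximate an additive model with bin encoding + L1-regularized
# linear classification. The paper's "generalized lasso" penalty (differences over
# a graph of weights) is replaced by a plain lasso penalty for simplicity.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

# Synthetic stand-in data; any tabular classification dataset would do.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each feature is discretized into quantile bins and one-hot encoded, so the
# downstream linear classifier learns a step function per feature -- i.e. an
# additive model -- at roughly the cost of training a linear model.
additive = make_pipeline(
    KBinsDiscretizer(n_bins=16, encode="onehot-dense", strategy="quantile"),
    LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000),
)
additive.fit(X_tr, y_tr)
print("additive-model accuracy:", additive.score(X_te, y_te))

# Plain linear baseline on the raw features, for comparison.
linear = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print("linear-model accuracy:", linear.score(X_te, y_te))
```

The bin encoding is what turns the linear classifier into an additive model: the learned weights over a feature's bins form a piecewise-constant function of that feature, which is the sense in which the abstract claims higher predictive power than a raw linear model while keeping per-feature interpretability.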