A new flexible and partially monotonic discrete choice model
Published in: Transportation Research Part B: Methodological, 2024-05, Vol. 183, Article 102947
Format: Article
Language: English
Summary:
• Ensured interpretability by ensuring monotonic attribute effects in choice models.
• Maintained flexibility of the utility function while ensuring monotonicity.
• Specified utility using a lattice network, a constrained piecewise-linear function.
• Light architecture of the proposed model makes it scalable.
• Novel model is best in interpretability at marginal loss in predictability.
Poor predictability and the misspecification arising from hand-crafted utility functions are common issues in theory-driven discrete choice models (DCMs). Data-driven DCMs improve predictability through flexible utility specifications, but they do not address the misspecification issue and provide untrustworthy behavioral interpretations (e.g., biased willingness-to-pay estimates). Improving interpretability at a minimum loss of flexibility/predictability is the main challenge in data-driven DCMs. To this end, this study proposes a flexible and partially monotonic DCM that specifies the systematic utility using a lattice network (DCM-LN). DCM-LN ensures the monotonicity of the utility function relative to selected attributes while learning attribute-specific non-linear effects through piecewise-linear functions and interaction effects through multilinear interpolation in a data-driven manner. Partial monotonicity can be viewed as domain-knowledge-based regularization that prevents overfitting and thereby avoids incorrect signs of the attribute effects. The light architecture and an automated process for writing monotonicity constraints make DCM-LN scalable and translatable to practice. The proposed DCM-LN is benchmarked against a deep neural network-based DCM (DCM-DNN) and a DCM with a hand-crafted utility in a simulation study. While DCM-DNN marginally outperforms DCM-LN in predictability, DCM-LN substantially outperforms all considered models in interpretability, i.e., in recovering willingness to pay at the individual and population levels. An empirical study verifies the balanced interpretability and predictability of DCM-LN. With superior interpretability and high predictability, DCM-LN lays out new pathways to harmonize the theory-driven and data-driven paradigms.
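The abstract's core mechanism, a piecewise-linear utility component that is monotone by construction (as in lattice-network calibrators), can be sketched as follows. This is an illustrative sketch under assumed names and values, not the authors' implementation: `monotone_calibrator`, the keypoints, and the parameter values are all hypothetical.

```python
import numpy as np

def monotone_calibrator(x, keypoints, raw_deltas):
    """Piecewise-linear calibrator that is monotone non-decreasing in x.

    Monotonicity holds by construction: the height increments between
    consecutive keypoints are forced non-negative, mirroring how lattice
    networks constrain calibrator parameters during training.
    """
    deltas = np.abs(raw_deltas)                           # non-negative increments
    heights = np.concatenate([[0.0], np.cumsum(deltas)])  # values at keypoints
    return np.interp(x, keypoints, heights)               # linear interpolation

# Hypothetical keypoints for one attribute and unconstrained parameters
# as an optimizer might produce them (signs are irrelevant after |.|).
keypoints = np.array([0.0, 1.0, 2.0, 3.0])
raw_deltas = np.array([-0.5, 1.2, -0.3])

xs = np.linspace(0.0, 3.0, 50)
utilities = monotone_calibrator(xs, keypoints, raw_deltas)
assert np.all(np.diff(utilities) >= 0)  # monotone despite negative raw params
```

In a full DCM-LN, one such calibrator per selected attribute would feed a lattice layer that captures interaction effects via multilinear interpolation; only the one-dimensional monotone piece is shown here.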
ISSN: 0191-2615, 1879-2367
DOI: 10.1016/j.trb.2024.102947