
Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning

Bibliographic Details
Published in: arXiv.org, 2022-07
Main Authors: Kallus, Nathan; Mao, Xiaojie; Wang, Kaiwen; Zhou, Zhengyuan
Format: Article
Language: English
Description
Summary: Off-policy evaluation and learning (OPE/L) use offline observational data to make better decisions, which is crucial in applications where online experimentation is limited. However, because it depends entirely on logged data, OPE/L is sensitive to environment distribution shifts -- discrepancies between the data-generating environment and the one where policies are deployed. Si et al. (2020) proposed distributionally robust OPE/L (DROPE/L) to address this, but their proposal relies on inverse-propensity weighting, whose estimation error and regret deteriorate when propensities are nonparametrically estimated and whose variance is suboptimal even when they are known. For standard, non-robust OPE/L, this is solved by doubly robust (DR) methods, but they do not naturally extend to the more complex DROPE/L, which involves a worst-case expectation. In this paper, we propose the first DR algorithms for DROPE/L with KL-divergence uncertainty sets. For evaluation, we propose Localized Doubly Robust DROPE (LDR\(^2\)OPE) and show that it achieves semiparametric efficiency under weak product rate conditions. Thanks to a localization technique, LDR\(^2\)OPE only requires fitting a small number of regressions, just like DR methods for standard OPE. For learning, we propose Continuum Doubly Robust DROPL (CDR\(^2\)OPL) and show that, under a product rate condition involving a continuum of regressions, it enjoys a fast regret rate of \(\mathcal{O}\left(N^{-1/2}\right)\) even when unknown propensities are nonparametrically estimated. We empirically validate our algorithms in simulations and further extend our results to general \(f\)-divergence uncertainty sets.
ISSN: 2331-8422
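
To make the worst-case expectation mentioned in the summary concrete, below is a minimal sketch (in Python) of the KL-distributionally-robust value of a target policy, written in its standard dual form and estimated with a plain inverse-propensity-weighted plug-in -- the weighting-based baseline the paper improves on, not the paper's LDR\(^2\)OPE or CDR\(^2\)OPL estimators. The function name, arguments, and synthetic data are hypothetical and purely illustrative.

import numpy as np
from scipy.optimize import minimize_scalar

def kl_dro_value_ipw(Y, prop_target, prop_logging, delta):
    """IPW plug-in for the KL-distributionally-robust value of a target policy.

    Uses the standard dual form
        V_delta(pi) = sup_{alpha > 0} [ -alpha * log E[ w * exp(-Y / alpha) ] - alpha * delta ],
    where w = pi(A|X) / pi_0(A|X) are importance weights on logged data
    (X_i, A_i, Y_i) collected under a logging policy pi_0.
    """
    w = prop_target / prop_logging  # importance weights pi(A_i|X_i) / pi_0(A_i|X_i)

    def negative_dual(alpha):
        # Log of the weighted sample mean of exp(-Y/alpha), computed stably.
        z = -Y / alpha
        m = z.max()
        log_weighted_mean = m + np.log(np.mean(w * np.exp(z - m)))
        return -(-alpha * log_weighted_mean - alpha * delta)

    # One-dimensional dual problem: maximize over alpha on a wide bounded range.
    result = minimize_scalar(negative_dual, bounds=(1e-6, 1e6), method="bounded")
    return -result.fun

# Example with synthetic logged bandit data (illustrative only).
rng = np.random.default_rng(0)
N = 5000
Y = rng.normal(loc=1.0, scale=0.5, size=N)      # observed rewards
prop_logging = np.full(N, 0.5)                  # pi_0(A_i | X_i)
prop_target = rng.uniform(0.3, 0.7, size=N)     # pi(A_i | X_i)
print(kl_dro_value_ipw(Y, prop_target, prop_logging, delta=0.1))

As the summary indicates, the paper's doubly robust estimators go beyond this pure weighting plug-in by also fitting regressions and combining them with the importance weights, which is what yields the semiparametric efficiency and fast \(\mathcal{O}\left(N^{-1/2}\right)\) regret rates described above; the sketch only illustrates the worst-case KL value being estimated.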