Doubly Optimal No-Regret Online Learning in Strongly Monotone Games with Bandit Feedback

Bibliographic Details
Published in: arXiv.org 2024-03
Main Authors: Ba, Wenjia, Lin, Tianyi, Zhang, Jiawei, Zhou, Zhengyuan
Format: Article
Language: English
Description
Summary: We consider online no-regret learning in unknown games with bandit feedback, where each player can only observe its reward at each time -- determined by all players' current joint action -- rather than its gradient. We focus on the class of \textit{smooth and strongly monotone} games and study optimal no-regret learning therein. Leveraging self-concordant barrier functions, we first construct a new bandit learning algorithm and show that it achieves the single-agent optimal regret of \(\tilde{\Theta}(n\sqrt{T})\) under smooth and strongly concave reward functions (\(n \geq 1\) is the problem dimension). We then show that if each player applies this no-regret learning algorithm in strongly monotone games, the joint action converges in the \textit{last iterate} to the unique Nash equilibrium at a rate of \(\tilde{\Theta}(nT^{-1/2})\). Prior to our work, the best-known convergence rate in the same class of games is \(\tilde{O}(n^{2/3}T^{-1/3})\) (achieved by a different algorithm), thus leaving open the problem of optimal no-regret learning algorithms (since the known lower bound is \(\Omega(nT^{-1/2})\)). Our results thus settle this open problem and contribute to the broad landscape of bandit game-theoretical learning by identifying the first doubly optimal bandit learning algorithm, in that it achieves (up to log factors) both optimal regret in the single-agent learning and optimal last-iterate convergence rate in the multi-agent learning. We also present preliminary numerical results on several application problems to demonstrate the efficacy of our algorithm in terms of iteration count.
ISSN: 2331-8422
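
Note: the abstract above does not include the algorithm itself. As a rough, hypothetical illustration of the general recipe it alludes to (one-point bandit feedback combined with a self-concordant barrier), below is a minimal Python sketch on a box domain with a log-barrier. The function names, step rule, and all parameter values here are assumptions made for this sketch, not the authors' actual method; the paper's algorithm and its analysis are more involved.

import numpy as np

def barrier_hess_diag(x, lo, hi):
    # Diagonal Hessian of the log-barrier R(x) = -sum(log(x - lo) + log(hi - x)),
    # a self-concordant barrier for the box [lo, hi]^n.
    return 1.0 / (x - lo) ** 2 + 1.0 / (hi - x) ** 2

def one_point_barrier_bandit(reward_fn, n, T, lo=0.0, hi=1.0,
                             delta=0.5, eta=0.01, seed=0):
    # Hypothetical sketch: each round, perturb the iterate inside the Dikin
    # ellipsoid of the barrier (which keeps the played action feasible),
    # observe only the scalar reward, form the classical one-point gradient
    # estimate, and take a barrier-preconditioned ascent step.
    rng = np.random.default_rng(seed)
    x = np.full(n, 0.5 * (lo + hi))               # start at the analytic center
    for _ in range(T):
        h = barrier_hess_diag(x, lo, hi)          # Dikin ellipsoid shape at x
        u = rng.standard_normal(n)
        u /= np.linalg.norm(u)                    # uniform direction on the unit sphere
        z = x + delta * u / np.sqrt(h)            # played action; feasible since delta < 1
        r = reward_fn(z)                          # bandit feedback: scalar reward only
        g_hat = (n / delta) * r * np.sqrt(h) * u  # one-point gradient estimate
        x = x + eta * g_hat / h                   # ascent step preconditioned by the barrier
        x = np.clip(x, lo + 1e-8, hi - 1e-8)      # crude safeguard: stay in the interior
    return x

# Example with a smooth, strongly concave reward maximized at 0.3 per coordinate;
# the iterate should drift toward 0.3 as T grows.
x_final = one_point_barrier_bandit(lambda z: -np.sum((z - 0.3) ** 2), n=5, T=20000)

In a multi-player run, each player would maintain its own iterate and observe only its own reward at the joint action; per the abstract, the paper's contribution is that its barrier-based algorithm attains both the optimal single-agent regret and optimal last-iterate convergence to the unique Nash equilibrium in smooth, strongly monotone games.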