
No-Regret Learning in Dynamic Stackelberg Games

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 2024-03, Vol. 69 (3), p. 1-14
Main Authors: Lauffer, Niklas; Ghasemi, Mahsa; Hashemi, Abolfazl; Savas, Yagiz; Topcu, Ufuk
Format: Article
Language: English
Summary: In a Stackelberg game, a leader commits to a randomized strategy, and a follower chooses their best strategy in response. We consider an extension of the standard Stackelberg game, called a discrete-time dynamic Stackelberg game, which has an underlying state space that affects the leader's rewards and available strategies and evolves in a Markovian manner depending on both the leader's and the follower's selected strategies. Although standard Stackelberg games have been used to improve scheduling in security domains, their deployment is often limited by the requirement of complete information about the follower's utility function. In contrast, we consider scenarios where the follower's utility function is unknown to the leader; however, it can be linearly parameterized. Our objective is then to provide an algorithm that prescribes a randomized strategy to the leader at each step of the game, based on observations of how the follower responded in previous steps. We design an online learning algorithm that, with high probability, is no-regret, i.e., achieves a regret bound (when compared to the best policy in hindsight) that is sublinear in the number of time steps; the degree of sublinearity depends on the number of features representing the follower's utility function. The regret of the proposed learning algorithm is independent of the size of the state space and polynomial in the remaining parameters of the game. We show that the proposed learning algorithm outperforms existing model-free reinforcement learning approaches.
ISSN: 0018-9286; 1558-2523
DOI: 10.1109/TAC.2023.3330797
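
To make the setting described in the summary concrete, the following Python sketch simulates one possible leader-follower interaction loop: the leader commits to a mixed strategy, a follower with a linearly parameterized utility best-responds, and the leader refits a crude regularized least-squares estimate of the unknown parameters from the observed responses. The feature map, state dynamics, and parameter-update step below are illustrative assumptions made for this example, not the algorithm proposed in the article.

import numpy as np

# Illustrative sketch only: a toy discrete-time dynamic Stackelberg interaction.
# The feature map, state dynamics, and estimation step are assumptions for this
# example; they are not the learning algorithm proposed in the article.

rng = np.random.default_rng(0)

n_leader, n_follower, d = 3, 4, 5          # leader/follower action counts, feature dimension
n_states = 2
features = rng.normal(size=(n_states, n_follower, d))   # fixed per (state, follower action) features
theta_true = rng.normal(size=d)            # follower's utility parameters, unknown to the leader

def phi(state, x, b):
    # Feature vector for follower action b, given the state and the leader's mixed strategy x.
    return features[state, b] * (1.0 + x[b % n_leader])   # toy dependence on the commitment

def follower_best_response(state, x, theta):
    # The follower plays the action maximizing its linear utility theta . phi(state, x, b).
    return int(np.argmax([theta @ phi(state, x, b) for b in range(n_follower)]))

# Repeated play: commit, observe the follower's response, refit an estimate of theta.
A = np.eye(d)                              # regularized Gram matrix for ridge regression
z = np.zeros(d)
state = 0
for t in range(200):
    theta_hat = np.linalg.solve(A, z)      # current estimate; a real learner would use it
    x = rng.dirichlet(np.ones(n_leader))   # leader's committed mixed strategy (random here, for brevity)
    b = follower_best_response(state, x, theta_true)      # observed best response
    f = phi(state, x, b)
    A += np.outer(f, f)                    # crude stand-in update: regress a unit target
    z += f                                 # onto the chosen action's features
    state = (state + b) % n_states         # toy Markovian state transition

The sketch only illustrates the structure of the repeated interaction and observation; the article's algorithm additionally chooses the leader's commitment from its estimates so that the cumulative regret grows sublinearly in the number of time steps.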