Revisiting a Design Choice in Gradient Temporal Difference Learning

Bibliographic Details
Published in: arXiv.org 2024-11
Main Authors: Qian, Xiaochi; Zhang, Shangtong
Format: Article
Language: English
Description
Summary: Off-policy learning enables a reinforcement learning (RL) agent to reason counterfactually about policies that are not executed, and it is one of the most important ideas in RL. However, it can lead to instability when combined with function approximation and bootstrapping, two arguably indispensable ingredients for large-scale reinforcement learning. This is the notorious deadly triad. The seminal work of Sutton et al. (2008) pioneered Gradient Temporal Difference learning (GTD) as the first solution to the deadly triad, and GTD has enjoyed massive success thereafter. During the derivation of GTD, an intermediate algorithm, called \(A^\top\)TD, was invented but soon deemed inferior. In this paper, we revisit \(A^\top\)TD and prove that a variant of it, called \(A_t^\top\)TD, is also an effective solution to the deadly triad. Furthermore, \(A_t^\top\)TD needs only one set of parameters and one learning rate, whereas GTD has two sets of parameters and two learning rates, making it hard to tune in practice. We provide an asymptotic analysis for \(A_t^\top\)TD and a finite-sample analysis for a variant of \(A_t^\top\)TD that additionally involves a projection operator. The convergence rate of this variant is on par with that of canonical on-policy temporal difference learning.
ISSN: 2331-8422
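
For readers comparing the two approaches mentioned in the summary, here is a minimal sketch of the updates being contrasted, written in standard notation from the GTD literature rather than the paper's own; the symbols \(\phi_t\), \(\theta_t\), \(u_t\), \(\alpha_t\), \(\beta_t\) and the generic \(A^\top\)-style update below are assumptions for illustration, not the paper's exact algorithm. With linear value estimate \(\phi_t^\top \theta_t\) and TD error \(\delta_t = r_{t+1} + \gamma \phi_{t+1}^\top \theta_t - \phi_t^\top \theta_t\), the GTD algorithm of Sutton et al. (2008) maintains two parameter vectors with two learning rates,
\[
u_{t+1} = u_t + \beta_t \left( \delta_t \phi_t - u_t \right), \qquad
\theta_{t+1} = \theta_t + \alpha_t \left( \phi_t - \gamma \phi_{t+1} \right) \left( \phi_t^\top u_t \right),
\]
where \(u_t\) tracks \(\mathbb{E}[\delta_t \phi_t]\). An \(A^\top\)-style method instead uses a single parameter vector and a single learning rate,
\[
\theta_{t+1} = \theta_t + \alpha_t \, \hat{A}_t^\top \, \delta_t \phi_t, \qquad A = \mathbb{E}\!\left[ \phi_t \left( \phi_t - \gamma \phi_{t+1} \right)^\top \right],
\]
for some sample-based estimate \(\hat{A}_t\) of \(A\). Both can be read as stochastic approximations of gradient descent on \(\tfrac{1}{2}\lVert \mathbb{E}[\delta_t \phi_t] \rVert^2\), whose negative gradient is \(A^\top \mathbb{E}[\delta_t \phi_t]\). How \(A_t^\top\)TD constructs its estimate so that this one-parameter, one-learning-rate update remains convergent under the deadly triad is detailed in the full text.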