A Survey on Causal Reinforcement Learning

Bibliographic Details
Published in:IEEE Transactions on Neural Networks and Learning Systems, 2024-11, p.1-21
Main Authors: Zeng, Yan, Cai, Ruichu, Sun, Fuchun, Huang, Libo, Hao, Zhifeng
Format: Article
Language:English
Description
Summary:While reinforcement learning (RL) has achieved tremendous success in sequential decision-making problems across many domains, it still faces the key challenges of data inefficiency and a lack of interpretability. Interestingly, many researchers have recently leveraged insights from the causality literature, bringing forth flourishing works that unify the merits of causality to address these challenges in RL. It is therefore of great necessity and significance to collate these causal RL (CRL) works, offer a review of CRL methods, and investigate the potential functionality of causality for RL. In particular, we divide the existing CRL approaches into two categories according to whether their causality-based information is given in advance. We further analyze each category in terms of the formalization of different models, including the Markov decision process (MDP), partially observable MDP (POMDP), multiarmed bandits (MABs), imitation learning (IL), and dynamic treatment regimes (DTRs), each of which corresponds to a distinct type of causal graphical illustration. Moreover, we summarize the evaluation metrics and open-source resources, discuss emerging applications, and outline promising prospects for the future development of CRL.
ISSN:2162-237X
DOI:10.1109/TNNLS.2024.3403001