Large Language Models as Markov Chains

Bibliographic Details
Published in:arXiv.org 2024-10
Main Authors: Zekri, Oussama, Odonnat, Ambroise, Benechehab, Abdelhakim, Bleistein, Linus, Boullé, Nicolas, Redko, Ievgen
Format: Article
Language:English
Description
Summary:Large language models (LLMs) have proven to be remarkably efficient, both across a wide range of natural language processing tasks and well beyond them. However, a comprehensive theoretical analysis of the origins of their impressive performance remains elusive. In this paper, we approach this challenging task by drawing an equivalence between generic autoregressive language models with vocabulary of size \(T\) and context window of size \(K\) and Markov chains defined on a finite state space of size \(\mathcal{O}(T^K)\). We derive several surprising findings related to the existence of a stationary distribution of Markov chains that capture the inference power of LLMs, their speed of convergence to it, and the influence of the temperature on the latter. We then prove pre-training and in-context generalization bounds and show how the drawn equivalence allows us to enrich their interpretation. Finally, we illustrate our theoretical guarantees with experiments on several recent LLMs to highlight how they capture the behavior observed in practice.
ISSN:2331-8422
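
Illustrative sketch (not taken from the paper's code): the summary's equivalence says an autoregressive model with vocabulary size T and context window K induces a Markov chain on the O(T^K) possible contexts. The toy random-logit model and the helper name next_token_probs below are assumptions for illustration only; the sketch builds the induced transition matrix for tiny T and K and estimates its stationary distribution by power iteration.

"""Toy sketch of the LLM-as-Markov-chain view: states are length-K contexts,
and appending one sampled token (dropping the oldest) is one chain step."""
from itertools import product

import numpy as np

T, K = 3, 2                                  # toy vocabulary and context sizes
states = list(product(range(T), repeat=K))   # all T**K contexts
index = {s: i for i, s in enumerate(states)}

rng = np.random.default_rng(0)
logits = rng.normal(size=(len(states), T))   # stand-in for an LLM's logits

def next_token_probs(state, temperature=1.0):
    """Softmax over the vocabulary given a length-K context (toy model)."""
    z = logits[index[state]] / temperature
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

# Transition matrix of the induced Markov chain: emitting token t from
# context (x_1, ..., x_K) moves the chain to context (x_2, ..., x_K, t).
P = np.zeros((len(states), len(states)))
for s in states:
    p = next_token_probs(s)
    for t in range(T):
        P[index[s], index[s[1:] + (t,)]] += p[t]

# Stationary distribution by power iteration; with full-support softmax
# outputs the chain is irreducible and aperiodic, so the iteration converges.
pi = np.full(len(states), 1.0 / len(states))
for _ in range(10_000):
    pi = pi @ P
print(pi.round(3), pi.sum())

Lowering the temperature in next_token_probs sharpens each row of P, which in this toy setting slows mixing toward the stationary distribution, in the spirit of the temperature dependence discussed in the summary.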