A Measure-Theoretic Characterization of Tight Language Models

Bibliographic Details
Published in: arXiv.org 2023-08
Main Authors: Du, Li; Torroba Hennigen, Lucas; Pimentel, Tiago; Meister, Clara; Eisner, Jason; Cotterell, Ryan
Format: Article
Language: English
Description
Summary: Language modeling, a central task in natural language processing, involves estimating a probability distribution over strings. In most cases, the estimated distribution sums to 1 over all finite strings. However, in some pathological cases, probability mass can "leak" onto the set of infinite sequences. In order to characterize the notion of leakage more precisely, this paper offers a measure-theoretic treatment of language modeling. We prove that many popular language model families are in fact tight, meaning that they will not leak in this sense. We also generalize characterizations of tightness proposed in previous works.
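
To make the abstract's notion of "leakage" concrete, here is a minimal numerical sketch in Python (an illustration, not taken from the paper). It considers a toy model over a one-symbol alphabet that emits an end-of-string symbol at step t with a hypothetical probability p_t; the total mass assigned to finite strings is then sum over n of p_n times the product over t < n of (1 - p_t). If that sum is below 1, the remaining mass has leaked onto infinite sequences, i.e., the model is not tight.

    # Sketch: probability that generation halts (emits EOS) within `steps` steps.
    def finite_string_mass(halt_prob, steps=1_000_000):
        total, survive = 0.0, 1.0  # survive = prob. that no EOS was emitted yet
        for t in range(steps):
            p = halt_prob(t)
            total += survive * p   # mass of strings of exactly length t
            survive *= 1.0 - p
        return total

    # Tight: a constant EOS probability puts (essentially) all mass on finite strings.
    print(finite_string_mass(lambda t: 0.1))                 # -> ~1.0

    # Not tight: an EOS probability decaying like 1/t^2 leaks mass onto
    # infinite sequences; the finite-string mass converges to 1/2, not 1.
    print(finite_string_mass(lambda t: 1.0 / (t + 2) ** 2))  # -> ~0.5

In the constant case the survival probability decays geometrically to 0, so the finite-string mass tends to 1; in the decaying case the product of (1 - 1/(t+2)^2) telescopes to 1/2, so exactly half the probability mass escapes to infinite sequences.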
ISSN: 2331-8422
DOI: 10.48550/arxiv.2212.10502