Domain-matched Pre-training Tasks for Dense Retrieval

Bibliographic Details
Published in: arXiv.org, 2021-07
Main Authors: Oğuz, Barlas; Lakhotia, Kushal; Gupta, Anchit; Lewis, Patrick; Karpukhin, Vladimir; Piktus, Aleksandra; Chen, Xilun; Riedel, Sebastian; Yih, Wen-tau; Gupta, Sonal; Mehdad, Yashar
Format: Article
Language: English
Description
Summary: Pre-training on larger datasets with ever increasing model size is now a proven recipe for increased performance across almost all NLP tasks. A notable exception is information retrieval, where additional pre-training has so far failed to produce convincing results. We show that, with the right pre-training setup, this barrier can be overcome. We demonstrate this by pre-training large bi-encoder models on 1) a recently released set of 65 million synthetically generated questions, and 2) 200 million post-comment pairs from a preexisting dataset of Reddit conversations made available by pushshift.io. We evaluate on a set of information retrieval and dialogue retrieval benchmarks, showing substantial improvements over supervised baselines.
ISSN: 2331-8422
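
The summary describes pre-training bi-encoder retrievers on large collections of paired text (synthetic question-passage pairs and Reddit post-comment pairs), but the record contains no implementation details. The sketch below illustrates the general DPR-style bi-encoder training setup with in-batch negatives that such models build on; the model name, hyperparameters, and toy training pairs are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of bi-encoder dense retrieval training with in-batch negatives.
# Model choice, sequence length, and example pairs are assumptions for illustration.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption; the paper uses larger bi-encoders

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
query_encoder = AutoModel.from_pretrained(MODEL_NAME)
passage_encoder = AutoModel.from_pretrained(MODEL_NAME)


def encode(encoder, texts):
    """Encode a list of strings into dense vectors using the [CLS] token."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    outputs = encoder(**batch)
    return outputs.last_hidden_state[:, 0]  # (batch_size, hidden_dim)


def in_batch_negative_loss(queries, passages):
    """Contrastive loss: the same-index passage is each query's positive;
    all other passages in the batch act as negatives."""
    q = encode(query_encoder, queries)     # (B, H)
    p = encode(passage_encoder, passages)  # (B, H)
    scores = q @ p.T                       # (B, B) dot-product similarities
    labels = torch.arange(q.size(0))       # diagonal entries are the positives
    return F.cross_entropy(scores, labels)


# Toy pre-training step on paired text, e.g. (question, passage) or (post, comment).
pairs = [
    ("who wrote the iliad",
     "The Iliad is an epic poem attributed to Homer."),
    ("what is dense retrieval",
     "Dense retrieval encodes queries and documents into vectors and ranks by similarity."),
]
loss = in_batch_negative_loss([q for q, _ in pairs], [p for _, p in pairs])
loss.backward()
```

At scale, the same objective is applied over millions of automatically collected pairs, after which the pre-trained encoders are fine-tuned on the downstream retrieval benchmarks.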