PERL: Pivot-based Domain Adaptation for Pre-trained Deep Contextualized Embedding Models

Bibliographic Details
Published in: Transactions of the Association for Computational Linguistics, 2020-01, Vol. 8, p. 504-521
Main Authors: Ben-David, Eyal, Rabinovitz, Carmel, Reichart, Roi
Format: Article
Language: English
Description
Summary: Pivot-based neural representation models have led to significant progress in domain adaptation for NLP. However, previous research following this approach utilizes only labeled data from the source domain and unlabeled data from the source and target domains, neglecting massive unlabeled corpora that are not necessarily drawn from these domains. To alleviate this, we propose PERL: a representation learning model that extends contextualized word embedding models such as BERT (Devlin et al., 2019) with pivot-based fine-tuning. PERL outperforms strong baselines across 22 sentiment classification domain adaptation setups, improves in-domain model performance, yields effective reduced-size models, and increases model stability.
ISSN:2307-387X
DOI:10.1162/tacl_a_00328
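
The pivot-based fine-tuning described in the summary can be sketched in a few lines: mask occurrences of pivot features in unlabeled text and continue training a BERT masked-language-model head to recover them. The sketch below is a simplified illustration under assumed names (a bert-base-uncased backbone, a toy pivot set, a two-sentence corpus); it is not the authors' released code, and PERL's actual objective scores masked positions against a pivot vocabulary chosen for its association with source-domain labels.

import torch
from transformers import BertForMaskedLM, BertTokenizerFast

MODEL_NAME = "bert-base-uncased"          # assumed backbone, not necessarily the paper's exact model
PIVOTS = {"great", "terrible", "boring"}  # toy pivot set; PERL selects pivots from the data itself

tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
model = BertForMaskedLM.from_pretrained(MODEL_NAME)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Stand-in for the unlabeled source, target, and general corpora.
corpus = [
    "the film was great and the cast was charming",
    "a boring plot and terrible acting",
]

model.train()
for text in corpus:
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"].clone()
    labels = torch.full_like(input_ids, -100)  # positions labeled -100 are ignored by the MLM loss

    # Mask only pivot tokens, so the objective focuses on recovering pivots from context.
    for i, tok_id in enumerate(input_ids[0].tolist()):
        if tokenizer.convert_ids_to_tokens(tok_id) in PIVOTS:
            labels[0, i] = tok_id
            input_ids[0, i] = tokenizer.mask_token_id

    out = model(input_ids=input_ids, attention_mask=enc["attention_mask"], labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

After this stage, the fine-tuned encoder would serve as the representation for a sentiment classifier trained on labeled source-domain data, which is the setting in which the paper reports its cross-domain gains.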