Shift-Reduce Constituent Parsing with Neural Lookahead Features
Published in: Transactions of the Association for Computational Linguistics, 2017-01, Vol. 5, p. 45-58
Main Authors: Jiangming Liu, Yue Zhang
Format: Article
Language: English
Summary: Transition-based models can be fast and accurate for constituent parsing. Compared with chart-based models, they leverage richer features by extracting history information from the parser stack, which consists of a sequence of non-local constituents. On the other hand, during incremental parsing, constituent information on the right-hand side of the current word is not utilized, which is a relative weakness of shift-reduce parsing. To address this limitation, we leverage a fast neural model to extract lookahead features. In particular, we build a bidirectional LSTM model that uses full sentence information to predict the hierarchy of constituents that each word starts and ends. The results are then passed to a strong transition-based constituent parser as lookahead features. The resulting parser gives a 1.3% absolute improvement on WSJ and 2.3% on CTB over the baseline, the highest reported accuracies for fully-supervised parsing.
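The summary describes the lookahead component only at a high level. The sketch below is a minimal, illustrative PyTorch rendering of such a component, not the authors' implementation: the class name LookaheadBiLSTM, all hyperparameters, and the simplification of predicting a single start label and a single end label per word (rather than a full constituent hierarchy) are assumptions made here for brevity.

```python
# Minimal sketch (not the paper's code) of a BiLSTM lookahead model:
# it reads the full sentence and, for each word, scores the constituent
# it starts and the constituent it ends. Predicted labels would then be
# fed to a shift-reduce parser as lookahead features for buffer words.
import torch
import torch.nn as nn

class LookaheadBiLSTM(nn.Module):
    def __init__(self, vocab_size, n_start_labels, n_end_labels,
                 emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional LSTM: each word's state also encodes the words
        # to its right, which is exactly the "lookahead" information a
        # left-to-right shift-reduce parser lacks.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Two per-word classifiers, simplified here to one label each
        # instead of the full start/end constituent hierarchies.
        self.start_out = nn.Linear(2 * hidden_dim, n_start_labels)
        self.end_out = nn.Linear(2 * hidden_dim, n_end_labels)

    def forward(self, word_ids):
        # word_ids: (batch, sentence_length) integer tensor
        states, _ = self.bilstm(self.embed(word_ids))
        return self.start_out(states), self.end_out(states)

# Usage with illustrative sizes: 10k-word vocabulary, 30 labels each.
model = LookaheadBiLSTM(vocab_size=10000, n_start_labels=30, n_end_labels=30)
start_scores, end_scores = model(torch.randint(0, 10000, (1, 7)))
start_labels = start_scores.argmax(-1)  # (1, 7) predicted start labels
end_labels = end_scores.argmax(-1)      # (1, 7) predicted end labels
```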
ISSN: 2307-387X
DOI: 10.1162/tacl_a_00045