Training Tips for the Transformer Model
Published in: The Prague Bulletin of Mathematical Linguistics, April 2018, Vol. 110 (1), pp. 43-70
Main Authors: Martin Popel, Ondřej Bojar
Format: Article
Language: English
Summary: This article describes our experiments in neural machine translation using the recent Tensor2Tensor framework and the Transformer sequence-to-sequence model (Vaswani et al., 2017). We examine some of the critical parameters that affect the final translation quality, memory usage, training stability and training time, concluding each experiment with a set of recommendations for fellow researchers. In addition to confirming the general mantra “more data and larger models”, we address scaling to multiple GPUs and provide practical tips for improved training regarding batch size, learning rate, warmup steps, maximum sentence length and checkpoint averaging. We hope that our observations will allow others to get better results given their particular hardware and data constraints.
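The hyper-parameters named in the abstract (batch size, learning rate, warmup steps, maximum sentence length) are all exposed as fields of a Tensor2Tensor hyper-parameter set. The sketch below shows one way to override them in Python, starting from the registered `transformer_big` set; the concrete numeric values are illustrative placeholders, not the paper's recommendations.

```python
# Minimal sketch, assuming Tensor2Tensor is installed (pip install tensor2tensor).
# The numeric values are illustrative only, not the values recommended in the paper.
from tensor2tensor.models import transformer

# Start from the registered "transformer_big" hyper-parameter set ("larger models").
hparams = transformer.transformer_big()

hparams.batch_size = 2000                   # batch size, in subword tokens per GPU
hparams.learning_rate_warmup_steps = 16000  # warmup steps of the learning-rate schedule
hparams.learning_rate = 0.2                 # learning-rate scaling constant
hparams.max_length = 150                    # maximum sentence length, in subwords

print(hparams.to_json())                    # inspect the resulting configuration
```

The same overrides can be passed to the `t2t-trainer` command line as a comma-separated string via its `--hparams` flag; checkpoint averaging is performed with a separate utility (`avg_checkpoints`) shipped in the Tensor2Tensor repository.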
ISSN: 1804-0462; 0032-6585
DOI: 10.2478/pralin-2018-0002