
A Study on the Best Way to Compress Natural Language Processing Models

Bibliographic Details
Main Authors: Antunes, Joao, Pardal, Miguel L., Coheur, Luisa
Format: Conference Proceeding
Language: English
Description
Summary: Current research in Natural Language Processing shows a growing number of models extensively trained with large computational budgets. However, these models have computationally demanding requirements that prevent them from being deployed on devices with strict resource and response-latency limitations. In this paper, we apply state-of-the-art model compression techniques to create compact versions of several of these models. To assess whether the trade-off between model performance and budget is worthwhile, we evaluate the compressed models in terms of efficiency, model simplicity, and environmental footprint. We also present a brief comparison between uncompressed and compressed models running on low-end hardware.
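The record does not name the specific compression techniques the paper applies. One widely used technique in this family is post-training quantization, which stores weights in a low-precision integer format. A minimal sketch, assuming a simple symmetric int8 scheme (an illustrative assumption, not the paper's method):

```python
# Illustrative sketch of symmetric post-training int8 quantization,
# a common model-compression technique. The exact methods used in
# the paper are not stated in this record; this is an assumption.

def quantize_int8(weights):
    """Map float weights to int8 values with a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]  # values in [-127, 127]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from int8 values."""
    return [q * scale for q in quantized]

weights = [0.51, -1.27, 0.03, 0.89]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# int8 storage needs 4x less memory than float32; each restored
# weight differs from the original by at most half a scale step.
```

Schemes like this shrink model size and can speed up inference on low-end hardware, at the cost of a small accuracy drop, which is the performance/budget trade-off the abstract describes.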
ISSN: 1558-4739
DOI: 10.1109/FUZZ-IEEE55066.2022.9882595