
A Study on the Best Way to Compress Natural Language Processing Models

Current research in Natural Language Processing shows a growing number of models extensively trained with large computational budgets. However, these models present computationally demanding requirements, preventing them from being deployed in devices with strict resource and response latency limitations. In this paper, we apply state-of-the-art model compression techniques to create compact versions of several of these models. In order to evaluate whether the trade-off between model performance and budget is worthwhile, we evaluate them in terms of efficiency, model simplicity and environmental footprint. We also present a brief comparison between uncompressed and compressed models when running in low-end hardware.

Bibliographic Details
Main Authors: Antunes, Joao; Pardal, Miguel L.; Coheur, Luisa
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
container_start_page 1
container_end_page 8
creator Antunes, Joao; Pardal, Miguel L.; Coheur, Luisa
description Current research in Natural Language Processing shows a growing number of models extensively trained with large computational budgets. However, these models present computationally demanding requirements, preventing them from being deployed in devices with strict resource and response latency limitations. In this paper, we apply state-of-the-art model compression techniques to create compact versions of several of these models. In order to evaluate whether the trade-off between model performance and budget is worthwhile, we evaluate them in terms of efficiency, model simplicity and environmental footprint. We also present a brief comparison between uncompressed and compressed models when running in low-end hardware.
doi_str_mv 10.1109/FUZZ-IEEE55066.2022.9882595
format conference_proceeding
fulltext fulltext_linktorsrc
identifier EISSN: 1558-4739
ispartof 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2022, p.1-8
issn 1558-4739
language eng
recordid cdi_ieee_primary_9882595
source IEEE Xplore All Conference Series
subjects Computational modeling
environmental footprint
Fuzzy systems
Hardware
model compression
model evaluation
Natural language processing
Performance evaluation
title A Study on the Best Way to Compress Natural Language Processing Models