
Neural Network Quantisation for Faster Homomorphic Encryption

Homomorphic encryption (HE) enables computation on encrypted data, which makes it possible to perform privacy-preserving neural network inference. One disadvantage of this technique is that it is several orders of magnitude slower than computation on unencrypted data. Neural networks are commonly trained using floating-point arithmetic, while most homomorphic encryption libraries operate on integers, so the network must first be quantised. A straightforward approach would be to quantise to large integer sizes (e.g. 32-bit) to avoid large quantisation errors. In this work, we use quantisation-aware training to reduce the integer sizes of the networks and allow more efficient computation. For the targeted MNIST architecture proposed by Badawi et al. [1], we reduce the integer sizes by 33% without significant loss of accuracy, while for the CIFAR architecture we reduce them by 43%. Implementing the resulting networks under the BFV homomorphic encryption scheme using SEAL, we reduce the execution time of the MNIST neural network by 80% and of the CIFAR neural network by 40%.
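As background for the abstract above: quantisation-aware training typically inserts a "fake-quantisation" step into the forward pass so the network learns weights that survive rounding to low-bit integers. The sketch below is a minimal, generic illustration in PyTorch and is not the authors' implementation; the bit width, layer sizes, and the fake_quantise/QuantLinear helpers are illustrative assumptions.

import torch
import torch.nn as nn

def fake_quantise(x: torch.Tensor, bits: int) -> torch.Tensor:
    # Round x onto a signed grid with `bits` bits; the straight-through
    # estimator keeps gradients flowing through the rounding step.
    qmax = 2 ** (bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    return x + (q - x).detach()

class QuantLinear(nn.Linear):
    # Linear layer whose weights and inputs are fake-quantised to `bits` bits.
    def __init__(self, in_features, out_features, bits=8):
        super().__init__(in_features, out_features)
        self.bits = bits

    def forward(self, x):
        w_q = fake_quantise(self.weight, self.bits)
        x_q = fake_quantise(x, self.bits)
        return nn.functional.linear(x_q, w_q, self.bias)

# Toy MNIST-style classifier trained with 8-bit fake quantisation (hypothetical sizes).
model = nn.Sequential(nn.Flatten(), QuantLinear(784, 128, bits=8),
                      nn.ReLU(), QuantLinear(128, 10, bits=8))

In a setting like the paper's, lowering the bit width in such a scheme is what shrinks the integer sizes, and hence the plaintext-modulus and ciphertext parameters needed when the quantised network is later evaluated under BFV with SEAL.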

Bibliographic Details
Main Authors: Legiest, Wouter; Turan, Furkan; Van Beirendonck, Michiel; D'Anvers, Jan-Pieter; Verbauwhede, Ingrid
Format: Conference Proceeding
Language: English
Subjects: Computer architecture; convolutional neural networks; Cryptography; fully homomorphic encryption; Libraries; Neural networks; privacy-preserving machine learning; quantisation; Quantization (signal); Seals; Training
Online Access: Request full text
DOI: 10.1109/IOLTS59296.2023.10224890
EISSN: 1942-9401
EISBN: 9798350341355
Published in: 2023 IEEE 29th International Symposium on On-Line Testing and Robust System Design (IOLTS), 2023, p. 1-3
Source: IEEE Xplore All Conference Series