
OPAF: Optimized Secure Two-Party Computation Protocols for Nonlinear Activation Functions in Recurrent Neural Network

Bibliographic Details
Published in: arXiv.org 2024-03
Main Authors: Qian, Feng; Xia, Zhihua; Xu, Zhifeng; Weng, Jiasi; Weng, Jian
Format: Article
Language: English
Subjects: Artificial neural networks; Cognitive tasks; Exponential functions; Linear functions; Machine learning; Neural networks; Privacy; Recurrent neural networks
Online Access: Get full text
container_title arXiv.org
creator Qian, Feng
Xia, Zhihua
Xu, Zhifeng
Weng, Jiasi
Weng, Jian
description Deep neural networks (DNNs) typically involve convolutions, pooling, and activation functions. Due to growing concern about privacy, privacy-preserving DNN inference has become an active research topic. The convolution and pooling operations can generally be supported by additively homomorphic encryption and secure comparison, but implementing activation functions securely is less straightforward when both accuracy and efficiency are required, especially for non-linear functions such as the exponential, sigmoid, and tanh. This paper focuses on the implementation of such non-linear functions in the two-party, semi-honest setting, for which SIRNN is the current state of the art. Unlike previous works, we propose improved implementations of these functions that exploit their intrinsic mathematical properties together with several small but effective tricks. First, we propose a novel and efficient protocol for the exponential function that uses a divide-and-conquer strategy in which most of the computation is performed locally. The exponential protocol is widely used in machine learning tasks such as Poisson regression, and is also a key component of the sigmoid and tanh functions. Next, we exploit the symmetry of sigmoid and tanh and fine-tune the inputs to reduce the number of 2PC building blocks, which lowers overhead and improves performance. As a result, we implement these functions with fewer fundamental building blocks. Comprehensive evaluations show that our protocols achieve state-of-the-art precision while reducing run-time by approximately 57%, 44%, and 42% for the exponential (with negative inputs only), sigmoid, and tanh functions, respectively.
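To make the two ideas in the abstract concrete, the sketch below illustrates, in plain (non-secure) Python, the mathematical identities that such two-party activation protocols can build on: the exponential of an additively shared value factors into locally computable pieces, and the symmetry sigmoid(x) = 1 - sigmoid(-x) together with tanh(x) = 2*sigmoid(2x) - 1 allows the exponential to be evaluated on a restricted (non-positive) range while reusing building blocks. This is only one simplified reading of the abstract, not OPAF's actual protocol; all function names are hypothetical, and no secret sharing or secure computation is performed.

```python
# Plaintext sketch (not OPAF's protocol) of identities the abstract alludes to.
# All names are hypothetical; no secure computation happens here.
import math


def exp_from_additive_shares(x0: float, x1: float) -> float:
    """exp(x) for x = x0 + x1 held as additive shares by two parties.
    Since exp(x0 + x1) = exp(x0) * exp(x1), each party can exponentiate its
    own share locally; in a real 2PC protocol only the final product of the
    two local results would need interaction (one secure multiplication).
    Here the product is taken in the clear purely for illustration."""
    local_p0 = math.exp(x0)  # party 0, local computation
    local_p1 = math.exp(x1)  # party 1, local computation
    return local_p0 * local_p1


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def sigmoid_neg_exp_only(x: float) -> float:
    """Evaluate sigmoid while calling exp on non-positive arguments only.
    The symmetry sigmoid(x) = 1 - sigmoid(-x) means positive inputs can be
    sign-flipped, the exponential sub-protocol run on a restricted negative
    range, and the result corrected afterwards."""
    if x <= 0.0:
        e = math.exp(x)  # exp evaluated only on x <= 0
        return e / (1.0 + e)
    return 1.0 - sigmoid_neg_exp_only(-x)


def tanh_via_sigmoid(x: float) -> float:
    """tanh(x) = 2 * sigmoid(2x) - 1, so tanh can reuse the sigmoid
    building blocks rather than introducing new ones."""
    return 2.0 * sigmoid_neg_exp_only(2.0 * x) - 1.0


if __name__ == "__main__":
    # Quick numerical check of the identities.
    assert abs(exp_from_additive_shares(1.3, -0.9) - math.exp(0.4)) < 1e-12
    assert abs(sigmoid_neg_exp_only(2.5) - sigmoid(2.5)) < 1e-12
    assert abs(tanh_via_sigmoid(0.7) - math.tanh(0.7)) < 1e-12
    print("identities hold")
```

In an actual protocol the quantities computed "locally" above would be held as secret shares, and the clear-text product and sign flip would each be replaced by the corresponding 2PC building block; the point of the identities is only that they shrink how many such blocks are needed.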
format article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-03
issn 2331-8422
language eng
recordid cdi_proquest_journals_2937126534
source Publicly Available Content Database
subjects Artificial neural networks
Cognitive tasks
Exponential functions
Linear functions
Machine learning
Neural networks
Privacy
Recurrent neural networks
title OPAF: Optimized Secure Two-Party Computation Protocols for Nonlinear Activation Functions in Recurrent Neural Network