ShadowNet: A Secure and Efficient On-device Model Inference System for Convolutional Neural Networks
Published in: | arXiv.org 2023-07 |
---|---|
Main Authors: | Sun, Zhichuang; Sun, Ruimin; Liu, Changming; Amrita Roy Chowdhury; Long, Lu; Jha, Somesh |
Format: | Article |
Language: | English |
Subjects: | Accelerators; Artificial intelligence; Electronic devices; Inference; Machine learning; Outsourcing; Privacy |
creator | Sun, Zhichuang; Sun, Ruimin; Liu, Changming; Amrita Roy Chowdhury; Long, Lu; Jha, Somesh |
description | With the increased usage of AI accelerators on mobile and edge devices, on-device machine learning (ML) is gaining popularity. Thousands of proprietary ML models are being deployed today on billions of untrusted devices. This raises serious security concerns about model privacy. However, protecting model privacy without losing access to the untrusted AI accelerators is a challenging problem. In this paper, we present a novel on-device model inference system, ShadowNet. ShadowNet protects the model privacy with Trusted Execution Environment (TEE) while securely outsourcing the heavy linear layers of the model to the untrusted hardware accelerators. ShadowNet achieves this by transforming the weights of the linear layers before outsourcing them and restoring the results inside the TEE. The non-linear layers are also kept secure inside the TEE. ShadowNet's design ensures efficient transformation of the weights and the subsequent restoration of the results. We build a ShadowNet prototype based on TensorFlow Lite and evaluate it on five popular CNNs, namely, MobileNet, ResNet-44, MiniVGG, ResNet-404, and YOLOv4-tiny. Our evaluation shows that ShadowNet achieves strong security guarantees with reasonable performance, offering a practical solution for secure on-device model inference. |
format | article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-07 |
issn | 2331-8422 |
language | eng |
source | Publicly Available Content (ProQuest) |
subjects | Accelerators; Artificial intelligence; Electronic devices; Inference; Machine learning; Outsourcing; Privacy |
title | ShadowNet: A Secure and Efficient On-device Model Inference System for Convolutional Neural Networks |
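The mechanism described in the abstract — transform the secret weights of a linear layer inside the TEE, let the untrusted accelerator do the heavy multiplication on the transformed weights, then restore the true result back inside the TEE — can be sketched for a single toy fully-connected layer. This is an illustrative simplification, not ShadowNet's actual scheme (the paper operates on convolution filters and adds further obfuscation); the per-channel scaling and channel permutation below are hypothetical stand-ins for the secret transformation:

```python
import random

random.seed(0)

def matvec(W, x):
    """Plain matrix-vector product: the 'heavy' linear computation."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

# Toy secret weight matrix (4 output channels, 6 inputs) and a layer input.
W = [[random.gauss(0, 1) for _ in range(6)] for _ in range(4)]
x = [random.gauss(0, 1) for _ in range(6)]

# --- Inside the TEE: transform the weights before outsourcing ---
# Secret per-channel scalars and a secret permutation of the output channels.
scales = [random.uniform(0.5, 2.0) for _ in range(4)]
perm = list(range(4))
random.shuffle(perm)
W_obf = [[scales[i] * w for w in W[i]] for i in perm]  # safe to export

# --- On the untrusted accelerator: heavy linear computation on W_obf ---
y_obf = matvec(W_obf, x)

# --- Back inside the TEE: undo the permutation and scaling ---
y = [0.0] * len(W)
for pos, i in enumerate(perm):
    y[i] = y_obf[pos] / scales[i]

# y now equals the true layer output W @ x.
assert all(abs(a - b) < 1e-9 for a, b in zip(y, matvec(W, x)))
```

The accelerator only ever sees `W_obf` and `y_obf`; recovering `W` from them would require the secret `scales` and `perm`, which never leave the TEE. The restoration step is cheap (one divide per output channel), matching the abstract's claim that the transformation and restoration are efficient relative to the outsourced linear work.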