Privacy-Preserving DNN Model Authorization against Model Theft and Feature Leakage
Main Authors: | |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | |
Online Access: | Request full text |
Summary: | Today's intelligent services are built on well-trained deep neural network (DNN) models, which usually require large private datasets and a high training cost. Model providers consequently cherish their pre-trained DNN models and distribute them only to authorized users. However, malicious users can steal these valuable models for abuse, illegal copying, and redistribution. Attackers can also extract private features even from authorized models, leaking parts of the training dataset. Both attacks violate privacy. Existing techniques from the security community attempt to avoid parameter leakage during model authorization but cannot solve these privacy issues sufficiently. In this paper, we propose a privacy-preserving model authorization approach, AgAuth, to resist the aforementioned privacy threats. We devise a novel scheme called Information-Agnostic Conversion (IAC) for the forwarding procedure to eliminate residual features in model parameters. Based on it, we then propose an Inference-on-Ciphertext (CiFer) mechanism for DNN inference, which comprises three stages in each forward pass. The Encrypt phase first converts the proprietary model parameters so that they exhibit a uniform distribution. The Forward stage performs the forwarding function without decryption on the authorized user's side; that is, this stage computes only over ciphertext. The Decrypt phase finally recovers the information-agnostic outputs into an informative output tensor for real-world services. In addition, we implement a prototype and conduct extensive experiments to evaluate its performance. The qualitative and quantitative results demonstrate that our solution, AgAuth, defends against model theft and feature leakage without accuracy loss or notable performance decrease. |
---|---|
ISSN: | 1938-1883 |
DOI: | 10.1109/ICC45855.2022.9839218 |
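The Encrypt/Forward/Decrypt flow described in the summary can be illustrated with a minimal sketch. This is not the paper's actual IAC scheme: the multiplicative masking with a random invertible matrix, the single linear layer, and all variable names are assumptions chosen only to show how inference can proceed over masked parameters and still recover the exact plaintext result after decryption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer y = W @ x, standing in for one DNN forward step.
W = rng.standard_normal((4, 8))   # proprietary model weights
x = rng.standard_normal(8)        # user input

# --- Encrypt (provider side): mask the weights with a secret random
# invertible matrix A. Only a toy stand-in for the paper's IAC conversion.
A = rng.standard_normal((4, 4))
W_enc = A @ W                     # the authorized user only ever sees W_enc

# --- Forward (authorized side): compute over the masked weights,
# without ever decrypting them.
y_enc = W_enc @ x

# --- Decrypt: recover the informative output with the provider-held secret.
y = np.linalg.inv(A) @ y_enc

# Masked inference matches plaintext inference exactly (no accuracy loss).
assert np.allclose(y, W @ x)
```

A Gaussian random matrix is invertible with probability 1, so decryption in this toy is exact; the paper's real scheme additionally shapes the masked parameters toward a uniform distribution so they carry no residual feature information.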