Privacy-Preserving Secure Inference for Cloud-Edge Collaboration Using Differential Privacy
Cloud-edge collaborative inference splits a deep neural network (DNN) into two parts that run cooperatively on resource-constrained edge devices and cloud servers, aiming to minimize inference latency and protect data privacy. However, even though the raw input data from edge devices are never directly exposed to the cloud, state-of-the-art attacks on collaborative inference can still reconstruct the raw private data from the intermediate outputs of the exposed local model, introducing serious privacy risks. In this paper, we propose CIS, a secure privacy-preserving inference framework for cloud-edge collaboration that adaptively partitions the network according to the dynamically changing network bandwidth and fully exploits the computational power of edge devices. CIS achieves differential privacy by adding carefully calibrated noise to the intermediate-layer feature maps offloaded to the cloud. To mitigate the accuracy loss introduced by this perturbation, the total privacy budget is allocated across convolution filters according to the rank of the feature maps they generate, which makes cloud-side inference robust to the perturbed data and thereby effectively balances the conflicting goals of privacy and utility. Finally, we build a real cloud-edge collaborative inference scenario to verify the effectiveness of CIS in terms of inference latency and model partitioning on resource-constrained edge devices, and we use a state-of-the-art reconstruction attack against collaborative inference to evaluate the practical effectiveness of the end-to-end privacy protection provided by CIS.
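To make the rank-weighted budget allocation concrete, the following is a minimal sketch of how a total privacy budget could be split across the channels of an intermediate feature map in proportion to each channel's rank, with Laplace noise calibrated to the resulting per-channel budget. The function name, the use of the Laplace mechanism, and the clipping-based sensitivity bound are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def rank_weighted_perturbation(feature_maps, total_epsilon, clip_value=1.0):
    """Perturb edge-side intermediate feature maps before offloading to the cloud.

    feature_maps: array of shape (C, H, W) produced by the edge-side sub-model.
    total_epsilon: overall differential-privacy budget to distribute across channels.
    """
    # Bound each activation so the per-element sensitivity is known.
    clipped = np.clip(feature_maps, -clip_value, clip_value)

    # Approximate the rank of each channel's H x W activation matrix.
    ranks = np.array(
        [np.linalg.matrix_rank(clipped[c]) for c in range(clipped.shape[0])],
        dtype=np.float64,
    )
    ranks = np.maximum(ranks, 1.0)  # avoid a zero budget for degenerate channels

    # Allocate the total budget proportionally to channel rank: higher-rank
    # (more informative) channels receive a larger share of epsilon and
    # therefore less noise.
    per_channel_eps = total_epsilon * ranks / ranks.sum()

    # L1 sensitivity per element is bounded by the clipping range.
    sensitivity = 2.0 * clip_value
    noisy = np.empty_like(clipped)
    for c, eps in enumerate(per_channel_eps):
        scale = sensitivity / eps  # Laplace scale b = sensitivity / epsilon
        noisy[c] = clipped[c] + np.random.laplace(0.0, scale, clipped[c].shape)
    return noisy
```

In this reading, channels whose feature maps carry more structure (higher rank) are perturbed less, which is one plausible way to preserve cloud-side inference accuracy under a fixed overall budget.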