CMStatistics 2022
B1261
Title: The challenges of training models with differential privacy
Authors: Soham De - DeepMind (United Kingdom) [presenting]
Abstract: Differential Privacy (DP) provides a formal privacy guarantee that prevents adversaries with access to a machine learning model from extracting information about individual training points. Differentially Private Stochastic Gradient Descent (DP-SGD), the most popular DP training method for deep learning, realizes this protection by injecting noise during training. However, prior work has found that DP-SGD often leads to a significant degradation in performance on standard benchmarks. We will first describe the challenges of achieving good performance under differential privacy guarantees. We will then discuss recent work showing that simple techniques for improving signal propagation and convergence in deep networks can significantly improve the performance of DP-SGD on large models.
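For context on the mechanism the abstract refers to, the sketch below illustrates a single generic DP-SGD update: each example's gradient is clipped to a fixed L2 norm and calibrated Gaussian noise is added to the aggregated gradient before the parameter step. This is a minimal NumPy illustration of the standard algorithm, not code from the talk; the function name and the hyperparameter values (`clip_norm`, `noise_multiplier`, `learning_rate`) are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, learning_rate=0.1, rng=None):
    """One DP-SGD update: clip each per-example gradient, sum,
    add calibrated Gaussian noise, average, then take a step."""
    rng = np.random.default_rng() if rng is None else rng
    batch_size = per_example_grads.shape[0]

    # Clip each per-example gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads.reshape(batch_size, -1),
                           axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale.reshape(
        batch_size, *([1] * (per_example_grads.ndim - 1)))

    # Add isotropic Gaussian noise with std = noise_multiplier * clip_norm
    # to the clipped gradient sum, then average over the batch.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=params.shape)
    noisy_mean_grad = noisy_sum / batch_size

    return params - learning_rate * noisy_mean_grad

# Toy usage: 8 examples, 5-dimensional parameter vector.
params = np.zeros(5)
grads = np.random.default_rng(0).normal(size=(8, 5))
params = dp_sgd_step(params, grads)
```

The per-example clipping and the added noise are the two sources of the performance degradation the abstract mentions: clipping biases the average gradient, and the injected noise scales with model size, which is why large models are particularly affected.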