Publication: Contribution to book/anthology/report/proceeding › Conference contribution in proceedings › Research › peer review
TY - GEN
T1 - Not All Noise Is Accounted Equally
T2 - IEEE International Workshop on MACHINE LEARNING FOR SIGNAL PROCESSING
AU - Dormann, Friedrich
AU - Frisk, Osvald
AU - Andersen, Lars Nørvang
AU - Pedersen, Christian Fischer
PY - 2021
Y1 - 2021
N2 - Learning often involves sensitive data and as such, privacy-preserving extensions to Stochastic Gradient Descent (SGD) and other machine learning algorithms have been developed using the definitions of Differential Privacy (DP). In differentially private SGD, the gradients computed at each training iteration are subject to two different types of noise: firstly, inherent sampling noise arising from the use of minibatches; secondly, additive Gaussian noise from the underlying mechanisms that introduce privacy. In this study, we show that these two types of noise are equivalent in their effect on the utility of private neural networks; however, they are not accounted for equally in the privacy budget. Given this observation, we propose a training paradigm that shifts the proportions of noise towards less inherent and more additive noise, such that more of the overall noise can be accounted for in the privacy budget. With this paradigm, we are able to improve on the state of the art in the privacy/utility tradeoff of private end-to-end CNNs.
AB - Learning often involves sensitive data and as such, privacy-preserving extensions to Stochastic Gradient Descent (SGD) and other machine learning algorithms have been developed using the definitions of Differential Privacy (DP). In differentially private SGD, the gradients computed at each training iteration are subject to two different types of noise: firstly, inherent sampling noise arising from the use of minibatches; secondly, additive Gaussian noise from the underlying mechanisms that introduce privacy. In this study, we show that these two types of noise are equivalent in their effect on the utility of private neural networks; however, they are not accounted for equally in the privacy budget. Given this observation, we propose a training paradigm that shifts the proportions of noise towards less inherent and more additive noise, such that more of the overall noise can be accounted for in the privacy budget. With this paradigm, we are able to improve on the state of the art in the privacy/utility tradeoff of private end-to-end CNNs.
KW - Deep Learning
KW - Differential Privacy
KW - Gradient Noise
KW - Privacy
KW - Stochastic Gradient Descent
U2 - 10.1109/MLSP52302.2021.9596307
DO - 10.1109/MLSP52302.2021.9596307
M3 - Article in proceedings
SN - 978-1-6654-1184-4
T3 - IEEE Workshop on Machine Learning for Signal Processing
BT - 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing, MLSP 2021
PB - IEEE
Y2 - 25 October 2021 through 28 October 2021
ER -