
Not All Noise Is Accounted Equally: How Differentially Private Learning Benefits From Large Sampling Rates

Publication: Contribution to book/anthology/report/proceedings › Article in proceedings › Research › peer review

Standard

Not All Noise Is Accounted Equally: How Differentially Private Learning Benefits From Large Sampling Rates. / Dormann, Friedrich; Frisk, Osvald; Andersen, Lars Nørvang et al.
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing, MLSP 2021. IEEE, 2021. (IEEE Workshop on Machine Learning for Signal Processing, Vol. 2021).


Harvard

Dormann, F, Frisk, O, Andersen, LN & Pedersen, CF 2021, Not All Noise Is Accounted Equally: How Differentially Private Learning Benefits From Large Sampling Rates. in 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing, MLSP 2021. IEEE, IEEE Workshop on Machine Learning for Signal Processing, vol. 2021, IEEE International Workshop on Machine Learning for Signal Processing, Gold Coast, Queensland, Australia, 25/10/2021. https://doi.org/10.1109/MLSP52302.2021.9596307

APA

Dormann, F., Frisk, O., Andersen, L. N., & Pedersen, C. F. (2021). Not All Noise Is Accounted Equally: How Differentially Private Learning Benefits From Large Sampling Rates. In 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing, MLSP 2021. IEEE. https://doi.org/10.1109/MLSP52302.2021.9596307

CBE

Dormann F, Frisk O, Andersen LN, Pedersen CF. 2021. Not All Noise Is Accounted Equally: How Differentially Private Learning Benefits From Large Sampling Rates. In 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing, MLSP 2021. IEEE. (IEEE Workshop on Machine Learning for Signal Processing, Vol. 2021). https://doi.org/10.1109/MLSP52302.2021.9596307

MLA

Dormann, Friedrich et al. "Not All Noise Is Accounted Equally: How Differentially Private Learning Benefits From Large Sampling Rates". 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing, MLSP 2021. IEEE. (IEEE Workshop on Machine Learning for Signal Processing, Vol. 2021). 2021. https://doi.org/10.1109/MLSP52302.2021.9596307

Vancouver

Dormann F, Frisk O, Andersen LN, Pedersen CF. Not All Noise Is Accounted Equally: How Differentially Private Learning Benefits From Large Sampling Rates. In 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing, MLSP 2021. IEEE. 2021. (IEEE Workshop on Machine Learning for Signal Processing, Vol. 2021). doi: 10.1109/MLSP52302.2021.9596307

Author

Dormann, Friedrich; Frisk, Osvald; Andersen, Lars Nørvang et al. / Not All Noise Is Accounted Equally: How Differentially Private Learning Benefits From Large Sampling Rates. 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing, MLSP 2021. IEEE, 2021. (IEEE Workshop on Machine Learning for Signal Processing, Vol. 2021).

Bibtex

@inproceedings{a288a1b7e9dd4d9986bad30f3f6a1ec4,
title = "Not All Noise Is Accounted Equally: How Differentially Private Learning Benefits From Large Sampling Rates",
abstract = "Learning often involves sensitive data and, as such, privacy-preserving extensions to Stochastic Gradient Descent (SGD) and other machine learning algorithms have been developed using the definitions of Differential Privacy (DP). In differentially private SGD, the gradients computed at each training iteration are subject to two different types of noise: first, the inherent sampling noise arising from the use of minibatches; second, the additive Gaussian noise from the underlying mechanisms that introduce privacy. In this study, we show that these two types of noise are equivalent in their effect on the utility of private neural networks; however, they are not accounted for equally in the privacy budget. Given this observation, we propose a training paradigm that shifts the proportions of noise towards less inherent and more additive noise, such that more of the overall noise can be accounted for in the privacy budget. With this paradigm, we are able to improve on the state of the art in the privacy/utility tradeoff of private end-to-end CNNs.",
keywords = "Deep Learning, Differential Privacy, Gradient Noise, Privacy, Stochastic Gradient Descent",
author = "Friedrich Dormann and Osvald Frisk and Andersen, {Lars N{\o}rvang} and Pedersen, {Christian Fischer}",
year = "2021",
doi = "10.1109/MLSP52302.2021.9596307",
language = "English",
isbn = "978-1-6654-1184-4",
series = "IEEE Workshop on Machine Learning for Signal Processing",
publisher = "IEEE",
booktitle = "2021 IEEE 31st International Workshop on Machine Learning for Signal Processing, MLSP 2021",
note = "IEEE International Workshop on Machine Learning for Signal Processing ; Conference date: 25-10-2021 through 28-10-2021",
url = "https://2021.ieeemlsp.org/",

}
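
For context, the abstract above refers to the standard DP-SGD gradient step, in which per-example gradients are clipped, averaged over the minibatch, and perturbed with additive Gaussian noise. The following is a minimal NumPy sketch of that step; the function and parameter names (dp_sgd_noisy_mean, clip_norm, noise_multiplier) are illustrative and not taken from the paper. It shows why a larger sampling rate helps: a bigger minibatch reduces the inherent sampling noise of the gradient mean, while the accounted additive noise per coordinate, noise_multiplier * clip_norm / batch_size, also shrinks as the batch grows.

import numpy as np

def dp_sgd_noisy_mean(per_example_grads, clip_norm, noise_multiplier, rng=None):
    # Illustrative sketch of a DP-SGD gradient step, not the paper's code:
    # clip each per-example gradient to L2 norm `clip_norm`, average over the
    # minibatch, and add isotropic Gaussian noise with standard deviation
    # noise_multiplier * clip_norm / batch_size (the noise that is accounted
    # for in the privacy budget).
    rng = np.random.default_rng() if rng is None else rng
    grads = np.asarray(per_example_grads, dtype=float)        # shape: (batch, dim)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    batch_size, dim = clipped.shape
    noise_std = noise_multiplier * clip_norm / batch_size
    noise = rng.normal(0.0, noise_std, size=dim)
    return clipped.mean(axis=0) + noise                       # noised mean gradient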

RIS

TY - GEN

T1 - Not All Noise Is Accounted Equally: How Differentially Private Learning Benefits From Large Sampling Rates

T2 - IEEE International Workshop on Machine Learning for Signal Processing

AU - Dormann, Friedrich

AU - Frisk, Osvald

AU - Andersen, Lars Nørvang

AU - Pedersen, Christian Fischer

PY - 2021

Y1 - 2021

N2 - Learning often involves sensitive data and, as such, privacy-preserving extensions to Stochastic Gradient Descent (SGD) and other machine learning algorithms have been developed using the definitions of Differential Privacy (DP). In differentially private SGD, the gradients computed at each training iteration are subject to two different types of noise: first, the inherent sampling noise arising from the use of minibatches; second, the additive Gaussian noise from the underlying mechanisms that introduce privacy. In this study, we show that these two types of noise are equivalent in their effect on the utility of private neural networks; however, they are not accounted for equally in the privacy budget. Given this observation, we propose a training paradigm that shifts the proportions of noise towards less inherent and more additive noise, such that more of the overall noise can be accounted for in the privacy budget. With this paradigm, we are able to improve on the state of the art in the privacy/utility tradeoff of private end-to-end CNNs.

AB - Learning often involves sensitive data and, as such, privacy-preserving extensions to Stochastic Gradient Descent (SGD) and other machine learning algorithms have been developed using the definitions of Differential Privacy (DP). In differentially private SGD, the gradients computed at each training iteration are subject to two different types of noise: first, the inherent sampling noise arising from the use of minibatches; second, the additive Gaussian noise from the underlying mechanisms that introduce privacy. In this study, we show that these two types of noise are equivalent in their effect on the utility of private neural networks; however, they are not accounted for equally in the privacy budget. Given this observation, we propose a training paradigm that shifts the proportions of noise towards less inherent and more additive noise, such that more of the overall noise can be accounted for in the privacy budget. With this paradigm, we are able to improve on the state of the art in the privacy/utility tradeoff of private end-to-end CNNs.

KW - Deep Learning

KW - Differential Privacy

KW - Gradient Noise

KW - Privacy

KW - Stochastic Gradient Descent

U2 - 10.1109/MLSP52302.2021.9596307

DO - 10.1109/MLSP52302.2021.9596307

M3 - Article in proceedings

SN - 978-1-6654-1184-4

T3 - IEEE Workshop on Machine Learning for Signal Processing

BT - 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing, MLSP 2021

PB - IEEE

Y2 - 25 October 2021 through 28 October 2021

ER -