Private Post-GAN Boosting

Abstract

"Differentially private GANs have proven to be a promising approach for generating realistic synthetic data without compromising the privacy of individuals. Due to the privacy-protective noise introduced in the training, the convergence of GANs becomes even more elusive, which often leads to poor utility in the output generator at the end of training. We propose Private post-GAN boosting (Private PGB), a differentially private method that combines samples produced by the sequence of generators obtained during GAN training to create a high-quality synthetic dataset. To that end, our method leverages the Private Multiplicative Weights method (Hardt and Rothblum, 2010) to reweight generated samples. We evaluate Private PGB on two dimensional toy data, MNIST images, US Census data and a standard machine learning prediction task. Our experiments show that Private PGB improves upon a standard private GAN approach across a collection of quality measures. We also provide a non-private variant of PGB that improves the data quality of standard GAN training." (Author's abstract, IAB-Doku)
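The core idea the abstract describes, pooling samples from all generator checkpoints and reweighting them with a multiplicative-weights update, can be illustrated with a minimal non-private sketch. This is not the authors' implementation: the `losses` matrix (one row per update round, one column per pooled candidate sample, where lower means the sample looks more realistic to a discriminator) and the learning rate `eta` are hypothetical stand-ins, and the actual Private PGB method adds calibrated noise for differential privacy.

```python
import numpy as np

def mw_reweight(losses, eta=0.5):
    """Multiplicative-weights reweighting over pooled candidate samples.

    losses: array of shape (rounds, n_samples); entry [t, i] is the
            loss assigned to sample i in round t (lower = better).
    Returns a probability distribution over the n_samples candidates.
    """
    n = losses.shape[1]
    w = np.full(n, 1.0 / n)               # start from the uniform distribution
    for round_losses in losses:
        w = w * np.exp(-eta * round_losses)  # multiplicative update
        w /= w.sum()                         # renormalize to a distribution
    return w
```

Samples that a discriminator consistently penalizes lose weight exponentially fast, so the final mixture concentrates on the higher-quality candidates drawn across the training run, rather than relying on the last (possibly non-converged) generator alone.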

Cite article

Neunhoeffer, M., Wu, Z. & Dwork, C. (2021): Private Post-GAN Boosting. In: ICLR (Ed.) (2021): International Conference on Learning Representations 2021, pp. 1-19.
