
A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization

Ashwinee Panda*

Xinyu Tang*

Vikash Sehwag

Saeed Mahloujifar*

Prateek Mittal*

* External authors

ICML, 2024

Abstract

An open problem in differentially private deep learning is hyperparameter optimization (HPO). DP-SGD introduces new hyperparameters and complicates existing ones, forcing researchers to painstakingly tune hyperparameters with hundreds of trials, which in turn makes it impossible to account for the privacy cost of HPO without destroying the utility. We propose an adaptive HPO method that uses cheap trials (in terms of privacy cost and runtime) to estimate optimal hyperparameters and scales them up. We obtain state-of-the-art performance on 22 benchmark tasks, across computer vision and natural language processing, across pretraining and finetuning, across architectures and a wide range of…
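The abstract describes the recipe at a high level: run cheap trials (low privacy cost, short runtime) to locate good hyperparameters, then transfer them to the expensive full run via a linear scaling rule. The sketch below illustrates that workflow; run_dp_sgd_trial is a hypothetical stand-in for a real DP-SGD training run (e.g., with Opacus), and scaling the learning rate linearly with batch size is an illustrative assumption, not necessarily the exact rule derived in the paper.

    # Hedged sketch of privacy-aware adaptive HPO via cheap trials.
    import numpy as np

    def run_dp_sgd_trial(lr, batch_size, noise_multiplier, steps, rng):
        """Hypothetical stand-in for a short DP-SGD run; returns a synthetic validation loss."""
        # The synthetic loss is convex in log(lr), with trial noise that shrinks
        # as the number of steps grows. noise_multiplier is accepted but unused
        # in this stand-in (illustration only, not a real model).
        optimum = np.log(0.1 * batch_size / 512)
        return (np.log(lr) - optimum) ** 2 + rng.normal(scale=1.0 / np.sqrt(steps))

    def cheap_hpo_then_scale(candidate_lrs, small_bs, full_bs, noise, rng):
        # 1) Cheap trials: small batch and few steps, so each trial spends
        #    little privacy budget and wall-clock time.
        losses = [run_dp_sgd_trial(lr, small_bs, noise, steps=50, rng=rng)
                  for lr in candidate_lrs]
        best_small_lr = candidate_lrs[int(np.argmin(losses))]

        # 2) Scale the estimated optimum up to the full configuration.
        #    ASSUMPTION: learning rate scales linearly with batch size,
        #    in the spirit of the classic linear scaling rule for SGD.
        best_full_lr = best_small_lr * (full_bs / small_bs)
        return best_small_lr, best_full_lr

    rng = np.random.default_rng(0)
    candidate_lrs = np.logspace(-3, 0, 8)   # learning rates from 1e-3 to 1
    small_lr, full_lr = cheap_hpo_then_scale(candidate_lrs, small_bs=512,
                                             full_bs=4096, noise=1.0, rng=rng)
    print(f"cheap-trial optimum: {small_lr:.4f} -> scaled-up estimate: {full_lr:.4f}")

The key design point this sketch tries to convey is that the expensive, privacy-consuming search happens only in the cheap regime; the full-scale run inherits its hyperparameters through the scaling step rather than through additional tuning.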

Related Publications

Finding a needle in a haystack: A Black-Box Approach to Invisible Watermark Detection

ECCV, 2024
Minzhou Pan*, Zhenting Wang, Xin Dong, Vikash Sehwag, Lingjuan Lyu, Xue Lin*

In this paper, we propose WaterMark Detection (WMD), the first invisible watermark detection method under a black-box and annotation-free setting. WMD is capable of detecting arbitrary watermarks within a given reference dataset using a clean, non-watermarked dataset as a ref…

How to Trace Latent Generative Model Generated Images without Artificial Watermark?

ICML, 2024
Zhenting Wang, Vikash Sehwag, Chen Chen, Lingjuan Lyu, Dimitris N. Metaxas*, Shiqing Ma*

Latent generative models (e.g., Stable Diffusion) have become more and more popular, but concerns have arisen regarding potential misuse related to images generated by these models. It is, therefore, necessary to analyze the origin of images by inferring if a particular imag…

Differentially Private Image Classification by Learning Priors from Random Processes

NeurIPS, 2023
Xinyu Tang*, Ashwinee Panda*, Vikash Sehwag, Prateek Mittal*

In privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) performs worse than SGD due to per-sample gradient clipping and noise addition. A recent focus in private learning research is improving the performance of DP-SGD on private da…
