Efficiently Computing Similarities to Private Datasets
- Arturs Backurs
- Zinan Lin
- Sepideh Mahabadi
- Sandeep Silwal
- Jakub Tarnawski
Many methods in differentially private model training rely on computing the similarity between a query point (such as public or synthetic data) and private data. We abstract out this common subroutine and study the following fundamental algorithmic problem: Given a similarity function \(f\) and a large high-dimensional private dataset \(X \subset \mathbb{R}^d\), output a differentially private (DP) data structure which approximates \(\sum_{x\in X} f(x,y)\) for any query \(y\). We consider the cases where \(f\) is a kernel function, such as \(f(x,y)=e^{-\|x-y\|_2^2/\sigma^2}\) (also known as DP kernel density estimation), or a distance function, such as \(f(x,y)=\|x-y\|_2\), among others.
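To make the problem concrete, here is a minimal Python sketch of one standard baseline for DP kernel density estimation with the Gaussian kernel: approximate the kernel with random Fourier features and release the aggregated feature vector via the Gaussian mechanism, so that all subsequent queries are post-processing and incur no further privacy cost. The class name and parameter choices are illustrative assumptions; this is a generic baseline, not the improved algorithms of the paper.

```python
import numpy as np

class DPKernelSumRFF:
    """Sketch: (eps, delta)-DP estimator of sum_x exp(-||x - y||^2 / sigma^2)
    via random Fourier features + the Gaussian mechanism.
    Illustrative baseline only (assumes eps <= 1 for the noise calibration)."""

    def __init__(self, X, sigma, eps, delta, num_features=1024, seed=0):
        rng = np.random.default_rng(seed)  # public randomness for the features
        d = X.shape[1]
        self.D = num_features
        # Fourier features for exp(-||x - y||^2 / sigma^2):
        # frequencies w ~ N(0, (2 / sigma^2) I), phases b ~ U[0, 2*pi).
        self.W = rng.normal(scale=np.sqrt(2.0) / sigma, size=(self.D, d))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=self.D)
        phi = np.sqrt(2.0 / self.D) * np.cos(X @ self.W.T + self.b)  # (n, D)
        z = phi.sum(axis=0)
        # Each point's feature vector has L2 norm <= sqrt(2), so the L2
        # sensitivity of z under add/remove of one point is sqrt(2).
        noise_scale = np.sqrt(2.0) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        noise_rng = np.random.default_rng()  # fresh secret randomness for DP noise
        self.z = z + noise_rng.normal(scale=noise_scale, size=self.D)

    def query(self, y):
        """Approximate sum_{x in X} exp(-||x - y||^2 / sigma^2); pure
        post-processing of the noisy sketch, so arbitrarily many queries
        are allowed."""
        phi_y = np.sqrt(2.0 / self.D) * np.cos(self.W @ y + self.b)
        return float(phi_y @ self.z)
```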
Our theoretical results improve upon prior work and give better privacy-utility trade-offs as well as faster query times for a wide range of kernels and distance functions. The unifying approach behind our results is leveraging "low-dimensional structure" present in the specific functions \(f\) that we study, using tools such as provable dimensionality reduction, approximation theory, and one-dimensional decompositions of the functions. Our algorithms empirically exhibit improved query times and accuracy over the prior state of the art. We also present an application to DP classification. Our experiments demonstrate that the simple approach of classifying based on average similarity to private data is orders of magnitude faster than prior DP-SGD-based approaches at comparable accuracy.
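To illustrate the classification-by-average-similarity idea, here is a hedged sketch built on the `DPKernelSumRFF` class above. The helper name is hypothetical, and two simplifying assumptions are made explicit in the comments: per-class datasets are disjoint (so parallel composition applies) and class sizes are treated as public.

```python
import numpy as np

def dp_similarity_classifier(class_datasets, sigma, eps, delta):
    """Return a classifier labeling y by the class with the highest
    DP-estimated average kernel similarity to that class's private data.

    Assumptions (for this sketch): per-class datasets are disjoint, so by
    parallel composition each class may spend the full (eps, delta) budget;
    class sizes are treated as public (otherwise they must be privatized)."""
    sketches = {c: DPKernelSumRFF(Xc, sigma, eps, delta)
                for c, Xc in class_datasets.items()}
    sizes = {c: len(Xc) for c, Xc in class_datasets.items()}

    def classify(y):
        # Average similarity = (DP kernel sum) / (public class size).
        return max(sketches, key=lambda c: sketches[c].query(y) / sizes[c])

    return classify

# Hypothetical usage with placeholder data:
# clf = dp_similarity_classifier({0: X_class0, 1: X_class1},
#                                sigma=1.0, eps=1.0, delta=1e-5)
# label = clf(y)
```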