Analyzing Information Leakage of Updates to Natural Language Models
- Santiago Zanella-Béguelin,
- Lukas Wutschitz,
- Shruti Tople,
- Victor Ruehle,
- Andrew Paverd,
- Olga Ohrimenko,
- Boris Köpf,
- Marc Brockschmidt
ACM Conference on Computer and Communications Security (CCS) | Published by ACM
To continuously improve quality and reflect changes in data, machine learning applications have to regularly retrain and update their core models.
We show that a differential analysis of language model snapshots before and after an update can reveal a surprising amount of detailed information about changes in the training data.
We propose two new metrics—differential score and differential rank—for analyzing the leakage due to updates of natural language models, and we use these metrics to measure leakage across models trained on a range of datasets, training methods, and configurations.
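To make the two metrics concrete, the sketch below is a minimal illustration under an assumed interface, not the authors' implementation: each model snapshot is treated as a callable that maps a token context to a next-token probability distribution. The differential score of a phrase sums the per-token probability changes between the two snapshots, and its differential rank is its position among candidate phrases ordered by that score.

```python
def differential_score(tokens, model_old, model_new, context=()):
    """Sum of per-token probability changes between two model snapshots.

    Assumption: `model_old` and `model_new` map a context (a tuple of
    tokens) to a dict of next-token probabilities. This interface is a
    placeholder for illustration, not the paper's code.
    """
    score, ctx = 0.0, tuple(context)
    for tok in tokens:
        # A token that became much more likely after the update
        # contributes a large positive term to the score.
        score += model_new(ctx).get(tok, 0.0) - model_old(ctx).get(tok, 0.0)
        ctx += (tok,)
    return score


def differential_rank(target, candidates, model_old, model_new):
    """Number of candidate phrases whose differential score exceeds the
    target's; rank 0 means the target's probability moved the most."""
    t = differential_score(target, model_old, model_new)
    return sum(differential_score(c, model_old, model_new) > t
               for c in candidates)
```

Under this reading, a low differential rank for a phrase signals that the update made it disproportionately more likely, hinting that related text was added to the training data.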
We discuss the privacy implications of our findings, propose mitigation strategies, and evaluate their effectiveness.