Authors
Matt Fredrikson, Somesh Jha, Thomas Ristenpart
Publication date
2015/10/12
Book
Proceedings of the 22nd ACM SIGSAC conference on computer and communications security
Pages
1322-1333
Description
Machine-learning (ML) algorithms are increasingly utilized in privacy-sensitive applications such as predicting lifestyle choices, making medical diagnoses, and performing facial recognition. In a model inversion attack, recently introduced in a case study of linear classifiers in personalized medicine by Fredrikson et al., adversarial access to an ML model is abused to learn sensitive genomic information about individuals. Whether model inversion attacks apply to settings outside theirs, however, is unknown. We develop a new class of model inversion attack that exploits confidence values revealed along with predictions. Our new attacks are applicable in a variety of settings, and we explore two in depth: decision trees for lifestyle surveys as used on machine-learning-as-a-service systems and neural networks for facial recognition. In both cases confidence values are revealed to those with the ability to make prediction queries to …
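The core idea the abstract describes is that confidence values returned alongside predictions give an adversary a signal to climb: starting from a blank input, repeatedly perturb it to raise the target class's confidence until a class-representative input emerges. A minimal sketch of that idea, using a toy softmax model in place of the real target and finite-difference gradient estimation over black-box confidence queries (the specific model, weights, and hyperparameters here are illustrative assumptions, not the paper's):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy stand-in for a deployed model: fixed weights for a
# 4-feature, 2-class softmax classifier (illustrative values).
W = np.array([[ 2.0, -1.0],
              [-1.0,  2.0],
              [ 1.5,  0.5],
              [-0.5,  1.0]])

def predict_confidences(x):
    """The adversary's only interface: per-class confidences for query x."""
    return softmax(x @ W)

def invert(target_class, steps=200, lr=0.5, eps=1e-4):
    """Reconstruct an input representative of target_class by ascending
    its confidence, estimating gradients with finite differences so that
    only prediction queries (not model internals) are used."""
    x = np.zeros(W.shape[0])
    for _ in range(steps):
        base = predict_confidences(x)[target_class]
        grad = np.zeros_like(x)
        for i in range(len(x)):
            xp = x.copy()
            xp[i] += eps
            grad[i] = (predict_confidences(xp)[target_class] - base) / eps
        x += lr * grad
        x = np.clip(x, -3.0, 3.0)  # keep the reconstruction in a plausible range
    return x

recovered = invert(target_class=0)
print(predict_confidences(recovered)[0])  # confidence for class 0, near 1.0
```

The paper's facial-recognition attack uses true gradient descent against the network; the finite-difference loop above is a query-only approximation of the same hill-climbing principle, which is why withholding or coarsening confidence values is a natural countermeasure.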