Papers
arxiv:2408.05131

Range Membership Inference Attacks

Published on Aug 9, 2024
Abstract

Range membership inference attacks (RaMIAs) are introduced to measure privacy leakage more accurately by testing whether a model was trained on any data point within a specified range, rather than only on exact matches.

AI-generated summary

Machine learning models can leak private information about their training data. The standard methods for measuring this privacy risk, based on membership inference attacks (MIAs), only check whether a given data point exactly matches a training point, overlooking that similar or partially overlapping memorized data can reveal the same private information. To address this issue, we introduce the class of range membership inference attacks (RaMIAs), which test whether the model was trained on any data in a specified range (defined based on the semantics of privacy). We formulate the RaMIA game and design a principled statistical test for its composite hypotheses. We show that RaMIAs can capture privacy loss more accurately and comprehensively than MIAs on various types of data, such as tabular, image, and language data. RaMIAs pave the way for more comprehensive and meaningful privacy auditing of machine learning algorithms.
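The core idea can be sketched in code. The paper designs a principled statistical test for the composite hypotheses; the snippet below is only a minimal illustration of the general recipe, assuming a simple loss-based per-point membership score and a hypothetical trimmed-mean aggregation over points sampled from the query range (so that a few strongly member-like points dominate the decision). Function names and the aggregation rule are illustrative, not the paper's exact method.

```python
import numpy as np

def point_mia_score(loss: float) -> float:
    """A common MIA heuristic: lower model loss on a point suggests membership."""
    return -loss

def range_mia_score(losses_in_range, top_frac: float = 0.25) -> float:
    """Aggregate per-point MIA scores over samples drawn from a query range.

    Illustrative aggregation (an assumption, not the paper's test): average
    the top fraction of scores, so one memorized point inside the range can
    drive the range-level score up even if most sampled points are non-members.
    """
    scores = np.sort([point_mia_score(l) for l in losses_in_range])[::-1]
    k = max(1, int(top_frac * len(scores)))
    return float(np.mean(scores[:k]))

def ramia_decision(losses_in_range, threshold: float) -> bool:
    """Predict 'the range contains a training point' if the score clears a threshold."""
    return range_mia_score(losses_in_range) > threshold
```

A range containing one low-loss (likely memorized) point then scores higher than a range of uniformly high-loss points, which is exactly the leakage signal an exact-match MIA would miss when the query point is only similar to, not identical to, a training point.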

