SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models

CISPA Helmholtz Center for Information Security

Abstract

We present SecurityNet as a first step towards evaluating the security and privacy vulnerabilities of ML models on public models.

While advanced machine learning (ML) models are deployed in numerous real-world applications, previous works demonstrate that these models have security and privacy vulnerabilities. A large body of empirical research has been conducted in this field. However, most of the experiments are performed on target ML models trained by the security researchers themselves. Due to the high computational cost of training advanced models with complex architectures, researchers generally choose to train a few target models using relatively simple architectures on typical experiment datasets.

We argue that to understand ML models' vulnerabilities comprehensively, experiments should be performed on a large set of models trained for various purposes (not just the purpose of evaluating ML attacks and defenses). To this end, we propose using publicly available models with weights from the Internet (public models) for evaluating attacks and defenses on ML models. We establish a database, namely SecurityNet, containing 910 annotated image classification models. We then analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection, on these public models. Our evaluation empirically shows that the performance of these attacks/defenses can vary significantly on public models compared to self-trained models. We advocate that researchers perform experiments on public models to better demonstrate the effectiveness of their proposed methods in the future.

Database

We present a database containing publicly available models with weights. We focus on one of the most popular machine learning tasks, image classification, as it is also typically used to demonstrate the effectiveness of attacks and defenses on ML models. An overview of the database statistics is presented below.

Figure 1. SecurityNet statistics
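
To give a concrete feel for what an annotated entry might look like, the Python sketch below shows one possible record layout. The field names and example values are our illustrative assumptions for exposition, not the actual SecurityNet schema.

from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of what an annotated SecurityNet entry might contain.
# Field names and values are assumptions for exposition, not the real schema.
@dataclass
class ModelRecord:
    model_id: str            # unique identifier within the database
    architecture: str        # e.g. "resnet50", "vit_base_patch16_224"
    dataset: str             # training dataset, e.g. "ImageNet-1k", "CUB-200-2011"
    num_classes: int         # number of output classes
    top1_accuracy: float     # reported top-1 accuracy on the target task (%)
    source_url: Optional[str] = None  # where the public weights were obtained

# Example record (values are made up for illustration only).
example = ModelRecord(
    model_id="resnet50_imagenet_001",
    architecture="resnet50",
    dataset="ImageNet-1k",
    num_classes=1000,
    top1_accuracy=76.1,
)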

Evaluations

Thanks to SecurityNet, we can perform an extensive evaluation of model stealing, membership inference, and backdoor detection on a large set of public models, which, to the best of our knowledge, has not been done before. Our analyses confirm some results from previous works, but on a much larger scale, reveal new insights, and show that some previous results obtained on researchers' self-trained models can vary on public models.

We find that model stealing attacks can perform especially poorly on certain datasets, such as CUB-200-2011, compared to target models (with the same architecture) trained on other datasets. Furthermore, we demonstrate that model stealing performance negatively correlates with the target model's performance on its original task and is too low to be effective on some modern high-performing models.

Figure 2. Model stealing performance across different datasets
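
As background for these results, the sketch below outlines a generic learning-based model stealing loop in which a surrogate is trained to imitate a black-box victim's output probabilities. The victim, surrogate, and query_loader objects are placeholders the attacker would supply; this is a minimal sketch of the general technique, not the exact attack configuration evaluated in the paper.

import torch
import torch.nn.functional as F
from torch import nn, optim

# Minimal model stealing sketch: the attacker queries the victim with its own
# (possibly unlabeled) data and trains a surrogate on the victim's soft labels.
def steal(victim: nn.Module, surrogate: nn.Module, query_loader, epochs: int = 10,
          device: str = "cuda" if torch.cuda.is_available() else "cpu"):
    victim.to(device).eval()
    surrogate.to(device).train()
    optimizer = optim.SGD(surrogate.parameters(), lr=0.01, momentum=0.9)

    for _ in range(epochs):
        for images, _ in query_loader:          # ground-truth labels are unused
            images = images.to(device)
            with torch.no_grad():
                teacher_probs = F.softmax(victim(images), dim=1)   # victim's soft labels
            student_log_probs = F.log_softmax(surrogate(images), dim=1)
            # Train the surrogate to imitate the victim's output distribution.
            loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return surrogate

The attack succeeds when the surrogate's accuracy on the victim's task approaches the victim's own; the finding above is that this gap remains large for high-performing public models.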

As for membership inference, we make a similar observation, as shown in previous works, that the attack performance positively correlates with the victim model's overfitting level. Additionally, we find that methods performing well on typical experiment datasets do not guarantee similar performance on more difficult datasets. In contrast to previous works' results, the MLP-based attack performs differently on models trained with data that contains a large number of classes (e.g., ImageNet-1k) when using different input methods.

Figure 3. Membership inference performance across different datasets
Figure 4. Membership inference performance across different attack methods
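
For context on the attacks compared above, the sketch below shows a generic MLP-based membership inference attack that takes the target model's top-k sorted posteriors as input, one common input method for models with many classes. Shadow-model training to produce member/non-member labels is assumed to happen elsewhere; this is a hypothetical sketch, not the paper's exact implementation.

import torch
import torch.nn.functional as F
from torch import nn

# Generic MLP-based membership inference attack operating on sorted top-k posteriors.
class AttackMLP(nn.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(
            nn.Linear(k, 64), nn.ReLU(),
            nn.Linear(64, 2),               # member vs. non-member
        )

    def forward(self, posteriors: torch.Tensor) -> torch.Tensor:
        # Keep only the k largest posteriors, sorted descending; this makes the
        # input size fixed regardless of how many classes the target model has.
        topk, _ = posteriors.topk(self.k, dim=1)
        return self.net(topk)

def attack_features(target_model: nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Query the target model and return its softmax posteriors."""
    target_model.eval()
    with torch.no_grad():
        return F.softmax(target_model(images), dim=1)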

Please refer to the paper for more detailed analyses, including results on benchmark vs. security models and correlations with model metadata.

BibTeX

@inproceedings{ZLYHBFZ24,
      author = {Boyang Zhang and Zheng Li and Ziqing Yang and Xinlei He and Michael Backes and Mario Fritz and Yang Zhang},
      title = {{SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models}},
      booktitle = {{USENIX Security Symposium (USENIX Security)}},
      year = {2024}
}