Membership Inference Attacks and Defenses in Machine Learning: A Comprehensive Survey
