Defense Against Adversarial Attacks by Low-level Image Transformations: A Study in International Journal of Intelligent Systems
This research paper, titled "Defense Against Adversarial Attacks by Low-level Image Transformations" and published in the International Journal of Intelligent Systems (Volume 35, Issue 10, Pages 1453-1466, 2020), explores low-level image transformations as a defense mechanism against adversarial attacks on deep learning models. The authors, Zhaoxia Yin, Hua Wang, Jie Wang, Jin Tang, and Wenzhong Wang, evaluate how various input transformations affect model robustness to adversarial examples, analyzing each technique's impact on model performance under attack. By showing that such low-level preprocessing can blunt adversarial perturbations, the study contributes a practical defense approach to the field of adversarial robustness. The paper is available from the International Journal of Intelligent Systems under DOI 10.1002/int.22258, and it is a useful resource for researchers and practitioners in deep learning, adversarial robustness, and computer vision.
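This summary does not list the paper's specific transformations, so as an illustration only, here is a minimal sketch of one widely used low-level input transformation, bit-depth reduction, applied as a preprocessing defense. The function name and parameters are hypothetical, not taken from the paper:

```python
import numpy as np

def reduce_bit_depth(image: np.ndarray, bits: int = 4) -> np.ndarray:
    """Quantize a uint8 image to `bits` bits of intensity per channel.

    Coarse quantization discards the small pixel-level perturbations that
    many adversarial attacks rely on, at a modest cost in image fidelity.
    (Illustrative example; not the paper's exact method.)
    """
    levels = 2 ** bits
    # Map [0, 255] down to {0, ..., levels - 1} ...
    quantized = np.floor(image.astype(np.float64) / 256.0 * levels)
    # ... then stretch back to the [0, 255] range for the classifier.
    return (quantized * (255.0 / (levels - 1))).astype(np.uint8)

# Usage: transform the input before feeding it to the model, e.g.
# prediction = model(reduce_bit_depth(adversarial_image))
```

In a defense pipeline of this kind, the transformation is applied to every input at inference time, so the classifier only ever sees quantized images and the attacker's fine-grained perturbation is partially destroyed.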