Physical Adversarial Attacks on Object Detection: A Study of Adversarial Patches and Camouflage
Deep neural networks (DNNs) have been widely deployed in production systems and daily life thanks to their outstanding performance, yet their vulnerability to adversarial examples poses significant security risks to many applications, which makes research on adversarial attacks highly significant. Current work, however, focuses mainly on digital attacks against image classification, which falls short of the requirements of practical applications. This paper therefore studies physical adversarial attacks on object detection, exploring the two mainstream physical realizations: adversarial patches and adversarial camouflage.
First, existing adversarial patch generation methods ignore physical-realization constraints when optimizing in digital space. To address this, this paper proposes a method for generating visually natural adversarial patches. Against the suspicious appearance of conventional patches, the method uses style transfer to disguise the patch as an object semantically related to the target object's context, improving its concealment. To account for non-planar object surfaces and variations in imaging viewpoint and distance, the target object is modeled in 3D, and neural rendering is used to simulate attaching the patch to the object in the physical world, bridging the gap between digital space and the physical world. Building on this method, the paper also designs a tampering attack that misleads the detector into outputting a preset target category, making the attack output more plausible and broadening its application prospects. Extensive comparative and ablation experiments show that the generated patches are both natural in appearance and highly effective as attacks.
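The following is a minimal sketch (not the paper's code) of how such a joint objective could be optimized in PyTorch: a detection-suppression term and a style-transfer term are minimized together, with a differentiable renderer simulating the patch's placement on the 3D target. The names `render` (a differentiable neural renderer that pastes the patch onto the target under sampled poses), `detector` (returning the detector's true-class confidence), and `extract_features` (e.g., VGG features for the style loss) are hypothetical stand-ins for the components the abstract describes.

```python
import torch
import torch.nn.functional as F

def gram(x):
    # Gram matrix of a feature map, as in standard neural style transfer.
    b, c, h, w = x.shape
    x = x.view(b, c, h * w)
    return x @ x.transpose(1, 2) / (c * h * w)

def optimize_patch(patch, style_img, render, detector, extract_features,
                   steps=1000, lr=0.01, w_det=10.0, w_style=1.0):
    # render, detector, extract_features are hypothetical callables standing
    # in for the neural renderer, the source detector, and a style network.
    patch = patch.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)
    with torch.no_grad():
        style_targets = [gram(f) for f in extract_features(style_img)]
    for _ in range(steps):
        scene = render(patch)              # simulate physical placement in 3D
        det_loss = detector(scene).mean()  # true-class confidence: suppress it
        style_loss = sum(F.mse_loss(gram(f), t)
                         for f, t in zip(extract_features(patch), style_targets))
        loss = w_det * det_loss + w_style * style_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)            # keep pixels in a printable range
    return patch.detach()
```

For the tampering variant, the detection term would instead reward the preset target category, e.g., by minimizing its negative log-probability rather than the true class's confidence.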
Second, physical attack methods for object detection typically take the source model's final output as the attack objective, which is prone to overfitting and degrades attack transferability. To address this, this paper proposes a novel multi-scale feature-aware adversarial camouflage generation method. The method first attributes the target object's location and category information to features at different scales, assigning each scale an attribution score whose magnitude and polarity reflect that feature's importance. The multi-scale features are then weighted by these scores and taken as the attack objective. Finally, adversarial camouflage with strong attack capability and transferability is obtained by optimizing in the direction opposite to the detector's training objective. Extensive experiments in both digital and physical spaces verify that the proposed method is highly competitive with current mainstream adversarial attack algorithms.
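A minimal sketch of the multi-scale weighting idea, assuming a PyTorch detector: gradients of the detection loss with respect to each feature scale serve as attribution scores (their sign gives the polarity), the features are weighted by those scores, and the camouflage texture is updated to ascend the resulting objective, i.e., opposite to the detector's training direction. `render`, `backbone_features`, and `head_loss` are hypothetical stand-ins, not the paper's actual interfaces.

```python
import torch

def camouflage_step(texture, render, backbone_features, head_loss, opt):
    scene = render(texture)           # paint the camouflage onto the 3D target
    feats = backbone_features(scene)  # list of feature maps at several scales
    det_loss = head_loss(feats)       # detector's usual training loss
    # Attribution: gradient of the detection loss w.r.t. each scale; magnitude
    # and sign indicate how much, and with which polarity, a scale matters.
    scores = torch.autograd.grad(det_loss, feats, retain_graph=True)
    # Weight features by their (detached) scores and ascend the result,
    # i.e., optimize opposite to the detector's training direction.
    weighted = sum((s.detach() * f).sum() for s, f in zip(scores, feats))
    loss = -weighted
    opt.zero_grad()
    loss.backward()
    opt.step()
    texture.data.clamp_(0, 1)         # keep the texture printable
    return loss.item()
```

In a full pipeline, `texture` would be a leaf tensor with `requires_grad=True` wrapped in an optimizer, and this step would be repeated over many rendered viewpoints so the camouflage generalizes across poses rather than overfitting a single view.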