Incorporating the Squeeze-and-Excitation (SE) attention module into the backbone of the YOLOv5 network significantly improves detection accuracy. The SE module is flexible: it can be applied directly to an existing network to rescale feature channels. This improvement matters here because the reflective-clothing dataset contains many blurry images and dirty data, which leads to low recognition accuracy when the model is trained on it directly. With the SE attention mechanism, the model focuses on relevant features and extracts key information, improving the accuracy of reflective-clothing detection.
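The SE operation described above (squeeze by global average pooling, excite through a small bottleneck, then rescale each channel) can be sketched as follows. This is a minimal NumPy illustration of the mechanism, not the actual YOLOv5 or SENet code; the weight matrices `w1` and `w2` and the reduction ratio are illustrative placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over a (C, H, W) feature map.

    Squeeze:   global average pooling over H, W  -> (C,)
    Excite:    FC reduce (C -> C/r), ReLU, FC expand (C/r -> C), sigmoid
    Rescale:   multiply each channel by its learned weight in (0, 1)
    """
    z = x.mean(axis=(1, 2))                     # squeeze: per-channel statistic
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excite: bottleneck + gating
    return x * s[:, None, None]                 # rescale channels

# Toy example: 4 channels, reduction ratio r = 2 (weights are random here;
# in a trained network they are learned end-to-end).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4)) * 0.1  # C -> C/r
w2 = rng.standard_normal((4, 2)) * 0.1  # C/r -> C
y = se_block(x, w1, w2)
```

Because the gate is a sigmoid, each channel is multiplied by a single factor between 0 and 1, so informative channels can be preserved while noisy ones (e.g. responses dominated by blur or dirty data) are suppressed.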

