To evaluate the performance of our FCA-RAC model, we compared it with existing methods \cite{X3D, TANet, Video_SwinT, I_A_S, Context-aware, TransRAC} in Table~\ref{table:Evaluation}. The results on the RepCount-A and UCFRep datasets were taken from \cite{TransRAC, yao2023poserac}. For the CountixAV dataset, we manually annotated the start and end frames of the first action cycle, since no annotations of action-cycle boundaries were available.

Our FCA-RAC model outperformed previous methods on the RepCount-A and CountixAV datasets. On RepCount-A, it achieved an MAE of 0.268 and an OBO of 0.47, surpassing TransRAC \cite{TransRAC} by 0.175 in MAE and 0.18 in OBO. On CountixAV, it achieved an MAE of 0.330 and an OBO of 0.58, slightly better than the results reported in \cite{zhang2021repetitive}. Notably, the FCA-RAC model's performance on the UCFRep dataset was also comparable to the state of the art.
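For reference, MAE and OBO are the standard evaluation measures for repetitive action counting: MAE is the absolute counting error normalized by the ground-truth count and averaged over videos, and OBO is the fraction of videos whose predicted count is within one repetition of the ground truth. A minimal illustrative sketch of these definitions (not the authors' implementation) is:

```python
def mae(preds, gts):
    # Normalized Mean Absolute Error: |prediction - ground truth| / ground truth,
    # averaged over all test videos.
    return sum(abs(p - g) / g for p, g in zip(preds, gts)) / len(preds)

def obo(preds, gts, tol=1):
    # Off-By-One accuracy: fraction of videos whose predicted count
    # falls within `tol` repetitions of the ground-truth count.
    return sum(abs(p - g) <= tol for p, g in zip(preds, gts)) / len(preds)

# Toy example with three videos (hypothetical counts):
preds = [10, 4, 7]
gts = [9, 4, 8]
print(mae(preds, gts))  # lower is better
print(obo(preds, gts))  # higher is better
```

Lower MAE and higher OBO both indicate more accurate counting, which is the convention followed in Table~\ref{table:Evaluation}.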

Since our experimental setting differs from that of previous methods, in which label information was unavailable in the test set, we utilized the annotated first action cycle of each testing video to enhance generalizability. To ensure a fair comparison, we also evaluated the baselines described in Sec.~\ref{subsec:baseline}; the results are shown in Table~\ref{table:Evaluation}. Our model achieved MAE/OBO gains of 0.076/0.11 (0.054/0.11), 0.062/0.02 (0.049/0.03), and 0.155/0.21 (0.061/0.08) over FC-V (V-V) on the RepCount-A, CountixAV, and UCFRep datasets, respectively, indicating that it significantly outperforms the baselines on all three datasets. These findings provide strong evidence that our FCA-RAC model is an effective approach for repetitive action counting.

