Data imbalance is a common problem in deep learning where one class has far more training samples than the others. A model trained on such data tends to favor the majority class, yielding biased predictions and poor performance on the minority class. Several techniques address this problem, including data augmentation, class weighting, oversampling the minority class, and undersampling the majority class.
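Two of these techniques can be sketched in a few lines of NumPy. The snippet below is a minimal illustration, assuming a hypothetical binary label array with a 90/10 class split; the weight formula (total samples divided by classes times class count) mirrors a common inverse-frequency scheme, not any specific library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced binary labels: 90 majority (0), 10 minority (1).
y = np.array([0] * 90 + [1] * 10)

# Class weighting: weight each class inversely to its frequency,
# so minority-class errors cost more in a weighted loss.
classes, counts = np.unique(y, return_counts=True)
weights = {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}

# Random oversampling: duplicate minority samples until the classes balance.
minority = np.flatnonzero(y == 1)
extra = rng.choice(minority, size=counts[0] - counts[1], replace=True)
y_balanced = np.concatenate([y, y[extra]])
```

Here the minority class receives weight 5.0 versus roughly 0.56 for the majority, and after oversampling both classes contribute 90 samples. In practice these weights would be passed to a weighted loss function, and oversampling would duplicate the feature rows alongside the labels.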

References:

  • He, H., & Garcia, E. A. (2009). Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 21(9), 1263-1284.
  • Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321-357.
  • Zhang, Y., & Sun, G. (2018). Understanding data augmentation for classification: when to warp? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 1179-1187).
