In machine learning, the high cost of acquiring labeled data has driven the rapid development and wide adoption of semi-supervised learning, which exploits inexpensive unlabeled data for model training. However, conventional semi-supervised learning assumes that labeled and unlabeled data share the same distribution, a condition that is hard to satisfy in many practical scenarios: when the unlabeled data contain out-of-distribution samples from unknown classes, model performance degrades significantly. To address this problem, we propose a safe semi-supervised learning method that adopts contrastive learning as the core feature-learning framework, exploiting all available data to obtain stronger representations and the ability to detect out-of-distribution new classes. In addition, our approach explicitly constrains the direction of model updates so that classification performance on known in-distribution classes is not impaired. Finally, we compare our method with three contrastive baselines and verify its effectiveness from multiple perspectives.
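The abstract does not specify how new-class out-of-distribution detection is performed on the learned representations. As a minimal illustrative sketch (not the paper's actual method), one common approach scores each unlabeled sample by its maximum cosine similarity to known-class prototypes in the contrastive feature space; all function names, the toy data, and the threshold below are hypothetical:

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Project feature vectors onto the unit sphere."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def class_prototypes(features, labels, num_classes):
    """Mean embedding per known class, re-normalized."""
    feats = l2_normalize(features)
    return l2_normalize(np.stack(
        [feats[labels == c].mean(axis=0) for c in range(num_classes)]))

def ood_scores(features, prototypes):
    """Max cosine similarity to any known-class prototype.

    Low values suggest an out-of-distribution (new-class) sample.
    """
    return (l2_normalize(features) @ prototypes.T).max(axis=1)

# Toy example: two known classes in a 2-D feature space.
labeled = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(labeled, labels, num_classes=2)

unlabeled = np.array([[0.95, 0.05],    # resembles known class 0
                      [-1.0, -1.0]])   # far from both prototypes
scores = ood_scores(unlabeled, protos)
is_ood = scores < 0.5                  # hypothetical threshold
```

In practice the threshold would be calibrated on held-out labeled data, and the features would come from the contrastively trained encoder rather than raw inputs.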
