In machine learning, the high cost of acquiring labeled data has driven the rapid development and wide adoption of semi-supervised learning, which exploits inexpensive unlabeled data for model training. However, conventional semi-supervised learning frameworks assume that labeled and unlabeled data share the same distribution, an assumption that rarely holds in practice. When the unlabeled data contains out-of-distribution samples from unknown classes, model performance degrades substantially. To address this problem, we propose a robust semi-supervised learning approach: on one hand, we adopt contrastive learning as the core feature-learning framework, exploiting all available data to improve both representation quality and the detection of out-of-distribution novel classes; on the other hand, we explicitly constrain the model's update direction so that classification performance on the known, in-distribution classes is not degraded. Finally, we compare our method against three contrastive baselines and validate its effectiveness from multiple perspectives.

A Robust Semi-Supervised Learning Approach for Out-of-Distribution Data
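The abstract does not specify which contrastive objective is used. As an illustrative assumption only, a common choice in this setting is the NT-Xent (InfoNCE) loss, which pulls two augmented views of the same sample together and pushes apart all other samples in the batch; a minimal NumPy sketch:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """NT-Xent / InfoNCE loss over two augmented views.

    z1, z2: (N, D) embeddings of the same N samples under two
    augmentations. Positive pairs are (z1[i], z2[i]); every other
    sample in the 2N-sample batch serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature                        # (2N, 2N)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs

    n = z1.shape[0]
    # column index of each row's positive partner
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # row-wise cross-entropy against the positive column
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Because well-aligned positive pairs drive this loss down while mismatched pairs keep it high, the same per-sample score can also serve as a signal for flagging out-of-distribution samples, consistent with the detection role the abstract assigns to contrastive learning.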

