In machine learning, labeled data is often expensive to obtain. Semi-supervised learning, which exploits cheap unlabeled data during training, has therefore become increasingly popular. However, traditional semi-supervised frameworks assume that labeled and unlabeled data share the same distribution, an assumption that often fails in practice. When the unlabeled data contains samples from unknown, out-of-distribution classes, model performance can degrade significantly.

To address this challenge, our project develops a safe semi-supervised learning method. First, we adopt contrastive learning as the core framework for feature learning, which lets the model exploit all available data and yields strong representations together with the ability to detect out-of-distribution novel classes. Second, we explicitly constrain the model's update direction so that classification performance on known, in-distribution classes is not degraded. Finally, we compare our method against three contrastive baselines and validate its effectiveness from multiple perspectives.
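The description above does not give implementation details, so the following is only an illustrative sketch of the three ingredients it names: a contrastive loss over paired augmented views (here the NT-Xent loss popularized by SimCLR, which is one common choice, not necessarily the one used in this project), a distance-to-prototype score for detecting out-of-distribution samples, and an anchor penalty that discourages updates from drifting away from a model already trained on known classes. All function names and the choice of loss are assumptions for illustration.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss (illustrative choice, not necessarily
    the project's exact loss). z1[i] and z2[i] are embeddings of two
    augmented views of the same sample; each pair is pulled together
    while all other embeddings in the batch are pushed apart."""
    n = len(z1)
    z = z1 + z2  # 2n embeddings in total
    loss = 0.0
    for i in range(2 * n):
        j = (i + n) % (2 * n)  # index of the positive (paired) view
        denom = sum(math.exp(cosine(z[i], z[k]) / temperature)
                    for k in range(2 * n) if k != i)
        pos = math.exp(cosine(z[i], z[j]) / temperature)
        loss += -math.log(pos / denom)
    return loss / (2 * n)

def ood_score(z, prototypes):
    """Hypothetical novelty score: distance to the nearest known-class
    prototype in the learned embedding space. Higher = more likely
    out-of-distribution."""
    return 1.0 - max(cosine(z, p) for p in prototypes)

def anchored_penalty(weights, anchor, lam=0.1):
    """Proxy for the 'constrained update direction' idea: an L2 penalty
    keeping new weights close to an anchor model trained on known
    classes, so in-distribution accuracy is not sacrificed."""
    return lam * sum((w - a) ** 2 for w, a in zip(weights, anchor))
```

As a sanity check, a batch whose two views are aligned per sample yields a lower NT-Xent loss than one whose views are mismatched, and an embedding lying between two class prototypes receives a higher OOD score than one sitting on a prototype.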

Secure Semi-Supervised Learning: Addressing Out-of-Distribution Data in Machine Learning

