# Implementing an SVM with Python and NumPy: A Hands-On Guide to the SMO Algorithm

This post walks through building an SVM (support vector machine) classifier from scratch with Python's NumPy library. Rather than calling a ready-made library, we dig into the principles of the SMO (Sequential Minimal Optimization) algorithm and implement it step by step in code. Finally, we validate the model's performance on the classic MNIST dataset.

### 1. A Brief Introduction to SVM and SMO

SVM is a powerful supervised learning algorithm widely used for classification and regression tasks. Its core idea is to find an optimal hyperplane that separates data points of different classes with the largest possible margin. SMO is an efficient iterative algorithm for solving the dual optimization problem that arises when training an SVM: at each step it selects a pair of Lagrange multipliers and optimizes them analytically while holding all the others fixed.

### 2. Implementing the SVM with NumPy

Below is a Python implementation of a simplified SMO trainer with a linear kernel and a random second-multiplier heuristic:

```python
import numpy as np

class SVM:
    def __init__(self, C=1.0, tol=0.001, max_iter=100):
        self.C = C
        self.tol = tol
        self.max_iter = max_iter

    def fit(self, X, y):
        n_samples, n_features = X.shape
        # Initialize parameters
        self.alpha = np.zeros(n_samples)
        self.b = 0.0
        self.X = X
        self.y = y

        # Iteratively optimize pairs of multipliers
        iter_count = 0
        while iter_count < self.max_iter:
            num_changed_alphas = 0
            for i in range(n_samples):
                E_i = self._decision_function(X[i]) - y[i]
                # Only optimize alpha_i if it violates the KKT conditions
                if (y[i] * E_i < -self.tol and self.alpha[i] < self.C) or \
                   (y[i] * E_i > self.tol and self.alpha[i] > 0):
                    j = self._select_second_alpha(i, n_samples)
                    E_j = self._decision_function(X[j]) - y[j]
                    alpha_i_old = self.alpha[i]
                    alpha_j_old = self.alpha[j]
                    # Compute the box-constraint bounds L and H for alpha_j
                    if y[i] != y[j]:
                        L = max(0, self.alpha[j] - self.alpha[i])
                        H = min(self.C, self.C + self.alpha[j] - self.alpha[i])
                    else:
                        L = max(0, self.alpha[i] + self.alpha[j] - self.C)
                        H = min(self.C, self.alpha[i] + self.alpha[j])
                    if L == H:
                        continue
                    # eta is the second derivative of the objective along the
                    # constraint direction; it must be negative for a valid step
                    eta = 2 * np.dot(X[i], X[j]) - np.dot(X[i], X[i]) - np.dot(X[j], X[j])
                    if eta >= 0:
                        continue
                    # Update alpha_j and clip it into [L, H]
                    self.alpha[j] -= y[j] * (E_i - E_j) / eta
                    self.alpha[j] = np.clip(self.alpha[j], L, H)
                    if abs(self.alpha[j] - alpha_j_old) < 1e-5:
                        continue
                    # Update alpha_i by the same amount in the opposite direction
                    self.alpha[i] += y[i] * y[j] * (alpha_j_old - self.alpha[j])
                    # Update the bias term
                    b1 = (self.b - E_i
                          - y[i] * (self.alpha[i] - alpha_i_old) * np.dot(X[i], X[i])
                          - y[j] * (self.alpha[j] - alpha_j_old) * np.dot(X[i], X[j]))
                    b2 = (self.b - E_j
                          - y[i] * (self.alpha[i] - alpha_i_old) * np.dot(X[i], X[j])
                          - y[j] * (self.alpha[j] - alpha_j_old) * np.dot(X[j], X[j]))
                    if 0 < self.alpha[i] < self.C:
                        self.b = b1
                    elif 0 < self.alpha[j] < self.C:
                        self.b = b2
                    else:
                        self.b = (b1 + b2) / 2
                    num_changed_alphas += 1
            if num_changed_alphas == 0:
                # No multiplier changed in a full pass: KKT conditions hold
                break
            iter_count += 1

    def predict(self, X):
        return np.sign(self._decision_function(X))

    def _decision_function(self, X):
        # Works for a single sample (1-D) or a batch of samples (2-D)
        return np.dot(self.kernel(X, self.X), self.alpha * self.y) + self.b

    def _select_second_alpha(self, first_alpha, n_samples):
        # Simplified SMO heuristic: pick a random index different from i
        second_alpha = first_alpha
        while second_alpha == first_alpha:
            second_alpha = np.random.randint(n_samples)
        return second_alpha

    def kernel(self, X1, X2):
        # Linear kernel
        return np.dot(X1, X2.T)
```

### 3. Hands-On with the MNIST Dataset

We test our SVM implementation on the MNIST handwritten-digit dataset. Because the classifier above is binary (it predicts the sign of the decision function), we reduce MNIST to a two-class problem (digit 0 versus all other digits, with labels in {+1, -1}) and train on a subsample to keep running time manageable.

```python
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import numpy as np

# Load the MNIST dataset
mnist = fetch_openml('mnist_784', version=1, cache=True, as_frame=False)
X = mnist.data
y = mnist.target.astype(np.uint8)

# Reduce to a binary task: digit 0 vs. the rest, labels in {+1, -1}
y = np.where(y == 0, 1, -1)

# Subsample and scale pixel values to keep training time reasonable
rng = np.random.RandomState(42)
idx = rng.choice(len(X), 2000, replace=False)
X, y = X[idx] / 255.0, y[idx]

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Create the SVM classifier and train it
svm = SVM(max_iter=10)  # limit the iteration count to reduce training time
svm.fit(X_train, y_train)

# Predict on the test set
y_pred = svm.predict(X_test)

# Compute the accuracy
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy:', accuracy)
```
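As a closing note, the box-constraint clipping inside the SMO update is easy to verify in isolation. The snippet below is a minimal standalone sketch: the helper `alpha_bounds` is not part of the class above, just a hypothetical function that reproduces the L/H computation so you can see how an unclipped `alpha_j` gets forced back into the feasible interval.

```python
import numpy as np

def alpha_bounds(alpha_i, alpha_j, y_i, y_j, C):
    """Box-constraint bounds [L, H] for alpha_j in one SMO step."""
    if y_i != y_j:
        # alpha_j - alpha_i stays constant along the update direction
        L = max(0.0, alpha_j - alpha_i)
        H = min(C, C + alpha_j - alpha_i)
    else:
        # alpha_i + alpha_j stays constant along the update direction
        L = max(0.0, alpha_i + alpha_j - C)
        H = min(C, alpha_i + alpha_j)
    return L, H

# Opposite labels, C = 1.0: the feasible interval is [0.25, 1.0]
L, H = alpha_bounds(0.25, 0.5, y_i=1, y_j=-1, C=1.0)
print(L, H)               # 0.25 1.0

# An unclipped update of 1.4 is pulled back to the boundary
print(np.clip(1.4, L, H))  # 1.0
```

This is the same constraint geometry that makes `if L == H: continue` necessary in `fit`: when the interval collapses to a point, no progress is possible for that pair.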


Original source: https://www.cveoy.top/t/topic/bA7u. All rights reserved by the author. Please do not reproduce or scrape.
