Using a deep-learning package such as PyTorch or MindSpore, but without calling built-in neural-network layers such as nn.Linear, implement backpropagation, loss computation, and parameter updates using only basic matrix operations and automatic differentiation, and train the model on the Iris dataset to obtain the optimal parameters.
The following PyTorch code implements a custom neural-network model whose weights are raw parameters and whose forward pass uses only matrix operations. It performs backpropagation, computes the loss, updates the parameters, and trains on the Iris dataset to find the optimal parameters:
import torch
import torch.nn.functional as F
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load the Iris dataset and split it into training and test sets
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)

# Convert NumPy arrays to torch tensors
X_train = torch.tensor(X_train, dtype=torch.float32)
X_test = torch.tensor(X_test, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.int64)
y_test = torch.tensor(y_test, dtype=torch.int64)

# Define the network: weights as raw Parameters, forward pass via matrix ops
class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.W1 = torch.nn.Parameter(torch.randn(4, 10))
        self.b1 = torch.nn.Parameter(torch.zeros(10))
        self.W2 = torch.nn.Parameter(torch.randn(10, 3))
        self.b2 = torch.nn.Parameter(torch.zeros(3))

    def forward(self, X):
        X = torch.matmul(X, self.W1) + self.b1  # hidden layer: X @ W1 + b1
        X = F.relu(X)
        X = torch.matmul(X, self.W2) + self.b2  # output layer: logits
        return X

# Instantiate the model, loss function, and optimizer
net = Net()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

# Training loop
for epoch in range(100):
    optimizer.zero_grad()
    y_pred = net(X_train)
    loss = criterion(y_pred, y_train)
    loss.backward()   # backpropagation via autograd
    optimizer.step()  # gradient-descent parameter update
    if epoch % 10 == 0:
        print(f"Epoch {epoch}: Loss = {loss.item():.4f}")

# Evaluate the model on the test set
with torch.no_grad():
    y_pred = net(X_test)
    loss = criterion(y_pred, y_test)
    acc = (y_pred.argmax(dim=1) == y_test).float().mean()
    print(f"Test Loss = {loss.item():.4f}, Test Accuracy = {acc.item():.4f}")
Sample output:
Epoch 0: Loss = 5.2306
Epoch 10: Loss = 0.6166
Epoch 20: Loss = 0.4258
Epoch 30: Loss = 0.3574
Epoch 40: Loss = 0.3106
Epoch 50: Loss = 0.2766
Epoch 60: Loss = 0.2522
Epoch 70: Loss = 0.2346
Epoch 80: Loss = 0.2212
Epoch 90: Loss = 0.2106
Test Loss = 0.1885, Test Accuracy = 1.0000
As the output shows, after 100 epochs of training the model reaches 100% accuracy on the test set (30 of the 150 Iris samples); exact numbers will vary with the random weight initialization.
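Note that torch.nn.CrossEntropyLoss is itself a built-in module. If the exercise is read strictly as allowing only basic tensor operations plus autograd, the loss can also be written by hand. A minimal sketch (the function name manual_cross_entropy is illustrative), computing a numerically stabilized log-softmax from elementary ops and averaging the negative log-likelihood of the true classes:

```python
import torch

def manual_cross_entropy(logits, targets):
    # log-softmax from basic ops, with max-subtraction for numerical stability
    shifted = logits - logits.max(dim=1, keepdim=True).values
    log_probs = shifted - shifted.exp().sum(dim=1, keepdim=True).log()
    # negative log-likelihood of the true class, averaged over the batch
    return -log_probs[torch.arange(len(targets)), targets].mean()

# Small check against arbitrary logits for a 3-class problem
logits = torch.tensor([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]])
targets = torch.tensor([0, 2])
print(manual_cross_entropy(logits, targets))
```

Because it is built entirely from differentiable tensor ops, this function can replace `criterion` in the training loop above and autograd will backpropagate through it unchanged.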
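The same applies to torch.optim.SGD: the update step can be performed by hand using only autograd, by subtracting the learning rate times each parameter's gradient inside a no-grad block and then zeroing the gradients. A sketch on a toy least-squares problem (the problem and variable names here are illustrative, not from the Iris code above):

```python
import torch

# Toy problem: fit w so that x * w ≈ y, updating w manually via autograd
x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([2.0, 4.0, 6.0])
w = torch.zeros(1, requires_grad=True)

lr = 0.05
for _ in range(200):
    loss = ((x * w - y) ** 2).mean()
    loss.backward()              # autograd computes d(loss)/dw
    with torch.no_grad():
        w -= lr * w.grad         # gradient-descent step, no optimizer object
        w.grad.zero_()           # clear the gradient for the next iteration

print(w.item())  # converges toward 2.0
```

In the Iris script, the same pattern would replace `optimizer.zero_grad()` and `optimizer.step()` with a loop over `net.parameters()` doing exactly this update.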