Training a PyTorch Image Classification Model to 97.5% Accuracy
import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.optim as optim
import torch.nn.functional as F
import os

batch_size = 64

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))  # two arguments: the mean and the standard deviation (a single value is broadcast to all channels)
])

train_dataset = datasets.ImageFolder(
    root="./data/train",
    transform=transform
)

train_loader = DataLoader(train_dataset,
                          shuffle=True,
                          batch_size=batch_size)

test_dataset = datasets.ImageFolder(
    root="./data/validation",
    transform=transform
)

test_loader = DataLoader(test_dataset,
                         shuffle=False,
                         batch_size=batch_size)

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3)
        self.conv2 = torch.nn.Conv2d(in_channels=10, out_channels=20, kernel_size=3)
        self.conv3 = torch.nn.Conv2d(in_channels=20, out_channels=40, kernel_size=3)
        self.pooling1 = torch.nn.MaxPool2d(kernel_size=2)
        self.pooling2 = torch.nn.MaxPool2d(kernel_size=2)
        self.pooling3 = torch.nn.MaxPool2d(kernel_size=2)
        # 40 = conv3's 40 output channels x the 1x1 spatial size left after the three
        # conv/pool stages (for 28x28 inputs); see the shape-check sketch below.
        self.linear1 = torch.nn.Linear(40, 32)
        self.linear2 = torch.nn.Linear(32, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.pooling1(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.pooling2(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.pooling3(x)
        x = x.view(x.size(0), -1)  # flatten to (batch_size, features)
        x = self.linear1(x)
        x = self.linear2(x)
        return x

model = Net()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

def train():
    for epoch in range(epochs):
        # training: reset the per-epoch counters so accuracy/loss are not accumulated across epochs
        model.train()
        running_loss = 0.0
        train_loss = 0.0
        accuracy = 0
        total = 0
        for batch_id, data in enumerate(train_loader, 0):
            inputs, target = data
            inputs, target = inputs.to(device), target.to(device)
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(outputs, target)

            _, predicted = torch.max(outputs, dim=1)
            accuracy += (predicted == target).sum().item()
            total += target.size(0)

            loss.backward()
            optimizer.step()

            running_loss += loss.item()
            train_loss += loss.item()
            if batch_id % 300 == 299:
                print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_id + 1, running_loss / 300))
                running_loss = 0.0
        print('Epoch %d: accuracy on train set: %d %%, loss on train set: %f'
              % (epoch + 1, 100 * accuracy / total, train_loss))

        # validation
        correct = 0
        total = 0
        val_loss = 0.0
        model.eval()
        with torch.no_grad():
            for data in test_loader:
                images, target = data
                images, target = images.to(device), target.to(device)
                outputs = model(images)
                loss = criterion(outputs, target)
                val_loss += loss.item()
                _, predicted = torch.max(outputs, dim=1)
                total += target.size(0)
                correct += (predicted == target).sum().item()
        print('Epoch %d: accuracy on validation set: %d %%, loss on validation set: %f'
              % (epoch + 1, 100 * correct / total, val_loss))

        # stop once the target validation accuracy of 97.5% is reached
        if correct / total >= 0.975:
            break

epochs = 100
train()

# Save the model
# if not os.path.exists("./train"):
#     os.makedirs("./train")
# torch.save(model.state_dict(), "./train/model.pth")
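The value 40 in linear1 is the number of features left after flattening: with 28x28 inputs (an assumption, the post never states the image size), each kernel_size=3 convolution without padding trims 2 pixels per side and each 2x2 max-pool halves the spatial size (28 -> 26 -> 13 -> 11 -> 5 -> 3 -> 1), leaving a 40x1x1 feature map. A minimal sketch to check the size for whatever resolution the dataset actually uses, relying on the Net class defined above:

# Verify the flattened feature size for a given input resolution.
# 28x28 is an assumed size -- change it to match the images in ./data/train.
dummy = torch.zeros(1, 3, 28, 28)
net = Net()
with torch.no_grad():
    x = net.pooling1(F.relu(net.conv1(dummy)))
    x = net.pooling2(F.relu(net.conv2(x)))
    x = net.pooling3(F.relu(net.conv3(x)))
print(x.shape)                      # torch.Size([1, 40, 1, 1])
print(x.view(x.size(0), -1).shape)  # torch.Size([1, 40]) -> in_features of linear1

If the printed feature count is not 40, either resize the images in the transform (for example by adding transforms.Resize((28, 28)) to the Compose) or change linear1's in_features to the printed value.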
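The commented-out block at the end saves only the state_dict, so the weights must be loaded back into a freshly constructed Net. A minimal save-and-restore sketch; the ./train directory mirrors the commented-out code, and the model.pth filename is otherwise arbitrary:

# Save the trained weights (state_dict only, not the whole module).
os.makedirs("./train", exist_ok=True)
torch.save(model.state_dict(), "./train/model.pth")

# Later, or in another script: rebuild the architecture, then load the weights.
restored = Net()
restored.load_state_dict(torch.load("./train/model.pth", map_location=device))
restored.to(device)
restored.eval()  # switch to evaluation mode before running inference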