Adding Logging to PyTorch Training Code

This guide shows how to add logging to PyTorch training code so that key information from the training process is recorded: train and test loss, accuracy, and the epoch and accuracy of the best model.

Keywords: PyTorch, logging, training, loss, accuracy, best model

```python
import torch
import torchvision.models as models
from torch.utils import data
from torch import nn
from torch import optim
import numpy as np
import argparse
from data.MyDataset import Mydatasetpro
from torch.optim.lr_scheduler import ReduceLROnPlateau
from data.MyDataset import all_imgs_path, all_labels, transform

import logging
logging.basicConfig(filename='log.txt', level=logging.INFO,
                    format='%(asctime)s - %(levelname)s - %(message)s')


parser = argparse.ArgumentParser(description='Soft')
parser.add_argument('--name', type=str, default='resnet50', metavar='N', help='model name')
parser.add_argument('--batch_size', type=int, default=32, metavar='N', help='input batch size for training')
parser.add_argument('--epochs', type=int, default=1000, metavar='N', help='number of epochs to train')
parser.add_argument('--lr', type=float, default=1e-3, metavar='LR', help='learning rate')
args = parser.parse_args()
print(args)

# ==================================================================
# Set Device
# ==================================================================
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ==================================================================
# Set Model
# ==================================================================
model = models.resnet50(pretrained=True)

# ==================================================================
# Divide the data
# ==================================================================
# Shuffle the samples before splitting into train and test sets
index = np.random.permutation(len(all_imgs_path))
all_imgs_path = np.array(all_imgs_path)[index]
all_labels = np.array(all_labels)[index]

# Use 80% of the data for training
s = int(len(all_imgs_path) * 0.8)
train_imgs = all_imgs_path[:s]
train_labels = all_labels[:s]
test_imgs = all_imgs_path[s:]
test_labels = all_labels[s:]

train_ds = Mydatasetpro(train_imgs, train_labels, transform)  # train set
test_ds = Mydatasetpro(test_imgs, test_labels, transform)     # test set
train_dl = data.DataLoader(train_ds, batch_size=args.batch_size, shuffle=True)
# Note: the test loader must be built from test_ds (the original code
# mistakenly reused train_ds here), and it does not need shuffling.
test_dl = data.DataLoader(test_ds, batch_size=args.batch_size, shuffle=False)

imgs_batch, labels_batch = next(iter(train_dl))
print(imgs_batch.shape)

# Replace the classification head.
# nn.Dropout(0.4) fixes the original typo nn.Dropout(0, 4).
in_features = model.fc.in_features
model.fc = nn.Sequential(
    nn.Linear(in_features, 256),
    nn.ReLU(),
    nn.Dropout(0.4),
    nn.Linear(256, 3),
    nn.LogSoftmax(dim=1),
)

# Move the model to the GPU
model = model.to(DEVICE)

# Loss function: the head ends in LogSoftmax, so NLLLoss is the matching
# loss (CrossEntropyLoss would apply log-softmax a second time).
loss_fn = nn.NLLLoss()
loss_fn = loss_fn.to(DEVICE)  # move loss_fn to the GPU
# Adam optimizer (only the new head is trained)
optimizer = optim.Adam(model.fc.parameters(), lr=args.lr)
# ReduceLROnPlateau learning-rate scheduler
scheduler = ReduceLROnPlateau(optimizer, mode='max', factor=0.1, patience=50, verbose=True)

steps = 0
running_loss = 0
count = 10
train_losses, test_losses = [], []
best_accuracy = 0.0   # accuracy of the best model so far
best_epoch = 0
no_improve_count = 0  # number of evaluations without improvement
stop_training = False

for epoch in range(args.epochs):
    model.train()
    # ==============================================================
    # Train (on train_dl; the original iterated the full dataset,
    # which leaks test samples into training)
    # ==============================================================
    for imgs, labels in train_dl:
        steps += 1
        labels = labels.long()
        imgs, labels = imgs.to(DEVICE), labels.to(DEVICE)
        optimizer.zero_grad()   # reset gradients
        outputs = model(imgs)
        loss = loss_fn(outputs, labels)
        loss.backward()         # backpropagate
        optimizer.step()        # update parameters
        running_loss += loss.item()

        if steps % count == 0:
            test_loss = 0
            accuracy = 0
            model.eval()
            # ==========================================================
            # Validate
            # ==========================================================
            with torch.no_grad():
                for imgs, labels in test_dl:
                    labels = labels.long()
                    imgs, labels = imgs.to(DEVICE), labels.to(DEVICE)
                    outputs = model(imgs)
                    loss = loss_fn(outputs, labels)
                    test_loss += loss.item()
                    ps = torch.exp(outputs)  # log-probabilities -> probabilities
                    top_p, top_class = ps.topk(1, dim=1)
                    equals = top_class == labels.view(*top_class.shape)
                    accuracy += torch.mean(equals.type(torch.FloatTensor)).item()

            train_losses.append(running_loss / count)
            test_losses.append(test_loss / len(test_dl))
            if steps % 50 == 0:
                logging.info(
                    f"Epoch {epoch + 1}/{args.epochs}.. "
                    f"Train loss: {running_loss / count:.3f}.. "
                    f"Test loss: {test_loss / len(test_dl):.3f}.. "
                    f"Test accuracy: {accuracy / len(test_dl):.3f}")
            running_loss = 0
            model.train()

            if accuracy > best_accuracy:
                best_accuracy = accuracy
                best_epoch = epoch
                torch.save(model, "Direction_model.pth")
                no_improve_count = 0
            else:
                no_improve_count += 1
                if no_improve_count >= 50:
                    # Let the scheduler decide whether to lower the lr
                    scheduler.step(best_accuracy)
                    if optimizer.param_groups[0]['lr'] < 1e-6:
                        # Learning rate too small: stop training
                        # (a flag is needed to exit the epoch loop too)
                        print("Learning rate too small, training stopped.")
                        stop_training = True
                        break
                    no_improve_count = 0
    if stop_training:
        break

logging.info(f"The best epoch is: {best_epoch}; The best accuracy is: {best_accuracy}")
```
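The `logging.basicConfig` call above writes records only to `log.txt`, so nothing appears on the console during training. A minimal sketch of logging to both a file and the console via explicit handlers (the logger name `train` and file name `train.log` are illustrative choices, not from the original script):

```python
import logging

# Build a named logger with two handlers: one for a file, one for stderr.
logger = logging.getLogger("train")
logger.setLevel(logging.INFO)

formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")

file_handler = logging.FileHandler("train.log")   # persists records to disk
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

console_handler = logging.StreamHandler()         # echoes records to the console
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)

logger.info("Epoch 1/10.. Train loss: 0.532")
```

With this setup, `logging.info(...)` calls in the training loop would be replaced by `logger.info(...)`, and the same record lands in both destinations.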
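The `ReduceLROnPlateau` scheduler used above multiplies the learning rate by `factor` once the monitored metric (here accuracy, hence `mode='max'`) has failed to improve for more than `patience` consecutive `step()` calls. Its core behavior can be sketched in plain Python; this is a simplified illustration of the idea, not PyTorch's actual implementation:

```python
class PlateauLRSketch:
    """Simplified illustration of ReduceLROnPlateau with mode='max'."""

    def __init__(self, lr, factor=0.1, patience=50):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float("-inf")
        self.num_bad_steps = 0

    def step(self, metric):
        if metric > self.best:
            # Metric improved: remember it and reset the bad-step counter.
            self.best = metric
            self.num_bad_steps = 0
        else:
            self.num_bad_steps += 1
            if self.num_bad_steps > self.patience:
                # No improvement for more than `patience` steps: shrink the lr.
                self.lr *= self.factor
                self.num_bad_steps = 0
        return self.lr


# Usage: with patience=2, the fourth stagnant step triggers a reduction.
sched = PlateauLRSketch(1e-3, factor=0.1, patience=2)
for metric in [0.5, 0.5, 0.5, 0.5]:
    lr = sched.step(metric)
print(lr)
```

The real scheduler additionally supports `mode='min'`, cooldown periods, and per-parameter-group learning rates, which this sketch omits.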


Original source: https://www.cveoy.top/t/topic/pvps — copyright belongs to the author. Do not repost or scrape.
