• The first line initializes the model using the specified model type from the models dictionary.
  • The second line checks if a GPU is available and moves the model to the GPU if it is.
  • The third line creates an Adam optimizer over the model's parameters, with a learning rate of 1e-4 and a weight decay of 1e-5.
  • The fourth line sets the loss criterion to cross-entropy loss.
  • The fifth line calculates the number of batches per epoch by dividing the length of the training dataset by the batch size.
  • The next few lines initialize lists to store the training and testing loss and accuracy values.
  • The next line creates a directory to save the model checkpoints.
  • The following for loop iterates over the specified number of epochs.
  • Inside the loop, the training data is iterated over in batches.
  • The input data and labels are prepared and moved to the GPU if available.
  • The train function is called to perform a forward pass, calculate the loss, and update the model parameters using the optimizer.
  • The training labels and predictions are stored in lists.
  • After each epoch, the training time, training accuracy, and training loss are calculated and stored in the respective lists.
  • The test function is called to evaluate the model on the test data and calculate the test accuracy, test loss, and other metrics.
  • The test accuracy and loss values are printed.
  • The model is saved every 10 epochs and the final model is saved at the end.
  • The test function is called again with the final model to calculate the test accuracy, test loss, NAR (Normalized Accuracy Rate), and feature list.
  • The test results are printed.
  • The feature list is saved as a CSV file.
  • The draw_result function is called to visualize the confusion matrix.
  • The draw function is called to plot the training and testing accuracy and loss curves.
  • Finally, the total training time is printed.
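The steps above can be sketched as a minimal, self-contained loop. The model, data, batch size, and epoch count below are placeholders (the real script takes them from its arguments and dataset), and the script's own `train`/`test` helpers, NAR metric, feature-list export, and plotting functions (`draw_result`, `draw`) are omitted:

```python
import os
import time

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 3)  # placeholder for models[args.model]
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
criterion = nn.CrossEntropyLoss()

# Random stand-in data so the loop structure runs end to end.
x_train = torch.randn(64, 8)
y_train = torch.randint(0, 3, (64,))
x_test = torch.randn(16, 8)
y_test = torch.randint(0, 3, (16,))

batch_size = 16
batches_per_epoch = len(x_train) // batch_size

train_losses, train_accs = [], []
test_losses, test_accs = [], []

os.makedirs("checkpoints", exist_ok=True)  # directory for model checkpoints
start = time.time()

num_epochs = 5  # the real script reads this from its arguments
for epoch in range(num_epochs):
    model.train()
    epoch_loss, correct = 0.0, 0
    for b in range(batches_per_epoch):
        xb = x_train[b * batch_size:(b + 1) * batch_size].to(device)
        yb = y_train[b * batch_size:(b + 1) * batch_size].to(device)
        optimizer.zero_grad()
        logits = model(xb)           # forward pass
        loss = criterion(logits, yb)
        loss.backward()              # backpropagate
        optimizer.step()             # update parameters
        epoch_loss += loss.item()
        correct += (logits.argmax(1) == yb).sum().item()
    train_losses.append(epoch_loss / batches_per_epoch)
    train_accs.append(correct / len(x_train))

    # Evaluate on the held-out set after each epoch.
    model.eval()
    with torch.no_grad():
        logits = model(x_test.to(device))
        test_losses.append(criterion(logits, y_test.to(device)).item())
        test_accs.append((logits.argmax(1).cpu() == y_test).float().mean().item())

    if (epoch + 1) % 10 == 0:  # periodic checkpoint every 10 epochs
        torch.save(model.state_dict(), f"checkpoints/epoch_{epoch + 1}.pt")

torch.save(model.state_dict(), "checkpoints/final.pt")  # final model
print(f"total training time: {time.time() - start:.1f}s")
```

In the original script the per-batch forward/backward work lives inside its `train` function and the evaluation inside its `test` function; here they are inlined to keep the sketch runnable.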
model = models[args.model]
if torch.cuda.is_available():
    model = model.cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
# optimizer.load_state_dict(torch.load('save_optim.pt'))
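The commented-out line suggests the script can resume training from a saved optimizer state. A minimal sketch of the matching save/restore pair (the model checkpoint filename `save_model.pt` is an assumption, not from the source):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

# Save both model and optimizer state so training can resume exactly
# (Adam keeps per-parameter moment estimates that are lost otherwise).
torch.save(model.state_dict(), "save_model.pt")
torch.save(optimizer.state_dict(), "save_optim.pt")

# Later: rebuild the objects, then restore their saved states.
model.load_state_dict(torch.load("save_model.pt"))
optimizer.load_state_dict(torch.load("save_optim.pt"))
```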
