Load the MNIST dataset. Normalize the data. Split the data into training, validation, and testing sets. Build a CNN network with convolution layers and pooling layers to classify the digits. Plot the training loss and validation loss, plot the training and validation accuracy, and print the testing accuracy.
To complete this task, you can follow the steps below:

- Load the MNIST dataset using the `tf.keras.datasets.mnist` module.
- Normalize the data by dividing the pixel values (0-255) by 255.
- Split the training data into training and validation sets using the `train_test_split` function from `sklearn.model_selection` (MNIST already provides a separate testing set).
- Reshape the data to add a channel dimension, since the images are grayscale (a single channel).
- Build a CNN using a `tf.keras.Sequential` model with convolutional layers, pooling layers, and fully connected layers.
- Compile the model with an appropriate optimizer (e.g., Adam) and a suitable loss function (e.g., sparse categorical crossentropy).
- Train the model on the training data and validate it on the validation data using the model's `fit` method.
- Plot the training loss and validation loss as a function of epochs using `matplotlib.pyplot.plot`.
- Plot the training accuracy and validation accuracy as a function of epochs.
- Print the testing accuracy using the model's `evaluate` method.
Here's an example implementation:
```python
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the data
x_train = x_train / 255.0
x_test = x_test / 255.0

# Split the training data into training and validation sets
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.2, random_state=42)

# Reshape the data to have a single channel
x_train = x_train.reshape((-1, 28, 28, 1))
x_val = x_val.reshape((-1, 28, 28, 1))
x_test = x_test.reshape((-1, 28, 28, 1))

# Build the CNN network
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)

# Plot the training loss and validation loss
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plot the training accuracy and validation accuracy
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

# Print the testing accuracy
_, test_accuracy = model.evaluate(x_test, y_test)
print('Testing Accuracy:', test_accuracy)
```
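Once trained, the model can also classify individual images with `predict`. The sketch below is self-contained for illustration only: it rebuilds the same architecture untrained and feeds it a random dummy input, so the predicted digit here is meaningless; in the script above you would instead call `model.predict` on real test images.

```python
import numpy as np
import tensorflow as tf

# Rebuild the same architecture (untrained, for a standalone illustration;
# in practice reuse the trained `model` from the script above).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# A dummy 28x28 grayscale image standing in for e.g. x_test[0]
image = np.random.rand(28, 28, 1).astype('float32')

# predict expects a batch, so add a leading batch dimension;
# the output is one row of 10 class probabilities per image
probs = model.predict(image[np.newaxis, ...], verbose=0)
predicted_digit = int(np.argmax(probs, axis=1)[0])
```

Because the final layer is a softmax over 10 classes, `probs` has shape `(1, 10)` and each row sums to 1; `argmax` picks the most probable digit.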
This code will load the MNIST dataset, normalize the data, split it into training, validation, and testing sets, build a CNN network, train it, and evaluate its performance. Finally, it will plot the training and validation loss as well as the training and validation accuracy, and print the testing accuracy.