An Example Implementation of a Multilayer Perceptron (MLP) and a Walkthrough of Its Output
Below is example code implementing a multilayer perceptron with three fully connected layers (two hidden layers and an output layer):
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        # Three fully connected layers: 32*32*3 = 3072 input features,
        # two hidden layers (2048 and 512 units), and a 20-class output.
        self.fc1 = nn.Linear(32 * 32 * 3, 2048)
        self.fc2 = nn.Linear(2048, 512)
        self.fc3 = nn.Linear(512, 20)
        self.sigmoid = nn.Sigmoid()
        self.relu = nn.ReLU()

    def forward(self, x):
        # Flatten (N, 3, 32, 32) into (N, 3072) before the first linear layer.
        x = x.view(x.size(0), -1)
        x = self.fc1(x)
        print('Linear output shape: ', x.shape)
        print(" Linear weight's mean: ", torch.mean(self.fc1.weight))
        print(" Linear bias's mean: ", torch.mean(self.fc1.bias))
        x = self.sigmoid(x)
        print('Sigmoid output shape: ', x.shape)
        x = self.fc2(x)
        print('Linear output shape: ', x.shape)
        print(" Linear weight's mean: ", torch.mean(self.fc2.weight))
        print(" Linear bias's mean: ", torch.mean(self.fc2.bias))
        x = self.relu(x)
        print('ReLU output shape: ', x.shape)
        x = self.fc3(x)
        print('Linear output shape: ', x.shape)
        print(" Linear weight's mean: ", torch.mean(self.fc3.weight))
        print(" Linear bias's mean: ", torch.mean(self.fc3.bias))
        return x

model = MLP()
x = torch.randn(1, 3, 32, 32)  # a fake CIFAR-sized input: batch of 1, 3x32x32
output = model(x)
print('Final output shape: ', output.shape)
The output is shown below. The exact means vary from run to run: PyTorch initializes nn.Linear weights and biases from zero-centered uniform distributions, so the printed means are close to zero.
Linear output shape: torch.Size([1, 2048])
 Linear weight's mean: tensor(0.0025, grad_fn=<MeanBackward0>)
 Linear bias's mean: tensor(0., grad_fn=<MeanBackward0>)
Sigmoid output shape: torch.Size([1, 2048])
Linear output shape: torch.Size([1, 512])
 Linear weight's mean: tensor(0.0007, grad_fn=<MeanBackward0>)
 Linear bias's mean: tensor(0., grad_fn=<MeanBackward0>)
ReLU output shape: torch.Size([1, 512])
Linear output shape: torch.Size([1, 20])
 Linear weight's mean: tensor(0.0017, grad_fn=<MeanBackward0>)
 Linear bias's mean: tensor(0., grad_fn=<MeanBackward0>)
Final output shape: torch.Size([1, 20])
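As an aside, the same shape inspection can be done without hard-coding print calls into forward(). The sketch below is not part of the original post (the helper name register_shape_hooks is made up for illustration); it uses PyTorch forward hooks, which fire after each submodule's forward pass:

import torch
import torch.nn as nn

def register_shape_hooks(model: nn.Module):
    # Attach a forward hook to each Linear/Sigmoid/ReLU submodule that
    # prints the module's name, type, and output shape.
    handles = []
    for name, module in model.named_modules():
        if isinstance(module, (nn.Linear, nn.Sigmoid, nn.ReLU)):
            def hook(mod, inputs, output, name=name):
                print(f'{name} ({mod.__class__.__name__}) output shape: {tuple(output.shape)}')
            handles.append(module.register_forward_hook(hook))
    return handles

model = MLP()                            # the MLP class defined above
handles = register_shape_hooks(model)
_ = model(torch.randn(1, 3, 32, 32))     # hooks print shapes (forward()'s own prints also fire)
for h in handles:
    h.remove()                           # detach the hooks when done

Hooks keep the model definition clean and can be removed after debugging, which is why they are the usual tool for this kind of inspection in PyTorch.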