Design a multilayer perceptron (MLP) network. The output I would like to see looks like: Flatten output shape: torch.Size([1, 1024]); Linear output shape: torch.Size([1, 2048]); Linear weight's mean: tensor(0.8631); Linear bias's mean: tensor(0.); Sigmoid output shape: torch.Size([1, 2])
Below is example code implementing a multilayer perceptron with three fully connected (Linear) layers:
```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(32 * 32 * 3, 2048)
        self.fc2 = nn.Linear(2048, 512)
        self.fc3 = nn.Linear(512, 20)
        self.sigmoid = nn.Sigmoid()
        self.relu = nn.ReLU()

    def forward(self, x):
        # Flatten (N, 3, 32, 32) -> (N, 3072)
        x = x.view(x.size(0), -1)
        print("Flatten output shape: \t", x.shape)
        x = self.fc1(x)
        print("Linear output shape: \t", x.shape)
        print("\t Linear weight's mean: \t", torch.mean(self.fc1.weight))
        print("\t Linear bias's mean: \t", torch.mean(self.fc1.bias))
        x = self.sigmoid(x)
        print("Sigmoid output shape: \t", x.shape)
        x = self.fc2(x)
        print("Linear output shape: \t", x.shape)
        print("\t Linear weight's mean: \t", torch.mean(self.fc2.weight))
        print("\t Linear bias's mean: \t", torch.mean(self.fc2.bias))
        x = self.relu(x)
        print("ReLU output shape: \t", x.shape)
        x = self.fc3(x)
        print("Linear output shape: \t", x.shape)
        print("\t Linear weight's mean: \t", torch.mean(self.fc3.weight))
        print("\t Linear bias's mean: \t", torch.mean(self.fc3.bias))
        return x

model = MLP()
x = torch.randn(1, 3, 32, 32)
output = model(x)
```
Example output (the weight means vary with each random initialization):

```
Flatten output shape:    torch.Size([1, 3072])
Linear output shape:     torch.Size([1, 2048])
    Linear weight's mean:    tensor(0.0025, grad_fn=<MeanBackward0>)
    Linear bias's mean:      tensor(0., grad_fn=<MeanBackward0>)
Sigmoid output shape:    torch.Size([1, 2048])
Linear output shape:     torch.Size([1, 512])
    Linear weight's mean:    tensor(0.0007, grad_fn=<MeanBackward0>)
    Linear bias's mean:      tensor(0., grad_fn=<MeanBackward0>)
ReLU output shape:       torch.Size([1, 512])
Linear output shape:     torch.Size([1, 20])
    Linear weight's mean:    tensor(0.0017, grad_fn=<MeanBackward0>)
    Linear bias's mean:      tensor(0., grad_fn=<MeanBackward0>)
```
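As a side note, the same Flatten → Linear → Sigmoid → Linear → ReLU → Linear architecture can also be sketched with `nn.Sequential`, using forward hooks to report each layer's output shape instead of hard-coding `print` calls inside `forward()`. This is an alternative sketch, not part of the original answer:

```python
import torch
import torch.nn as nn

# Same layer stack as the MLP class above, expressed as nn.Sequential.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 2048),
    nn.Sigmoid(),
    nn.Linear(2048, 512),
    nn.ReLU(),
    nn.Linear(512, 20),
)

def shape_hook(module, inputs, output):
    # Called after each layer's forward pass; reports the output shape.
    print(f"{module.__class__.__name__} output shape:\t{output.shape}")

# Attach the hook to every layer in the stack.
for layer in model:
    layer.register_forward_hook(shape_hook)

x = torch.randn(1, 3, 32, 32)
out = model(x)  # shape: torch.Size([1, 20])
```

Forward hooks keep the model definition free of debugging code, so the shape reporting can be removed later by simply not registering the hooks.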
Source: https://www.cveoy.top/t/topic/eOtm. Copyright belongs to the author; do not reproduce or scrape.