LeNet-1D-V Model Code Analysis and Error Fix
This article analyzes the LeNet-1D-V model code, points out that its forward method is missing the target input parameter, and provides the corrected code.
Model Code Analysis
import torch
import torch.nn as nn

class LeNet_1D_V(nn.Module):
    def __init__(self):
        super(LeNet_1D_V, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv1d(1, 5, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv2 = nn.Sequential(
            nn.Conv1d(5, 10, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv3 = nn.Sequential(
            nn.Conv1d(10, 15, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv4 = nn.Sequential(
            nn.Conv1d(15, 20, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv5 = nn.Sequential(
            nn.Conv1d(20, 25, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv6 = nn.Sequential(
            nn.Conv1d(25, 30, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv7 = nn.Sequential(
            nn.Conv1d(30, 35, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv8 = nn.Sequential(
            nn.Conv1d(35, 40, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.avgpool = nn.AvgPool1d(kernel_size=2, stride=1)
        self.global_avgpool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(40, 6)
        self.softmax = nn.Softmax(dim=1)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x):
        x = self.conv1(x)
        x = self.avgpool(x)
        x = self.conv2(x)
        x = self.avgpool(x)
        x = self.conv3(x)
        x = self.avgpool(x)
        x = self.conv4(x)
        x = self.avgpool(x)
        x = self.conv5(x)
        x = self.avgpool(x)
        x = self.conv6(x)
        x = self.avgpool(x)
        x = self.conv7(x)
        x = self.avgpool(x)
        x = self.conv8(x)
        x = self.global_avgpool(x)
        x = x.view(x.size(0), -1)
        feature = x
        x = self.fc(x)
        output = self.softmax(x)
        if target is not None:  # bug: 'target' is not a parameter of forward
            loss = self.loss_fn(output, target)
            return feature, output, loss
        else:
            return feature, output

model = LeNet_1D_V()
print(model)
Model structure:
- Eight convolutional layers (Conv) with 3×1 kernels
- Seven average-pooling layers (Avg-pool) with 2×1 windows
- One global average-pooling layer (GAP)
- One fully connected layer (FC)
Model characteristics:
- The first convolutional layer produces 5 output channels (its kernel size is 3, the same as the other layers)
- The activation function is Mish (M)
- The multi-class loss function is cross-entropy
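The layer list above implies simple sequence-length arithmetic that can be traced in a few lines (the example input length of 128 is my own assumption, not from the original): every Conv1d with kernel 3, stride 1, padding 1 preserves the length, every AvgPool1d with kernel 2, stride 1 shortens it by 1, and the global average pool collapses whatever remains to 1.

```python
def length_before_gap(L):
    """Sequence length after the 8 convs and 7 average pools.

    Conv1d(kernel_size=3, stride=1, padding=1): L -> L      (length preserved)
    AvgPool1d(kernel_size=2, stride=1):         L -> L - 1  (shrinks by 1)
    """
    for _ in range(7):  # seven avg-pool layers, one after each conv except the last
        L = L - 1
    return L  # AdaptiveAvgPool1d(1) then maps this down to 1

print(length_before_gap(128))  # -> 121
```

This also shows the input sequence must be at least 8 samples long, or one of the pooling layers would receive an empty input.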
Code Error Analysis
The model code above contains one error: the forward method is missing target as an input parameter.
Cause:
forward is defined as forward(self, x), yet its body references a target parameter. Since target is undefined in that scope, the program raises a NameError at runtime.
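The failure mode is easy to reproduce with a stripped-down module (the Broken class below is a hypothetical minimal example, not the original model). Note that Python resolves names at call time, so defining the class succeeds; the NameError only surfaces when forward actually runs.

```python
class Broken:
    def forward(self, x):
        # 'target' is neither a parameter nor a local/global name,
        # so executing this line raises NameError
        if target is not None:
            return x, target
        return x

try:
    Broken().forward(1.0)
except NameError as e:
    caught = str(e)

print(caught)  # -> name 'target' is not defined
```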
Code Fix
To fix this error, change the definition of forward to forward(self, x, target) and use the target parameter inside the method. Giving target a default value of None also keeps the label-free branch reachable, so the model can still be called for pure inference.
import torch
import torch.nn as nn

class LeNet_1D_V(nn.Module):
    def __init__(self):
        super(LeNet_1D_V, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv1d(1, 5, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv2 = nn.Sequential(
            nn.Conv1d(5, 10, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv3 = nn.Sequential(
            nn.Conv1d(10, 15, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv4 = nn.Sequential(
            nn.Conv1d(15, 20, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv5 = nn.Sequential(
            nn.Conv1d(20, 25, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv6 = nn.Sequential(
            nn.Conv1d(25, 30, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv7 = nn.Sequential(
            nn.Conv1d(30, 35, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.conv8 = nn.Sequential(
            nn.Conv1d(35, 40, kernel_size=3, stride=1, padding=1),
            nn.Mish()
        )
        self.avgpool = nn.AvgPool1d(kernel_size=2, stride=1)
        self.global_avgpool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(40, 6)
        self.softmax = nn.Softmax(dim=1)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x, target=None):  # default None keeps inference-only calls working
        x = self.conv1(x)
        x = self.avgpool(x)
        x = self.conv2(x)
        x = self.avgpool(x)
        x = self.conv3(x)
        x = self.avgpool(x)
        x = self.conv4(x)
        x = self.avgpool(x)
        x = self.conv5(x)
        x = self.avgpool(x)
        x = self.conv6(x)
        x = self.avgpool(x)
        x = self.conv7(x)
        x = self.avgpool(x)
        x = self.conv8(x)
        x = self.global_avgpool(x)
        x = x.view(x.size(0), -1)
        feature = x
        logits = self.fc(x)
        output = self.softmax(logits)
        if target is not None:
            # CrossEntropyLoss applies log-softmax internally,
            # so it must receive the raw logits, not the softmax output
            loss = self.loss_fn(logits, target)
            return feature, output, loss
        else:
            return feature, output

model = LeNet_1D_V()
print(model)
Now target is passed to the forward method as an input parameter, so the loss can be computed whenever labels are supplied.
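As a sanity check, the corrected model can be exercised end to end. The sketch below is a compact re-implementation equivalent to the corrected class (the loop over conv blocks, the class name LeNet1DV, the batch size of 4, and the input length of 128 are my own choices, not from the original), and it feeds the raw logits rather than the softmax output to nn.CrossEntropyLoss, which is what that loss expects.

```python
import torch
import torch.nn as nn

class LeNet1DV(nn.Module):
    """Compact equivalent of the corrected LeNet_1D_V (loop over conv blocks)."""
    def __init__(self):
        super().__init__()
        channels = [1, 5, 10, 15, 20, 25, 30, 35, 40]
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(cin, cout, kernel_size=3, stride=1, padding=1),
                nn.Mish())
            for cin, cout in zip(channels, channels[1:])
        )
        self.avgpool = nn.AvgPool1d(kernel_size=2, stride=1)
        self.global_avgpool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(40, 6)
        self.softmax = nn.Softmax(dim=1)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x, target=None):
        for i, conv in enumerate(self.convs):
            x = conv(x)
            if i < len(self.convs) - 1:  # pool after every conv except the last
                x = self.avgpool(x)
        x = self.global_avgpool(x)
        feature = x.view(x.size(0), -1)       # (batch, 40)
        logits = self.fc(feature)             # (batch, 6)
        output = self.softmax(logits)
        if target is not None:
            # raw logits go to CrossEntropyLoss, not softmax probabilities
            return feature, output, self.loss_fn(logits, target)
        return feature, output

model = LeNet1DV()
x = torch.randn(4, 1, 128)           # batch of 4 single-channel sequences
target = torch.randint(0, 6, (4,))   # 6 classes
feature, output, loss = model(x, target)
print(feature.shape, output.shape, loss.item())
```

Calling model(x) without a target returns only (feature, output), which is convenient at inference time.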
Summary
This article analyzed the LeNet-1D-V model code, identified that its forward method was missing the target input parameter, and provided corrected code. The corrected code runs normally and accepts a target parameter, so the loss can be computed during training.
Original article: https://www.cveoy.top/t/topic/b9NB. Copyright belongs to the author; please do not reproduce or scrape.