Resolving the Target Tensor Dimension Mismatch When Training a PyTorch LSTM Model
When training an LSTM model with PyTorch, you may run into the following warning:
UserWarning: Using a target size (torch.Size([3090])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
This happens because the target tensor's shape does not match the shape of the model's output. Loss functions such as nn.MSELoss silently broadcast mismatched shapes, which produces incorrect results rather than an outright error. The target tensor should have the same shape as the output tensor. In this case, you can call unsqueeze on the target to add an extra dimension, turning the 1-D target of shape [3090] into a 2-D tensor of shape [3090, 1].
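As a standalone illustration (the tensors here are placeholders, not the post's actual data), the snippet below shows how PyTorch silently broadcasts mismatched shapes, which is exactly what the warning is about, and how unsqueeze adds the missing dimension:

import torch

target = torch.zeros(3090)        # shape: [3090], like the target in the warning
output = torch.zeros(1)           # shape: [1], like the model output in the warning
print((target - output).shape)    # torch.Size([3090]) -- broadcasts silently, wrong semantics
print(target.unsqueeze(1).shape)  # torch.Size([3090, 1]) -- explicit 2-D target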
The following code shows how to apply the fix in context:
import torch
import torch.nn as nn

# X_new and data come from earlier preprocessing steps (not shown in the original post)
inputs = torch.from_numpy(X_new).float().cuda()
# unsqueeze(1) turns the 1-D target of shape [3090] into a 2-D tensor of shape [3090, 1]
targets = torch.from_numpy(data['Close_Amplitude'].values).float().cuda().unsqueeze(1)

# Define the model hyperparameters
input_size = X_new.shape[1]   # number of features per timestep
hidden_size = len(X_new)      # hidden units (set to the number of samples, as in the original post)
num_layers = 2
output_size = 1
batch_size = 1
# Define the LSTM model
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size, batch_size):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.batch_size = batch_size
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        batch_size = self.batch_size  # batch size used for the hidden-state shapes
        # Initial hidden and cell states: (num_layers, batch_size, hidden_size)
        h0 = torch.zeros(self.num_layers, batch_size, self.hidden_size).cuda()
        c0 = torch.zeros(self.num_layers, batch_size, self.hidden_size).cuda()
        x = x.unsqueeze(0)            # add a batch dimension: (1, seq_len, input_size)
        out, _ = self.lstm(x, (h0, c0))
        out = out.squeeze(0)          # drop the batch dimension: (seq_len, hidden_size)
        # Apply the linear layer to every timestep so the output has shape
        # (seq_len, output_size) = (3090, 1), matching the targets
        out = self.fc(out)
        return out
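With the shapes aligned, a minimal training-loop sketch might look like the following. The choice of MSELoss, the Adam optimizer, the learning rate, and the epoch count are illustrative assumptions, not part of the original post:

# Training-loop sketch; loss function, optimizer, lr, and epochs are assumed
model = LSTM(input_size, hidden_size, num_layers, output_size, batch_size).cuda()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(100):
    optimizer.zero_grad()
    outputs = model(inputs)             # shape: (3090, 1)
    loss = criterion(outputs, targets)  # shapes match, so no broadcasting warning
    loss.backward()
    optimizer.step()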
Now the target tensor's shape matches the model's output, and the code runs without the broadcasting warning.