Temperature Prediction with a ConvLSTM Model: Interpreting the Output and Computing the Loss

I am trying to use the previous 6 days of temperature data to predict the temperature for the following 5 days. I feed temperature data carrying spatio-temporal features into a ConvLSTM model and get a tuple `outputs` back. This tuple should be the returned `layer_output_list` and `last_state_list`. However, I found that `outputs` only contains `layer_output_list[-1]` and `last_state_list[-1]`, and the numbers in `outputs` do not look like predicted temperatures: they are all small decimals with magnitude below 1.

I now need to convert `outputs` into predicted values so that I can compare them with the label temperatures and compute the loss.

How do I convert `outputs` into predicted values?

Take the output at the last time step as the prediction, i.e. `outputs[:, -1, :, :, :]`. Note that with `return_all_layers=False` the model returns the tuple `(layer_output, last_state)`, so unpack it before indexing.

Code example:

```python
# Training loop
for epoch in range(10):
    for i, (inputs, labels) in enumerate(train_loader):
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        # The model returns a (layer_output, last_state) tuple; unpack it first
        layer_output, last_state = model(inputs)
        # Take the output of the last time step as the prediction.
        # NOTE: preds still has 64 hidden channels and must be projected to a
        # single temperature channel (see the sketch below) before it can be
        # compared with the labels.
        preds = layer_output[:, -1, :, :, :]
        loss = criterion(preds, labels)
        loss.backward()
        optimizer.step()
        print(f'Epoch {epoch+1}, Batch {i+1}, Loss: {loss.item()}')
```

Why does my `outputs` consist of a bunch of small decimals?

The output of the ConvLSTM is a tensor holding the hidden state at every time step. Each hidden state is the product of a sigmoid output gate and a tanh of the cell state, so its values lie roughly between -1 and 1. The numbers in `outputs` are therefore hidden features, not temperatures; an extra layer is needed to map them to temperature values.

Which quantity in `outputs` is the most suitable prediction for the loss function?

The output at the last time step, because by then the model has seen the entire input sequence, so its state summarizes the information needed to predict future time points.
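Since the hidden state has 64 channels with values in roughly [-1, 1], it cannot be compared with a single-channel temperature label directly. Below is a minimal sketch of a projection head, assuming `hidden_dim=64` as in the model code further down: a 1x1 convolution maps the 64 hidden channels at a time step to one temperature channel. This mirrors what the `MapLayer` in the model code appears intended to do; the class name `TemperatureHead` and the usage lines are illustrative assumptions, not part of the original post.

```python
import torch
import torch.nn as nn

class TemperatureHead(nn.Module):
    """Illustrative 1x1-conv head: maps 64 hidden channels to 1 temperature channel."""
    def __init__(self, hidden_channels=64):
        super().__init__()
        self.conv = nn.Conv2d(hidden_channels, 1, kernel_size=1)

    def forward(self, h):       # h: (batch, 64, height, width)
        return self.conv(h)     # -> (batch, 1, height, width)

# Usage sketch:
# layer_output, last_state = model(inputs)   # layer_output: (b, t, 64, h, w)
# head = TemperatureHead()
# preds = head(layer_output[:, -1])          # (b, 1, h, w)
```

Because the head is trained jointly with the ConvLSTM, its output learns the temperature scale from the labels; no manual rescaling of the small decimals is required.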
ConvLSTM model code:

```python
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

# ConvLSTMCell is assumed to be defined elsewhere: the standard cell with an
# init_hidden(batch_size, image_size) method returning the initial (h, c) pair.

class ConvLSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, kernel_size, num_layers,
                 batch_first=False, bias=True, return_all_layers=False):
        super(ConvLSTM, self).__init__()
        self._check_kernel_size_consistency(kernel_size)
        # Make sure that both `kernel_size` and `hidden_dim` are lists having len == num_layers
        kernel_size = self._extend_for_multilayer(kernel_size, num_layers)
        hidden_dim = self._extend_for_multilayer(hidden_dim, num_layers)
        if not len(kernel_size) == len(hidden_dim) == num_layers:
            raise ValueError('Inconsistent list length.')
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.kernel_size = kernel_size
        self.num_layers = num_layers
        self.batch_first = batch_first
        self.bias = bias
        self.return_all_layers = return_all_layers
        cell_list = []
        for i in range(0, self.num_layers):
            cur_input_dim = self.input_dim if i == 0 else self.hidden_dim[i - 1]
            cell_list.append(ConvLSTMCell(input_dim=cur_input_dim,
                                          hidden_dim=self.hidden_dim[i],
                                          kernel_size=self.kernel_size[i],
                                          bias=self.bias))
        self.cell_list = nn.ModuleList(cell_list)

    def forward(self, input_tensor, hidden_state=None):
        if not self.batch_first:
            # (t, b, c, h, w) -> (b, t, c, h, w)
            input_tensor = input_tensor.permute(1, 0, 2, 3, 4)
        b, t, _, h, w = input_tensor.size()
        # Stateful ConvLSTM is not implemented yet
        if hidden_state is not None:
            raise NotImplementedError()
        else:
            # Since the init is done in forward, we can pass the image size here
            hidden_state = self._init_hidden(batch_size=b, image_size=(h, w))
        layer_output_list = []
        last_state_list = []
        seq_len = input_tensor.size(1)
        cur_layer_input = input_tensor
        for layer_idx in range(self.num_layers):
            h, c = hidden_state[layer_idx]
            output_inner = []
            for t in range(seq_len):
                h, c = self.cell_list[layer_idx](input_tensor=cur_layer_input[:, t, :, :, :],
                                                 cur_state=[h, c])
                output_inner.append(h)
            layer_output = torch.stack(output_inner, dim=1)
            cur_layer_input = layer_output
            layer_output_list.append(layer_output)
            last_state_list.append([h, c])
        if not self.return_all_layers:
            return layer_output_list[-1], last_state_list[-1]
        else:
            return layer_output_list, last_state_list

    def _init_hidden(self, batch_size, image_size):
        init_states = []
        for i in range(self.num_layers):
            init_states.append(self.cell_list[i].init_hidden(batch_size, image_size))
        return init_states

    @staticmethod
    def _check_kernel_size_consistency(kernel_size):
        if not (isinstance(kernel_size, tuple) or
                (isinstance(kernel_size, list) and all([isinstance(elem, tuple) for elem in kernel_size]))):
            raise ValueError('`kernel_size` must be tuple or list of tuples')

    @staticmethod
    def _extend_for_multilayer(param, num_layers):
        if not isinstance(param, list):
            param = [param] * num_layers
        return param

class MapLayer(nn.Module):
    """Maps the 64 hidden channels of the ConvLSTM to a single temperature channel."""
    def __init__(self):
        super(MapLayer, self).__init__()
        self.conv = nn.Conv2d(in_channels=64, out_channels=1, kernel_size=(1, 1))

    def forward(self, x):
        return self.conv(x)

# Instantiate the model. The data tensor built below is (batch, time, channel, H, W),
# so batch_first=True is required; without it forward() would swap batch and time.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = ConvLSTM(input_dim=1, hidden_dim=[64, 64], kernel_size=[(1, 55), (1, 55)],
                 num_layers=2, batch_first=True).to(device)
map_layer = MapLayer().to(device)

# Loss and optimizer. Temperature prediction is a regression task, so use mean
# squared error; CrossEntropyLoss is meant for classification and would not work
# with continuous float labels.
criterion = nn.MSELoss()
optimizer = optim.SGD(list(model.parameters()) + list(map_layer.parameters()),
                      lr=0.001, momentum=0.9)

# Read the Excel file and sort the rows by date, longitude and depth
df = pd.read_excel(r'C:/Users/19738/Desktop/数据集/01.xlsx')
df = df.sort_values(by=['日期', '经度', '深度'])

# Extract the date, longitude and depth columns
time = df.iloc[:, 0]       # date column
longitude = df.iloc[:, 1]  # longitude column
depth = df.iloc[:, 2]      # depth column

# Assume the data tensor has shape (num_samples, num_time_steps, 1, num_lon, num_dep)
num_samples = 100
num_lon = 2
num_dep = 55
num_time_steps = 11

# Build the data tensor, initialised to 0. Four nested loops walk over every
# position; at each one, the current date, longitude and depth (converted to
# strings) are used to look up the matching row in the DataFrame, and the
# temperature found there is written into the tensor.
# NOTE: the lookup does not depend on i, so as written every sample receives
# identical values.
temp_data = np.zeros((num_samples, num_time_steps, 1, num_lon, num_dep),
                     dtype=np.float32)  # float32 to match the model parameters
for i in range(num_samples):
    for j in range(num_lon):
        for k in range(num_dep):
            for t in range(num_time_steps):
                # Compute the DataFrame index from date, longitude and depth
                date_str = time[t].strftime('%Y/%m/%d')
                lon_str = str(longitude[j])
                dep_str = str(depth[k])
                index = (df['日期'] == date_str) & (df['经度'] == lon_str) & (df['深度'] == dep_str)
                # Fetch the temperature value and write it into the tensor
                temp_value = df.loc[index, '温度'].values[0]
                temp_data[i, t, 0, j, k] = temp_value
temp_data_tensor = torch.from_numpy(temp_data)

# Training data: first 6 days; labels: remaining 5 days
train_data = temp_data_tensor[:, :6, :, :, :]
train_label = temp_data_tensor[:, 6:, :, :, :]
print(train_data.shape)   # (100, 6, 1, 2, 55)
print(train_label.shape)  # (100, 5, 1, 2, 55)
train_dataset = TensorDataset(train_data, train_label)
train_loader = DataLoader(train_dataset, batch_size=100, shuffle=True)
```
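As an aside, the quadruple loop above runs a full DataFrame scan for every cell, which becomes slow on larger grids. Below is a hedged, vectorized sketch using a pandas pivot table. It assumes the same 日期/经度/深度/温度 columns and that each (date, longitude, depth) combination occurs exactly once in the file; the function name is hypothetical.

```python
import numpy as np

def build_tensor_from_df(df, num_time_steps=11, num_lon=2, num_dep=55):
    """Hypothetical vectorized alternative to the nested loops above."""
    # One row per date, one column per (longitude, depth) pair
    table = df.pivot_table(index='日期', columns=['经度', '深度'], values='温度')
    table = table.sort_index()  # chronological rows; columns sort by (lon, dep)
    # (time, lon * dep) -> (time, 1, lon, dep)
    return table.to_numpy(dtype=np.float32).reshape(num_time_steps, 1, num_lon, num_dep)
```

Since the original loop fills identical values for every sample index i, broadcasting this single (time, 1, lon, dep) array across the 100 samples would reproduce its result without the repeated scans.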
```python
# Training loop
for epoch in range(10):
    for i, (inputs, labels) in enumerate(train_loader):
        # enumerate() yields the batch index i alongside each (inputs, labels)
        # pair from train_loader, which makes it easy to log per-batch metrics.
        inputs, labels = inputs.to(device), labels.to(device)  # move the batch to the device
        optimizer.zero_grad()                      # clear accumulated gradients
        layer_output, last_state = model(inputs)   # forward pass; the model returns a tuple
        # Project the 64 hidden channels to 1 temperature channel at every time step
        b, t, c, h, w = layer_output.shape
        preds = map_layer(layer_output.reshape(b * t, c, h, w)).reshape(b, t, 1, h, w)
        # NOTE: the model emits 6 steps (one per input day) while the labels hold
        # 5 future days; comparing the last 5 steps against the labels is one
        # simple alignment choice, not the only possible one.
        loss = criterion(preds[:, -5:], labels)    # compute the loss
        loss.backward()                            # backpropagate
        optimizer.step()                           # update the parameters
        print(f'Epoch {epoch+1}, Batch {i+1}, Loss: {loss.item()}')
```
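At inference time, one plausible way to actually produce a 5-day forecast is to roll the trained model forward autoregressively, feeding each predicted day back in as the newest input. This is a sketch of an assumed inference scheme rather than anything from the original post; `model`, `map_layer`, `train_data` and `device` are the objects defined above.

```python
import torch

@torch.no_grad()
def forecast_5_days(model, map_layer, history):
    """history: (1, 6, 1, num_lon, num_dep) tensor holding the last 6 observed days."""
    model.eval()
    window = history.clone()
    forecast = []
    for _ in range(5):
        layer_output, _ = model(window)
        next_day = map_layer(layer_output[:, -1])  # (1, 1, lon, dep)
        forecast.append(next_day)
        # Slide the window: drop the oldest day, append the new prediction
        window = torch.cat([window[:, 1:], next_day.unsqueeze(1)], dim=1)
    return torch.stack(forecast, dim=1)            # (1, 5, 1, lon, dep)

# Example call on the first sample:
# preds = forecast_5_days(model, map_layer, train_data[:1].to(device))
```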


