Q: A txt file contains a dataset in the following format: the first 23 fields of each sample are feature values and the last field is the class label. I want to use the ALBERT model to perform eight-class classification on this dataset, with training, validation, and test sets. The generated Python code raises: RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding). How do I fix it?

A: This error occurs because the data passed to the ALBERT model has the wrong dtype. As the message says, the model's embedding layer expects argument #1 ('indices') to be a Long or Int tensor of token indices, but it received a torch.cuda.FloatTensor.

To fix it, convert the input tensor to Long (or Int) before it reaches the embedding layer: create it with dtype=torch.long, or call .long()/.int() on it (equivalently, wrap it with torch.LongTensor() or torch.IntTensor()). An embedding layer is a lookup table, so its inputs must be integer indices.
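A minimal sketch of the underlying issue: an nn.Embedding layer indexes into its weight table, so it only accepts integer tensors. The sizes below (1000 rows, 8 dimensions) are arbitrary, chosen for illustration only.

```python
import torch

# Features parsed as floats, as the original code does
feats = torch.tensor([[3.0, 17.0, 256.0]])

# Cast to Long so they can be used as embedding indices
ids = feats.long()

# A toy embedding table: 1000 possible indices, 8-dim vectors
emb = torch.nn.Embedding(num_embeddings=1000, embedding_dim=8)

# Works because the indices are Long; emb(feats) would raise the RuntimeError
out = emb(ids)
```

Passing `feats` (a float tensor) to `emb` reproduces the error from the question; casting with `.long()` resolves it.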

Here is the corrected code:

import torch
from transformers import AlbertForSequenceClassification, AlbertTokenizer
from torch.utils.data import DataLoader, Dataset

# Dataset class: each line is 23 comma-separated features followed by a label
class MyDataset(Dataset):
    def __init__(self, file_path):
        self.data = []
        self.labels = []
        with open(file_path, 'r', encoding='utf-8') as f:
            lines = f.readlines()
            for line in lines:
                line = line.strip().split(',')
                # embedding indices must be integers, so parse features as ints
                features = [int(float(x)) for x in line[:-1]]
                label = int(line[-1])
                self.data.append(features)
                self.labels.append(label)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        # dtype=torch.long fixes the RuntimeError: embedding indices must be Long/Int
        return torch.tensor(self.data[index], dtype=torch.long), torch.tensor(self.labels[index], dtype=torch.long)

# Load the model and tokenizer (num_labels=8 for eight-class classification)
model = AlbertForSequenceClassification.from_pretrained('albert-base-v2', num_labels=8)
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')

# Load the datasets
train_dataset = MyDataset('train.txt')
valid_dataset = MyDataset('valid.txt')
test_dataset = MyDataset('test.txt')

# Create the data loaders
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)
valid_loader = DataLoader(valid_dataset, batch_size=16, shuffle=False)
test_loader = DataLoader(test_dataset, batch_size=16, shuffle=False)

# Training function
def train(model, train_loader, valid_loader):
    # Select the device
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model.to(device)

    # Optimizer and loss function
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    criterion = torch.nn.CrossEntropyLoss()

    # Training loop
    for epoch in range(10):
        model.train()
        for batch_data, batch_labels in train_loader:
            batch_data = batch_data.to(device)
            batch_labels = batch_labels.to(device)

            optimizer.zero_grad()
            outputs = model(batch_data)
            loss = criterion(outputs.logits, batch_labels)
            loss.backward()
            optimizer.step()

        # Validation loop
        model.eval()
        valid_loss = 0
        valid_acc = 0
        with torch.no_grad():
            for batch_data, batch_labels in valid_loader:
                batch_data = batch_data.to(device)
                batch_labels = batch_labels.to(device)

                outputs = model(batch_data)
                loss = criterion(outputs.logits, batch_labels)
                valid_loss += loss.item()
                _, predicted = torch.max(outputs.logits, 1)
                valid_acc += (predicted == batch_labels).sum().item()

        print(f'Epoch {epoch+1}:')
        print(f'  Train Loss (last batch): {loss.item():.4f}')
        print(f'  Valid Loss: {valid_loss/len(valid_loader):.4f}')
        print(f'  Valid Acc: {valid_acc/len(valid_dataset)*100:.2f}%')

# Train the model
train(model, train_loader, valid_loader)
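The code above builds test_loader but never uses it. A minimal evaluation sketch in the same style (it assumes the model returns an object with a .logits attribute, as Hugging Face models do):

```python
import torch

def evaluate(model, loader, device):
    """Compute classification accuracy of a sequence-classification model
    over a DataLoader, without tracking gradients."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for batch_data, batch_labels in loader:
            batch_data = batch_data.to(device)
            batch_labels = batch_labels.to(device)
            outputs = model(batch_data)
            # Predicted class = index of the largest logit
            predicted = outputs.logits.argmax(dim=1)
            correct += (predicted == batch_labels).sum().item()
            total += batch_labels.size(0)
    return correct / total
```

After training, it would be called as `test_acc = evaluate(model, test_loader, device)`.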

Adjust the dataset file paths, model name, and training parameters (batch size, learning rate, number of epochs) to match your setup. Note that this code feeds the 23 integer features to ALBERT directly as input_ids, so every feature value must be a non-negative integer smaller than the model's vocabulary size (30000 for albert-base-v2); the loaded tokenizer is not actually used.
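If the data starts as a single txt file rather than pre-split train.txt/valid.txt/test.txt files, the splits can be produced with a small helper. This is a sketch; the 8:1:1 ratio and the fixed seed are assumptions, not requirements:

```python
import random

def split_dataset(lines, seed=42, train_frac=0.8, valid_frac=0.1):
    """Shuffle samples deterministically and split them into
    train/valid/test lists (test gets the remainder)."""
    lines = list(lines)
    random.Random(seed).shuffle(lines)
    n = len(lines)
    n_train = int(n * train_frac)
    n_valid = int(n * valid_frac)
    return (lines[:n_train],
            lines[n_train:n_train + n_valid],
            lines[n_train + n_valid:])
```

Each returned list can then be written to its own file for the MyDataset class above to read.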

Eight-Class Classification with ALBERT - Dataset Handling and Error Fix

Original article: https://www.cveoy.top/t/topic/qDKM - Copyright belongs to the author. Do not reproduce or scrape!
