This error occurs because the input_ids tensor is on the CPU while the model is on the GPU. To resolve it, move the input_ids tensor to the same device as the model, which you can do with the to() method. Here is the corrected code:

import torch
from modelscope import snapshot_download, Model
from modelscope.models.nlp.llama2 import Llama2Tokenizer

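# Download the Llama-2-7b weights from ModelScope (skipping the safetensors
# files) and load the model and tokenizer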
model_dir = snapshot_download("modelscope/Llama-2-7b-ms", revision='v1.0.1', 
                              ignore_file_pattern = [r'\w+\.safetensors'])
model = Model.from_pretrained(model_dir, device_map='auto', torch_dtype=torch.float16)
tokenizer = Llama2Tokenizer.from_pretrained(model_dir)

prompt = "Hey, are you conscious? Can you talk to me?"
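# Tokenize the prompt; the returned tensors live on the CPU by default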
inputs = tokenizer(prompt, return_tensors="pt")

# Move the input_ids tensor to the same device as the model
input_ids = inputs.input_ids.to(model.device)

# Generate
generate_ids = model.generate(input_ids, max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])

This ensures that the input_ids tensor sits on the same device as the model, which resolves the device-mismatch error.
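If you want to move every tensor the tokenizer produces (attention_mask as well as input_ids), here is a minimal sketch of an alternative, assuming the tokenizer returns a Hugging Face BatchEncoding (which supports .to()) and reusing the model, tokenizer, and prompt objects defined above:

# Sketch: move the whole tokenizer output to the model's device in one call.
# BatchEncoding.to(device) relocates input_ids, attention_mask, and any other
# tensor fields at once.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

generate_ids = model.generate(inputs.input_ids, max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])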
