Inference Unlimited

Guide: Running Phi-2 on a Machine with 32GB of RAM

Introduction

Phi-2 is a powerful language model, but it still requires sufficiently capable hardware to run. This guide shows how to install and run Phi-2 on a machine with 32GB of RAM, covering every key step from preparing the environment to running the model.

Prerequisites

Before starting the installation, make sure your system meets the following requirements:

- At least 32GB of RAM
- Python 3.8 or newer
- An NVIDIA GPU with CUDA support (this guide installs the CUDA 11.8 build of PyTorch); CPU-only operation is possible but slow
- Enough free disk space for the model weights and the offload folder

Setting Up the Environment

1. Install Python

Phi-2 requires Python 3.8 or newer. You can install it with your package manager:

sudo apt update
sudo apt install python3.8 python3.8-venv

2. Create a Virtual Environment

A virtual environment helps avoid conflicts with other installed packages:

python3.8 -m venv phi2_env
source phi2_env/bin/activate

3. Install Dependencies

Install the required packages:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install transformers accelerate bitsandbytes
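Before downloading anything, it can help to confirm that the interpreter meets the version requirement from step 1. A minimal stdlib-only check (the helper name is introduced here just for illustration):

```python
import sys

def python_version_ok(minimum=(3, 8)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum

if __name__ == "__main__":
    # Phi-2 tooling in this guide assumes Python 3.8 or newer.
    print("Python version OK:", python_version_ok())
```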

Downloading the Phi-2 Model

You can download the Phi-2 model with the Hugging Face Transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

Configuring Memory

On a machine with 32GB of RAM, memory optimizations such as 8-bit quantization are recommended:

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,
    offload_folder="offload",
    offload_state_dict=True,
)
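To see why 8-bit quantization matters on 32GB, here is a back-of-the-envelope estimate of weight memory alone, assuming a parameter count of roughly 2.7 billion as stated on the model card; activations, the KV cache, and framework overhead come on top of this:

```python
# Rough weight-memory estimate for Phi-2 (~2.7e9 parameters).
# These are back-of-the-envelope numbers, not measured usage.

PHI2_PARAMS = 2.7e9  # approximate parameter count from the model card

def weight_memory_gb(num_params, bytes_per_param):
    """Approximate GiB needed just to hold the model weights."""
    return num_params * bytes_per_param / (1024 ** 3)

for name, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{name}: ~{weight_memory_gb(PHI2_PARAMS, nbytes):.1f} GiB")
```

Halving the bytes per parameter halves the weight footprint, which is exactly what the 8-bit load above exploits.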

Running the Model

Now you can run the model and test it (the example below assumes a CUDA-capable GPU):

prompt = "What is the meaning of life?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
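The raw prompt above works, but the Phi-2 model card suggests an "Instruct:/Output:" template for QA-style prompting. A small helper (the function name is introduced here for illustration; the template itself is taken from the model card):

```python
def format_phi2_prompt(instruction):
    """Wrap a user instruction in the 'Instruct/Output' template that the
    Phi-2 model card suggests for QA-style prompting."""
    return f"Instruct: {instruction}\nOutput:"

prompt = format_phi2_prompt("What is the meaning of life?")
```

The formatted string can then be passed to the tokenizer exactly as in the snippet above.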

Optimization

1. Use DeepSpeed

DeepSpeed is a tool for optimizing memory usage and performance:

pip install deepspeed

2. Configure DeepSpeed

Create a ds_config.json file:

{
    "train_batch_size": "auto",
    "gradient_accumulation_steps": "auto",
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": 1e-8,
            "weight_decay": 0.01
        }
    },
    "fp16": {
        "enabled": "auto"
    },
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        }
    }
}
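If you prefer to keep the configuration in Python, an equivalent ds_config.json can be generated programmatically, which makes it easier to tweak values per machine:

```python
import json

# The same DeepSpeed configuration as above, written out from Python.
ds_config = {
    "train_batch_size": "auto",
    "gradient_accumulation_steps": "auto",
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": "auto", "betas": "auto", "eps": 1e-8, "weight_decay": 0.01},
    },
    "fp16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=4)
```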

3. Run with DeepSpeed

Note that the "auto" placeholders in the configuration above are resolved by the Hugging Face Trainer integration; when calling deepspeed.initialize directly you may need to substitute concrete values. The part that matters for memory here is the ZeRO stage-3 CPU offload.

from transformers import AutoModelForCausalLM, AutoTokenizer
import deepspeed

model_name = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)

model = AutoModelForCausalLM.from_pretrained(model_name)

import json

# Reuse the configuration file created in the previous step.
with open("ds_config.json") as f:
    ds_config = json.load(f)

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    config=ds_config
)

Summary

Running Phi-2 on a machine with 32GB of RAM requires preparing the environment properly and applying memory optimizations. In this guide we covered the key steps: installing Python, creating a virtual environment, downloading the model, and configuring memory. With these steps you should be able to run Phi-2 and take advantage of its capabilities.
