Updated: 2025-07-29 GMT+08:00

Training Launch Script Description and Parameter Configuration [Legacy]

This code package integrates training scripts for multiple models (llama2, llama3, Qwen, Qwen1.5, and so on), each of which can be launched with a single command through the corresponding model's training script. The training script checks whether data preprocessing and weight conversion have already been completed; if not, it performs both automatically before training starts.

If you want to customize dataset preprocessing and weight conversion, you can edit the specific python commands in 1_preprocess_data.sh and 2_convert_mg_hf.sh in a Notebook environment and run them there. The code sets many environment variables; they are explained in detail in the steps below.

If you want to train with custom parameters, edit the training script of the corresponding model directly. The editable parameters are described in detail below, using llama2-13b pretraining as an example:
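To illustrate, the editable variable block at the top of a training script might look like the following sketch. The variable names are the ones documented in Table 1; the paths and values are hypothetical placeholders, not the shipped defaults:

```shell
# Hypothetical excerpt of the editable variable block in an llama2-13b
# training script; adjust all paths and values to your own environment.
ORIGINAL_TRAIN_DATA_PATH=/home/ma-user/work/data/train.json        # raw input data (required)
ORIGINAL_HF_WEIGHT=/home/ma-user/work/models/llama-2-13b-chat-hf   # tokenizer + HF weights (required)
OUTPUT_SAVE_DIR=/home/ma-user/work/AscendFactory/saved_dir_for_output/
MODEL_NAME=llama2-13b
STAGE=pt              # pt = pretraining, sft = supervised fine-tuning
FINETUNING_TYPE=full  # full or lora
TP=8; PP=1; CP=1      # parallelism settings
MBS=4; GBS=512        # micro / global batch size
SEQ_LEN=4096
LR=2.5e-5; MIN_LR=2.5e-6
echo "training ${MODEL_NAME} stage=${STAGE} TP=${TP} PP=${PP}"
```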

Table 1 Model training script parameters

Parameter: ORIGINAL_TRAIN_DATA_PATH
Example: [pretraining: pt] relative or absolute path of the pretraining dataset; [fine-tuning: sft] relative or absolute path of the fine-tuning dataset
Description: [Required] Path of the raw input data used for training. Modify it to match your actual setup; choose one of the two examples depending on the training stage.

Parameter: USER_PROCESSED_DATA_DIR
Example: /home/ma-user/work/process_data
Description: [Optional] If preprocessed data already exists, set this to its directory; training loads it preferentially and skips the data preprocessing step. The parameter is absent by default; add and set it yourself if needed.

Parameter: ORIGINAL_HF_WEIGHT
Example: /home/ma-user/work/models/llama-2-13b-chat-hf
Description: [Required] Directory holding the tokenizer and Hugging Face weights to load. Modify it to match your actual setup.

Parameter: OUTPUT_SAVE_DIR
Example: /home/ma-user/work/AscendFactory/saved_dir_for_output/
Description: [Required] All generated CKPT, PLOG, and LOG files are saved under this path. Add and customize this variable if you need to change it.

Parameter: ASCEND_PROCESS_LOG_PATH
Example: /home/ma-user/work/AscendFactory/saved_dir_for_output/plog
Description: PLOG files recording program stack information during training are saved here; in the example they default to the "saved_dir_for_output/plog" folder. Add and customize this variable if you need to change it.

Parameter: SAVE_INTERVAL
Example: 10
Description: Number of training steps between checkpoint saves.

Parameter: SHELL_FOLDER
Example: $(dirname $(readlink -f "$0"))
Description: Directory of the script being executed, resolved at run time.

Parameter: MODEL_NAME
Example: llama2-13b
Description: Name of the corresponding model.

Parameter: STAGE
Example: pt
Description: Current training stage. Valid values: pt, sft.
  • pt: pretraining
  • sft: supervised fine-tuning

Parameter: FINETUNING_TYPE
Example: full
Description: Training strategy. Valid values: full, lora.
  • full: full-parameter fine-tuning
  • lora: LoRA fine-tuning

Parameter: DATA_TYPE
Example: one of [GeneralPretrainHandler, GeneralInstructionHandler, MOSSMultiTurnHandler, AlpacaStyleInstructionHandler, SharegptStyleInstructionHandler]
Description: Choose one value according to the dataset.
  • GeneralPretrainHandler: alpaca dataset for pretraining.
  • GeneralInstructionHandler: alpaca dataset for fine-tuning.
  • MOSSMultiTurnHandler: moss dataset for fine-tuning.
  • AlpacaStyleInstructionHandler: Alpaca-style dataset in the LLaMA-Factory template.
  • SharegptStyleInstructionHandler: ShareGPT-style dataset in the LLaMA-Factory template.

Parameter: MBS
Example: 4
Description: Number of samples one micro batch processes in pipeline parallelism. To reduce bubble time, the data of one step is split into multiple micro batches. The value is related to TP, PP, and model size; adjust it to your situation.

Parameter: GBS
Example: 512
Description: Number of samples processed per step across all machines. Affects the duration of each training iteration.

Parameter: TP
Example: 8
Description: Tensor parallel size. Corresponds to the training argument tensor-model-parallel-size.

Parameter: PP
Example: 1
Description: Pipeline parallel size. Usually equal to the number of training nodes, and must match the value set during weight conversion. Corresponds to the training argument pipeline-model-parallel-size.

Parameter: CP
Example: 1
Description: Context parallel size; defaults to 1. Used for models trained on long text sequences. If SEQ_LEN exceeds 32768, increasing CP (CP >= 2) is recommended. Corresponds to the training argument context-parallel-size. (Currently applicable only to long-sequence training of Llama3-series models.)

Parameter: LR
Example: 2.5e-5
Description: Learning rate.

Parameter: MIN_LR
Example: 2.5e-6
Description: Minimum learning rate.

Parameter: SEQ_LEN
Example: 4096
Description: Maximum sequence length to process.

Parameter: MAX_PE
Example: 8192
Description: Maximum sequence length the model can handle.

Parameter: SN
Example: 1200
Description: [Required] Total number of samples in the input dataset. Must be updated whenever the dataset changes.

Parameter: EPOCH
Example: 5
Description: Number of training epochs; one epoch is one full pass over all training samples. Modify as needed.

Parameter: TRAIN_ITERS
Example: SN / GBS * EPOCH
Description: Optional. Number of training step iterations; modify as needed.

Parameter: SEED
Example: 1234
Description: Random seed, kept consistent for every data sampling pass.

Parameter: SAVE_INTERVAL
Example: 1000
Description: Controls saving of intermediate model versions.
  • If the value >= TRAIN_ITERS, only the final version after TRAIN_ITERS training steps is saved.
  • If the value < TRAIN_ITERS, a model version is saved every SAVE_INTERVAL steps.
  Number of saved model versions = TRAIN_ITERS // SAVE_INTERVAL + 1

Parameter: SAVE_TOTAL_LIMIT
Example: 0
Description: Limits how many weight versions are kept.
  • If unset or <= 0, it has no effect.
  • The value must be <= TRAIN_ITERS // SAVE_INTERVAL + 1.
  • If the value > 1, the number of saved versions equals SAVE_TOTAL_LIMIT.
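Using the example values from Table 1, the TRAIN_ITERS and saved-version formulas can be sketched in shell arithmetic. Note that $(( )) truncates integer division, which is assumed here to be the intended semantics of SN / GBS * EPOCH:

```shell
# Sketch of the TRAIN_ITERS and checkpoint-count arithmetic from Table 1.
# bash $(( )) performs integer (truncating) division.
SN=1200             # total samples in the dataset
GBS=512             # global batch size
EPOCH=5
SAVE_INTERVAL=1000

TRAIN_ITERS=$(( SN / GBS * EPOCH ))                  # 1200/512 = 2, then 2*5 = 10
NUM_VERSIONS=$(( TRAIN_ITERS / SAVE_INTERVAL + 1 ))  # TRAIN_ITERS // SAVE_INTERVAL + 1

echo "TRAIN_ITERS=${TRAIN_ITERS} versions=${NUM_VERSIONS}"
```

With these values SAVE_INTERVAL exceeds TRAIN_ITERS, so only the final version is saved, matching the formula's result of 1.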

Parameter setting rules

  • TP (tensor parallel), PP (pipeline parallel), and CP (context parallel): the NPU count (world_size) must be divisible by TP × PP × CP.
  • num_attention_heads in the model configuration must be divisible by TP × CP.
  • MBS (micro-batch-size) and GBS (global-batch-size): GBS / MBS must be divisible by NPU count / (TP × PP × CP).
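These rules can be verified before launch. The following is a minimal sketch (the function name check_parallel_config is hypothetical, and NPUS is assumed to be the total NPU count, i.e. world_size, across all nodes):

```shell
# Minimal pre-launch sanity check for the parallelism rules above.
# NPUS is assumed to be the total NPU count (world size) across all nodes.
check_parallel_config() {
  local NPUS=$1 TP=$2 PP=$3 CP=$4 MBS=$5 GBS=$6 HEADS=$7
  local MODEL_PARALLEL=$(( TP * PP * CP ))
  local DP=$(( NPUS / MODEL_PARALLEL ))  # data-parallel size
  (( NPUS % MODEL_PARALLEL == 0 )) || { echo "NPU count not divisible by TP*PP*CP"; return 1; }
  (( HEADS % (TP * CP) == 0 ))     || { echo "num_attention_heads not divisible by TP*CP"; return 1; }
  (( (GBS / MBS) % DP == 0 ))      || { echo "GBS/MBS not divisible by NPUS/(TP*PP*CP)"; return 1; }
  echo "ok: DP=${DP}"
}

# llama2-13b example: 1 node * 8 NPUs, TP=8, PP=1, CP=1, MBS=4, GBS=512, 40 attention heads
check_parallel_config 8 8 1 1 4 512 40
```

With the llama2-13b settings from Table 2 and its 40 attention heads, the check passes with a data-parallel size of 1.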

Recommended parameters and NPU card counts per model

The recommended training parameters and compute specifications for each model are listed in Table 2. In the specification and node count column, 1*node & 4*Ascend denotes a single machine with 4 cards, and so on.

Table 2 Recommended parameters and NPU card counts per model

In the entries below, TP is the tensor model parallel size, PP is the pipeline model parallel size, and MBS is the micro batch size; the last field is the specification and node count.

1. llama2 / llama2-7b
  • full, SEQ_LEN=4096: TP=1, PP=4, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=4, MBS=2, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=2, PP=4, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=2, PP=4, MBS=2, 1*node & 8*Ascend

2. llama2 / llama2-13b
  • full, SEQ_LEN=4096: TP=8, PP=1, MBS=4, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=1, MBS=4, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=1, MBS=2, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=1, MBS=2, 1*node & 8*Ascend

3. llama2 / llama2-70b
  • full, SEQ_LEN=4096: TP=8, PP=4, MBS=1, 4*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=4, MBS=2, 4*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=8, MBS=1, 8*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=4, MBS=1, 4*node & 8*Ascend

4. llama3 / llama3-8b
  • full, SEQ_LEN=4096: TP=4, PP=1, MBS=2, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=4, PP=1, MBS=4, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=4, PP=1, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=4, PP=1, MBS=2, 1*node & 8*Ascend

5. llama3 / llama3-70b
  • full, SEQ_LEN=4096: TP=8, PP=4, MBS=1, 4*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=4, MBS=2, 4*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=8, MBS=1, 8*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=4, MBS=1, 4*node & 8*Ascend

6. Qwen / qwen-7b
  • full, SEQ_LEN=4096: TP=4, PP=1, MBS=2, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=4, PP=1, MBS=4, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=4, PP=1, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=4, PP=1, MBS=2, 1*node & 8*Ascend

7. Qwen / qwen-14b
  • full, SEQ_LEN=4096: TP=4, PP=2, MBS=2, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=4, PP=1, MBS=2, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=4, PP=2, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=4, PP=2, MBS=2, 1*node & 8*Ascend

8. Qwen / qwen-72b
  • full, SEQ_LEN=4096: TP=8, PP=4, MBS=1, 4*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=4, MBS=2, 4*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=8, MBS=1, 8*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=4, MBS=1, 4*node & 8*Ascend

9. Qwen1.5 / qwen1.5-7b
  • full, SEQ_LEN=4096: TP=1, PP=4, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=4, MBS=2, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=4, PP=1, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=1, PP=4, MBS=1, 1*node & 8*Ascend

10. Qwen1.5 / qwen1.5-14b
  • full, SEQ_LEN=4096: TP=8, PP=1, MBS=4, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=4, PP=1, MBS=4, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=1, MBS=2, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=1, MBS=2, 1*node & 8*Ascend

11. Qwen1.5 / qwen1.5-32b
  • full, SEQ_LEN=4096: TP=8, PP=2, MBS=2, 2*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=2, MBS=4, 2*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=2, MBS=1, 2*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=2, MBS=2, 2*node & 8*Ascend

12. Qwen1.5 / qwen1.5-72b
  • full, SEQ_LEN=4096: TP=8, PP=4, MBS=1, 4*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=4, MBS=2, 4*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=8, MBS=1, 8*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=4, MBS=1, 4*node & 8*Ascend

13. Yi / yi-6b
  • full, SEQ_LEN=4096: TP=1, PP=4, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=4, MBS=2, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=2, PP=2, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=1, PP=4, MBS=1, 1*node & 8*Ascend

14. Yi / yi-34b
  • full, SEQ_LEN=4096: TP=4, PP=4, MBS=1, 2*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=4, PP=4, MBS=2, 2*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=4, MBS=1, 4*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=4, MBS=2, 4*node & 8*Ascend

15. ChatGLMv3 / glm3-6b
  • full, SEQ_LEN=4096: TP=1, PP=2, MBS=1, 1*node & 4*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=2, MBS=2, 1*node & 4*Ascend
  • full, SEQ_LEN=8192: TP=1, PP=4, MBS=1, 1*node & 4*Ascend
  • lora, SEQ_LEN=8192: TP=1, PP=2, MBS=1, 1*node & 4*Ascend

16. Baichuan2 / baichuan2-7b
  • full, SEQ_LEN=4096: TP=1, PP=4, MBS=1, 1*node & 4*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=4, MBS=2, 1*node & 4*Ascend
  • full, SEQ_LEN=8192: TP=4, PP=1, MBS=1, 1*node & 4*Ascend
  • lora, SEQ_LEN=8192: TP=1, PP=4, MBS=1, 1*node & 4*Ascend

17. Baichuan2 / baichuan2-13b
  • full, SEQ_LEN=4096: TP=8, PP=1, MBS=2, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=1, MBS=4, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=2, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=1, MBS=1, 2*node & 8*Ascend

18. Qwen2 / qwen2-0.5b
  • full, SEQ_LEN=4096: TP=1, PP=1, MBS=2, 1*node & 4*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=1, MBS=2, 1*node & 4*Ascend
  • full, SEQ_LEN=8192: TP=1, PP=1, MBS=1, 1*node & 4*Ascend
  • lora, SEQ_LEN=8192: TP=1, PP=1, MBS=1, 1*node & 4*Ascend

19. Qwen2 / qwen2-1.5b
  • full, SEQ_LEN=4096: TP=1, PP=1, MBS=2, 1*node & 4*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=1, MBS=2, 1*node & 4*Ascend
  • full, SEQ_LEN=8192: TP=1, PP=1, MBS=1, 1*node & 4*Ascend
  • lora, SEQ_LEN=8192: TP=1, PP=1, MBS=1, 1*node & 4*Ascend

20. Qwen2 / qwen2-7b
  • full, SEQ_LEN=4096: TP=4, PP=1, MBS=2, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=4, PP=1, MBS=2, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=4, PP=2, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=4, PP=2, MBS=2, 1*node & 8*Ascend

21. Qwen2 / qwen2-72b
  • full, SEQ_LEN=4096: TP=8, PP=4, MBS=1, 4*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=4, MBS=2, 4*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=8, MBS=1, 8*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=8, MBS=1, 8*node & 8*Ascend

22. GLMv4 / glm4-9b
  • full, SEQ_LEN=4096: TP=1, PP=4, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=2, MBS=1, 1*node & 4*Ascend
  • full, SEQ_LEN=8192: TP=2, PP=2, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=2, PP=1, MBS=1, 1*node & 4*Ascend

23. mistral / mistral-7b
  • full, SEQ_LEN=4096: TP=1, PP=4, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=4, MBS=2, 1*node & 8*Ascend

24. mixtral / mixtral-8x7b
  • full, SEQ_LEN=4096: TP=2, PP=8, MBS=1, 2*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=2, PP=8, MBS=1, 2*node & 8*Ascend

25. llama3.1 / llama3.1-8b
  • full, SEQ_LEN=4096: TP=4, PP=1, MBS=2, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=4, PP=1, MBS=4, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=4, PP=1, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=4, PP=1, MBS=2, 1*node & 8*Ascend

26. llama3.1 / llama3.1-70b
  • full, SEQ_LEN=4096: TP=8, PP=4, MBS=1, 4*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=2, MBS=4, 2*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=8, MBS=1, 8*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=2, MBS=2, 2*node & 8*Ascend

27. Qwen2.5 / qwen2.5-0.5b
  • full, SEQ_LEN=4096: TP=1, PP=1, MBS=1, 1*node & 4*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=1, MBS=2, 1*node & 4*Ascend
  • full, SEQ_LEN=8192: TP=1, PP=1, MBS=1, 1*node & 4*Ascend
  • lora, SEQ_LEN=8192: TP=1, PP=1, MBS=1, 1*node & 4*Ascend

28. Qwen2.5 / qwen2.5-7b
  • full, SEQ_LEN=4096: TP=4, PP=1, MBS=2, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=4, PP=1, MBS=4, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=4, PP=2, MBS=1, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=4, PP=2, MBS=2, 1*node & 8*Ascend

29. Qwen2.5 / qwen2.5-14b
  • full, SEQ_LEN=4096: TP=8, PP=1, MBS=4, 1*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=4, PP=1, MBS=4, 1*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=1, MBS=2, 1*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=1, MBS=2, 1*node & 8*Ascend

30. Qwen2.5 / qwen2.5-32b
  • full, SEQ_LEN=4096: TP=8, PP=2, MBS=2, 2*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=2, MBS=4, 2*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=2, MBS=1, 2*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=2, MBS=2, 2*node & 8*Ascend

31. Qwen2.5 / qwen2.5-72b
  • full, SEQ_LEN=4096: TP=8, PP=4, MBS=1, 4*node & 8*Ascend
  • lora, SEQ_LEN=4096: TP=8, PP=4, MBS=4, 4*node & 8*Ascend
  • full, SEQ_LEN=8192: TP=8, PP=8, MBS=1, 8*node & 8*Ascend
  • lora, SEQ_LEN=8192: TP=8, PP=4, MBS=2, 4*node & 8*Ascend

32. llama3.2 / llama3.2-1b
  • full, SEQ_LEN=4096: TP=1, PP=1, MBS=2, 1*node & 4*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=1, MBS=2, 1*node & 4*Ascend
  • full, SEQ_LEN=8192: TP=1, PP=1, MBS=1, 1*node & 4*Ascend
  • lora, SEQ_LEN=8192: TP=1, PP=1, MBS=1, 1*node & 4*Ascend

33. llama3.2 / llama3.2-3b
  • full, SEQ_LEN=4096: TP=1, PP=2, MBS=2, 1*node & 4*Ascend
  • lora, SEQ_LEN=4096: TP=1, PP=1, MBS=2, 1*node & 4*Ascend
  • full, SEQ_LEN=8192: TP=1, PP=2, MBS=1, 1*node & 4*Ascend
  • lora, SEQ_LEN=8192: TP=1, PP=1, MBS=1, 1*node & 4*Ascend

Related Documents