Unverified commit 72e21235 authored by Meiyim, committed by GitHub

Merge pull request #354 from Meiyim/dev

update to ERNIE2.3
*.pyc
*.un~
*.swp
*.egg-info/
export FLAGS_enable_parallel_graph=1
export FLAGS_sync_nccl_allreduce=1
BERT_BASE_PATH="chinese_L-12_H-768_A-12"
TASK_NAME='xnli'
DATA_PATH=data/xnli/XNLI-MT-1.0
CKPT_PATH=pretrain_model
train() {
python -u run_classifier.py --task_name ${TASK_NAME} \
--use_cuda true \
--do_train true \
--do_val false \
--do_test false \
--batch_size 8192 \
--in_tokens true \
--init_checkpoint pretrain_model/chinese_L-12_H-768_A-12/ \
--data_dir ${DATA_PATH} \
--vocab_path pretrain_model/chinese_L-12_H-768_A-12/vocab.txt \
--checkpoints ${CKPT_PATH} \
--save_steps 1000 \
--weight_decay 0.01 \
--warmup_proportion 0.0 \
--validation_steps 25 \
--epoch 1 \
--max_seq_len 512 \
--bert_config_path pretrain_model/chinese_L-12_H-768_A-12/bert_config.json \
--learning_rate 1e-4 \
--skip_steps 10 \
--random_seed 100 \
--enable_ce \
--shuffle false
}
export CUDA_VISIBLE_DEVICES=0
train | python _ce.py
export CUDA_VISIBLE_DEVICES=0,1,2,3
train | python _ce.py
Hi!
This directory has been deprecated.
Please visit the project at [models/PaddleNLP/language_representations_kit/BERT](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/language_representations_kit/BERT).
train() {
python train.py \
--train_path='data/train/sentence_file_*' \
--test_path='data/dev/sentence_file_*' \
--vocab_path data/vocabulary_min5k.txt \
--learning_rate 0.2 \
--use_gpu True \
--all_train_tokens 35479 \
--max_epoch 10 \
--log_interval 5 \
--dev_interval 20 \
--local True $@ \
--enable_ce \
--shuffle false \
--random_seed 100
}
export CUDA_VISIBLE_DEVICES=0
train | python _ce.py
export CUDA_VISIBLE_DEVICES=0,1,2,3
train | python _ce.py
Hi!
This directory has been deprecated.
Please visit the project at [models/PaddleNLP/language_representations_kit/ELMo](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/language_representations_kit/ELMo).
<div align="center">
<h1>
<font color="red">
The ERNIE project has moved <a href="../README.zh.md">here</a>
</font>
</h1>
</div>
## ERNIE: **E**nhanced **R**epresentation through k**N**owledge **I**nt**E**gration
**2019-04-10 update**: updated ERNIE_stable-1.0.1.tar.gz; the model parameters, the config ernie_config.json, and vocab.txt are now packaged and released together
**2019-03-18 update**: updated ERNIE_stable.tgz
**ERNIE** learns real-world semantic knowledge by modeling the words, entities, and entity relations in massive corpora. Whereas **BERT** learns from raw language signals, **ERNIE** directly models prior semantic knowledge units, which strengthens the model's semantic representation ability.
Here is an example:
```Learnt by BERT :哈 [mask] 滨是 [mask] 龙江的省会,[mask] 际冰 [mask] 文化名城。```
```Learnt by ERNIE:[mask] [mask] [mask] 是黑龙江的省会,国际 [mask] [mask] 文化名城。```
With **BERT**, the character 『尔』 can be inferred from the local co-occurrence of 『哈』 and 『滨』 alone; the model learns nothing about the entity 『哈尔滨』 (Harbin). **ERNIE**, by learning representations of words and entities, can model the relation between 『哈尔滨』 (Harbin) and 『黑龙江』 (Heilongjiang), learning both that 『哈尔滨』 is the capital of 『黑龙江』 and that 『哈尔滨』 is a city of ice and snow.
As for training data, besides encyclopedia and news corpora, **ERNIE** also brings in forum dialogue data: it uses a **DLM** (Dialogue Language Model) to model the Query-Response dialogue structure, takes dialogue pairs as input, introduces Dialogue Embeddings to identify the dialogue roles, and uses a Dialogue Response Loss to learn the implicit relations within a dialogue, further improving the model's semantic representations.
We validated the models on 5 public Chinese datasets covering natural language inference, semantic similarity, named entity recognition, sentiment analysis, and question-answer matching; **ERNIE** outperforms **BERT** on all of them.
<table>
<tbody>
<tr>
<th><strong>Dataset</strong>
<br></th>
<th colspan="2"><strong>XNLI</strong></th>
<th colspan="2"><strong>LCQMC</strong></th>
<th colspan="2"><strong>MSRA-NER(SIGHAN 2006)</strong></th>
<th colspan="2"><strong>ChnSentiCorp</strong></th>
<th colspan="4"><strong>nlpcc-dbqa</strong></th></tr>
<tr>
<td rowspan="2">
<p>
<strong>评估</strong></p>
<p>
<strong>指标</strong>
<br></p>
</td>
<td colspan="2">
<strong>acc</strong>
<br></td>
<td colspan="2">
<strong>acc</strong>
<br></td>
<td colspan="2">
<strong>f1-score</strong>
<br></td>
<td colspan="2">
<strong>acc</strong>
<strong></strong>
<br></td>
<td colspan="2">
<strong>mrr</strong>
<br></td>
<td colspan="2">
<strong>f1-score</strong>
<br></td>
</tr>
<tr>
<th colspan="1" width="">
<strong>dev</strong>
<br></th>
<td colspan="1" width="">
<strong>test</strong>
<br></td>
<td colspan="1" width="">
<strong>dev</strong>
<br></td>
<td colspan="1" width="">
<strong>test</strong>
<br></td>
<td colspan="1" width="">
<strong>dev</strong>
<br></td>
<td colspan="1" width="">
<strong>test</strong>
<br></td>
<td colspan="1" width="">
<strong>dev</strong>
<br></td>
<td colspan="1" width="">
<strong>test</strong>
<br></td>
<td colspan="1" width="">
<strong>dev</strong>
<br></td>
<td colspan="1" width="">
<strong>test</strong>
<br></td>
<td colspan="1" width="">
<strong>dev</strong>
<br></td>
<td colspan="1" width="">
<strong>test</strong>
<br></td>
</tr>
<tr>
<td>
<strong>BERT
<br></strong></td>
<td>78.1</td>
<td>77.2</td>
<td>88.8</td>
<td>87.0</td>
<td>94.0
<br></td>
<td>
<span>92.6</span></td>
<td>94.6</td>
<td>94.3</td>
<td colspan="1">94.7</td>
<td colspan="1">94.6</td>
<td colspan="1">80.7</td>
<td colspan="1">80.8</td></tr>
<tr>
<td>
<strong>ERNIE
<br></strong></td>
<td>79.9 <span>(<strong>+1.8</strong>)</span></td>
<td>78.4 <span>(<strong>+1.2</strong>)</span></td>
<td>89.7 <span>(<strong>+0.9</strong>)</span></td>
<td>87.4 <span>(<strong>+0.4</strong>)</span></td>
<td>95.0 <span>(<strong>+1.0</strong>)</span></td>
<td>93.8 <span>(<strong>+1.2</strong>)</span></td>
<td>95.2 <span>(<strong>+0.6</strong>)</span></td>
<td>95.4 <span>(<strong>+1.1</strong>)</span></td>
<td colspan="1">95.0 <span>(<strong>+0.3</strong>)</span></td>
<td colspan="1">95.1 <span>(<strong>+0.5</strong>)</span></td>
<td colspan="1">82.3 <span>(<strong>+1.6</strong>)</span></td>
<td colspan="1">82.7 <span>(<strong>+1.9</strong>)</span></td></tr>
</tbody>
</table>
- **Natural language inference**: XNLI
```text
XNLI was built jointly by researchers from Facebook and New York University to evaluate cross-lingual sentence understanding. The goal is to judge the relation between two sentences (contradiction, neutral, entailment). [link: https://github.com/facebookresearch/XNLI]
```
- **Semantic similarity**: LCQMC
```text
LCQMC is a question-matching dataset built by Harbin Institute of Technology and presented at COLING 2018, a top NLP conference. The goal is to judge whether two questions have the same meaning. [link: http://aclweb.org/anthology/C18-1166]
```
- **Named entity recognition**: MSRA-NER(SIGHAN 2006)
```text
The MSRA-NER(SIGHAN 2006) dataset was released by Microsoft Research Asia. The goal is named entity recognition: identifying entities with specific meaning in text, mainly person, location, and organization names.
```
- **Sentiment analysis**: ChnSentiCorp
```text
ChnSentiCorp is a Chinese sentiment-analysis dataset. The goal is to judge the sentiment of a passage.
```
- **Retrieval-based question answering**: nlpcc-dbqa
```text
nlpcc-dbqa is a shared task held in 2016 by NLPCC, the International Conference on Natural Language Processing and Chinese Computing. The goal is to select the answers that can answer a given question. [link: http://tcci.ccf.org.cn/conference/2016/dldoc/evagline2.pdf]
```
### Models & Data
1) Pretrained model download
| Model | Description |
| :------| :------ |
| [model](https://ernie.bj.bcebos.com/ERNIE_stable.tgz) | contains the pretrained model parameters |
| [model (with config file and vocabulary)](https://baidu-nlp.bj.bcebos.com/ERNIE_stable-1.0.1.tar.gz) | contains the pretrained model parameters, the vocabulary vocab.txt, and the model config ernie_config.json |
2) [Task data download](https://ernie.bj.bcebos.com/task_data_zh.tgz)
### Installation
This project depends on Paddle Fluid 1.3.1; please follow the [installation guide](http://www.paddlepaddle.org/#quick-start) to install it.
**Note**: pretraining and finetuning were tested on P40 GPUs with 22G of memory; with less than 22G, some tasks may fail with out-of-memory errors.
### Pretraining
#### Data Preprocessing
We build sentence-pair data with contextual relations from encyclopedia, news, and forum-dialogue corpora, segment the pairs at character, word, and entity granularity with Baidu's internal lexical-analysis tool, and tokenize the segmented data with the CharTokenizer in [`tokenization.py`](tokenization.py), yielding plain-text token sequences plus segmentation boundaries; the text is then mapped to ids using the vocabulary [`config/vocab.txt`](config/vocab.txt). During training, consecutive tokens are randomly masked according to the segmentation boundaries.
We provide a portion of the id-mapped training data, [`data/demo_train_set.gz`](./data/demo_train_set.gz), and validation data, [`data/demo_valid_set.gz`](./data/demo_valid_set.gz); each line is one training sample, for example:
```
1 1048 492 1333 1361 1051 326 2508 5 1803 1827 98 164 133 2777 2696 983 121 4 19 9 634 551 844 85 14 2476 1895 33 13 983 121 23 7 1093 24 46 660 12043 2 1263 6 328 33 121 126 398 276 315 5 63 44 35 25 12043 2;0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1;0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55;-1 0 0 0 0 1 0 1 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 -1 0 0 0 1 0 0 1 0 1 0 0 1 0 1 0 -1;0
```
Each sample consists of 5 fields separated by '`;`', in the format `token_ids; sentence_type_ids; position_ids; seg_labels; next_sentence_label`, where `seg_labels` encodes segmentation boundaries: 0 marks the first token of a word, 1 a non-initial token, and -1 is a placeholder whose corresponding token is `[CLS]` or `[SEP]`.
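For illustration, such a line can be unpacked with a few lines of Python (a minimal sketch; `parse_pretrain_sample` is a hypothetical helper, not a utility shipped in this repo):
```python
import numpy as np

def parse_pretrain_sample(line):
    # the 5 fields are separated by ';'
    token_ids, sent_type_ids, pos_ids, seg_labels, next_sent_label = line.strip().split(';')
    return {
        'token_ids': np.array(token_ids.split(), dtype=np.int64),
        'sentence_type_ids': np.array(sent_type_ids.split(), dtype=np.int64),
        'position_ids': np.array(pos_ids.split(), dtype=np.int64),
        # 0 = word-initial, 1 = word-internal, -1 = placeholder for [CLS]/[SEP]
        'seg_labels': np.array(seg_labels.split(), dtype=np.int64),
        'next_sentence_label': int(next_sent_label),
    }
```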
#### Start Training
The launch script for the pretraining task is [`script/pretrain.sh`](./script/pretrain.sh).
Before pretraining, add the CUDA, cuDNN, and NCCL2 dynamic library paths to the LD_LIBRARY_PATH environment variable; then run `bash script/pretrain.sh` to start pretraining on the demo data with the default configuration.
During pretraining, the current learning rate, the number of epochs of data consumed, the total step count, the training loss, and the training speed are printed; per the --validation_steps ${N} setting, metrics on the validation set are printed every N steps:
```
current learning_rate:0.000001
epoch: 1, progress: 1/1, step: 30, loss: 10.540648, ppl: 19106.925781, next_sent_acc: 0.625000, speed: 0.849662 steps/s, file: ./data/demo_train_set.gz, mask_type: mask_word
feed_queue size 70
current learning_rate:0.000001
epoch: 1, progress: 1/1, step: 40, loss: 10.529287, ppl: 18056.654297, next_sent_acc: 0.531250, speed: 0.849549 steps/s, file: ./data/demo_train_set.gz, mask_type: mask_word
feed_queue size 70
current learning_rate:0.000001
epoch: 1, progress: 1/1, step: 50, loss: 10.360563, ppl: 16398.287109, next_sent_acc: 0.625000, speed: 0.843776 steps/s, file: ./data/demo_train_set.gz, mask_type: mask_word
```
To train on your own real data, adjust the parameters accordingly in the [`script/pretrain.sh`](./script/pretrain.sh) script.
### Fine-tuning Tasks
After ERNIE pretraining completes, the pretrained parameters can be fine-tuned on specific NLP tasks. Below we show how to fine-tune classification and sequence-labeling tasks on top of the pretrained ERNIE model; to run them, first download the corresponding pretrained model via the links in the [Models & Data](#模型-数据) section.
Extract the downloaded model into `${MODEL_PATH}`, which then contains the parameter directory `params`.
Extract the downloaded task data into `${TASK_DATA_PATH}`, which then contains the training and test data of the 5 tasks `LCQMC`, `XNLI`, `MSRA-NER`, `ChnSentiCorp`, and `nlpcc-dbqa`.
#### Single-Sentence and Sentence-Pair Classification Tasks
1) **Single-sentence classification**:
The `ChnSentiCorp` sentiment classification dataset serves as the single-sentence example. The data is a tsv file with 2 fields, `text_a label`, for example:
```
label text_a
0 当当网名不符实,订货多日不见送货,询问客服只会推托,只会要求用户再下订单。如此服务留不住顾客的。去别的网站买书服务更好。
0 XP的驱动不好找!我的17号提的货,现在就降价了100元,而且还送杀毒软件!
1 <荐书> 推荐所有喜欢<红楼>的红迷们一定要收藏这本书,要知道当年我听说这本书的时候花很长时间去图书馆找和借都没能如愿,所以这次一看到当当有,马上买了,红迷们也要记得备货哦!
```
Run `bash script/run_ChnSentiCorp.sh` to start finetuning; when it finishes, it prints dev- and test-set results like the following:
```
[dev evaluation] ave loss: 0.189373, ave acc: 0.954167, data_num: 1200, elapsed time: 14.984404 s
[test evaluation] ave loss: 0.189387, ave acc: 0.950000, data_num: 1200, elapsed time: 14.737691 s
```
2) **Sentence-pair classification**
The `LCQMC` semantic similarity task serves as the sentence-pair example. The data is a tsv file with 3 fields, `text_a text_b label`, for example:
```
text_a text_b label
开初婚未育证明怎么弄? 初婚未育情况证明怎么开? 1
谁知道她是网络美女吗? 爱情这杯酒谁喝都会醉是什么歌 0
这腰带是什么牌子 护腰带什么牌子好 0
```
Run `bash script/run_lcqmc.sh` to start finetuning; when it finishes, it prints dev- and test-set results like the following:
```
[dev evaluation] ave loss: 0.290925, ave acc: 0.900704, data_num: 8802, elapsed time: 32.240948 s
[test evaluation] ave loss: 0.345714, ave acc: 0.878080, data_num: 12500, elapsed time: 39.738015 s
```
#### Sequence Labeling Tasks
1) **Named entity recognition**
`MSRA-NER(SIGHAN 2006)` serves as the example. The data is a tsv file with 2 fields, `text_a label`, for example:
```
text_a label
在 这 里 恕 弟 不 恭 之 罪 , 敢 在 尊 前 一 诤 : 前 人 论 书 , 每 曰 “ 字 字 有 来 历 , 笔 笔 有 出 处 ” , 细 读 公 字 , 何 尝 跳 出 前 人 藩 篱 , 自 隶 变 而 后 , 直 至 明 季 , 兄 有 何 新 出 ? O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O
相 比 之 下 , 青 岛 海 牛 队 和 广 州 松 日 队 的 雨 中 之 战 虽 然 也 是 0 ∶ 0 , 但 乏 善 可 陈 。 O O O O O B-ORG I-ORG I-ORG I-ORG I-ORG O B-ORG I-ORG I-ORG I-ORG I-ORG O O O O O O O O O O O O O O O O O O O
理 由 多 多 , 最 无 奈 的 却 是 : 5 月 恰 逢 双 重 考 试 , 她 攻 读 的 博 士 学 位 论 文 要 通 考 ; 她 任 教 的 两 所 学 校 , 也 要 在 这 段 时 日 大 考 。 O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O
```
Run `bash script/run_msra_ner.sh` to start finetuning; when it finishes, it prints dev- and test-set results like the following:
```
[dev evaluation] f1: 0.951949, precision: 0.944636, recall: 0.959376, elapsed time: 19.156693 s
[test evaluation] f1: 0.937390, precision: 0.925988, recall: 0.949077, elapsed time: 36.565929 s
```
### FAQ
#### How do I get the Embedding of an input sentence after ERNIE encoding?
Use ernie_encoder.py to extract the Embedding of an input sentence and the Embedding of each of its tokens; the data format is the same as the Fine-tuning training-data formats described in the [Fine-tuning Tasks](#Fine-tuning-任务) section. Taking the sentence and token embeddings of the LCQMC dev set as an example:
```
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=7
python -u ernie_encoder.py \
--use_cuda true \
--batch_size 32 \
--output_dir "./test" \
--init_pretraining_params ${MODEL_PATH}/params \
--data_set ${TASK_DATA_PATH}/lcqmc/dev.tsv \
--vocab_path config/vocab.txt \
--max_seq_len 128 \
--ernie_config_path config/ernie_config.json
```
When the script finishes, the test directory under the current path contains `cls_emb.npy`, storing the sentence embeddings, and `top_layer_emb.npy`, storing the token embeddings. In practice, adjust the data path, the embedding output paths, and other settings in the example script as needed.
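As a quick sanity check (a sketch, assuming the `--output_dir "./test"` setting above), the two files can be inspected with numpy:
```python
import numpy as np

cls_emb = np.load('./test/cls_emb.npy')              # one sentence embedding per input line
top_layer_emb = np.load('./test/top_layer_emb.npy')  # token embeddings from the top encoder layer
print(cls_emb.shape, top_layer_emb.shape)
```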
#### How do I get the Embedding of each token of an input sentence after ERNIE encoding?
[Same solution as above](#如何获取输入句子经过-ERNIE-编码后的-Embedding-表示?)
#### How do I use a finetuned model to batch-predict on new data?
Taking classification as an example, we provide a batch-prediction script; usage:
```
python -u predict_classifier.py \
--use_cuda true \
--batch_size 32 \
--vocab_path config/vocab.txt \
--init_checkpoint "./checkpoints/step_100" \
--do_lower_case true \
--max_seq_len 128 \
--ernie_config_path config/ernie_config.json \
--do_predict true \
--predict_set ${TASK_DATA_PATH}/lcqmc/test.tsv \
--num_labels 2
```
In practice, specify the model used for prediction via `init_checkpoint`, the data file to predict via `predict_set`, and the number of classes via `num_labels`.
**Note**: predict_set is a 1-column / 2-column tsv file consisting of text_a and, optionally, text_b.
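For instance, a 2-column predict_set for a sentence-pair task could be generated as follows (a hypothetical sketch; mirror the header convention of your training data):
```python
pairs = [
    ('开初婚未育证明怎么弄?', '初婚未育情况证明怎么开?'),
    ('这腰带是什么牌子', '护腰带什么牌子好'),
]
with open('my_predict_set.tsv', 'w') as f:
    f.write('text_a\ttext_b\n')  # header row, as in the training data
    for text_a, text_b in pairs:
        f.write('%s\t%s\n' % (text_a, text_b))
```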
......@@ -979,17 +979,13 @@ when finished running this script, `cls_emb.npy` and `top_layer_emb.npy` will b
Taking classification tasks as an example, here is the script for batch prediction:
```
-python -u predict_classifier.py \
-  --use_cuda true \
-  --batch_size 32 \
-  --vocab_path ${MODEL_PATH}/vocab.txt \
-  --init_checkpoint "./checkpoints/step_100" \
-  --do_lower_case true \
-  --max_seq_len 128 \
-  --ernie_config_path ${MODEL_PATH}/ernie_config.json \
-  --do_predict true \
-  --predict_set ${TASK_DATA_PATH}/lcqmc/test.tsv \
-  --num_labels 2
+python -u infer_classifyer.py \
+  --ernie_config_path ${MODEL_PATH}/ernie_config.json \
+  --init_checkpoint "./checkpoints/step_100" \
+  --save_inference_model_path ./saved_model \
+  --predict_set ${TASK_DATA_PATH}/xnli/test.tsv \
+  --vocab_path ${MODEL_PATH}/vocab.txt \
+  --num_labels 3
```
Argument `init_checkpoint` is the path of the model, `predict_set` is the path of the test file, and `num_labels` is the number of target labels.
......
......@@ -646,9 +646,11 @@ ERNIE 2.0's English results are evaluated on GLUE; the official address of the GLUE benchmark
* [Sequence labeling tasks](#序列标注任务)
  * [Named entity recognition](#实体识别)
* [Reading comprehension tasks](#阅读理解任务-1)
* [Building on ERNIE with Propeller](#利用propeller进行二次开发)
* [Pretraining (ERNIE 1.0)](#预训练-ernie-10)
  * [Data preprocessing](#数据预处理)
  * [Start training](#开始训练)
* [Embedding server](#向量服务器)
* [Distillation](#蒸馏)
* [Deployment](#上线)
  * [Generating the inference_model](#生成inference_model)
......@@ -659,11 +661,12 @@ ERNIE 2.0's English results are evaluated on GLUE; the official address of the GLUE benchmark
* [FAQ3: Does the batch size in the run scripts refer to the data per card or the total across cards?](#faq3-运行脚本中的batch-size指的是单卡分配的数据量还是多卡的总数据量)
* [FAQ4: Can not find library: libcudnn.so. Please try to add the lib path to LD_LIBRARY_PATH.](#faq4-can-not-find-library-libcudnnso-please-try-to-add-the-lib-path-to-ld_library_path)
* [FAQ5: Can not find library: libnccl.so. Please try to add the lib path to LD_LIBRARY_PATH.](#faq5-can-not-find-library-libncclso-please-try-to-add-the-lib-path-to-ld_library_path)
* [FAQ6: Error `ModuleNotFoundError: No module named 'propeller'`](#faq6)
## Installing PaddlePaddle
-This project depends on Paddle Fluid 1.5; please follow the [installation guide](http://www.paddlepaddle.org/#quick-start) to install it.
+This project depends on Paddle Fluid 1.5 *and does not support Paddle 1.6 for now*; please follow the [installation guide](http://www.paddlepaddle.org/#quick-start) to install it.
**[Important] After installation, promptly add the CUDA, cuDNN, NCCL2, and other dynamic library paths to the LD_LIBRARY_PATH environment variable, or training will fail with library-related errors. For PaddlePaddle configuration details see [here](http://en.paddlepaddle.org/documentation/docs/zh/1.5/beginners_guide/quick_start_cn.html)**
......@@ -891,6 +894,54 @@ text_a label
[test evaluation] em: 88.061838, f1: 93.520152, avg: 90.790995, question_num: 3493
```
## Building on ERNIE with Propeller
[Propeller](./propeller/README.md) is a one-stop training API built on PaddlePaddle; developers with some machine-learning experience can use Propeller for a customized development workflow.
You can make Propeller importable via `export PYTHONPATH=./:$PYTHONPATH`.
A basic Propeller tutorial is available at `./example/propeller_xnli_demo.ipynb`.
You only need to define your own model and Dataset; leave the rest, such as multi-GPU parallelism and model checkpointing, to Propeller.
./example/ contains Propeller finetune pipelines for classification, ranking, and named-entity-recognition tasks, which you can use as templates for your own changes.
The demo data used by the templates can be downloaded [here](https://ernie.bj.bcebos.com/propeller_demo_data.tar.gz); after extraction, place it under ${TASK_DATA_PATH}.
Taking classification as an example, the script below launches finetuning; during training, the framework automatically saves the best-scoring model under `./output/best/inference`. For online prediction with the inference_model, see [Online Prediction](#在线预测).
```script
python3 ./example/finetune_classifier.py \
--data_dir ${TASK_DATA_PATH}/chnsenticorp/ \
--warm_start_from ${MODEL_PATH}/params \
--vocab_file ${MODEL_PATH}/vocab.txt \
--max_seqlen 128 \
--run_config '{
"model_dir": "output",
"max_steps": '$((10 * 9600 / 32))',
"save_steps": 100,
"log_steps": 10,
"max_ckpt": 1,
"skip_steps": 0,
"eval_steps": 100
}' \
--hparam ${MODEL_PATH}/ernie_config.json \
--hparam '{ # model definition
"sent_type_vocab_size": None, # default term in official config
"use_task_id": False,
"task_id": 0,
}' \
--hparam '{ # learn
"warmup_proportion": 0.1,
"weight_decay": 0.01,
"use_fp16": 0,
"learning_rate": 0.00005,
"num_label": 2,
"batch_size": 32
}'
```
After finetuning completes, add the --do_predict flag to the script above to start prediction:
```script
cat input_file | python3 ./example/finetune_classifier.py --do_predict ... > output_score
```
## Pretraining (ERNIE 1.0)
......@@ -926,6 +977,37 @@ epoch: 1, progress: 1/1, step: 50, loss: 10.360563, ppl: 16398.287109, next_sent
To train on your own real data, adjust the parameters accordingly in the [`script/zh_task/pretrain.sh`](./script/zh_task/pretrain.sh) script.
## Embedding Server
A pretrained ERNIE model can be used directly for text semantic representation. The sentence embeddings it predicts are convenient for approximate-nearest-neighbor (ANN) semantic search, or as features in downstream feature-based finetuning tasks. To make it easier to use ERNIE as a feature extractor, we provide an ERNIE server for this job.
The ERNIE server depends on Propeller;
you can make Propeller importable via `export PYTHONPATH=./:$PYTHONPATH`.
Download the inference_model of the Chinese ERNIE 1.0-base model from [here](https://ernie.bj.bcebos.com/ernie1.0_zh_inference_model.tar.gz), then start the ERNIE server with the following command:
```script
python3 ernie/service/encoder_server.py -m ./ernie1.0_base_inference_model/ -p 8888 -v --encode_layer pooler
```
`--encode_layer` selects where features are extracted; `pooler` takes the output of the ERNIE pooler fc as the feature.
You can query the ERNIE server as follows; the client currently supports Python 3 only:
```python
from ernie.service.client import ErnieClient
client = ErnieClient('./config/vocab.txt', host='localhost', port=8888)
ret = client(['谁有狂三这张高清的', '英雄联盟什么英雄最好']) # single-sentence input
# output:
# array([[-1. , -1. , 0.9937699 , ..., -0.99991065,
# -0.9999997 , -0.9999985 ],
# [-1. , -1. , -0.05038145, ..., -0.9912302 ,
# -0.9999436 , -0.9739356 ]], dtype=float32)
ret = client(['谁有狂三这张高清的', '这张高清图,谁有'], ['英雄联盟什么英雄最好', '英雄联盟最好英雄是什么']) # sentence-pair input
# output:
# array([[-1. , -0.99528974, -0.99174845, ..., -0.9781673 ,
# -1. , -1. ],
# [-1. , -1. , -0.8699475 , ..., -0.997155 ,
# -1. , -0.99999994]], dtype=float32)
```
## Distillation
ERNIE provides a development kit for model compression and speedup through data distillation; see <a href="./distill/README.md">here</a> for the workflow.
......@@ -935,12 +1017,12 @@ ERNIE provides a development kit for model compression and speedup through data
Once finetuning is done, a few steps generate an inference_model, which PaddlePaddle can load in a production environment for efficient prediction.
### Generating the inference_model
-Running the `classify_infer.py` or `predict_classifier.py` script with `--save_inference_model_path` generates an inference_model at the specified location.
+Running the `infer_classifyer.py` script with `--save_inference_model_path` generates an inference_model at the specified location.
-If you finetune with `propeller`, `BestInferenceExporter` picks the best model by prediction metrics during finetuning and exports it as an inference_model. See `propeller_xnli_demo.ipynb` for the `propeller` finetuning workflow.
+If you finetune with `propeller`, `BestInferenceExporter` picks the best model by prediction metrics during finetuning and exports it as an inference_model.
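The exported directory can then be loaded back with the standard Fluid API, for example (a minimal sketch, assuming Paddle Fluid 1.5 and `--save_inference_model_path ./saved_model`):
```python
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())
# returns the pruned inference program plus its input names and output variables
program, feed_names, fetch_targets = fluid.io.load_inference_model('./saved_model', exe)
print(feed_names)  # the input tensors the model expects, cf. the client arguments below
```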
### Online Prediction
-You can then link the model's forward-prediction code into your production environment with the [PaddleInference C++ API](https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_usage/deploy/inference/native_infer.html). Or you can use the python prediction engine we provide to set up a simple service: put the `./propeller` directory of this repo on your `PYTHONPATH` and run the command below to start a propeller server:
+You can then link the model's forward-prediction code into your production environment with the [PaddleInference C++ API](https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_usage/deploy/inference/native_infer.html). Or you can use the python prediction engine we provide to set up a simple service: run the command below to start a propeller server:
```script
python -m propeller.tools.start_server -m /path/to/saved/model -p 8888
......@@ -949,18 +1031,21 @@ python -m propeller.tools.start_server -m /path/to/saved/model -p 8888
You can conveniently call the propeller server from a python script:
```python
from propeller.service.client import InferenceClient
-client = InferenceClient('tcp://localhost:8113')
-result = client(sentence_id, position_id, token_type_id, input_mask)
+client = InferenceClient('tcp://localhost:8888')
+sentence_id = np.array([[[20], [1560], [1175], [8], [42]]], dtype=np.int64)
+position_id = np.array([[[0], [1], [2], [3], [4]]], dtype=np.int64)
+token_type_id = np.array([[[0], [0], [0], [1], [1]]], dtype=np.int64)
+input_mask = np.array([[1., 1., 1., 1., 1.]], dtype=np.float32)
+result = client(sentence_id, token_type_id, position_id, input_mask)
```
-The `client` request arguments are numpy arrays corresponding to the input tensors specified at save_inference_model time. For an inference_model generated with `classify_infer.py`, there are four request arguments: (sentence_id, position_id, token_type_id, input_mask). For a `propeller`-generated inference_model, the request arguments correspond to the element types of your `eval_dataset`.
+The `client` request arguments are numpy arrays corresponding to the input tensors specified at save_inference_model time. For an inference_model generated with `infer_classifyer.py`, there are four request arguments: (sentence_id, position_id, token_type_id, input_mask). For a `propeller`-generated inference_model, the request arguments correspond to the element types of your `eval_dataset`. Currently `InferenceClient` can only be used under Python 3.
## FAQ
### FAQ1: How do I get the Embedding of an input sentence/token after ERNIE encoding?
-You can use ernie_encoder.py to extract the Embedding of an input sentence and of each of its tokens; the data format matches the Fine-tuning formats described in the [Fine-tuning Tasks](#fine-tuning-任务) section. Taking the sentence and token embeddings of the LCQMC dev set as an example:
+You can use `ernie_encoder.py` to extract the Embedding of an input sentence and of each of its tokens; the data format matches the Fine-tuning formats described in the [Fine-tuning Tasks](#fine-tuning-任务) section. Taking the sentence and token embeddings of the LCQMC dev set as an example:
```
export FLAGS_sync_nccl_allreduce=1
......@@ -985,17 +1070,14 @@ python -u ernie_encoder.py \
Taking classification as an example, we provide a batch-prediction script; usage:
```
-python -u predict_classifier.py \
-  --use_cuda true \
-  --batch_size 32 \
-  --vocab_path ${MODEL_PATH}/vocab.txt \
-  --init_checkpoint "./checkpoints/step_100" \
-  --do_lower_case true \
-  --max_seq_len 128 \
-  --ernie_config_path ${MODEL_PATH}/ernie_config.json \
-  --do_predict true \
-  --predict_set ${TASK_DATA_PATH}/lcqmc/test.tsv \
-  --num_labels 2
+python -u infer_classifyer.py \
+  --ernie_config_path ${MODEL_PATH}/ernie_config.json \
+  --init_checkpoint "./checkpoints/step_100" \
+  --save_inference_model_path ./saved_model \
+  --predict_set ${TASK_DATA_PATH}/xnli/test.tsv \
+  --vocab_path ${MODEL_PATH}/vocab.txt \
+  --num_labels 3
```
In practice, specify the model used for prediction via `init_checkpoint`, the data file to predict via `predict_set`, and the number of classes via `num_labels`.
......@@ -1003,20 +1085,20 @@ python -u predict_classifier.py \
**Note**: predict_set is a 1-column / 2-column tsv file consisting of text_a and, optionally, text_b.
### FAQ3: Does the batch size in the run scripts refer to the data per card or the total across cards?
It is the amount of data assigned to a single card.
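For example, running with `--batch_size 32` on 4 cards gives an effective global batch size of 32 × 4 = 128.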
### FAQ4: Can not find library: libcudnn.so. Please try to add the lib path to LD_LIBRARY_PATH.
Add the cudnn library path to LD_LIBRARY_PATH, e.g. `export LD_LIBRARY_PATH=/home/work/cudnn/cudnn_v[your cudnn version]/cuda/lib64`
### FAQ5: Can not find library: libnccl.so. Please try to add the lib path to LD_LIBRARY_PATH.
Download [NCCL](https://developer.nvidia.com/nccl/nccl-download) first, then add the NCCL library path to LD_LIBRARY_PATH, e.g. `export LD_LIBRARY_PATH=/home/work/nccl/lib`
### FAQ6: Error `ModuleNotFoundError: No module named 'propeller'`<a name="faq6"></a>
You can make Propeller importable via `export PYTHONPATH=./:$PYTHONPATH`.
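Equivalently, from inside Python (a sketch, run from the repository root):
```python
import sys
sys.path.insert(0, './')  # put the repo root on the path so the top-level propeller package resolves
import propeller
```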
......@@ -50,11 +50,11 @@ sh ./distill/script/distill_chnsenticorp.sh
The script performs the three steps described above: 1. fine-tune on the task data; 2. load the fine-tuned model and label the augmented data; 3. train the Student model with those labels. The script uses hard-label distillation: in step 2 it directly predicts the labels that ERNIE assigns.
-The script involves two python files: `./distill/finetune_chnsenticorp.py` finetunes and predicts with the teacher model, and `distill/distill_chnsentocorp.py` trains the student model. The pre-built augmented data is located at `${TASK_DATA_PATH}/distill/chnsenticorp/student/unsup_train_aug`
+The script involves two python files: `./example/finetune_classifier.py` finetunes and predicts with the teacher model, and `distill/distill_chnsentocorp.py` trains the student model. The pre-built augmented data is located at `${TASK_DATA_PATH}/distill/chnsenticorp/student/unsup_train_aug`
In the second step of the script, the `--do_predict` flag switches into prediction mode:
```script
-cat ${TASK_DATA_PATH}/distill/chnsenticorp/student/unsup_train_aug/part.0 |python3 -u ./distill/finetune_chnsenticorp.py \
+cat ${TASK_DATA_PATH}/distill/chnsenticorp/student/unsup_train_aug/part.0 |python3 -u ./example/finetune_classifier.py \
--do_predict \
--data_dir ${TASK_DATA_PATH}/distill/chnsenticorp/teacher \
--warm_start_from ${MODEL_PATH}/params \
......@@ -86,7 +86,7 @@ sh ./distill/script/distill_chnsenticorp_with_propeller_server.sh
The pipeline has 3 steps: 1. finetune the ERNIE model; 2. launch a `propeller` server with the best-scoring ERNIE model; 3. query the server for the teacher model's labels while training the student model.
-The pipeline involves two python files: `distill/finetune_chnsenticorp.py` and `distill/distill_chnsentocorp_with_propeller_server.py`. The first step works exactly as in offline distillation.
+The pipeline involves two python files: `example/finetune_classifier.py` and `distill/distill_chnsentocorp_with_propeller_server.py`. The first step works exactly as in offline distillation.
The second step uses
```script
python3 -m propeller.tools.start_server -p 8113 -m ${teacher_dir}/best/inference/ &
......
......@@ -117,7 +117,7 @@ class ClassificationBowModel(propeller.train.Model):
return {'acc': acc}
if __name__ == '__main__':
-parser = propeller.ArgumentParser('DAN model with Paddle')
+parser = propeller.ArgumentParser('Distill model with Paddle')
parser.add_argument('--max_seqlen', type=int, default=128)
parser.add_argument('--vocab_file', type=str, required=True)
parser.add_argument('--unsupervise_data_dir', type=str, required=True)
......
......@@ -118,7 +118,7 @@ class ClassificationBowModel(propeller.train.Model):
return {'acc': acc}
if __name__ == '__main__':
-parser = propeller.ArgumentParser('DAN model with Paddle')
+parser = propeller.ArgumentParser('distill model with ERNIE')
parser.add_argument('--max_seqlen', type=int, default=128)
parser.add_argument('--vocab_file', type=str, required=True)
parser.add_argument('--teacher_vocab_file', type=str, required=True)
......
set -x
-export PYTHONPATH=.:$PYTHONPATH
+export PYTHONPATH=.:./ernie/:${PYTHONPATH:-}
output_dir=./output/distill
teacher_dir=${output_dir}/teacher
student_dir=${output_dir}/student
# 1. finetune teacher
CUDA_VISIBLE_DEVICES=0 \
-python3 -u ./distill/finetune_chnsenticorp.py \
+python3 -u ./example/finetune_classifier.py \
--data_dir ${TASK_DATA_PATH}/distill/chnsenticorp/teacher \
--warm_start_from ${MODEL_PATH}/params \
--vocab_file ${MODEL_PATH}/vocab.txt \
......@@ -29,7 +29,7 @@ python3 -u ./distill/finetune_chnsenticorp.py \
--hparam '{ # learn
"warmup_proportion": 0.1,
"weight_decay": 0.01,
"fp16": 0,
"use_fp16": 0,
"learning_rate": 0.00005,
"num_label": 2,
"batch_size": 32
......@@ -39,7 +39,7 @@ python3 -u ./distill/finetune_chnsenticorp.py \
# 2. start a prediction server
export CUDA_VISIBLE_DEVICES=0
-cat ${TASK_DATA_PATH}/distill/chnsenticorp/student/unsup_train_aug/part.0 |awk -F"\t" '{print $2}' |python3 -u ./distill/finetune_chnsenticorp.py \
+cat ${TASK_DATA_PATH}/distill/chnsenticorp/student/unsup_train_aug/part.0 |awk -F"\t" '{print $2}' |python3 -u ./example/finetune_classifier.py \
--do_predict \
--data_dir ${TASK_DATA_PATH}/distill/chnsenticorp/teacher \
--warm_start_from ${MODEL_PATH}/params \
......@@ -58,7 +58,7 @@ cat ${TASK_DATA_PATH}/distill/chnsenticorp/student/unsup_train_aug/part.0 |awk -
--hparam '{ # learn
"warmup_proportion": 0.1,
"weight_decay": 0.01,
"fp16": 0,
"use_fp16": 0,
"learning_rate": 0.00005,
"num_label": 2,
"batch_size": 100
......@@ -94,7 +94,6 @@ python3 ./distill/distill_chnsentocorp.py \
--hparam '{ # lr shit
"warmup_proportion": 0.1,
"weight_decay": 0.00,
"fp16": 0,
"learning_rate": 1e-4,
"batch_size": 100
}'
......
set -x
-export PYTHONPATH=.:$PYTHONPATH
+export PYTHONPATH=.:./ernie/:${PYTHONPATH:-}
output_dir=./output/distill
teacher_dir=${output_dir}/teacher
student_dir=${output_dir}/student
# 1. finetune teacher
CUDA_VISIBLE_DEVICES=0 \
-python3 -u ./distill/finetune_chnsenticorp.py \
+python3 -u ./example/finetune_classifier.py \
--data_dir ${TASK_DATA_PATH}/distill/chnsenticorp/teacher \
--warm_start_from ${MODEL_PATH}/params \
--vocab_file ${MODEL_PATH}/vocab.txt \
......@@ -29,7 +29,7 @@ python3 -u ./distill/finetune_chnsenticorp.py \
--hparam '{ # learn
"warmup_proportion": 0.1,
"weight_decay": 0.01,
"fp16": 0,
"use_fp16": 0,
"learning_rate": 0.00005,
"num_label": 2,
"batch_size": 32
......@@ -74,7 +74,6 @@ python3 ./distill/distill_chnsentocorp_with_propeller_server.py \
--hparam '{ # learn
"warmup_proportion": 0.1,
"weight_decay": 0.00,
"fp16": 0,
"learning_rate": 1e-4,
"batch_size": 100
}'
......
......@@ -22,6 +22,7 @@ import os
import time
import argparse
import numpy as np
+import logging
import multiprocessing
# NOTE(paddle-dev): All of these flags should be
......@@ -40,7 +41,7 @@ from reader.task_reader import ClassifyReader
from model.ernie import ErnieConfig
from finetune.classifier import create_model
-from utils.args import print_arguments, check_cuda, prepare_logger
+from utils.args import print_arguments, check_cuda, prepare_logger, ArgumentGroup
from utils.init import init_pretraining_params
from finetune_args import parser
......@@ -129,6 +130,9 @@ def main(args):
if not args.use_cuda:
log.info("disable gpu")
config.disable_gpu()
+else:
+    log.info("using gpu")
+    config.enable_use_gpu(1024)
# Create PaddlePredictor
predictor = create_paddle_predictor(config)
......@@ -158,12 +162,10 @@ def main(args):
# parse outputs
output = outputs[0]
-log.info(output.name)
output_data = output.data.float_data()
-#assert len(output_data) == args.num_labels * args.batch_size
-batch_result = np.array(output_data).reshape((-1, args.num_labels))
+batch_result = np.array(output_data).reshape(output.shape)
for single_example_probs in batch_result:
-    log.info("{} example\t{}".format(index, single_example_probs))
+    print('\t'.join(map(str, single_example_probs.tolist())))
index += 1
log.info("qps:{}\ttotal_time:{}\ttotal_example:{}\tbatch_size:{}".format(index/total_time, total_time, index, args.batch_size))
......
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import sys
import os
import argparse
from propeller.service.client import InferenceClient
from propeller import log
import six
import utils.data
from time import time
import numpy as np
class ErnieClient(InferenceClient):
def __init__(self,
vocab_file,
host='localhost',
port=8888,
batch_size=32,
num_coroutine=1,
timeout=10.,
max_seqlen=128):
host_port = 'tcp://%s:%d' % (host, port)
client = super(ErnieClient, self).__init__(host_port, batch_size=batch_size, num_coroutine=num_coroutine, timeout=timeout)
self.vocab = {j.strip().split(b'\t')[0].decode('utf8'): i for i, j in enumerate(open(vocab_file, 'rb'))}
self.tokenizer = utils.data.CharTokenizer(self.vocab.keys())
self.max_seqlen = max_seqlen
self.cls_id = self.vocab['[CLS]']
self.sep_id = self.vocab['[SEP]']
def txt_2_id(self, text):
ids = np.array([self.vocab[i] for i in self.tokenizer(text)])
return ids
def pad_and_batch(self, ids):
max_len = max(map(len, ids))
padded = np.stack([np.pad(i, [[0, max_len - len(i)]], mode='constant')for i in ids])
padded = np.expand_dims(padded, axis=-1)
return padded
def __call__(self, text_a, text_b=None):
if text_b is not None and len(text_a) != len(text_b):
raise ValueError('text_b %d has different size than text_a %d' % (len(text_b), len(text_a)))
text_a = [i.encode('utf8') if isinstance(i, six.string_types) else i for i in text_a]
if text_b is not None:
text_b = [i.encode('utf8') if isinstance(i, six.string_types) else i for i in text_b]
ids_a = map(self.txt_2_id, text_a)
if text_b is not None:
ids_b = map(self.txt_2_id, text_b)
ret = [utils.data.build_2_pair(a, b, self.max_seqlen, self.cls_id, self.sep_id) for a, b in zip(ids_a, ids_b)]
else:
ret = [utils.data.build_1_pair(a, self.max_seqlen, self.cls_id, self.sep_id) for a in ids_a]
sen_ids, token_type_ids = zip(*ret)
sen_ids = self.pad_and_batch(sen_ids)
token_type_ids = self.pad_and_batch(token_type_ids)
ret, = super(ErnieClient, self).__call__(sen_ids, token_type_ids)
return ret
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='ernie_encoder_client')
parser.add_argument('--host', type=str, default='localhost')
parser.add_argument('-i', '--input', type=str, required=True)
parser.add_argument('-o', '--output', type=str, required=True)
parser.add_argument('-p', '--port', type=int, default=8888)
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--num_coroutine', type=int, default=1)
parser.add_argument('--vocab', type=str, required=True)
args = parser.parse_args()
client = ErnieClient(args.vocab, args.host, args.port, batch_size=args.batch_size, num_coroutine=args.num_coroutine)
inputs = [i.strip().split(b'\t') for i in open(args.input, 'rb').readlines()]
if len(inputs) == 0:
raise ValueError('empty input')
send_batch = args.num_coroutine * args.batch_size
send_num = len(inputs) // send_batch + 1
rets = []
start = time()
for i in range(send_num):
slice = inputs[i * send_batch: (i + 1) * send_batch]
if len(slice) == 0:
continue
columns = list(zip(*slice))
if len(columns) > 2:
raise ValueError('inputs file has more than 2 columns')
ret = client(*columns)
if len(ret.shape) == 3:
ret = ret[:, 0, :] # take cls
rets.append(ret)
end = time()
with open(args.output, 'wb') as outf:
arr = np.concatenate(rets, 0)
np.save(outf, arr)
log.info('query num: %d average latency %.5f' % (len(inputs), (end - start)/len(inputs)))
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import sys
import os
import argparse
import logging
import logging.handlers
import re
from propeller.service.server import InferenceServer
from propeller import log
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-m', '--model_dir', type=str, required=True)
parser.add_argument('-p', '--port', type=int, default=8888)
parser.add_argument('-v', '--verbose', action='store_true')
parser.add_argument('--encode_layer', type=str, choices=[
'pooler',
'layer12',
'layer11',
'layer10',
'layer9',
'layer8',
'layer7',
'layer6',
'layer5',
'layer4',
'layer3',
'layer2',
'layer1',
], default='pooler')
args = parser.parse_args()
if args.verbose:
log.setLevel(logging.DEBUG)
cuda_env = os.getenv("CUDA_VISIBLE_DEVICES")
if cuda_env is None:
raise RuntimeError('CUDA_VISIBLE_DEVICES not set')
n_devices = len(cuda_env.split(","))
if args.encode_layer.lower() == 'pooler':
model_dir = os.path.join(args.model_dir, 'pooler')
else:
pat = re.compile(r'layer(\d+)')
match = pat.match(args.encode_layer.lower())
layer = int(match.group(1))
model_dir = os.path.join(args.model_dir, 'enc%d' % layer)
server = InferenceServer(model_dir, n_devices)
log.info('propeller server listening on port %d' % args.port)
server.listen(args.port)
......@@ -108,7 +108,7 @@ class CharTokenizer(object):
"""
self.vocab = set(vocab)
#self.pat = re.compile(r'([,.!?\u3002\uff1b\uff0c\uff1a\u201c\u201d\uff08\uff09\u3001\uff1f\u300a\u300b]|[\u4e00-\u9fa5]|[a-zA-Z0-9]+)')
-self.pat = re.compile(r'\S')
+self.pat = re.compile(r'([a-zA-Z0-9]+|\S)')
self.lower = lower
def __call__(self, sen):
......@@ -132,7 +132,7 @@ def build_2_pair(seg_a, seg_b, max_seqlen, cls_id, sep_id):
seqlen = sen_emb.shape[0]
#random truncate
-random_begin = 0#np.random.randint(0, np.maximum(0, seqlen - max_seqlen) + 1,)
+random_begin = 0 #np.random.randint(0, np.maximum(0, seqlen - max_seqlen) + 1,)
sen_emb = sen_emb[random_begin: random_begin + max_seqlen]
token_type_emb = token_type_emb[random_begin: random_begin + max_seqlen]
......@@ -147,7 +147,7 @@ def build_1_pair(seg_a, max_seqlen, cls_id, sep_id):
seqlen = sen_emb.shape[0]
#random truncate
-random_begin = 0#np.random.randint(0, np.maximum(0, seqlen - max_seqlen) + 1,)
+random_begin = 0 #np.random.randint(0, np.maximum(0, seqlen - max_seqlen) + 1,)
sen_emb = sen_emb[random_begin: random_begin + max_seqlen]
token_type_emb = token_type_emb[random_begin: random_begin + max_seqlen]
......
......@@ -43,9 +43,8 @@ class ClassificationErnieModel(propeller.train.Model):
def forward(self, features):
src_ids, sent_ids = features
-dtype = 'float16' if self.hparam['fp16'] else 'float32'
zero = L.fill_constant([1], dtype='int64', value=0)
-input_mask = L.cast(L.logical_not(L.equal(src_ids, zero)), dtype) # assume pad id == 0
+input_mask = L.cast(L.logical_not(L.equal(src_ids, zero)), 'float32') # assume pad id == 0
#input_mask = L.unsqueeze(input_mask, axes=[2])
d_shape = L.shape(src_ids)
seqlen = d_shape[1]
......@@ -59,17 +58,17 @@ class ClassificationErnieModel(propeller.train.Model):
task_ids = L.zeros_like(src_ids) + self.hparam.task_id #this shit wont use at the moment
task_ids.stop_gradient = True
-bert = ErnieModel(
+ernie = ErnieModel(
src_ids=src_ids,
position_ids=pos_ids,
sentence_ids=sent_ids,
task_ids=task_ids,
input_mask=input_mask,
config=self.hparam,
-use_fp16=self.hparam['fp16']
+use_fp16=self.hparam['use_fp16']
)
-cls_feats = bert.get_pooled_output()
+cls_feats = ernie.get_pooled_output()
cls_feats = L.dropout(
x=cls_feats,
......@@ -123,7 +122,7 @@ class ClassificationErnieModel(propeller.train.Model):
if __name__ == '__main__':
-parser = propeller.ArgumentParser('DAN model with Paddle')
+parser = propeller.ArgumentParser('classify model with ERNIE')
parser.add_argument('--max_seqlen', type=int, default=128)
parser.add_argument('--data_dir', type=str, required=True)
parser.add_argument('--vocab_file', type=str, required=True)
......
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
import re
import time
from random import random
from functools import reduce, partial
import numpy as np
import multiprocessing
import logging
import six
import re
import paddle
import paddle.fluid as F
import paddle.fluid.layers as L
from model.ernie import ErnieModel
from optimization import optimization
import tokenization
import utils.data
from propeller import log
log.setLevel(logging.DEBUG)
import propeller.paddle as propeller
class SequenceLabelErnieModel(propeller.train.Model):
"""propeller Model wraper for paddle-ERNIE """
def __init__(self, hparam, mode, run_config):
self.hparam = hparam
self.mode = mode
self.run_config = run_config
self.num_label = len(hparam['label_list'])
def forward(self, features):
src_ids, sent_ids, input_seqlen = features
zero = L.fill_constant([1], dtype='int64', value=0)
input_mask = L.cast(L.logical_not(L.equal(src_ids, zero)), 'float32') # assume pad id == 0; mask is 1 on real tokens
#input_mask = L.unsqueeze(input_mask, axes=[2])
d_shape = L.shape(src_ids)
seqlen = d_shape[1]
batch_size = d_shape[0]
pos_ids = L.unsqueeze(L.range(0, seqlen, 1, dtype='int32'), axes=[0])
pos_ids = L.expand(pos_ids, [batch_size, 1])
pos_ids = L.unsqueeze(pos_ids, axes=[2])
pos_ids = L.cast(pos_ids, 'int64')
pos_ids.stop_gradient = True
input_mask.stop_gradient = True
task_ids = L.zeros_like(src_ids) + self.hparam.task_id #this shit wont use at the moment
task_ids.stop_gradient = True
model = ErnieModel(
src_ids=src_ids,
position_ids=pos_ids,
sentence_ids=sent_ids,
task_ids=task_ids,
input_mask=input_mask,
config=self.hparam,
use_fp16=self.hparam['use_fp16']
)
enc_out = model.get_sequence_output()
logits = L.fc(
input=enc_out,
size=self.num_label,
num_flatten_dims=2,
param_attr= F.ParamAttr(
name="cls_seq_label_out_w",
initializer= F.initializer.TruncatedNormal(scale=0.02)),
bias_attr=F.ParamAttr(
name="cls_seq_label_out_b",
initializer=F.initializer.Constant(0.)))
propeller.summary.histogram('pred', logits)
return logits, input_seqlen
def loss(self, predictions, labels):
logits, input_seqlen = predictions
logits = L.flatten(logits, axis=2)
labels = L.flatten(labels, axis=2)
ce_loss, probs = L.softmax_with_cross_entropy(
logits=logits, label=labels, return_softmax=True)
loss = L.mean(x=ce_loss)
return loss
def backward(self, loss):
scheduled_lr, _ = optimization(
loss=loss,
warmup_steps=int(self.run_config.max_steps * self.hparam['warmup_proportion']),
num_train_steps=self.run_config.max_steps,
learning_rate=self.hparam['learning_rate'],
train_program=F.default_main_program(),
startup_prog=F.default_startup_program(),
weight_decay=self.hparam['weight_decay'],
scheduler="linear_warmup_decay",)
propeller.summary.scalar('lr', scheduled_lr)
def metrics(self, predictions, label):
pred, seqlen = predictions
pred = L.argmax(pred, axis=-1)
pred = L.unsqueeze(pred, axes=[-1])
f1 = propeller.metrics.ChunkF1(label, pred, seqlen, self.num_label)
return {'f1': f1}
def make_sequence_label_dataset(name, input_files, label_list, tokenizer, batch_size, max_seqlen, is_train):
label_map = {v: i for i, v in enumerate(label_list)}
no_entity_id = label_map['O']
delimiter = ''
def read_bio_data(filename):
ds = propeller.data.Dataset.from_file(filename)
iterable = iter(ds)
def gen():
buf, size = [], 0
iterator = iter(ds)
while 1:
line = next(iterator)
cols = line.rstrip(b'\n').split(b'\t')
if len(cols) != 2:
continue
tokens = tokenization.convert_to_unicode(cols[0]).split(delimiter)
labels = tokenization.convert_to_unicode(cols[1]).split(delimiter)
if len(tokens) != len(labels) or len(tokens) == 0:
continue
yield [tokens, labels]
return propeller.data.Dataset.from_generator_func(gen)
def reseg_token_label(dataset):
def gen():
iterator = iter(dataset)
while True:
tokens, labels = next(iterator)
assert len(tokens) == len(labels)
ret_tokens = []
ret_labels = []
for token, label in zip(tokens, labels):
sub_token = tokenizer.tokenize(token)
if len(sub_token) == 0:
continue
ret_tokens.extend(sub_token)
ret_labels.append(label)
if len(sub_token) < 2:
continue
sub_label = label
if label.startswith("B-"):
sub_label = "I-" + label[2:]
ret_labels.extend([sub_label] * (len(sub_token) - 1))
assert len(ret_tokens) == len(ret_labels)
yield ret_tokens, ret_labels
ds = propeller.data.Dataset.from_generator_func(gen)
return ds
def convert_to_ids(dataset):
def gen():
iterator = iter(dataset)
while True:
tokens, labels = next(iterator)
if len(tokens) > max_seqlen - 2:
tokens = tokens[: max_seqlen - 2]
labels = labels[: max_seqlen - 2]
tokens = ['[CLS]'] + tokens + ['[SEP]']
token_ids = tokenizer.convert_tokens_to_ids(tokens)
label_ids = [no_entity_id] + [label_map[x] for x in labels] + [no_entity_id]
token_type_ids = [0] * len(token_ids)
input_seqlen = len(token_ids)
token_ids = np.array(token_ids, dtype=np.int64)
label_ids = np.array(label_ids, dtype=np.int64)
token_type_ids = np.array(token_type_ids, dtype=np.int64)
input_seqlen = np.array(input_seqlen, dtype=np.int64)
yield token_ids, token_type_ids, input_seqlen, label_ids
ds = propeller.data.Dataset.from_generator_func(gen)
return ds
def after(*features):
return utils.data.expand_dims(*features)
dataset = propeller.data.Dataset.from_list(input_files)
if is_train:
dataset = dataset.repeat().shuffle(buffer_size=len(input_files))
dataset = dataset.interleave(map_fn=read_bio_data, cycle_length=len(input_files), block_length=1)
if is_train:
dataset = dataset.shuffle(buffer_size=100)
dataset = reseg_token_label(dataset)
dataset = convert_to_ids(dataset)
dataset = dataset.padded_batch(batch_size).map(after)
dataset.name = name
return dataset
def make_sequence_label_dataset_from_stdin(name, tokenizer, batch_size, max_seqlen):
delimiter = ''
def stdin_gen():
if six.PY3:
source = sys.stdin.buffer
else:
source = sys.stdin
while True:
line = source.readline()
if len(line) == 0:
break
yield line,
def read_bio_data(ds):
iterable = iter(ds)
def gen():
buf, size = [], 0
iterator = iter(ds)
while 1:
line, = next(iterator)
cols = line.rstrip(b'\n').split(b'\t')
if len(cols) != 1:
continue
tokens = tokenization.convert_to_unicode(cols[0]).split(delimiter)
if len(tokens) == 0:
continue
yield tokens,
return propeller.data.Dataset.from_generator_func(gen)
def reseg_token_label(dataset):
def gen():
iterator = iter(dataset)
while True:
tokens, = next(iterator)
ret_tokens = []
for token in tokens:
sub_token = tokenizer.tokenize(token)
if len(sub_token) == 0:
continue
ret_tokens.extend(sub_token)
if len(sub_token) < 2:
continue
yield ret_tokens,
ds = propeller.data.Dataset.from_generator_func(gen)
return ds
def convert_to_ids(dataset):
def gen():
iterator = iter(dataset)
while True:
tokens, = next(iterator)
if len(tokens) > max_seqlen - 2:
tokens = tokens[: max_seqlen - 2]
tokens = ['[CLS]'] + tokens + ['[SEP]']
token_ids = tokenizer.convert_tokens_to_ids(tokens)
token_type_ids = [0] * len(token_ids)
input_seqlen = len(token_ids)
token_ids = np.array(token_ids, dtype=np.int64)
token_type_ids = np.array(token_type_ids, dtype=np.int64)
input_seqlen = np.array(input_seqlen, dtype=np.int64)
yield token_ids, token_type_ids, input_seqlen
ds = propeller.data.Dataset.from_generator_func(gen)
return ds
def after(*features):
return utils.data.expand_dims(*features)
dataset = propeller.data.Dataset.from_generator_func(stdin_gen)
dataset = read_bio_data(dataset)
dataset = reseg_token_label(dataset)
dataset = convert_to_ids(dataset)
dataset = dataset.padded_batch(batch_size).map(after)
dataset.name = name
return dataset
if __name__ == '__main__':
parser = propeller.ArgumentParser('NER model with ERNIE')
parser.add_argument('--max_seqlen', type=int, default=128)
parser.add_argument('--data_dir', type=str, required=True)
parser.add_argument('--vocab_file', type=str, required=True)
parser.add_argument('--do_predict', action='store_true')
parser.add_argument('--warm_start_from', type=str)
args = parser.parse_args()
run_config = propeller.parse_runconfig(args)
hparams = propeller.parse_hparam(args)
tokenizer = tokenization.FullTokenizer(args.vocab_file)
vocab = tokenizer.vocab
sep_id = vocab['[SEP]']
cls_id = vocab['[CLS]']
unk_id = vocab['[UNK]']
pad_id = vocab['[PAD]']
label_list = ['B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'O']
hparams['label_list'] = label_list
if not args.do_predict:
train_data_dir = os.path.join(args.data_dir, 'train')
train_input_files = [os.path.join(train_data_dir, filename) for filename in os.listdir(train_data_dir)]
dev_data_dir = os.path.join(args.data_dir, 'dev')
dev_input_files = [os.path.join(dev_data_dir, filename) for filename in os.listdir(dev_data_dir)]
test_data_dir = os.path.join(args.data_dir, 'test')
test_input_files = [os.path.join(test_data_dir, filename) for filename in os.listdir(test_data_dir)]
train_ds = make_sequence_label_dataset(name='train',
input_files=train_input_files,
label_list=label_list,
tokenizer=tokenizer,
batch_size=hparams.batch_size,
max_seqlen=args.max_seqlen,
is_train=True)
dev_ds = make_sequence_label_dataset(name='dev',
input_files=dev_input_files,
label_list=label_list,
tokenizer=tokenizer,
batch_size=hparams.batch_size,
max_seqlen=args.max_seqlen,
is_train=False)
test_ds = make_sequence_label_dataset(name='test',
input_files=test_input_files,
label_list=label_list,
tokenizer=tokenizer,
batch_size=hparams.batch_size,
max_seqlen=args.max_seqlen,
is_train=False)
shapes = ([-1, args.max_seqlen, 1], [-1, args.max_seqlen, 1], [-1, 1], [-1, args.max_seqlen, 1])
types = ('int64', 'int64', 'int64', 'int64')
train_ds.data_shapes = shapes
train_ds.data_types = types
dev_ds.data_shapes = shapes
dev_ds.data_types = types
test_ds.data_shapes = shapes
test_ds.data_types = types
varname_to_warmstart = re.compile(r'^encoder.*[wb]_0$|^.*embedding$|^.*bias$|^.*scale$|^pooled_fc.[wb]_0$')
warm_start_dir = args.warm_start_from
ws = propeller.WarmStartSetting(
predicate_fn=lambda v: varname_to_warmstart.match(v.name) and os.path.exists(os.path.join(warm_start_dir, v.name)),
from_dir=warm_start_dir
)
best_exporter = propeller.train.exporter.BestExporter(os.path.join(run_config.model_dir, 'best'), cmp_fn=lambda old, new: new['dev']['f1'] > old['dev']['f1'])
propeller.train.train_and_eval(
model_class_or_model_fn=SequenceLabelErnieModel,
params=hparams,
run_config=run_config,
train_dataset=train_ds,
eval_dataset={'dev': dev_ds, 'test': test_ds},
warm_start_setting=ws,
exporters=[best_exporter])
for k in best_exporter._best['dev'].keys():
if 'loss' in k:
continue
dev_v = best_exporter._best['dev'][k]
test_v = best_exporter._best['test'][k]
print('dev_%s\t%.5f\ntest_%s\t%.5f' % (k, dev_v, k, test_v))
else:
predict_ds = make_sequence_label_dataset_from_stdin(name='pred',
tokenizer=tokenizer,
batch_size=hparams.batch_size,
max_seqlen=args.max_seqlen)
shapes = ([-1, args.max_seqlen, 1], [-1, args.max_seqlen, 1], [-1, 1])
types = ('int64', 'int64', 'int64')
predict_ds.data_shapes = shapes
predict_ds.data_types = types
rev_label_map = {i: v for i, v in enumerate(label_list)}
best_exporter = propeller.train.exporter.BestExporter(os.path.join(run_config.model_dir, 'best'), cmp_fn=lambda old, new: new['dev']['f1'] > old['dev']['f1'])
learner = propeller.Learner(SequenceLabelErnieModel, run_config, hparams)
for pred, _ in learner.predict(predict_ds, ckpt=-1):
pred_str = ' '.join([rev_label_map[idx] for idx in np.argmax(pred, 1).tolist()])
print(pred_str)
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import re
import time
import logging
import six
import sys
import io
from random import random
from functools import reduce, partial, wraps
import numpy as np
import multiprocessing
import re
import paddle
import paddle.fluid as F
import paddle.fluid.layers as L
from model.ernie import ErnieModel
from optimization import optimization
import utils.data
from propeller import log
import propeller.paddle as propeller
log.setLevel(logging.DEBUG)
class RankingErnieModel(propeller.train.Model):
"""propeller Model wraper for paddle-ERNIE """
def __init__(self, hparam, mode, run_config):
self.hparam = hparam
self.mode = mode
self.run_config = run_config
def forward(self, features):
src_ids, sent_ids, qid = features
zero = L.fill_constant([1], dtype='int64', value=0)
input_mask = L.cast(L.logical_not(L.equal(src_ids, zero)), 'float32') # assume pad id == 0
#input_mask = L.unsqueeze(input_mask, axes=[2])
d_shape = L.shape(src_ids)
seqlen = d_shape[1]
batch_size = d_shape[0]
pos_ids = L.unsqueeze(L.range(0, seqlen, 1, dtype='int32'), axes=[0])
pos_ids = L.expand(pos_ids, [batch_size, 1])
pos_ids = L.unsqueeze(pos_ids, axes=[2])
pos_ids = L.cast(pos_ids, 'int64')
pos_ids.stop_gradient = True
input_mask.stop_gradient = True
task_ids = L.zeros_like(src_ids) + self.hparam.task_id #this shit wont use at the moment
task_ids.stop_gradient = True
ernie = ErnieModel(
src_ids=src_ids,
position_ids=pos_ids,
sentence_ids=sent_ids,
task_ids=task_ids,
input_mask=input_mask,
config=self.hparam,
use_fp16=self.hparam['use_fp16']
)
cls_feats = ernie.get_pooled_output()
cls_feats = L.dropout(
x=cls_feats,
dropout_prob=0.1,
dropout_implementation="upscale_in_train"
)
logits = L.fc(
input=cls_feats,
size=self.hparam['num_label'],
param_attr=F.ParamAttr(
name="cls_out_w",
initializer=F.initializer.TruncatedNormal(scale=0.02)),
bias_attr=F.ParamAttr(
name="cls_out_b", initializer=F.initializer.Constant(0.))
)
propeller.summary.histogram('pred', logits)
if self.mode is propeller.RunMode.PREDICT:
probs = L.softmax(logits)
return qid, probs
else:
return qid, logits
def loss(self, predictions, labels):
qid, predictions = predictions
ce_loss, probs = L.softmax_with_cross_entropy(
logits=predictions, label=labels, return_softmax=True)
#L.Print(ce_loss, message='per_example_loss')
loss = L.mean(x=ce_loss)
return loss
def metrics(self, predictions, label):
qid, logits = predictions
positive_class_logits = L.slice(logits, axes=[1], starts=[1], ends=[2])
mrr = propeller.metrics.Mrr(qid, label, positive_class_logits)
predictions = L.argmax(logits, axis=1)
predictions = L.unsqueeze(predictions, axes=[1])
f1 = propeller.metrics.F1(label, predictions)
acc = propeller.metrics.Acc(label, predictions)
#auc = propeller.metrics.Auc(label, predictions)
return {'acc': acc, 'f1': f1, 'mrr': mrr}
def backward(self, loss):
scheduled_lr, _ = optimization(
loss=loss,
warmup_steps=int(self.run_config.max_steps * self.hparam['warmup_proportion']),
num_train_steps=self.run_config.max_steps,
learning_rate=self.hparam['learning_rate'],
train_program=F.default_main_program(),
startup_prog=F.default_startup_program(),
weight_decay=self.hparam['weight_decay'],
scheduler="linear_warmup_decay",)
propeller.summary.scalar('lr', scheduled_lr)
if __name__ == '__main__':
parser = propeller.ArgumentParser('ranker model with ERNIE')
parser.add_argument('--do_predict', action='store_true')
parser.add_argument('--predict_model', type=str, default=None)
parser.add_argument('--max_seqlen', type=int, default=128)
parser.add_argument('--vocab_file', type=str, required=True)
parser.add_argument('--data_dir', type=str, required=True)
parser.add_argument('--warm_start_from', type=str)
parser.add_argument('--sentence_piece_model', type=str, default=None)
args = parser.parse_args()
run_config = propeller.parse_runconfig(args)
hparams = propeller.parse_hparam(args)
vocab = {j.strip().split(b'\t')[0].decode('utf8') : i for i, j in enumerate(open(args.vocab_file, 'rb'))}
sep_id = vocab['[SEP]']
cls_id = vocab['[CLS]']
unk_id = vocab['[UNK]']
if args.sentence_piece_model is not None:
tokenizer = utils.data.JBSPTokenizer(args.sentence_piece_model, jb=True, lower=True)
else:
tokenizer = utils.data.CharTokenizer(vocab.keys())
def tokenizer_func(inputs):
'''avoid pickle error'''
ret = tokenizer(inputs)
return ret
shapes = ([-1, args.max_seqlen, 1], [-1, args.max_seqlen, 1], [-1, 1], [-1, 1])
types = ('int64', 'int64', 'int64', 'int64')
if not args.do_predict:
feature_column = propeller.data.FeatureColumns([
propeller.data.LabelColumn('qid'),
propeller.data.TextColumn('title', vocab_dict=vocab, tokenizer=tokenizer_func, unk_id=unk_id),
propeller.data.TextColumn('comment', vocab_dict=vocab, tokenizer=tokenizer_func, unk_id=unk_id),
propeller.data.LabelColumn('label'),
])
def before(qid, seg_a, seg_b, label):
sentence, segments = utils.data.build_2_pair(seg_a, seg_b, max_seqlen=args.max_seqlen, cls_id=cls_id, sep_id=sep_id)
return sentence, segments, qid, label
def after(sentence, segments, qid, label):
sentence, segments, qid, label = utils.data.expand_dims(sentence, segments, qid, label)
return sentence, segments, qid, label
train_ds = feature_column.build_dataset('train', data_dir=os.path.join(args.data_dir, 'train'), shuffle=True, repeat=True, use_gz=False) \
.map(before) \
.padded_batch(hparams.batch_size, (0, 0, 0, 0)) \
.map(after)
dev_ds = feature_column.build_dataset('dev', data_dir=os.path.join(args.data_dir, 'dev'), shuffle=False, repeat=False, use_gz=False) \
.map(before) \
.padded_batch(hparams.batch_size, (0, 0, 0, 0)) \
.map(after)
test_ds = feature_column.build_dataset('test', data_dir=os.path.join(args.data_dir, 'test'), shuffle=False, repeat=False, use_gz=False) \
.map(before) \
.padded_batch(hparams.batch_size, (0, 0, 0, 0)) \
.map(after)
train_ds.data_shapes = shapes
train_ds.data_types = types
dev_ds.data_shapes = shapes
dev_ds.data_types = types
test_ds.data_shapes = shapes
test_ds.data_types = types
varname_to_warmstart = re.compile(r'^encoder.*[wb]_0$|^.*embedding$|^.*bias$|^.*scale$|^pooled_fc.[wb]_0$')
warm_start_dir = args.warm_start_from
ws = propeller.WarmStartSetting(
predicate_fn=lambda v: varname_to_warmstart.match(v.name) and os.path.exists(os.path.join(warm_start_dir, v.name)),
from_dir=warm_start_dir
)
best_exporter = propeller.train.exporter.BestExporter(os.path.join(run_config.model_dir, 'best'), cmp_fn=lambda old, new: new['dev']['f1'] > old['dev']['f1'])
propeller.train_and_eval(
model_class_or_model_fn=RankingErnieModel,
params=hparams,
run_config=run_config,
train_dataset=train_ds,
eval_dataset={'dev': dev_ds, 'test': test_ds},
warm_start_setting=ws,
exporters=[best_exporter])
print('dev_mrr\t%.5f\ntest_mrr\t%.5f\ndev_f1\t%.5f\ntest_f1\t%.5f' % (
best_exporter._best['dev']['mrr'], best_exporter._best['test']['mrr'],
best_exporter._best['dev']['f1'], best_exporter._best['test']['f1'],
))
else:
feature_column = propeller.data.FeatureColumns([
propeller.data.LabelColumn('qid'),
propeller.data.TextColumn('title', unk_id=unk_id, vocab_dict=vocab, tokenizer=tokenizer_func),
propeller.data.TextColumn('comment', unk_id=unk_id, vocab_dict=vocab, tokenizer=tokenizer_func),
])
def before(qid, seg_a, seg_b):
sentence, segments = utils.data.build_2_pair(seg_a, seg_b, max_seqlen=args.max_seqlen, cls_id=cls_id, sep_id=sep_id)
return sentence, segments, qid
def after(sentence, segments, qid):
sentence, segments, qid = utils.data.expand_dims(sentence, segments, qid)
return sentence, segments, qid
predict_ds = feature_column.build_dataset_from_stdin('predict') \
.map(before) \
.padded_batch(hparams.batch_size, (0, 0, 0)) \
.map(after)
predict_ds.data_shapes = shapes[: -1]
predict_ds.data_types = types[: -1]
est = propeller.Learner(RankingErnieModel, run_config, hparams)
for qid, res in est.predict(predict_ds, ckpt=-1):
print('%d\t%d\t%.5f\t%.5f' % (qid[0], np.argmax(res), res[0], res[1]))
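# usage sketch (script filename assumed; the flags are the ones defined in the argparse above):
#   train/eval:  python ranker.py --vocab_file vocab.txt --data_dir ./data
#   prediction:  cat input.tsv | python ranker.py --do_predict --vocab_file vocab.txt --data_dir ./data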
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"import os\n",
"import numpy as np\n",
"import re\n",
"import logging\n",
"import json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sys.path.append('../ernie')\n",
"sys.path.append('../')\n",
"%env CUDA_VICIBLE_DEVICES=7\n",
"# if CUDA_VICIBLE_DEVICES is changed, relaunch jupyter kernel to inform paddle"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import propeller.paddle as propeller\n",
"import paddle\n",
"import paddle.fluid as F\n",
"import paddle.fluid.layers as L\n",
"#import model defenition from original ERNIE\n",
"from model.ernie import ErnieModel\n",
"from tokenization import FullTokenizer\n",
"from optimization import optimization\n",
"from propeller import log\n",
"log.setLevel(logging.DEBUG)\n",
"\n",
"if paddle.__version__ not in ['1.5.1', '1.5.2']:\n",
" raise RuntimeError('propeller works in paddle1.5.1')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"# download pretrained model&config(ernie1.0) and xnli data\n",
"mkdir ernie1.0_pretrained\n",
"if [ ! -f ernie1.0_pretrained/ERNIE_stable-1.0.1.tar.gz ]\n",
"then\n",
" echo \"download model\"\n",
" wget --no-check-certificate https://baidu-nlp.bj.bcebos.com/ERNIE_stable-1.0.1.tar.gz -P ernie1.0_pretrained\n",
"fi\n",
"\n",
"if [ ! -f task_data_zh.tgz ]\n",
"then\n",
" echo \"download data\"\n",
" wget --no-check-certificate https://ernie.bj.bcebos.com/task_data_zh.tgz\n",
"fi\n",
"\n",
"tar xzf ernie1.0_pretrained/ERNIE_stable-1.0.1.tar.gz -C ernie1.0_pretrained\n",
"tar xzf task_data_zh.tgz"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#define basic training settings\n",
"EPOCH=3\n",
"BATCH=16\n",
"LR=5e-3\n",
"MAX_SEQLEN=128\n",
"TASK_DATA='./task_data/'\n",
"MODEL='./ernie1.0_pretrained/'\n",
"OUTPUT_DIR='./output'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!rm -rf {OUTPUT_DIR}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#skip header, and reorganize train data into ./xnli_data \n",
"!mkdir xnli_data\n",
"!mkdir xnli_data/train\n",
"!mkdir xnli_data/test\n",
"!mkdir xnli_data/dev\n",
"\n",
"def remove_header_and_save(fname_in, fname_out):\n",
" with open(fname_out, 'w') as fout:\n",
" buf = open(fname_in).readlines()[1:]\n",
" for i in buf:\n",
" fout.write(i)\n",
" return len(buf)\n",
"train_data_size = remove_header_and_save(TASK_DATA + '/xnli/train.tsv', './xnli_data/train/part.0') \n",
"dev_data_size = remove_header_and_save(TASK_DATA + '/xnli/dev.tsv', './xnli_data/dev/part.0') \n",
"test_data_size = remove_header_and_save(TASK_DATA + '/xnli/test.tsv', './xnli_data/test/part.0') \n",
"print(train_data_size)\n",
"print(dev_data_size)\n",
"print(test_data_size)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tokenizer = FullTokenizer(MODEL + 'vocab.txt')\n",
"vocab = {j.strip().split('\\t')[0]: i for i, j in enumerate(open(MODEL + 'vocab.txt', encoding='utf8'))}\n",
"\n",
"print(tokenizer.tokenize('今天很热'))\n",
"print(tokenizer.tokenize('coding in paddle is cool'))\n",
"print(tokenizer.tokenize('[CLS]i have an pen')) # note: special token like [CLS], will be segmented, so please add these id after tokenization.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`propeller.data.FeatureColumns` defines the data schema in every data file.\n",
"\n",
"our data consist of 3 columns: seg_a, seg_b, label. with \"\\t\" as delemeter.\n",
"\n",
"`TextColumn` will do 3 things for you: \n",
"\n",
"1. tokenize input sentence with user-defined `tokenizer_func`\n",
"2. vocab lookup\n",
"3. serialize to protobuf bin file (optional)\n",
"\n",
"data file is organized into following patten:\n",
"\n",
"```script\n",
"./xnli_data\n",
"|-- dev\n",
"| `-- part.0\n",
"|-- test\n",
"| `-- part.0\n",
"|-- train\n",
" `-- part.0\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"sep_id = vocab['[SEP]']\n",
"cls_id = vocab['[CLS]']\n",
"unk_id = vocab['[UNK]']\n",
"\n",
"label_map = {\n",
" b\"contradictory\": 0,\n",
" b\"contradiction\": 0,\n",
" b\"entailment\": 1,\n",
" b\"neutral\": 2,\n",
"}\n",
"def tokenizer_func(inputs):\n",
" ret = tokenizer.tokenize(inputs) #`tokenize` will conver bytes to str, so we use a str vocab\n",
" return ret\n",
"\n",
"feature_column = propeller.data.FeatureColumns([\n",
" propeller.data.TextColumn('title', unk_id=unk_id, vocab_dict=vocab, tokenizer=tokenizer_func),\n",
" propeller.data.TextColumn('comment', unk_id=unk_id, vocab_dict=vocab, tokenizer=tokenizer_func),\n",
" propeller.data.LabelColumn('label', vocab_dict=label_map), #be careful, Columns deal with python3 bytes directly.\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## trian model in propeller can be defined in 2 ways:\n",
"1. subclass of `propeller.train.Model` which implements:\n",
" 1. `__init__` (hyper_param, mode, run_config)\n",
" 2. `forward` (features) => (prediction)\n",
" 3. `backword` (loss) => None\n",
" 4. `loss` (predictoin) => (loss)\n",
" 5. `metrics` (optional) (prediction) => (dict of propeller.Metrics)\n",
" \n",
"2. a callable takes following args:\n",
" 1. features\n",
" 2. param\n",
" 3. mode\n",
" 4. run_config(optional)\n",
" \n",
" and returns a propeller.ModelSpec\n",
" \n",
"we use the subclasss approch here"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class ClassificationErnieModel(propeller.train.Model):\n",
" def __init__(self, hparam, mode, run_config):\n",
" self.hparam = hparam\n",
" self.mode = mode\n",
" self.run_config = run_config\n",
"\n",
" def forward(self, features):\n",
" src_ids, sent_ids = features\n",
" dtype = 'float16' if self.hparam['use_fp16'] else 'float32'\n",
" zero = L.fill_constant([1], dtype='int64', value=0)\n",
" input_mask = L.cast(L.equal(src_ids, zero), dtype) # assume pad id == 0\n",
" #input_mask = L.unsqueeze(input_mask, axes=[2])\n",
" d_shape = L.shape(src_ids)\n",
" seqlen = d_shape[1]\n",
" batch_size = d_shape[0]\n",
" pos_ids = L.unsqueeze(L.range(0, seqlen, 1, dtype='int32'), axes=[0])\n",
" pos_ids = L.expand(pos_ids, [batch_size, 1])\n",
" pos_ids = L.unsqueeze(pos_ids, axes=[2])\n",
" pos_ids = L.cast(pos_ids, 'int64')\n",
" pos_ids.stop_gradient = True\n",
" input_mask.stop_gradient = True\n",
" task_ids = L.zeros_like(src_ids) + self.hparam.task_id #this shit wont use at the moment\n",
" task_ids.stop_gradient = True\n",
"\n",
" ernie = ErnieModel(\n",
" src_ids=src_ids,\n",
" position_ids=pos_ids,\n",
" sentence_ids=sent_ids,\n",
" task_ids=task_ids,\n",
" input_mask=input_mask,\n",
" config=self.hparam,\n",
" use_fp16=self.hparam['use_fp16']\n",
" )\n",
"\n",
" cls_feats = ernie.get_pooled_output()\n",
"\n",
" cls_feats = L.dropout(\n",
" x=cls_feats,\n",
" dropout_prob=0.1,\n",
" dropout_implementation=\"upscale_in_train\"\n",
" )\n",
"\n",
" logits = L.fc(\n",
" input=cls_feats,\n",
" size=self.hparam['num_label'],\n",
" param_attr=F.ParamAttr(\n",
" name=\"cls_out_w\",\n",
" initializer=F.initializer.TruncatedNormal(scale=0.02)),\n",
" bias_attr=F.ParamAttr(\n",
" name=\"cls_out_b\", initializer=F.initializer.Constant(0.))\n",
" )\n",
"\n",
" propeller.summary.histogram('pred', logits)\n",
"\n",
" if self.mode is propeller.RunMode.PREDICT:\n",
" probs = L.softmax(logits)\n",
" return probs\n",
" else:\n",
" return logits\n",
"\n",
" def loss(self, predictions, labels):\n",
" ce_loss, probs = L.softmax_with_cross_entropy(\n",
" logits=predictions, label=labels, return_softmax=True)\n",
" #L.Print(ce_loss, message='per_example_loss')\n",
" loss = L.mean(x=ce_loss)\n",
" return loss\n",
"\n",
" def backward(self, loss):\n",
" scheduled_lr, loss_scale = optimization(\n",
" loss=loss,\n",
" warmup_steps=int(self.run_config.max_steps * self.hparam['warmup_proportion']),\n",
" num_train_steps=self.run_config.max_steps,\n",
" learning_rate=self.hparam['learning_rate'],\n",
" train_program=F.default_main_program(),\n",
" startup_prog=F.default_startup_program(),\n",
" weight_decay=self.hparam['weight_decay'],\n",
" scheduler=\"linear_warmup_decay\",)\n",
" propeller.summary.scalar('lr', scheduled_lr)\n",
"\n",
" def metrics(self, predictions, label):\n",
" predictions = L.argmax(predictions, axis=1)\n",
" predictions = L.unsqueeze(predictions, axes=[1])\n",
" acc = propeller.metrics.Acc(label, predictions)\n",
" #auc = propeller.metrics.Auc(label, predictions)\n",
" return {'acc': acc}\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# define some utility function.\n",
"\n",
"def build_2_pair(seg_a, seg_b):\n",
" token_type_a = np.ones_like(seg_a, dtype=np.int64) * 0\n",
" token_type_b = np.ones_like(seg_b, dtype=np.int64) * 1\n",
" sen_emb = np.concatenate([[cls_id], seg_a, [sep_id], seg_b, [sep_id]], 0)\n",
" token_type_emb = np.concatenate([[0], token_type_a, [0], token_type_b, [1]], 0)\n",
" #seqlen = sen_emb.shape[0]\n",
" #deteministic truncate\n",
" sen_emb = sen_emb[0: MAX_SEQLEN]\n",
" token_type_emb = token_type_emb[0: MAX_SEQLEN]\n",
" return sen_emb, token_type_emb\n",
"\n",
"def expand_dims(*args):\n",
" func = lambda i: np.expand_dims(i, -1)\n",
" ret = [func(i) for i in args]\n",
" return ret\n",
"\n",
"def before_pad(seg_a, seg_b, label):\n",
" sentence, segments = build_2_pair(seg_a, seg_b)\n",
" return sentence, segments, label\n",
"\n",
"def after_pad(sentence, segments, label):\n",
" sentence, segments, label = expand_dims(sentence, segments, label)\n",
" return sentence, segments, label"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a `propeller.paddle.data.Dataset` is built from FeatureColumns\n",
"\n",
"train_ds = feature_column.build_dataset('train', use_gz=False, data_dir='./xnli_data/train', shuffle=True, repeat=True) \\\n",
" .map(before_pad) \\\n",
" .padded_batch(BATCH, (0, 0, 0)) \\\n",
" .map(after_pad)\n",
"\n",
"dev_ds = feature_column.build_dataset('dev', use_gz=False, data_dir='./xnli_data/dev', shuffle=False, repeat=False) \\\n",
" .map(before_pad) \\\n",
" .padded_batch(BATCH, (0, 0, 0)) \\\n",
" .map(after_pad)\n",
"\n",
"shapes = ([-1, MAX_SEQLEN, 1], [-1, MAX_SEQLEN, 1], [-1, 1])\n",
"types = ('int64', 'int64', 'int64')\n",
"train_ds.data_shapes = shapes\n",
"train_ds.data_types = types\n",
"dev_ds.data_shapes = shapes\n",
"dev_ds.data_types = types\n",
"\n",
"warm_start_dir = MODEL + '/params'\n",
"# only the encoder and embedding is loaded from pretrained model\n",
"varname_to_warmstart = re.compile('^encoder.*w_0$|^encoder.*b_0$|^.*embedding$|^.*bias$|^.*scale$')\n",
"ws = propeller.WarmStartSetting(\n",
" predicate_fn=lambda v: varname_to_warmstart.match(v.name) and os.path.exists(os.path.join(warm_start_dir, v.name)),\n",
" from_dir=warm_start_dir\n",
" )\n",
"\n",
"# propeller will export model of highest performance, the criteria is up to you. \n",
"# here we pick the model with maximum evaluatoin accuracy.\n",
"#`BestInferenceModelExporter` is used to export serveable models\n",
"best_inference_exporter = propeller.train.exporter.BestInferenceModelExporter(\n",
" os.path.join(OUTPUT_DIR, 'best'), \n",
" cmp_fn=lambda old, new: new['eval']['acc'] > old['eval']['acc'])\n",
"#`BestExporter` is used to export restartable checkpoint, so that we can restore from it and check test-set accuracy.\n",
"best_exporter = propeller.train.exporter.BestExporter(\n",
" os.path.join(OUTPUT_DIR, 'best_model'), \n",
" cmp_fn=lambda old, new: new['eval']['acc'] > old['eval']['acc'])\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#ERNIE1.0 config \n",
"ernie_config = propeller.HParams(**json.loads(open(MODEL + '/ernie_config.json').read()))\n",
"\n",
"# default term in official config\n",
"ernie_v2_config = propeller.HParams(**{\n",
" \"sent_type_vocab_size\": None, \n",
" \"use_task_id\": False,\n",
" \"task_id\": 0,\n",
"})\n",
"\n",
"# train schema\n",
"train_config = propeller.HParams(**{ \n",
" \"warmup_proportion\": 0.1,\n",
" \"weight_decay\": 0.01,\n",
" \"use_fp16\": 0,\n",
" \"learning_rate\": 0.00005,\n",
" \"num_label\": 3,\n",
" \"batch_size\": 32\n",
"})\n",
"\n",
"config = ernie_config.join(ernie_v2_config).join(train_config)\n",
"\n",
"run_config = propeller.RunConfig(\n",
" model_dir=OUTPUT_DIR,\n",
" max_steps=EPOCH * train_data_size / BATCH,\n",
" skip_steps=10,\n",
" eval_steps=1000,\n",
" save_steps=1000,\n",
" log_steps=10,\n",
" max_ckpt=3\n",
")\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Finetune and Eval"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# `train_and_eval` takes key-word args only\n",
"# we are now ready to train\n",
"hooks = [propeller.train.TqdmNotebookProgressBarHook(run_config.max_steps)] # to show the progress bar, you need to `pip install tqdm ipywidgets`\n",
"propeller.train_and_eval(\n",
" model_class_or_model_fn=ClassificationErnieModel, #**careful**, you should pass a Class to `train_and_eval`, propeller will try to instantiate it.\n",
" params=config, \n",
" run_config=run_config, \n",
" train_dataset=train_ds, \n",
" eval_dataset=dev_ds, \n",
" warm_start_setting=ws, \n",
" exporters=[best_exporter, best_inference_exporter],\n",
" train_hooks=hooks,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Predict"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# after training you might want to check your model performance on test-set\n",
"# let's do this via `propeller.predict`\n",
"# keep in mind that model of best performace has been exported during thet `train_and_eval` phrase\n",
"\n",
"best_filename = [file for file in os.listdir(os.path.join(OUTPUT_DIR, 'best_model')) if 'model' in file][0]\n",
"best_model_path = os.path.join(os.path.join(OUTPUT_DIR, 'best_model'), best_filename)\n",
"true_label = [label_map[(line.strip().split(b'\\t')[-1])]for line in open('./xnli_data/test/part.0', 'rb')]\n",
"\n",
"def drop_label(sentence, segments, label): #we drop the label column here\n",
" return sentence, segments\n",
"\n",
"test_ds = feature_column.build_dataset('test', use_gz=False, data_dir='./xnli_data/test', shuffle=False, repeat=False) \\\n",
" .map(before_pad) \\\n",
" .padded_batch(BATCH, (0, 0, 0)) \\\n",
" .map(after_pad) \\\n",
" .map(drop_label)\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"result = []\n",
"learner = propeller.Learner(ClassificationErnieModel, run_config, params=config, )\n",
"for pred in learner.predict(test_ds, ckpt=-1):\n",
" result.append(np.argmax(pred))\n",
" \n",
"result, true_label = np.array(result), np.array(true_label)\n",
"\n",
"test_acc = (result == true_label).sum() / len(true_label)\n",
"print('test accuracy:%.5f' % test_acc)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Serving\n",
"your model is now ready to serve! \n",
"you can open up a server by propeller with \n",
"```script\n",
"python -m propeller.tools.start_server -m /path/to/saved/model -p 8888\n",
"```\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Load classifier's checkpoint to do prediction or save inference model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import time
import argparse
import numpy as np
import multiprocessing
# NOTE(paddle-dev): All of these flags should be
# set before `import paddle`. Otherwise, it would
# not take any effect.
os.environ['FLAGS_eager_delete_tensor_gb'] = '0' # enable gc
import paddle.fluid as fluid
from reader.task_reader import ClassifyReader
from model.ernie import ErnieConfig
from finetune.classifier import create_model
from utils.args import ArgumentGroup, print_arguments
from utils.init import init_pretraining_params
from finetune_args import parser
# yapf: disable
parser = argparse.ArgumentParser(__doc__)
model_g = ArgumentGroup(parser, "model", "options to init, resume and save model.")
model_g.add_arg("ernie_config_path", str, None, "Path to the json file for ernie model config.")
model_g.add_arg("init_checkpoint", str, None, "Init checkpoint to resume training from.")
model_g.add_arg("save_inference_model_path", str, "inference_model", "If set, save the inference model to this path.")
model_g.add_arg("use_fp16", bool, False, "Whether to resume parameters from fp16 checkpoint.")
model_g.add_arg("num_labels", int, 2, "Number of labels for classification.")
model_g.add_arg("ernie_version", str, "1.0", "ERNIE version, either 1.0 or 2.0.")
data_g = ArgumentGroup(parser, "data", "Data paths, vocab paths and data processing options.")
data_g.add_arg("predict_set", str, None, "Predict set file")
data_g.add_arg("vocab_path", str, None, "Vocabulary path.")
data_g.add_arg("label_map_config", str, None, "Label_map_config json file.")
data_g.add_arg("max_seq_len", int, 128, "Number of words of the longest sequence.")
data_g.add_arg("batch_size", int, 32, "Total number of examples in a batch for training; see also --in_tokens.")
data_g.add_arg("do_lower_case", bool, True,
"Whether to lower case the input text. Should be True for uncased models and False for cased models.")
run_type_g = ArgumentGroup(parser, "run_type", "running type options.")
run_type_g.add_arg("use_cuda", bool, True, "If set, use GPU for training.")
run_type_g.add_arg("do_prediction", bool, True, "Whether to do prediction on test set.")
args = parser.parse_args()
# yapf: enable.
def main(args):
ernie_config = ErnieConfig(args.ernie_config_path)
ernie_config.print_config()
reader = ClassifyReader(
vocab_path=args.vocab_path,
label_map_config=args.label_map_config,
max_seq_len=args.max_seq_len,
do_lower_case=args.do_lower_case,
in_tokens=False,
is_inference=True)
predict_prog = fluid.Program()
predict_startup = fluid.Program()
with fluid.program_guard(predict_prog, predict_startup):
with fluid.unique_name.guard():
predict_pyreader, probs, feed_target_names = create_model(
args,
pyreader_name='predict_reader',
ernie_config=ernie_config,
is_classify=True,
is_prediction=True,
ernie_version=args.ernie_version)
predict_prog = predict_prog.clone(for_test=True)
if args.use_cuda:
place = fluid.CUDAPlace(0)
dev_count = fluid.core.get_cuda_device_count()
else:
place = fluid.CPUPlace()
dev_count = int(os.environ.get('CPU_NUM', multiprocessing.cpu_count()))
exe = fluid.Executor(place)
exe.run(predict_startup)
if args.init_checkpoint:
init_pretraining_params(exe, args.init_checkpoint, predict_prog)
else:
raise ValueError("args 'init_checkpoint' should be set for prediction!")
assert args.save_inference_model_path, "args save_inference_model_path should be set for prediction"
_, ckpt_dir = os.path.split(args.init_checkpoint.rstrip('/'))
dir_name = ckpt_dir + '_inference_model'
model_path = os.path.join(args.save_inference_model_path, dir_name)
print("save inference model to %s" % model_path)
fluid.io.save_inference_model(
model_path,
feed_target_names, [probs],
exe,
main_program=predict_prog)
print("load inference model from %s" % model_path)
infer_program, feed_target_names, probs = fluid.io.load_inference_model(
model_path, exe)
src_ids = feed_target_names[0]
sent_ids = feed_target_names[1]
pos_ids = feed_target_names[2]
input_mask = feed_target_names[3]
if args.ernie_version == "2.0":
task_ids = feed_target_names[4]
predict_data_generator = reader.data_generator(
input_file=args.predict_set,
batch_size=args.batch_size,
epoch=1,
shuffle=False)
print("-------------- prediction results --------------")
np.set_printoptions(precision=4, suppress=True)
index = 0
for sample in predict_data_generator():
src_ids_data = sample[0]
sent_ids_data = sample[1]
pos_ids_data = sample[2]
task_ids_data = sample[3]
input_mask_data = sample[4]
if args.ernie_version == "1.0":
output = exe.run(
infer_program,
feed={src_ids: src_ids_data,
sent_ids: sent_ids_data,
pos_ids: pos_ids_data,
input_mask: input_mask_data},
fetch_list=probs)
elif args.ernie_version == "2.0":
output = exe.run(
infer_program,
feed={src_ids: src_ids_data,
sent_ids: sent_ids_data,
pos_ids: pos_ids_data,
task_ids: task_ids_data,
input_mask: input_mask_data},
fetch_list=probs)
else:
raise ValueError("ernie_version must be 1.0 or 2.0")
for single_result in output[0]:
print("example_index:{}\t{}".format(index, single_result))
index += 1
if __name__ == '__main__':
print_arguments(args)
main(args)
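# usage sketch (script filename assumed; the arguments are the ones defined above):
#   python infer_classifier.py --ernie_config_path ernie_config.json \
#       --init_checkpoint ./checkpoints/step_1000 --vocab_path vocab.txt \
#       --predict_set test.tsv --num_labels 3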
......@@ -21,7 +21,7 @@ Propeller provide the following benefits:
## install
```script
pip install --user .
```
## Getting Started
......@@ -71,7 +71,6 @@ cd propeller && pip install .
# Start training!
propeller.train_and_eval(BowModel, hparams, run_config, train_ds, eval_ds)
```
## Main Feature
1. train_and_eval
......@@ -91,9 +90,9 @@ More detail see example/toy/
4. Summary
To trace tensor histogram in training, simply:
```python
propeller.summary.histogram('loss', tensor)
```
## Contributing
......
......@@ -21,7 +21,7 @@ Propeller 具有下列优势:
## install|安装
pip install --user .
## Getting Started|快速开始
```python
......@@ -70,7 +70,6 @@ cd propeller && pip install .
# Start training!
propeller.train_and_eval(BowModel, hparams, run_config, train_ds, eval_ds)
```
## 主要构件
1. train_and_eval
......@@ -89,10 +88,10 @@ cd propeller && pip install .
4. Summary
To trace and log certain parameters during training, simply:
```python
propeller.summary.histogram('loss', tensor)
```
## Contributing|贡献
......
......@@ -256,8 +256,10 @@ class Mrr(Metrics):
sorted(
tup, key=lambda t: t[2], reverse=True)) if l != 0
]
if len(ranks):
return ranks[0]
else:
return 0.
mrr_for_qid = [
calc_func(tup)
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_sync_nccl_allreduce=1
export FLAGS_eager_delete_tensor_gb=0.0
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -20,7 +21,7 @@ batch_size=64
epoch=3
for i in {1..5};do
python -u ernie/run_classifier.py \
--use_cuda true \
--for_cn False \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -20,7 +21,7 @@ for i in {1..5};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--use_cuda true \
--use_fast_executor ${e_executor:-"true"} \
--tokenizer ${TOKENIZER:-"FullTokenizer"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -19,7 +20,7 @@ epoch=4
for i in {1..5};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--use_cuda true \
--for_cn False \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -4,6 +4,7 @@ R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -22,7 +23,7 @@ for i in {1..5};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--use_cuda true \
--for_cn False \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -21,7 +22,7 @@ for i in {1..1};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--for_cn False \
--ernie_config_path script/en_glue/ernie_base/ernie_config.json \
--validation_steps 1000000000000 \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -16,7 +17,7 @@ for i in {1..5};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--use_cuda true \
--for_cn False \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -18,7 +19,7 @@ epoch=4
for i in {1..5};do
python -u ./ernie/run_classifier.py \
--for_cn False \
--use_cuda true \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -18,7 +19,7 @@ epoch=3
for i in {1..5};do
python -u ./ernie/run_classifier.py \
--use_cuda true \
--for_cn False \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -20,7 +21,7 @@ epoch=4
for i in {1..5};do
python -u ./ernie/run_classifier.py \
--for_cn False \
--use_cuda true \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_sync_nccl_allreduce=1
export FLAGS_eager_delete_tensor_gb=0.0
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -19,7 +20,7 @@ epoch=5
for i in {1..5};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--use_cuda true \
--for_cn False \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -4,6 +4,7 @@ R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -17,7 +18,7 @@ for i in {1..5};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--use_cuda true \
--use_fast_executor ${e_executor:-"true"} \
--tokenizer ${TOKENIZER:-"FullTokenizer"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -20,7 +21,7 @@ epoch=4
for i in {1..5};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--use_cuda true \
--for_cn False \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -4,6 +4,7 @@ R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -21,7 +22,7 @@ for i in {1..5};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--use_cuda true \
--for_cn False \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -16,7 +17,7 @@ for i in {1..5};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--for_cn False \
--ernie_config_path script/en_glue/ernie_large/ernie_config.json \
--validation_steps 1000000000000 \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -16,7 +17,7 @@ mkdir -p log/
for i in {1..5};do
timestamp=`date "+%Y-%m-%d-%H-%M-%S"`
python -u ./ernie/run_classifier.py \
--use_cuda true \
--for_cn False \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
mkdir -p log/
......@@ -19,7 +20,7 @@ epoch=4
for i in {1..5};do
python -u ./ernie/run_classifier.py \
--for_cn False \
--use_cuda true \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -15,7 +16,7 @@ mkdir -p log/
for i in {1..5};do
python -u ./ernie/run_classifier.py \
--use_cuda true \
--for_cn False \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -3,6 +3,7 @@
R_DIR=`dirname $0`; MYDIR=`cd $R_DIR;pwd`
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export PYTHONPATH=./ernie:${PYTHONPATH:-}
if [[ -f ./model_conf ]];then
source ./model_conf
......@@ -19,7 +20,7 @@ epoch=4
for i in {1..5};do
python -u ./ernie/run_classifier.py \
--for_cn False \
--use_cuda true \
--use_fast_executor ${e_executor:-"true"} \
......
......@@ -4,7 +4,8 @@ export FLAGS_eager_delete_tensor_gb=0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python -u ./ernie/run_classifier.py \
--use_cuda true \
--verbose true \
--do_train true \
......
......@@ -4,7 +4,8 @@ export FLAGS_eager_delete_tensor_gb=0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python -u ./ernie/run_classifier.py \
--use_cuda true \
--verbose true \
--do_train true \
......
......@@ -4,12 +4,13 @@ export FLAGS_eager_delete_tensor_gb=0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python ./ernie/finetune_launch.py \
--nproc_per_node 8 \
--selected_gpus 0,1,2,3,4,5,6,7 \
--node_ips $(hostname -i) \
--node_id 0 \
./ernie/run_mrc.py --use_cuda true\
--batch_size 16 \
--in_tokens false\
--use_fast_executor true \
......
......@@ -4,13 +4,13 @@ export FLAGS_eager_delete_tensor_gb=0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python ./ernie/finetune_launch.py \
--nproc_per_node 8 \
--selected_gpus 0,1,2,3,4,5,6,7 \
--node_ips $(hostname -i) \
--node_id 0 \
./ernie/run_classifier.py \
--use_cuda true \
--verbose true \
--do_train true \
......
......@@ -4,12 +4,13 @@ export FLAGS_eager_delete_tensor_gb=0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python ./ernie/finetune_launch.py \
--nproc_per_node 8 \
--selected_gpus 0,1,2,3,4,5,6,7 \
--node_ips $(hostname -i) \
--node_id 0 \
./ernie/run_mrc.py --use_cuda true\
--batch_size 16 \
--in_tokens false\
--use_fast_executor true \
......
......@@ -4,7 +4,8 @@ export FLAGS_eager_delete_tensor_gb=0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python -u ./ernie/run_classifier.py \
--use_cuda true \
--verbose true \
--do_train true \
......
......@@ -4,7 +4,8 @@ export FLAGS_eager_delete_tensor_gb=0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python -u ./ernie/run_sequence_labeling.py \
--use_cuda true \
--do_train true \
--do_val true \
......
......@@ -4,7 +4,8 @@ export FLAGS_eager_delete_tensor_gb=0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python -u ./ernie/run_classifier.py \
--use_cuda true \
--do_train true \
--do_val true \
......
......@@ -4,12 +4,13 @@ export FLAGS_eager_delete_tensor_gb=0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python ./ernie/finetune_launch.py \
--nproc_per_node 8 \
--selected_gpus 0,1,2,3,4,5,6,7 \
--node_ips $(hostname -i) \
--node_id 0 \
./ernie/run_classifier.py \
--use_cuda true \
--do_train true \
--do_val true \
......
......@@ -3,7 +3,9 @@ set -eux
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python -u ./ernie/run_classifier.py \
--use_cuda true \
--verbose true \
--do_train true \
......
......@@ -3,7 +3,8 @@ set -eux
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python -u ./ernie/run_classifier.py \
--use_cuda true \
--verbose true \
--do_train true \
......
......@@ -4,12 +4,13 @@ export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python ./ernie/finetune_launch.py \
--nproc_per_node 8 \
--selected_gpus 0,1,2,3,4,5,6,7 \
--node_ips $(hostname -i) \
--node_id 0 \
./ernie/run_mrc.py --use_cuda true\
--batch_size 8 \
--in_tokens false\
--use_fast_executor true \
......
......@@ -3,12 +3,13 @@ set -eux
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python ./ernie/finetune_launch.py \
--nproc_per_node 8 \
--selected_gpus 0,1,2,3,4,5,6,7 \
--node_ips $(hostname -i) \
--node_id 0 \
./ernie/run_classifier.py \
--use_cuda true \
--verbose true \
--do_train true \
......
......@@ -4,12 +4,13 @@ export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python ./ernie/finetune_launch.py \
--nproc_per_node 8 \
--selected_gpus 0,1,2,3,4,5,6,7 \
--node_ips $(hostname -i) \
--node_id 0 \
./ernie/run_mrc.py --use_cuda true\
--batch_size 8 \
--in_tokens false\
--use_fast_executor true \
......
......@@ -3,7 +3,8 @@ set -eux
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python -u ./ernie/run_classifier.py \
--use_cuda true \
--verbose true \
--do_train true \
......
......@@ -3,7 +3,8 @@ set -eux
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python -u ./ernie/run_sequence_labeling.py \
--use_cuda true \
--do_train true \
--do_val true \
......
......@@ -3,7 +3,8 @@ set -eux
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python -u ./ernie/run_classifier.py \
--use_cuda true \
--do_train true \
--do_val true \
......
......@@ -3,13 +3,13 @@ set -eux
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=./ernie:${PYTHONPATH:-}
python ./ernie/finetune_launch.py \
--nproc_per_node 8 \
--selected_gpus 0,1,2,3,4,5,6,7 \
--node_ips $(hostname -i) \
--node_id 0 \
./ernie/run_classifier.py \
--use_cuda true \
--do_train true \
--do_val true \
......
......@@ -3,12 +3,12 @@ set -eux
export FLAGS_eager_delete_tensor_gb=0
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python ./ernie/pretrain_launch.py \
--nproc_per_node 8 \
--selected_gpus 0,1,2,3,4,5,6,7 \
--node_ips $(hostname -i) \
--node_id 0 \
./ernie/train.py --use_cuda True \
--is_distributed False\
--use_fast_executor True \
--weight_sharing True \
......