PaddlePaddle / ERNIE

Commit 546ce5e9 (unverified)
Authored by tianxin on Aug 01, 2019; committed via GitHub on Aug 01, 2019

Merge pull request #242 from PaddlePaddle/refine_bert

some update

Parents: ce6b1adc, 0d1c2acb
Showing 3 changed files with 14 additions and 33 deletions (+14 −33):

- BERT/README.md         +8 −4
- BERT/run_classifier.py +3 −20
- BERT/run_squad.py      +3 −9
BERT/README.md (view file @ 546ce5e9)
````diff
@@ -77,6 +77,8 @@ export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
 ./train.sh -local y
 ```
 
+If pre-training runs on multi-core CPU instead, set the number of CPU cores to use through the environment, e.g. `export CPU_NUM=5`; otherwise all CPU cores will be occupied.
+
 Note in particular that setting the parameter `generate_neg_sample` to `True` means that during pre-training the negative samples for the `Next Sentence Prediction` task are generated dynamically from the positive samples in the training data; the sample training data we provide, [`demo_wiki_train.gz`](data/train/demo_wiki_train.gz), contains only positive samples for the `Next Sentence Prediction` task. If positive and negative samples for the `Next Sentence Prediction` task have already been constructed beforehand, set `generate_neg_sample` to `False`.
 
 While pre-training runs, the current learning rate, the number of passes over the training data, the total step count of the current iteration, the training loss, the training speed, and other information are printed; as configured by `--validation_steps ${N}`, metrics on the validation set are reported every `N` steps:
````
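As an aside on the dynamically generated negatives the paragraph above describes: a minimal sketch of the idea, assuming plain `(sent_a, sent_b)` positive pairs. The helper name, `neg_ratio`, and the sampling scheme below are illustrative assumptions, not the repository's actual sampler.

```python
import random

def mix_in_negatives(positive_pairs, neg_ratio=0.5, seed=0):
    """Yield (sent_a, sent_b, label) triples for Next Sentence Prediction,
    replacing sent_b with a randomly drawn sentence in ~neg_ratio of pairs."""
    rng = random.Random(seed)
    pool = [b for _, b in positive_pairs]  # candidate "random next" sentences
    for sent_a, sent_b in positive_pairs:
        if rng.random() < neg_ratio:
            rand_b = rng.choice(pool)
            # label 0: rand_b is (almost surely) not the true next sentence
            yield sent_a, rand_b, 0
        else:
            yield sent_a, sent_b, 1  # label 1: true next sentence
```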
@@ -122,8 +124,8 @@ export current_endpoint=192.168.0.17:9185
对于
[
GLUE 数据
](
https://gluebenchmark.com/tasks
)
,请运行这个
[
脚本
](
https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e
)
予以下载; 对于 XNLI 任务,则需分别下载
[
XNLI dev/test set
](
https://bert-data.bj.bcebos.com/XNLI-1.0.zip
)
和
[
XNLI machine-translated training set
](
https://bert-data.bj.bcebos.com/XNLI-MT-1.0.zip
)
,然后解压到同一个目录。以 XNLI 任务为例,启动 Fine-tuning 的方式如下:
```
shell
export
FLAGS_
enable_parallel_graph
=
1
export
FLAGS_
sync_nccl_allreduce
=
1
export
FLAGS_
sync_nccl_allreduce
=
0
export
FLAGS_
eager_delete_tensor_gb
=
1
export
CUDA_VISIBLE_DEVICES
=
0,1,2,3,4,5,6,7
BERT_BASE_PATH
=
"chinese_L-12_H-768_A-12"
...
...
````diff
@@ -183,8 +185,8 @@ SQuAD v1.1
 For SQuAD v1.1, launch Fine-tuning as follows:
 
 ```shell
-export FLAGS_enable_parallel_graph=1
-export FLAGS_sync_nccl_allreduce=1
+export FLAGS_sync_nccl_allreduce=0
+export FLAGS_eager_delete_tensor_gb=1
 export CUDA_VISIBLE_DEVICES=0,1,2,3
 BERT_BASE_PATH="uncased_L-12_H-768_A-12"
````
````diff
@@ -229,6 +231,8 @@ python ${SQUAD_PATH}/evaluate-v1.1.py ${SQUAD_PATH}/dev-v1.1.json ${CHECKPOINT_P
 For SQuAD v2.0, launch Fine-tuning as follows:
 
 ```shell
+export FLAGS_sync_nccl_allreduce=0
+export FLAGS_eager_delete_tensor_gb=1
 export CUDA_VISIBLE_DEVICES=0,1,2,3
 BERT_BASE_PATH="uncased_L-12_H-768_A-12"
 CHECKPOINT_PATH=/path/to/save/checkpoints/
````
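On the two flags swapped in throughout this README: in Paddle Fluid 1.x, a non-negative `FLAGS_eager_delete_tensor_gb` enables the runtime's eager-deletion garbage collector (which is why the `fluid.memory_optimize` calls are dropped from the scripts below), and `FLAGS_sync_nccl_allreduce=0` relaxes the per-step synchronous NCCL all-reduce. A sketch of setting them from Python rather than the shell; the flag names come from this diff, while the in-process pattern is merely one option and assumes the flags are set before the Fluid runtime starts:

```python
import os

# Must be in the environment before paddle.fluid spins up its runtime,
# so do this at the very top of the entry script (values mirror the
# updated shell snippets above).
os.environ.setdefault("FLAGS_sync_nccl_allreduce", "0")
os.environ.setdefault("FLAGS_eager_delete_tensor_gb", "1")

import paddle.fluid as fluid  # imported after the flags on purpose
```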
BERT/run_classifier.py (view file @ 546ce5e9)
```diff
@@ -208,12 +208,6 @@ def main(args):
                 use_fp16=args.use_fp16,
                 loss_scaling=args.loss_scaling)
 
-            fluid.memory_optimize(
-                input_program=train_program,
-                skip_opt_set=[
-                    loss.name, probs.name, accuracy.name, num_seqs.name
-                ])
-
         if args.verbose:
             if args.in_tokens:
                 lower_mem, upper_mem, unit = fluid.contrib.memory_usage(
```
```diff
@@ -279,22 +273,11 @@ def main(args):
             train_data_generator = fluid.contrib.reader.distributed_batch_reader(
                 train_data_generator)
 
-        train_exe = fluid.ParallelExecutor(
-            use_cuda=args.use_cuda,
-            loss_name=loss.name,
-            exec_strategy=exec_strategy,
-            build_strategy=build_strategy,
-            main_program=train_program)
+        train_compiled_program = fluid.CompiledProgram(train_program).with_data_parallel(
+            loss_name=loss.name, build_strategy=build_strategy)
 
         train_pyreader.decorate_tensor_provider(train_data_generator)
-    else:
-        train_exe = None
-
-    if args.do_val or args.do_test:
-        test_exe = fluid.ParallelExecutor(
-            use_cuda=args.use_cuda,
-            main_program=test_prog,
-            share_vars_from=train_exe)
 
     if args.do_train:
         train_pyreader.start()
```
```diff
@@ -317,7 +300,7 @@ def main(args):
             else:
                 fetch_list = []
 
-            outputs = train_exe.run(fetch_list=fetch_list)
+            outputs = exe.run(train_compiled_program, fetch_list=fetch_list)
 
             if steps % args.skip_steps == 0:
                 if warmup_steps <= 0:
```
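Both Python files make the same migration: drop `fluid.ParallelExecutor` and instead run a `fluid.CompiledProgram` wrapped with `with_data_parallel` through the ordinary `fluid.Executor`. A self-contained sketch of the new pattern against the Fluid 1.x API; the toy network and random training data are our assumptions, not the repository's models:

```python
import os
import numpy as np

os.environ.setdefault("CPU_NUM", "1")  # with_data_parallel on CPU reads CPU_NUM

import paddle.fluid as fluid

# Toy stand-in for the BERT programs: one FC layer and a squared loss.
train_program, startup_program = fluid.Program(), fluid.Program()
with fluid.program_guard(train_program, startup_program):
    x = fluid.layers.data(name="x", shape=[4], dtype="float32")
    y = fluid.layers.data(name="y", shape=[1], dtype="float32")
    pred = fluid.layers.fc(input=x, size=1)
    loss = fluid.layers.reduce_mean(fluid.layers.square_error_cost(pred, y))
    fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)

exe = fluid.Executor(fluid.CPUPlace())  # fluid.CUDAPlace(0) on GPU
exe.run(startup_program)

# New pattern: compile the program once with data parallelism ...
train_compiled_program = fluid.CompiledProgram(train_program).with_data_parallel(
    loss_name=loss.name)

# ... then drive it with the plain Executor, mirroring the updated
# `outputs = exe.run(train_compiled_program, fetch_list=fetch_list)` lines.
for step in range(3):
    feed = {"x": np.random.rand(8, 4).astype("float32"),
            "y": np.random.rand(8, 1).astype("float32")}
    outputs = exe.run(train_compiled_program, feed=feed, fetch_list=[loss.name])
```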
BERT/run_squad.py (view file @ 546ce5e9)
```diff
@@ -279,7 +279,6 @@ def train(args):
                 use_fp16=args.use_fp16,
                 loss_scaling=args.loss_scaling)
 
-            fluid.memory_optimize(train_program, skip_opt_set=[loss.name, num_seqs.name])
 
         if args.verbose:
             if args.in_tokens:
```
```diff
@@ -301,8 +300,6 @@ def train(args):
                 bert_config=bert_config,
                 is_training=False)
 
-            fluid.memory_optimize(test_prog, skip_opt_set=[unique_ids.name,
-                start_logits.name, end_logits.name, num_seqs.name])
 
         test_prog = test_prog.clone(for_test=True)
```
```diff
@@ -341,11 +338,8 @@ def train(args):
         exec_strategy.num_threads = dev_count
         exec_strategy.num_iteration_per_drop_scope = args.num_iteration_per_drop_scope
 
-        train_exe = fluid.ParallelExecutor(
-            use_cuda=args.use_cuda,
-            loss_name=loss.name,
-            exec_strategy=exec_strategy,
-            main_program=train_program)
+        train_compiled_program = fluid.CompiledProgram(train_program).with_data_parallel(
+            loss_name=loss.name, exec_strategy=exec_strategy)
 
         train_pyreader.decorate_tensor_provider(train_data_generator)
```
```diff
@@ -366,7 +360,7 @@ def train(args):
             else:
                 fetch_list = []
 
-            outputs = train_exe.run(fetch_list=fetch_list)
+            outputs = exe.run(train_compiled_program, fetch_list=fetch_list)
 
             if steps % args.skip_steps == 0:
                 if warmup_steps <= 0:
```
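One small asymmetry between the two migrations above: `run_classifier.py` hands `with_data_parallel` a `build_strategy`, while `run_squad.py` hands it an `exec_strategy`. The Fluid 1.x API accepts both keyword arguments, so each script appears to have simply kept the strategy object it was already constructing.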