PaddlePaddle / ERNIE
Commit 70db68a5: Use gc & compiled program for fine-tuning
Authored on Jul 31, 2019 by liuyibing01
Parent: 87d3c630
Showing 3 changed files with 12 additions and 33 deletions:

- BERT/README.md (+6, -4)
- BERT/run_classifier.py (+3, -20)
- BERT/run_squad.py (+3, -9)
BERT/README.md

--- a/BERT/README.md
+++ b/BERT/README.md
@@ -122,8 +122,8 @@ export current_endpoint=192.168.0.17:9185
 For the [GLUE data](https://gluebenchmark.com/tasks), run this [script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e) to download it; for the XNLI task, download the [XNLI dev/test set](https://bert-data.bj.bcebos.com/XNLI-1.0.zip) and the [XNLI machine-translated training set](https://bert-data.bj.bcebos.com/XNLI-MT-1.0.zip) separately, then unzip them into the same directory. Taking the XNLI task as an example, fine-tuning is launched as follows:
 
 ```shell
-export FLAGS_enable_parallel_graph=1
-export FLAGS_sync_nccl_allreduce=1
+export FLAGS_sync_nccl_allreduce=0
+export FLAGS_eager_delete_tensor_gb=1
 export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
 BERT_BASE_PATH="chinese_L-12_H-768_A-12"
@@ -183,8 +183,8 @@ SQuAD v1.1
 For SQuAD v1.1, launch fine-tuning as follows:
 
 ```shell
-export FLAGS_enable_parallel_graph=1
-export FLAGS_sync_nccl_allreduce=1
+export FLAGS_sync_nccl_allreduce=0
+export FLAGS_eager_delete_tensor_gb=1
 export CUDA_VISIBLE_DEVICES=0,1,2,3
 BERT_BASE_PATH="uncased_L-12_H-768_A-12"
@@ -229,6 +229,8 @@ python ${SQUAD_PATH}/evaluate-v1.1.py ${SQUAD_PATH}/dev-v1.1.json ${CHECKPOINT_P
 For SQuAD v2.0, launch fine-tuning as follows:
 
 ```shell
+export FLAGS_sync_nccl_allreduce=0
+export FLAGS_eager_delete_tensor_gb=1
 export CUDA_VISIBLE_DEVICES=0,1,2,3
 BERT_BASE_PATH="uncased_L-12_H-768_A-12"
 CHECKPOINT_PATH=/path/to/save/checkpoints/
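The README changes enable PaddlePaddle's tensor garbage collector in place of the old settings: in Paddle 1.x, FLAGS_eager_delete_tensor_gb >= 0 turns on eager deletion of tensors that are no longer needed (the value is a memory threshold in GB), while FLAGS_sync_nccl_allreduce=0 avoids forcing a synchronization after every NCCL allreduce; FLAGS_enable_parallel_graph is dropped along with the ParallelExecutor it configured. As a minimal sketch, not part of the commit, the same flags can also be set from Python, since Paddle 1.x reads FLAGS_* environment variables when the framework initializes:

```python
# Hedged sketch: set the commit's FLAGS_* switches from Python instead of the
# shell. They must be in the environment before the first paddle import.
import os

os.environ["FLAGS_sync_nccl_allreduce"] = "0"      # don't sync after every NCCL allreduce
os.environ["FLAGS_eager_delete_tensor_gb"] = "1"   # enable tensor GC; threshold in GB
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"

import paddle.fluid as fluid  # noqa: E402 -- imported after the flags are set
```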
BERT/run_classifier.py

--- a/BERT/run_classifier.py
+++ b/BERT/run_classifier.py
@@ -208,12 +208,6 @@ def main(args):
                 use_fp16=args.use_fp16,
                 loss_scaling=args.loss_scaling)
 
-        fluid.memory_optimize(
-            input_program=train_program,
-            skip_opt_set=[
-                loss.name, probs.name, accuracy.name, num_seqs.name
-            ])
-
     if args.verbose:
         if args.in_tokens:
             lower_mem, upper_mem, unit = fluid.contrib.memory_usage(
@@ -279,22 +273,11 @@ def main(args):
             train_data_generator = fluid.contrib.reader.distributed_batch_reader(
                 train_data_generator)
 
-        train_exe = fluid.ParallelExecutor(
-            use_cuda=args.use_cuda,
-            loss_name=loss.name,
-            exec_strategy=exec_strategy,
-            build_strategy=build_strategy,
-            main_program=train_program)
+        train_compiled_program = fluid.CompiledProgram(train_program).with_data_parallel(
+            loss_name=loss.name, build_strategy=build_strategy)
 
         train_pyreader.decorate_tensor_provider(train_data_generator)
     else:
-        train_exe = None
-
-    if args.do_val or args.do_test:
-        test_exe = fluid.ParallelExecutor(
-            use_cuda=args.use_cuda,
-            main_program=test_prog,
-            share_vars_from=train_exe)
 
     if args.do_train:
         train_pyreader.start()
@@ -317,7 +300,7 @@ def main(args):
             else:
                 fetch_list = []
 
-            outputs = train_exe.run(fetch_list=fetch_list)
+            outputs = exe.run(train_compiled_program, fetch_list=fetch_list)
 
             if steps % args.skip_steps == 0:
                 if warmup_steps <= 0:
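Both training scripts follow the same pattern: the dedicated fluid.ParallelExecutor is dropped in favor of compiling the training program once with fluid.CompiledProgram(...).with_data_parallel(...) and running it through the ordinary Executor. Below is a self-contained sketch of that pattern under the Paddle 1.x static-graph API; the toy regression network and feed data stand in for the real BERT program and are not from the commit:

```python
import os
os.environ.setdefault("CPU_NUM", "1")  # data-parallel execution needs this on CPU

import numpy as np
import paddle.fluid as fluid

# Toy network standing in for the BERT fine-tuning program.
x = fluid.layers.data(name="x", shape=[4], dtype="float32")
y = fluid.layers.data(name="y", shape=[1], dtype="float32")
pred = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.mean(fluid.layers.square_error_cost(input=pred, label=y))
fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)

use_cuda = False  # the real scripts use args.use_cuda
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

# Old style, removed by this commit:
#   train_exe = fluid.ParallelExecutor(use_cuda=..., loss_name=loss.name, ...)
#   outputs = train_exe.run(fetch_list=fetch_list)
# New style: compile once with data parallelism, then reuse the base Executor.
build_strategy = fluid.BuildStrategy()
train_compiled_program = fluid.CompiledProgram(
    fluid.default_main_program()).with_data_parallel(
        loss_name=loss.name, build_strategy=build_strategy)

feed = {"x": np.random.rand(8, 4).astype("float32"),
        "y": np.random.rand(8, 1).astype("float32")}
outputs = exe.run(train_compiled_program, feed=feed, fetch_list=[loss.name])
print("loss:", outputs[0])
```

Together with FLAGS_eager_delete_tensor_gb from the README, this appears to be what replaces the deleted fluid.memory_optimize calls: instead of rewriting the graph ahead of time to reuse buffers, the runtime garbage-collects intermediate tensors as they fall out of use.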
BERT/run_squad.py

--- a/BERT/run_squad.py
+++ b/BERT/run_squad.py
@@ -279,7 +279,6 @@ def train(args):
                 use_fp16=args.use_fp16,
                 loss_scaling=args.loss_scaling)
 
-        fluid.memory_optimize(train_program, skip_opt_set=[loss.name, num_seqs.name])
 
         if args.verbose:
             if args.in_tokens:
@@ -301,8 +300,6 @@ def train(args):
                 bert_config=bert_config,
                 is_training=False)
 
-            fluid.memory_optimize(test_prog, skip_opt_set=[
-                unique_ids.name, start_logits.name, end_logits.name, num_seqs.name])
 
             test_prog = test_prog.clone(for_test=True)
@@ -341,11 +338,8 @@ def train(args):
         exec_strategy.num_threads = dev_count
         exec_strategy.num_iteration_per_drop_scope = args.num_iteration_per_drop_scope
 
-        train_exe = fluid.ParallelExecutor(
-            use_cuda=args.use_cuda,
-            loss_name=loss.name,
-            exec_strategy=exec_strategy,
-            main_program=train_program)
+        train_compiled_program = fluid.CompiledProgram(train_program).with_data_parallel(
+            loss_name=loss.name, exec_strategy=exec_strategy)
 
         train_pyreader.decorate_tensor_provider(train_data_generator)
@@ -366,7 +360,7 @@ def train(args):
             else:
                 fetch_list = []
 
-            outputs = train_exe.run(fetch_list=fetch_list)
+            outputs = exe.run(train_compiled_program, fetch_list=fetch_list)
 
             if steps % args.skip_steps == 0:
                 if warmup_steps <= 0:
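run_squad.py differs from run_classifier.py only in that it threads its ExecutionStrategy, rather than a BuildStrategy, into with_data_parallel. A short sketch of how those knobs carry over; the toy program, dev_count value, and drop-scope interval are placeholders for what the real script computes from its arguments:

```python
import os
os.environ.setdefault("CPU_NUM", "1")  # data-parallel execution needs this on CPU

import paddle.fluid as fluid

# Toy program standing in for the SQuAD training program.
x = fluid.layers.data(name="x", shape=[4], dtype="float32")
loss = fluid.layers.mean(fluid.layers.fc(input=x, size=1))
fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)

# Placeholders for dev_count and args.num_iteration_per_drop_scope.
dev_count = 1
exec_strategy = fluid.ExecutionStrategy()
exec_strategy.num_threads = dev_count
exec_strategy.num_iteration_per_drop_scope = 10  # drop local scopes every N iterations

# Same knobs, now passed to with_data_parallel instead of ParallelExecutor.
train_compiled_program = fluid.CompiledProgram(
    fluid.default_main_program()).with_data_parallel(
        loss_name=loss.name, exec_strategy=exec_strategy)
```

num_iteration_per_drop_scope controls how often the executor clears its local scopes, which works alongside the eager-deletion GC enabled in the README to bound memory growth across iterations.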