PaddlePaddle / PaddleOCR
Commit 987517fd
Authored Oct 15, 2021 by stephon

add multi_node and amp train in script

Parent 8bb9fb7e
Showing 4 changed files with 16 additions and 12 deletions (+16 -12)
configs/det/det_mv3_db_amp.yml             +2 -2
tests/configs/ppocr_det_mobile_params.txt  +2 -2
tests/test_python.sh                       +8 -4
tools/train.py                             +4 -4
configs/det/det_mv3_db_amp.yml

@@ -14,8 +14,8 @@ Global:
   use_visualdl: False
   infer_img: doc/imgs_en/img_10.jpg
   save_res_path: ./output/det_db/predicts_db.txt
-#amp related
-AMP:
+  use_amp: True
   scale_loss: 1024.0
   use_dynamic_loss_scaling: True
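After this commit the AMP switches sit directly under Global, matching how tools/train.py now reads them (see its diff below). The following is a minimal standalone sketch, assuming PyYAML, that uses the same keys and defaults as the train.py change; it is illustration, not PaddleOCR code:

import yaml

# Inline copy of the relevant Global keys from det_mv3_db_amp.yml.
config = yaml.safe_load("""
Global:
  use_amp: True
  scale_loss: 1024.0
  use_dynamic_loss_scaling: True
""")

# Same lookups and defaults as the tools/train.py change in this commit.
use_amp = config["Global"].get("use_amp", False)
scale_loss = config["Global"].get("scale_loss", 1.0)
use_dynamic_loss_scaling = config["Global"].get("use_dynamic_loss_scaling", False)
print(use_amp, scale_loss, use_dynamic_loss_scaling)  # True 1024.0 True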
tests/configs/ppocr_det_mobile_params.txt

 ===========================train_params===========================
 model_name:ocr_det
 python:python3.7
-gpu_list:0|0,1
+gpu_list:0|0,1|10.21.226.181,10.21.226.133;0,1
 Global.use_gpu:True|True
-Global.auto_cast:null
+Global.auto_cast:fp32|amp
 Global.epoch_num:lite_train_infer=1|whole_train_infer=300
 Global.save_model_dir:./output/
 Train.loader.batch_size_per_card:lite_train_infer=2|whole_train_infer=4
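Each line of this params file is a key followed by |-separated alternatives that the test harness iterates over: the new third gpu_list alternative packs host IPs and GPU ids together as ips;gpu_ids for the multi-machine run, and auto_cast now exercises both an fp32 and an amp run. A hypothetical parsing sketch (parse_params_line is my name, not the harness's):

# Hypothetical helper showing how one params line splits into the
# alternatives that tests/test_python.sh loops over.
def parse_params_line(line):
    key, _, values = line.partition(":")
    return key, values.split("|")

key, gpu_lists = parse_params_line(
    "gpu_list:0|0,1|10.21.226.181,10.21.226.133;0,1")
print(key)        # gpu_list
print(gpu_lists)  # ['0', '0,1', '10.21.226.181,10.21.226.133;0,1']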
tests/test_python.sh

@@ -253,6 +253,11 @@ else
         env=" "
     fi
     for autocast in ${autocast_list[*]}; do
+        if [ ${autocast} = "amp" ]; then
+            set_amp_config="Global.use_amp=True Global.scale_loss=1024.0 Global.use_dynamic_loss_scaling=True"
+        else
+            set_amp_config=" "
+        fi
         for trainer in ${trainer_list[*]}; do
             flag_quant=False
             if [ ${trainer} = ${pact_key} ]; then
@@ -279,7 +284,6 @@ else
             if [ ${run_train} = "null" ]; then
                 continue
             fi
             set_autocast=$(func_set_params "${autocast_key}" "${autocast}")
             set_epoch=$(func_set_params "${epoch_key}" "${epoch_num}")
             set_pretrain=$(func_set_params "${pretrain_model_key}" "${pretrain_model_value}")
@@ -295,11 +299,11 @@ else
             set_save_model=$(func_set_params "${save_model_key}" "${save_log}")
             if [ ${#gpu} -le 2 ]; then  # train with cpu or single gpu
-                cmd="${python} ${run_train} ${set_use_gpu} ${set_save_model} ${set_epoch} ${set_pretrain} ${set_autocast} ${set_batchsize} ${set_train_params1}"
+                cmd="${python} ${run_train} ${set_use_gpu} ${set_save_model} ${set_epoch} ${set_pretrain} ${set_autocast} ${set_batchsize} ${set_train_params1} ${set_amp_config}"
             elif [ ${#gpu} -le 15 ]; then  # train with multi-gpu
-                cmd="${python} -m paddle.distributed.launch --gpus=${gpu} ${run_train} ${set_save_model} ${set_epoch} ${set_pretrain} ${set_autocast} ${set_batchsize} ${set_train_params1}"
+                cmd="${python} -m paddle.distributed.launch --gpus=${gpu} ${run_train} ${set_save_model} ${set_epoch} ${set_pretrain} ${set_autocast} ${set_batchsize} ${set_train_params1} ${set_amp_config}"
             else  # train with multi-machine
-                cmd="${python} -m paddle.distributed.launch --ips=${ips} --gpus=${gpu} ${run_train} ${set_save_model} ${set_pretrain} ${set_epoch} ${set_autocast} ${set_batchsize} ${set_train_params1}"
+                cmd="${python} -m paddle.distributed.launch --ips=${ips} --gpus=${gpu} ${run_train} ${set_save_model} ${set_pretrain} ${set_epoch} ${set_autocast} ${set_batchsize} ${set_train_params1} ${set_amp_config}"
             fi
             # run train
             eval "unset CUDA_VISIBLE_DEVICES"
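The shell change gates the AMP overrides on the autocast value and appends them to whichever launch command the GPU spec selects. Below is a rough Python rendering of that branching, for illustration only; build_train_cmd and its arguments are hypothetical names, and the real logic stays in tests/test_python.sh:

# Rough Python rendering of the branch above. The AMP overrides are
# appended verbatim to the command, just as ${set_amp_config} is.
def build_train_cmd(python, run_train, gpu, autocast, ips=""):
    amp_config = ("Global.use_amp=True Global.scale_loss=1024.0 "
                  "Global.use_dynamic_loss_scaling=True"
                  if autocast == "amp" else "")
    if len(gpu) <= 2:      # cpu or a single gpu id, e.g. "0"
        launcher = python
    elif len(gpu) <= 15:   # one machine, several gpus, e.g. "0,1"
        launcher = f"{python} -m paddle.distributed.launch --gpus={gpu}"
    else:                  # multi-machine: long spec, hosts passed via --ips
        launcher = f"{python} -m paddle.distributed.launch --ips={ips} --gpus={gpu}"
    return " ".join(filter(None, [launcher, run_train, amp_config]))

print(build_train_cmd("python3.7", "tools/train.py", "0,1", "amp"))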
tools/train.py

@@ -103,16 +103,16 @@ def main(config, device, logger, vdl_writer):
     logger.info('valid dataloader has {} iters'.format(
         len(valid_dataloader)))

-    use_amp = True if "AMP" in config else False
+    use_amp = config["Global"].get("use_amp", False)
     if use_amp:
         AMP_RELATED_FLAGS_SETTING = {
             'FLAGS_cudnn_batchnorm_spatial_persistent': 1,
             'FLAGS_max_inplace_grad_add': 8,
         }
         paddle.fluid.set_flags(AMP_RELATED_FLAGS_SETTING)
-        scale_loss = config["AMP"].get("scale_loss", 1.0)
-        use_dynamic_loss_scaling = config["AMP"].get("use_dynamic_loss_scaling",
-                                                     False)
+        scale_loss = config["Global"].get("scale_loss", 1.0)
+        use_dynamic_loss_scaling = config["Global"].get(
+            "use_dynamic_loss_scaling", False)
         scaler = paddle.amp.GradScaler(
             init_loss_scaling=scale_loss,
             use_dynamic_loss_scaling=use_dynamic_loss_scaling)
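For context on how the scaler built here is used: the training loop scales the loss before backward and lets the scaler unscale gradients and adjust the loss scale. Below is a minimal dygraph sketch with a toy model, assuming a GPU build of PaddlePaddle 2.x; this is the standard paddle.amp pattern, not code from this repository:

import paddle

# Toy model and optimizer, standing in for the detection model.
model = paddle.nn.Linear(10, 10)
optimizer = paddle.optimizer.SGD(learning_rate=0.01,
                                 parameters=model.parameters())
scaler = paddle.amp.GradScaler(init_loss_scaling=1024.0,
                               use_dynamic_loss_scaling=True)

data = paddle.rand([4, 10])
label = paddle.rand([4, 10])

with paddle.amp.auto_cast():             # run the forward pass in fp16 where safe
    loss = paddle.nn.functional.mse_loss(model(data), label)

scaled = scaler.scale(loss)              # multiply the loss by the loss scale
scaled.backward()                        # gradients are scaled too
scaler.minimize(optimizer, scaled)       # unscale, skip step on inf/nan, update scale
optimizer.clear_grad()

With use_dynamic_loss_scaling enabled, steps whose scaled gradients overflow are skipped and the loss scale is lowered, so occasional fp16 overflows do not corrupt training.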