weixin_41840029 / PaddleOCR (forked from PaddlePaddle / PaddleOCR)

Commit 069d994c: "test to test_v5"
Authored on Jun 28, 2021 by LDOUBLEV
Parent: 3ba4d543
Showing 4 changed files with 368 additions and 192 deletions (+368, -192):
    test/ocr_det_params.txt        +35    -0
    test/paddleocr_ci_params.txt    +0   -15
    test/prepare.sh               +138    -0
    test/test.sh                  +195  -177
test/ocr_det_params.txt (new file, mode 100644) @ 069d994c
model_name:ocr_det
python:python3.7
gpu_list:-1|0|0,1
Global.auto_cast:False|True
Global.epoch_num:10
Global.save_model_dir:./output/
Global.save_inference_dir:./output/
Train.loader.batch_size_per_card:
Global.use_gpu
Global.pretrained_model
trainer:norm|pact|fpgm
norm_train:tools/train.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model=./pretrain_models/MobileNetV3_large_x0_5_pretrained
quant_train:deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model=./pretrain_models/det_mv3_db_v2.0_train/best_accuracy
fpgm_train:null
distill_train:null
eval:tools/eval.py -c configs/det/det_mv3_db.yml -o
norm_export:tools/export_model.py -c configs/det/det_mv3_db.yml -o
quant_export:deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
fpgm_export:deploy/slim/prune/export_prune_model.py
distill_export:null
inference:tools/infer/predict_det.py
--use_gpu:True|False
--enable_mkldnn:True|False
--cpu_threads:1|6
--rec_batch_num:1
--use_tensorrt:True|False
--precision:fp32|fp16|int8
--det_model_dir
--image_dir
--save_log_path
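Each line above is a key:value pair, and '|' separates the candidate values that the test sweeps over. A minimal standalone sketch of that convention (purely illustrative; the real parsing is done by the func_parser_key/func_parser_value helpers defined in test/test.sh below):

#!/bin/bash
# Split one params line, e.g. "gpu_list:-1|0|0,1", into key and value.
line="gpu_list:-1|0|0,1"
key=${line%%:*}     # text before the first ':'  -> gpu_list
value=${line#*:}    # text after the first ':'   -> -1|0|0,1
IFS='|'             # '|' separates the settings the sweep iterates over
for v in ${value}; do
    echo "${key} candidate: ${v}"
done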
test/paddleocr_ci_params.txt (deleted, mode 100644 → 0) @ 3ba4d543
train_model_list: ocr_det
gpu_list: -1|0|0,1
auto_cast_list: False|True
trainer_list: norm|pact|fpgm
python: python3.7
inference: python
devices: cpu|gpu
use_mkldnn_list: True|False
cpu_threads_list: 1|6
rec_batch_size_list: 1|6
gpu_trt_list: True|False
gpu_precision_list: fp32|fp16|int8
infer_gpu_id: 0
log_path: ./output
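Everything this deleted file configured now lives in test/ocr_det_params.txt above, which pairs each value list with the exact command-line flag it drives (for example, use_mkldnn_list: True|False becomes --enable_mkldnn:True|False).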
test/infer.sh → test/prepare.sh (renamed) @ 069d994c
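With this rename, the script is now mainly environment preparation: it parses the params file, downloads the pretrained weights and the ICDAR2015 subset matching the requested MODE, installs the profiling requirements, and logs machine info; the training/inference sweep itself moves to test/test.sh.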
#!/bin/bash
FILENAME=$1
# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer', 'infer']
MODE=$2

dataline=$(cat ${FILENAME})

# parser params
IFS=$'\n'
lines=(${dataline})
function func_parser_key(){
    strs=$1
    IFS=":"
    array=(${strs})
    tmp=${array[0]}
    echo ${tmp}
}
function func_parser_value(){
    strs=$1
    IFS=":"
    array=(${strs})
    tmp=${array[1]}
    echo ${tmp}
}
IFS=$'\n'
# The training params
train_model_list=$(func_parser_value "${lines[0]}")
model_name=$(func_parser_value "${lines[0]}")
python=$(func_parser_value "${lines[4]}")
# inference params
# inference=$(func_parser "${lines[5]}")
devices=$(func_parser_value "${lines[6]}")
use_mkldnn_list=$(func_parser_value "${lines[7]}")
cpu_threads_list=$(func_parser_value "${lines[8]}")
rec_batch_size_list=$(func_parser_value "${lines[9]}")
gpu_trt_list=$(func_parser_value "${lines[10]}")
gpu_precision_list=$(func_parser_value "${lines[11]}")
slim_trainer_list=$(func_parser_value "${lines[12]}")
infer_gpu_id=$(func_parser_value "${lines[12]}")
log_path=$(func_parser_value "${lines[13]}")
status_log="${log_path}/result.log"

# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer']
MODE=$2

# prepare pretrained weights and dataset
wget -nc -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_5_pretrained.pdparams
wget -nc -P ./pretrain_models/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_db_v2.0_train.tar
cd pretrain_models && tar xf det_mv3_db_v2.0_train.tar && cd ../

# install requirements
${python} -m pip install pynvml
${python} -m pip install psutil
${python} -m pip install GPUtil

if [ ${MODE} = "lite_train_infer" ];then
    # pretrain lite train data
    rm -rf ./train_data/icdar2015
    wget -nc -P ./train_data/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/icdar2015_lite.tar
    cd ./train_data/ && tar xf icdar2015_lite.tar
    ln -s ./icdar2015_lite ./icdar2015
    cd ../
    epoch=10
    eval_batch_step=10
elif [ ${MODE} = "whole_train_infer" ];then
    rm -rf ./train_data/icdar2015
    wget -nc -P ./train_data/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/icdar2015.tar
    cd ./train_data/ && tar xf icdar2015.tar && cd ../
    epoch=500
    eval_batch_step=200
elif [ ${MODE} = "whole_infer" ];then
    rm -rf ./train_data/icdar2015
    wget -nc -P ./train_data/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/icdar2015_infer.tar
    cd ./train_data/ && tar xf icdar2015_infer.tar
    ln -s ./icdar2015_infer ./icdar2015
    cd ../
    epoch=10
    eval_batch_step=10
else
    rm -rf ./train_data/icdar2015
    wget -nc -P ./train_data https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/ch_det_data_50.tar
    if [ ${model_name} = "ocr_det" ]; then
        eval_model_name="ch_ppocr_mobile_v2.0_det_train"
        wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar
        cd ./inference && tar xf ${eval_model_name}.tar && cd ../
    else
        eval_model_name="ch_ppocr_mobile_v2.0_rec_train"
        wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_train.tar
        cd ./inference && tar xf ${eval_model_name}.tar && cd ../
    fi
fi

paddle_info="$(${python} -c "import paddle;print(f'paddle_version:{paddle.__version__}');print(f'paddle_commit:{paddle.__git_commit__}')")"
echo -e "\033[33m $paddle_info \033[0m" | tee -a ${status_log}
cpu_model=`cat /proc/cpuinfo | grep "model name" | awk -F ':' '{print $2}' | sort | uniq`
echo -e "\033[33m cpu_info: $cpu_model \033[0m" | tee -a ${status_log}
ip=`ifconfig | grep -A 1 'eth0' | grep 'inet' | awk -F ':' '{print $2}' | awk '{print $1}'`
echo -e "\033[33m ip_info: $ip \033[0m" | tee -a ${status_log}

function status_check(){
    last_status=$1   # the exit code
    run_model=$2
    run_command=$3
    run_log=$4
    if [ $last_status -eq 0 ]; then
        echo -e "\033[33m ${run_model} successfully with command - ${run_command}! \033[0m" | tee -a ${run_log}
    else
        echo -e "\033[33m ${run_model} failed with command - ${run_command}! \033[0m" | tee -a ${run_log}
    fi
}
IFS='|'
for train_model in ${train_model_list[*]}; do
    if [ ${train_model} = "ocr_det" ];then
        ...

@@ -113,61 +134,5 @@ for train_model in ${train_model_list[*]}; do
            cd ./inference && tar xf ${eval_model_name}.tar && cd ../
        fi
    fi
    save_log_path="${log_path}/${eval_model_name}"
    command="${python} tools/eval.py -c ${yml_file} -o Global.pretrained_model='./inference/${eval_model_name}/best_accuracy' Global.save_model_dir=${save_log_path} Eval.dataset.data_dir=${data_dir} Eval.dataset.label_file_list=${data_label_file}"
    ${python} tools/eval.py -c ${yml_file} -o Global.pretrained_model=./inference/${eval_model_name}/best_accuracy Global.save_model_dir=${save_log_path} Eval.dataset.data_dir=${data_dir} Eval.dataset.label_file_list=${data_label_file}
    status_check $? "${trainer}" "${command}" "${status_log}"
    command="${python} tools/export_model.py -c ${yml_file} -o Global.pretrained_model=${eval_model_name}/best_accuracy Global.save_inference_dir=${log_path}/${eval_model_name}_infer Global.save_model_dir=${save_log_path}"
    ${python} tools/export_model.py -c ${yml_file} -o Global.pretrained_model="./inference/${eval_model_name}/best_accuracy" Global.save_inference_dir="${log_path}/${eval_model_name}_infer" Global.save_model_dir=${save_log_path}
    status_check $? "${trainer}" "${command}" "${status_log}"
    if [ $? -eq 0 ]; then
        echo -e "\033[33m training of $model_name successfully!\033[0m" | tee -a ${save_log}/train.log
    else
        cat ${save_log}/train.log
        echo -e "\033[33m training of $model_name failed!\033[0m" | tee -a ${save_log}/train.log
    fi
    if [ "${model_name}" = "det" ]; then
        export rec_batch_size_list=( "1" )
        inference="tools/infer/predict_det.py"
        det_model_dir="${log_path}/${eval_model_name}_infer"
        rec_model_dir=""
    elif [ "${model_name}" = "rec" ]; then
        inference="tools/infer/predict_rec.py"
        rec_model_dir="${log_path}/${eval_model_name}_infer"
        det_model_dir=""
    fi
    # inference
    for device in ${devices[*]}; do
        if [ ${device} = "cpu" ]; then
            for use_mkldnn in ${use_mkldnn_list[*]}; do
                for threads in ${cpu_threads_list[*]}; do
                    for rec_batch_size in ${rec_batch_size_list[*]}; do
                        save_log_path="${log_path}/${model_name}_${slim_trainer}_cpu_usemkldnn_${use_mkldnn}_cputhreads_${threads}_recbatchnum_${rec_batch_size}_infer.log"
                        command="${python} ${inference} --enable_mkldnn=${use_mkldnn} --use_gpu=False --cpu_threads=${threads} --benchmark=True --det_model_dir=${det_model_dir} --rec_batch_num=${rec_batch_size} --rec_model_dir=${rec_model_dir} --image_dir=${img_dir} --save_log_path=${save_log_path}"
                        ${python} ${inference} --enable_mkldnn=${use_mkldnn} --use_gpu=False --cpu_threads=${threads} --benchmark=True --det_model_dir=${det_model_dir} --rec_batch_num=${rec_batch_size} --rec_model_dir=${rec_model_dir} --image_dir=${img_dir} --save_log_path=${save_log_path}
                        status_check $? "${trainer}" "${command}" "${status_log}"
                    done
                done
            done
        else
            # env="export CUDA_VISIBLE_DEVICES=${infer_gpu_id}"
            for use_trt in ${gpu_trt_list[*]}; do
                for precision in ${gpu_precision_list[*]}; do
                    if [ ${use_trt} = "False" ] && [ ${precision} != "fp32" ]; then
                        continue
                    fi
                    for rec_batch_size in ${rec_batch_size_list[*]}; do
                        save_log_path="${log_path}/${model_name}_${slim_trainer}_gpu_usetensorrt_${use_trt}_usefp16_${precision}_recbatchnum_${rec_batch_size}_infer.log"
                        command="${python} ${inference} --use_gpu=True --use_tensorrt=${use_trt} --precision=${precision} --benchmark=True --det_model_dir=${log_path}/${eval_model_name}_infer --rec_batch_num=${rec_batch_size} --rec_model_dir=${rec_model_dir} --image_dir=${img_dir} --save_log_path=${save_log_path}"
                        ${python} ${inference} --use_gpu=True --use_tensorrt=${use_trt} --precision=${precision} --benchmark=True --det_model_dir=${log_path}/${eval_model_name}_infer --rec_batch_num=${rec_batch_size} --rec_model_dir=${rec_model_dir} --image_dir=${img_dir} --save_log_path=${save_log_path}
                        status_check $? "${trainer}" "${command}" "${status_log}"
                    done
                done
            done
        fi
    done
done
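A typical end-to-end invocation after this commit would run the two scripts back to back (illustrative; the mode can be any of the values listed in the scripts):

# Download weights/data for the chosen mode, then run the sweep.
bash test/prepare.sh ./test/ocr_det_params.txt 'lite_train_infer'
bash test/test.sh ./test/ocr_det_params.txt 'lite_train_infer'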
test/test.sh @ 069d994c
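test.sh now drives the whole matrix from the params file: it parses each key and value list, then loops over gpu_list × autocast × trainer, running train, eval, export, and inference for each combination and recording every command's exit status via status_check.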
#!/bin/bash
FILENAME=$1
# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer', 'infer']
MODE=$2
# (removed in this commit: the pretrained-weights and dataset download steps and
#  the img_dir setup; they now live in test/prepare.sh)
dataline=$(cat ${FILENAME})

# parser params
IFS=$'\n'
lines=(${dataline})
function func_parser_key(){
    strs=$1
    IFS=":"
    array=(${strs})
    tmp=${array[0]}
    echo ${tmp}
}
function func_parser_value(){
    strs=$1
    IFS=":"
    array=(${strs})
    tmp=${array[1]}
    echo ${tmp}
}
# (removed in this commit: the old func_parser-based parameter parsing, the
#  pip installs including paddlesim==2.0.0, and the paddle/cpu/ip info logging;
#  environment setup is now handled by test/prepare.sh)
function status_check(){
    last_status=$1   # the exit code
    run_command=$2
    run_log=$3
    if [ $last_status -eq 0 ]; then
        echo -e "\033[33m Run successfully with command - ${run_command}!  \033[0m" | tee -a ${run_log}
    else
        echo -e "\033[33m Run failed with command - ${run_command}!  \033[0m" | tee -a ${run_log}
    fi
}
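# (illustrative usage, not part of the original script: every step below is
#  launched with `eval` and then reported through status_check)
#   eval "${python} tools/train.py -c configs/det/det_mv3_db.yml"
#   status_check $? "${python} tools/train.py -c configs/det/det_mv3_db.yml" "${status_log}"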
# (removed in this commit: the old per-train_model yml_file selection and the
#  old inline GPU-list handling; both are replaced by the params-file-driven
#  flow below)
IFS=$'\n'
# The training params
model_name=$(func_parser_value "${lines[0]}")
python=$(func_parser_value "${lines[1]}")
gpu_list=$(func_parser_value "${lines[2]}")
autocast_list=$(func_parser_value "${lines[3]}")
autocast_key=$(func_parser_key "${lines[3]}")
epoch_key=$(func_parser_key "${lines[4]}")
save_model_key=$(func_parser_key "${lines[5]}")
save_infer_key=$(func_parser_key "${lines[6]}")
train_batch_key=$(func_parser_key "${lines[7]}")
train_use_gpu_key=$(func_parser_key "${lines[8]}")
pretrain_model_key=$(func_parser_key "${lines[9]}")
trainer_list=$(func_parser_value "${lines[10]}")
norm_trainer=$(func_parser_value "${lines[11]}")
pact_trainer=$(func_parser_value "${lines[12]}")
fpgm_trainer=$(func_parser_value "${lines[13]}")
distill_trainer=$(func_parser_value "${lines[14]}")
eval_py=$(func_parser_value "${lines[15]}")
norm_export=$(func_parser_value "${lines[16]}")
pact_export=$(func_parser_value "${lines[17]}")
fpgm_export=$(func_parser_value "${lines[18]}")
distill_export=$(func_parser_value "${lines[19]}")
inference_py=$(func_parser_value "${lines[20]}")
use_gpu_key=$(func_parser_key "${lines[21]}")
use_gpu_list=$(func_parser_value "${lines[21]}")
use_mkldnn_key=$(func_parser_key "${lines[22]}")
use_mkldnn_list=$(func_parser_value "${lines[22]}")
cpu_threads_key=$(func_parser_key "${lines[23]}")
cpu_threads_list=$(func_parser_value "${lines[23]}")
batch_size_key=$(func_parser_key "${lines[24]}")
batch_size_list=$(func_parser_value "${lines[24]}")
use_trt_key=$(func_parser_key "${lines[25]}")
use_trt_list=$(func_parser_value "${lines[25]}")
precision_key=$(func_parser_key "${lines[26]}")
precision_list=$(func_parser_value "${lines[26]}")
model_dir_key=$(func_parser_key "${lines[27]}")
image_dir_key=$(func_parser_key "${lines[28]}")
save_log_key=$(func_parser_key "${lines[29]}")

LOG_PATH="./test/output"
mkdir -p ${LOG_PATH}
status_log="${LOG_PATH}/results.log"
if [ ${MODE} = "lite_train_infer" ]; then
    export infer_img_dir="./train_data/icdar2015/text_localization/ch4_test_images/"
    export epoch_num=10
elif [ ${MODE} = "whole_infer" ]; then
    export infer_img_dir="./train_data/icdar2015/text_localization/ch4_test_images/"
    export epoch_num=10
elif [ ${MODE} = "whole_train_infer" ]; then
    export infer_img_dir="./train_data/icdar2015/text_localization/ch4_test_images/"
    export epoch_num=300
else
    export infer_img_dir="./inference/ch_det_data_50/all-sum-510"
    export infer_model_dir="./inference/ch_ppocr_mobile_v2.0_det_train/best_accuracy"
fi
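# func_inference sweeps an exported model over every inference setting in the
# params file. Arguments: $1 python binary, $2 inference script, $3 model dir,
# $4 log dir, $5 image dir.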
function func_inference(){
    IFS='|'
    _python=$1
    _script=$2
    _model_dir=$3
    _log_path=$4
    _img_dir=$5
    # inference
    for use_gpu in ${use_gpu_list[*]}; do
        if [ ${use_gpu} = "False" ]; then
            for use_mkldnn in ${use_mkldnn_list[*]}; do
                for threads in ${cpu_threads_list[*]}; do
                    for batch_size in ${batch_size_list[*]}; do
                        _save_log_path="${_log_path}/infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_batchsize_${batch_size}"
                        command="${_python} ${_script} ${use_gpu_key}=${use_gpu} ${use_mkldnn_key}=${use_mkldnn} ${cpu_threads_key}=${threads} ${model_dir_key}=${_model_dir} ${batch_size_key}=${batch_size} ${image_dir_key}=${_img_dir} ${save_log_key}=${_save_log_path}"
                        eval $command
                        status_check $? "${command}" "${status_log}"
                    done
                done
            done
        else
            # (removed in this commit: the old auto_cast/slim_trainer training,
            #  eval, export and inference loops keyed to the deleted
            #  paddleocr_ci_params.txt; the GPU sweep below replaces them)
            for use_trt in ${use_trt_list[*]}; do
                for precision in ${precision_list[*]}; do
                    if [ ${use_trt} = "False" ] && [ ${precision} != "fp32" ]; then
                        continue
                    fi
                    for batch_size in ${batch_size_list[*]}; do
                        _save_log_path="${_log_path}/infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}"
                        command="${_python} ${_script} ${use_gpu_key}=${use_gpu} ${use_trt_key}=${use_trt} ${precision_key}=${precision} ${model_dir_key}=${_model_dir} ${batch_size_key}=${batch_size} ${image_dir_key}=${_img_dir} ${save_log_key}=${_save_log_path}"
                        eval $command
                        status_check $? "${command}" "${status_log}"
                    done
                done
            done
        fi
    done
}
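# (illustrative, assuming the values in test/ocr_det_params.txt) one CPU
# iteration of func_inference above expands to roughly:
#   python3.7 tools/infer/predict_det.py --use_gpu=False --enable_mkldnn=True \
#       --cpu_threads=6 --det_model_dir=<exported_model_dir> --rec_batch_num=1 \
#       --image_dir=${infer_img_dir} \
#       --save_log_path=./test/output/infer_cpu_usemkldnn_True_threads_6_batchsize_1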
if [ ${MODE} != "infer" ]; then
    IFS="|"
    for gpu in ${gpu_list[*]}; do
        use_gpu=True
        if [ ${gpu} = "-1" ]; then
            use_gpu=False
            env=""
        elif [ ${#gpu} -le 1 ]; then
            env="export CUDA_VISIBLE_DEVICES=${gpu}"
        elif [ ${#gpu} -le 15 ]; then
            IFS=","
            array=(${gpu})
            env="export CUDA_VISIBLE_DEVICES=${array[0]}"
            IFS="|"
        else
            IFS=";"
            array=(${gpu})
            ips=${array[0]}
            gpu=${array[1]}
            IFS="|"
        fi
        for autocast in ${autocast_list[*]}; do
            for trainer in ${trainer_list[*]}; do
                if [ ${trainer} = "pact" ]; then
                    run_train=${pact_trainer}
                    run_export=${pact_export}
                elif [ ${trainer} = "fpgm" ]; then
                    run_train=${fpgm_trainer}
                    run_export=${fpgm_export}
                elif [ ${trainer} = "distill" ]; then
                    run_train=${distill_trainer}
                    run_export=${distill_export}
                else
                    run_train=${norm_trainer}
                    run_export=${norm_export}
                fi
                if [ ${run_train} = "null" ]; then
                    continue
                fi
                if [ ${run_export} = "null" ]; then
                    continue
                fi

                save_log="${LOG_PATH}/${trainer}_gpus_${gpu}_autocast_${autocast}"
                if [ ${#gpu} -le 2 ]; then  # epoch_num #TODO
                    cmd="${python} ${run_train} ${train_use_gpu_key}=${use_gpu} ${autocast_key}=${autocast} ${epoch_key}=${epoch_num} ${save_model_key}=${save_log}"
                elif [ ${#gpu} -le 15 ]; then
                    cmd="${python} -m paddle.distributed.launch --gpus=${gpu} ${run_train} ${autocast_key}=${autocast} ${epoch_key}=${epoch_num} ${save_model_key}=${save_log}"
                else
                    cmd="${python} -m paddle.distributed.launch --ips=${ips} --gpus=${gpu} ${run_train} ${autocast_key}=${autocast} ${epoch_key}=${epoch_num} ${save_model_key}=${save_log}"
                fi
                # run train
                eval $cmd
                status_check $? "${cmd}" "${status_log}"

                # run eval
                eval_cmd="${python} ${eval_py} ${save_model_key}=${save_log} ${pretrain_model_key}=${save_log}/latest"
                eval $eval_cmd
                status_check $? "${eval_cmd}" "${status_log}"

                # run export model
                save_infer_path="${save_log}"
                export_cmd="${python} ${run_export} ${save_model_key}=${save_log} ${pretrain_model_key}=${save_log}/latest ${save_infer_key}=${save_infer_path}"
                eval $export_cmd
                status_check $? "${export_cmd}" "${status_log}"

                # run inference
                save_infer_path="${save_log}"
                func_inference "${python}" "${inference_py}" "${save_infer_path}" "${LOG_PATH}" "${infer_img_dir}"
            done
        done
    done
else
    save_infer_path="${LOG_PATH}/${MODE}"
    run_export=${norm_export}
    export_cmd="${python} ${run_export} ${save_model_key}=${save_infer_path} ${pretrain_model_key}=${infer_model_dir} ${save_infer_key}=${save_infer_path}"
    eval $export_cmd
    status_check $? "${export_cmd}" "${status_log}"
    # run inference
    func_inference "${python}" "${inference_py}" "${save_infer_path}" "${LOG_PATH}" "${infer_img_dir}"
fi
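The branches on ${#gpu} above classify each gpu_list entry by its string length; a sketch of the mapping (the multi-machine "ips;gpus" form is inferred from the ';' split and is an assumption about intended use):

# "-1"          -> CPU only (use_gpu=False)
# "0"           -> single GPU: export CUDA_VISIBLE_DEVICES=0
# "0,1"         -> one machine, multi-GPU: paddle.distributed.launch --gpus=0,1
# "ip1,ip2;0,1" -> multi-machine: --ips=ip1,ip2 --gpus=0,1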