weixin_41840029 / PaddleOCR (forked from PaddlePaddle / PaddleOCR)
Commit 9f135336
Authored on Oct 12, 2021 by LDOUBLEV

opt tests

Parent: 0bf6a75e

Showing 9 changed files with 126 additions and 105 deletions (+126 -105).
tests/configs/ocr_ppocr_det_mobile_params.txt    +18  -2
tests/configs/ocr_ppocr_det_server_params.txt     +0  -0
tests/configs/ocr_ppocr_rec_mobile_params.txt     +0  -0
tests/configs/ocr_ppocr_rec_server_params.txt     +0  -0
tests/configs/ocr_ppocr_sys_mobile_params.txt     +0  -0
tests/configs/ocr_ppocr_sys_server_params.txt     +0  -0
tests/debug.sh                                   +16  -0
tests/ocr_kl_quant_params.txt                     +0 -51
tests/test.sh                                    +92 -52
tests/ocr_det_params.txt → tests/configs/ocr_ppocr_det_mobile_params.txt

@@ -46,7 +46,7 @@ inference:tools/infer/predict_det.py
 --precision:fp32|fp16|int8
 --det_model_dir:
 --image_dir:./inference/ch_det_data_50/all-sum-510/
---save_log_path:null
+null:null
 --benchmark:True
 null:null
 ===========================cpp_infer_params===========================
@@ -79,4 +79,20 @@ op.det.local_service_conf.thread_num:1|6
 op.det.local_service_conf.use_trt:False|True
 op.det.local_service_conf.precision:fp32|fp16|int8
 pipline:pipeline_http_client.py --image_dir=../../doc/imgs
+===========================kl_quant_params===========================
+infer_model:./inference/ch_ppocr_mobile_v2.0_det_infer/
+infer_export:tools/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o
+infer_quant:False
+inference:tools/infer/predict_det.py
+--use_gpu:True|False
+--enable_mkldnn:True|False
+--cpu_threads:1|6
+--rec_batch_num:1
+--use_tensorrt:False|True
+--precision:fp32|fp16|int8
+--det_model_dir:
+--image_dir:./inference/ch_det_data_50/all-sum-510/
+null:null
+--benchmark:True
+null:null
+null:null
\ No newline at end of file
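The params files above are plain `key:value` lines, where `|` separates alternative values to sweep over. A minimal sketch of splitting one such line (illustrative only; the actual parsing is done by the func_parser_key/func_parser_value helpers in tests/test.sh):

```shell
# split a params line at the first colon; '|' separates alternatives to iterate over
line='--use_gpu:True|False'
key=${line%%:*}     # everything before the first ':'  -> "--use_gpu"
value=${line#*:}    # everything after the first ':'   -> "True|False"
for choice in $(printf '%s' "$value" | tr '|' ' '); do
    echo "$key=$choice"
done
```

This is the pattern the test harness sweeps with: one run per `|`-separated choice.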
tests/ocr_det_server_params.txt → tests/configs/ocr_ppocr_det_server_params.txt
File moved.

tests/ocr_rec_params.txt → tests/configs/ocr_ppocr_rec_mobile_params.txt
File moved.

tests/ocr_rec_server_params.txt → tests/configs/ocr_ppocr_rec_server_params.txt
File moved.

tests/ocr_ppocr_mobile_params.txt → tests/configs/ocr_ppocr_sys_mobile_params.txt
File moved.

tests/ocr_ppocr_server_params.txt → tests/configs/ocr_ppocr_sys_server_params.txt
File moved.
tests/debug.sh (new file, 0 → 100644)

#!/bin/bash
FILENAME=$1
# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer', 'infer', 'cpp_infer', 'serving_infer']
MODE=$2

if [ ${MODE} = "cpp_infer" ]; then
    dataline=$(awk 'NR==52, NR==66{print}' $FILENAME)
elif [ ${MODE} = "serving_infer" ]; then
    dataline=$(awk 'NR==67, NR==81{print}' $FILENAME)
else
    dataline=$(awk 'NR==1, NR==51{print}' $FILENAME)
fi

count=0
for line in ${dataline[*]}; do
    let count++
    echo $count $line
done
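debug.sh slices the params file with awk's `NR==a, NR==b` range pattern, the same trick test.sh uses to select the section for each mode. A tiny self-contained illustration (the input lines are invented):

```shell
# awk 'NR==2, NR==4{print}' emits only lines 2 through 4 of its input
printf 'one\ntwo\nthree\nfour\nfive\n' | awk 'NR==2, NR==4{print}'
# prints:
# two
# three
# four
```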
tests/ocr_kl_quant_params.txt (deleted, 100644 → 0)
===========================train_params===========================
model_name:ocr_system
python:python3.7
gpu_list:null
Global.use_gpu:null
Global.auto_cast:null
Global.epoch_num:null
Global.save_model_dir:./output/
Train.loader.batch_size_per_card:null
Global.pretrained_model:null
train_model_name:null
train_infer_img_dir:null
null:null
##
trainer:
norm_train:null
pact_train:null
fpgm_train:null
distill_train:null
null:null
null:null
##
===========================eval_params===========================
eval:null
null:null
##
===========================infer_params===========================
Global.save_inference_dir:./output/
Global.pretrained_model:
norm_export:null
quant_export:null
fpgm_export:null
distill_export:null
export1:null
export2:null
##
infer_model:./inference/ch_ppocr_mobile_v2.0_det_infer/
kl_quant:deploy/slim/quantization/quant_kl.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o
infer_quant:True
inference:tools/infer/predict_det.py
--use_gpu:True|False
--enable_mkldnn:True|False
--cpu_threads:1|6
--rec_batch_num:1
--use_tensorrt:False|True
--precision:fp32|fp16|int8
--det_model_dir:
--image_dir:./inference/ch_det_data_50/all-sum-510/
--save_log_path:null
--benchmark:True
null:null
tests/test.sh

 #!/bin/bash
 FILENAME=$1
-# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer', 'infer', 'cpp_infer']
+# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer', 'infer', 'cpp_infer', 'serving_infer', 'klquant_infer']
 MODE=$2
 
 dataline=$(cat ${FILENAME})
 if [ ${MODE} = "cpp_infer" ]; then
     dataline=$(awk 'NR==67, NR==81{print}' $FILENAME)
 elif [ ${MODE} = "serving_infer" ]; then
     dataline=$(awk 'NR==52, NR==66{print}' $FILENAME)
+elif [ ${MODE} = "klquant_infer" ]; then
+    dataline=$(awk 'NR==82, NR==98{print}' $FILENAME)
 else
     dataline=$(awk 'NR==1, NR==51{print}' $FILENAME)
 fi
 
 # parser params
 IFS=$'\n'
@@ -144,61 +151,93 @@ benchmark_key=$(func_parser_key "${lines[49]}")
 benchmark_value=$(func_parser_value "${lines[49]}")
 infer_key1=$(func_parser_key "${lines[50]}")
 infer_value1=$(func_parser_value "${lines[50]}")
-# parser serving
-trans_model_py=$(func_parser_value "${lines[67]}")
-infer_model_dir_key=$(func_parser_key "${lines[68]}")
-infer_model_dir_value=$(func_parser_value "${lines[68]}")
-model_filename_key=$(func_parser_key "${lines[69]}")
-model_filename_value=$(func_parser_value "${lines[69]}")
-params_filename_key=$(func_parser_key "${lines[70]}")
-params_filename_value=$(func_parser_value "${lines[70]}")
-serving_server_key=$(func_parser_key "${lines[71]}")
-serving_server_value=$(func_parser_value "${lines[71]}")
-serving_client_key=$(func_parser_key "${lines[72]}")
-serving_client_value=$(func_parser_value "${lines[72]}")
-serving_dir_value=$(func_parser_value "${lines[73]}")
-web_service_py=$(func_parser_value "${lines[74]}")
-web_use_gpu_key=$(func_parser_key "${lines[75]}")
-web_use_gpu_list=$(func_parser_value "${lines[75]}")
-web_use_mkldnn_key=$(func_parser_key "${lines[76]}")
-web_use_mkldnn_list=$(func_parser_value "${lines[76]}")
-web_cpu_threads_key=$(func_parser_key "${lines[77]}")
-web_cpu_threads_list=$(func_parser_value "${lines[77]}")
-web_use_trt_key=$(func_parser_key "${lines[78]}")
-web_use_trt_list=$(func_parser_value "${lines[78]}")
-web_precision_key=$(func_parser_key "${lines[79]}")
-web_precision_list=$(func_parser_value "${lines[79]}")
-pipeline_py=$(func_parser_value "${lines[80]}")
+# parser serving
+if [ ${MODE} = "klquant_infer" ]; then
+    # parser inference model
+    infer_model_dir_list=$(func_parser_value "${lines[1]}")
+    infer_export_list=$(func_parser_value "${lines[2]}")
+    infer_is_quant=$(func_parser_value "${lines[3]}")
+    # parser inference
+    inference_py=$(func_parser_value "${lines[4]}")
+    use_gpu_key=$(func_parser_key "${lines[5]}")
+    use_gpu_list=$(func_parser_value "${lines[5]}")
+    use_mkldnn_key=$(func_parser_key "${lines[6]}")
+    use_mkldnn_list=$(func_parser_value "${lines[6]}")
+    cpu_threads_key=$(func_parser_key "${lines[7]}")
+    cpu_threads_list=$(func_parser_value "${lines[7]}")
+    batch_size_key=$(func_parser_key "${lines[8]}")
+    batch_size_list=$(func_parser_value "${lines[8]}")
+    use_trt_key=$(func_parser_key "${lines[9]}")
+    use_trt_list=$(func_parser_value "${lines[9]}")
+    precision_key=$(func_parser_key "${lines[10]}")
+    precision_list=$(func_parser_value "${lines[10]}")
+    infer_model_key=$(func_parser_key "${lines[11]}")
+    image_dir_key=$(func_parser_key "${lines[12]}")
+    infer_img_dir=$(func_parser_value "${lines[12]}")
+    save_log_key=$(func_parser_key "${lines[13]}")
+    benchmark_key=$(func_parser_key "${lines[14]}")
+    benchmark_value=$(func_parser_value "${lines[14]}")
+    infer_key1=$(func_parser_key "${lines[15]}")
+    infer_value1=$(func_parser_value "${lines[15]}")
+fi
+
+# parser serving
+if [ ${MODE} = "server_infer" ]; then
+    trans_model_py=$(func_parser_value "${lines[1]}")
+    infer_model_dir_key=$(func_parser_key "${lines[2]}")
+    infer_model_dir_value=$(func_parser_value "${lines[2]}")
+    model_filename_key=$(func_parser_key "${lines[3]}")
+    model_filename_value=$(func_parser_value "${lines[3]}")
+    params_filename_key=$(func_parser_key "${lines[4]}")
+    params_filename_value=$(func_parser_value "${lines[4]}")
+    serving_server_key=$(func_parser_key "${lines[5]}")
+    serving_server_value=$(func_parser_value "${lines[5]}")
+    serving_client_key=$(func_parser_key "${lines[6]}")
+    serving_client_value=$(func_parser_value "${lines[6]}")
+    serving_dir_value=$(func_parser_value "${lines[7]}")
+    web_service_py=$(func_parser_value "${lines[8]}")
+    web_use_gpu_key=$(func_parser_key "${lines[9]}")
+    web_use_gpu_list=$(func_parser_value "${lines[9]}")
+    web_use_mkldnn_key=$(func_parser_key "${lines[10]}")
+    web_use_mkldnn_list=$(func_parser_value "${lines[10]}")
+    web_cpu_threads_key=$(func_parser_key "${lines[11]}")
+    web_cpu_threads_list=$(func_parser_value "${lines[11]}")
+    web_use_trt_key=$(func_parser_key "${lines[12]}")
+    web_use_trt_list=$(func_parser_value "${lines[12]}")
+    web_precision_key=$(func_parser_key "${lines[13]}")
+    web_precision_list=$(func_parser_value "${lines[13]}")
+    pipeline_py=$(func_parser_value "${lines[14]}")
+fi
 
 if [ ${MODE} = "cpp_infer" ]; then
     # parser cpp inference model
-    cpp_infer_model_dir_list=$(func_parser_value "${lines[53]}")
-    cpp_infer_is_quant=$(func_parser_value "${lines[54]}")
+    cpp_infer_model_dir_list=$(func_parser_value "${lines[1]}")
+    cpp_infer_is_quant=$(func_parser_value "${lines[2]}")
     # parser cpp inference
-    inference_cmd=$(func_parser_value "${lines[55]}")
-    cpp_use_gpu_key=$(func_parser_key "${lines[56]}")
-    cpp_use_gpu_list=$(func_parser_value "${lines[56]}")
-    cpp_use_mkldnn_key=$(func_parser_key "${lines[57]}")
-    cpp_use_mkldnn_list=$(func_parser_value "${lines[57]}")
-    cpp_cpu_threads_key=$(func_parser_key "${lines[58]}")
-    cpp_cpu_threads_list=$(func_parser_value "${lines[58]}")
-    cpp_batch_size_key=$(func_parser_key "${lines[59]}")
-    cpp_batch_size_list=$(func_parser_value "${lines[59]}")
-    cpp_use_trt_key=$(func_parser_key "${lines[60]}")
-    cpp_use_trt_list=$(func_parser_value "${lines[60]}")
-    cpp_precision_key=$(func_parser_key "${lines[61]}")
-    cpp_precision_list=$(func_parser_value "${lines[61]}")
-    cpp_infer_model_key=$(func_parser_key "${lines[62]}")
-    cpp_image_dir_key=$(func_parser_key "${lines[63]}")
-    cpp_infer_img_dir=$(func_parser_value "${lines[63]}")
-    cpp_infer_key1=$(func_parser_key "${lines[64]}")
-    cpp_infer_value1=$(func_parser_value "${lines[64]}")
-    cpp_benchmark_key=$(func_parser_key "${lines[65]}")
-    cpp_benchmark_value=$(func_parser_value "${lines[65]}")
+    inference_cmd=$(func_parser_value "${lines[3]}")
+    cpp_use_gpu_key=$(func_parser_key "${lines[4]}")
+    cpp_use_gpu_list=$(func_parser_value "${lines[4]}")
+    cpp_use_mkldnn_key=$(func_parser_key "${lines[5]}")
+    cpp_use_mkldnn_list=$(func_parser_value "${lines[5]}")
+    cpp_cpu_threads_key=$(func_parser_key "${lines[6]}")
+    cpp_cpu_threads_list=$(func_parser_value "${lines[6]}")
+    cpp_batch_size_key=$(func_parser_key "${lines[7]}")
+    cpp_batch_size_list=$(func_parser_value "${lines[7]}")
+    cpp_use_trt_key=$(func_parser_key "${lines[8]}")
+    cpp_use_trt_list=$(func_parser_value "${lines[8]}")
+    cpp_precision_key=$(func_parser_key "${lines[9]}")
+    cpp_precision_list=$(func_parser_value "${lines[9]}")
+    cpp_infer_model_key=$(func_parser_key "${lines[10]}")
+    cpp_image_dir_key=$(func_parser_key "${lines[11]}")
+    cpp_infer_img_dir=$(func_parser_value "${lines[12]}")
+    cpp_infer_key1=$(func_parser_key "${lines[13]}")
+    cpp_infer_value1=$(func_parser_value "${lines[13]}")
+    cpp_benchmark_key=$(func_parser_key "${lines[14]}")
+    cpp_benchmark_value=$(func_parser_value "${lines[14]}")
 fi
 
 LOG_PATH="./tests/output"
 mkdir -p ${LOG_PATH}
 status_log="${LOG_PATH}/results.log"
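The parser blocks above index into a `lines` array that test.sh builds by splitting the selected file slice on newlines with `IFS=$'\n'`; this is why the same setting can move from an absolute index like `lines[67]` to `lines[1]` once each mode gets its own awk slice. A minimal bash sketch of that splitting (the config contents here are invented):

```shell
#!/bin/bash
# split a multi-line string into an array, as test.sh does with its config slice
dataline=$(printf 'model_name:ocr_det\npython:python3.7\ngpu_list:0|1\n')
IFS=$'\n'
lines=(${dataline})          # one array element per config line
echo "${lines[1]}"           # prints: python:python3.7
```

Because indexing is relative to the slice, each mode's parser can count from `lines[1]` regardless of where its section sits in the full file.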
@@ -414,7 +453,7 @@ function func_cpp_inference(){
     done
 }
 
-if [ ${MODE} = "infer" ]; then
+if [ ${MODE} = "infer" ] || [ ${MODE} = "klquant_infer" ]; then
     GPUID=$3
     if [ ${#GPUID} -le 0 ]; then
         env=" "
@@ -447,7 +486,6 @@ if [ ${MODE} = "infer" ]; then
         func_inference "${python}" "${inference_py}" "${save_infer_dir}" "${LOG_PATH}" "${infer_img_dir}" ${is_quant}
         Count=$(($Count + 1))
     done
 elif [ ${MODE} = "cpp_infer" ]; then
     GPUID=$3
     if [ ${#GPUID} -le 0 ]; then
@@ -481,6 +519,8 @@ elif [ ${MODE} = "serving_infer" ]; then
     #run serving
     func_serving "${web_service_cmd}"
 else
     IFS="|"
     export Count=0