Commit 033cc4cf, weixin_41840029/PaddleOCR (fork of PaddlePaddle/PaddleOCR)
Authored by MissPenguin on Oct 12, 2021; committed via GitHub

Merge pull request #4294 from LDOUBLEV/dygraph

[full chain] opt params.txt

Parents: 38282ab8, 22d59470
Showing 13 changed files with 128 additions and 115 deletions (+128, -115):

  tests/configs/ppocr_det_mobile_params.txt             +19  -3
  tests/configs/ppocr_det_server_params.txt             +6   -6
  tests/configs/ppocr_rec_mobile_params.txt             +0   -0
  tests/configs/ppocr_rec_server_params.txt             +0   -0
  tests/configs/ppocr_sys_mobile_params.txt             +0   -0
  tests/configs/ppocr_sys_server_params.txt             +0   -0
  tests/ocr_kl_quant_params.txt                         +0   -51
  tests/prepare.sh                                      +11  -3
  tests/results/ppocr_det_mobile_results_fp16.txt       +0   -0
  tests/results/ppocr_det_mobile_results_fp16_cpp.txt   +0   -0
  tests/results/ppocr_det_mobile_results_fp32.txt       +0   -0
  tests/results/ppocr_det_mobile_results_fp32_cpp.txt   +0   -0
  tests/test.sh                                         +92  -52
tests/ocr_det_params.txt → tests/configs/ppocr_det_mobile_params.txt

@@ -40,13 +40,13 @@ infer_quant:False
 inference:tools/infer/predict_det.py
 --use_gpu:True|False
 --enable_mkldnn:True|False
---cpu_threads:6
+--cpu_threads:1|6
 --rec_batch_num:1
 --use_tensorrt:False|True
 --precision:fp32|fp16|int8
 --det_model_dir:
 --image_dir:./inference/ch_det_data_50/all-sum-510/
---save_log_path:null
+null:null
 --benchmark:True
 null:null
 ===========================cpp_infer_params===========================
@@ -79,4 +79,20 @@ op.det.local_service_conf.thread_num:1|6
 op.det.local_service_conf.use_trt:False|True
 op.det.local_service_conf.precision:fp32|fp16|int8
 pipline:pipeline_http_client.py --image_dir=../../doc/imgs
+===========================kl_quant_params===========================
+infer_model:./inference/ch_ppocr_mobile_v2.0_det_infer/
+infer_export:tools/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o
+infer_quant:False
+inference:tools/infer/predict_det.py
+--use_gpu:True|False
+--enable_mkldnn:True|False
+--cpu_threads:1|6
+--rec_batch_num:1
+--use_tensorrt:False|True
+--precision:fp32|fp16|int8
+--det_model_dir:
+--image_dir:./inference/ch_det_data_50/all-sum-510/
+null:null
+--benchmark:True
+null:null
+null:null
\ No newline at end of file
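Each line of these params files follows a `key:value` convention, and `|` separates alternative values that test.sh iterates over (e.g. `--cpu_threads:1|6` exercises both 1 and 6 CPU threads). A minimal sketch of that split, assuming the same first-colon rule the repo's func_parser_key/func_parser_value helpers appear to implement (this is an illustration, not the repo's actual code):

```shell
#!/bin/bash
# Illustrative sketch: split a params line at the first ':' and
# expand '|'-separated alternatives.
line='--cpu_threads:1|6'
key="${line%%:*}"      # text before the first ':'  -> --cpu_threads
value="${line#*:}"     # text after the first ':'   -> 1|6
IFS='|' read -r -a alts <<< "${value}"
echo "key=${key}"
for t in "${alts[@]}"; do
    echo "alt=${t}"    # prints alt=1 then alt=6
done
```

The first-colon split is what lets values such as `tools/infer/predict_det.py` (which contain no further colons) and empty values such as `--det_model_dir:` round-trip cleanly.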
tests/ocr_det_server_params.txt → tests/configs/ppocr_det_server_params.txt

@@ -12,10 +12,10 @@ train_model_name:latest
 train_infer_img_dir:./train_data/icdar2015/text_localization/ch4_test_images/
 null:null
 ##
-trainer:norm_train|pact_train
-norm_train:tools/train.py -c tests/configs/det_r50_vd_db.yml -o Global.pretrained_model=""
-pact_train:null
-fpgm_train:null
+trainer:norm_train|pact_train|fpgm_export
+norm_train:tools/train.py -c tests/configs/det_r50_vd_db.yml -o
+quant_export:deploy/slim/quantization/export_model.py -c tests/configs/det_r50_vd_db.yml -o
+fpgm_export:deploy/slim/prune/export_prune_model.py -c tests/configs/det_r50_vd_db.yml -o
 distill_train:null
 null:null
 null:null
@@ -34,8 +34,8 @@ distill_export:null
 export1:null
 export2:null
 ##
-infer_model:./inference/ch_ppocr_server_v2.0_det_infer/
-infer_export:null
+train_model:./inference/ch_ppocr_server_v2.0_det_train/best_accuracy
+infer_export:tools/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_res18_db_v2.0.yml -o
 infer_quant:False
 inference:tools/infer/predict_det.py
 --use_gpu:True|False
tests/ocr_rec_params.txt → tests/configs/ppocr_rec_mobile_params.txt (file moved)
tests/ocr_rec_server_params.txt → tests/configs/ppocr_rec_server_params.txt (file moved)
tests/ocr_ppocr_mobile_params.txt → tests/configs/ppocr_sys_mobile_params.txt (file moved)
tests/ocr_ppocr_server_params.txt → tests/configs/ppocr_sys_server_params.txt (file moved)
tests/ocr_kl_quant_params.txt deleted (100644 → 0)
===========================train_params===========================
model_name:ocr_system
python:python3.7
gpu_list:null
Global.use_gpu:null
Global.auto_cast:null
Global.epoch_num:null
Global.save_model_dir:./output/
Train.loader.batch_size_per_card:null
Global.pretrained_model:null
train_model_name:null
train_infer_img_dir:null
null:null
##
trainer:
norm_train:null
pact_train:null
fpgm_train:null
distill_train:null
null:null
null:null
##
===========================eval_params===========================
eval:null
null:null
##
===========================infer_params===========================
Global.save_inference_dir:./output/
Global.pretrained_model:
norm_export:null
quant_export:null
fpgm_export:null
distill_export:null
export1:null
export2:null
##
infer_model:./inference/ch_ppocr_mobile_v2.0_det_infer/
kl_quant:deploy/slim/quantization/quant_kl.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o
infer_quant:True
inference:tools/infer/predict_det.py
--use_gpu:True|False
--enable_mkldnn:True|False
--cpu_threads:1|6
--rec_batch_num:1
--use_tensorrt:False|True
--precision:fp32|fp16|int8
--det_model_dir:
--image_dir:./inference/ch_det_data_50/all-sum-510/
--save_log_path:null
--benchmark:True
null:null
tests/prepare.sh

#!/bin/bash
FILENAME=$1
-# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer', 'infer', 'cpp_infer', 'serving_infer']
+# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer', 'infer',
+#                 'cpp_infer', 'serving_infer', 'klquant_infer']
MODE=$2
dataline=$(cat ${FILENAME})
@@ -72,9 +74,9 @@ elif [ ${MODE} = "infer" ];then
     wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar
     cd ./inference && tar xf ${eval_model_name}.tar && tar xf ch_det_data_50.tar && cd ../
 elif [ ${model_name} = "ocr_server_det" ]; then
-    wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar
+    wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_train.tar
     wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/ch_det_data_50.tar
-    cd ./inference && tar xf ch_ppocr_server_v2.0_det_infer.tar && tar xf ch_det_data_50.tar && cd ../
+    cd ./inference && tar xf ch_ppocr_server_v2.0_det_train.tar && tar xf ch_det_data_50.tar && cd ../
 elif [ ${model_name} = "ocr_system_mobile" ]; then
     wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar
     wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/ch_det_data_50.tar
@@ -98,6 +100,12 @@ elif [ ${MODE} = "infer" ];then
         wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar
         cd ./inference && tar xf ${eval_model_name}.tar && tar xf rec_inference.tar && cd ../
     fi
+elif [ ${MODE} = "klquant_infer" ];then
+    if [ ${model_name} = "ocr_det" ]; then
+        wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar
+        wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/ch_det_data_50.tar
+        cd ./inference && tar xf ch_ppocr_mobile_v2.0_det_infer.tar && tar xf ch_det_data_50.tar && cd ../
+    fi
 elif [ ${MODE} = "cpp_infer" ];then
     if [ ${model_name} = "ocr_det" ]; then
         wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/ch_det_data_50.tar
tests/results/det_results_gpu_trt_fp16.txt → tests/results/ppocr_det_mobile_results_fp16.txt (file moved)

tests/results/det_results_gpu_trt_fp16_cpp.txt → tests/results/ppocr_det_mobile_results_fp16_cpp.txt (file moved)

tests/results/det_results_gpu_fp32.txt → tests/results/ppocr_det_mobile_results_fp32.txt (file moved)

tests/results/det_results_gpu_trt_fp32_cpp.txt → tests/results/ppocr_det_mobile_results_fp32_cpp.txt (file moved)
tests/test.sh

#!/bin/bash
FILENAME=$1
-# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer', 'infer', 'cpp_infer']
+# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer', 'infer', 'cpp_infer', 'serving_infer', 'klquant_infer']
MODE=$2
dataline=$(cat ${FILENAME})
if [ ${MODE} = "cpp_infer" ]; then
    dataline=$(awk 'NR==67, NR==81{print}' $FILENAME)
elif [ ${MODE} = "serving_infer" ]; then
    dataline=$(awk 'NR==52, NR==66{print}' $FILENAME)
elif [ ${MODE} = "klquant_infer" ]; then
    dataline=$(awk 'NR==82, NR==98{print}' $FILENAME)
else
    dataline=$(awk 'NR==1, NR==51{print}' $FILENAME)
fi
# parser params
IFS=$'\n'
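The `awk 'NR==a, NR==b{print}'` range pattern used here slices one mode's section out of the shared params file, so each branch parses only its own lines. A small standalone demonstration of the pattern:

```shell
#!/bin/bash
# awk range pattern: NR==2, NR==3 selects input lines 2 through 3.
printf 'a\nb\nc\nd\n' | awk 'NR==2, NR==3{print}'
# prints:
# b
# c
```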
@@ -144,61 +151,93 @@ benchmark_key=$(func_parser_key "${lines[49]}")
 benchmark_value=$(func_parser_value "${lines[49]}")
 infer_key1=$(func_parser_key "${lines[50]}")
 infer_value1=$(func_parser_value "${lines[50]}")
-# parser serving
-trans_model_py=$(func_parser_value "${lines[67]}")
-infer_model_dir_key=$(func_parser_key "${lines[68]}")
-infer_model_dir_value=$(func_parser_value "${lines[68]}")
-model_filename_key=$(func_parser_key "${lines[69]}")
-model_filename_value=$(func_parser_value "${lines[69]}")
-params_filename_key=$(func_parser_key "${lines[70]}")
-params_filename_value=$(func_parser_value "${lines[70]}")
-serving_server_key=$(func_parser_key "${lines[71]}")
-serving_server_value=$(func_parser_value "${lines[71]}")
-serving_client_key=$(func_parser_key "${lines[72]}")
-serving_client_value=$(func_parser_value "${lines[72]}")
-serving_dir_value=$(func_parser_value "${lines[73]}")
-web_service_py=$(func_parser_value "${lines[74]}")
-web_use_gpu_key=$(func_parser_key "${lines[75]}")
-web_use_gpu_list=$(func_parser_value "${lines[75]}")
-web_use_mkldnn_key=$(func_parser_key "${lines[76]}")
-web_use_mkldnn_list=$(func_parser_value "${lines[76]}")
-web_cpu_threads_key=$(func_parser_key "${lines[77]}")
-web_cpu_threads_list=$(func_parser_value "${lines[77]}")
-web_use_trt_key=$(func_parser_key "${lines[78]}")
-web_use_trt_list=$(func_parser_value "${lines[78]}")
-web_precision_key=$(func_parser_key "${lines[79]}")
-web_precision_list=$(func_parser_value "${lines[79]}")
-pipeline_py=$(func_parser_value "${lines[80]}")
+if [ ${MODE} = "klquant_infer" ]; then
+    # parser inference model
+    infer_model_dir_list=$(func_parser_value "${lines[1]}")
+    infer_export_list=$(func_parser_value "${lines[2]}")
+    infer_is_quant=$(func_parser_value "${lines[3]}")
+    # parser inference
+    inference_py=$(func_parser_value "${lines[4]}")
+    use_gpu_key=$(func_parser_key "${lines[5]}")
+    use_gpu_list=$(func_parser_value "${lines[5]}")
+    use_mkldnn_key=$(func_parser_key "${lines[6]}")
+    use_mkldnn_list=$(func_parser_value "${lines[6]}")
+    cpu_threads_key=$(func_parser_key "${lines[7]}")
+    cpu_threads_list=$(func_parser_value "${lines[7]}")
+    batch_size_key=$(func_parser_key "${lines[8]}")
+    batch_size_list=$(func_parser_value "${lines[8]}")
+    use_trt_key=$(func_parser_key "${lines[9]}")
+    use_trt_list=$(func_parser_value "${lines[9]}")
+    precision_key=$(func_parser_key "${lines[10]}")
+    precision_list=$(func_parser_value "${lines[10]}")
+    infer_model_key=$(func_parser_key "${lines[11]}")
+    image_dir_key=$(func_parser_key "${lines[12]}")
+    infer_img_dir=$(func_parser_value "${lines[12]}")
+    save_log_key=$(func_parser_key "${lines[13]}")
+    benchmark_key=$(func_parser_key "${lines[14]}")
+    benchmark_value=$(func_parser_value "${lines[14]}")
+    infer_key1=$(func_parser_key "${lines[15]}")
+    infer_value1=$(func_parser_value "${lines[15]}")
+fi
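Because each mode re-reads its own awk slice into the `lines` array, indices restart at the top of the slice; that is why this branch reads `lines[1]`..`lines[15]` instead of the absolute line numbers of the full params file. A sketch of that re-indexing, assuming `dataline` is split on newlines into `lines` as elsewhere in test.sh (the sample values are hypothetical):

```shell
#!/bin/bash
# Assumed mechanics: split a sliced dataline on newlines; array
# indices are relative to the slice, not to the whole params file.
IFS=$'\n'
dataline=$(printf 'section_header\ninfer_model:./inference/\ninfer_quant:True\n')
lines=(${dataline})
echo "${lines[1]}"   # second line of the slice: infer_model:./inference/
```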
+# parser serving
+if [ ${MODE} = "server_infer" ]; then
+    trans_model_py=$(func_parser_value "${lines[1]}")
+    infer_model_dir_key=$(func_parser_key "${lines[2]}")
+    infer_model_dir_value=$(func_parser_value "${lines[2]}")
+    model_filename_key=$(func_parser_key "${lines[3]}")
+    model_filename_value=$(func_parser_value "${lines[3]}")
+    params_filename_key=$(func_parser_key "${lines[4]}")
+    params_filename_value=$(func_parser_value "${lines[4]}")
+    serving_server_key=$(func_parser_key "${lines[5]}")
+    serving_server_value=$(func_parser_value "${lines[5]}")
+    serving_client_key=$(func_parser_key "${lines[6]}")
+    serving_client_value=$(func_parser_value "${lines[6]}")
+    serving_dir_value=$(func_parser_value "${lines[7]}")
+    web_service_py=$(func_parser_value "${lines[8]}")
+    web_use_gpu_key=$(func_parser_key "${lines[9]}")
+    web_use_gpu_list=$(func_parser_value "${lines[9]}")
+    web_use_mkldnn_key=$(func_parser_key "${lines[10]}")
+    web_use_mkldnn_list=$(func_parser_value "${lines[10]}")
+    web_cpu_threads_key=$(func_parser_key "${lines[11]}")
+    web_cpu_threads_list=$(func_parser_value "${lines[11]}")
+    web_use_trt_key=$(func_parser_key "${lines[12]}")
+    web_use_trt_list=$(func_parser_value "${lines[12]}")
+    web_precision_key=$(func_parser_key "${lines[13]}")
+    web_precision_list=$(func_parser_value "${lines[13]}")
+    pipeline_py=$(func_parser_value "${lines[14]}")
+fi
 if [ ${MODE} = "cpp_infer" ]; then
     # parser cpp inference model
-    cpp_infer_model_dir_list=$(func_parser_value "${lines[53]}")
-    cpp_infer_is_quant=$(func_parser_value "${lines[54]}")
+    cpp_infer_model_dir_list=$(func_parser_value "${lines[1]}")
+    cpp_infer_is_quant=$(func_parser_value "${lines[2]}")
     # parser cpp inference
-    inference_cmd=$(func_parser_value "${lines[55]}")
-    cpp_use_gpu_key=$(func_parser_key "${lines[56]}")
-    cpp_use_gpu_list=$(func_parser_value "${lines[56]}")
-    cpp_use_mkldnn_key=$(func_parser_key "${lines[57]}")
-    cpp_use_mkldnn_list=$(func_parser_value "${lines[57]}")
-    cpp_cpu_threads_key=$(func_parser_key "${lines[58]}")
-    cpp_cpu_threads_list=$(func_parser_value "${lines[58]}")
-    cpp_batch_size_key=$(func_parser_key "${lines[59]}")
-    cpp_batch_size_list=$(func_parser_value "${lines[59]}")
-    cpp_use_trt_key=$(func_parser_key "${lines[60]}")
-    cpp_use_trt_list=$(func_parser_value "${lines[60]}")
-    cpp_precision_key=$(func_parser_key "${lines[61]}")
-    cpp_precision_list=$(func_parser_value "${lines[61]}")
-    cpp_infer_model_key=$(func_parser_key "${lines[62]}")
-    cpp_image_dir_key=$(func_parser_key "${lines[63]}")
-    cpp_infer_img_dir=$(func_parser_value "${lines[63]}")
-    cpp_infer_key1=$(func_parser_key "${lines[64]}")
-    cpp_infer_value1=$(func_parser_value "${lines[64]}")
-    cpp_benchmark_key=$(func_parser_key "${lines[65]}")
-    cpp_benchmark_value=$(func_parser_value "${lines[65]}")
+    inference_cmd=$(func_parser_value "${lines[3]}")
+    cpp_use_gpu_key=$(func_parser_key "${lines[4]}")
+    cpp_use_gpu_list=$(func_parser_value "${lines[4]}")
+    cpp_use_mkldnn_key=$(func_parser_key "${lines[5]}")
+    cpp_use_mkldnn_list=$(func_parser_value "${lines[5]}")
+    cpp_cpu_threads_key=$(func_parser_key "${lines[6]}")
+    cpp_cpu_threads_list=$(func_parser_value "${lines[6]}")
+    cpp_batch_size_key=$(func_parser_key "${lines[7]}")
+    cpp_batch_size_list=$(func_parser_value "${lines[7]}")
+    cpp_use_trt_key=$(func_parser_key "${lines[8]}")
+    cpp_use_trt_list=$(func_parser_value "${lines[8]}")
+    cpp_precision_key=$(func_parser_key "${lines[9]}")
+    cpp_precision_list=$(func_parser_value "${lines[9]}")
+    cpp_infer_model_key=$(func_parser_key "${lines[10]}")
+    cpp_image_dir_key=$(func_parser_key "${lines[11]}")
+    cpp_infer_img_dir=$(func_parser_value "${lines[12]}")
+    cpp_infer_key1=$(func_parser_key "${lines[13]}")
+    cpp_infer_value1=$(func_parser_value "${lines[13]}")
+    cpp_benchmark_key=$(func_parser_key "${lines[14]}")
+    cpp_benchmark_value=$(func_parser_value "${lines[14]}")
 fi
LOG_PATH="./tests/output"
mkdir -p ${LOG_PATH}
status_log="${LOG_PATH}/results.log"
@@ -414,7 +453,7 @@ function func_cpp_inference(){
     done
 }

-if [ ${MODE} = "infer" ]; then
+if [ ${MODE} = "infer" ] || [ ${MODE} = "klquant_infer" ]; then
     GPUID=$3
     if [ ${#GPUID} -le 0 ];then
         env=" "
@@ -447,7 +486,6 @@ if [ ${MODE} = "infer" ]; then
         func_inference "${python}" "${inference_py}" "${save_infer_dir}" "${LOG_PATH}" "${infer_img_dir}" ${is_quant}
         Count=$(($Count + 1))
     done
 elif [ ${MODE} = "cpp_infer" ]; then
     GPUID=$3
     if [ ${#GPUID} -le 0 ];then
@@ -481,6 +519,8 @@ elif [ ${MODE} = "serving_infer" ]; then
     #run serving
     func_serving "${web_service_cmd}"
 else
     IFS="|"
     export Count=0