weixin_41840029 / PaddleOCR — forked from PaddlePaddle / PaddleOCR (in sync with the upstream project)
Commit a8960021 (unverified)
Authored by MissPenguin on Nov 10, 2021; committed via GitHub on Nov 10, 2021

Merge pull request #4604 from tink2123/rename_params

rename params for serving and paddle2onnx

Parents: 3e36fec5, 9874b6c0
Showing 12 changed files with 126 additions and 6 deletions (+126 −6)
test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +12 −0
test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +18 −0
test_tipc/configs/ppocr_det_server/model_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +12 −0
test_tipc/configs/ppocr_det_server/model_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +18 −0
test_tipc/configs/ppocr_rec_mobile/model_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +12 −0
test_tipc/configs/ppocr_rec_mobile/model_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +18 −0
test_tipc/configs/ppocr_rec_server/model_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +12 −0
test_tipc/configs/ppocr_rec_server/model_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +18 −0
test_tipc/docs/test_paddle2onnx.md  +2 −2
test_tipc/docs/test_serving.md  +2 −2
test_tipc/test_paddle2onnx.sh  +1 −1
test_tipc/test_serving.sh  +1 −1
test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
(new file, mode 100644)
===========================paddle2onnx_params===========================
2onnx: paddle2onnx
--model_dir:./inference/ch_ppocr_mobile_v2.0_det_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./inference/det_mobile_onnx/model.onnx
--opset_version:10
--enable_onnx_checker:True
inference:tools/infer/predict_det.py
--use_gpu:True|False
--det_model_dir:
--image_dir:./inference/ch_det_data_50/all-sum-510/
\ No newline at end of file
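
For reference, the paddle2onnx_params block above maps almost one-to-one onto a conversion command. Below is a minimal hand-assembled sketch; the real command line is built inside `test_tipc/test_paddle2onnx.sh`, so treat this only as an illustration of the key:value pairs (the det_server / rec_mobile / rec_server configs further down follow the same pattern). The `inference:` entry then points the script at `tools/infer/predict_det.py`, which it runs against the exported model with the listed `--use_gpu` and `--image_dir` options.

```shell
# Conversion step, assembled directly from the config entries above.
paddle2onnx --model_dir ./inference/ch_ppocr_mobile_v2.0_det_infer/ \
            --model_filename inference.pdmodel \
            --params_filename inference.pdiparams \
            --save_file ./inference/det_mobile_onnx/model.onnx \
            --opset_version 10 \
            --enable_onnx_checker True
```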
test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
===========================serving_params===========================
model_name:ocr_det_mobile
python:python3.7|cpp
trans_model:-m paddle_serving_client.convert
--dirname:./inference/ch_ppocr_mobile_v2.0_det_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/pdserving/ppocr_det_mobile_2.0_serving/
--serving_client:./deploy/pdserving/ppocr_det_mobile_2.0_client/
serving_dir:./deploy/pdserving
web_service:web_service_det.py --config=config.yml --opt op.det.concurrency=1
op.det.local_service_conf.devices:null|0
op.det.local_service_conf.use_mkldnn:True|False
op.det.local_service_conf.thread_num:1|6
op.det.local_service_conf.use_trt:False|True
op.det.local_service_conf.precision:fp32|fp16|int8
pipline:pipeline_rpc_client.py|pipeline_http_client.py
--image_dir:../../doc/imgs
\ No newline at end of file
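
Similarly, the serving_params above describe a model-conversion step followed by starting the web service and exercising it with one of the listed pipeline clients. A minimal sketch assembled from the entries above; the actual orchestration, including iterating over the devices / use_mkldnn / use_trt / precision options, is done by `test_tipc/test_serving.sh`:

```shell
# Convert the inference model into serving_server / serving_client formats.
python3.7 -m paddle_serving_client.convert \
    --dirname ./inference/ch_ppocr_mobile_v2.0_det_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./deploy/pdserving/ppocr_det_mobile_2.0_serving/ \
    --serving_client ./deploy/pdserving/ppocr_det_mobile_2.0_client/

# Launch the detection web service from serving_dir, then send requests
# with one of the pipeline clients named in the config.
cd ./deploy/pdserving
python3.7 web_service_det.py --config=config.yml --opt op.det.concurrency=1 &
python3.7 pipeline_http_client.py --image_dir=../../doc/imgs
```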
test_tipc/configs/ppocr_det_server/model_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
(new file, mode 100644)
===========================paddle2onnx_params===========================
2onnx: paddle2onnx
--model_dir:./inference/ch_ppocr_server_v2.0_det_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./inference/det_server_onnx/model.onnx
--opset_version:10
--enable_onnx_checker:True
inference:tools/infer/predict_det.py
--use_gpu:True|False
--det_model_dir:
--image_dir:./inference/det_inference
\ No newline at end of file
test_tipc/configs/ppocr_det_server/model_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
(new file, mode 100644)
===========================serving_params===========================
model_name:ocr_det_server
python:python3.7|cpp
trans_model:-m paddle_serving_client.convert
--dirname:./inference/ch_ppocr_server_v2.0_det_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/pdserving/ppocr_det_server_2.0_serving/
--serving_client:./deploy/pdserving/ppocr_det_server_2.0_client/
serving_dir:./deploy/pdserving
web_service:web_service_det.py --config=config.yml --opt op.det.concurrency=1
op.det.local_service_conf.devices:null|0
op.det.local_service_conf.use_mkldnn:True|False
op.det.local_service_conf.thread_num:1|6
op.det.local_service_conf.use_trt:False|True
op.det.local_service_conf.precision:fp32|fp16|int8
pipline:pipeline_rpc_client.py|pipeline_http_client.py
--image_dir:../../doc/imgs_words_en
\ No newline at end of file
test_tipc/configs/ppocr_rec_mobile/model_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
(new file, mode 100644)
===========================paddle2onnx_params===========================
2onnx: paddle2onnx
--model_dir:./inference/ch_ppocr_mobile_v2.0_rec_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./inference/rec_mobile_onnx/model.onnx
--opset_version:10
--enable_onnx_checker:True
inference:tools/infer/predict_rec.py
--use_gpu:True|False
--rec_model_dir:
--image_dir:./inference/rec_inference
\ No newline at end of file
test_tipc/configs/ppocr_rec_mobile/model_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
(new file, mode 100644)
===========================serving_params===========================
model_name:ocr_rec_mobile
python:python3.7|cpp
trans_model:-m paddle_serving_client.convert
--dirname:./inference/ch_ppocr_mobile_v2.0_rec_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/pdserving/ppocr_rec_mobile_2.0_serving/
--serving_client:./deploy/pdserving/ppocr_rec_mobile_2.0_client/
serving_dir:./deploy/pdserving
web_service:web_service_rec.py --config=config.yml --opt op.rec.concurrency=1
op.rec.local_service_conf.devices:null|0
op.rec.local_service_conf.use_mkldnn:True|False
op.rec.local_service_conf.thread_num:1|6
op.rec.local_service_conf.use_trt:False|True
op.rec.local_service_conf.precision:fp32|fp16|int8
pipline:pipeline_rpc_client.py|pipeline_http_client.py
--image_dir:../../doc/imgs_words_en
\ No newline at end of file
test_tipc/configs/ppocr_rec_server/model_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
(new file, mode 100644)
===========================paddle2onnx_params===========================
2onnx: paddle2onnx
--model_dir:./inference/ch_ppocr_server_v2.0_rec_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./inference/rec_server_onnx/model.onnx
--opset_version:10
--enable_onnx_checker:True
inference:tools/infer/predict_rec.py
--use_gpu:True|False
--rec_model_dir:
--image_dir:./inference/rec_inference
\ No newline at end of file
test_tipc/configs/ppocr_rec_server/model_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
(new file, mode 100644)
===========================serving_params===========================
model_name:ocr_rec_server
python:python3.7
trans_model:-m paddle_serving_client.convert
--dirname:./inference/ch_ppocr_server_v2.0_rec_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/pdserving/ppocr_rec_server_2.0_serving/
--serving_client:./deploy/pdserving/ppocr_rec_server_2.0_client/
serving_dir:./deploy/pdserving
web_service:web_service_rec.py --config=config.yml --opt op.rec.concurrency=1
op.rec.local_service_conf.devices:null|0
op.rec.local_service_conf.use_mkldnn:True|False
op.rec.local_service_conf.thread_num:1|6
op.rec.local_service_conf.use_trt:False|True
op.rec.local_service_conf.precision:fp32|fp16|int8
pipline:pipeline_rpc_client.py|pipeline_http_client.py
--image_dir:../../doc/imgs_words_en
\ No newline at end of file
test_tipc/docs/test_paddle2onnx.md
...
@@ -18,10 +18,10 @@ The main program for the PaddleServing prediction functional test is `test_paddle2onnx.sh`, which can test ...
First run `prepare.sh` to prepare the data and models, then run `test_paddle2onnx.sh` for the test; log files with the `paddle2onnx_infer_*.log` suffix are generated in the ```test_tipc/output``` directory.
```shell
- bash test_tipc/prepare.sh ./test_tipc/configs/ppocr_det_mobile_params.txt "paddle2onnx_infer"
+ bash test_tipc/prepare.sh ./test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt "paddle2onnx_infer"
# Usage:
- bash test_tipc/test_paddle2onnx.sh ./test_tipc/configs/ppocr_det_mobile_params.txt
+ bash test_tipc/test_paddle2onnx.sh ./test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
```
#### Results
...
test_tipc/docs/test_serving.md
...
@@ -20,10 +20,10 @@ The main program for the PaddleServing prediction functional test is `test_serving.sh`, which can test ...
First run `prepare.sh` to prepare the data and models, then run `test_serving.sh` for the test; log files with the `serving_infer_*.log` suffix are generated in the ```test_tipc/output``` directory.
```shell
- bash test_tipc/prepare.sh ./test_tipc/configs/ppocr_det_mobile_params.txt "serving_infer"
+ bash test_tipc/prepare.sh ./test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt "serving_infer"
# Usage:
- bash test_tipc/test_serving.sh ./test_tipc/configs/ppocr_det_mobile_params.txt
+ bash test_tipc/test_serving.sh ./test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
```
#### Results
...
test_tipc/test_paddle2onnx.sh
...
@@ -11,7 +11,7 @@ python=$(func_parser_value "${lines[2]}")
 # parser params
-dataline=$(awk 'NR==111, NR==123{print}' $FILENAME)
+dataline=$(awk 'NR==1, NR==12{print}' $FILENAME)
 IFS=$'\n'
 lines=(${dataline})
...
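
The one-line change above simply repoints the parser at the new standalone config: instead of slicing the paddle2onnx section (lines 111–123) out of the old combined ppocr_det_mobile_params.txt, the script now reads lines 1–12 of the dedicated paddle2onnx config; `test_serving.sh` below gets the same treatment with lines 1–18. A minimal sketch of how the sliced lines are then consumed; the key/value split mirrors the `func_parser_*` helpers in `test_tipc/common_func.sh`, and the array index used here is an assumption for illustration only:

```shell
# Read the 12-line config and split it into an array, one entry per line.
dataline=$(awk 'NR==1, NR==12{print}' "$FILENAME")
IFS=$'\n'
lines=(${dataline})

# Each entry is a "key:value" pair, e.g. "--opset_version:10".
opset_line="${lines[6]}"        # assumed index of the --opset_version entry
opset_key="${opset_line%%:*}"   # -> --opset_version
opset_value="${opset_line#*:}"  # -> 10
echo "${opset_key} = ${opset_value}"
```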
test_tipc/test_serving.sh
...
@@ -2,7 +2,7 @@
 source test_tipc/common_func.sh
 FILENAME=$1
-dataline=$(awk 'NR==67, NR==84{print}' $FILENAME)
+dataline=$(awk 'NR==1, NR==18{print}' $FILENAME)
 # parser params
 IFS=$'\n'
...