PaddlePaddle / PaddleOCR
Commit d79dd99b

Merge pull request #568 from wangjiawei04/pdserving_readme

pdserving add convert to serving model

Authored by MissPenguin on Aug 19, 2020; committed via GitHub on Aug 19, 2020.
Parents: d7cd666a, 0d8fe758
Showing 4 changed files with 34 additions and 7 deletions (+34 -7):

- deploy/pdserving/det_local_server.py (+4 -2)
- deploy/pdserving/ocr_local_server.py (+2 -1)
- deploy/pdserving/readme.md (+17 -0)
- deploy/pdserving/rec_local_server.py (+11 -4)
deploy/pdserving/det_local_server.py (+4 -2)

```diff
@@ -23,7 +23,7 @@ from paddle_serving_app.reader import Div, Normalize, Transpose
 from paddle_serving_app.reader import DBPostProcess, FilterBoxes
 if sys.argv[1] == 'gpu':
     from paddle_serving_server_gpu.web_service import WebService
-elif sys.argv[1] == 'cpu'
+elif sys.argv[1] == 'cpu':
     from paddle_serving_server.web_service import WebService
 import time
 import re
@@ -67,11 +67,13 @@ class OCRService(WebService):
 ocr_service = OCRService(name="ocr")
 ocr_service.load_model_config("ocr_det_model")
+ocr_service.init_det()
 if sys.argv[1] == 'gpu':
     ocr_service.set_gpus("0")
     ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+    ocr_service.run_debugger_service(gpu=True)
 elif sys.argv[1] == 'cpu':
     ocr_service.prepare_server(workdir="workdir", port=9292)
+    ocr_service.run_debugger_service()
 ocr_service.init_det()
-ocr_service.run_debugger_service()
 ocr_service.run_web_service()
```
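Taken together with the changes to ocr_local_server.py and rec_local_server.py below, the scripts now pick the Paddle Serving backend from their first command-line argument, so they are presumably launched as `python det_local_server.py cpu` or `python det_local_server.py gpu`. A minimal standalone sketch of that dispatch (not part of the commit; the explicit usage error is an illustrative addition):

```python
import sys


def select_web_service(device):
    """Pick the Paddle Serving WebService backend for 'cpu' or 'gpu'."""
    if device == 'gpu':
        from paddle_serving_server_gpu.web_service import WebService
    elif device == 'cpu':
        from paddle_serving_server.web_service import WebService
    else:
        # The scripts in this commit assume sys.argv[1] is always given;
        # this explicit error is an illustrative addition.
        raise SystemExit("usage: python det_local_server.py cpu|gpu")
    return WebService


def prepare_server_kwargs(device):
    """prepare_server() arguments matching the GPU/CPU branches in the diffs."""
    if device == 'gpu':
        return dict(workdir="workdir", port=9292, device="gpu", gpuid=0)
    return dict(workdir="workdir", port=9292, device="cpu")


if __name__ == "__main__":
    device = sys.argv[1] if len(sys.argv) > 1 else "cpu"
    WebService = select_web_service(device)
    print("backend:", WebService.__module__, "options:", prepare_server_kwargs(device))
```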
deploy/pdserving/ocr_local_server.py (+2 -1)

```diff
@@ -104,10 +104,11 @@ class OCRService(WebService):
 ocr_service = OCRService(name="ocr")
 ocr_service.load_model_config("ocr_rec_model")
-ocr_service.prepare_server(workdir="workdir", port=9292)
 ocr_service.init_det_debugger(det_model_config="ocr_det_model")
 if sys.argv[1] == 'gpu':
+    ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
     ocr_service.run_debugger_service(gpu=True)
 elif sys.argv[1] == 'cpu':
+    ocr_service.prepare_server(workdir="workdir", port=9292, device="cpu")
     ocr_service.run_debugger_service()
 ocr_service.run_web_service()
```
deploy/pdserving/readme.md (+17 -0)

````diff
@@ -55,6 +55,23 @@ tar -xzvf ocr_det.tar.gz
 ```
 The command above downloads the `db_crnn_mobile` model. If you want the larger `db_crnn_server` model instead, download and extract its inference model first, then refer to [How to convert a saved Paddle inference model into a deployable Paddle Serving model](https://github.com/PaddlePaddle/Serving/blob/develop/doc/INFERENCE_TO_SERVING_CN.md).
+
+Taking the `ch_rec_r34_vd_crnn` model as an example, it can be downloaded as follows:
+```
+wget --no-check-certificate https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_infer.tar
+tar xf ch_rec_r34_vd_crnn_infer.tar
+```
+
+Then, following the Serving model conversion tutorial, run the following Python code:
+```
+from paddle_serving_client.io import inference_model_to_serving
+inference_model_dir = "ch_rec_r34_vd_crnn"
+serving_client_dir = "serving_client_dir"
+serving_server_dir = "serving_server_dir"
+feed_var_names, fetch_var_names = inference_model_to_serving(
+        inference_model_dir, serving_client_dir, serving_server_dir, model_filename="model", params_filename="params")
+```
+The client-side and server-side model configurations are generated in `serving_client_dir` and `serving_server_dir`, respectively.
 
 ### 3. Start the service
 
 The service can be started in either the `standard` version or the `fast` version, depending on your needs; the two are compared in the table below:
````
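A small sketch of how the conversion step added to the readme might be checked end to end; it reuses the readme's call verbatim, while the print statements, the directory listings, and the note about `load_model_config` are illustrative additions rather than part of the readme:

```python
# Sketch: run the conversion from the readme above and inspect its output.
import os

from paddle_serving_client.io import inference_model_to_serving

inference_model_dir = "ch_rec_r34_vd_crnn"
serving_client_dir = "serving_client_dir"
serving_server_dir = "serving_server_dir"
feed_var_names, fetch_var_names = inference_model_to_serving(
    inference_model_dir, serving_client_dir, serving_server_dir,
    model_filename="model", params_filename="params")

print("feed vars :", feed_var_names)
print("fetch vars:", fetch_var_names)
print("server-side config:", sorted(os.listdir(serving_server_dir)))
print("client-side config:", sorted(os.listdir(serving_client_dir)))
# The server-side directory is presumably what replaces "ocr_rec_model" in
# rec_local_server.py / ocr_local_server.py via load_model_config(...) when
# deploying the converted model.
```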
deploy/pdserving/rec_local_server.py (+11 -4)

```diff
@@ -22,7 +22,10 @@ from paddle_serving_client import Client
 from paddle_serving_app.reader import Sequential, URL2Image, ResizeByFactor
 from paddle_serving_app.reader import Div, Normalize, Transpose
 from paddle_serving_app.reader import DBPostProcess, FilterBoxes, GetRotateCropImage, SortedBoxes
-from paddle_serving_server_gpu.web_service import WebService
+if sys.argv[1] == 'gpu':
+    from paddle_serving_server_gpu.web_service import WebService
+elif sys.argv[1] == 'cpu':
+    from paddle_serving_server.web_service import WebService
 import time
 import re
 import base64
@@ -65,8 +68,12 @@ class OCRService(WebService):
 ocr_service = OCRService(name="ocr")
 ocr_service.load_model_config("ocr_rec_model")
-ocr_service.set_gpus("0")
 ocr_service.init_rec()
-ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
-ocr_service.run_debugger_service()
+if sys.argv[1] == 'gpu':
+    ocr_service.set_gpus("0")
+    ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+    ocr_service.run_debugger_service(gpu=True)
+elif sys.argv[1] == 'cpu':
+    ocr_service.prepare_server(workdir="workdir", port=9292, device="cpu")
+    ocr_service.run_debugger_service()
 ocr_service.run_web_service()
```
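Once any of these servers is running, Paddle Serving's `WebService` exposes an HTTP prediction endpoint of the form `http://<host>:<port>/<name>/prediction`, which for the scripts above would be `http://127.0.0.1:9292/ocr/prediction`. A minimal client sketch, assuming the service accepts a base64-encoded image under the `image` feed key and returns its result under `res`, as the Paddle Serving OCR examples do (the feed/fetch names and the image path are assumptions, not taken from this commit):

```python
# Client-side sketch (not part of this commit): send one image to the local
# OCR web service started by det/rec/ocr_local_server.py on port 9292.
import base64
import json

import requests

url = "http://127.0.0.1:9292/ocr/prediction"  # <host>:<port>/<service name>/prediction

# Path to any local test image; placeholder, not taken from the repository.
with open("test_img.jpg", "rb") as f:
    image = base64.b64encode(f.read()).decode("utf-8")

# Feed/fetch names follow the Paddle Serving OCR examples and may need to be
# adjusted to match the deployed service.
payload = {"feed": [{"image": image}], "fetch": ["res"]}
resp = requests.post(url, data=json.dumps(payload),
                     headers={"Content-Type": "application/json"})
print(resp.json())
```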