diff --git a/README.md b/README.md
index 1ce6fabb55e231a7748bf2baeefc2d2fb4ffb2e5..c8e033dbb0fbe61f8241a2d5ca652541b5ed8baf 100644
--- a/README.md
+++ b/README.md
@@ -93,7 +93,7 @@ For a new language request, please refer to [Guideline for new language_requests
- [Quick Inference Based on PIP](./doc/doc_en/whl_en.md)
- [Python Inference](./doc/doc_en/inference_en.md)
- [C++ Inference](./deploy/cpp_infer/readme_en.md)
- - [Serving](./deploy/hubserving/readme_en.md)
+ - [Serving](./deploy/pdserving/readme_en.md)
- [Mobile](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/lite/readme_en.md)
- [Benchmark](./doc/doc_en/benchmark_en.md)
- Data Annotation and Synthesis
diff --git a/README_ch.md b/README_ch.md
index 9fe853f92510e540e35b7452f6ab00e5898ddb55..bea567a8d6e367b29d88ab1eafd294781d60ab99 100755
--- a/README_ch.md
+++ b/README_ch.md
@@ -88,7 +88,7 @@ PaddleOCR同时支持动态图与静态图两种编程范式
- [基于pip安装whl包快速推理](./doc/doc_ch/whl.md)
- [基于Python脚本预测引擎推理](./doc/doc_ch/inference.md)
- [基于C++预测引擎推理](./deploy/cpp_infer/readme.md)
- - [服务化部署](./deploy/hubserving/readme.md)
+ - [服务化部署](./deploy/pdserving/readme.md)
- [端侧部署](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/lite/readme.md)
- [Benchmark](./doc/doc_ch/benchmark.md)
- 数据集
diff --git a/deploy/pdserving/README.md b/deploy/pdserving/README.md
index 9dcc71233ed47d5baa1ca50b2be5032e122cc9e6..6675d5df815315ac088a6246f64459adfd13d68c 100644
--- a/deploy/pdserving/README.md
+++ b/deploy/pdserving/README.md
@@ -2,79 +2,93 @@
(English|[简体中文](./README_CN.md))
-This document will introduce how to use the [PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README_CN.md) to deploy the PPOCR dynamic graph model as a pipeline online service.
+PaddleOCR provides two service deployment methods:
+- Based on **PaddleHub Serving**: code path is `./deploy/hubserving`. Please refer to the [tutorial](../../deploy/hubserving/readme_en.md).
+- Based on **PaddleServing**: code path is `./deploy/pdserving`. Please follow this tutorial.
-Compared with hubserving deployment, PaddleServing supports high concurrency and efficient communication between the client and the server.
-The introduction and tutorial of Paddle Serving service deployment framework reference [document](https://aistudio.baidu.com/aistudio/projectdetail/1550674).
+# Service deployment based on PaddleServing
+
+This document introduces how to use [PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README.md) to deploy the PPOCR dynamic-graph model as a pipeline online service.
+
+Some key features of Paddle Serving:
+- Integrates with the Paddle training pipeline seamlessly; most Paddle models can be deployed with a one-line command.
+- Supports industrial serving features such as model management, online loading, and online A/B testing.
+- Supports highly concurrent and efficient communication between clients and servers.
+
+For an introduction to the Paddle Serving deployment framework and a usage tutorial, refer to the [document](https://github.com/PaddlePaddle/Serving/blob/develop/README.md).
## Contents
-- Environmental preparation
-- Model conversion
-- Paddle Serving pipeline deployment
-- FAQ
+- [Environmental preparation](#environmental-preparation)
+- [Model conversion](#model-conversion)
+- [Paddle Serving pipeline deployment](#paddle-serving-pipeline-deployment)
+- [FAQ](#faq)
+
## Environmental preparation
-Need to prepare PaddleOCR operating environment and Paddle Serving operating environment.
+Both the PaddleOCR and Paddle Serving operating environments are needed.
-1. Prepare PaddleOCR operating environment reference [link](../../doc/doc_ch/installation.md)
+1. Please prepare the PaddleOCR operating environment by referring to this [link](../../doc/doc_ch/installation.md).
-2. Prepare the operating environment of PaddleServing, the steps are as follows
+2. Prepare the PaddleServing operating environment as follows:
-Install serving, used to start the service
-```
-pip3 install paddle-serving-server==0.5.0 # for CPU
-pip3 install paddle-serving-server-gpu==0.5.0 # for GPU
-# Other GPU environments need to confirm the environment and then choose to execute the following commands
-pip3 install paddle-serving-server-gpu==0.5.0.post9 # GPU with CUDA9.0
-pip3 install paddle-serving-server-gpu==0.5.0.post10 # GPU with CUDA10.0
-pip3 install paddle-serving-server-gpu==0.5.0.post101 # GPU with CUDA10.1 + TensorRT6
-pip3 install paddle-serving-server-gpu==0.5.0.post11 # GPU with CUDA10.1 + TensorRT7
-```
+ Install paddle-serving-server, which is used to start the service:
+ ```
+ pip3 install paddle-serving-server==0.5.0 # for CPU
+ pip3 install paddle-serving-server-gpu==0.5.0 # for GPU
+ # Other GPU environments need to confirm the environment and then choose to execute the following commands
+ pip3 install paddle-serving-server-gpu==0.5.0.post9 # GPU with CUDA9.0
+ pip3 install paddle-serving-server-gpu==0.5.0.post10 # GPU with CUDA10.0
+ pip3 install paddle-serving-server-gpu==0.5.0.post101 # GPU with CUDA10.1 + TensorRT6
+ pip3 install paddle-serving-server-gpu==0.5.0.post11 # GPU with CUDA10.1 + TensorRT7
+ ```
-2. Install the client to send requests to the service
-```
-pip3 install paddle-serving-client==0.5.0 # for CPU
+3. Install the client to send requests to the service
+ ```
+ pip3 install paddle-serving-client==0.5.0 # for CPU
-pip3 install paddle-serving-client-gpu==0.5.0 # for GPU
-```
-
-3. Install serving-app
-```
-pip3 install paddle-serving-app==0.3.0
-# fix local_predict to support load dynamic model
-# find the install directoory of paddle_serving_app
-vim /usr/local/lib/python3.7/site-packages/paddle_serving_app/local_predict.py
-# replace line 85 of local_predict.py config = AnalysisConfig(model_path) with:
-if os.path.exists(os.path.join(model_path, "__params__")):
- config = AnalysisConfig(os.path.join(model_path, "__model__"), os.path.join(model_path, "__params__"))
-else:
- config = AnalysisConfig(model_path)
-```
+ pip3 install paddle-serving-client-gpu==0.5.0 # for GPU
+ ```
+4. Install serving-app
+ ```
+ pip3 install paddle-serving-app==0.3.0
+ # fix local_predict to support load dynamic model
+ # find the install directory of paddle_serving_app
+ vim /usr/local/lib/python3.7/site-packages/paddle_serving_app/local_predict.py
+ # replace line 85 of local_predict.py, config = AnalysisConfig(model_path), with:
+ if os.path.exists(os.path.join(model_path, "__params__")):
+ config = AnalysisConfig(os.path.join(model_path, "__model__"), os.path.join(model_path, "__params__"))
+ else:
+ config = AnalysisConfig(model_path)
+ ```
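The edit to `local_predict.py` above just makes the loader handle dynamic-graph exports that split the model into `__model__` and `__params__` files. As a minimal standalone sketch of that branch (using a hypothetical `analysis_config_args` helper in place of Paddle's `AnalysisConfig`, so it runs without a Paddle installation):

```python
import os

def analysis_config_args(model_path):
    """Mirror the patched logic in local_predict.py: if the model
    directory contains a combined __params__ file, pass the model
    and params files separately; otherwise pass the directory
    itself as the single argument."""
    params = os.path.join(model_path, "__params__")
    if os.path.exists(params):
        # Dynamic-graph export: separate __model__ / __params__ files
        return (os.path.join(model_path, "__model__"), params)
    # Default case: a plain inference model directory
    return (model_path,)
```

The returned tuple corresponds to the arguments that the patch passes to `AnalysisConfig`.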
-**note:** If you want to install the latest version of PaddleServing, refer to [link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md).
+ **Note:** To install the latest version of PaddleServing, refer to this [link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md).
+
## Model conversion
When using PaddleServing for service deployment, you need to convert the saved inference model into a serving model that is easy to deploy.
-First, download the [inference model] of PPOCR(https://github.com/PaddlePaddle/PaddleOCR#pp-ocr-20-series-model-listupdate-on-dec-15)
+First, download the PPOCR [inference model](https://github.com/PaddlePaddle/PaddleOCR#pp-ocr-20-series-model-listupdate-on-dec-15):
```
# Download and unzip the OCR text detection model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar && tar xf ch_ppocr_server_v2.0_det_infer.tar
# Download and unzip the OCR text recognition model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar && tar xf ch_ppocr_server_v2.0_rec_infer.tar
-# Conversion detection model
+```
+Then, use the installed paddle_serving_client tool to convert the inference model into a serving model:
+```
+# Detection model conversion
python3 -m paddle_serving_client.convert --dirname ./ch_ppocr_server_v2.0_det_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./ppocr_det_server_2.0_serving/ \
--serving_client ./ppocr_det_server_2.0_client/
-# Conversion recognition model
+# Recognition model conversion
python3 -m paddle_serving_client.convert --dirname ./ch_ppocr_server_v2.0_rec_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
@@ -98,44 +112,45 @@ After the detection model is converted, there will be additional folders of `ppo
```
The recognition model is the same.
+
## Paddle Serving pipeline deployment
1. Download the PaddleOCR code, if you have already downloaded it, you can skip this step.
-```
-git clone https://github.com/PaddlePaddle/PaddleOCR
-
-# Enter the working directory
-cd PaddleOCR/deploy/pdserver/
-```
-
-The pdserver directory contains the code to start the pipeline service and send prediction requests, including:
-```
-__init__.py
-config.yml # Start the service configuration file
-ocr_reader.py # OCR model pre-processing and post-processing code implementation
-pipeline_http_client.py # Script to send pipeline prediction request
-web_service.py # Start the script of the pipeline server
-```
+ ```
+ git clone https://github.com/PaddlePaddle/PaddleOCR
+
+ # Enter the working directory
+ cd PaddleOCR/deploy/pdserving/
+ ```
+
+ The pdserving directory contains the code for starting the pipeline service and sending prediction requests, including:
+ ```
+ __init__.py
+ config.yml # Start the service configuration file
+ ocr_reader.py # OCR model pre-processing and post-processing code implementation
+ pipeline_http_client.py # Script to send pipeline prediction request
+ web_service.py # Start the script of the pipeline server
+ ```
2. Run the following command to start the service.
-```
-# Start the service and save the running log in log.txt
-python3 web_service.py &>log.txt &
-```
-After the service is successfully started, a log similar to the following will be printed in log.txt
-![](./imgs/start_server.png)
-
+ ```
+ # Start the service and save the running log in log.txt
+ python3 web_service.py &>log.txt &
+ ```
+ After the service is successfully started, a log similar to the following will be printed in log.txt
+ ![](./imgs/start_server.png)
3. Send service request
-```
-python3 pipeline_http_client.py
-```
-After successfully running, the predicted result of the model will be printed in the cmd window. An example of the result is:
-![](./imgs/results.png)
-
+ ```
+ python3 pipeline_http_client.py
+ ```
+ After the script runs successfully, the model's prediction results will be printed in the terminal. An example of the result is:
+ ![](./imgs/results.png)
+
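The request that `pipeline_http_client.py` sends can also be assembled by hand, which is useful when integrating the service into other programs. The sketch below base64-encodes an image and POSTs it in the `{"key": [...], "value": [...]}` JSON format used by Paddle Serving's pipeline HTTP service; the URL and port here are assumptions based on the default `config.yml` and should be checked against your own configuration:

```python
import base64
import json
import urllib.request

def build_request(image_bytes):
    """Pack one image into the pipeline request format:
    a base64 string in the "value" list, keyed by "image"."""
    b64 = base64.b64encode(image_bytes).decode("utf8")
    return {"key": ["image"], "value": [b64]}

def send_request(image_path, url="http://127.0.0.1:9999/ocr/prediction"):
    # NOTE: the URL and port are assumptions; check the http_port and
    # service name configured in config.yml before relying on them.
    with open(image_path, "rb") as f:
        payload = build_request(f.read())
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The JSON response contains the detection and recognition results in its own `key`/`value` lists, mirroring the request format.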
## FAQ
-** Q1**: No result return after sending the request
+**Q1**: No result is returned after sending the request.
+
** A1**: Do not set the proxy when starting the service and sending the request. You can close the proxy before starting the service and before sending the request. The command to close the proxy is:
```
unset https_proxy
diff --git a/deploy/pdserving/README_CN.md b/deploy/pdserving/README_CN.md
index d8f6095cbd1876a376e0d613bf472e057a697ba6..5c4d6156df1df116f1d388a1ce54983b8aef0d93 100644
--- a/deploy/pdserving/README_CN.md
+++ b/deploy/pdserving/README_CN.md
@@ -2,62 +2,75 @@
([English](./README.md)|简体中文)
+PaddleOCR提供2种服务部署方式:
+- 基于PaddleHub Serving的部署:代码路径为`./deploy/hubserving`,使用方法参考[文档](../../deploy/hubserving/readme.md);
+- 基于PaddleServing的部署:代码路径为`./deploy/pdserving`,使用方法参考本教程。
+
+# 基于PaddleServing的服务部署
+
本文档将介绍如何使用[PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README_CN.md)工具部署PPOCR
动态图模型的pipeline在线服务。
-相比较于hubserving部署,PaddleServing支持客户端和服务端之间 高并发和高效通信,更多有关Paddle Serving服务化部署框架介绍和使用教程参考[文档](https://aistudio.baidu.com/aistudio/projectdetail/1550674)。
+相比较于hubserving部署,PaddleServing具备以下优点:
+- 支持客户端和服务端之间高并发和高效通信
+- 支持工业级的服务能力,例如模型管理、在线加载、在线A/B测试等
+- 支持多种编程语言开发客户端,例如C++、Python和Java
-## 目录
-- 环境准备
-- 模型转换
-- Paddle Serving pipeline部署
-- FAQ
+更多有关PaddleServing服务化部署框架介绍和使用教程参考[文档](https://github.com/PaddlePaddle/Serving/blob/develop/README_CN.md)。
+## 目录
+- [环境准备](#环境准备)
+- [模型转换](#模型转换)
+- [Paddle Serving pipeline部署](#部署)
+- [FAQ](#FAQ)
+
## 环境准备
需要准备PaddleOCR的运行环境和Paddle Serving的运行环境。
-1. 准备PaddleOCR的运行环境参考[链接](../../doc/doc_ch/installation.md)
+- 准备PaddleOCR的运行环境参考[链接](../../doc/doc_ch/installation.md)
-2. 准备PaddleServing的运行环境,步骤如下
+- 准备PaddleServing的运行环境,步骤如下
-安装serving,用于启动服务
-```
-pip3 install paddle-serving-server==0.5.0 # for CPU
-pip3 install paddle-serving-server-gpu==0.5.0 # for GPU
-# 其他GPU环境需要确认环境再选择执行如下命令
-pip3 install paddle-serving-server-gpu==0.5.0.post9 # GPU with CUDA9.0
-pip3 install paddle-serving-server-gpu==0.5.0.post10 # GPU with CUDA10.0
-pip3 install paddle-serving-server-gpu==0.5.0.post101 # GPU with CUDA10.1 + TensorRT6
-pip3 install paddle-serving-server-gpu==0.5.0.post11 # GPU with CUDA10.1 + TensorRT7
-```
+1. 安装serving,用于启动服务
+ ```
+ pip3 install paddle-serving-server==0.5.0 # for CPU
+ pip3 install paddle-serving-server-gpu==0.5.0 # for GPU
+ # 其他GPU环境需要确认环境再选择执行如下命令
+ pip3 install paddle-serving-server-gpu==0.5.0.post9 # GPU with CUDA9.0
+ pip3 install paddle-serving-server-gpu==0.5.0.post10 # GPU with CUDA10.0
+ pip3 install paddle-serving-server-gpu==0.5.0.post101 # GPU with CUDA10.1 + TensorRT6
+ pip3 install paddle-serving-server-gpu==0.5.0.post11 # GPU with CUDA10.1 + TensorRT7
+ ```
2. 安装client,用于向服务发送请求
-```
-pip3 install paddle-serving-client==0.5.0 # for CPU
+ ```
+ pip3 install paddle-serving-client==0.5.0 # for CPU
-pip3 install paddle-serving-client-gpu==0.5.0 # for GPU
-```
+ pip3 install paddle-serving-client-gpu==0.5.0 # for GPU
+ ```
3. 安装serving-app
-```
-pip3 install paddle-serving-app==0.3.0
-```
-**note:** 安装0.3.0版本的serving-app后,为了能加载动态图模型,需要修改serving_app的源码,具体为:
-```
-# 找到paddle_serving_app的安装目录,找到并编辑local_predict.py文件
-vim /usr/local/lib/python3.7/site-packages/paddle_serving_app/local_predict.py
-# 将local_predict.py 的第85行 config = AnalysisConfig(model_path) 替换为:
-if os.path.exists(os.path.join(model_path, "__params__")):
- config = AnalysisConfig(os.path.join(model_path, "__model__"), os.path.join(model_path, "__params__"))
-else:
- config = AnalysisConfig(model_path)
-```
-
-**note:** 如果要安装最新版本的PaddleServing参考[链接](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md)。
-
+ ```
+ pip3 install paddle-serving-app==0.3.0
+ ```
+ **note:** 安装0.3.0版本的serving-app后,为了能加载动态图模型,需要修改serving_app的源码,具体为:
+ ```
+ # 找到paddle_serving_app的安装目录,找到并编辑local_predict.py文件
+ vim /usr/local/lib/python3.7/site-packages/paddle_serving_app/local_predict.py
+ # 将local_predict.py 的第85行 config = AnalysisConfig(model_path) 替换为:
+ if os.path.exists(os.path.join(model_path, "__params__")):
+ config = AnalysisConfig(os.path.join(model_path, "__model__"), os.path.join(model_path, "__params__"))
+ else:
+ config = AnalysisConfig(model_path)
+ ```
+
+ **Note:** 如果要安装最新版本的PaddleServing参考[链接](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md)。
+
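上述对 `local_predict.py` 的修改,只是让加载逻辑兼容拆分为 `__model__` 和 `__params__` 两个文件的动态图导出模型。下面用一个假设的 `analysis_config_args` 辅助函数(代替需要安装Paddle才能运行的 `AnalysisConfig`)示意该分支逻辑,仅供参考:

```python
import os

def analysis_config_args(model_path):
    """示意 local_predict.py 补丁的逻辑:若模型目录下存在合并的
    __params__ 文件,则分别传入模型文件和参数文件;否则直接传入目录。"""
    params = os.path.join(model_path, "__params__")
    if os.path.exists(params):
        # 动态图导出:__model__ / __params__ 两个文件分开存放
        return (os.path.join(model_path, "__model__"), params)
    # 默认情况:普通 inference 模型目录
    return (model_path,)
```

返回的元组即补丁中传给 `AnalysisConfig` 的参数。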
## 模型转换
+
使用PaddleServing做服务化部署时,需要将保存的inference模型转换为serving易于部署的模型。
首先,下载PPOCR的[inference模型](https://github.com/PaddlePaddle/PaddleOCR#pp-ocr-20-series-model-listupdate-on-dec-15)
@@ -66,7 +79,11 @@ else:
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar && tar xf ch_ppocr_server_v2.0_det_infer.tar
# 下载并解压 OCR 文本识别模型
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar && tar xf ch_ppocr_server_v2.0_rec_infer.tar
+```
+
+接下来,用安装的paddle_serving_client把下载的inference模型转换成易于server部署的模型格式。
+```
# 转换检测模型
python3 -m paddle_serving_client.convert --dirname ./ch_ppocr_server_v2.0_det_infer/ \
--model_filename inference.pdmodel \
@@ -97,44 +114,45 @@ python3 -m paddle_serving_client.convert --dirname ./ch_ppocr_server_v2.0_rec_in
```
识别模型同理。
+
## Paddle Serving pipeline部署
-```
-# 下载PaddleOCR代码,若已下载可跳过此步骤
-git clone https://github.com/PaddlePaddle/PaddleOCR
-
-# 进入到工作目录
-cd PaddleOCR/deploy/pdserver/
-```
-pdserver目录包含启动pipeline服务和发送预测请求的代码,包括:
-```
-__init__.py
-config.yml # 启动服务的配置文件
-ocr_reader.py # OCR模型预处理和后处理的代码实现
-pipeline_http_client.py # 发送pipeline预测请求的脚本
-web_service.py # 启动pipeline服务端的脚本
-```
-
-启动服务可运行如下命令
-```
-# 启动服务,运行日志保存在log.txt
-python3 web_service.py &>log.txt &
-```
-成功启动服务后,log.txt中会打印类似如下日志
-![](./imgs/start_server.png)
-
-
-发送服务请求
-```
-python3 pipeline_http_client.py
-```
-成功运行后,模型预测的结果会打印在cmd窗口中,结果示例为:
-![](./imgs/results.png)
-
-
-
+1. 下载PaddleOCR代码,若已下载可跳过此步骤
+ ```
+ git clone https://github.com/PaddlePaddle/PaddleOCR
+
+ # 进入到工作目录
+ cd PaddleOCR/deploy/pdserving/
+ ```
+ pdserving目录包含启动pipeline服务和发送预测请求的代码,包括:
+ ```
+ __init__.py
+ config.yml # 启动服务的配置文件
+ ocr_reader.py # OCR模型预处理和后处理的代码实现
+ pipeline_http_client.py # 发送pipeline预测请求的脚本
+ web_service.py # 启动pipeline服务端的脚本
+ ```
+
+2. 启动服务可运行如下命令:
+ ```
+ # 启动服务,运行日志保存在log.txt
+ python3 web_service.py &>log.txt &
+ ```
+ 成功启动服务后,log.txt中会打印类似如下日志
+ ![](./imgs/start_server.png)
+
+3. 发送服务请求:
+ ```
+ python3 pipeline_http_client.py
+ ```
+ 成功运行后,模型预测的结果会打印在cmd窗口中,结果示例为:
+ ![](./imgs/results.png)
+
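除了使用提供的脚本,也可以手动构造pipeline预测请求,便于集成到其他程序中。下面的示例将图片做base64编码后,以Paddle Serving pipeline HTTP服务使用的 `{"key": [...], "value": [...]}` JSON格式发送;其中URL和端口是基于默认 `config.yml` 的假设,使用前请核对自己的配置:

```python
import base64
import json
import urllib.request

def build_request(image_bytes):
    """将一张图片打包为pipeline请求格式:
    "value" 列表中存放base64字符串,与 "image" 键对应。"""
    b64 = base64.b64encode(image_bytes).decode("utf8")
    return {"key": ["image"], "value": [b64]}

def send_request(image_path, url="http://127.0.0.1:9999/ocr/prediction"):
    # 注意:这里的URL和端口为假设值,使用前请核对config.yml中
    # 配置的http端口和服务名。
    with open(image_path, "rb") as f:
        payload = build_request(f.read())
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

响应JSON同样以 `key`/`value` 列表的形式返回检测和识别结果。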
## FAQ
-** Q1**: 发送请求后得不到结果
+**Q1**: 发送请求后没有结果返回,或者提示输出解码报错。
+
** A1**: 启动服务和发送请求时不要设置代理,可以在启动服务前和发送请求前关闭代理,关闭代理的命令是:
```
unset https_proxy