From d7d04f41446a9443068e70a9eede125459fd07f1 Mon Sep 17 00:00:00 2001
From: WenmuZhou <572459439@qq.com>
Date: Wed, 30 Mar 2022 14:46:35 +0000
Subject: [PATCH] add update log

---
 deploy/hubserving/readme.md    | 46 ++++++++++++++++++---------------
 deploy/hubserving/readme_en.md | 47 +++++++++++++++++++---------------
 2 files changed, 51 insertions(+), 42 deletions(-)

diff --git a/deploy/hubserving/readme.md b/deploy/hubserving/readme.md
index f8f7cf03..3280df5d 100755
--- a/deploy/hubserving/readme.md
+++ b/deploy/hubserving/readme.md
@@ -1,16 +1,17 @@
 [English](readme_en.md) | 简体中文

 - [基于PaddleHub Serving的服务部署](#基于paddlehub-serving的服务部署)
-  - [快速启动服务](#快速启动服务)
-    - [1. 准备环境](#1-准备环境)
-    - [2. 下载推理模型](#2-下载推理模型)
-    - [3. 安装服务模块](#3-安装服务模块)
-    - [4. 启动服务](#4-启动服务)
-      - [方式1. 命令行命令启动(仅支持CPU)](#方式1-命令行命令启动仅支持cpu)
-      - [方式2. 配置文件启动(支持CPU、GPU)](#方式2-配置文件启动支持cpugpu)
-  - [发送预测请求](#发送预测请求)
-  - [返回结果格式说明](#返回结果格式说明)
-  - [自定义修改服务模块](#自定义修改服务模块)
+  - [1. 近期更新](#1-近期更新)
+  - [2. 快速启动服务](#2-快速启动服务)
+    - [2.1 准备环境](#21-准备环境)
+    - [2.2 下载推理模型](#22-下载推理模型)
+    - [2.3 安装服务模块](#23-安装服务模块)
+    - [2.4 启动服务](#24-启动服务)
+      - [2.4.1 命令行命令启动(仅支持CPU)](#241-命令行命令启动仅支持cpu)
+      - [2.4.2 配置文件启动(支持CPU、GPU)](#242-配置文件启动支持cpugpu)
+  - [3. 发送预测请求](#3-发送预测请求)
+  - [4. 返回结果格式说明](#4-返回结果格式说明)
+  - [5. 自定义修改服务模块](#5-自定义修改服务模块)

 PaddleOCR提供2种服务部署方式:

@@ -19,7 +20,7 @@ PaddleOCR提供2种服务部署方式:

 # 基于PaddleHub Serving的服务部署

-hubserving服务部署目录下包括检测、识别、2阶段串联和表格识别四种服务包,请根据需求选择相应的服务包进行安装和启动。目录结构如下:
+hubserving服务部署目录下包括检测、识别、2阶段串联、表格识别和PP-Structure五种服务包,请根据需求选择相应的服务包进行安装和启动。目录结构如下:
 ```
 deploy/hubserving/
   └─  ocr_cls   分类模块服务包
@@ -38,17 +39,20 @@ deploy/hubserving/ocr_system/
   └─  module.py    主模块,必选,包含服务的完整逻辑
   └─  params.py    参数文件,必选,包含模型路径、前后处理参数等参数
 ```
+## 1. 近期更新

-## 快速启动服务
+* 2022.03.30 新增PP-Structure和表格识别两种服务。
+
+## 2. 快速启动服务
 以下步骤以检测+识别2阶段串联服务为例,如果只需要检测服务或识别服务,替换相应文件路径即可。
-### 1. 准备环境
+### 2.1 准备环境
 ```shell
 # 安装paddlehub
 # paddlehub 需要 python>3.6.2
 pip3 install paddlehub==2.1.0 --upgrade -i https://mirror.baidu.com/pypi/simple
 ```
-### 2. 
下载推理模型
+### 2.2 下载推理模型
 安装服务模块前,需要准备推理模型并放到正确路径。默认使用的是PP-OCRv2模型,默认模型路径为:
 ```
 检测模型:./inference/ch_PP-OCRv2_det_infer/
@@ -59,7 +63,7 @@ pip3 install paddlehub==2.1.0 --upgrade -i https://mirror.baidu.com/pypi/simple

 **模型路径可在`params.py`中查看和修改。** 更多模型可以从PaddleOCR提供的模型库[PP-OCR](../../doc/doc_ch/models_list.md)和[PP-Structure](../../ppstructure/docs/models_list.md)下载,也可以替换成自己训练转换好的模型。

-### 3. 安装服务模块
+### 2.3 安装服务模块
 PaddleOCR提供5种服务模块,根据需要安装所需模块。

 * 在Linux环境下,安装示例如下:
@@ -104,8 +108,8 @@ hub install deploy\hubserving\structure_table\
 hub install deploy\hubserving\structure_system\
 ```

-### 4. 启动服务
-#### 方式1. 命令行命令启动(仅支持CPU)
+### 2.4 启动服务
+#### 2.4.1 命令行命令启动(仅支持CPU)
 **启动命令:**
 ```shell
 $ hub serving start --modules [Module1==Version1, Module2==Version2, ...] \
@@ -127,7 +131,7 @@ $ hub serving start --modules [Module1==Version1, Module2==Version2, ...] \

 这样就完成了一个服务化API的部署,使用默认端口号8866。

-#### 方式2. 配置文件启动(支持CPU、GPU)
+#### 2.4.2 配置文件启动(支持CPU、GPU)
 **启动命令:**
 ```hub serving start -c config.json```
@@ -164,7 +168,7 @@ export CUDA_VISIBLE_DEVICES=3
 hub serving start -c deploy/hubserving/ocr_system/config.json
 ```

-## 发送预测请求
+## 3. 发送预测请求
 配置好服务端,可使用以下命令发送预测请求,获取预测结果:

 ```python tools/test_hubserving.py server_url image_path```
@@ -186,7 +190,7 @@ hub serving start -c deploy/hubserving/ocr_system/config.json
 访问示例:

 ```python tools/test_hubserving.py --server_url=http://127.0.0.1:8868/predict/ocr_system --image_dir=./doc/imgs/ --visualize=false```

-## 返回结果格式说明
+## 4. 返回结果格式说明
 返回结果为列表(list),列表中的每一项为词典(dict),词典一共可能包含3种字段,信息如下:

 |字段名称|数据类型|意义|
@@ -211,7 +215,7 @@ hub serving start -c deploy/hubserving/ocr_system/config.json

 **说明:** 如果需要增加、删除、修改返回字段,可在相应模块的`module.py`文件中进行修改,完整流程参考下一节自定义修改服务模块。

-## 自定义修改服务模块
+## 5. 
自定义修改服务模块 如果需要修改服务逻辑,你一般需要操作以下步骤(以修改`ocr_system`为例): - 1、 停止服务 diff --git a/deploy/hubserving/readme_en.md b/deploy/hubserving/readme_en.md index 79a07687..f360edbe 100755 --- a/deploy/hubserving/readme_en.md +++ b/deploy/hubserving/readme_en.md @@ -1,16 +1,17 @@ English | [简体中文](readme.md) - [Service deployment based on PaddleHub Serving](#service-deployment-based-on-paddlehub-serving) - - [Quick start service](#quick-start-service) - - [1. Prepare the environment](#1-prepare-the-environment) - - [2. Download inference model](#2-download-inference-model) - - [3. Install Service Module](#3-install-service-module) - - [4. Start service](#4-start-service) - - [Way 1. Start with command line parameters (CPU only)](#way-1-start-with-command-line-parameters-cpu-only) - - [Way 2. Start with configuration file(CPU、GPU)](#way-2-start-with-configuration-filecpugpu) - - [Send prediction requests](#send-prediction-requests) - - [Returned result format](#returned-result-format) - - [User defined service module modification](#user-defined-service-module-modification) + - [1. Update](#1-update) + - [2. Quick start service](#2-quick-start-service) + - [2.1 Prepare the environment](#21-prepare-the-environment) + - [2.2 Download inference model](#22-download-inference-model) + - [2.3 Install Service Module](#23-install-service-module) + - [2.4 Start service](#24-start-service) + - [2.4.1 Start with command line parameters (CPU only)](#241-start-with-command-line-parameters-cpu-only) + - [2.4.2 Start with configuration file(CPU、GPU)](#242-start-with-configuration-filecpugpu) + - [3. Send prediction requests](#3-send-prediction-requests) + - [4. Returned result format](#4-returned-result-format) + - [5. 
User defined service module modification](#5-user-defined-service-module-modification)

 PaddleOCR provides 2 service deployment methods:

@@ -19,7 +20,7 @@ PaddleOCR provides 2 service deployment methods:

 # Service deployment based on PaddleHub Serving

-The hubserving service deployment directory includes three service packages: text detection, text recognition, two-stage series connection and table recognition. Please select the corresponding service package to install and start service according to your needs. The directory is as follows:
+The hubserving service deployment directory includes five service packages: text detection, text recognition, two-stage series connection, table recognition and PP-Structure. Please select the corresponding service package to install and start service according to your needs. The directory is as follows:
 ```
 deploy/hubserving/
   └─  ocr_det     text detection module service package
@@ -38,18 +39,22 @@ deploy/hubserving/ocr_system/
   └─  module.py    Main module file, required, contains the complete logic of the service
   └─  params.py    Parameter file, required, including parameters such as model path, pre- and post-processing parameters
 ```
+## 1. Update

-## Quick start service
+* 2022.03.30 added PP-Structure and table recognition services.
+
+
+## 2. Quick start service
 The following steps take the 2-stage series service as an example. If only the detection service or recognition service is needed, replace the corresponding file path.
-### 1. Prepare the environment
+### 2.1 Prepare the environment
 ```shell
 # Install paddlehub
 # python>3.6.2 is required by paddlehub
 pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
 ```
-### 2. Download inference model
+### 2.2 Download inference model
 Before installing the service module, you need to prepare the inference model and put it in the correct path. 
By default, the PP-OCRv2 models are used, and the default model path is:
 ```
 text detection model: ./inference/ch_PP-OCRv2_det_infer/
@@ -60,7 +65,7 @@ table recognition: ./inference/en_ppocr_mobile_v2.0_table_structure_infer/
 ```

 **The model path can be found and modified in `params.py`.** More models provided by PaddleOCR can be obtained from the [model library](../../doc/doc_en/models_list_en.md). You can also use models trained by yourself.

-### 3. Install Service Module
+### 2.3 Install Service Module
 PaddleOCR provides 5 kinds of service modules, install the required modules according to your needs.

 * On Linux platform, the examples are as follows.
@@ -105,8 +110,8 @@ hub install deploy\hubserving\structure_table\
 hub install deploy\hubserving\structure_system\
 ```

-### 4. Start service
-#### Way 1. Start with command line parameters (CPU only)
+### 2.4 Start service
+#### 2.4.1 Start with command line parameters (CPU only)

 **start command:**
 ```shell
@@ -131,7 +136,7 @@ hub serving start -m ocr_system
 ```

 This completes the deployment of a service API, using the default port number 8866.

-#### Way 2. Start with configuration file(CPU、GPU)
+#### 2.4.2 Start with configuration file(CPU、GPU)
 **start command:**
 ```shell
 hub serving start --config/-c config.json
 ```
@@ -168,7 +173,7 @@ export CUDA_VISIBLE_DEVICES=3
 hub serving start -c deploy/hubserving/ocr_system/config.json
 ```

-## Send prediction requests
+## 3. Send prediction requests
 After the service starts, you can use the following command to send a prediction request to obtain the prediction result:
 ```shell
 python tools/test_hubserving.py server_url image_path
 ```
@@ -194,7 +199,7 @@ For example, if using the configuration file to start the text angle classificat
 python tools/test_hubserving.py --server_url=http://127.0.0.1:8868/predict/ocr_system --image_dir=./doc/imgs/ --visualize=false
 ```

-## Returned result format
+## 4. Returned result format
 The returned result is a list. Each item in the list is a dict. 
The dict may contain three fields. The information is as follows: |field name|data type|description| @@ -219,7 +224,7 @@ The fields returned by different modules are different. For example, the results **Note:** If you need to add, delete or modify the returned fields, you can modify the file `module.py` of the corresponding module. For the complete process, refer to the user-defined modification service module in the next section. -## User defined service module modification +## 5. User defined service module modification If you need to modify the service logic, the following steps are generally required (take the modification of `ocr_system` for example): - 1. Stop service -- GitLab