English | [简体中文](readme.md)

- [Service deployment based on PaddleHub Serving](#service-deployment-based-on-paddlehub-serving)
  - [Quick start service](#quick-start-service)
    - [1. Prepare the environment](#1-prepare-the-environment)
    - [2. Download inference model](#2-download-inference-model)
    - [3. Install Service Module](#3-install-service-module)
    - [4. Start service](#4-start-service)
      - [Way 1. Start with command line parameters (CPU only)](#way-1-start-with-command-line-parameters-cpu-only)
      - [Way 2. Start with configuration file (CPU, GPU)](#way-2-start-with-configuration-file-cpu-gpu)
  - [Send prediction requests](#send-prediction-requests)
  - [Returned result format](#returned-result-format)
  - [User defined service module modification](#user-defined-service-module-modification)


PaddleOCR provides 2 service deployment methods:
- Based on **PaddleHub Serving**: Code path is "`./deploy/hubserving`". Please follow this tutorial.
- Based on **PaddleServing**: Code path is "`./deploy/pdserving`". Please refer to the [tutorial](../../deploy/pdserving/README.md) for usage.

# Service deployment based on PaddleHub Serving  

The hubserving service deployment directory includes five service packages: text detection, text angle classification, text recognition, two-stage series connection, and table recognition. Please select the corresponding service package to install and start the service according to your needs. The directory is as follows:
```
deploy/hubserving/
  └─  ocr_det     text detection module service package
  └─  ocr_cls     text angle classification module service package
  └─  ocr_rec     text recognition module service package
  └─  ocr_system  two-stage series connection service package
  └─  structure_table  table recognition service package
```

Each service package contains 4 files. Take the 2-stage series connection service package as an example; its directory is as follows:
```
deploy/hubserving/ocr_system/
  └─  __init__.py    Empty file, required
  └─  config.json    Configuration file, optional, passed in as a parameter when using configuration to start the service
  └─  module.py      Main module file, required, contains the complete logic of the service
  └─  params.py      Parameter file, required, including parameters such as model path, pre- and post-processing parameters
```

## Quick start service
The following steps take the 2-stage series service as an example. If only the detection service or recognition service is needed, replace the corresponding file path.

### 1. Prepare the environment
```shell
# Install paddlehub  
# python>3.6.2 is required by paddlehub
pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
```

### 2. Download inference model
Before installing the service module, you need to prepare the inference model and put it in the correct path. By default, the PP-OCRv2 models are used, and the default model path is:  
```
text detection model: ./inference/ch_PP-OCRv2_det_infer/
text recognition model: ./inference/ch_PP-OCRv2_rec_infer/
text angle classifier: ./inference/ch_ppocr_mobile_v2.0_cls_infer/
table recognition: ./inference/en_ppocr_mobile_v2.0_table_structure_infer/
```  

**The model path can be found and modified in `params.py`.** More models provided by PaddleOCR can be obtained from the [model library](../../doc/doc_en/models_list_en.md). You can also use models trained by yourself.
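
For reference, the model-path entries in `params.py` are plain attribute assignments on the configuration object built in that file. The sketch below shows roughly what they look like with the PP-OCRv2 defaults listed above; it is only an illustration, and attribute names may differ slightly between PaddleOCR versions, so check the file shipped with your release. A `SimpleNamespace` stands in for the real config object to keep the sketch self-contained.

```python
# Minimal sketch of the model-path settings in params.py (PP-OCRv2 defaults).
# In the real file, `cfg` is the config object created inside params.py itself;
# SimpleNamespace is used here only so the snippet runs on its own.
from types import SimpleNamespace

cfg = SimpleNamespace()
cfg.det_model_dir = "./inference/ch_PP-OCRv2_det_infer/"            # text detection model
cfg.rec_model_dir = "./inference/ch_PP-OCRv2_rec_infer/"            # text recognition model
cfg.cls_model_dir = "./inference/ch_ppocr_mobile_v2.0_cls_infer/"   # text angle classifier
cfg.use_angle_cls = True  # set to False to skip the angle classifier
```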

### 3. Install Service Module
PaddleOCR provides 5 kinds of service modules. Install the required modules according to your needs.

* On Linux platform, the examples are as follows.
```shell
# Install the text detection service module:
hub install deploy/hubserving/ocr_det/

# Or, install the text angle classification service module:
hub install deploy/hubserving/ocr_cls/

# Or, install the text recognition service module:
hub install deploy/hubserving/ocr_rec/

# Or, install the 2-stage series service module:
hub install deploy/hubserving/ocr_system/

# Or, install the table recognition service module:
hub install deploy/hubserving/structure_table/
```

* On Windows platform, the examples are as follows.
```shell
# Install the detection service module:
hub install deploy\hubserving\ocr_det\

# Or, install the angle class service module:
hub install deploy\hubserving\ocr_cls\

# Or, install the recognition service module:
hub install deploy\hubserving\ocr_rec\

# Or, install the 2-stage series service module:
hub install deploy\hubserving\ocr_system\

# Or, install the table recognition service module:
hub install deploy\hubserving\structure_table\
```

### 4. Start service
#### Way 1. Start with command line parameters (CPU only)

**start command:**  
```shell
$ hub serving start --modules [Module1==Version1, Module2==Version2, ...] \
                    --port XXXX \
                    --use_multiprocess \
                    --workers
```  
**parameters:**  

|parameters|usage|  
|---|---|  
|--modules/-m|PaddleHub Serving pre-installed model, listed in the form of multiple Module==Version key-value pairs<br>*`When Version is not specified, the latest version is selected by default`*|
|--port/-p|Service port, default is 8866|  
|--use_multiprocess|Enable concurrent mode, the default is single-process mode, this mode is recommended for multi-core CPU machines<br>*`Windows operating system only supports single-process mode`*|
|--workers|The number of concurrent tasks specified in concurrent mode, the default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores|  

For example, start the 2-stage series service:  
```shell
hub serving start -m ocr_system
```  

This completes the deployment of a service API, using the default port number 8866.  

#### Way 2. Start with configuration file (CPU, GPU)
**start command:**  
```shell
hub serving start --config/-c config.json
```  
The format of `config.json` is as follows:
```json
{
    "modules_info": {
        "ocr_system": {
            "init_args": {
                "version": "1.0.0",
                "use_gpu": true
            },
            "predict_args": {
            }
        }
    },
    "port": 8868,
    "use_multiprocess": false,
    "workers": 2
}
```
- The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. Among them, **when `use_gpu` is `true`, it means that the GPU is used to start the service**.
- The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`.

**Note:**  
- When using the configuration file to start the service, other command line parameters will be ignored.
- If you use GPU prediction (that is, `use_gpu` is set to `true`), you need to set the `CUDA_VISIBLE_DEVICES` environment variable before starting the service, for example: `export CUDA_VISIBLE_DEVICES=0`; otherwise you do not need to set it.
- **`use_gpu` and `use_multiprocess` cannot be `true` at the same time.**  

For example, use GPU card No. 3 to start the 2-stage series service:
```shell
export CUDA_VISIBLE_DEVICES=3
hub serving start -c deploy/hubserving/ocr_system/config.json
```  

## Send prediction requests
After the service starts, you can use the following command to send a prediction request to obtain the prediction result:  
```shell
python tools/test_hubserving.py --server_url=server_url --image_dir=image_path
```  

The following parameters need to be passed to the script:
- **server_url**: service address, in the format `http://[ip_address]:[port]/predict/[module_name]`
For example, if you use configuration files to start the text detection, text angle classification, text recognition, 3-stage detection+classification+recognition, and table recognition services, the `server_url`s for the requests will be:

`http://127.0.0.1:8865/predict/ocr_det`  
`http://127.0.0.1:8866/predict/ocr_cls`  
`http://127.0.0.1:8867/predict/ocr_rec`  
`http://127.0.0.1:8868/predict/ocr_system`  
`http://127.0.0.1:8869/predict/structure_table`
- **image_dir**: Test image path, which can be a single image path or an image directory path
- **visualize**: Whether to visualize the results; the default value is False

**Example:**
```shell
python tools/test_hubserving.py --server_url=http://127.0.0.1:8868/predict/ocr_system --image_dir=./doc/imgs/ --visualize=false
```
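
If you prefer not to use the helper script, the request can also be assembled by hand. The sketch below assumes the service expects a JSON body with a list of base64-encoded images under the key `images` and returns the prediction under `results`, which matches how `tools/test_hubserving.py` builds its requests; the URL and image path are only illustrative, so adjust them to your own setup.

```python
import base64
import json

import requests


def image_to_base64(image_path):
    # Read the raw image bytes and base64-encode them as a UTF-8 string.
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf8")


server_url = "http://127.0.0.1:8868/predict/ocr_system"   # adjust to your service
data = {"images": [image_to_base64("./doc/imgs/11.jpg")]}  # image path is illustrative

response = requests.post(
    server_url, headers={"Content-type": "application/json"}, data=json.dumps(data))
response.raise_for_status()

# The response body is expected to carry the prediction list under "results";
# for ocr_system each item holds fields such as text, confidence and text_region.
for item in response.json()["results"][0]:
    print(item)
```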

## Returned result format
The returned result is a list. Each item in the list is a dict, which may contain the following fields:

|field name|data type|description|
|----|----|----|
|angle|str|angle|
|text|str|text content|
|confidence|float|text recognition confidence|
|text_region|list|text location coordinates|
|html|str|HTML string of the table|

The fields returned by different modules are different. For example, the results returned by the text recognition service module do not contain `text_region`. The details are as follows:

| field name/module name| ocr_det | ocr_cls | ocr_rec | ocr_system | structure_table |
|  ----  |  ----  |  ----  |  ----  |  ----  | ----  |
|angle| | ✔ | | ✔ | |
|text| | |✔|✔| |
|confidence| |✔ |✔| | |
|text_region| ✔| | |✔ | |
|html| | | | |✔ |

**Note:** If you need to add, delete or modify the returned fields, you can modify the file `module.py` of the corresponding module. For the complete process, refer to the user-defined service module modification in the next section.

## User defined service module modification
If you need to modify the service logic, the following steps are generally required (taking the modification of `ocr_system` as an example):

- 1. Stop service
```shell
hub serving stop --port/-p XXXX
```
- 2. Modify the code in the corresponding files, like `module.py` and `params.py`, according to the actual needs.  
For example, if you need to replace the model used by the deployed service, you need to modify model path parameters `det_model_dir` and `rec_model_dir` in `params.py`. If you want to turn off the text direction classifier, set the parameter `use_angle_cls` to `False`. Of course, other related parameters may need to be modified at the same time. Please modify and debug according to the actual situation. It is suggested to run `module.py` directly for debugging after modification before starting the service test (a minimal debug sketch is given after these steps).  
- 3. Uninstall old service module
```shell
hub uninstall ocr_system
```
- 4. Install modified service module
```shell
hub install deploy/hubserving/ocr_system/
```
- 5. Restart service
```shell
hub serving start -m ocr_system
```
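
As mentioned in step 2, the installed module can also be exercised directly from Python for debugging, without starting the HTTP service. The following is a minimal sketch, assuming the module has been installed under the name `ocr_system`, that the inference models from step 2 are in place, and that the module exposes a `predict(paths=...)` interface as in the shipped `module.py`; the image path is only an example.

```python
# Hedged sketch for local debugging of the installed ocr_system module.
import paddlehub as hub

# Load the service module installed via `hub install deploy/hubserving/ocr_system/`;
# this assumes the inference models referenced in params.py have been downloaded.
ocr = hub.Module(name="ocr_system")

# Run prediction on a local image (path is illustrative) and inspect the raw output.
results = ocr.predict(paths=["./doc/imgs/11.jpg"])
print(results)
```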