From f20549fd8f4006a36b16f5f9fd7f9979dbf1cedb Mon Sep 17 00:00:00 2001
From: MissPenguin
Date: Fri, 17 Jul 2020 12:20:18 +0800
Subject: [PATCH] add detection_en.md & serving_en.md

---
 doc/doc_en/detection_en.md |  29 +++---
 doc/doc_en/serving_en.md   | 179 +++++++++++++++++++++++++++++++++++++
 2 files changed, 194 insertions(+), 14 deletions(-)
 create mode 100644 doc/doc_en/serving_en.md

diff --git a/doc/doc_en/detection_en.md b/doc/doc_en/detection_en.md
index 6bb496c9..1478cb37 100644
--- a/doc/doc_en/detection_en.md
+++ b/doc/doc_en/detection_en.md
@@ -22,12 +22,12 @@ After decompressing the data set and downloading the annotation file, PaddleOCR/
  └─ test_icdar2015_label.txt    Test annotation of icdar dataset
```

-The provided annotation file format is as follow:
+The provided annotation file format is as follows, separated by "\t":
```
" Image file name             Image annotation information encoded by json.dumps"
ch4_test_images/img_61.jpg [{"transcription": "MASA", "points": [[310, 104], [416, 141], [418, 216], [312, 179]], ...}]
```
-The image annotation after json.dumps() encoding is a list containing multiple dictionaries. The `points` in the dictionary represent the coordinates (x, y) of the four points of the text box, arranged clockwise from the point at the upper left corner.
+The image annotation before json.dumps() encoding is a list containing multiple dictionaries. The `points` in the dictionary represent the coordinates (x, y) of the four points of the text box, arranged clockwise from the point at the upper left corner.

`transcription` represents the text of the current text box, and this information is not needed in the text detection task. If you want to train PaddleOCR on other datasets, you can build the annotation file according to the above format.
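
+For instance, a minimal sketch of generating one annotation line in this format (the image path, transcription, and output file name here are purely illustrative):
+```python
+import json
+
+# One dictionary per text box: four clockwise (x, y) corner points plus the transcription.
+labels = [{"transcription": "MASA",
+           "points": [[310, 104], [416, 141], [418, 216], [312, 179]]}]
+
+with open("train_icdar2015_label.txt", "a", encoding="utf-8") as f:
+    # The image path and the json.dumps()-encoded label list are separated by "\t".
+    # ensure_ascii=False keeps non-ASCII transcriptions readable in the file.
+    f.write("ch4_test_images/img_61.jpg" + "\t" + json.dumps(labels, ensure_ascii=False) + "\n")
+```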

@@ -56,7 +56,8 @@ tar xf ./pretrain_models/MobileNetV3_large_x0_5_pretrained.tar ./pretrain_models
```

-**START TRAINING**
+**START TRAINING**
+*If the CPU version of PaddlePaddle is installed, please set the parameter `use_gpu` to `false` in the configuration file.*
```
python3 tools/train.py -c configs/det/det_mv3_db.yml
```

@@ -64,20 +65,20 @@ python3 tools/train.py -c configs/det/det_mv3_db.yml
In the above instruction, use `-c` to select the configs/det/det_mv3_db.yml configuration file for training.
For a detailed explanation of the configuration file, please refer to [link](./config_en.md).

-You can also use the `-o` parameter to change the training parameters without modifying the yml file. For example, adjust the training learning rate to 0.0001
+You can also use `-o` to change the training parameters without modifying the yml file. For example, adjust the training learning rate to 0.0001:
```
python3 tools/train.py -c configs/det/det_mv3_db.yml -o Optimizer.base_lr=0.0001
```

**load trained model and continue training**
-If you expect to load trained model and continue the training again, you can specify the `Global.checkpoints` parameter as the model path to be loaded.
+If you want to load a trained model and continue training, you can specify the parameter `Global.checkpoints` as the path of the model to be loaded.
For example:
```
python3 tools/train.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=./your/trained/model
```

-**Note**:The priority of Global.checkpoints is higher than the priority of Global.pretrain_weights, that is, when two parameters are specified at the same time, the model specified by Global.checkpoints will be loaded first. If the model path specified by Global.checkpoints is wrong, the one specified by Global.pretrain_weights will be loaded.
+**Note**: The priority of `Global.checkpoints` is higher than that of `Global.pretrain_weights`, that is, when the two parameters are specified at the same time, the model specified by `Global.checkpoints` will be loaded first. If the model path specified by `Global.checkpoints` is wrong, the one specified by `Global.pretrain_weights` will be loaded.

## EVALUATION

@@ -86,34 +87,34 @@ PaddleOCR calculates three indicators for evaluating performance of OCR detectio

Run the following code to calculate the evaluation indicators. The result will be saved in the test result file specified by `save_res_path` in the configuration file `det_mv3_db.yml`.

-When evaluating, set post-processing parameters box_thresh=0.6, unclip_ratio=1.5. If you use different datasets, different models for training, these two parameters should be adjusted for better result.
+When evaluating, set the post-processing parameters `box_thresh=0.6` and `unclip_ratio=1.5`. If you train with different datasets or models, these two parameters should be adjusted for better results.
```
python3 tools/eval.py -c configs/det/det_mv3_db.yml  -o Global.checkpoints="{path/to/weights}/best_accuracy" PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
```

-The model parameters during training are saved in the `Global.save_model_dir` directory by default. When evaluating indicators, you need to set Global.checkpoints to point to the saved parameter file.
+The model parameters during training are saved in the `Global.save_model_dir` directory by default. When evaluating, you need to set `Global.checkpoints` to point to the saved parameter file.
Such as:
-```
+```shell
python3 tools/eval.py -c configs/det/det_mv3_db.yml -o Global.checkpoints="./output/det_db/best_accuracy" PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
```

-* Note: box_thresh and unclip_ratio are parameters required for DB post-processing, and not need to be set when evaluating the EAST model.
+* Note: `box_thresh` and `unclip_ratio` are parameters required for DB post-processing, and do not need to be set when evaluating the EAST model.

-## TEST DETECTION RESULT
+## TEST

Test the detection result on a single image:
-```
+```shell
python3 tools/infer_det.py -c configs/det/det_mv3_db.yml -o TestReader.infer_img="./doc/imgs_en/img_10.jpg" Global.checkpoints="./output/det_db/best_accuracy"
```

When testing the DB model, adjust the post-processing threshold:
-```
+```shell
python3 tools/infer_det.py -c configs/det/det_mv3_db.yml -o TestReader.infer_img="./doc/imgs_en/img_10.jpg" Global.checkpoints="./output/det_db/best_accuracy" PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
```

Test the detection result on all images in the folder:
-```
+```shell
python3 tools/infer_det.py -c configs/det/det_mv3_db.yml -o TestReader.infer_img="./doc/imgs_en/" Global.checkpoints="./output/det_db/best_accuracy"
```

diff --git a/doc/doc_en/serving_en.md b/doc/doc_en/serving_en.md
new file mode 100644
index 00000000..6d847d90
--- /dev/null
+++ b/doc/doc_en/serving_en.md
@@ -0,0 +1,179 @@
+# Service deployment

+PaddleOCR provides 2 service deployment methods:
+- Based on **HubServing**: Already integrated into PaddleOCR ([code](https://github.com/PaddlePaddle/PaddleOCR/tree/develop/deploy/hubserving)). Please follow this tutorial.
+- Based on **PaddleServing**: See the PaddleServing official website for details ([demo](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/ocr)). It will also be integrated into PaddleOCR as a follow-up.

+The service deployment directory includes three service packages: detection, recognition, and two-stage series connection. Select the corresponding service package to install and start the service according to your needs. The directory is as follows:
+```
+deploy/hubserving/
+  └─  ocr_det     detection module service package
+  └─  ocr_rec     recognition module service package
+  └─  ocr_system  two-stage series connection service package
+```

+Each service package contains 3 required files and an optional configuration file. Take the 2-stage series connection service package as an example; the directory is as follows:
+```
+deploy/hubserving/ocr_system/
+  └─  __init__.py    Empty file, required
+  └─  config.json    Configuration file, optional, passed in as a parameter when starting the service with a configuration
+  └─  module.py      Main module file, required, contains the complete logic of the service
+  └─  params.py      Parameter file, required, includes parameters such as model paths and pre- and post-processing parameters
+```

+## Quick start service
+The following steps take the 2-stage series service as an example. If only the detection service or recognition service is needed, replace the corresponding file path.

+### 1. Prepare the environment
+```shell
+# Install paddlehub
+pip3 install paddlehub --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple

+# Set environment variables
+export PYTHONPATH=.
+```

+### 2. Install Service Module
+PaddleOCR provides 3 kinds of service modules; install the required modules according to your needs. For example:

+Install the detection service module:
+```shell
+hub install deploy/hubserving/ocr_det/
+```
+Or, install the recognition service module:
+```shell
+hub install deploy/hubserving/ocr_rec/
+```
+Or, install the 2-stage series service module:
+```shell
+hub install deploy/hubserving/ocr_system/
+```

+### 3. Start service
+#### Way 1. Start with command line parameters (CPU only)

+**start command:**
+```shell
+$ hub serving start --modules [Module1==Version1, Module2==Version2, ...] \
+                    --port XXXX \
+                    --use_multiprocess \
+                    --workers
+```
+**parameters:**

+|parameters|usage|
+|-|-|
+|--modules/-m|PaddleHub Serving pre-installed model, listed in the form of multiple Module==Version key-value pairs<br>*`When Version is not specified, the latest version is selected by default`*|
+|--port/-p|Service port, default is 8866|
+|--use_multiprocess|Enable concurrent mode; the default is single-process mode, and concurrent mode is recommended for multi-core CPU machines<br>*`The Windows operating system only supports single-process mode`*|
+|--workers|The number of concurrent tasks specified in concurrent mode, the default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores|

+For example, start the 2-stage series service:
+```shell
+hub serving start -m ocr_system
+```

+This completes the deployment of a service API, using the default port number 8866.

+#### Way 2. Start with configuration file (CPU, GPU)
+**start command:**
+```shell
+hub serving start --config/-c config.json
+```
+The format of `config.json` is as follows:
+```json
+{
+    "modules_info": {
+        "ocr_system": {
+            "init_args": {
+                "version": "1.0.0",
+                "use_gpu": true
+            },
+            "predict_args": {
+            }
+        }
+    },
+    "port": 8868,
+    "use_multiprocess": false,
+    "workers": 2
+}
+```
+- The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. In particular, **when `use_gpu` is `true`, the service is started with the GPU**.
+- The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`.
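
+As a purely hypothetical sketch of that mapping (the actual signatures live in `deploy/hubserving/ocr_system/module.py` and may differ), the two sections feed two hooks of the service module:
+```python
+# Hypothetical outline only; check module.py for the real interfaces.
+class OCRSystem:
+    def _initialize(self, use_gpu=False):
+        # The key-value pairs of "init_args" in config.json arrive here once, at service start.
+        self.use_gpu = use_gpu
+
+    def predict(self, **predict_args):
+        # The key-value pairs of "predict_args" in config.json are passed to each prediction call.
+        return []
+```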

+**Note:**
+- When using the configuration file to start the service, other command line parameters will be ignored.
+- If you use GPU prediction (that is, `use_gpu` is set to `true`), you need to set the environment variable CUDA_VISIBLE_DEVICES before starting the service, such as ```export CUDA_VISIBLE_DEVICES=0```; otherwise it does not need to be set.
+- **`use_gpu` and `use_multiprocess` cannot be `true` at the same time.**

+For example, use GPU card No. 3 to start the 2-stage series service:
+```shell
+export CUDA_VISIBLE_DEVICES=3
+hub serving start -c deploy/hubserving/ocr_system/config.json
+```

+## Send prediction requests
+After the service starts, you can use the following command to send a prediction request and obtain the prediction result:
+```shell
+python tools/test_hubserving.py server_url image_path
+```

+Two parameters need to be passed to the script:
+- **server_url**: service address, in the format
+`http://[ip_address]:[port]/predict/[module_name]`
+For example, if the detection, recognition and 2-stage series services are started with the provided configuration files, the respective `server_url` would be:
+`http://127.0.0.1:8866/predict/ocr_det`
+`http://127.0.0.1:8867/predict/ocr_rec`
+`http://127.0.0.1:8868/predict/ocr_system`
+- **image_path**: test image path, which can be a single image path or an image directory path

+**E.g.**
+```shell
+python tools/test_hubserving.py http://127.0.0.1:8868/predict/ocr_system ./doc/imgs/
+```
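
+If you want to send requests from your own code instead of through `tools/test_hubserving.py`, a minimal sketch along the following lines should work. It assumes the service accepts a JSON body with base64-encoded images, as the test script does; the image path is illustrative, and `tools/test_hubserving.py` remains the reference for the exact protocol.
+```python
+import base64
+import json
+
+import requests  # third-party package: pip install requests
+
+# Read the test image and base64-encode it.
+with open("./doc/imgs_en/img_10.jpg", "rb") as f:
+    image = base64.b64encode(f.read()).decode("utf8")
+
+# Send the image to the 2-stage series service and print the raw response.
+url = "http://127.0.0.1:8868/predict/ocr_system"
+headers = {"Content-type": "application/json"}
+r = requests.post(url=url, headers=headers, data=json.dumps({"images": [image]}))
+print(r.json())
+```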

+## Returned result format
+The returned result is a list. Each item in the list is a dict, and each dict may contain three fields:

+|field name|data type|description|
+|-|-|-|
+|text|str|text content|
+|confidence|float|text recognition confidence|
+|text_region|list|text location coordinates|

+The fields returned by different modules are different. For example, the results returned by the text recognition service module do not contain `text_region`. The details are as follows:

+|field name/module name|ocr_det|ocr_rec|ocr_system|
+|-|-|-|-|
+|text||✔|✔|
+|confidence||✔|✔|
+|text_region|✔||✔|

+**Note:** If you need to add, delete or modify the returned fields, you can modify the file `module.py` of the corresponding module. For the complete process, refer to the user-defined service module modification in the next section.
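
+For example, a small sketch of iterating over such a result list (the values are illustrative; `res` stands for the parsed list returned for one image):
+```python
+# A result list in the format described above, here for the ocr_system module.
+res = [
+    {"text": "MASA",
+     "confidence": 0.98,
+     "text_region": [[310, 104], [416, 141], [418, 216], [312, 179]]},
+]
+
+for item in res:
+    # Use .get() because ocr_det omits "text"/"confidence" and ocr_rec omits "text_region".
+    text = item.get("text", "")
+    confidence = item.get("confidence", 0.0)
+    region = item.get("text_region", [])
+    print(f"{text} ({confidence:.2f}) at {region}")
+```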

+## User-defined service module modification
+If you need to modify the service logic, the following steps are generally required (take the modification of `ocr_system` as an example):

+- 1. Stop service
+```shell
+hub serving stop --port/-p XXXX
+```

+- 2. Modify the code in the corresponding files, like `module.py` and `params.py`, according to actual needs.
+For example, if you need to replace the model used by the deployed service, you need to modify the model path parameters `det_model_dir` and `rec_model_dir` in `params.py` (see the sketch after this list). Of course, other related parameters may need to be modified at the same time. Please modify and debug according to the actual situation. After modification, it is suggested to run `module.py` directly for debugging before starting the service test.

+- 3. Uninstall old service module
+```shell
+hub uninstall ocr_system
+```

+- 4. Install modified service module
+```shell
+hub install deploy/hubserving/ocr_system/
+```

+- 5. Restart service
+```shell
+hub serving start -m ocr_system
+```
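
+For step 2, a purely hypothetical sketch of the model-path section of `params.py` (the real file defines many more parameters; check `deploy/hubserving/ocr_system/params.py` for the actual structure and names):
+```python
+# Hypothetical excerpt: only the model-path parameters mentioned above.
+class Config(object):
+    pass
+
+
+def read_params():
+    cfg = Config()
+    # Point these at your own exported inference models.
+    cfg.det_model_dir = "./inference/my_det_model/"
+    cfg.rec_model_dir = "./inference/my_rec_model/"
+    return cfg
+```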