diff --git a/deploy/cpp_infer/readme.md b/deploy/cpp_infer/readme.md
index 66be846516f1926f45bf6978fbb5b9b17a035cbd..30b8628517d605c74008378078aef3f03528e7cf 100644
--- a/deploy/cpp_infer/readme.md
+++ b/deploy/cpp_infer/readme.md
@@ -18,6 +18,7 @@ PaddleOCR模型部署。
 
 * 首先需要从opencv官网上下载在Linux环境下源码编译的包,以opencv3.4.7为例,下载命令如下。
 ```
+cd deploy/cpp_infer
 wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz
 tar -xf 3.4.7.tar.gz
 ```
diff --git a/deploy/cpp_infer/readme_en.md b/deploy/cpp_infer/readme_en.md
index 6c0a18db4f76d4e2971cea16130216434ff01d7b..b03187a7659a5f3bb7ca67970febe853dd201fa1 100644
--- a/deploy/cpp_infer/readme_en.md
+++ b/deploy/cpp_infer/readme_en.md
@@ -18,6 +18,7 @@ PaddleOCR model deployment.
 
 * First of all, you need to download the source code compiled package in the Linux environment from the opencv official website. Taking opencv3.4.7 as an example, the download command is as follows.
 ```
+cd deploy/cpp_infer
 wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz
 tar -xf 3.4.7.tar.gz
 ```
diff --git a/deploy/hubserving/readme.md b/deploy/hubserving/readme.md
index 9351fa8d4fb8ee507d8e4f838397ecb615c20612..11b843fec1052c3ad401ca0b7d1cb602401af8f8 100755
--- a/deploy/hubserving/readme.md
+++ b/deploy/hubserving/readme.md
@@ -29,6 +29,7 @@ deploy/hubserving/ocr_system/
 
 ### 1. 准备环境
 ```shell
 # 安装paddlehub
+# paddlehub 需要 python>3.6.2
 pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
 ```
diff --git a/deploy/hubserving/readme_en.md b/deploy/hubserving/readme_en.md
index 98ffcad63c822b4b03e58ae088cafd584aa824ab..539ad722cae78b8315b87d35f9af6ab81140c5b3 100755
--- a/deploy/hubserving/readme_en.md
+++ b/deploy/hubserving/readme_en.md
@@ -30,6 +30,7 @@ The following steps take the 2-stage series service as an example. If only the d
 
 ### 1. Prepare the environment
 ```shell
 # Install paddlehub
+# python>3.6.2 is required by paddlehub
 pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
 ```
diff --git a/doc/doc_ch/detection.md b/doc/doc_ch/detection.md
index 08b309cbc0def059df1f12a180be39d94511a78c..6fc85992c04123a10ad937f2694b513b50a37876 100644
--- a/doc/doc_ch/detection.md
+++ b/doc/doc_ch/detection.md
@@ -18,9 +18,9 @@ PaddleOCR 也提供了数据格式转换脚本,可以将官网 label 转换支
 
 ```
 # 将官网下载的标签文件转换为 train_icdar2015_label.txt
-python gen_label.py --mode="det" --root_path="icdar_c4_train_imgs/" \
-    --input_path="ch4_training_localization_transcription_gt" \
-    --output_label="train_icdar2015_label.txt"
+python gen_label.py --mode="det" --root_path="/path/to/icdar_c4_train_imgs/" \
+    --input_path="/path/to/ch4_training_localization_transcription_gt" \
+    --output_label="/path/to/train_icdar2015_label.txt"
 ```
 
 解压数据集和下载标注文件后,PaddleOCR/train_data/ 有两个文件夹和两个文件,分别是:
diff --git a/doc/doc_ch/inference.md b/doc/doc_ch/inference.md
index d4c566b134ce4461cecdebbfccacebf3a6feb534..b9be1e4cb2d1b256a05b82ef5d6db49dfcb2f31f 100755
--- a/doc/doc_ch/inference.md
+++ b/doc/doc_ch/inference.md
@@ -221,7 +221,7 @@ python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Gl
 ```
 
-**SAST文本检测模型推理,需要设置参数`--det_algorithm="SAST"`,同时,还需要增加参数`--det_sast_polygon=True`,**可以执行如下命令:
+SAST文本检测模型推理,需要设置参数`--det_algorithm="SAST"`,同时,还需要增加参数`--det_sast_polygon=True`,可以执行如下命令:
 ```
 python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img623.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True
 ```
 
diff --git a/doc/doc_en/inference_en.md b/doc/doc_en/inference_en.md
index 16285c087c5463a9e4c209ab91a21adc5150cc9c..e30355fb8e29031bd4ce040a86ad0f57d18ce398 100755
--- a/doc/doc_en/inference_en.md
+++ b/doc/doc_en/inference_en.md
@@ -230,7 +230,7 @@ First, convert the model saved in the SAST text detection training process into
 python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Global.pretrained_model=./det_r50_vd_sast_totaltext_v2.0_train/best_accuracy Global.save_inference_dir=./inference/det_sast_tt
 ```
 
-**For SAST curved text detection model inference, you need to set the parameter `--det_algorithm="SAST"` and `--det_sast_polygon=True`**, run the following command:
+For SAST curved text detection model inference, you need to set the parameter `--det_algorithm="SAST"` and `--det_sast_polygon=True`, run the following command:
 ```
 python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img623.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True
 ```
diff --git a/ppocr/utils/gen_label.py b/ppocr/utils/gen_label.py
index 43afe9ddf182ad0da8df023ff29cd3759011d890..fb78bd38bcfc1a59cac48a28bbb655ecb83bcb3f 100644
--- a/ppocr/utils/gen_label.py
+++ b/ppocr/utils/gen_label.py
@@ -1,16 +1,16 @@
-#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
+# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
 #
-#Licensed under the Apache License, Version 2.0 (the "License");
-#you may not use this file except in compliance with the License.
-#You may obtain a copy of the License at
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
 #
 #    http://www.apache.org/licenses/LICENSE-2.0
 #
-#Unless required by applicable law or agreed to in writing, software
-#distributed under the License is distributed on an "AS IS" BASIS,
-#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#See the License for the specific language governing permissions and
-#limitations under the License.
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 import os
 import argparse
 import json
@@ -31,7 +31,9 @@ def gen_det_label(root_path, input_dir, out_label):
     for label_file in os.listdir(input_dir):
         img_path = root_path + label_file[3:-4] + ".jpg"
         label = []
-        with open(os.path.join(input_dir, label_file), 'r') as f:
+        with open(
+                os.path.join(input_dir, label_file), 'r',
+                encoding='utf-8-sig') as f:
             for line in f.readlines():
                 tmp = line.strip("\n\r").replace("\xef\xbb\xbf", "").split(',')
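The `encoding='utf-8-sig'` change in the last hunk is worth illustrating: ICDAR-style ground-truth files frequently begin with a UTF-8 byte-order mark (`b'\xef\xbb\xbf'`), which plain `'utf-8'` decodes into a stray `'\ufeff'` prefixed to the first coordinate field, while `'utf-8-sig'` strips it automatically (making the downstream `.replace("\xef\xbb\xbf", "")` a redundant safety net). A minimal standalone sketch, not part of the diff, using only the standard library and a made-up one-line label:

```python
import io

# Hypothetical ICDAR-style label line, prefixed with a UTF-8 BOM.
raw = b'\xef\xbb\xbf377,117,463,117,465,130,378,130,Genaxis Theatre\n'

# Plain 'utf-8' keeps the BOM as '\ufeff', corrupting the first field.
line = io.TextIOWrapper(io.BytesIO(raw), encoding='utf-8').readline()
assert line.split(',')[0] == '\ufeff377'

# 'utf-8-sig' strips the BOM, so the first coordinate parses cleanly.
line = io.TextIOWrapper(io.BytesIO(raw), encoding='utf-8-sig').readline()
assert line.split(',')[0] == '377'
```

Note that `'utf-8-sig'` also decodes BOM-less UTF-8 unchanged, so it is a safe default for files that may or may not carry the marker.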