diff --git a/deploy/cpp_infer/readme.md b/deploy/cpp_infer/readme.md
index b563ecf48c2aba03e25a03ae0328c244bb900356..f81d9c75e93808b81f0659ccf46b629091b2c9fb 100644
--- a/deploy/cpp_infer/readme.md
+++ b/deploy/cpp_infer/readme.md
@@ -1,6 +1,8 @@
 # Server-side C++ inference
 
-This tutorial details the steps to deploy PaddleOCR's ultra-lightweight Chinese detection and recognition models on the server side.
+This chapter introduces the C++ deployment method for PaddleOCR models; for the corresponding Python inference deployment, see this [document](../../doc/doc_ch/inference.md).
+C++ outperforms Python in compute performance, so C++ deployment is the common choice in most CPU and GPU scenarios. This section introduces how to configure the C++ environment and complete
+PaddleOCR model deployment under Linux and Windows (CPU/GPU).
 
 ## 1. Prepare the environment
diff --git a/deploy/cpp_infer/readme_en.md b/deploy/cpp_infer/readme_en.md
index 41c764bc18a69965da6ad2ea521f438840c286e6..8a0bd62ecc6bf617c6e2954d1080bf97a6582acd 100644
--- a/deploy/cpp_infer/readme_en.md
+++ b/deploy/cpp_infer/readme_en.md
@@ -1,7 +1,9 @@
 # Server-side C++ inference
-
-In this tutorial, we will introduce the detailed steps of deploying PaddleOCR ultra-lightweight Chinese detection and recognition models on the server side.
+This chapter introduces the C++ deployment method for PaddleOCR models; for the corresponding Python inference deployment, see this [document](../../doc/doc_ch/inference.md).
+C++ outperforms Python in compute performance, so C++ deployment is the common choice in most CPU and GPU deployment scenarios.
+This section introduces how to configure the C++ environment and complete
+PaddleOCR model deployment under Linux and Windows (CPU/GPU).
 
 ## 1. Prepare the environment
diff --git a/doc/doc_ch/inference.md b/doc/doc_ch/inference.md
index f0a8983c4ba7d2413ed8b6dbced79eecced85140..7968b355ea936d465b3c173c0fcdb3e08f12f16e 100755
--- a/doc/doc_ch/inference.md
+++ b/doc/doc_ch/inference.md
@@ -2,10 +2,11 @@
 # Inference Based on the Python Prediction Engine
 
 The inference model (a model saved with `paddle.jit.save`)
-is generally a frozen model saved after training completes, and is mostly used for inference deployment. The model saved during training is a checkpoints model, which stores the model parameters and is mostly used for purposes such as resuming training.
-Compared with a checkpoints model, an inference model additionally stores the structural information of the model, performs better in inference deployment and accelerated inference, is flexible and convenient, and is suitable to integrate with real systems.
+is generally a frozen model that stores both the model structure and the model parameters in files, and is mostly used in inference deployment scenarios.
+The model saved during training is a checkpoints model, which stores only the model parameters and is mostly used for purposes such as resuming training.
+Compared with a checkpoints model, an inference model additionally stores the structural information of the model, performs better in inference deployment and accelerated inference, is flexible and convenient, and is suitable for integration with real systems.
 
-Next, we first introduce how to convert a trained model into an inference model, and then introduce text detection, text angle classification, text recognition, and the three in series, based on the prediction engine.
+Next, we first introduce how to convert a trained model into an inference model, and then introduce text detection, text angle classification, text recognition, and the prediction methods for the three in series on CPU and GPU.
 
 - [1. Converting a trained model to an inference model](#训练模型转inference模型)
diff --git a/tools/infer/utility.py b/tools/infer/utility.py
index 70e855c7437675ba499129ee219b64d91548a443..18dedbda99c11d5b877388c1c516b2548b1a8335 100755
--- a/tools/infer/utility.py
+++ b/tools/infer/utility.py
@@ -48,7 +48,7 @@ def parse_args():
     parser.add_argument("--det_db_unclip_ratio", type=float, default=1.6)
     parser.add_argument("--max_batch_size", type=int, default=10)
     parser.add_argument("--use_dialtion", type=bool, default=False)
-
+    # EAST params
     parser.add_argument("--det_east_score_thresh", type=float, default=0.8)
     parser.add_argument("--det_east_cover_thresh", type=float, default=0.1)
diff --git a/tools/program.py b/tools/program.py
index 6277d7475868912ad86e1475c2ea6ac930cd6297..ae6491768ceff0379c917c45fd29b30514eba9c1 100755
--- a/tools/program.py
+++ b/tools/program.py
@@ -394,6 +394,7 @@ def preprocess(is_train=False):
     logger = get_logger(name='root', log_file=log_file)
     if config['Global']['use_visualdl']:
         from visualdl import LogWriter
+        save_model_dir = config['Global']['save_model_dir']
         vdl_writer_path = '{}/vdl/'.format(save_model_dir)
         os.makedirs(vdl_writer_path, exist_ok=True)
         vdl_writer = LogWriter(logdir=vdl_writer_path)
diff --git a/train.sh b/train.sh
index 8fe861a3d79d38929fc4a4f4464187f77d27ff2f..4225470cb9f545b874e5f806af22405895e8f6c7 100644
--- a/train.sh
+++ b/train.sh
@@ -1,2 +1,2 @@
 # recommended paddle.__version__ == 2.0.0
-python3 -m paddle.distributed.launch --gpus '0,1,2,3,4,5,6,7' tools/train.py -c configs/rec/rec_mv3_none_bilstm_ctc.yml
+python3 -m paddle.distributed.launch --log_dir=./debug/ --gpus '0,1,2,3,4,5,6,7' tools/train.py -c configs/rec/rec_mv3_none_bilstm_ctc.yml
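
A note on the doc/doc_ch/inference.md hunk: the inference/checkpoints distinction it describes hinges on `paddle.jit.save` persisting the network structure together with the weights. Below is a minimal sketch of such an export; the toy layer and output path are illustrative assumptions, not code from this repository (PaddleOCR wraps this step in its own export tooling):

```python
import paddle

# Toy layer standing in for a detection/recognition model (illustrative only).
class TinyNet(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.fc = paddle.nn.Linear(32, 10)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
model.eval()

# input_spec freezes the graph structure alongside the parameters; this is
# what an inference model stores and a checkpoints model does not.
paddle.jit.save(
    model,
    './inference/tiny',
    input_spec=[paddle.static.InputSpec(shape=[None, 32], dtype='float32')])
```

This produces `tiny.pdmodel` (structure) and `tiny.pdiparams` (weights) under `./inference/`, which the prediction engine can load directly.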
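On the tools/program.py hunk: the VisualDL branch referenced `save_model_dir`, which was evidently undefined at that point in `preprocess`, so enabling `use_visualdl` would fail with a NameError; the added line reads it from the config first. A standalone sketch of the fixed logic, with an illustrative config dict standing in for the YAML that `preprocess()` actually loads:

```python
import os

from visualdl import LogWriter

# Illustrative stand-in for the config dict that preprocess() loads from YAML.
config = {'Global': {'use_visualdl': True, 'save_model_dir': './output/rec/'}}

vdl_writer = None
if config['Global']['use_visualdl']:
    # The fix: read save_model_dir from the config before building the
    # VisualDL log path, instead of referencing an undefined local name.
    save_model_dir = config['Global']['save_model_dir']
    vdl_writer_path = '{}/vdl/'.format(save_model_dir)
    os.makedirs(vdl_writer_path, exist_ok=True)
    vdl_writer = LogWriter(logdir=vdl_writer_path)
```

The train.sh change is independent: adding `--log_dir=./debug/` makes `paddle.distributed.launch` write each trainer's stdout/stderr logs under `./debug/` rather than the default `./log`, which helps when debugging multi-GPU runs.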