From f4b8dded1ef7555325997cedd056fdb8f728ac9d Mon Sep 17 00:00:00 2001
From: qingqing01
Date: Tue, 29 Mar 2022 14:53:39 +0800
Subject: [PATCH] Update docs for PP-YOLOE (#5494)

---
 configs/ppyoloe/README.md    |  8 ++++----
 configs/ppyoloe/README_cn.md | 12 ++++++------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/configs/ppyoloe/README.md b/configs/ppyoloe/README.md
index c7d4df813..365c18e76 100644
--- a/configs/ppyoloe/README.md
+++ b/configs/ppyoloe/README.md
@@ -75,7 +75,7 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_l_30
 
 ### 4. Deployment
 
-- PaddleInference [Python](../../deploy/python) & [C++](../../deploy/cpp)
+- Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
 - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
 - [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
 - [PaddleServing](https://github.com/PaddlePaddle/Serving)
@@ -86,16 +86,16 @@ For deployment on GPU or benchmarked, model should be first exported to inferenc
 Exporting PP-YOLOE for Paddle Inference **without TensorRT**, use following command.
 
 ```bash
-python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
+python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
 ```
 
 Exporting PP-YOLOE for Paddle Inference **with TensorRT** for better performance, use following command with extra `-o trt=True` setting.
 
 ```bash
-python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True
+python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True
 ```
 
-`deploy/python/infer.py` is used to load exported paddle inference model above for inference and benchmark through PaddleInference.
+`deploy/python/infer.py` is used to load exported paddle inference model above for inference and benchmark through Paddle Inference.
 
 ```bash
 # inference single image
diff --git a/configs/ppyoloe/README_cn.md b/configs/ppyoloe/README_cn.md
index 32c259667..51159b483 100644
--- a/configs/ppyoloe/README_cn.md
+++ b/configs/ppyoloe/README_cn.md
@@ -76,7 +76,7 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_l_30
 
 ### 4. 部署
 
-- PaddleInference [Python](../../deploy/python) & [C++](../../deploy/cpp)
+- Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
 - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
 - [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
 - [PaddleServing](https://github.com/PaddlePaddle/Serving)
@@ -84,19 +84,19 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_l_30
 
 PP-YOLOE在GPU上部署或者推理benchmark需要通过`tools/export_model.py`导出模型。
 
-当你使用PaddleInferenced但不使用TensorRT时,运行以下的命令进行导出
+当你使用Paddle Inference但不使用TensorRT时,运行以下的命令进行导出
 
 ```bash
-python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
+python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
 ```
 
-当你使用PaddleInference的TensorRT时,需要指定`-o trt=True`进行导出
+当你使用Paddle Inference的TensorRT时,需要指定`-o trt=True`进行导出
 
 ```bash
-python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True
+python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True
 ```
 
-`deploy/python/infer.py`使用上述导出后的PaddleInference模型用于推理和benchnark.
+`deploy/python/infer.py`使用上述导出后的Paddle Inference模型用于推理和benchmark.
 
 ```bash
 # 推理单张图片
--
GitLab
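The substantive fix in every hunk above is the same: the config path was passed positionally and must instead follow the `-c` flag, with run-time overrides (weights, and optionally `trt=True`) after `-o`. A minimal sketch of the corrected invocations, with the config path and weights URL taken verbatim from the patch; it only prints the commands, since actually running them requires a PaddleDetection checkout and its dependencies:

```shell
# Config and pretrained weights exactly as they appear in the patch.
CFG="configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml"
WEIGHTS="https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams"

# Export for Paddle Inference without TensorRT: config after -c, overrides after -o.
echo "python tools/export_model.py -c ${CFG} -o weights=${WEIGHTS}"

# Export with TensorRT: same command, with trt=True appended to the -o overrides.
echo "python tools/export_model.py -c ${CFG} -o weights=${WEIGHTS} trt=True"
```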