diff --git a/configs/ppyoloe/README.md b/configs/ppyoloe/README.md
index c7d4df8131d905779cc1b85c4e759a0321f9c61a..365c18e765ea0e2e31045d44adca0db9a4f914c8 100644
--- a/configs/ppyoloe/README.md
+++ b/configs/ppyoloe/README.md
@@ -75,7 +75,7 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_l_30
 ### 4. Deployment
 
-- PaddleInference [Python](../../deploy/python) & [C++](../../deploy/cpp)
+- Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
 - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
 - [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
 - [PaddleServing](https://github.com/PaddlePaddle/Serving)
 
@@ -86,16 +86,16 @@ For deployment on GPU or benchmarked, model should be first exported to inferenc
 Exporting PP-YOLOE for Paddle Inference **without TensorRT**, use following command.
 
 ```bash
-python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
+python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
 ```
 
 Exporting PP-YOLOE for Paddle Inference **with TensorRT** for better performance, use following command with extra `-o trt=True` setting.
 
 ```bash
-python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True
+python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True
 ```
 
-`deploy/python/infer.py` is used to load exported paddle inference model above for inference and benchmark through PaddleInference.
+`deploy/python/infer.py` is used to load the exported Paddle Inference model above for inference and benchmark through Paddle Inference.
 
 ```bash
 # inference single image
diff --git a/configs/ppyoloe/README_cn.md b/configs/ppyoloe/README_cn.md
index 32c259667ada4ffb9e3b46e94a6a620012d1cada..51159b4832ac6ad1bfa3e24e2e3a91b6977fc470 100644
--- a/configs/ppyoloe/README_cn.md
+++ b/configs/ppyoloe/README_cn.md
@@ -76,7 +76,7 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_l_30
 ### 4. 部署
 
-- PaddleInference [Python](../../deploy/python) & [C++](../../deploy/cpp)
+- Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
 - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
 - [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
 - [PaddleServing](https://github.com/PaddlePaddle/Serving)
 
@@ -84,19 +84,19 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_l_30
 PP-YOLOE在GPU上部署或者推理benchmark需要通过`tools/export_model.py`导出模型。
 
-当你使用PaddleInferenced但不使用TensorRT时,运行以下的命令进行导出
+当你使用Paddle Inference但不使用TensorRT时,运行以下的命令进行导出
 
 ```bash
-python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
+python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
 ```
 
-当你使用PaddleInference的TensorRT时,需要指定`-o trt=True`进行导出
+当你使用Paddle Inference的TensorRT时,需要指定`-o trt=True`进行导出
 
 ```bash
-python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True
+python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True
 ```
 
-`deploy/python/infer.py`使用上述导出后的PaddleInference模型用于推理和benchnark.
+`deploy/python/infer.py`使用上述导出后的Paddle Inference模型用于推理和benchmark.
 
 ```bash
 # 推理单张图片
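The substantive fix in both hunks above is adding the `-c` flag before the config path: PaddleDetection's tools take the config file as an option, not a positional argument, so the old commands would fail to parse. A minimal argparse sketch of that interface (a hypothetical simplification for illustration, not the actual `tools/export_model.py`):

```python
import argparse

# Simplified stand-in for the CLI of tools/export_model.py (assumption:
# the real script accepts -c/--config for the config path and -o/--opt
# for key=value overrides such as weights=... and trt=True).
parser = argparse.ArgumentParser()
parser.add_argument("-c", "--config", required=True)
parser.add_argument("-o", "--opt", nargs="*", default=[])

# The corrected command line from the diff parses cleanly:
args = parser.parse_args(
    ["-c", "configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml", "-o", "trt=True"]
)
print(args.config)  # configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml
print(args.opt)     # ['trt=True']

# The pre-fix form, with the config path passed positionally and no -c,
# is rejected because --config is required:
try:
    parser.parse_args(["configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml"])
except SystemExit:
    print("positional config path rejected")
```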