diff --git a/configs/ppyoloe/README.md b/configs/ppyoloe/README.md
index 06f3cd6ac710cf43ba3de760eebb9b836c0d88ff..595854a9e915904585c2278ef18ccafeac60f36e 100644
--- a/configs/ppyoloe/README.md
+++ b/configs/ppyoloe/README.md
@@ -136,7 +136,7 @@ PP-YOLOE can be deployed by following approches:
   - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
   - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
   - [PaddleServing](https://github.com/PaddlePaddle/Serving)
-  - [PaddleSlim](../configs/slim)
+  - [PaddleSlim](../slim)
 
 Next, we will introduce how to use Paddle Inference to deploy PP-YOLOE models in TensorRT FP16 mode.
 
diff --git a/configs/ppyoloe/README_cn.md b/configs/ppyoloe/README_cn.md
index 52fe9706b46a4d5567538ccb7e2c13bd450561cf..52d70a9f7c659e200bfea10ecccedf2b745c8b1d 100644
--- a/configs/ppyoloe/README_cn.md
+++ b/configs/ppyoloe/README_cn.md
@@ -138,7 +138,7 @@ PP-YOLOE可以使用以下方式进行部署:
   - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
   - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
   - [PaddleServing](https://github.com/PaddlePaddle/Serving)
-  - [PaddleSlim 模型量化](../configs/slim)
+  - [PaddleSlim模型量化](../slim)
 
 接下来,我们将介绍PP-YOLOE如何使用Paddle Inference在TensorRT FP16模式下部署
 