From 2629454d0c87fc0d2295b11ffa529829e5fb4c5f Mon Sep 17 00:00:00 2001
From: Wenyu
Date: Thu, 12 May 2022 10:13:54 +0800
Subject: [PATCH] update e quant link (#5938)

---
 configs/ppyoloe/README.md    | 2 +-
 configs/ppyoloe/README_cn.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/configs/ppyoloe/README.md b/configs/ppyoloe/README.md
index 06f3cd6ac..595854a9e 100644
--- a/configs/ppyoloe/README.md
+++ b/configs/ppyoloe/README.md
@@ -136,7 +136,7 @@ PP-YOLOE can be deployed by following approches:
 - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
 - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
 - [PaddleServing](https://github.com/PaddlePaddle/Serving)
-- [PaddleSlim](../configs/slim)
+- [PaddleSlim](../slim)

 Next, we will introduce how to use Paddle Inference to deploy PP-YOLOE models in TensorRT FP16 mode.

diff --git a/configs/ppyoloe/README_cn.md b/configs/ppyoloe/README_cn.md
index 52fe9706b..52d70a9f7 100644
--- a/configs/ppyoloe/README_cn.md
+++ b/configs/ppyoloe/README_cn.md
@@ -138,7 +138,7 @@ PP-YOLOE可以使用以下方式进行部署:
 - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
 - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
 - [PaddleServing](https://github.com/PaddlePaddle/Serving)
-- [PaddleSlim 模型量化](../configs/slim)
+- [PaddleSlim模型量化](../slim)

 接下来,我们将介绍PP-YOLOE如何使用Paddle Inference在TensorRT FP16模式下部署
--
GitLab
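For context beyond the patch itself: both READMEs live in `configs/ppyoloe/`, so each relative Markdown link is resolved against that directory. A minimal sketch (illustrative only, not part of the patch) using Python's standard `posixpath` module shows why the old target `../configs/slim` pointed at a nonexistent `configs/configs/slim`, while the corrected `../slim` resolves to the actual PaddleSlim config directory `configs/slim`:

```python
import posixpath

# Directory containing README.md and README_cn.md inside the repo
readme_dir = "configs/ppyoloe"

# Old link: "../" climbs to configs/, then "configs/slim" is appended again,
# yielding configs/configs/slim, which does not exist in the repo
old = posixpath.normpath(posixpath.join(readme_dir, "../configs/slim"))

# New link: "../" climbs to configs/, then "slim" lands on the real directory
new = posixpath.normpath(posixpath.join(readme_dir, "../slim"))

print(old)  # configs/configs/slim
print(new)  # configs/slim
```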