From 81f84e237b71fb152d749a06860d34d34938dde1 Mon Sep 17 00:00:00 2001
From: Wenyu
Date: Tue, 10 May 2022 17:23:06 +0800
Subject: [PATCH] add slim link (#5908)

---
 configs/ppyoloe/README.md    | 1 +
 configs/ppyoloe/README_cn.md | 1 +
 2 files changed, 2 insertions(+)

diff --git a/configs/ppyoloe/README.md b/configs/ppyoloe/README.md
index f6a78004d..06f3cd6ac 100644
--- a/configs/ppyoloe/README.md
+++ b/configs/ppyoloe/README.md
@@ -136,6 +136,7 @@ PP-YOLOE can be deployed by following approches:
 - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
 - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
 - [PaddleServing](https://github.com/PaddlePaddle/Serving)
+- [PaddleSlim](../configs/slim)
 
 Next, we will introduce how to use Paddle Inference to deploy PP-YOLOE models in TensorRT FP16 mode.

diff --git a/configs/ppyoloe/README_cn.md b/configs/ppyoloe/README_cn.md
index 5ce397ce5..52fe9706b 100644
--- a/configs/ppyoloe/README_cn.md
+++ b/configs/ppyoloe/README_cn.md
@@ -138,6 +138,7 @@ PP-YOLOE可以使用以下方式进行部署:
 - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
 - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
 - [PaddleServing](https://github.com/PaddlePaddle/Serving)
+- [PaddleSlim 模型量化](../configs/slim)
 
 接下来,我们将介绍PP-YOLOE如何使用Paddle Inference在TensorRT FP16模式下部署
--
GitLab