Unverified · commit 2629454d · authored by W Wenyu · committed by GitHub

update e quant link (#5938)

Parent a8530168
@@ -136,7 +136,7 @@ PP-YOLOE can be deployed by the following approaches:
 - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
 - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
 - [PaddleServing](https://github.com/PaddlePaddle/Serving)
-- [PaddleSlim](../configs/slim)
+- [PaddleSlim](../slim)
 Next, we will introduce how to use Paddle Inference to deploy PP-YOLOE models in TensorRT FP16 mode.
...
@@ -138,7 +138,7 @@ PP-YOLOE can be deployed in the following ways:
 - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp)
 - [Paddle-TensorRT](../../deploy/TENSOR_RT.md)
 - [PaddleServing](https://github.com/PaddlePaddle/Serving)
-- [PaddleSlim model quantization](../configs/slim)
+- [PaddleSlim model quantization](../slim)
 Next, we will introduce how to deploy PP-YOLOE with Paddle Inference in TensorRT FP16 mode.
...
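The docs touched by this diff lead into Paddle Inference deployment in TensorRT FP16 mode. As a hedged sketch of that workflow (the config name, weights URL, and demo image below are illustrative assumptions; the `export_model.py` / `deploy/python/infer.py` entry points and the `--run_mode=trt_fp16` flag follow PaddleDetection's documented conventions, but verify them against your checkout):

```shell
# Step 1: export a trained PP-YOLOE model to an inference model
# (config path and weights are assumptions; substitute your own)
python tools/export_model.py \
    -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams \
    --output_dir=output_inference

# Step 2: run Paddle Inference with TensorRT in FP16 mode
# (requires a GPU build of Paddle with TensorRT enabled)
python deploy/python/infer.py \
    --model_dir=output_inference/ppyoloe_crn_l_300e_coco \
    --image_file=demo/000000014439.jpg \
    --device=GPU \
    --run_mode=trt_fp16
```

Other `--run_mode` values (e.g. `paddle`, `trt_fp32`) select plain Paddle Inference or TensorRT FP32; FP16 trades a small amount of precision for higher throughput on GPUs with Tensor Cores.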