diff --git a/README_cn.md b/README_cn.md
index 42b71b8e6f3badc6396f523bce9706cf8094fbb1..9c56254ed8a2a718b692d8c4e6febf6067fb637c 100644
--- a/README_cn.md
+++ b/README_cn.md
@@ -255,6 +255,7 @@
 
 - `PP-YOLO` reaches 45.9% mAP on the COCO dataset at 72.9 FPS on Tesla V100, outperforming [YOLOv4](https://arxiv.org/abs/2004.10934) in both accuracy and speed
 - `PP-YOLO v2` further optimizes `PP-YOLO`, reaching 49.5% mAP on the COCO dataset at 68.9 FPS on Tesla V100
 - `PP-YOLOE` further optimizes `PP-YOLO v2`, reaching 51.4% mAP on the COCO dataset at 78.1 FPS on Tesla V100
+- [`YOLOX`](configs/yolox) and [`YOLOv5`](https://github.com/nemonameless/PaddleDetection_YOLOv5/tree/main/configs/yolov5) are both algorithms reproduced on top of PaddleDetection
 - All models in the figure are available from the [Model Zoo](#模型库)
 
diff --git a/deploy/TENSOR_RT.md b/deploy/TENSOR_RT.md
index 5a5a0b3d6d223dd9ae5fa4ea030bef18ffd702cb..10116f28510a86c352a6790ea3e6d6a523cfddb3 100644
--- a/deploy/TENSOR_RT.md
+++ b/deploy/TENSOR_RT.md
@@ -1,5 +1,5 @@
 # TensorRT Deployment Tutorial
-TensorRT is NVIDIA's acceleration library for unified model deployment. It runs on hardware such as the V100 and Jetson Xavier and can greatly speed up inference. For the Paddle TensorRT tutorial, see [Inference with the Paddle-TensorRT library](https://paddle-inference.readthedocs.io/en/latest/optimize/paddle_trt.html#)
+TensorRT is NVIDIA's acceleration library for unified model deployment. It runs on hardware such as the V100 and Jetson Xavier and can greatly speed up inference. For the Paddle TensorRT tutorial, see [Inference with the Paddle-TensorRT library](https://www.paddlepaddle.org.cn/inference/optimize/paddle_trt.html)
 
 ## 1. Install the Paddle Inference library
 - Python package: download a TensorRT-enabled package from [here](https://paddleinference.paddlepaddle.org.cn/user_guides/download_lib.html#python) and install it
diff --git a/docs/images/fps_map.png b/docs/images/fps_map.png
index 44f3e846a877fd08fcd905027a1eced14ccb5539..5c22d725b01d374de8f394096f140b3d33cebfa7 100644
Binary files a/docs/images/fps_map.png and b/docs/images/fps_map.png differ
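The `TENSOR_RT.md` hunk points at the Paddle-TensorRT documentation; in PaddleDetection, a TensorRT-accelerated run is typically driven through an exported inference model and the Python deploy script. A minimal command sketch follows — the model directory and image path are illustrative assumptions, not taken from this diff, and the flags reflect the deploy script's usual `trt_fp32`/`trt_fp16`/`trt_int8` run modes:

```shell
# Sketch: run inference with a TensorRT FP16 engine via Paddle Inference.
# model_dir and image_file below are placeholders (assumptions); adjust to
# wherever the exported model and test image actually live.
python deploy/python/infer.py \
    --model_dir=output_inference/ppyoloe_crn_l_300e_coco \
    --image_file=demo/000000014439.jpg \
    --device=GPU \
    --run_mode=trt_fp16
```

On the first run, Paddle Inference builds the TensorRT engine, so the initial request is slow; subsequent runs reuse the engine and reach the speeds quoted in the README hunk only after this warm-up.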