{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. 模型简介\n", "PP-YOLOE+是PP-YOLOE的升级版本,从大规模的obj365目标检测预训练模型入手,在大幅提升收敛速度的同时,提升了模型在COCO数据集上的速度。同时,PP-YOLOE+大幅提升了包括数据预处理在内的端到端的预测速度。关于PP-YOLOE+的更多细节可以参考我们的[官方文档](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/ppyoloe/README_cn.md)。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. 模型效果\n", "PP-YOLOE+_l在COCO test-dev2017达到了53.3的mAP, 同时其速度在Tesla V100上达到了78.1 FPS。如下图所示,PP-YOLOE+_s/m/x同样具有卓越的精度速度性价比。\n", "
\n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. 模型如何使用\n", "首先克隆PaddleDetection,并将数据集存放在`dataset/coco/`目录下面" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "%cd ~/work\n", "!git clone https://gitee.com/paddlepaddle/PaddleDetection\n", "%cd PaddleDetection\n", "!pip install -r requirements.txt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.1 训练\n", "执行以下指令训练PP-YOLOE+" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "# 单卡训练\n", "!python tools/train.py -c configs/ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml --eval --amp\n", "\n", "# 多卡训练\n", "!python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml --eval --amp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**注意:**\n", "- 如果需要边训练边评估,请添加`--eval`.\n", "- PP-YOLOE支持混合精度训练,请添加`--amp`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.2 推理部署\n", "#### 3.2.1 使用Paddle-TRT进行推理部署\n", "接下来,我们将介绍PP-YOLOE如何使用Paddle Inference在TensorRT FP16模式下部署\n", "\n", "首先,参考[Paddle Inference文档](https://www.paddlepaddle.org.cn/inference/master/user_guides/download_lib.html#python),下载并安装与你的CUDA, CUDNN和TensorRT相应的wheel包。\n", "\n", "然后,运行以下命令导出模型" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "!python tools/export_model.py -c configs/ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_l_80e_coco.pdparams trt=True" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "最后,使用TensorRT FP16进行推理" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "# 推理单张图片\n", "!CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_plus_crn_l_80e_coco --image_file=demo/000000014439_640x640.jpg --device=gpu --run_mode=trt_fp16\n", "\n", "# 推理文件夹下的所有图片\n", "!CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_plus_crn_l_80e_coco --image_dir=demo/ --device=gpu --run_mode=trt_fp16" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**注意:**\n", "- TensorRT会根据网络的定义,执行针对当前硬件平台的优化,生成推理引擎并序列化为文件。该推理引擎只适用于当前软硬件平台。如果你的软硬件平台没有发生变化,你可以设置[enable_tensorrt_engine](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/python/infer.py#L745)的参数`use_static=True`,这样生成的序列化文件将会保存在`output_inference`文件夹下,下次执行TensorRT时将加载保存的序列化文件。\n", "- PaddleDetection release/2.4及其之后的版本将支持NMS调用TensorRT,需要依赖PaddlePaddle release/2.3及其之后的版本。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.2.2 使用PaddleInference进行推理部署\n", "对于一些不支持TensorRT的AI加速硬件,我们可以直接使用PaddleInference进行部署。首先运行以下命令导出模型" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "!python tools/export_model.py -c configs/ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_l_80e_coco.pdparams" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "直接使用PaddleInference进行推理" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "# 推理单张图片\n", "!CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_plus_crn_l_80e_coco --image_file=demo/000000014439_640x640.jpg --device=gpu --run_mode=paddle\n", "\n", "# 推理文件夹下的所有图片\n", "!CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_plus_crn_l_80e_coco --image_dir=demo/ --device=gpu --run_mode=paddle" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.2.3 使用ONNX-TensorRT进行速度测试\n", "**使用 ONNX 和 TensorRT** 进行测速,执行以下命令:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "# 导出模型\n", "!python tools/export_model.py -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco.pdparams exclude_nms=True trt=True\n", "\n", "# 转化成ONNX格式\n", "!paddle2onnx --model_dir output_inference/ppyoloe_plus_crn_s_80e_coco --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 12 --save_file ppyoloe_plus_crn_s_80e_coco.onnx\n", "\n", "# 测试速度,半精度,batch_size=1\n", "!trtexec --onnx=./ppyoloe_plus_crn_s_80e_coco.onnx --saveEngine=./ppyoloe_s_bs1.engine --workspace=1024 --avgRuns=1000 --shapes=image:1x3x640x640,scale_factor:1x2 --fp16\n", "\n", "# 测试速度,半精度,batch_size=32\n", "!trtexec --onnx=./ppyoloe_plus_crn_s_80e_coco.onnx --saveEngine=./ppyoloe_s_bs32.engine --workspace=1024 --avgRuns=1000 --shapes=image:32x3x640x640,scale_factor:32x2 --fp16\n", "\n", "# 使用上边的脚本, 在T4 和 TensorRT 7.2的环境下,PPYOLOE-plus-s模型速度如下\n", "# batch_size=1, 2.80ms, 357fps\n", "# batch_size=32, 67.69ms, 472fps" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. 模型原理\n", "PP-YOLOE+和PP-YOLOE的整体结构基本一致,其相对于PP-YOLOE的改进点如下:\n", "- 使用大规模数据集obj365预训练模型\n", "- 在backbone中block分支中增加alpha参数\n", "- 优化端到端推理速度,提升训练收敛速度" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.注意事项\n", "**所有的命令默认运行在AI Studio的`jupyter`上, 如果运行在终端上,去掉命令开头的符号%或!**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. 相关论文及引用信息\n", "```\n", "@article{xu2022pp,\n", " title={PP-YOLOE: An evolved version of YOLO},\n", " author={Xu, Shangliang and Wang, Xinxin and Lv, Wenyu and Chang, Qinyao and Cui, Cheng and Deng, Kaipeng and Wang, Guanzhong and Dang, Qingqing and Wei, Shengyu and Du, Yuning and others},\n", " journal={arXiv preprint arXiv:2203.16250},\n", " year={2022}\n", "}\n", "```" ] } ], "metadata": { "language_info": { "name": "python" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 2 }