Unverified Commit 23e9f249 authored by: F Feng Ni committed by: GitHub

[cherry-pick][doc]Update changelog and doc (#5817)

* update doc, test=document_fix

* update doc, test=document_fix

* update changelog, test=document_fix

* update readme model, test=document_fix
Parent 2702ef0f
...@@ -31,6 +31,8 @@
- Analysis of real-world deployment challenges and their solutions
- Hands-on pedestrian analysis, with cloud training and deployment via Docker
🔥 **[Course replay link](https://aistudio.baidu.com/aistudio/education/group/info/23670)**🔥
Scan the QR code to sign up now!!
<div align="left">
...@@ -42,9 +44,10 @@
- 🔥 **2022.3.24: PaddleDetection released [release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)**
- Released the high-accuracy cloud-and-edge SOTA object detection model [PP-YOLOE](configs/ppyoloe) in s/m/l/x versions; the l version reaches 51.4% mAP on COCO test2017 at 78.1 FPS on V100. It supports mixed-precision training (33% faster than PP-YOLOv2) and provides a full series of multi-scale models for different compute budgets, covering servers, edge GPUs, and other server-side AI accelerator cards.
- Released the ultra-lightweight SOTA object detection model for edge and CPU, [PP-PicoDet enhanced edition](configs/picodet): accuracy up about 2%, CPU inference 63% faster, a new 0.7M-parameter PicoDet-XS model, plus model sparsification and quantization for easier acceleration. No hardware-specific post-processing module is needed, lowering the deployment barrier.
- Released the real-time pedestrian analysis tool [PP-Human](deploy/pphuman), with four capabilities: pedestrian tracking, people counting, human attribute recognition, and fall detection. Specially optimized on real-world data, it accurately recognizes various fall poses and adapts to different backgrounds, lighting, and camera angles.
- Added the [YOLOX](configs/yolox) object detection model, supporting nano/tiny/s/m/l/x versions; the x version reaches 51.8% mAP on COCO val2017.
- 2021.11.03: PaddleDetection released [release/2.3](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3)
...@@ -135,6 +138,8 @@
<li>YOLOv4</li>
<li>PP-YOLOv1/v2</li>
<li>PP-YOLO-Tiny</li>
<li>PP-YOLOE</li>
<li>YOLOX</li>
<li>SSD</li>
<li>CornerNet-Squeeze</li>
<li>FCOS</li>
...@@ -160,7 +165,7 @@
<ul>
<li>JDE</li>
<li>FairMOT</li>
<li>DeepSORT</li>
</ul>
<li><b>KeyPoint-Detection</b></li>
<ul>
...@@ -240,6 +245,7 @@
<li>Color Distort</li>
<li>Random Erasing</li>
<li>Mixup</li>
<li>AugmentHSV</li>
<li>Mosaic</li>
<li>Cutmix</li>
<li>Grid Mask</li>
...@@ -344,6 +350,7 @@
- [DeepSORT](configs/mot/deepsort/README_cn.md)
- [JDE](configs/mot/jde/README_cn.md)
- [FairMOT](configs/mot/fairmot/README_cn.md)
- [ByteTrack](configs/mot/bytetrack/README.md)
- Vertical domains
- [Pedestrian detection](configs/pedestrian/README.md)
- [Vehicle detection](configs/vehicle/README.md)
......
...@@ -18,9 +18,10 @@ English | [简体中文](README_cn.md)
- 🔥 **2022.3.24: PaddleDetection [release 2.4 version](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)**
- Release the GPU SOTA object detection series [PP-YOLOE](configs/ppyoloe) in s/m/l/x versions; PP-YOLOE-l achieves 51.4% mAP on the COCO test dataset and 78.1 FPS on Nvidia V100. AMP training is supported, and training is 33% faster than PP-YOLOv2.
- Release enhanced models of [PP-PicoDet](configs/picodet), including the PP-PicoDet-XS model with 0.7M parameters; mAP improved ~2% on COCO, inference accelerated 63% on CPU, and post-processing is integrated into the network to simplify the deployment pipeline.
- Release the real-time human analysis tool [PP-Human](deploy/pphuman), which is based on data from real-life situations, supporting pedestrian detection, attribute recognition, human tracking, multi-camera tracking, human statistics, and action recognition.
- Release [YOLOX](configs/yolox), supporting nano/tiny/s/m/l/x versions; YOLOX-x achieves 51.8% mAP on the COCO val dataset.
- 2021.11.03: Release [release/2.3](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3) version. Release mobile object detection model ⚡[PP-PicoDet](configs/picodet), mobile keypoint detection model ⚡[PP-TinyPose](configs/keypoint/tiny_pose), real-time tracking system [PP-Tracking](deploy/pptracking). Release object detection models including [Swin-Transformer](configs/faster_rcnn), [TOOD](configs/tood), and [GFL](configs/gfl); release [Sniper](configs/sniper) tiny object detection models and the optimized [PP-YOLO-EB](configs/ppyolo) model for EdgeBoard. Release mobile keypoint detection model [Lite HRNet](configs/keypoint).
...@@ -107,6 +108,8 @@ PaddleDetection is an end-to-end object detection development kit based on Paddl
<li>YOLOv4</li>
<li>PP-YOLOv1/v2</li>
<li>PP-YOLO-Tiny</li>
<li>PP-YOLOE</li>
<li>YOLOX</li>
<li>SSD</li>
<li>CornerNet-Squeeze</li>
<li>FCOS</li>
...@@ -132,7 +135,7 @@ PaddleDetection is an end-to-end object detection development kit based on Paddl
<ul>
<li>JDE</li>
<li>FairMOT</li>
<li>DeepSORT</li>
</ul>
<li><b>KeyPoint-Detection</b></li>
<ul>
...@@ -213,6 +216,7 @@ PaddleDetection is an end-to-end object detection development kit based on Paddl
<li>Random Erasing</li>
<li>Mixup</li>
<li>Mosaic</li>
<li>AugmentHSV</li>
<li>Cutmix</li>
<li>Grid Mask</li>
<li>Auto Augment</li>
...@@ -319,6 +323,7 @@ The relationship between COCO mAP and FPS on Qualcomm Snapdragon 865 of represen
- [DeepSORT](configs/mot/deepsort/README.md)
- [JDE](configs/mot/jde/README.md)
- [FairMOT](configs/mot/fairmot/README.md)
- [ByteTrack](configs/mot/bytetrack/README.md)
- Vertical field
- [Face detection](configs/face_detection/README_en.md)
- [Pedestrian detection](configs/pedestrian/README.md)
......
...@@ -37,7 +37,7 @@ PP-YOLOE consists of the following methods
- PP-YOLOE is trained with mixed precision on 8 GPUs. If the **number of GPUs** or the **batch size** changes, adjust the learning rate according to **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)**.
- PP-YOLOE inference speed is measured on a single V100 with batch size=1, using **CUDA 10.2** and **CUDNN 7.6.5**; TensorRT speed tests use **TensorRT 6.0.1.8**.
- Refer to [Speed testing](#速度测试) to reproduce the PP-YOLOE inference speed results.
- If you set `--run_benchmark=True`, first install the dependencies: `pip install pynvml psutil GPUtil`.
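The learning-rate rule above is plain linear scaling with the global batch size; a minimal Python sketch (the numbers in the example are illustrative assumptions, not values from any shipped config):

```python
def scaled_lr(lr_default: float,
              batch_size_default: int, gpus_default: int,
              batch_size_new: int, gpus_new: int) -> float:
    """Linear scaling: the learning rate follows the total (global) batch size."""
    total_default = batch_size_default * gpus_default
    total_new = batch_size_new * gpus_new
    return lr_default * total_new / total_default

# Example: keeping per-card batch size 8 but dropping from 8 GPUs to 4
# halves the global batch size, so the learning rate is halved as well.
print(scaled_lr(0.01, 8, 8, 8, 4))
```

If neither the GPU count nor the per-card batch size changes, the formula leaves the default learning rate untouched.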
## Tutorials
......
...@@ -10,10 +10,11 @@
| YOLOX-s | 640 | 8 | 300e | ---- | 40.4 | [download](https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams) | [config](./yolox_s_300e_coco.yml) |
| YOLOX-m | 640 | 8 | 300e | ---- | 46.9 | [download](https://paddledet.bj.bcebos.com/models/yolox_m_300e_coco.pdparams) | [config](./yolox_m_300e_coco.yml) |
| YOLOX-l | 640 | 8 | 300e | ---- | 50.1 | [download](https://paddledet.bj.bcebos.com/models/yolox_l_300e_coco.pdparams) | [config](./yolox_l_300e_coco.yml) |
| YOLOX-x | 640 | 8 | 300e | ---- | 51.8 | [download](https://paddledet.bj.bcebos.com/models/yolox_x_300e_coco.pdparams) | [config](./yolox_x_300e_coco.yml) |
**Note:**
- YOLOX models are trained on COCO train2017; Box AP is the `mAP(IoU=0.5:0.95)` result on COCO val2017.
- YOLOX is trained with mixed precision on 8 GPUs by default, with a per-card batch size of 8. If the **number of GPUs** or the **batch size** changes, adjust the learning rate according to **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)**.
- To speed up inference while keeping a high mAP, set `nms_top_k` to `1000` and `keep_top_k` to `100` in [yolox_cspdarknet.yml](_base_/yolox_cspdarknet.yml); mAP drops by only about 0.1~0.2%.
- For a quick demo, set `score_threshold` to `0.25` and `nms_threshold` to `0.45` in [yolox_cspdarknet.yml](_base_/yolox_cspdarknet.yml), but mAP drops considerably.
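The `score_threshold`/`nms_threshold` knobs above plug into standard greedy NMS: the score threshold discards low-confidence boxes up front, and the IoU threshold decides how aggressively overlapping boxes are suppressed. A generic, self-contained sketch of that interaction (an illustration, not PaddleDetection's implementation):

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, score_threshold, nms_threshold):
    """Drop low-score boxes, then greedily keep boxes whose IoU with every
    already-kept box stays below the NMS (IoU) threshold."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_threshold),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < nms_threshold for j in keep):
            keep.append(i)
    return keep
```

Lowering `score_threshold` admits more candidate boxes (good for a demo's recall), while a lower `nms_threshold` removes more near-duplicates.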
...@@ -45,15 +46,19 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/yolox/yolox_s_300e_coco.
```
### 4. Deployment
#### 4.1. Export the model
To deploy YOLOX for GPU inference or benchmark speed tests, export the model with `tools/export_model.py`.
Run the following command to export:
```bash
python tools/export_model.py -c configs/yolox/yolox_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams
```
#### 4.2. Python deployment
`deploy/python/infer.py` runs inference and benchmark speed tests with the exported Paddle Inference model. If you set `--run_benchmark=True`, first install the dependencies: `pip install pynvml psutil GPUtil`.
```bash
# Python deployment: infer a single image
python deploy/python/infer.py --model_dir=output_inference/yolox_s_300e_coco --image_file=demo/000000014439_640x640.jpg --device=gpu
# infer all images in a directory
...@@ -69,6 +74,12 @@ python deploy/python/infer.py --model_dir=output_inference/yolox_s_300e_coco --i
python deploy/python/infer.py --model_dir=output_inference/yolox_s_300e_coco --image_file=demo/000000014439_640x640.jpg --device=gpu --run_benchmark=True --trt_max_shape=640 --trt_min_shape=640 --trt_opt_shape=640 --run_mode=trt_fp16
```
#### 4.3. C++ deployment
`deploy/cpp/build/main` runs C++ inference with the exported Paddle Inference model. First build the environment following the [docs](../../deploy/cpp/docs).
```bash
# C++ deployment: infer a single image
./deploy/cpp/build/main --model_dir=output_inference/yolox_s_300e_coco/ --image_file=demo/000000014439_640x640.jpg --run_mode=paddle --device=GPU --threshold=0.5 --output_dir=cpp_infer_output/yolox_s_300e_coco
```
## Citations
```
......
...@@ -7,7 +7,7 @@
### 2.4(03.24/2022)
- PP-YOLOE:
- Released the flagship PP-YOLOE model; the l version reaches 51.4% mAP on COCO test2017 at 78.1 FPS on V100, a server-side SOTA accuracy/speed trade-off
- Released the s/m/l/x model series with TensorRT and ONNX deployment support
- Supported mixed-precision training, 33% faster than PP-YOLOv2
...@@ -22,6 +22,9 @@
- ReID supports the Centroid model
- Action recognition supports ST-GCN fall detection
- Model richness:
- Released YOLOX, supporting nano/tiny/s/m/l/x versions; the x version reaches 51.8% mAP on COCO val2017
- Framework improvements:
- EMA training speed improved by 20%; improved how EMA-trained models are saved
- Supported saving infer prediction results in COCO format
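The EMA training mentioned above keeps an exponential moving average of the model weights and saves/evaluates that smoothed copy. A generic sketch of the idea (the decay value is an assumed example, not PaddleDetection's default):

```python
class WeightEMA:
    """Exponential moving average over model weights (generic sketch)."""

    def __init__(self, weights: dict, decay: float = 0.9998):
        self.decay = decay
        # Shadow copy that gets smoothed; this is what would be saved/evaluated.
        self.shadow = dict(weights)

    def update(self, weights: dict) -> None:
        # shadow <- decay * shadow + (1 - decay) * current, per weight tensor
        d = self.decay
        for name, value in weights.items():
            self.shadow[name] = d * self.shadow[name] + (1.0 - d) * value
```

In practice the shadow weights are swapped in only for evaluation and checkpointing, while the optimizer keeps updating the raw weights.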
......
...@@ -7,7 +7,7 @@ English | [简体中文](./CHANGELOG.md)
### 2.4(03.24/2022)
- PP-YOLOE:
- Release PP-YOLOE object detection models; PP-YOLOE-l achieves 51.4% mAP on the COCO test dataset and 78.1 FPS on Nvidia V100, reaching SOTA performance for object detection on GPU
- Release series models: s/m/l/x, and support deployment based on TensorRT & ONNX
- Support AMP training; training speed is 33% faster than PP-YOLOv2
...@@ -22,6 +22,9 @@ English | [简体中文](./CHANGELOG.md)
- Release Centroid model for ReID
- Release ST-GCN model for falldown action recognition
- Model richness:
- Publish the YOLOX object detection model, releasing series models nano/tiny/s/m/l/x; YOLOX-x achieves 51.8% mAP on the COCO val2017 dataset
- Function optimization:
- Optimize training speed by 20% when training with EMA; improve the saving method of EMA weights
- Support saving inference results in COCO format
......
...@@ -87,6 +87,14 @@ Paddle provides ImageNet-pretrained backbone models. All pretrained models
Please refer to [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet)
### PP-YOLOE
Please refer to [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe)
### YOLOX
Please refer to [YOLOX](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolox)
## Rotated box detection
...@@ -94,6 +102,7 @@ Paddle provides ImageNet-pretrained backbone models. All pretrained models
Please refer to [S2ANet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/dota/)
## Keypoint detection
### PP-TinyPose
...@@ -108,16 +117,21 @@ Paddle provides ImageNet-pretrained backbone models. All pretrained models
Please refer to [HigherHRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/higherhrnet)
## Multi-object tracking
### DeepSORT
Please refer to [DeepSORT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/deepsort)
### JDE
Please refer to [JDE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/jde)
### FairMOT
Please refer to [FairMOT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/fairmot)
### ByteTrack
Please refer to [ByteTrack](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/bytetrack)
...@@ -86,6 +86,14 @@ Please refer to [GFL](https://github.com/PaddlePaddle/PaddleDetection/tree/develo
Please refer to [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet)
### PP-YOLOE
Please refer to [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe)
### YOLOX
Please refer to [YOLOX](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolox)
## Rotated box detection
...@@ -93,6 +101,7 @@ Please refer to [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/de
Please refer to [S2ANet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/dota/)
## KeyPoint Detection
### PP-TinyPose
...@@ -107,16 +116,21 @@ Please refer to [HRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/dev
Please refer to [HigherHRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/higherhrnet)
## Multi-Object Tracking
### DeepSORT
Please refer to [DeepSORT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/deepsort)
### JDE
Please refer to [JDE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/jde)
### FairMOT
Please refer to [FairMOT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/fairmot)
### ByteTrack
Please refer to [ByteTrack](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/bytetrack)
\ No newline at end of file