Unverified commit bb4fbe84, authored by JYChen, committed by GitHub

[Cherry-pick ] add pdparams links (#5866)

* fix kpt doc error

* add pdparams links
Parent 530fb7c4
@@ -52,7 +52,6 @@ PaddleDetection 关键点检测能力紧跟业内最新最优算法方案,包
## Model Zoo
COCO Dataset
| Model | Strategy | Input Size | AP (COCO Val) | Model Download | Config |
| :---------------- | -------- | :----------: | :----------------------------------------------------------: | ----------------------------------------------------| ------- |
@@ -77,12 +76,12 @@ MPII数据集
| :---- | ---|----- | :--------: | :------------: | :----------------------------------------------------------: | -------------------------------------------- |
| HRNet-w32 | Top-Down | 256x256 | 90.6 | 38.5 | [hrnet_w32_256x256_mpii.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x256_mpii.pdparams) | [config](./hrnet/hrnet_w32_256x256_mpii.yml) |
Models for Specific Scenarios
| Model | Strategy | Input Size | Precision | Inference Speed | Model Weights | Deployment Model | Description |
| :---- | :---: | :---: | :--------: | :--------: | :------------: | :------------: | :-------------------: |
| HRNet-w32 + DarkPose | Top-Down | 256x192 | AP: 87.1 (internal business dataset) | 2.9ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | Specially optimized for fall-detection scenarios; this model is used in [PP-Human](../../deploy/pphuman/README.md) |
We also provide [PP-TinyPose](./tiny_pose/README.md), a real-time keypoint detection model based on LiteHRNet (Top-Down) and optimized for mobile devices. You are welcome to try it.
| Model | Input Size | AP (COCO Val) | Single-Person Inference Time (FP32) | Single-Person Inference Time (FP16) | Config | Model Weights | Deployment Model | Paddle-Lite Deployment Model (FP32) | Paddle-Lite Deployment Model (FP16) |
| :------------------------ | :-------: | :------: | :------: | :---: | :---: | :---: | :---: | :---: | :---: |
| PP-TinyPose | 128*96 | 58.1 | 4.57ms | 3.27ms | [Config](./tinypose_128x96.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.nb) | [Lite Deployment Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16.nb) |
| PP-TinyPose | 256*192 | 68.8 | 14.07ms | 8.33ms | [Config](./tinypose_256x192.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.nb) | [Lite Deployment Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16.nb) |
## Quick Start
......
@@ -82,6 +82,12 @@ MPII Dataset
| HRNet-w32 | 256x256 | 90.6 | 38.5 | [hrnet_w32_256x256_mpii.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x256_mpii.pdparams) | [config](./hrnet/hrnet_w32_256x256_mpii.yml) |
Models for Specific Scenarios
| Model | Strategy | Input Size | Precision | Inference Speed | Model Weights | Model Inference and Deployment | Description |
| :---- | :---: | :---: | :--------: | :-------: | :------------: | :------------: | :-------------------: |
| HRNet-w32 + DarkPose | Top-Down | 256x192 | AP: 87.1 (on internal dataset) | 2.9ms per person | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | Specially optimized for fall-detection scenarios; this model is used in [PP-Human](../../deploy/pphuman/README_en.md) |
We also release [PP-TinyPose](./tiny_pose/README_en.md), a real-time keypoint detection model optimized for mobile devices. You are welcome to try it.
## Getting Started
@@ -145,7 +151,6 @@ CUDA_VISIBLE_DEVICES=0 python3 tools/infer.py -c configs/keypoint/higherhrnet/hi
##### Deployment for Top-Down models
```shell
# Export the detection model
python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams
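# A hedged sketch (not part of the original snippet): the companion keypoint-model export
# uses the same tools/export_model.py entry point. The config path is derived from the
# ./hrnet/hrnet_w32_256x256_mpii.yml link in the model zoo above, so the exact location
# under configs/keypoint/ is an assumption about the repository layout.
python tools/export_model.py -c configs/keypoint/hrnet/hrnet_w32_256x256_mpii.yml -o weights=https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x256_mpii.pdparams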
......
@@ -40,14 +40,14 @@ pip install -r requirements.txt
PP-Human provides pretrained models for object detection, attribute recognition, action recognition, and ReID to cover different usage scenarios. Users can download and use them directly.
| Task | Scenario | Precision | Inference Speed (ms) | Deployment Model |
| :---------: | :---------: | :--------------- | :-------: | :------: |
| Object Detection | Image Input | mAP: 56.3 | 28.0ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Object Tracking | Video Input | MOTA: 72.0 | 33.1ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Attribute Recognition | Image/Video Input, Attribute Recognition | mA: 94.86 | 2ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) |
| Keypoint Detection | Video Input, Action Recognition | AP: 87.1 | 2.9ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Action Recognition | Video Input, Action Recognition | Accuracy: 96.43 | 2.7ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| ReID | Video Input, Multi-Target Multi-Camera Tracking | mAP: 98.8 | 1.5ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) |
| Task | Scenario | Precision | Inference Speed (ms) | Model Weights | Deployment Model |
| :---------: | :---------: | :--------------- | :-------: | :------: | :------: |
| Object Detection | Image Input | mAP: 56.3 | 28.0ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Object Tracking | Video Input | MOTA: 72.0 | 33.1ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Attribute Recognition | Image/Video Input, Attribute Recognition | mA: 94.86 | 2ms per person | - | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) |
| Keypoint Detection | Video Input, Action Recognition | AP: 87.1 | 2.9ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Action Recognition | Video Input, Action Recognition | Accuracy: 96.43 | 2.7ms per person | - | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| ReID | Video Input, Multi-Target Multi-Camera Tracking | mAP: 98.8 | 1.5ms per person | - | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) |
After downloading the models, unzip them into the `./output_inference` folder.
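As a reference, here is a minimal download-and-unpack sketch for one of the deployment models listed above; it assumes `wget` and `unzip` are available locally, and the URL is the detection/tracking model from the table:

```shell
# Fetch a deployment model from the table above and unpack it into ./output_inference
mkdir -p output_inference
cd output_inference
wget https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip
unzip mot_ppyoloe_l_36e_pipeline.zip
```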
......
@@ -41,13 +41,14 @@ pip install -r requirements.txt
To cover different usage scenarios, PP-Human provides pretrained models for object detection, attribute recognition, action recognition, and ReID, which users can download and use directly.
| Task | Scenario | Precision | Inference Speed (ms) | Model Inference and Deployment |
| :---------: |:---------: |:--------------- | :-------: | :------: |
| Object Detection | Image/Video Input | mAP: 56.3 | 28.0ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Attribute Recognition | Image/Video Input Attribute Recognition | MOTA: 72.0 | 33.1ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) |
| Keypoint Detection | Video Input Action Recognition | mA: 94.86 | 2ms per person | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Behavior Recognition | Video Input Behavior Recognition | Precision: 96.43 | 2.7ms per person | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| ReID | Multi-Target Multi-Camera Tracking | mAP: 98.8 | 1.5ms per person | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) |
| Task | Scenario | Precision | Inference Speed (ms) | Model Weights | Model Inference and Deployment |
| :---------: |:---------: |:--------------- | :-------: | :------: | :------: |
| Object Detection | Image/Video Input | mAP: 56.3 | 28.0ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Object Tracking | Image/Video Input | MOTA: 72.0 | 33.1ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Attribute Recognition | Image/Video Input Attribute Recognition | mA: 94.86 | 2ms per person | - |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) |
| Keypoint Detection | Video Input Action Recognition | AP: 87.1 | 2.9ms per person | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Action Recognition | Video Input Action Recognition | Precision: 96.43 | 2.7ms per person | - | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| ReID | Multi-Target Multi-Camera Tracking | mAP: 98.8 | 1.5ms per person | - |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) |
Then, unzip the downloaded model to the folder `./output_inference`.
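For reference, a minimal sketch of fetching and unpacking the keypoint deployment model from the table above into `./output_inference`, assuming `wget` and `unzip` are available:

```shell
# Fetch the keypoint deployment model listed above and unpack it into ./output_inference
mkdir -p output_inference
cd output_inference
wget https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip
unzip dark_hrnet_w32_256x192.zip
```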
......
[English](action_en.md) | Simplified Chinese
# PP-Human Action Recognition Module
Action recognition is widely used in smart communities, security surveillance, and similar scenarios. PP-Human integrates a skeleton-based action recognition module.
@@ -10,11 +12,11 @@
## Model Zoo
Here we provide pretrained models for detection/tracking, keypoint detection, and fall-action recognition, which users can download and use directly.
| Task | Algorithm | Precision | Inference Speed (ms) | Download Link |
|:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: |
| Pedestrian Detection/Tracking | PP-YOLOE | mAP: 56.3 <br> MOTA: 72.0 | Detection: 28ms <br> Tracking: 33.1ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Keypoint Detection | HRNet | AP: 87.1 | 2.9ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Action Recognition | ST-GCN | Accuracy: 96.43 | 2.7ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| Task | Algorithm | Precision | Inference Speed (ms) | Model Weights | Deployment Model |
|:---------------------|:---------:|:------:|:------:| :------: |:---------------------------------------------------------------------------------: |
| Pedestrian Detection/Tracking | PP-YOLOE | mAP: 56.3 <br> MOTA: 72.0 | Detection: 28ms <br> Tracking: 33.1ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Keypoint Detection | HRNet | AP: 87.1 | 2.9ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Action Recognition | ST-GCN | Accuracy: 96.43 | 2.7ms per person | - | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
Note:
......
English | [Simplified Chinese](action.md)
# Action Recognition Module of PP-Human
Action recognition is widely used in smart communities, smart cities, and security monitoring. PP-Human provides a skeleton-based action recognition module.
@@ -12,11 +14,11 @@ used for academic research here. </center>
There are multiple available pretrained models including pedestrian detection/tracking, keypoint detection, and fall detection models. Users can download and use them directly.
| Task | Algorithm | Precision | Inference Speed(ms) | Download Link |
|:----------------------------- |:---------:|:-------------------------:|:-----------------------------------:|:-----------------------------------------------------------------------------------------:|
| Pedestrian Detection/Tracking | PP-YOLOE | mAP: 56.3 <br> MOTA: 72.0 | Detection: 28ms <br>Tracking:33.1ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Keypoint Detection | HRNet | AP: 87.1 | Single Person 2.9ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Action Recognition | ST-GCN | Precision Rate: 96.43 | Single Person 2.7ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| Task | Algorithm | Precision | Inference Speed(ms) | Model Weights |Model Inference and Deployment |
|:----------------------------- |:---------:|:-------------------------:|:-----------------------------------:| :-----------------: |:-----------------------------------------------------------------------------------------:|
| Pedestrian Detection/Tracking | PP-YOLOE | mAP: 56.3 <br> MOTA: 72.0 | Detection: 28ms <br>Tracking:33.1ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Keypoint Detection | HRNet | AP: 87.1 | Single Person 2.9ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Action Recognition | ST-GCN | Precision Rate: 96.43 | Single Person 2.7ms | - |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
Note:
......
[English](mot_en.md) | Simplified Chinese
# PP-Human Detection and Tracking Module
Pedestrian detection and tracking is widely used in smart communities, industrial inspection, traffic monitoring, and similar scenarios. PP-Human integrates a detection and tracking module, which serves as the foundation for keypoint detection, attribute recognition, action recognition, and other tasks. Pretrained models are provided and can be downloaded and used directly.
......
English | [Simplified Chinese](mot.md)
# Detection and Tracking Module of PP-Human
Pedestrian detection and tracking is widely used in intelligent communities, industrial inspection, transportation monitoring, and so on. PP-Human provides a detection and tracking module, which is fundamental to keypoint detection, attribute recognition, action recognition, and other tasks. Pretrained models are available for direct download.
......
[English](mtmct_en.md) | Simplified Chinese
# PP-Human Multi-Target Multi-Camera Tracking Module
Multi-target multi-camera tracking builds on single-camera tracking to match and associate the identities of people across different cameras. It is widely applied in security and smart retail scenarios.
......
English | [Simplified Chinese](mtmct.md)
# Multi-Target Multi-Camera Tracking Module of PP-Human
Multi-target multi-camera tracking (MTMCT) matches the identity of a person across different cameras on top of single-camera tracking. MTMCT is typically applied to security systems and smart retail.
......