Unverified commit b9554990, authored by YixinKristy, committed by GitHub

Merge branch 'PaddlePaddle:develop' into develop

@@ -17,3 +17,4 @@ EvalDataset:
TestDataset:
!ImageFolder
anno_path: annotations/instances_val2017.json
dataset_dir: dataset/coco
@@ -17,3 +17,4 @@ EvalDataset:
TestDataset:
!ImageFolder
anno_path: annotations/instances_val2017.json
dataset_dir: dataset/coco
@@ -17,3 +17,4 @@ EvalDataset:
TestDataset:
!ImageFolder
anno_path: trainval_split/s2anet_trainval_paddle_coco.json
dataset_dir: dataset/DOTA_1024_s2anet/
@@ -17,6 +17,7 @@ EvalDataset:
TestDataset:
!ImageFolder
dataset_dir: dataset/mot/MOT17
anno_path: annotations/val_half.json
@@ -17,6 +17,7 @@ EvalDataset:
TestDataset:
!ImageFolder
dataset_dir: dataset/mot/MOT17
anno_path: annotations/val_half.json
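Each of these hunks pins `dataset_dir` alongside `anno_path` because the annotation file is looked up relative to the dataset root. A minimal sketch of that resolution (the `resolve_anno_path` helper is hypothetical, not PaddleDetection's actual API):
```
import os

def resolve_anno_path(dataset_dir, anno_path):
    # Hypothetical helper: the annotation file is resolved relative to
    # the dataset root, which is why dataset_dir must be set explicitly.
    anno_file = os.path.join(dataset_dir, anno_path)
    if not os.path.isfile(anno_file):
        # A missing file triggers the default-category fallback added in
        # ppdet/data/source/category.py further down this commit.
        print("anno_file '{}' does not exist".format(anno_file))
    return anno_file

# resolve_anno_path('dataset/coco', 'annotations/instances_val2017.json')
# -> 'dataset/coco/annotations/instances_val2017.json'
```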
@@ -92,20 +92,26 @@ ATTR:
### 3. Inference and Deployment
```
# Specify the config file path and test image
# Pedestrian detection: specify the config file path and test image
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --image_file=test_image.jpg --device=gpu
# Specify the config file path and test video to run attribute recognition
# Pedestrian tracking: specify the config file path and test video
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu
# Pedestrian tracking: specify the config file path, model path, and test video
# The model path specified on the command line takes priority over the config file
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/
# Pedestrian attribute recognition: specify the config file path and test video
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --enable_attr=True
# Specify the config file path and test video to run action recognition
# Action recognition: specify the config file path and test video
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --enable_action=True
# Specify the config file path, model path, and test video to run multi-object tracking
# The model path specified on the command line takes priority over the config file
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/
```
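Both `--model_dir` examples above note that command-line model paths take priority over the config file. A sketch of how such `task=path` pairs could be parsed and applied (illustrative only; the actual argument handling lives in `deploy/pphuman/pipeline.py`):
```
def parse_model_dir(pairs):
    # pairs come from e.g. --model_dir det=ppyoloe/ attr=strongbaseline/;
    # paths given on the command line take priority over the config file.
    overrides = {}
    for pair in pairs:
        task, _, path = pair.partition('=')
        overrides[task] = path
    return overrides

# Hypothetical defaults read from infer_cfg.yml:
model_dirs = {'det': 'output_inference/mot_ppyoloe_l_36e_pipeline/'}
model_dirs.update(parse_model_dir(['det=ppyoloe/']))
print(model_dirs)  # {'det': 'ppyoloe/'}
```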
For other usages, please refer to the [sub-task docs](./docs)
#### 3.1 Description of Parameters
| Parameter | Required or Optional | Meaning |
@@ -136,17 +142,17 @@ The overall PP-Human solution is shown in the figure below
</div>
### 1. Object Detection
### 1. Pedestrian Detection
- PP-YOLOE L is used as the object detection model
- For details, refer to [PP-YOLOE](../../configs/ppyoloe/) and the [detection and tracking doc](docs/mot.md)
### 2. Multi-Object Tracking
- Multi-object tracking uses the SDE solution
### 2. Pedestrian Tracking
- Pedestrian tracking uses the SDE solution
- PP-YOLOE L is used as the detection model
- The tracking module uses the Bytetrack solution
- For details, refer to [Bytetrack](../../configs/mot/bytetrack) and the [detection and tracking doc](docs/mot.md)
### 3. Cross-Camera Tracking
### 3. Cross-Camera Pedestrian Tracking
- PP-YOLOE + Bytetrack is used to obtain single-camera multi-object tracks
- ReID (centroid network) extracts features from the detection results of each frame
- Track features from multiple cameras are matched to obtain the cross-camera tracking results; a sketch of this matching step follows below
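A minimal sketch of that cross-camera matching, assuming per-track averaged ReID features and greedy selection above a similarity threshold (illustrative numpy code, not the repo's MTMCT implementation):
```
import numpy as np

def match_tracks(feats_cam_a, feats_cam_b, threshold=0.5):
    # feats_*: (num_tracks, feat_dim) arrays of per-track ReID features
    a = feats_cam_a / np.linalg.norm(feats_cam_a, axis=1, keepdims=True)
    b = feats_cam_b / np.linalg.norm(feats_cam_b, axis=1, keepdims=True)
    sim = a @ b.T  # cosine similarity of every cross-camera track pair
    matches = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(sim[i]))
        if sim[i, j] > threshold:  # greedy match above a similarity threshold
            matches.append((i, j))
    return matches
```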
@@ -3,11 +3,11 @@ English | [简体中文](README.md)
# PP-Human: A Real-Time Pedestrian Analysis Tool
PP-Human serves as the first open-source tool of real-time pedestrian analysis relying on the PaddlePaddle deep learning framework. Versatile and efficient in deployment, it has been used in various scenarios. PP-Human
offers many input options, including image/single-camera video/multi-camera video, and covers multi-object tracking, attribute recognition, and action recognition. PP-Human can be applied to intelligent traffic, the intelligent community, industrial patrol, and so on. It supports server-side deployment and TensorRT acceleration, and can even achieve real-time analysis on the T4 server.
offers many input options, including image/single-camera video/multi-camera video, and covers multi-object tracking, attribute recognition, and action recognition. PP-Human can be applied to intelligent traffic, the intelligent community, industrial patrol, and so on. It supports server-side deployment and TensorRT acceleration, and achieves real-time analysis on the T4 server.
## I. Environment Preparation
Requirement: PaddleDetection version >= release/2.4
Requirement: PaddleDetection version >= release/2.4 or develop
The installation of PaddlePaddle and PaddleDetection
@@ -38,18 +38,19 @@ To make users have access to models of different scenarios, PP-Human provides pr
| Task | Scenario | Precision | Inference Speed (ms) | Model Inference and Deployment |
| :---------: |:---------: |:--------------- | :-------: | :------: |
| Object Detection | Image/Video Input | - | - | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Attribute Recognition | Image/Video Input Attribute Recognition | - | - | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.tar) |
| Keypoint Detection | Video Input Action Recognition | - | - | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Behavior Recognition | Video Input Behavior Recognition | - | - | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| ReID | Multi-Target Multi-Camera Tracking | - | - | [Link]() |
| Object Detection | Image/Video Input | mAP: 56.3 | 28.0ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Attribute Recognition | Image/Video Input Attribute Recognition | MOTA: 72.0 | 33.1ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) |
| Keypoint Detection | Video Input Action Recognition | mA: 94.86 | 2ms per person | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Behavior Recognition | Video Input Behavior Recognition | Precision: 96.43 | 2.7ms per person | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| ReID | Multi-Target Multi-Camera Tracking | mAP: 99.7 | - | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) |
Then, unzip the downloaded model to the folder `./output_inference`.
**Note:**
- The model accuracy comes from training on a fusion of open-source and enterprise datasets.
- The inference speed is measured on T4, using TensorRT FP16.
- The precision of the ReID model is evaluated on Market1501.
- The inference speed is tested on T4 with TensorRT FP16, and covers the whole pipeline of preprocessing, inference, and postprocessing.
### 2. Preparation of Configuration Files
@@ -80,31 +81,44 @@ ATTR:
batch_size: 8
```
**Note:**
- For different tasks, users can add `--enable_attr=True` or `--enable_action=True` on the command line without changing the config file.
- If you only need to change the model path, add `--model_dir det=ppyoloe/` on the command line without changing the config file. For details, please refer to the docs below.
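Per-task settings such as `ATTR.batch_size` above come from `deploy/pphuman/config/infer_cfg.yml`. A minimal sketch of reading them, assuming PyYAML (the override comment describes the intended behavior, not the repo's exact code):
```
import yaml

with open('deploy/pphuman/config/infer_cfg.yml') as f:
    cfg = yaml.safe_load(f)

attr_cfg = cfg['ATTR']             # the ATTR section shown above
print(attr_cfg.get('batch_size'))  # -> 8
# A --model_dir override from the command line would replace the
# model path in this section before the predictor is built.
```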
### 3. Inference and Deployment
```
# Specify the config file path and test images
# Pedestrian detection. Specify the config file path and test images
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --image_file=test_image.jpg --device=gpu
# Specify the config file path and test videos, and run attribute recognition
# Pedestrian tracking. Specify the config file path and test videos
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu
# Pedestrian tracking. Specify the config file path, the model path and test videos
# The model path specified on the command line takes priority over the config file
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/
# Attribute recognition. Specify the config file path and test videos
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --enable_attr=True
# Specify the config file path and test videos, and run action recognition
# Action Recognition. Specify the config file path and test videos
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --enable_action=True
# Specify the config file path, the model path, and test videos, and run multi-object tracking
# The model path specified on the command line takes priority over the config file
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/
# Multi-Camera pedestrian tracking. Specify the config file path and test videos
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_dir=test_video_dir/ --device=gpu
```
For other usages, please refer to the [sub-task docs](./docs)
### 3.1 Description of Parameters
| Parameter | Required or Optional | Meaning |
|-------|-------|----------|
| --config | Required | Config file path |
| --model_dir | Optional | The model paths of different tasks in PP-Human, with a priority higher than config files |
| --model_dir | Optional | The model paths of different tasks in PP-Human, with a priority higher than config files. For example, `--model_dir det=better_det/ attr=better_attr/` |
| --image_file | Optional | Image to be predicted |
| --image_dir | Optional | Directory of images to be predicted |
| --video_file | Optional | Video to be predicted |
@@ -130,21 +144,21 @@ The overall solution of PP-Human is as follows:
### 1. Object Detection
- Use PP-YOLOE L as the model of object detection
- For details, please refer to [PP-YOLOE](../../configs/ppyoloe/)
- For details, please refer to [PP-YOLOE](../../configs/ppyoloe/) and [Detection and Tracking](docs/mot_en.md)
### 2. Multi-Object Tracking
- Conduct multi-object tracking with the SDE solution
- Use PP-YOLOE L as the detection model
- Use the Bytetrack solution for the tracking module
- For details, refer to [Bytetrack](configs/mot/bytetrack)
- For details, refer to [Bytetrack](configs/mot/bytetrack) and [Detection and Tracking](docs/mot_en.md)
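For intuition, ByteTrack's key idea is a two-stage association: confident detections are matched to tracks first, and low-score detections (often occluded pedestrians) are then used to recover the remaining tracks instead of being discarded. A minimal greedy sketch (illustrative only, not the configs/mot/bytetrack implementation; detections are `(box, score)` tuples):
```
def iou(a, b):
    # a, b are (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def greedy_iou_match(tracks, dets, thresh=0.3):
    matches, left = [], list(tracks)
    for box, score in dets:
        best = max(left, key=lambda t: iou(t, box), default=None)
        if best is not None and iou(best, box) > thresh:
            matches.append((best, box))
            left.remove(best)
    return matches, left

def bytetrack_step(tracks, dets, high=0.6, low=0.1):
    # Stage 1: associate high-score detections with existing tracks by IoU.
    m1, left = greedy_iou_match(tracks, [d for d in dets if d[1] >= high])
    # Stage 2: low-score detections (often occluded people) recover the
    # leftover tracks instead of being thrown away -- ByteTrack's key idea.
    m2, left = greedy_iou_match(left, [d for d in dets if low <= d[1] < high])
    return m1 + m2, left
```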
### 3. Cross-Camera Tracking
### 3. Multi-Camera Tracking
- Use PP-YOLOE + Bytetrack to obtain the tracks of single-camera multi-object tracking
- Use ReID (centroid network) to extract features of the detection result of each frame
- Match the features of multi-camera tracks to get the cross-camera tracking result
- For details, please refer to [Cross-Camera Tracking](docs/mtmct_en.md)
- For details, please refer to [Multi-Camera Tracking](docs/mtmct_en.md)
### 4. Multi-Target Multi-Camera Tracking
### 4. Attribute Recognition
- Use PP-YOLOE + Bytetrack to track humans
- Use StrongBaseline (a multi-label classification model) to conduct attribute recognition; the main attributes include age, gender, hats, glasses, clothing, and backpacks.
- For details, please refer to [Attribute Recognition](docs/attribute_en.md)
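Since StrongBaseline treats each attribute as an independent binary prediction, decoding is a per-attribute sigmoid and threshold rather than a softmax. A minimal sketch (attribute names and the 0.5 threshold are illustrative assumptions):
```
import numpy as np

ATTRS = ['male', 'hat', 'glasses', 'backpack']  # illustrative subset

def decode_attributes(logits, threshold=0.5):
    # Each attribute is an independent binary decision (multi-label),
    # so a sigmoid + per-attribute threshold is used, not a softmax.
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=np.float64)))
    return [name for name, p in zip(ATTRS, probs) if p > threshold]

print(decode_attributes([2.1, -1.3, 0.4, 3.0]))  # -> ['male', 'glasses', 'backpack']
```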
@@ -39,6 +39,11 @@ def get_categories(metric_type, anno_file=None, arch=None):
if arch == 'keypoint_arch':
return (None, {'id': 'keypoint'})
if anno_file is None or (not os.path.isfile(anno_file)):
logger.warning("anno_file '{}' is None or does not exist, "
"please recheck TrainDataset/EvalDataset/TestDataset.anno_path, "
"otherwise default categories will be used according to metric_type.".format(anno_file))
if metric_type.lower() == 'coco' or metric_type.lower(
) == 'rbox' or metric_type.lower() == 'snipercoco':
if anno_file and os.path.isfile(anno_file):
@@ -55,8 +60,9 @@ def get_categories(metric_type, anno_file=None, arch=None):
# anno file does not exist, load default categories of COCO17
else:
if metric_type.lower() == 'rbox':
logger.warning("metric_type: {}, load default categories of DOTA.".format(metric_type))
return _dota_category()
logger.warning("metric_type: {}, load default categories of COCO.".format(metric_type))
return _coco17_category()
elif metric_type.lower() == 'voc':
@@ -77,6 +83,7 @@ def get_categories(metric_type, anno_file=None, arch=None):
# anno file does not exist, load default categories of
# VOC all 20 categories
else:
logger.warning("metric_type: {}, load default categories of VOC.".format(metric_type))
return _vocall_category()
elif metric_type.lower() == 'oid':
@@ -104,6 +111,7 @@ def get_categories(metric_type, anno_file=None, arch=None):
return clsid2catid, catid2name
# anno file does not exist, load default category 'pedestrian'.
else:
logger.warning("metric_type: {}, load default categories of pedestrian MOT.".format(metric_type))
return _mot_category(category='pedestrian')
elif metric_type.lower() in ['kitti', 'bdd100kmot']:
@@ -122,6 +130,7 @@ def get_categories(metric_type, anno_file=None, arch=None):
return clsid2catid, catid2name
# anno file does not exist, load default categories of visdrone all 10 categories
else:
logger.warning("metric_type: {}, load default categories of VisDrone.".format(metric_type))
return _visdrone_category()
else:
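With these warnings in place, a call without a valid annotation file logs the fallback and returns the default category set. A short usage sketch of `get_categories`, assuming it is imported from `ppdet.data.source.category`:
```
from ppdet.data.source.category import get_categories

# Without a valid anno_file the warning above fires and the COCO17
# defaults are returned for metric_type 'COCO'.
clsid2catid, catid2name = get_categories('COCO', anno_file=None)
print(len(catid2name))  # expected: 80 default COCO categories

# With a valid annotation file, categories come from the JSON instead:
# get_categories('COCO', anno_file='dataset/coco/annotations/instances_val2017.json')
```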
@@ -26,8 +26,6 @@ from motmetrics.math_util import quiet_divide
import numpy as np
import pandas as pd
import paddle
import paddle.nn.functional as F
from .metrics import Metric
import motmetrics as mm
import openpyxl
@@ -311,6 +309,8 @@ class MCMOTEvaluator(object):
self.gt_filename = os.path.join(self.data_root, '../',
'sequences',
'{}.txt'.format(self.seq_name))
if not os.path.exists(self.gt_filename):
logger.warning("gt_filename '{}' of MCMOTEvaluator does not exist, so the MOTA will be -inf.".format(self.gt_filename))
def reset_accumulator(self):
import motmetrics as mm
@@ -22,8 +22,7 @@ import sys
import math
from collections import defaultdict
import numpy as np
import paddle
import paddle.nn.functional as F
from ppdet.modeling.bbox_utils import bbox_iou_np_expand
from .map_utils import ap_per_class
from .metrics import Metric
@@ -36,8 +35,10 @@ __all__ = ['MOTEvaluator', 'MOTMetric', 'JDEDetMetric', 'KITTIMOTMetric']
def read_mot_results(filename, is_gt=False, is_ignore=False):
valid_labels = {1}
ignore_labels = {2, 7, 8, 12} # only in motchallenge datasets like 'MOT16'
valid_label = [1]
ignore_labels = [2, 7, 8, 12] # only in motchallenge datasets like 'MOT16'
logger.info("In MOT16/17 dataset the valid_label of ground truth is '{}', "
"in other dataset it should be '0' for single classs MOT.".format(valid_label[0]))
results_dict = dict()
if os.path.isfile(filename):
with open(filename, 'r') as f:
@@ -50,12 +51,10 @@ def read_mot_results(filename, is_gt=False, is_ignore=False):
continue
results_dict.setdefault(fid, list())
box_size = float(linelist[4]) * float(linelist[5])
if is_gt:
label = int(float(linelist[7]))
mark = int(float(linelist[6]))
if mark == 0 or label not in valid_labels:
if mark == 0 or label not in valid_label:
continue
score = 1
elif is_ignore:
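For reference, the comma-separated MOTChallenge `gt.txt` fields that these indices address; a small illustrative parse (the sample line is made up):
```
# MOTChallenge gt.txt columns:
# frame, track_id, bb_left, bb_top, bb_width, bb_height, mark, class, visibility
line = "1,1,912,484,97,109,1,1,1.0"  # made-up sample line
linelist = line.split(',')
mark = int(float(linelist[6]))   # 0 means the box is ignored for evaluation
label = int(float(linelist[7]))  # must be in valid_label ([1]) for MOT16/17 gt
box_size = float(linelist[4]) * float(linelist[5])  # width * height
```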
@@ -118,6 +117,8 @@ class MOTEvaluator(object):
assert self.data_type == 'mot'
gt_filename = os.path.join(self.data_root, self.seq_name, 'gt',
'gt.txt')
if not os.path.exists(gt_filename):
logger.warning("gt_filename '{}' of MOTEvaluator does not exist, so the MOTA will be -inf.".format(gt_filename))
self.gt_frame_dict = read_mot_results(gt_filename, is_gt=True)
self.gt_ignore_frame_dict = read_mot_results(
gt_filename, is_ignore=True)
@@ -62,7 +62,7 @@ class TestGFL(TestFasterRCNN):
class TestPicoDet(TestFasterRCNN):
def set_config(self):
self.cfg_file = 'configs/picodet/picodet_s_320_coco.yml'
self.cfg_file = 'configs/picodet/picodet_s_320_coco_lcnet.yml'
if __name__ == '__main__':