English | [简体中文](pphuman_mtmct.md)

# Multi-Target Multi-Camera Tracking Module of PP-Human

Multi-target multi-camera tracking (MTMCT) matches the identities of people across different cameras, building on single-camera tracking. MTMCT is commonly used in security systems and smart retail.
The MTMCT module of PP-Human aims to provide a simple and efficient multi-target multi-camera tracking pipeline.

## How to Use

1. Download the [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to `./output_inference`. For the MOT model, please refer to the [MOT description](./pphuman_mot.md).

2. In MTMCT mode, the input videos must all be placed in the same directory. Set `enable: True` under `REID` in `infer_cfg_pphuman.yml` (a scripted version of this step is sketched after this list). The command line is:
```bash
python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu
```

3. The configuration can be modified in `./deploy/pipeline/config/infer_cfg_pphuman.yml`, or overridden on the command line with `-o`:

```bash
python3 deploy/pipeline/pipeline.py \
        --config deploy/pipeline/config/infer_cfg_pphuman.yml -o REID.model_dir=reid_best/ \
        --video_dir=[your_video_file_directory] \
        --device=gpu
```
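
For reference, step 2 can also be scripted. The sketch below is a hypothetical helper, not part of PaddleDetection; it assumes `REID` is a top-level section of the config file with an `enable` key, as the steps above describe:

```python
# Sketch: enable the REID section of the PP-Human config, then run the pipeline.
# Assumes PyYAML is installed and the script runs from the PaddleDetection root.
# Note: round-tripping the YAML this way drops any comments in the file.
import subprocess
import yaml

CFG = "deploy/pipeline/config/infer_cfg_pphuman.yml"

with open(CFG) as f:
    cfg = yaml.safe_load(f)

# Equivalent to manually setting "enable: True" under REID (assumed layout).
cfg.setdefault("REID", {})["enable"] = True

with open(CFG, "w") as f:
    yaml.safe_dump(cfg, f)

subprocess.run([
    "python3", "deploy/pipeline/pipeline.py",
    "--config", CFG,
    "--video_dir", "my_videos/",  # replace with your own video directory
    "--device", "gpu",
], check=True)
```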

## Introduction to the Solution

The MTMCT module consists of the multi-target multi-camera tracking pipeline and the REID model.

1. Multi-Target Multi-Camera Tracking Pipeline (a code sketch of the matching stage follows after this list)

```
single-camera tracking [id + bbox]
              │
crop the target from the original frame according to its bbox
      │                  │
  REID model      quality assessment (occluded or not, complete or not, brightness, etc.)
      │                  │
  [feature]          [quality]
      │                  │
      └───data collector─┘
              │
  sort out and filter the features
              │
  calculate the similarity of track IDs across the videos
              │
  cluster the IDs and reassign them
```

2. The REID model is based on [reid-strong-baseline](https://github.com/michuanhaohao/reid-strong-baseline), with ResNet50 as the backbone.

Building on this baseline, the REID model used in MTMCT is trained on a combination of open-source datasets and compresses the output features to 128 dimensions, which noticeably improves generalization in practice.
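
To make the pipeline above concrete, here is a minimal sketch of the cross-camera matching stage from the diagram. The function name, threshold, and greedy matching are illustrative assumptions, not PP-Human's actual implementation; the sketch only shows how L2-normalized 128-d REID features can be compared by cosine similarity to merge track IDs between two cameras:

```python
import numpy as np

def merge_ids(feats_a, feats_b, threshold=0.5):
    """Map track IDs from camera B onto camera A by cosine similarity
    of their (quality-filtered, averaged) 128-d REID features."""
    ids_a, ids_b = list(feats_a), list(feats_b)
    fa = np.stack([feats_a[i] for i in ids_a]).astype(np.float32)
    fb = np.stack([feats_b[i] for i in ids_b]).astype(np.float32)
    # L2-normalize so a plain dot product equals cosine similarity.
    fa /= np.linalg.norm(fa, axis=1, keepdims=True)
    fb /= np.linalg.norm(fb, axis=1, keepdims=True)
    sim = fb @ fa.T  # (num_b, num_a) similarity matrix

    mapping = {}
    for row, id_b in enumerate(ids_b):
        best = int(sim[row].argmax())
        if sim[row, best] >= threshold:  # merge only confident matches
            mapping[id_b] = ids_a[best]
    return mapping

# Usage with random stand-in features:
rng = np.random.default_rng(0)
cam_a = {1: rng.normal(size=128), 2: rng.normal(size=128)}
cam_b = {7: cam_a[1] + 0.05 * rng.normal(size=128)}  # same person as ID 1
print(merge_ids(cam_a, cam_b))  # -> {7: 1}
```

As the diagram indicates, the real pipeline filters features by quality before comparison and clusters IDs across all videos rather than matching one camera pair at a time.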

### Other Suggestions

- The provided REID model is trained on open-source datasets. Adding your own data for training is recommended, as it yields a stronger REID model and notably improves the MTMCT results.
- The quality assessment is based on simple heuristics plus OpenCV, so its effect is limited. If possible, it is advisable to train a dedicated quality assessment model (a heuristic sketch follows below).
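
As a reference point, below is a hedged sketch of the kind of heuristic check the second suggestion describes. The thresholds and the scoring scheme are assumptions for illustration; the heuristics actually used in PP-Human may differ:

```python
import cv2

def crop_quality(frame, bbox, min_side=32, dark=40, bright=220):
    """Return a quality score in [0, 1] for a person crop; 0 means discard."""
    h, w = frame.shape[:2]
    x1, y1, x2, y2 = map(int, bbox)
    crop = frame[max(y1, 0):y2, max(x1, 0):x2]
    if crop.size == 0 or min(crop.shape[:2]) < min_side:
        return 0.0  # too small to yield a reliable REID feature
    # Brightness check: reject crops that are nearly black or blown out.
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    mean_val = float(gray.mean())
    if mean_val < dark or mean_val > bright:
        return 0.0
    # Completeness check: a bbox touching the frame border is likely truncated.
    complete = x1 > 0 and y1 > 0 and x2 < w - 1 and y2 < h - 1
    return 1.0 if complete else 0.5
```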


### Example

- camera 1:
<div width="1080" align="center">
  <img src="../images/c1.gif"/>
</div>

- camera 2:
<div width="1080" align="center">
  <img src="../images/c2.gif"/>
</div>


## Reference
```
@InProceedings{Luo_2019_CVPR_Workshops,
author = {Luo, Hao and Gu, Youzhi and Liao, Xingyu and Lai, Shenqi and Jiang, Wei},
title = {Bag of Tricks and a Strong Baseline for Deep Person Re-Identification},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}

@ARTICLE{Luo_2019_Strong_TMM,
author={H. {Luo} and W. {Jiang} and Y. {Gu} and F. {Liu} and X. {Liao} and S. {Lai} and J. {Gu}},
journal={IEEE Transactions on Multimedia},
title={A Strong Baseline and Batch Normalization Neck for Deep Person Re-identification},
year={2019},
pages={1-1},
doi={10.1109/TMM.2019.2958756},
ISSN={1941-0077},
}
```