| High-Precision Model | PP-HGNet_small | mA: 95.4 | per person 1.54ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.tar) |
| Fast Model | PP-LCNet_x1_0 | mA: 94.5 | per person 0.54ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.tar) |
| Balanced Model | PP-HGNet_tiny | mA: 95.2 | per person 1.14ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_person_attribute_952_infer.tar) |
1. The precision of the detection/tracking models is obtained by training and testing on a dataset consisting of [MOT17](https://motchallenge.net/), [CrowdHuman](http://www.crowdhuman.org/), [HIEVE](http://humaninevents.org/) and some business data.
2. The precision of the pedestrian attribute analysis models is obtained by training and testing on a dataset consisting of [PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset), [RAPv2](http://www.rapdataset.com/rapv2.html), [PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html) and some business data.
3. The inference speed is measured on a T4 GPU with TensorRT FP16 enabled.
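To try one of the models above, download the exported inference archive and unpack it where the pipeline can find it. A minimal sketch using only the Python standard library; the `output_inference/` target directory is an assumption based on the usual PaddleDetection layout, so adjust it to your setup:

```python
# Minimal sketch: download and unpack the fast (PP-LCNet_x1_0) attribute model
# using only the standard library. The output_inference/ target directory is
# an assumption based on the usual PaddleDetection layout; adjust as needed.
import tarfile
import urllib.request

URL = ("https://bj.bcebos.com/v1/paddledet/models/pipeline/"
       "PPLCNet_x1_0_person_attribute_945_infer.tar")

archive, _ = urllib.request.urlretrieve(URL, "PPLCNet_x1_0_person_attribute_945_infer.tar")
with tarfile.open(archive) as tar:
    tar.extractall(path="output_inference")
```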
## Instruction
...
- Boots: Yes; No
```
4. The model adopted in attribute recognition is [StrongBaseline](https://arxiv.org/pdf/2107.03576.pdf): a multi-label classification network built on the PP-HGNet and PP-LCNet backbones, with a weighted BCE loss introduced to improve accuracy, as sketched below.
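The weights in this loss are typically derived from each attribute's positive-sample ratio, so that rare attributes are not drowned out by label imbalance. Below is a minimal numpy sketch of one common (DeepMAR-style) formulation; it is an illustration, not necessarily the exact weighting used here:

```python
# Minimal numpy sketch of weighted BCE for multi-label attribute recognition.
# The exponential weighting by per-attribute positive ratio follows the common
# DeepMAR-style formulation; the exact weights used in PP-Human may differ.
import numpy as np

def weighted_bce(logits, labels, pos_ratio, eps=1e-7):
    probs = 1.0 / (1.0 + np.exp(-logits))        # per-attribute sigmoid
    # rare positives (small pos_ratio) get exponentially larger weights
    w = np.where(labels == 1.0, np.exp(1.0 - pos_ratio), np.exp(pos_ratio))
    loss = -(labels * np.log(probs + eps) + (1.0 - labels) * np.log(1.0 - probs + eps))
    return float((w * loss).mean())

# toy usage: a batch of 2 persons, 3 attributes
logits = np.array([[2.0, -1.0, 0.5], [0.0, 3.0, -2.0]])
labels = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
pos_ratio = np.array([0.6, 0.1, 0.3])            # positive ratio per attribute
print(weighted_bce(logits, labels, pos_ratio))
```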
...
make the IDs cluster together and rearrange them
```
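The "capture the target in the original image according to bbox" step above amounts to slicing the detected person patch out of the frame before ReID feature extraction. A minimal sketch, assuming an `(x1, y1, x2, y2)` pixel-coordinate bbox:

```python
# Minimal sketch of the "capture the target according to bbox" step: slice the
# person patch out of the frame before ReID feature extraction. The
# (x1, y1, x2, y2) pixel-coordinate bbox format is an assumption.
import numpy as np

def crop_target(frame: np.ndarray, bbox) -> np.ndarray:
    x1, y1, x2, y2 = (int(v) for v in bbox)
    return frame[y1:y2, x1:x2]                    # HWC patch for the ReID model
```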
2. The model solution is [reid-strong-baseline](https://github.com/michuanhaohao/reid-strong-baseline), with ResNet50 as the backbone.
On this basis, the ReID model used in MTMCT is trained on a combination of open-source datasets and compresses its output features to 128 dimensions, which markedly improves generalization to real-world scenes.
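Concretely, cross-camera matching operates on these 128-dimensional embeddings: features are L2-normalized, pairwise cosine similarity is computed, and tracks whose similarity exceeds a threshold receive the same global ID. A minimal sketch, with an illustrative threshold and greedy merging standing in for the pipeline's actual clustering:

```python
# Minimal sketch of cross-camera matching on 128-d ReID features: L2-normalize,
# compute cosine similarity, and greedily merge tracks above a threshold.
# The 0.5 threshold and greedy merging are illustrative assumptions; the actual
# MTMCT pipeline applies its own clustering and ID rearrangement.
import numpy as np

def merge_ids(features: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """features: (n_tracks, 128), one embedding per single-camera track."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T                         # cosine similarity matrix
    global_ids = -np.ones(len(feats), dtype=int)
    next_id = 0
    for i in range(len(feats)):
        if global_ids[i] == -1:                   # open a new global identity
            global_ids[i] = next_id
            next_id += 1
        for j in range(i + 1, len(feats)):
            if global_ids[j] == -1 and sim[i, j] > threshold:
                global_ids[j] = global_ids[i]     # same person, another camera
    return global_ids

# toy usage: tracks 0 and 1 are the same person, track 2 is someone else
rng = np.random.default_rng(0)
base = rng.normal(size=128)
tracks = np.stack([base, base + 0.05 * rng.normal(size=128), rng.normal(size=128)])
print(merge_ids(tracks))                          # e.g. [0 0 1]
```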
...
## Reference
```
@InProceedings{Luo_2019_CVPR_Workshops,
  author = {Luo, Hao and Gu, Youzhi and Liao, Xingyu and Lai, Shenqi and Jiang, Wei},
  title = {Bag of Tricks and a Strong Baseline for Deep Person Re-Identification},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {June},
  year = {2019}
}
@ARTICLE{Luo_2019_Strong_TMM,
  author = {H. {Luo} and W. {Jiang} and Y. {Gu} and F. {Liu} and X. {Liao} and S. {Lai} and J. {Gu}},
  journal = {IEEE Transactions on Multimedia},
  title = {A Strong Baseline and Batch Normalization Neck for Deep Person Re-identification},