From 7fa8061b16fdbc69091e72f10ca967628577ac6a Mon Sep 17 00:00:00 2001
From: YixinKristy <48054808+YixinKristy@users.noreply.github.com>
Date: Mon, 28 Mar 2022 17:19:59 +0800
Subject: [PATCH] Create attribute_en.md
---
deploy/pphuman/docs/attribute_en.md | 86 +++++++++++++++++++++++++++++
1 file changed, 86 insertions(+)
create mode 100644 deploy/pphuman/docs/attribute_en.md
diff --git a/deploy/pphuman/docs/attribute_en.md b/deploy/pphuman/docs/attribute_en.md
new file mode 100644
index 000000000..c0776950b
--- /dev/null
+++ b/deploy/pphuman/docs/attribute_en.md
@@ -0,0 +1,86 @@
+English | [简体中文](attribute.md)
+
+# Attribute Recognition Modules of PP-Human
+
+Pedestrian attribute recognition is widely used in intelligent communities and in industrial and transportation monitoring. PP-Human integrates pedestrian attribute recognition covering gender, age, hats, glasses, clothing, and more, 26 attributes in total. Pre-trained models are also provided and can be downloaded and used directly.
+
+| Task | Algorithm | Precision | Inference Speed (ms) | Download Link |
+|:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: |
+| Pedestrian Detection/Tracking | PP-YOLOE | mAP: 56.3 <br> MOTA: 72.0 | Detection: 28ms <br> Tracking: 33.1ms | [Download Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
+| Pedestrian Attribute Analysis | StrongBaseline | mA: 94.86 | 2ms per person | [Download Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.tar) |
+
+1. The precision of the detection/tracking model is evaluated on MOT17; the model is trained and tested jointly on CrowdHuman, HIEVE, and some business data.
+2. The precision of the pedestrian attribute model is evaluated on PA100k; the model is trained and tested jointly on RAPv2, PETA, and some business data.
+3. The inference speed is measured on a T4 GPU with TensorRT FP16.
+
+## Instructions
+
+1. Download the models from the links in the table above and unzip them to ```./output_inference```, as sketched below.
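+
+A minimal sketch of this step (the URLs come from the table above; the assumption that each archive extracts into its own model directory should be checked against the actual archive contents):
+
+```bash
+mkdir -p output_inference && cd output_inference
+# Pedestrian detection/tracking model (zip archive)
+wget https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip
+unzip mot_ppyoloe_l_36e_pipeline.zip
+# Pedestrian attribute model (tar archive)
+wget https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.tar
+tar -xf strongbaseline_r50_30e_pa100k.tar
+cd ..
+```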
+2. To run inference on an image, use the following command:
+```bash
+python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
+ --image_file=test_image.jpg \
+ --device=gpu \
+ --enable_attr=True
+```
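+
+The visualized result is written to the `output` directory by default; this assumes the pipeline keeps PaddleDetection's usual `--output_dir` default, so check `pipeline.py` if you need a different location.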
+3. To run inference on a video, use the following command:
+```bash
+python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
+ --video_file=test_video.mp4 \
+ --device=gpu \
+ --enable_attr=True
+```
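+
+The same pipeline can also consume a live camera stream. A minimal sketch, assuming the `--camera_id` flag follows the usual PaddleDetection deploy convention (verify against your version of `pipeline.py`):
+
+```bash
+# Real-time inference from the first capture device
+python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
+                                  --camera_id=0 \
+                                  --device=gpu \
+                                  --enable_attr=True
+```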
+4. To change the model path, there are two options:
+
+   - Set the model paths in ```./deploy/pphuman/config/infer_cfg.yml```; for the attribute recognition model, modify the configuration under the ATTR field.
+   - Pass `--model_dir` on the command line to override the model path:
+```bash
+python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
+ --video_file=test_video.mp4 \
+ --device=gpu \
+ --enable_attr=True \
+ --model_dir det=ppyoloe/
+```
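+
+`--model_dir` takes space-separated `key=value` pairs that override the corresponding paths in `infer_cfg.yml`; `det=` points at the detection/tracking model here, and, assuming the key names mirror the config sections, the attribute model should be overridable the same way (e.g. `attr=strongbaseline_r50_30e_pa100k/`).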
+
+The test result is:
+
+