diff --git a/deploy/pipeline/README.md b/deploy/pipeline/README.md
index 895e1dd7e9c3dfac7f269baa56611e1af98daeb9..c0aa67a4c15e4c043eb6013240255f7f33613ced 100644
--- a/deploy/pipeline/README.md
+++ b/deploy/pipeline/README.md
@@ -1,4 +1,4 @@
-[English](README_en.md) | 简体中文
+简体中文

 # 实时行人分析工具 PP-Human

diff --git a/deploy/pipeline/docs/tutorials/action.md b/deploy/pipeline/docs/tutorials/action.md
index 75d8cf676de0ddcd272786587ccc1ddf2dd620c8..c3b31a1547ef1667f93f99735242b9ae1f7d10af 100644
--- a/deploy/pipeline/docs/tutorials/action.md
+++ b/deploy/pipeline/docs/tutorials/action.md
@@ -71,7 +71,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
 ### 方案说明
 1. 使用目标检测与多目标跟踪获取视频输入中的行人检测框及跟踪ID序号,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md)。
 2. 通过行人检测框的坐标在输入视频的对应帧中截取每个行人。
-3. 使用[关键点识别模型](../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml)得到对应的17个骨骼特征点。骨骼特征点的顺序及类型与COCO一致,详见[如何准备关键点数据集](../../../docs/tutorials/PrepareKeypointDataSet_cn.md)中的`COCO数据集`部分。
+3. 使用[关键点识别模型](../../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml)得到对应的17个骨骼特征点。骨骼特征点的顺序及类型与COCO一致,详见[如何准备关键点数据集](../../../../docs/tutorials/data/PrepareKeypointDataSet.md)中的`COCO数据集`部分。
 4. 每个跟踪ID对应的目标行人各自累计骨骼特征点结果,组成该人物的时序关键点序列。当累计到预定帧数或跟踪丢失后,使用行为识别模型判断时序关键点序列的动作类型。当前版本模型支持摔倒行为的识别,预测得到的`class id`对应关系为:
 ```
 0: 摔倒,
diff --git a/deploy/pipeline/docs/tutorials/action_en.md b/deploy/pipeline/docs/tutorials/action_en.md
index e804aad42640e51161d20ae5d71d0259a44081f7..51ab1cf41a5b5c07dbcdaef253f3f789376cd52e 100644
--- a/deploy/pipeline/docs/tutorials/action_en.md
+++ b/deploy/pipeline/docs/tutorials/action_en.md
@@ -88,7 +88,7 @@ SKELETON_ACTION: # Config for skeleton-based action recognition model
 1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe).
 2. Capture every pedestrian in frames of the input video accordingly by using the coordinate of the detection box.
-3. In this strategy, we use the [keypoint detection model](../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml) to obtain 17 skeleton keypoints. Their sequences and types are identical to those of COCO. For details, please refer to the `COCO dataset` part of [how to prepare keypoint datasets](../../../docs/tutorials/PrepareKeypointDataSet_en.md).
+3. In this strategy, we use the [keypoint detection model](../../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml) to obtain 17 skeleton keypoints. Their sequences and types are identical to those of COCO. For details, please refer to the `COCO dataset` part of [how to prepare keypoint datasets](../../../../docs/tutorials/data/PrepareKeypointDataSet_en.md).
 4. Each target pedestrian with a tracking ID has their own accumulation of skeleton keypoints, which is used to form a keypoint sequence in time order. When the number of accumulated frames reach a preset threshold or the tracking is lost, the action recognition model will be applied to judging the action type of the time-ordered keypoint sequence. The current model only supports the recognition of the act of falling down, and the relationship between the action type and `class id` is:
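
For context, the tutorial steps touched by this patch describe accumulating per-track skeleton keypoints and classifying the sequence once a preset number of frames is reached (or the track is lost). Below is a minimal Python sketch of that accumulation step; the names `SkeletonActionBuffer`, `classify_fn`, and `SEQ_LEN` are hypothetical and not PP-Human's actual API, while the `class id` mapping (0 = falling down) comes from the tutorial itself.

```python
from collections import defaultdict, deque

# Class id mapping taken from the tutorial: 0 corresponds to falling down.
ACTION_LABELS = {0: "falling down"}
SEQ_LEN = 50  # assumed number of accumulated frames before classification


class SkeletonActionBuffer:
    """Accumulate per-track 17-keypoint frames and classify complete sequences."""

    def __init__(self, seq_len=SEQ_LEN):
        self.seq_len = seq_len
        # track_id -> most recent keypoint frames, each an array-like of shape (17, 2)
        self.buffers = defaultdict(lambda: deque(maxlen=seq_len))

    def update(self, track_id, keypoints, classify_fn):
        """Add one frame of keypoints; return an action label once enough frames exist."""
        buf = self.buffers[track_id]
        buf.append(keypoints)
        if len(buf) < self.seq_len:
            return None
        class_id = classify_fn(list(buf))  # e.g. a skeleton-based action model
        buf.clear()
        return ACTION_LABELS.get(class_id, "other")
```

On tracking loss, the same `classify_fn` call could be applied to whatever the buffer currently holds, mirroring step 4 of both tutorials.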