From 8d22b60a56f9b3335c6545b90cb329bf1f05ed8d Mon Sep 17 00:00:00 2001
From: zhiboniu <31800336+zhiboniu@users.noreply.github.com>
Date: Wed, 13 Jul 2022 22:34:45 +0800
Subject: [PATCH] fix pphuman deadlink (#6425)

---
 deploy/pipeline/README.md                   | 2 +-
 deploy/pipeline/docs/tutorials/action.md    | 2 +-
 deploy/pipeline/docs/tutorials/action_en.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/deploy/pipeline/README.md b/deploy/pipeline/README.md
index 895e1dd7e..c0aa67a4c 100644
--- a/deploy/pipeline/README.md
+++ b/deploy/pipeline/README.md
@@ -1,4 +1,4 @@
-[English](README_en.md) | 简体中文
+简体中文
 
 # 实时行人分析工具 PP-Human
 
diff --git a/deploy/pipeline/docs/tutorials/action.md b/deploy/pipeline/docs/tutorials/action.md
index 75d8cf676..c3b31a154 100644
--- a/deploy/pipeline/docs/tutorials/action.md
+++ b/deploy/pipeline/docs/tutorials/action.md
@@ -71,7 +71,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
 ### 方案说明
 1. 使用目标检测与多目标跟踪获取视频输入中的行人检测框及跟踪ID序号,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md)。
 2. 通过行人检测框的坐标在输入视频的对应帧中截取每个行人。
-3. 使用[关键点识别模型](../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml)得到对应的17个骨骼特征点。骨骼特征点的顺序及类型与COCO一致,详见[如何准备关键点数据集](../../../docs/tutorials/PrepareKeypointDataSet_cn.md)中的`COCO数据集`部分。
+3. 使用[关键点识别模型](../../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml)得到对应的17个骨骼特征点。骨骼特征点的顺序及类型与COCO一致,详见[如何准备关键点数据集](../../../../docs/tutorials/data/PrepareKeypointDataSet.md)中的`COCO数据集`部分。
 4. 每个跟踪ID对应的目标行人各自累计骨骼特征点结果,组成该人物的时序关键点序列。当累计到预定帧数或跟踪丢失后,使用行为识别模型判断时序关键点序列的动作类型。当前版本模型支持摔倒行为的识别,预测得到的`class id`对应关系为:
 ```
 0: 摔倒,
diff --git a/deploy/pipeline/docs/tutorials/action_en.md b/deploy/pipeline/docs/tutorials/action_en.md
index e804aad42..51ab1cf41 100644
--- a/deploy/pipeline/docs/tutorials/action_en.md
+++ b/deploy/pipeline/docs/tutorials/action_en.md
@@ -88,7 +88,7 @@ SKELETON_ACTION: # Config for skeleton-based action recognition model
 1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe).
 2. Capture every pedestrian in frames of the input video accordingly by using the coordinate of the detection box.
-3. In this strategy, we use the [keypoint detection model](../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml) to obtain 17 skeleton keypoints. Their sequences and types are identical to those of COCO. For details, please refer to the `COCO dataset` part of [how to prepare keypoint datasets](../../../docs/tutorials/PrepareKeypointDataSet_en.md).
+3. In this strategy, we use the [keypoint detection model](../../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml) to obtain 17 skeleton keypoints. Their sequences and types are identical to those of COCO. For details, please refer to the `COCO dataset` part of [how to prepare keypoint datasets](../../../../docs/tutorials/data/PrepareKeypointDataSet_en.md).
 4. Each target pedestrian with a tracking ID has their own accumulation of skeleton keypoints, which is used to form a keypoint sequence in time order. When the number of accumulated frames reach a preset threshold or the tracking is lost, the action recognition model will be applied to judging the action type of the time-ordered keypoint sequence. The current model only supports the recognition of the act of falling down, and the relationship between the action type and `class id` is:
--
GitLab
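
For readers following the skeleton-action tutorial touched by this patch, the sketch below illustrates step 4 of that document: keypoint results are accumulated per tracking ID into a time-ordered sequence and classified once a preset number of frames is reached or tracking is lost. This is a minimal sketch only, not the PP-Human implementation; the class name `SkeletonSequenceBuffer`, the `window_size` parameter, and the injected `classify` callable are assumptions made for illustration.

```python
# Illustrative sketch only -- not PP-Human code; names, shapes, and the
# classifier interface are assumptions for the example.
from collections import defaultdict, deque
from typing import Callable, Deque, Dict, List, Sequence

# One (x, y, score) triple per keypoint, 17 COCO keypoints per frame.
Keypoints = Sequence[Sequence[float]]


class SkeletonSequenceBuffer:
    """Accumulates keypoints per tracking ID and classifies the sequence
    once a preset number of frames has been collected (step 4 in the docs)."""

    def __init__(self, window_size: int,
                 classify: Callable[[List[Keypoints]], int]):
        self.window_size = window_size  # frames per action-recognition window
        self.classify = classify        # any skeleton-action predictor wrapper
        self.buffers: Dict[int, Deque[Keypoints]] = defaultdict(
            lambda: deque(maxlen=window_size))

    def update(self, track_id: int, keypoints: Keypoints):
        """Append one frame of keypoints for a tracked pedestrian.
        Returns a class id once the window is full (and on later frames,
        sliding-window style); per the docs' mapping, 0 means falling."""
        buf = self.buffers[track_id]
        buf.append(keypoints)
        if len(buf) == self.window_size:
            return self.classify(list(buf))
        return None

    def flush(self, track_id: int):
        """Call when tracking is lost; classify whatever has accumulated."""
        buf = self.buffers.pop(track_id, None)
        if buf:
            return self.classify(list(buf))
        return None
```

In use, a driver loop would call `update(track_id, keypoints)` once per detected pedestrian per frame and `flush(track_id)` when a track disappears, feeding any returned class id to downstream alerting.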