diff --git a/configs/keypoint/README.md b/configs/keypoint/README.md
index d8eb1ebc5f92b934ebb7112055f5fc012f5fd5ca..d4c97c1783b69a494a1242b125206675127d7639 100644
--- a/configs/keypoint/README.md
+++ b/configs/keypoint/README.md
@@ -97,7 +97,7 @@ MPII dataset
 
 ### 2. Dataset Preparation
 
-​ Currently, the KeyPoint models support the [COCO](https://cocodataset.org/#keypoints-2017) and [MPII](http://human-pose.mpi-inf.mpg.de/#overview) datasets. For how to prepare them, please refer to [Keypoint Dataset Preparation](../../docs/tutorials/data/PrepareDetDataSet.md).
+​ Currently, the KeyPoint models support the [COCO](https://cocodataset.org/#keypoints-2017) and [MPII](http://human-pose.mpi-inf.mpg.de/#overview) datasets. For how to prepare them, please refer to [Keypoint Dataset Preparation](../../docs/tutorials/data/PrepareKeypointDataSet.md).
 
 ​ For a description of the config file contents, please refer to the [Keypoint Config Guide](../../docs/tutorials/KeyPointConfigGuide_cn.md).
 
diff --git a/configs/keypoint/README_en.md b/configs/keypoint/README_en.md
index 3a813f97ab82f2e2af674d29b5f062a26d3e0478..a7aa7a6c599e3eb1f65bc182bedf59e1e52c7f0e 100644
--- a/configs/keypoint/README_en.md
+++ b/configs/keypoint/README_en.md
@@ -106,7 +106,7 @@ We also release [PP-TinyPose](./tiny_pose/README_en.md), a real-time keypoint de
 
 ### 2. Dataset Preparation
 
-​ Currently, KeyPoint Detection Models support [COCO](https://cocodataset.org/#keypoints-2017) and [MPII](http://human-pose.mpi-inf.mpg.de/#overview). Please refer to [Keypoint Dataset Preparation](../../docs/tutorials/data/PrepareDetDataSet_en.md) to prepare the dataset.
+​ Currently, KeyPoint Detection Models support [COCO](https://cocodataset.org/#keypoints-2017) and [MPII](http://human-pose.mpi-inf.mpg.de/#overview). Please refer to [Keypoint Dataset Preparation](../../docs/tutorials/data/PrepareKeypointDataSet_en.md) to prepare the dataset.
 
 ​ About the description of config files, please refer to the [Keypoint Config Guide](../../docs/tutorials/KeyPointConfigGuide_en.md).
 
diff --git a/docs/advanced_tutorials/customization/keypoint_detection.md b/docs/advanced_tutorials/customization/keypoint_detection.md
index 384bdf7055a9e5d1abbd8ceecb47d6f456ec3290..26db128c73e323ee2d6bb66c45e5cbbdcf355f0e 100644
--- a/docs/advanced_tutorials/customization/keypoint_detection.md
+++ b/docs/advanced_tutorials/customization/keypoint_detection.md
@@ -232,6 +232,11 @@ CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m paddle.distributed.launch tools/train.py
 python3 tools/eval.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml
 ```
 
+Note: evaluation relies on pycocotools, which defaults to the 17 keypoints of the `COCO` dataset. If the modified model does not predict 17 keypoints, running the evaluation command directly will raise an error.
+To obtain correct evaluation results, the following need to be modified:
+- [sigma list](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/modeling/keypoint_utils.py#L219): the per-keypoint variance of the acceptable region; larger values mean higher tolerance. Its length must match the number of predicted keypoints. Set it according to how precisely each keypoint can be localized: precise regions such as the eyes are typically 0.25-0.5, while broad regions such as the shoulders are typically 0.5-1.0. Use 0.75 if unsure.
+- [pycocotools sigma list](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py#L523): same meaning as above; its values must match the sigma list.
+
 ### Model Export and Inference
 #### Joint Deployment of Top-Down Models
 ```shell
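
For illustration only (not part of the patch above): a minimal sketch of the custom sigma list described in the added note, assuming a hypothetical 5-keypoint face model. The keypoint names, the values, and the `custom_sigmas` name are invented for the example; the `/ 10.0` scaling is an assumption that mirrors the COCO defaults in `cocoeval.py`.

```python
import numpy as np

# Hypothetical 5-keypoint model: left eye, right eye, nose tip,
# left mouth corner, right mouth corner.
# Guideline from the note: precise regions (e.g. eyes) ~0.25-0.5,
# broad regions (e.g. shoulders) ~0.5-1.0, 0.75 when unsure.
# The COCO defaults are written as np.array([...]) / 10.0, so the same
# scaling convention is assumed here.
custom_sigmas = np.array([0.25, 0.25, 0.35, 0.35, 0.35]) / 10.0

# The list length must equal the number of predicted keypoints, and the
# same values must be placed in both ppdet/modeling/keypoint_utils.py and
# pycocotools' cocoeval.py (kpt_oks_sigmas).
assert custom_sigmas.shape[0] == 5
```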