diff --git a/configs/keypoint/README.md b/configs/keypoint/README.md
index a16fd72f7a679cff18f50f11a331ed8d2bcaf5ab..dcec551b9cc0675e9a9392337506384d319a54a8 100644
--- a/configs/keypoint/README.md
+++ b/configs/keypoint/README.md
@@ -21,9 +21,9 @@
 - [Model Deployment](#模型部署)
   - [Joint Deployment of Top-Down Models](#top-down模型联合部署)
   - [Standalone Deployment of Bottom-Up Models](#bottom-up模型独立部署)
-  - [Joint Deployment with Multi-Object Tracking](#与多目标跟踪模型fairmot联合部署预测)
-
-
+  - [Joint Deployment with Multi-Object Tracking](#与多目标跟踪模型fairmot联合部署)
+- [Training with Custom Data](#自定义数据训练)
+- [Benchmark](#benchmark)

 ## Introduction

@@ -36,14 +36,14 @@ PaddleDetection's keypoint detection capability keeps up with the latest and best algorithms in the industry, including
 |Detection Model| Keypoint Model | Input Size | COCO Accuracy | Average Inference Time (FP16) | Model Weights | Paddle-Lite Deployment Model (FP16)|
-| :----| :------------------------ | :-------: | :------: | :------: | :---: | :---: |
+| :----| :------------------------ | :-------: | :------: | :------: | :---: | :---: |
 | [PicoDet-S-Pedestrian](../../picodet/application/pedestrian_detection/picodet_s_192_pedestrian.yml) |[PP-TinyPose](./tinypose_128x96.yml) | Detection: 192x192<br>Keypoint: 128x96 | Detection mAP: 29.0<br>Keypoint AP: 58.1 | Detection: 2.37ms<br>Keypoint: 3.27ms | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams)<br>[Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16.nb)<br>[Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16.nb) |
 | [PicoDet-S-Pedestrian](../../picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml) |[PP-TinyPose](./tinypose_256x192.yml)| Detection: 320x320<br>Keypoint: 256x192 | Detection mAP: 38.5<br>Keypoint AP: 68.8 | Detection: 6.30ms<br>Keypoint: 8.33ms | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams)<br>[Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16.nb)<br>[Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16.nb) |

 *For details on using PP-TinyPose, please refer to the [documentation](./tiny_pose/README.md).

 ### Recommended Server-Side Models
-|Detection Model| Keypoint Model | Input Size | COCO Accuracy | Model Weights |
+|Detection Model| Keypoint Model | Input Size | COCO Accuracy | Model Weights |
 | :----| :------------------------ | :-------: | :------: | :------: |
 | [PP-YOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml) |[HRNet-w32](./hrnet/hrnet_w32_384x288.yml)| Detection: 640x640<br>Keypoint: 384x288 | Detection mAP: 49.5<br>Keypoint AP: 77.8 | [Detection](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams)<br>[Keypoint](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_384x288.pdparams) |
 | [PP-YOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml) |[HRNet-w32](./hrnet/hrnet_w32_256x192.yml) | Detection: 640x640<br>Keypoint: 256x192 | Detection mAP: 49.5<br>Keypoint AP: 76.9 | [Detection](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams)<br>[Keypoint](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams) |

@@ -51,7 +51,7 @@ PaddleDetection's keypoint detection capability keeps up with the latest and best algorithms in the industry, including
 ## Model Zoo
 COCO Dataset
-| Model | Method | Input Size | AP (COCO val) | Model Download | Config |
+| Model | Method | Input Size | AP (COCO val) | Model Download | Config |
 | :---------------- | -------- | :----------: | :----------------------------------------------------------: | ----------------------------------------------------| ------- |
 | HigherHRNet-w32 |Bottom-Up| 512 | 67.1 | [higherhrnet_hrnet_w32_512.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/higherhrnet_hrnet_w32_512.pdparams) | [config](./higherhrnet/higherhrnet_hrnet_w32_512.yml) |
 | HigherHRNet-w32 | Bottom-Up| 640 | 68.3 | [higherhrnet_hrnet_w32_640.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/higherhrnet_hrnet_w32_640.pdparams) | [config](./higherhrnet/higherhrnet_hrnet_w32_640.yml) |
@@ -145,7 +145,7 @@ CUDA_VISIBLE_DEVICES=0 python3 tools/infer.py -c configs/keypoint/higherhrnet/hi
 ##### Joint Deployment of Top-Down Models
 ```shell
 # Export the detection model
-python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams
+python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams

 # Export the keypoint model
 python tools/export_model.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml -o weights=https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams
@@ -174,6 +174,45 @@ python deploy/python/mot_keypoint_unite_infer.py --mot_model_dir=output_inferenc
 **Note:** For the tracking model export tutorial, please refer to the [documentation](../mot/README.md).

+### 4. Complete Deployment Tutorials and Demos
+
+We support PaddleInference (server side), PaddleLite (mobile side), and third-party engines (MNN, OpenVINO). The corresponding folders under deploy provide complete, standalone deployment code that does not depend on the training code. See the [deployment documentation](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/README.md) for details.
+
+## Training with Custom Data
+
+We take [tinypose_256x192](./tiny_pose/README.md) as an example of what to modify for custom data:
+
+#### 1. Config file [tinypose_256x192.yml](../../configs/keypoint/tiny_pose/tinypose_256x192.yml)
+
+The basic fields to modify, and their meaning:
+
+```
+num_joints: &num_joints 17 # number of keypoints in the custom data
+train_height: &train_height 256 # training image height h
+train_width: &train_width 192 # training image width w
+hmsize: &hmsize [48, 64] # output size for this training size, i.e. 1/4 of the input [w, h]
+flip_perm: &flip_perm [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]] # left-right symmetric keypoint pairs, used for flip augmentation; if the skeleton has no symmetric structure, add a line "flip: False" after flip_pairs under RandomFlipHalfBodyTransform in TrainReader (mind the indentation), as sketched below
+num_joints_half_body: 8 # number of half-body keypoints, used for half-body augmentation
+prob_half_body: 0.3 # probability of applying half-body augmentation; set to 0 if not needed
+upper_body_ids: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # keypoint ids of the upper body, used to pick the upper-body keypoints in half-body augmentation
+```
+
+These are the fields to change for custom data; for the complete config and field descriptions, see the [keypoint config guide](../../docs/tutorials/KeyPointConfigGuide_cn.md).
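+
+To make the flip note above concrete, here is a minimal sketch of the TrainReader entry with flip augmentation disabled; the transform's other arguments are omitted and should be kept as in the shipped tinypose_256x192.yml:
+
+```yaml
+# Sketch only: RandomFlipHalfBodyTransform as described above, with flip disabled.
+TrainReader:
+  sample_transforms:
+    - RandomFlipHalfBodyTransform:
+        num_joints_half_body: 8
+        prob_half_body: 0.3
+        upper_body_ids: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
+        flip_pairs: *flip_perm
+        flip: False  # add this line when the keypoints have no left-right symmetry
+```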
+#### 2. Other code changes (affecting testing and visualization)
+- sigmas in keypoint_utils.py: sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62, 1.07, 1.07, .87, .87, .89, .89]) / 10.0 is the uncertainty variance of each keypoint. Set it according to how precisely each keypoint can be localized: precisely localized keypoints (e.g. eyes) are usually 0.25-0.5, keypoints with a large plausible region (e.g. shoulders) are usually 0.5-1.0, and 0.75 is a reasonable default if unsure.
+- EDGES in the draw_pose function of visualizer.py: the connection lines drawn between keypoints during visualization.
+- sigmas in the pycocotools package: the same values as in keypoint_utils.py above, used when computing COCO evaluation metrics. A sketch of both settings follows.
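+
+For example, for a hypothetical 5-keypoint skeleton (head, left/right shoulder, left/right hand), the two settings could look as follows; the values are illustrative, not tuned:
+
+```python
+import numpy as np
+
+# keypoint_utils.py: one OKS sigma per keypoint, scaled by 1/10 as in the COCO defaults.
+# The head can be localized precisely (0.26); shoulders and hands are less certain.
+sigmas = np.array([.26, .79, .79, .62, .62]) / 10.0
+
+# visualizer.py (draw_pose): pairs of keypoint indices connected by a line when drawing.
+EDGES = [(0, 1), (0, 2), (1, 3), (2, 4)]
+```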
+#### 3. Notes on data preparation
+- Prepare the training data in COCO format; it must include keypoint [Nx3] and detection box [N] annotations. A minimal annotation example follows this section.
+- Make sure that area > 0; samples with area = 0 are filtered out.
+
+If anything is missing here, feedback is welcome.
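+
+A minimal entry of the "annotations" list in a COCO-style JSON file could look like this (a sketch with made-up ids and coordinates; each keypoint contributes an x, y, v triplet, where v is the COCO visibility flag):
+
+```python
+annotation = {
+    "id": 1,
+    "image_id": 1,
+    "category_id": 1,
+    "num_keypoints": 5,
+    "keypoints": [120, 80, 2,    # x, y, v for each of the N keypoints (N x 3 values)
+                  100, 130, 2,
+                  140, 130, 2,
+                  90, 200, 1,
+                  150, 200, 1],
+    "bbox": [80, 60, 90, 180],   # x, y, width, height
+    "area": 16200,               # must be > 0, otherwise the sample is filtered out
+}
+```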
+
+## Benchmark
+
+We provide test results under different runtime environments for your reference when choosing a model. For detailed numbers, see the [Keypoint Inference Benchmark](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/keypoint/KeypointBenchmark.md).

 ## Reference
 ```
diff --git a/configs/keypoint/README_en.md b/configs/keypoint/README_en.md
index 1eb8952d7218abf2a614b2bcafe47be8a11adfe2..130a7dfd94c6eaa19fe2ae69998d5747dcaf32c1 100644
--- a/configs/keypoint/README_en.md
+++ b/configs/keypoint/README_en.md
@@ -16,14 +16,15 @@
 - [Deploy Inference](#deploy-inference)
   - [Deployment for Top-Down models](#deployment-for-top-down-models)
   - [Deployment for Bottom-Up models](#deployment-for-bottom-up-models)
-  - [joint inference with Multi-Object Tracking model FairMOT](#joint-inference-with-multi-object-tracking-model-fairmot)
-
+  - [Joint Inference with Multi-Object Tracking Model FairMOT](#joint-inference-with-multi-object-tracking-model-fairmot)
+- [Train with Custom Data](#train-with-custom-data)
+- [Benchmark](#benchmark)

 ## Introduction

-The keypoint detection part in PaddleDetection follows the state-of-the-art algorithm closely, including Top-Down and Bottom-Up methods, which can satisfy the different needs of users.
+The keypoint detection part in PaddleDetection closely follows the state-of-the-art algorithms, including Top-Down and Bottom-Up methods, which can satisfy the different needs of users.

-Top-Down detects the object first and then detect the specific keypoint. The accuracy of Top-Down models will be higher, but the time required will increase by the number of objects.
+Top-Down detects the objects first and then detects the keypoints of each object. Top-Down models are more accurate, but their inference time grows with the number of objects.

 Differently, Bottom-Up detects the points first and then groups or connects those points to form several instances of human pose. The speed of Bottom-Up is fixed and will not increase with the number of objects, but the accuracy will be lower.

@@ -36,7 +37,7 @@ At the same time, PaddleDetection provides [PP-TinyPose](./tiny_pose/README.md)
 ## Model Recommendation
 ### Mobile Terminal
 |Detection Model| Keypoint Model | Input Size | Accuracy of COCO| Average Inference Time (FP16) | Model Weight | Paddle-Lite Inference Model(FP16)|
-| :----| :------------------------ | :-------: | :------: | :------: | :---: | :---: |
+| :----| :------------------------ | :-------: | :------: | :------: | :---: | :---: |
 | [PicoDet-S-Pedestrian](../../picodet/application/pedestrian_detection/picodet_s_192_pedestrian.yml) |[PP-TinyPose](./tinypose_128x96.yml) | Detection: 192x192<br>Keypoint: 128x96 | Detection mAP: 29.0<br>Keypoint AP: 58.1 | Detection: 2.37ms<br>Keypoint: 3.27ms | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams)<br>[Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16.nb)<br>[Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16.nb) |
 | [PicoDet-S-Pedestrian](../../picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml) |[PP-TinyPose](./tinypose_256x192.yml)| Detection: 320x320<br>Keypoint: 256x192 | Detection mAP: 38.5<br>Keypoint AP: 68.8 | Detection: 6.30ms<br>Keypoint: 8.33ms | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams)<br>[Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16.nb)<br>[Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16.nb) |

@@ -44,7 +45,7 @@
 ### Terminal Server
-|Detection Model| Keypoint Model | Input Size | Accuracy of COCO| Model Weight |
+|Detection Model| Keypoint Model | Input Size | Accuracy of COCO| Model Weight |
 | :----| :------------------------ | :-------: | :------: | :------: |
 | [PP-YOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml) |[HRNet-w32](./hrnet/hrnet_w32_384x288.yml)| Detection: 640x640<br>Keypoint: 384x288 | Detection mAP: 49.5<br>Keypoint AP: 77.8 | [Detection](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams)<br>[Keypoint](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_384x288.pdparams) |
 | [PP-YOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml) |[HRNet-w32](./hrnet/hrnet_w32_256x192.yml) | Detection: 640x640<br>Keypoint: 256x192 | Detection mAP: 49.5<br>Keypoint AP: 76.9 | [Detection](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams)<br>[Keypoint](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams) |
@@ -140,7 +141,7 @@ CUDA_VISIBLE_DEVICES=0 python3 tools/infer.py -c configs/keypoint/higherhrnet/hi
 ```shell
 #Export Detection Model
-python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams
+python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams

 #Export Keypoint Model
 python tools/export_model.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml -o weights=https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams
@@ -170,6 +171,44 @@ python deploy/python/mot_keypoint_unite_infer.py --mot_model_dir=output_inferenc
 **Note:** To export the MOT model, please refer to [Here](../../configs/mot/README_en.md).

+### 4. Complete Deploy Instructions and Demo
+
+We provide standalone deployment with PaddleInference (server, GPU), PaddleLite (mobile, ARM), and third-party engines (MNN, OpenVINO), independent of the training code. For details, please see the [deploy docs](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/README_en.md).
+
+## Train with Custom Data
+
+We take [tinypose_256x192](./tiny_pose/README_en.md) as an example to show how to train with custom data.
+
+#### 1. Config file [tinypose_256x192.yml](../../configs/keypoint/tiny_pose/tinypose_256x192.yml)
+
+You may need to modify these fields for your job:
+
+```
+num_joints: &num_joints 17 # the number of joints in your job
+train_height: &train_height 256 # the height of the model input
+train_width: &train_width 192 # the width of the model input
+hmsize: &hmsize [48, 64] # the shape of the model output, usually 1/4 of the input [w, h]
+flip_perm: &flip_perm [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]] # the left/right keypoint id pairs, used for the flip transform; add a line "flip: False" after flip_pairs in RandomFlipHalfBodyTransform of TrainReader if you don't need it
+num_joints_half_body: 8 # the number of half-body joints, used for the half-body transform
+prob_half_body: 0.3 # the probability of the half-body transform; set it to 0 if you don't need it
+upper_body_ids: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # the joint ids of the upper body, used to get the upper joints in the half-body transform
+```
+
+For more configs, please refer to the [KeyPointConfigGuide](../../docs/tutorials/KeyPointConfigGuide_en.md).
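+
+As a concrete (hypothetical) example, a 13-keypoint dataset trained at 384x288 would change the fields above as follows; note how hmsize stays 1/4 of the input [w, h]:
+
+```yaml
+num_joints: &num_joints 13 # 13 keypoints in this hypothetical dataset
+train_height: &train_height 384 # input height h
+train_width: &train_width 288 # input width w
+hmsize: &hmsize [72, 96] # [288/4, 384/4], i.e. [w/4, h/4]
+flip_perm: &flip_perm [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]] # keypoint 0 (e.g. the nose) has no left/right partner
+```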
+#### 2. Other code changes (used for test and visualization)
+- In keypoint_utils.py, set "sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62, 1.07, 1.07, .87, .87, .89, .89]) / 10.0". Each value is the variance of a joint's location: 0.25-0.5 means the location is highly accurate (for example, eyes), 0.5-1.0 means the location is rather uncertain (for example, shoulders), and 0.75 is recommended if you are not sure.
+- In visualizer.py, set "EDGES" in the draw_pose function; it defines the lines drawn between joints for visualization.
+- In the pycocotools package you installed, set "sigmas" to the same values as in keypoint_utils.py; they are used for COCO evaluation. See the sketch after this list.
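+
+Depending on your pycocotools version, you may not need to patch its sources at all: recent versions read the OKS sigmas from the evaluation parameters. The following sketch assumes such a version and uses hypothetical 5-keypoint sigmas and file paths:
+
+```python
+import numpy as np
+from pycocotools.coco import COCO
+from pycocotools.cocoeval import COCOeval
+
+coco_gt = COCO("annotations/person_keypoints_val.json")  # hypothetical ground truth
+coco_dt = coco_gt.loadRes("keypoint_results.json")       # hypothetical predictions
+
+coco_eval = COCOeval(coco_gt, coco_dt, iouType="keypoints")
+# Override the per-keypoint OKS sigmas instead of editing cocoeval.py directly.
+coco_eval.params.kpt_oks_sigmas = np.array([.26, .79, .79, .62, .62]) / 10.0
+coco_eval.evaluate()
+coco_eval.accumulate()
+coco_eval.summarize()
+```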
+#### 3. Notes on data preparation
+- The data should have the same format as the COCO data, with keypoints (Nx3) and bbox (N) annotated.
+- Please make sure "area" > 0 in the annotation files; otherwise the sample will be skipped during training. A small self-check is sketched below.
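+
+A quick validation pass along these lines can catch both issues before training starts (a sketch; the annotation path is a placeholder):
+
+```python
+import json
+
+with open("annotations/keypoints_train.json") as f:  # hypothetical path
+    coco = json.load(f)
+
+for ann in coco["annotations"]:
+    kpts, bbox = ann.get("keypoints", []), ann.get("bbox", [])
+    assert kpts and len(kpts) % 3 == 0, "keypoints must be N x 3 (x, y, v) values"
+    assert len(bbox) == 4, "bbox must be [x, y, w, h]"
+    if ann.get("area", 0) <= 0:
+        print(f"annotation {ann['id']} has area <= 0 and will be skipped in training")
+```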
+
+## Benchmark
+
+We provide benchmarks in different runtime environments for your reference when choosing models. See the [Keypoint Inference Benchmark](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/keypoint/KeypointBenchmark.md) for details.

 ## Reference
 ```
diff --git a/configs/keypoint/tiny_pose/README.md b/configs/keypoint/tiny_pose/README.md
index 28177f907477b18060a29ab892918cd498da8e29..f38cdee91e045c4efbe961002b30a57f61ff4032 100644
--- a/configs/keypoint/tiny_pose/README.md
+++ b/configs/keypoint/tiny_pose/README.md
@@ -45,8 +45,8 @@ PP-TinyPose is a real-time keypoint detection model optimized by PaddleDetection
 ### Pedestrian Detection Model
 | Model | Input Size | mAP (COCO Val) | Average Inference Time (FP32) | Average Inference Time (FP16) | Config | Model Weights | Deployment Model | Paddle-Lite Model (FP32) | Paddle-Lite Model (FP16) |
 | :------------------- | :------: | :------------: | :-----------------: | :-----------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
-| PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30ms | 2.37ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_192_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_lite.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16_lite.tar) |
-| PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26ms | 6.30ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_lite.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16_lite.tar) |
+| PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30ms | 2.37ms | [Config](../../picodet/legacy_model/application/pedestrian_detection/picodet_s_192_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_lite.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16_lite.tar) |
+| PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26ms | 6.30ms | [Config](../../picodet/legacy_model/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_lite.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16_lite.tar) |
diff --git a/configs/keypoint/tiny_pose/README_en.md b/configs/keypoint/tiny_pose/README_en.md
index c7147c632e1e915f0f840018cddcfd937f638a27..0db1520315a34b1721d15c395896b3133d1ffefa 100644
--- a/configs/keypoint/tiny_pose/README_en.md
+++ b/configs/keypoint/tiny_pose/README_en.md
@@ -43,8 +43,8 @@ If you want to deploy it on the mobile devices, you also need:
 ### Pedestrian Detection Model
 | Model | Input Size | mAP (COCO Val) | Average Inference Time (FP32)| Average Inference Time (FP16) | Config | Model Weights | Deployment Model | Paddle-Lite Model(FP32) | Paddle-Lite Model(FP16)|
 | :------------------------ | :-------: | :------: | :------: | :---: | :---: | :---: | :---: | :---: | :---: |
-| PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30ms | 2.37ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_192_pedestrian.yml) |[Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_lite.tar) | [Lite Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16_lite.tar) |
-| PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26ms | 6.30ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_lite.tar) | [Lite Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16_lite.tar) |
+| PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30ms | 2.37ms | [Config](../../picodet/legacy_model/application/pedestrian_detection/picodet_s_192_pedestrian.yml) |[Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_lite.tar) | [Lite Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16_lite.tar) |
+| PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26ms | 6.30ms | [Config](../../picodet/legacy_model/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_lite.tar) | [Lite Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16_lite.tar) |

 **Tips**