diff --git a/configs/keypoint/README.md b/configs/keypoint/README.md
index 50a91e8ca9ff38ae8a72fc3da7838a4506be56bd..0406c2f9520ae7e65f42390f3073850f53a584aa 100644
--- a/configs/keypoint/README.md
+++ b/configs/keypoint/README.md
@@ -22,7 +22,8 @@
 - [Model Deployment](#模型部署)
   - [Joint Deployment of Top-Down Models](#top-down模型联合部署)
   - [Standalone Deployment of Bottom-Up Models](#bottom-up模型独立部署)
-  - [Joint Deployment with Multi-Object Tracking](#与多目标跟踪模型fairmot联合部署预测)
+  - [Joint Deployment with Multi-Object Tracking](#与多目标跟踪模型fairmot联合部署)
+  - [Complete Deployment Tutorial and Demo](#4完整部署教程及demo)
 - [BenchMark](#benchmark)
 
 ## Introduction
@@ -76,7 +77,6 @@ MPII dataset
 
 We have also released [PP-TinyPose](./tiny_pose/README.md), a real-time keypoint detection model based on LiteHRNet (Top-Down) and optimized for mobile devices. You are welcome to try it out.
 
-
 ## Quick Start
 
 ### 1. Environment Installation
@@ -159,7 +159,7 @@ python tools/export_model.py -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w
 python deploy/python/keypoint_infer.py --model_dir=output_inference/higherhrnet_hrnet_w32_512/ --image_file=./demo/000000014439_640x640.jpg --device=gpu --threshold=0.5
 ```
 
-##### Joint Deployment and Inference with the Multi-Object Tracking Model FairMOT
+##### Joint Deployment with the Multi-Object Tracking Model FairMOT
 
 ```shell
 # Export the FairMOT tracking model
@@ -172,6 +172,10 @@ python deploy/python/mot_keypoint_unite_infer.py --mot_model_dir=output_inferenc
 
 **Note:** For the tracking model export tutorial, please refer to the [documentation](../mot/README.md).
 
+### 4. Complete Deployment Tutorial and Demo
+
+We support deployment with PaddleInference (server side), PaddleLite (mobile side), and third-party engines (MNN, OpenVINO). The corresponding subdirectories under the `deploy` folder provide complete, standalone deployment code that does not depend on the training code. See the [deployment documentation](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/README.md) for details.
+
 ## BenchMark
 
 We provide test results in different runtime environments for your reference when choosing a model. For detailed data, see [Keypoint Inference Benchmark](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/keypoint/KeypointBenchmark.md).
diff --git a/configs/keypoint/README_en.md b/configs/keypoint/README_en.md
index c8f0f922ef475278147c8772451a8ca9d625a022..52ccfbddcdbf8d2ad1c2870599fcaa135f1f4f89 100644
--- a/configs/keypoint/README_en.md
+++ b/configs/keypoint/README_en.md
@@ -19,6 +19,7 @@
   - [Deployment for Top-Down models](#deployment-for-top-down-models)
   - [Deployment for Bottom-Up models](#deployment-for-bottom-up-models)
   - [Joint Inference with Multi-Object Tracking Model FairMOT](#joint-inference-with-multi-object-tracking-model-fairmot)
+  - [Complete Deployment Instructions and Demo](#4-complete-deployment-instructions-and-demo)
 - [BenchMark](#benchmark)
 
 ## Introduction
@@ -176,6 +177,10 @@ python deploy/python/mot_keypoint_unite_infer.py --mot_model_dir=output_inferenc
 
 **Note:** To export the MOT model, please refer to [Here](../../configs/mot/README_en.md).
 
+### 4. Complete Deployment Instructions and Demo
+
+We provide standalone deployment support for PaddleInference (server GPU), PaddleLite (mobile, ARM), and third-party engines (MNN, OpenVINO), all independent of the training code. For details, see the [deployment documentation](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/README_en.md).
+
 ## BenchMark
 
 We provide benchmarks in different runtime environments for your reference when choosing models. See [Keypoint Inference Benchmark](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/keypoint/KeypointBenchmark.md) for details.
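For reference, the standalone flow that the new "Complete Deployment" sections describe is the same export-then-infer pattern already quoted in the hunks above. A minimal sketch using the HigherHRNet example; the config filename is inferred from the truncated hunk header and the `output_inference` path, and the weights path is a placeholder:

```shell
# Export an inference model. The config name higherhrnet_hrnet_w32_512.yml is
# inferred from the model_dir used below; adjust both to your trained model.
python tools/export_model.py \
    -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w32_512.yml \
    -o weights=output/higherhrnet_hrnet_w32_512/model_final

# Run the standalone Python deployment code under deploy/, with no dependency
# on the training code (this is the keypoint_infer.py command quoted above).
python deploy/python/keypoint_infer.py \
    --model_dir=output_inference/higherhrnet_hrnet_w32_512/ \
    --image_file=./demo/000000014439_640x640.jpg \
    --device=gpu --threshold=0.5
```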
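The joint FairMOT command is truncated in the `@@ -172,6 +172,10 @@` hunk header, so only `--mot_model_dir` is confirmed by this patch. A hedged sketch of the full invocation; the FairMOT config path and the `--keypoint_model_dir`/`--video_file` flags are assumptions based on the surrounding examples:

```shell
# Export the FairMOT tracking model (weights path illustrative).
python tools/export_model.py \
    -c configs/mot/fairmot/fairmot_dla34_30e_1088x608.yml \
    -o weights=output/fairmot_dla34_30e_1088x608/model_final

# Joint tracking + keypoint inference; flags other than --mot_model_dir are
# assumed, and input.mp4 stands in for your own video.
python deploy/python/mot_keypoint_unite_infer.py \
    --mot_model_dir=output_inference/fairmot_dla34_30e_1088x608/ \
    --keypoint_model_dir=output_inference/higherhrnet_hrnet_w32_512/ \
    --video_file=input.mp4 \
    --device=gpu
```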