{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. PP-TinyPose Introduction\n", "PP-TinyPose is a real-time keypoint detection model optimized by PaddleDetecion for mobile devices, which can smoothly run multi-person pose estimation tasks on mobile devices. With the excellent self-developed lightweight detection model [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/picodet/README.md),\n", "\n", "we also provide a lightweight pedestrian detection model. PP-TinyPose has the following dependency requirements:\n", "PaddlePaddle>=2.2\n", "\n", "If you want to deploy it on the mobile devives, you also need:\n", "\n", "Paddle-Lite>=2.11" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "More deployment cases can be referred to[PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/keypoint/tiny_pose/README.md)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Model effects and application scenarios\n", "### 2.1 Key point detection task:\n", "\n", "#### 2.1.1 dataset\n", "\n", "The current Keypoint model supports[COCO](https://cocodataset.org/#keypoints-2017) and [MPII](http://human-pose.mpi-inf.mpg.de/#overview).Please refer to[Key point data preparation](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/docs/tutorials/data/PrepareKeypointDataSet_en.md)\n", "\n", "#### 2.1.2 Model effects:\n", "\n", "\n", "![](https://user-images.githubusercontent.com/15810355/181733705-d0f84232-c6a2-43dd-be70-4a3a246b8fbc.gif)\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. How to use the model\n", "\n", "### 3.1 model Inference:\n", "\n", "(When not running on Jupyter Notebook, you need to set \"!\" or \"%\" removed)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "scrolled": true, "tags": [] }, "outputs": [], "source": [ "# clone PaddleDetection\n", "%mkdir -p ~/work\n", "%cd ~/work/\n", "!git clone https://github.com/PaddlePaddle/PaddleDetection.git\n", "\n", "# Other Dependencies\n", "%cd PaddleDetection\n", "%mkdir -p demo_input demo_output\n", "!pip install -r requirements.txt\n", "!python setup.py install" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Verify whether the installation is successful. If an error is reported, just perform the previous step" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!python ppdet/modeling/tests/test_architectures.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* quickly start\n", "\n", "congratulations! You have successfully installed PaddleDetection. Next, we will quickly detect the effect of key points. You can directly download the corresponding predictive deployment model provided in the model base, obtain the predictive deployment models of pedestrian detection model and key point detection model respectively, and decompress them." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Download model\n", "!mkdir -p output_inference\n", "%cd output_inference\n", "# Download pedestrian detection model\n", "!wget https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_320_lcnet_pedestrian.zip\n", "!unzip picodet_s_320_lcnet_pedestrian.zip\n", "# Download key point detection model\n", "!wget https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_128x96.zip\n", "!unzip tinypose_128x96.zip" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%cd ~/work/PaddleDetection/" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Predict a image\n", "!wget -P demo_input -N https://paddledet.bj.bcebos.com/modelcenter/images/PP-TinyPose/000000568213.jpg\n", "!python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_v2_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_file=demo_input/000000568213.jpg --device=GPU --output_dir=demo_output" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# predict video\n", "!wget -P demo_input -N https://paddledet.bj.bcebos.com/modelcenter/images/PP-TinyPose/demo_PP-TinyPose.mp4\n", "!python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_v2_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --video_file=demo_input/demo_PP-TinyPose.mp4 --device=GPU --output_dir=demo_output" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.2 Train:\n", "* clone PaddleDetection refer 3.1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Dataset preparation\n", "\n", " The training set of key point detection model and pedestrian detection model include `COCO` and[AI Challenger](https://arxiv.org/abs/1711.06475),The key points of each dataset are defined as follows:\n", " ```\n", " COCO keypoint Description:\n", " 0: \"Nose\",\n", " 1: \"Left Eye\",\n", " 2: \"Right Eye\",\n", " 3: \"Left Ear\",\n", " 4: \"Right Ear\",\n", " 5: \"Left Shoulder,\n", " 6: \"Right Shoulder\",\n", " 7: \"Left Elbow\",\n", " 8: \"Right Elbow\",\n", " 9: \"Left Wrist\",\n", " 10: \"Right Wrist\",\n", " 11: \"Left Hip\",\n", " 12: \"Right Hip\",\n", " 13: \"Left Knee\",\n", " 14: \"Right Knee\",\n", " 15: \"Left Ankle\",\n", " 16: \"Right Ankle\"\n", "\n", " AI Challenger Description:\n", " 0: \"Right Shoulder\",\n", " 1: \"Right Elbow\",\n", " 2: \"Right Wrist\",\n", " 3: \"Left Shoulder\",\n", " 4: \"Left Elbow\",\n", " 5: \"Left Wrist\",\n", " 6: \"Right Hip\",\n", " 7: \"Right Knee\",\n", " 8: \"Right Ankle\",\n", " 9: \"Left Hip\",\n", " 10: \"Left Knee\",\n", " 11: \"Left Ankle\",\n", " 12: \"Head top\",\n", " 13: \"Neck\"\n", " ```\n", "\n", " Since the annatation format of these two datasets are different, we aligned their annotations to `COCO` format. You can download [Training List](https://bj.bcebos.com/v1/paddledet/data/keypoint/aic_coco_train_cocoformat.json) and put it at `dataset/`. To align these two datasets, we mainly did the following works:\n", " - Align the indexes of the `AI Challenger` keypoint to be consistent with `COCO` and unify the flags whether the keypoint is labeled/visible.\n", " - Discard the unique keypoints in `AI Challenger`. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Train the keypoint detection model\n", "!python3 -m paddle.distributed.launch tools/train.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml\n", "\n", "# Train the pedestrian detection model\n", "!python3 -m paddle.distributed.launch tools/train.py -c configs/picodet/application/pedestrian_detection/picodet_s_320_lcnet_pedestrian.yml" ] },
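{ "cell_type": "markdown", "metadata": {}, "source": [ "After training, the models can be evaluated with PaddleDetection's standard `tools/eval.py`. The weights path below is an assumption based on PaddleDetection's default save directory (`output/<config_name>/best_model.pdparams`); adjust it if your checkpoints were saved elsewhere." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Evaluate the trained keypoint model; the weights path assumes the default\n", "# output directory and may need to be adjusted for your setup\n", "!python tools/eval.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml -o weights=output/tinypose_128x96/best_model.pdparams" ] },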
\n", " \n", "
" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.10.6 64-bit", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" }, "vscode": { "interpreter": { "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" } } }, "nbformat": 4, "nbformat_minor": 4 }