{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. PP-TinyPose模型简介\n",
L
LokeZhou 已提交
8 9 10 11 12 13
    "PP-TinyPose是PaddleDetecion针对移动端设备优化的实时关键点检测模型,可流畅地在移动端设备上执行多人姿态估计任务。借助PaddleDetecion自研的优秀轻量级检测模型[PicoDet](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/picodet/README.md),我们同时提供了特色的轻量级垂类行人检测模型.TinyPose的运行环境有以下依赖要求:\n",
    "\n",
    "PaddlePaddle>=2.2\n",
    "\n",
    "如希望在移动端部署,则还需要:\n",
    "\n",
14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
    "Paddle-Lite>=2.11"
   ]
  },
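  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before moving on, you may want to confirm that the installed PaddlePaddle satisfies the version requirement above. The cell below is an optional, minimal check; it only assumes that the `paddle` package is importable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check: confirm PaddlePaddle is installed and meets the >=2.2 requirement\n",
    "import paddle\n",
    "print(paddle.__version__)\n",
    "# run_check() performs a quick self-test of the PaddlePaddle installation\n",
    "paddle.utils.run_check()"
   ]
  },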
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "更多部署案例可参考[PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/keypoint/tiny_pose/README.md)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. 模型效果及应用场景\n",
    "### 2.1 关键点检测任务:\n",
    "\n",
    "#### 2.1.1 数据集:\n",
    "\n",
L
LokeZhou 已提交
33
    "目前KeyPoint模型支持[COCO](https://cocodataset.org/#keypoints-2017)数据集和[MPII](http://human-pose.mpi-inf.mpg.de/#overview)数据集,数据集的准备方式请参考[关键点数据准备](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/docs/tutorials/data/PrepareKeypointDataSet.md)\n",
34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67
    "\n",
    "#### 2.1.2 模型效果速览:\n",
    "\n",
    "PP-TinyPose的检测效果为:\n",
    "\n",
    "![](https://user-images.githubusercontent.com/15810355/181733705-d0f84232-c6a2-43dd-be70-4a3a246b8fbc.gif)\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. 模型如何使用\n",
    "\n",
    "### 3.1 模型推理:\n",
    "* 下载 \n",
    "\n",
    "(不在Jupyter Notebook上运行时需要将\"!\"或者\"%\"去掉。)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "jupyter": {
     "outputs_hidden": false
    },
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Clone the PaddleDetection repository\n",
    "%mkdir -p ~/work\n",
    "%cd ~/work/\n",
    "!git clone https://github.com/PaddlePaddle/PaddleDetection.git\n",
    "\n",
    "# Install other dependencies\n",
    "%cd PaddleDetection\n",
    "%mkdir -p demo_input demo_output\n",
    "!pip install -r requirements.txt\n",
    "\n",
    "# Install PaddleDetection\n",
    "!python setup.py install  # If the installation hangs for a long time, interrupt it and re-run this command"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* 验证是否安装成功\n",
    "如果报错,只需执行上一步操作。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# 测试是否安装成功\n",
    "!python ppdet/modeling/tests/test_architectures.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* 快速体验\n",
    "\n",
    "恭喜! 您已经成功安装了PaddleDetection,接下来快速关键点检测效果。您可以直接下载模型库中提供的对应预测部署模型,分别获取得到行人检测模型和关键点检测模型的预测部署模型,解压即可。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Download the models\n",
    "!mkdir -p output_inference\n",
    "%cd output_inference\n",
    "# Download the pedestrian detection model\n",
    "!wget https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_320_lcnet_pedestrian.zip\n",
    "!unzip picodet_s_320_lcnet_pedestrian.zip\n",
    "# Download the keypoint detection model\n",
    "!wget https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_128x96.zip\n",
    "!unzip tinypose_128x96.zip"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%cd ~/work/PaddleDetection/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run inference on a single image\n",
    "!wget -P demo_input -N https://paddledet.bj.bcebos.com/modelcenter/images/PP-TinyPose/000000568213.jpg\n",
    "!python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_lcnet_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_file=demo_input/000000568213.jpg --device=GPU --output_dir=demo_output"
   ]
  },
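  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can view the visualized prediction directly in the notebook. The cell below is a minimal sketch: it assumes the inference script saves the visualization into `demo_output` under the input image's file name, which may vary across PaddleDetection versions, so adjust the path if needed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Display the visualized result (the output file name is an assumption; adjust if needed)\n",
    "import os\n",
    "from IPython.display import Image, display\n",
    "\n",
    "result_path = os.path.join('demo_output', '000000568213.jpg')\n",
    "if os.path.exists(result_path):\n",
    "    display(Image(filename=result_path))\n",
    "else:\n",
    "    print('Result not found, files in demo_output:', os.listdir('demo_output'))"
   ]
  },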
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run inference on a video\n",
    "!wget -P demo_input -N https://paddledet.bj.bcebos.com/modelcenter/images/PP-TinyPose/demo_PP-TinyPose.mp4\n",
    "!python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_lcnet_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --video_file=demo_input/demo_PP-TinyPose.mp4 --device=GPU --output_dir=demo_output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 模型训练:\n",
    "* 克隆PaddleDetection仓库(详见3.1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* 数据集准备\n",
    "\n",
    "    关键点检测模型与行人检测模型的训练集在`COCO`以外还扩充了[AI Challenger](https://arxiv.org/abs/1711.06475)数据集,各数据集关键点定义如下:\n",
    "    ```\n",
    "    COCO keypoint Description:\n",
    "        0: \"Nose\",\n",
    "        1: \"Left Eye\",\n",
    "        2: \"Right Eye\",\n",
    "        3: \"Left Ear\",\n",
    "        4: \"Right Ear\",\n",
    "        5: \"Left Shoulder,\n",
    "        6: \"Right Shoulder\",\n",
    "        7: \"Left Elbow\",\n",
    "        8: \"Right Elbow\",\n",
    "        9: \"Left Wrist\",\n",
    "        10: \"Right Wrist\",\n",
    "        11: \"Left Hip\",\n",
    "        12: \"Right Hip\",\n",
    "        13: \"Left Knee\",\n",
    "        14: \"Right Knee\",\n",
    "        15: \"Left Ankle\",\n",
    "        16: \"Right Ankle\"\n",
    "\n",
    "    AI Challenger Description:\n",
    "        0: \"Right Shoulder\",\n",
    "        1: \"Right Elbow\",\n",
    "        2: \"Right Wrist\",\n",
    "        3: \"Left Shoulder\",\n",
    "        4: \"Left Elbow\",\n",
    "        5: \"Left Wrist\",\n",
    "        6: \"Right Hip\",\n",
    "        7: \"Right Knee\",\n",
    "        8: \"Right Ankle\",\n",
    "        9: \"Left Hip\",\n",
    "        10: \"Left Knee\",\n",
    "        11: \"Left Ankle\",\n",
    "        12: \"Head top\",\n",
    "        13: \"Neck\"\n",
    "    ```\n",
    "\n",
    "    由于两个数据集的关键点标注形式不同,我们将两个数据集的标注进行了对齐,仍然沿用COCO的标注形式,您可以下载[训练的参考列表](https://bj.bcebos.com/v1/paddledet/data/keypoint/aic_coco_train_cocoformat.json)并放在`dataset/`下使用。对齐两个数据集标注文件的主要处理如下:\n",
    "    - `AI Challenger`关键点标注顺序调整至与COCO一致,统一是否标注/可见的标志位;\n",
    "    - 舍弃了`AI Challenger`中特有的点位;将`AI Challenger`数据中`COCO`特有点位标记为未标注;\n",
    "    - 重新排列了`image_id`与`annotation id`;\n",
    "    利用转换为`COCO`形式的合并数据标注,执行模型训练。\n",
    "    若用户需要自定义数据集,可参考[快速开始-自定义数据集](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint#%E5%BF%AB%E9%80%9F%E5%BC%80%E5%A7%8B)"
   ]
  },
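  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The cell below illustrates the alignment described above with a minimal, self-contained sketch; it is not the script used to build the official merged annotation file. The mapping table `AIC_TO_COCO` and the helper `remap_aic_keypoints` are hypothetical names, and the differing visibility-flag conventions of the two datasets are not unified here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: remap AI Challenger keypoints into the COCO order.\n",
    "# For each COCO index, AIC_TO_COCO gives the corresponding AI Challenger index,\n",
    "# or None for points that AI Challenger does not annotate (nose, eyes, ears).\n",
    "import numpy as np\n",
    "\n",
    "AIC_TO_COCO = [None, None, None, None, None,  # 0-4: nose, eyes, ears -> unlabeled\n",
    "               3, 0,    # 5/6:   left/right shoulder\n",
    "               4, 1,    # 7/8:   left/right elbow\n",
    "               5, 2,    # 9/10:  left/right wrist\n",
    "               9, 6,    # 11/12: left/right hip\n",
    "               10, 7,   # 13/14: left/right knee\n",
    "               11, 8]   # 15/16: left/right ankle; AIC head top/neck are discarded\n",
    "\n",
    "def remap_aic_keypoints(aic_kpts):\n",
    "    \"\"\"aic_kpts: (14, 3) array of [x, y, v] in AI Challenger order.\n",
    "    Returns a (17, 3) array in COCO order; missing points stay unlabeled (all zeros).\"\"\"\n",
    "    aic_kpts = np.asarray(aic_kpts)\n",
    "    coco_kpts = np.zeros((17, 3), dtype=aic_kpts.dtype)\n",
    "    for coco_idx, aic_idx in enumerate(AIC_TO_COCO):\n",
    "        if aic_idx is not None:\n",
    "            coco_kpts[coco_idx] = aic_kpts[aic_idx]\n",
    "    return coco_kpts"
   ]
  },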
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 关键点检测模型\n",
    "!python -m paddle.distributed.launch tools/train.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 行人检测模型\n",
    "!python -m paddle.distributed.launch tools/train.py -c configs/picodet/application/pedestrian_detection/picodet_s_320_lcnet_pedestrian.yml"
   ]
  },
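  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After training, the keypoint model can be evaluated with PaddleDetection's standard `tools/eval.py`. The weight path below is an assumption based on PaddleDetection's default output layout (`output/<config_name>/model_final.pdparams`); point it at your actual training output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Evaluate the trained keypoint model (the weight path is an assumed default)\n",
    "!python tools/eval.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml -o weights=output/tinypose_128x96/model_final.pdparams"
   ]
  },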
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. 方案介绍\n",
    "<div align=\"center\">\n",
    "  <img src=\"tinypose_pipeline.png\" width='800'/>\n",
    "</div>"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.8.13 ('paddle_env')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.13"
  },
  "vscode": {
   "interpreter": {
    "hash": "864bc28e4d94d9c1c4bd0747e4313c0ab41718ab445ced17dbe1a405af5ecc64"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}