Unverified commit 8a236f41, authored by LokeZhou, committed by GitHub

fix PP-HumanV2 app bug (#5600)

* add PP-TInyPose

* add PP-TInyPose info.yaml app.yaml

* add PP-HumanV2

* add PP-Vehicle

* PP-TInyPose PP-Vehicle PP-HumanV2 add English version

* fixed info.yaml

* Update info.yaml

* fix PP-TInyPose/info.yaml
Co-authored-by: liuTINA0907 <65896652+liuTINA0907@users.noreply.github.com>
Parent d7bee0b9
......@@ -135,14 +135,13 @@ def get_model_dir_with_list(cfg, args):
cfg[key]["rec_model_dir"] = rec_model_dir
print("rec_model_dir model dir: ", rec_model_dir)
elif key == "MOT" and (
key in activate_list): # for idbased and skeletonbased actions
model_dir = cfg[key]["model_dir"]
downloaded_model_dir = auto_download_model(model_dir)
if downloaded_model_dir:
model_dir = downloaded_model_dir
cfg[key]["model_dir"] = model_dir
print("mot_model_dir model_dir: ", model_dir)
if key == 'ID_BASED_DETACTION' or key == 'SKELETON_ACTION':
model_dir = cfg['MOT']["model_dir"]
downloaded_model_dir = auto_download_model(model_dir)
if downloaded_model_dir:
model_dir = downloaded_model_dir
cfg['MOT']["model_dir"] = model_dir
print("mot_model_dir model_dir: ", model_dir)
def get_model_dir(cfg):
......@@ -890,6 +889,7 @@ def pp_humanv2(input_date, avtivity_list):
FLAGS.video_file = input_date
else:
FLAGS.image_file = input_date
FLAGS.avtivity_list = avtivity_list
pipeline = Pipeline(FLAGS, cfg)
......
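The hunk above is the core of the fix: model auto-download is no longer tied to the `elif key == "MOT"` branch, so the tracker model is resolved whenever an ID-based or skeleton-based action task needs it, even if "MOT" itself is not in `activate_list`. A minimal sketch of the resulting fallback pattern (condensed and illustrative; `auto_download_model` here is a stub standing in for PaddleDetection's downloader):

```python
def auto_download_model(model_dir):
    # Stub for PaddleDetection's downloader: returns the local path of a
    # downloaded model, or a falsy value when model_dir is already local.
    return None

def ensure_mot_model_dir(cfg, activate_list):
    # ID-based and skeleton-based action tasks run on top of the MOT tracker,
    # so its model dir must be resolved even when "MOT" is not activated.
    if any(k in activate_list for k in ("ID_BASED_DETACTION", "SKELETON_ACTION")):
        model_dir = cfg["MOT"]["model_dir"]
        downloaded = auto_download_model(model_dir)
        cfg["MOT"]["model_dir"] = downloaded or model_dir

cfg = {"MOT": {"model_dir": "output_inference/mot_ppyoloe_l_36e_pipeline"}}
ensure_mot_model_dir(cfg, ["SKELETON_ACTION"])
print(cfg["MOT"]["model_dir"])  # unchanged here, since the stub returns None
```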
......@@ -67,27 +67,14 @@
},
"outputs": [],
"source": [
"## 环境准备\n",
"\n",
"#环境要求: PaddleDetection版本 >= release/2.4 或 develop版本\n",
"\n",
"#PaddlePaddle和PaddleDetection安装\n",
"\n",
"```\n",
"# PaddlePaddle CUDA10.1\n",
"python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html\n",
"\n",
"# PaddlePaddle CPU\n",
"python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple\n",
"\n",
"# 克隆PaddleDetection仓库\n",
"cd <path/to/clone/PaddleDetection>\n",
"git clone https://github.com/PaddlePaddle/PaddleDetection.git\n",
"%cd ~/work/\n",
"!git clone https://github.com/PaddlePaddle/PaddleDetection.git\n",
"\n",
"# 安装其他依赖\n",
"cd PaddleDetection\n",
"pip install -r requirements.txt\n",
"```"
"%cd PaddleDetection\n",
"!pip install -r requirements.txt\n"
]
},
{
......@@ -160,35 +147,33 @@
},
"outputs": [],
"source": [
"1. 直接使用默认配置或者examples中配置文件,或者直接在`infer_cfg_pphuman.yml`中修改配置:\n",
"```\n",
"#直接使用默认配置或者examples中配置文件,或者直接在`infer_cfg_pphuman.yml`中修改配置:\n",
"\n",
"# 例:行人检测,指定配置文件路径和测试图片,图片输入默认打开检测模型\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu\n",
"\n",
"# 例:行人属性识别,直接使用examples中配置\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml --video_file=test_video.mp4 --device=gpu\n",
"```\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml --video_file=test_video.mp4 --device=gpu\n",
"\n",
"\n",
"#使用命令行进行功能开启,或者模型路径修改:\n",
"\n",
"2. 使用命令行进行功能开启,或者模型路径修改:\n",
"```\n",
"# 例:行人跟踪,指定配置文件路径,模型路径和测试视频, 命令行中指定的模型路径优先级高于配置文件\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o MOT.enable=True MOT.model_dir=ppyoloe_infer/ --video_file=test_video.mp4 --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o MOT.enable=True MOT.model_dir=ppyoloe_infer/ --video_file=test_video.mp4 --device=gpu\n",
"\n",
"# 例:行为识别,以摔倒识别为例,命令行中开启SKELETON_ACTION模型\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o SKELETON_ACTION.enbale=True --video_file=test_video.mp4 --device=gpu\n",
"```\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o SKELETON_ACTION.enbale=True --video_file=test_video.mp4 --device=gpu\n",
"\n",
"3. rtsp推拉流\n",
"- rtsp拉流预测\n",
"\n",
"对rtsp拉流的支持,使用--rtsp RTSP [RTSP ...]参数指定一路或者多路rtsp视频流,如果是多路地址中间用空格隔开。(或者video_file后面的视频地址直接更换为rtsp流地址),示例如下:\n",
"```\n",
"#rtsp推拉流\n",
"\n",
"#对rtsp拉流的支持,使用--rtsp RTSP [RTSP ...]参数指定一路或者多路rtsp视频流,如果是多路地址中间用空格隔开。(或者video_file后面的视频地址直接更换为rtsp流地址),示例如下:\n",
"\n",
"# 例:行人属性识别,单路视频流\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE] --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE] --device=gpu\n",
"\n",
"# 例:行人属性识别,多路视频流\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE1] rtsp://[YOUR_RTSP_SITE2] --device=gpu\n",
"```"
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE1] rtsp://[YOUR_RTSP_SITE2] --device=gpu\n"
]
},
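The `-o KEY.subkey=value` overrides used in the cell above take priority over the YAML config file. A minimal sketch of how such dotted overrides can be merged into a loaded config dict (the `apply_overrides` helper is hypothetical, not PaddleDetection's actual parser):

```python
import yaml

def apply_overrides(cfg, overrides):
    # Each override looks like "MOT.enable=True" or "MOT.model_dir=ppyoloe_infer/".
    for item in overrides:
        dotted, _, raw = item.partition("=")
        *parents, leaf = dotted.split(".")
        node = cfg
        for key in parents:
            node = node.setdefault(key, {})
        node[leaf] = yaml.safe_load(raw)  # coerces "True" -> bool, "0.5" -> float
    return cfg

cfg = {"MOT": {"enable": False, "model_dir": "output_inference/mot_default"}}
apply_overrides(cfg, ["MOT.enable=True", "MOT.model_dir=ppyoloe_infer/"])
print(cfg)  # {'MOT': {'enable': True, 'model_dir': 'ppyoloe_infer/'}}
```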
{
......@@ -277,7 +262,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
......@@ -291,7 +276,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
"version": "3.10.8"
}
},
"nbformat": 4,
......
......@@ -64,27 +64,13 @@
},
"outputs": [],
"source": [
"## Environmental preparation\n",
"\n",
"Environmental requirements: PaddleDetection >= release/2.5 or develop \n",
"\n",
"PaddlePaddle和PaddleDetection install\n",
"\n",
"```\n",
"# PaddlePaddle CUDA10.1\n",
"python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html\n",
"\n",
"# PaddlePaddle CPU\n",
"python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple\n",
"\n",
"# clone PaddleDetection\n",
"cd <path/to/clone/PaddleDetection>\n",
"git clone https://github.com/PaddlePaddle/PaddleDetection.git\n",
"%cd ~/work/\n",
"!git clone https://github.com/PaddlePaddle/PaddleDetection.git\n",
"\n",
"# Other Dependencies\n",
"cd PaddleDetection\n",
"pip install -r requirements.txt\n",
"```"
"%cd PaddleDetection\n",
"!pip install -r requirements.txt\n"
]
},
{
......@@ -157,48 +143,39 @@
},
"outputs": [],
"source": [
"1. Use the default configuration directly or the configuration file in examples, or modify the configuration in `infer_cfg_pphuman.yml`\n",
"\n",
" ```\n",
" # Example: In pedestrian detection model, specify configuration file path and test image, and image input opens detection model by default\n",
" python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu\n",
" # Example: In pedestrian attribute recognition, directly configure the examples\n",
" python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml --video_file=test_video.mp4 --device=gpu\n",
" ```\n",
"\n",
"2. Use the command line to enable functions or change the model path.\n",
"\n",
"```\n",
"#Use the default configuration directly or the configuration file in examples, or modify the configuration in `infer_cfg_pphuman.yml`\n",
"# Example: In pedestrian detection model, specify configuration file path and test image, and image input opens detection model by default\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu\n",
"# Example: In pedestrian attribute recognition, directly configure the examples\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml --video_file=test_video.mp4 --device=gpu\n",
" \n",
"\n",
"#Use the command line to enable functions or change the model path.\n",
"# Example: Pedestrian tracking, specify config file path, model path and test video. The specified model path on the command line has a higher priority than the config file.\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o MOT.enable=True MOT.model_dir=ppyoloe_infer/ --video_file=test_video.mp4 --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o MOT.enable=True MOT.model_dir=ppyoloe_infer/ --video_file=test_video.mp4 --device=gpu\n",
"\n",
"# Example: In behaviour recognition, with fall recognition as an example, enable the SKELETON_ACTION model on the command line\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o SKELETON_ACTION.enbale=True --video_file=test_video.mp4 --device=gpu\n",
"```\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o SKELETON_ACTION.enbale=True --video_file=test_video.mp4 --device=gpu\n",
"\n",
"3. rtsp push/pull stream\n",
"- rtsp pull stream\n",
"\n",
"For rtsp pull stream, use `--rtsp RTSP [RTSP ...]` parameter to specify one or more rtsp streams. Separate the multiple addresses with a space, or replace the video address directly after the video_file with the rtsp stream address), examples as follows\n",
"#rtsp push/pull stream\n",
"#For rtsp pull stream, use `--rtsp RTSP [RTSP ...]` parameter to specify one or more rtsp streams. Separate the multiple addresses with a space, or replace the video address directly after the video_file with the rtsp stream address), examples as follows\n",
"\n",
"```\n",
"# Example: Single video stream for pedestrian attribute recognition\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE] --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE] --device=gpu\n",
"# Example: Multiple-video stream for pedestrian attribute recognition\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE1] rtsp://[YOUR_RTSP_SITE2] --device=gpu |\n",
"```\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE1] rtsp://[YOUR_RTSP_SITE2] --device=gpu |\n",
"\n",
"- rtsp push stream\n",
"\n",
"For rtsp push stream, use `--pushurl rtsp:[IP]` parameter to push stream to a IP set, and you can visualize the output video by [VLC Player](https://vlc.onl/) with the `open network` funciton. the whole url path is `rtsp:[IP]/videoname`, the videoname here is the basename of the video file to infer, and the default of videoname is `output` when the video come from local camera and there is no video name. \n",
"#rtsp push stream\n",
"\n",
"```\n",
"#For rtsp push stream, use `--pushurl rtsp:[IP]` parameter to push stream to a IP set, and you can visualize the output video by [VLC Player](https://vlc.onl/) with the `open network` funciton. the whole url path is `rtsp:[IP]/videoname`, the videoname here is the basename of the video file to infer, and the default of videoname is `output` when the video come from local camera and there is no video name. \n",
"# Example:Pedestrian attribute recognition,in this example the whole url path is: rtsp://[YOUR_SERVER_IP]:8554/test_video\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml --video_file=test_video.mp4 --device=gpu --pushurl rtsp://[YOUR_SERVER_IP]:8554\n",
"```\n",
"Note: \n",
"1. rtsp push stream is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server), please enable this serving first.\n",
"2. the output visualize will be frozen frequently if the model cost too much time, we suggest to use faster model like ppyoloe_s in tracking, this is simply replace mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in model config yaml file."
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml --video_file=test_video.mp4 --device=gpu --pushurl rtsp://[YOUR_SERVER_IP]:8554\n",
"\n",
"#Note: \n",
"#1. rtsp push stream is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server), please enable this serving first.\n",
"#2. the output visualize will be frozen frequently if the model cost too much time, we suggest to use faster model like ppyoloe_s in tracking, this is simply replace mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in model config yaml file."
]
},
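The `--rtsp RTSP [RTSP ...]` syntax above is argparse's one-or-more form. A minimal sketch of declaring and consuming such a multi-stream flag (illustrative only, not the pipeline's actual argument parser):

```python
import argparse

parser = argparse.ArgumentParser()
# nargs="+" accepts one or more space-separated RTSP addresses.
parser.add_argument("--rtsp", nargs="+", default=None,
                    help="one or more rtsp stream addresses")

args = parser.parse_args(
    ["--rtsp", "rtsp://host1/stream", "rtsp://host2/stream"])
for i, url in enumerate(args.rtsp):
    print(f"input {i}: {url}")  # each address becomes one input stream
```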
{
......
......@@ -5,31 +5,12 @@
"metadata": {},
"source": [
"## 1. PP-TinyPose模型简介\n",
"PP-TinyPose是PaddleDetecion针对移动端设备优化的实时关键点检测模型,可流畅地在移动端设备上执行多人姿态估计任务。借助PaddleDetecion自研的优秀轻量级检测模型[PicoDet](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/picodet/README.md),我们同时提供了特色的轻量级垂类行人检测模型.TinyPose的运行环境有以下依赖要求:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"PaddlePaddle>=2.2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"如希望在移动端部署,则还需要:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"PP-TinyPose是PaddleDetecion针对移动端设备优化的实时关键点检测模型,可流畅地在移动端设备上执行多人姿态估计任务。借助PaddleDetecion自研的优秀轻量级检测模型[PicoDet](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/picodet/README.md),我们同时提供了特色的轻量级垂类行人检测模型.TinyPose的运行环境有以下依赖要求:\n",
"\n",
"PaddlePaddle>=2.2\n",
"\n",
"如希望在移动端部署,则还需要:\n",
"\n",
"Paddle-Lite>=2.11"
]
},
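A quick way to verify the dependency floor stated above before running (a sketch; assumes PaddlePaddle is importable — Paddle-Lite>=2.11 matters only for mobile deployment):

```python
import paddle

# Check the PaddlePaddle floor stated above (>=2.2).
major, minor = (int(p) for p in paddle.__version__.split(".")[:2])
assert (major, minor) >= (2, 2), \
    f"PaddlePaddle>=2.2 required, found {paddle.__version__}"
print("PaddlePaddle", paddle.__version__, "OK")
```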
......@@ -173,13 +154,24 @@
},
"outputs": [],
"source": [
"# 下载模型\n",
"!mkdir output_inference\n",
"%cd output_inference\n",
"# 下载行人检测模型\n",
"!wget https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_320_lcnet_pedestrian.zip\n",
"!unzip picodet_s_320_lcnet_pedestrian.zip\n",
"# 下载关键点检测模型\n",
"!wget https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_128x96.zip\n",
"!unzip tinypose_128x96.zip\n",
"\n",
"%cd ~/work/PaddleDetection/\n",
"# 预测一张图片\n",
"python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_file={your image file} --device=GPU\n",
"!python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_file=demo/hrnet_demo.jpg --device=GPU\n",
"# 预测多张图片\n",
"python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_dir={dir of image file} --device=GPU\n",
"!python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_dir=demo/ --device=GPU\n",
"\n",
"# 预测一个视频\n",
"python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --video_file={your video file} --device=GPU\n"
"!python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --video_file={your video file} --device=GPU\n"
]
},
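If wget/unzip are not available (e.g. on Windows), the same model downloads can be done with the Python standard library. A minimal sketch using the URLs from the cell above:

```python
import urllib.request
import zipfile

MODEL_URLS = [
    "https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_320_lcnet_pedestrian.zip",
    "https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_128x96.zip",
]

for url in MODEL_URLS:
    name = url.rsplit("/", 1)[-1]              # e.g. tinypose_128x96.zip
    urllib.request.urlretrieve(url, name)      # download into the current dir
    with zipfile.ZipFile(name) as zf:
        zf.extractall(".")                     # unpack next to the zip
```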
{
......@@ -249,10 +241,10 @@
"outputs": [],
"source": [
"# 关键点检测模型\n",
"python3 -m paddle.distributed.launch tools/train.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml\n",
"!python3 -m paddle.distributed.launch tools/train.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml\n",
"\n",
"# 行人检测模型\n",
"python3 -m paddle.distributed.launch tools/train.py -c configs/picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml"
"!python3 -m paddle.distributed.launch tools/train.py -c configs/picodet/application/pedestrian_detection/picodet_s_320_lcnet_pedestrian.yml"
]
},
{
......
......@@ -7,31 +7,11 @@
"## 1. PP-TinyPose Introduction\n",
"PP-TinyPose is a real-time keypoint detection model optimized by PaddleDetecion for mobile devices, which can smoothly run multi-person pose estimation tasks on mobile devices. With the excellent self-developed lightweight detection model [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/picodet/README.md),\n",
"\n",
"we also provide a lightweight pedestrian detection model. PP-TinyPose has the following dependency requirements:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"PaddlePaddle>=2.2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to deploy it on the mobile devives, you also need:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"we also provide a lightweight pedestrian detection model. PP-TinyPose has the following dependency requirements:\n",
"PaddlePaddle>=2.2\n",
"\n",
"If you want to deploy it on the mobile devives, you also need:\n",
"\n",
"Paddle-Lite>=2.11"
]
},
......@@ -55,7 +35,6 @@
"\n",
"#### 2.1.2 Model effects:\n",
"\n",
"PP-TinyPose的检测效果为:\n",
"\n",
"![](https://user-images.githubusercontent.com/15810355/181733705-d0f84232-c6a2-43dd-be70-4a3a246b8fbc.gif)\n",
"\n",
......@@ -173,13 +152,24 @@
},
"outputs": [],
"source": [
"# Download model\n",
"!mkdir output_inference\n",
"%cd output_inference\n",
"# Download pedestrian detection model\n",
"!wget https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_320_lcnet_pedestrian.zip\n",
"!unzip picodet_s_320_lcnet_pedestrian.zip\n",
"# Download key point detection model\n",
"!wget https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_128x96.zip\n",
"!unzip tinypose_128x96.zip\n",
"\n",
"%cd ~/work/PaddleDetection/\n",
"# Predict a image\n",
"python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_file={your image file} --device=GPU\n",
"!python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_file=demo/hrnet_demo.jpg --device=GPU\n",
"# predict multiple images\n",
"python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_dir={dir of image file} --device=GPU\n",
"!python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_dir=demo/ --device=GPU\n",
"\n",
"# predict video\n",
"python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --video_file={your video file} --device=GPU\n"
"!python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --video_file={your video file} --device=GPU\n"
]
},
{
......@@ -250,10 +240,10 @@
"outputs": [],
"source": [
"# keypoint detection model\n",
"python3 -m paddle.distributed.launch tools/train.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml\n",
"!python3 -m paddle.distributed.launch tools/train.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml\n",
"\n",
"# pedestrian detection model\n",
"python3 -m paddle.distributed.launch tools/train.py -c configs/picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml"
"!python3 -m paddle.distributed.launch tools/train.py -c configs/picodet/application/pedestrian_detection/picodet_s_320_lcnet_pedestrian.yml"
]
},
{
......
......@@ -60,27 +60,13 @@
},
"outputs": [],
"source": [
"## 环境准备\n",
"\n",
"环境要求: PaddleDetection版本 >= release/2.5 或 develop版本\n",
"\n",
"PaddlePaddle和PaddleDetection安装\n",
"\n",
"```\n",
"# PaddlePaddle CUDA10.1\n",
"python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html\n",
"\n",
"# PaddlePaddle CPU\n",
"python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple\n",
"\n",
"# 克隆PaddleDetection仓库\n",
"cd <path/to/clone/PaddleDetection>\n",
"git clone https://github.com/PaddlePaddle/PaddleDetection.git\n",
"%cd ~/work\n",
"!git clone https://github.com/PaddlePaddle/PaddleDetection.git\n",
"\n",
"# 安装其他依赖\n",
"cd PaddleDetection\n",
"pip install -r requirements.txt\n",
"```"
"%cd PaddleDetection\n",
"!pip install -r requirements.txt\n"
]
},
{
......@@ -159,21 +145,19 @@
"outputs": [],
"source": [
"# 1. 直接使用默认配置或者examples中配置文件,或者直接在`infer_cfg_ppvehicle.yml`中修改配置:\n",
"```\n",
"# 例:车辆检测,指定配置文件路径和测试图片\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --image_file=test_image.jpg --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --image_file=test_image.jpg --device=gpu\n",
"\n",
"# 例:车辆车牌识别,指定配置文件路径和测试视频\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_plate.yml --video_file=test_video.mp4 --device=gpu\n",
"```\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_plate.yml --video_file=test_video.mp4 --device=gpu\n",
"\n",
"\n",
"#2. 使用命令行进行功能开启,或者模型路径修改:\n",
"```\n",
"# 例:车辆跟踪,指定配置文件路径和测试视频,命令行中开启MOT模型并修改模型路径,命令行中指定的模型路径优先级高于配置文件\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml -o MOT.enable=True MOT.model_dir=ppyoloe_infer/ --video_file=test_video.mp4 --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml -o MOT.enable=True MOT.model_dir=ppyoloe_infer/ --video_file=test_video.mp4 --device=gpu\n",
"\n",
"# 例:车辆违章分析,指定配置文件和测试视频,命令行中指定违停区域设置、违停时间判断。\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_illegal_parking.yml \\\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_illegal_parking.yml \\\n",
" --video_file=../car_test.mov \\\n",
" --device=gpu \\\n",
" --draw_center_traj \\\n",
......@@ -181,26 +165,24 @@
" --region_type=custom \\\n",
" --region_polygon 600 300 1300 300 1300 800 600 800\n",
"\n",
"```\n",
"\n",
"#3. rtsp推拉流\n",
"- rtsp拉流预测\n",
"\n",
"对rtsp拉流的支持,使用--rtsp RTSP [RTSP ...]参数指定一路或者多路rtsp视频流,如果是多路地址中间用空格隔开。(或者video_file后面的视频地址直接更换为rtsp流地址),示例如下:\n",
"```\n",
"#对rtsp拉流的支持,使用--rtsp RTSP [RTSP ...]参数指定一路或者多路rtsp视频流,如果是多路地址中间用空格隔开。(或者video_file后面的视频地址直接更换为rtsp流地址),示例如下:\n",
"\n",
"# 例:车辆属性识别,单路视频流\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE] --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE] --device=gpu\n",
"\n",
"# 例:车辆属性识别,多路视频流\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE1] rtsp://[YOUR_RTSP_SITE2] --device=gpu\n",
"```\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE1] rtsp://[YOUR_RTSP_SITE2] --device=gpu\n",
"\n",
"\n",
"#视频结果推流rtsp\n",
"#预测结果进行rtsp推流,使用--pushurl rtsp:[IP] 推流到IP地址端,PC端可以使用[VLC播放器](https://vlc.onl/)打开网络流进行播放,播放地址为 `rtsp:[IP]/videoname`。其中`videoname`是预测的视频文件名,如果视频来源是本地摄像头则`videoname`默认为`output`.\n",
"```\n",
"# 例:车辆属性识别,单路视频流,该示例播放地址为 rtsp://[YOUR_SERVER_IP]:8554/test_video\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --video_file=test_video.mp4 --device=gpu --pushurl rtsp://[YOUR_SERVER_IP]:8554\n",
"```\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --video_file=test_video.mp4 --device=gpu --pushurl rtsp://[YOUR_SERVER_IP]:8554\n",
"\n",
"#注:\n",
"#1. rtsp推流服务基于 [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server), 如使用推流功能请先开启该服务.\n",
"#2. rtsp推流如果模型处理速度跟不上会出现很明显的卡顿现象,建议跟踪模型使用ppyoloe_s版本,即修改配置中跟踪模型mot_ppyoloe_l_36e_pipeline.zip替换为mot_ppyoloe_s_36e_pipeline.zip。"
......@@ -285,7 +267,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
......@@ -299,7 +281,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
"version": "3.10.8"
}
},
"nbformat": 4,
......
......@@ -59,27 +59,13 @@
},
"outputs": [],
"source": [
"## Environmental preparation\n",
"\n",
"Environmental requirements: PaddleDetection >= release/2.5 or develop \n",
"\n",
"PaddlePaddle和PaddleDetection install\n",
"\n",
"```\n",
"# PaddlePaddle CUDA10.1\n",
"python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html\n",
"\n",
"# PaddlePaddle CPU\n",
"python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple\n",
"\n",
"# clone PaddleDetection\n",
"cd <path/to/clone/PaddleDetection>\n",
"git clone https://github.com/PaddlePaddle/PaddleDetection.git\n",
"%cd ~/work\n",
"!git clone https://github.com/PaddlePaddle/PaddleDetection.git\n",
"\n",
"# Other Dependencies\n",
"cd PaddleDetection\n",
"pip install -r requirements.txt\n",
"```"
"%cd PaddleDetection\n",
"!pip install -r requirements.txt\n"
]
},
{
......@@ -157,21 +143,19 @@
"outputs": [],
"source": [
"# 1. Use the default configuration directly or the configuration file in examples, or modify the configuration in `infer_cfg_ppvehicle.yml`\n",
"```\n",
"# Example:In vehicle detection,specify configuration file path and test image\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --image_file=test_image.jpg --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --image_file=test_image.jpg --device=gpu\n",
"\n",
"# Example:In license plate recognition,directly configure the examples\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_plate.yml --video_file=test_video.mp4 --device=gpu\n",
"```\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_plate.yml --video_file=test_video.mp4 --device=gpu\n",
"\n",
"\n",
"#2.Use the command line to enable functions or change the model path.\n",
"```\n",
"# Example:In vehicle tracking,specify configuration file path and test video, Turn on the MOT model and modify the model path on the command line, the model path specified on the command line has higher priority than the configuration file\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml -o MOT.enable=True MOT.model_dir=ppyoloe_infer/ --video_file=test_video.mp4 --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml -o MOT.enable=True MOT.model_dir=ppyoloe_infer/ --video_file=test_video.mp4 --device=gpu\n",
"\n",
"# Example:In vehicle illegal action analysis,specify configuration file path and test video,Setting of designated violation area and judgment of violation time in the command line\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_illegal_parking.yml \\\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_illegal_parking.yml \\\n",
" --video_file=../car_test.mov \\\n",
" --device=gpu \\\n",
" --draw_center_traj \\\n",
......@@ -179,28 +163,24 @@
" --region_type=custom \\\n",
" --region_polygon 600 300 1300 300 1300 800 600 800\n",
"\n",
"```\n",
"\n",
"#3. rtsp push/pull stream\n",
"- rtsp pull stream\n",
"#For rtsp pull stream, use --rtsp RTSP [RTSP ...] parameter to specify one or more rtsp streams. Separate the multiple addresses with a space, or replace the video address directly after the video_file with the rtsp stream address), examples as follows\n",
"\n",
"For rtsp pull stream, use --rtsp RTSP [RTSP ...] parameter to specify one or more rtsp streams. Separate the multiple addresses with a space, or replace the video address directly after the video_file with the rtsp stream address), examples as follows\n",
"```\n",
"# Example: Single video stream for pedestrian attribute recognition\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE] --device=gpu\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE] --device=gpu\n",
"\n",
"# Example: Multiple-video stream for pedestrian attribute recognition\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE1] rtsp://[YOUR_RTSP_SITE2] --device=gpu\n",
"```\n",
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE1] rtsp://[YOUR_RTSP_SITE2] --device=gpu\n",
"\n",
"\n",
"#rtsp push stream\n",
"#For rtsp push stream, use --pushurl rtsp:[IP] parameter to push stream to a IP set, and you can visualize the output video by [VLC Player](https://vlc.onl/) with the `open network` funciton. the whole url path is `rtsp:[IP]/videoname`, the videoname here is the basename of the video file to infer, and the default of videoname is `output` when the video come from local camera and there is no video name. \n",
"#Example:license plate recognition,in this example the whole url path is: rtsp://[YOUR_SERVER_IP]:8554/test_video\n",
"python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --video_file=test_video.mp4 --device=gpu --pushurl rtsp://[YOUR_SERVER_IP]:8554\n",
"```\n",
"Note: \n",
"1. rtsp push stream is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server), please enable this serving first.\n",
"2. the output visualize will be frozen frequently if the model cost too much time, we suggest to use faster model like ppyoloe_s in tracking, this is simply replace mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in model config yaml file.\n"
"!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --video_file=test_video.mp4 --device=gpu --pushurl rtsp://[YOUR_SERVER_IP]:8554\n",
"\n",
"#Note: \n",
"#1. rtsp push stream is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server), please enable this serving first.\n",
"#2. the output visualize will be frozen frequently if the model cost too much time, we suggest to use faster model like ppyoloe_s in tracking, this is simply replace mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in model config yaml file.\n"
]
},
{
......
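As both notebooks note, the playback URL is the push address plus the basename of the inferred video (`output` for a local camera). A minimal sketch of that URL construction (the `build_push_url` helper is hypothetical):

```python
import os

def build_push_url(pushurl, video_file=None):
    # Basename of the inferred video without extension; "output" for a camera.
    name = "output"
    if video_file:
        name = os.path.splitext(os.path.basename(video_file))[0]
    return f"{pushurl.rstrip('/')}/{name}"

print(build_push_url("rtsp://[YOUR_SERVER_IP]:8554", "test_video.mp4"))
# -> rtsp://[YOUR_SERVER_IP]:8554/test_video
print(build_push_url("rtsp://[YOUR_SERVER_IP]:8554"))
# -> rtsp://[YOUR_SERVER_IP]:8554/output
```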