diff --git a/configs/mot/bytetrack/README_cn.md b/configs/mot/bytetrack/README_cn.md
index 56f72e0b168f64ffb6646c754de5272ab6a40c09..7620366da964987db90483cd920445c54c307068 100644
--- a/configs/mot/bytetrack/README_cn.md
+++ b/configs/mot/bytetrack/README_cn.md
@@ -17,13 +17,13 @@
 | 检测训练数据集 | 检测器 | 输入尺度 | ReID | 检测mAP | MOTA | IDF1 | FPS | 配置文件 |
 | :-------- | :----- | :----: | :----:|:------: | :----: |:-----: |:----:|:----: |
-| MOT-17 half train | YOLOv3 | 608x608 | - | 42.7 | 49.5 | 54.8 | - |[配置文件](./bytetrack_yolov3.yml) |
-| MOT-17 half train | PPYOLOe | 640x640 | - | 52.9 | 50.4 | 59.7 | - |[配置文件](./bytetrack_ppyoloe.yml) |
-| MOT-17 half train | PPYOLOe | 640x640 |PPLCNet| 52.9 | 51.7 | 58.8 | - |[配置文件](./bytetrack_ppyoloe_pplcnet.yml) |
+| MOT-17 half train | YOLOv3 | 608x608 | - | 42.7 | 49.3 | 55.5 | - |[配置文件](./bytetrack_yolov3.yml) |
+| MOT-17 half train | PPYOLOe | 640x640 | - | 52.7 | 50.4 | 59.7 | - |[配置文件](./bytetrack_ppyoloe.yml) |
+| MOT-17 half train | PPYOLOe | 640x640 |PPLCNet| 52.7 | 51.7 | 58.8 | - |[配置文件](./bytetrack_ppyoloe_pplcnet.yml) |
 | mix_det | YOLOX-x | 800x1440| - | 61.9 | 77.3 | 71.6 | - |[配置文件](./bytetrack_yolox.yml) |

 **注意:**
-- 模型权重下载链接在配置文件中的```det_weights```和```reid_weights```,运行验证的命令即可自动下载。
+- 模型权重下载链接在配置文件中的```det_weights```和```reid_weights```,运行```tools/eval_mot.py```评估命令即可自动下载。
 - **MOT17-half train**是MOT17的train序列(共7个)每个视频的前一半帧的图片和标注组成的数据集,而为了验证精度可以都用**MOT17-half val**数据集去评估,它是每个视频的后一半帧组成的,数据集可以从[此链接](https://dataset.bj.bcebos.com/mot/MOT17.zip)下载,并解压放在`dataset/mot/`文件夹下。
 - **mix_det**是MOT17、crowdhuman、Cityscapes、ETHZ组成的联合数据集,数据集整理的格式和目录可以参考[此链接](https://github.com/ifzhang/ByteTrack#data-preparation),最终放置于`dataset/mot/`目录下。为了验证精度可以都用**MOT17-half val**数据集去评估。
 - ByteTrack的训练是单独的检测器训练MOT数据集,推理是组装跟踪器去评估MOT指标,单独的检测模型也可以评估检测指标。
@@ -35,13 +35,20 @@
 ### 1. 训练
 通过如下命令一键式启动训练和评估
 ```bash
-python -m paddle.distributed.launch --log_dir=ppyoloe --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml --eval --amp --fleet
+python -m paddle.distributed.launch --log_dir=ppyoloe --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml --eval --amp
+# 或者
+python -m paddle.distributed.launch --log_dir=yolox --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml --eval --amp
 ```
+**注意:**
+ - `--eval`表示边训练边验证精度;`--amp`表示使用混合精度训练以避免显存溢出,推荐使用PaddlePaddle 2.2.2版本。
+
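> 编辑者注:上面注意事项中提到,ByteTrack推理时是"组装跟踪器"对检测结果做两阶段关联——先用高分检测框匹配已有轨迹,再用低分框挽回未匹配的轨迹(多为被遮挡目标)。下面是一个独立的最小示意实现,阈值与贪心匹配函数均为演示用的假设,并非PaddleDetection中JDETracker的实际代码:

```python
import numpy as np

def iou_matrix(a, b):
    # a: [M,4], b: [N,4],坐标为x1y1x2y2,返回[M,N]的IoU矩阵
    ious = np.zeros((len(a), len(b)), dtype=np.float32)
    for i, t in enumerate(a):
        for j, d in enumerate(b):
            xx1, yy1 = max(t[0], d[0]), max(t[1], d[1])
            xx2, yy2 = min(t[2], d[2]), min(t[3], d[3])
            inter = max(0.0, xx2 - xx1) * max(0.0, yy2 - yy1)
            union = ((t[2] - t[0]) * (t[3] - t[1]) +
                     (d[2] - d[0]) * (d[3] - d[1]) - inter)
            ious[i, j] = inter / (union + 1e-6)
    return ious

def greedy_match(ious, thresh):
    # 按IoU从大到小贪心匹配,返回匹配对和未匹配的轨迹下标
    matches, used_t, used_d = [], set(), set()
    for flat in np.argsort(-ious, axis=None):
        i, j = np.unravel_index(flat, ious.shape)
        if ious[i, j] < thresh:
            break
        if i in used_t or j in used_d:
            continue
        matches.append((int(i), int(j)))
        used_t.add(i); used_d.add(j)
    unmatched = [i for i in range(ious.shape[0]) if i not in used_t]
    return matches, unmatched

def byte_associate(track_boxes, det_boxes, det_scores,
                   high_thresh=0.6, match_thresh=0.5):
    high = det_scores >= high_thresh
    # 第一阶段:高分检测框与全部轨迹匹配
    m1, remain = greedy_match(iou_matrix(track_boxes, det_boxes[high]), match_thresh)
    # 第二阶段:低分检测框只用于挽回第一阶段未匹配上的轨迹
    m2, _ = greedy_match(iou_matrix(track_boxes[remain], det_boxes[~high]), match_thresh)
    return m1, m2
```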
 ### 2. 评估
 #### 2.1 评估检测效果
 ```bash
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml
+CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://bj.bcebos.com/v1/paddledet/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
+# 或者
+CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml -o weights=https://bj.bcebos.com/v1/paddledet/models/mot/yolox_x_24e_800x1440_mix_det.pdparams
 ```

 **注意:**
@@ -54,30 +61,41 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/bytetrack/bytetra
 CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/bytetrack/bytetrack_ppyoloe.yml --scaled=True
 # 或者
 CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/bytetrack/bytetrack_ppyoloe_pplcnet.yml --scaled=True
+# 或者
+CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/bytetrack/bytetrack_yolox.yml --scaled=True
 ```

 **注意:**
 - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE YOLOv3则为False,如果使用通用检测模型则为True, 默认值是False。
- - 跟踪结果会存于`{output_dir}/mot_results/`中,里面每个视频序列对应一个txt,每个txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`, 此外`{output_dir}`可通过`--output_dir`设置。
+ - 跟踪结果会存于`{output_dir}/mot_results/`中,里面每个视频序列对应一个txt,每个txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`,此外`{output_dir}`可通过`--output_dir`设置,默认文件夹名为`output`。
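> 编辑者注:上述跟踪结果txt可以用几行Python直接解析,下面是一个按帧聚合的小例子;文件路径为假设,序列名以实际输出为准:

```python
from collections import defaultdict

def load_mot_results(txt_path):
    # 每行格式:frame,id,x1,y1,w,h,score,-1,-1,-1
    results = defaultdict(list)
    with open(txt_path) as f:
        for line in f:
            vals = line.strip().split(',')
            frame, tid = int(float(vals[0])), int(float(vals[1]))
            x1, y1, w, h, score = map(float, vals[2:7])
            results[frame].append((tid, (x1, y1, w, h), score))
    return results

# 假设使用默认的output目录,且评估了MOT17-02-SDP序列
res = load_mot_results('output/mot_results/MOT17-02-SDP.txt')
print(len(res), 'frames,', sum(len(v) for v in res.values()), 'boxes')
```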
 ### 3. 预测
 使用单个GPU通过如下命令预测一个视频,并保存为视频
 ```bash
-CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/bytetrack/bytetrack_ppyoloe.yml --video_file={your video name}.mp4 --scaled=True --save_videos
+# 下载demo视频
+wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4
+
+# 使用PPYOLOe行人检测模型
+CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/bytetrack/bytetrack_ppyoloe.yml --video_file=mot17_demo.mp4 --scaled=True --save_videos
+# 或者使用YOLOX行人检测模型
+CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/bytetrack/bytetrack_yolox.yml --video_file=mot17_demo.mp4 --scaled=True --save_videos
 ```

 **注意:**
 - 请先确保已经安装了[ffmpeg](https://ffmpeg.org/ffmpeg.html), Linux(Ubuntu)平台可以直接用以下命令安装:`apt-get update && apt-get install -y ffmpeg`。
 - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE的YOLOv3则为False,如果使用通用检测模型则为True。
+ - `--save_videos`表示保存可视化视频,同时会保存可视化的图片在`{output_dir}/mot_outputs/`中,`{output_dir}`可通过`--output_dir`设置,默认文件夹名为`output`。

 ### 4. 导出预测模型

 Step 1:导出检测模型
 ```bash
-# 导出PPYOLe行人检测模型
+# 导出PPYOLOe行人检测模型
 CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
+# 或者导出YOLOX行人检测模型
+CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/yolox_x_24e_800x1440_mix_det.pdparams
 ```

 Step 2:导出ReID模型(可选步骤,默认不需要)
 ```bash
@@ -89,12 +107,14 @@ CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid
 ### 4. 用导出的模型基于Python去预测
 ```bash
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file={your video name}.mp4 --device=GPU --scaled=True --save_mot_txts
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts
+# 或者
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/yolox_x_24e_800x1440_mix_det/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts
 ```
+
 **注意:**
 - 跟踪模型是对视频进行预测,不支持单张图的预测,默认保存跟踪结果可视化后的视频,可添加`--save_mot_txts`(对每个视频保存一个txt)或`--save_mot_txt_per_img`(对每张图片保存一个txt)表示保存跟踪结果的txt文件,或`--save_images`表示保存跟踪结果可视化图片。
 - 跟踪结果txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`。
- - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE的YOLOv3则为False,如果使用通用检测模型则为True。


 ## 引用
diff --git a/configs/mot/bytetrack/_base_/mix_det.yml b/configs/mot/bytetrack/_base_/mix_det.yml
index 157866824d72596c96014df278f3044c8430f04e..fbe19bdaa29246919189d5d93a3ea01e3734b52c 100644
--- a/configs/mot/bytetrack/_base_/mix_det.yml
+++ b/configs/mot/bytetrack/_base_/mix_det.yml
@@ -11,7 +11,7 @@ TrainDataset:

 EvalDataset:
   !COCODataSet
-    image_dir: train
+    image_dir: images/train
     anno_path: annotations/val_half.json
     dataset_dir: dataset/mot/MOT17
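> 编辑者注:上面mix_det.yml的改动把EvalDataset的`image_dir`由`train`改为`images/train`,与按[ByteTrack的数据准备](https://github.com/ifzhang/ByteTrack#data-preparation)整理后图片位于`MOT17/images/train`的目录结构对应。可以用类似下面的脚本自查标注与图片路径是否对得上(目录结构为推测,以实际数据为准):

```python
import json
import os

dataset_dir = 'dataset/mot/MOT17'
image_dir = 'images/train'
anno_path = 'annotations/val_half.json'

with open(os.path.join(dataset_dir, anno_path)) as f:
    coco = json.load(f)

# 检查val_half.json中引用的图片在image_dir下是否都存在
missing = [img['file_name'] for img in coco['images']
           if not os.path.exists(os.path.join(dataset_dir, image_dir, img['file_name']))]
print('total:', len(coco['images']), 'missing:', len(missing))
```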
diff --git a/configs/mot/bytetrack/detector/README_cn.md b/configs/mot/bytetrack/detector/README_cn.md
index 8b47596a716cd2b398c18171245d01e09e415bc1..dcf44ee3ad0429593225c70578872d89cfe7a46c 100644
--- a/configs/mot/bytetrack/detector/README_cn.md
+++ b/configs/mot/bytetrack/detector/README_cn.md
@@ -30,7 +30,7 @@ job_name=ppyoloe_crn_l_36e_640x640_mot17half
 config=configs/mot/bytetrack/detector/${job_name}.yml
 log_dir=log_dir/${job_name}
 # 1. training
-python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp --fleet
+python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp
 # 2. evaluation
 CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c ${config} -o weights=https://paddledet.bj.bcebos.com/models/mot/${job_name}.pdparams
 ```
diff --git a/configs/mot/deepsort/README_cn.md b/configs/mot/deepsort/README_cn.md
index 8a957d7fad61624bd70f9670182f3d2ad15e5992..3cb88327980b5013bffb715855e3066fe332b42c 100644
--- a/configs/mot/deepsort/README_cn.md
+++ b/configs/mot/deepsort/README_cn.md
@@ -18,18 +18,18 @@
 | 骨干网络 | 输入尺寸 | MOTA | IDF1 | IDS | FP | FN | FPS | 检测结果或模型 | ReID模型 |配置文件 |
 | :---------| :------- | :----: | :----: | :--: | :----: | :---: | :---: | :-----:| :-----: | :-----: |
-| ResNet-101 | 1088x608 | 72.2 | 60.5 | 998 | 8054 | 21644 | - | [检测结果](https://dataset.bj.bcebos.com/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./reid/deepsort_pcb_pyramid_r101.yml) |
+| ResNet-101 | 1088x608 | 72.2 | 60.5 | 998 | 8054 | 21644 | - | [检测结果](https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./reid/deepsort_pcb_pyramid_r101.yml) |
 | ResNet-101 | 1088x608 | 68.3 | 56.5 | 1722 | 17337 | 15890 | - | [检测模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./deepsort_jde_yolov3_pcb_pyramid.yml) |
-| PPLCNet | 1088x608 | 72.2 | 59.5 | 1087 | 8034 | 21481 | - | [检测结果](https://dataset.bj.bcebos.com/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./reid/deepsort_pplcnet.yml) |
+| PPLCNet | 1088x608 | 72.2 | 59.5 | 1087 | 8034 | 21481 | - | [检测结果](https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./reid/deepsort_pplcnet.yml) |
 | PPLCNet | 1088x608 | 68.1 | 53.6 | 1979 | 17446 | 15766 | - | [检测模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./deepsort_jde_yolov3_pplcnet.yml) |

 ### DeepSORT在MOT-16 Test Set上结果

 | 骨干网络 | 输入尺寸 | MOTA | IDF1 | IDS | FP | FN | FPS | 检测结果或模型 | ReID模型 |配置文件 |
 | :---------| :------- | :----: | :----: | :--: | :----: | :---: | :---: | :-----: | :-----: |:-----: |
-| ResNet-101 | 1088x608 | 64.1 | 53.0 | 1024 | 12457 | 51919 | - | [检测结果](https://dataset.bj.bcebos.com/mot/det_results_dir.zip) | [ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./reid/deepsort_pcb_pyramid_r101.yml) |
+| ResNet-101 | 1088x608 | 64.1 | 53.0 | 1024 | 12457 | 51919 | - | [检测结果](https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip) | [ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./reid/deepsort_pcb_pyramid_r101.yml) |
 | ResNet-101 | 1088x608 | 61.2 | 48.5 | 1799 | 25796 | 43232 | - | [检测模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./deepsort_jde_yolov3_pcb_pyramid.yml) |
-| PPLCNet | 1088x608 | 64.0 | 51.3 | 1208 | 12697 | 51784 | - | [检测结果](https://dataset.bj.bcebos.com/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./reid/deepsort_pplcnet.yml) |
+| PPLCNet | 1088x608 | 64.0 | 51.3 | 1208 | 12697 | 51784 | - | [检测结果](https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./reid/deepsort_pplcnet.yml) |
 | PPLCNet | 1088x608 | 61.1 | 48.8 | 2010 | 25401 | 43432 | - | [检测模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./deepsort_jde_yolov3_pplcnet.yml) |

@@ -41,8 +41,8 @@
 | MIX | JDE YOLOv3 | PPLCNet | - | 66.3 | 62.1 | - |[配置文件](./deepsort_jde_yolov3_pplcnet.yml) |
 | MOT-17 half train | YOLOv3 | PPLCNet | 42.7 | 50.2 | 52.4 | - |[配置文件](./deepsort_yolov3_pplcnet.yml) |
 | MOT-17 half train | PPYOLOv2 | PPLCNet | 46.8 | 51.8 | 55.8 | - |[配置文件](./deepsort_ppyolov2_pplcnet.yml) |
-| MOT-17 half train | PPYOLOe | PPLCNet | 52.9 | 56.7 | 60.5 | - |[配置文件](./deepsort_ppyoloe_pplcnet.yml) |
-| MOT-17 half train | PPYOLOe | ResNet-50 | 52.9 | 56.7 | 64.6 | - |[配置文件](./deepsort_ppyoloe_resnet.yml) |
+| MOT-17 half train | PPYOLOe | PPLCNet | 52.7 | 56.7 | 60.5 | - |[配置文件](./deepsort_ppyoloe_pplcnet.yml) |
+| MOT-17 half train | PPYOLOe | ResNet-50 | 52.7 | 56.7 | 64.6 | - |[配置文件](./deepsort_ppyoloe_resnet.yml) |

 **注意:** 模型权重下载链接在配置文件中的```det_weights```和```reid_weights```,运行验证的命令即可自动下载。

@@ -60,7 +60,7 @@ det_results_dir
 ```
 对于MOT16数据集,可以下载PaddleDetection提供的一个经过匹配之后的检测框结果det_results_dir.zip并解压:
 ```
-wget https://dataset.bj.bcebos.com/mot/det_results_dir.zip
+wget https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip
 ```
 如果使用更强的检测模型,可以取得更好的结果。其中每个txt是每个视频中所有图片的检测结果,每行都描述一个边界框,格式如下:
 ```
@@ -72,7 +72,7 @@ wget https://dataset.bj.bcebos.com/mot/det_results_dir.zip
 - `score`是目标框的得分
 - `class_id`是目标框的类别,如果只有1类则是`0`

-- **方式2**:同时加载检测模型和ReID模型,此处选用JDE版本的YOLOv3,具体配置见`configs/mot/deepsort/deepsort_jde_yolov3_pcb_pyramid.yml`。加载其他通用检测模型可参照`configs/mot/deepsort/deepsort_ppyolov2_pplcnet.yml`进行修改。
+- **方式2**:同时加载检测模型和ReID模型,此处选用JDE版本的YOLOv3,具体配置见`configs/mot/deepsort/deepsort_jde_yolov3_pcb_pyramid.yml`。加载其他通用检测模型可参照`configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml`进行修改。

 ## 快速开始

@@ -80,7 +80,7 @@ wget https://dataset.bj.bcebos.com/mot/det_results_dir.zip
 #### 1.1 评估检测效果
 ```bash
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml
+CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://bj.bcebos.com/v1/paddledet/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
 ```

 **注意:**
@@ -89,9 +89,12 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/deepsort/detector/ppy
 #### 1.2 评估跟踪效果
 **方式1**:加载检测结果文件和ReID模型,得到跟踪结果
 ```bash
-CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/reid/deepsort_pcb_pyramid_r101.yml --det_results_dir {your detection results}
+# 下载PaddleDetection提供的MOT16数据集检测结果文件并解压;如需使用自己的检测器生成检测结果,请参照该文件里的格式
+wget https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip
+unzip det_results_dir.zip
+
+CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/reid/deepsort_pcb_pyramid_r101.yml --det_results_dir det_results_dir
 # 或者
-CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/reid/deepsort_pplcnet.yml --det_results_dir {your detection results}
+CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/reid/deepsort_pplcnet.yml --det_results_dir det_results_dir
 ```
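> 编辑者注:若想按上面注释用自己的检测器生成`det_results_dir`,每个视频序列写一个txt即可。下面是一个写出该格式的示意,字段顺序按上文的说明(每行为frame_id、x0、y0、w、h、score、class_id,请以解压出的文件实际格式为准),函数名与数据均为演示用的假设:

```python
import os

def write_det_results(txt_path, dets_per_frame):
    # dets_per_frame: {frame_id: [(x0, y0, w, h, score, class_id), ...]},frame_id从1开始
    with open(txt_path, 'w') as f:
        for frame_id in sorted(dets_per_frame):
            for x0, y0, w, h, score, cls in dets_per_frame[frame_id]:
                f.write('%d,%.2f,%.2f,%.2f,%.2f,%.4f,%d\n'
                        % (frame_id, x0, y0, w, h, score, cls))

# 每个视频序列对应一个txt,例如MOT16-02:
os.makedirs('det_results_dir', exist_ok=True)
write_det_results('det_results_dir/MOT16-02.txt',
                  {1: [(10.0, 20.0, 50.0, 120.0, 0.9, 0)]})
```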
 **方式2**:加载行人检测模型和ReID模型,得到跟踪结果
@@ -115,11 +118,14 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/deepsort
 使用单个GPU通过如下命令预测一个视频,并保存为视频
 ```bash
+# 下载demo视频
+wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4
+
 # 加载JDE YOLOv3行人检测模型和PCB Pyramid ReID模型,并保存为视频
-CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_jde_yolov3_pcb_pyramid.yml --video_file={your video name}.mp4 --save_videos
+CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_jde_yolov3_pcb_pyramid.yml --video_file=mot17_demo.mp4 --save_videos

-# 或者加载PPYOLOv2行人检测模型和PPLCNet ReID模型,并保存为视频
-CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_ppyolov2_pplcnet.yml --video_file={your video name}.mp4 --scaled=True --save_videos
+# 或者加载PPYOLOE行人检测模型和PPLCNet ReID模型,并保存为视频
+CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml --video_file=mot17_demo.mp4 --scaled=True --save_videos
 ```

 **注意:**
@@ -132,33 +138,34 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsor
 Step 1:导出检测模型
 ```bash
 # 导出JDE YOLOv3行人检测模型
 CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/jde_yolov3_darknet53_30e_1088x608_mix.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams

-# 或导出PPYOLOv2行人检测模型
-CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/ppyolov2_r50vd_dcn_365e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyolov2_r50vd_dcn_365e_640x640_mot17half.pdparams
+# 或导出PPYOLOE行人检测模型
+CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
 ```

 Step 2:导出ReID模型
 ```bash
 # 导出PCB Pyramid ReID模型
 CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid/deepsort_pcb_pyramid_r101.yml -o reid_weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams
+
 # 或者导出PPLCNet ReID模型
 CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid/deepsort_pplcnet.yml -o reid_weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams
+
+# 或者导出ResNet ReID模型
+CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid/deepsort_resnet.yml -o reid_weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_resnet.pdparams
 ```
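> 编辑者注:在用导出的模型部署之前,简单说明DeepSORT为何需要上面的ReID模型:它在运动信息之外,用ReID特征的余弦距离衡量外观相似度,再交给二分匹配。下面是只含外观项的最小示意,特征为随机数、仅演示计算流程,并非PaddleDetection的实际实现:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_cost(track_feats, det_feats):
    # track_feats: [M,D], det_feats: [N,D],返回余弦距离代价矩阵[M,N]
    t = track_feats / (np.linalg.norm(track_feats, axis=1, keepdims=True) + 1e-12)
    d = det_feats / (np.linalg.norm(det_feats, axis=1, keepdims=True) + 1e-12)
    return 1.0 - t @ d.T  # 距离越小表示外观越相似

cost = cosine_cost(np.random.rand(3, 128), np.random.rand(5, 128))
rows, cols = linear_sum_assignment(cost)  # 最小代价二分匹配(匈牙利算法)
print(list(zip(rows.tolist(), cols.tolist())))
```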
 ### 4. 用导出的模型基于Python去预测

 ```bash
-# 用导出JDE YOLOv3行人检测模型和PCB Pyramid ReID模型
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/jde_yolov3_darknet53_30e_1088x608_mix/ --reid_model_dir=output_inference/deepsort_pcb_pyramid_r101/ --video_file={your video name}.mp4 --device=GPU --save_mot_txts
-
-# 或用导出的PPYOLOv2行人检测模型和PPLCNet ReID模型
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyolov2_r50vd_dcn_365e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --video_file={your video name}.mp4 --device=GPU --scaled=True --save_mot_txts
+# 用导出的PPYOLOE行人检测模型和PPLCNet ReID模型
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts --threshold=0.5
 ```

 **注意:**
+ - 运行前需要先改动`deploy/pptracking/python/tracker_config.yml`里的tracker为`DeepSORTTracker`。
- - 跟踪模型是对视频进行预测,不支持单张图的预测,默认保存跟踪结果可视化后的视频,可添加`--save_mot_txts`(对每个视频保存一个txt)或`--save_mot_txt_per_img`(对每张图片保存一个txt)表示保存跟踪结果的txt文件,或`--save_images`表示保存跟踪结果可视化图片。
+ - 跟踪模型是对视频进行预测,不支持单张图的预测,默认保存跟踪结果可视化后的视频,可添加`--save_mot_txts`表示对每个视频保存一个txt,或`--save_images`表示保存跟踪结果可视化图片。
 - 跟踪结果txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`。
- - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE的YOLOv3则为False,如果使用通用检测模型则为True。


 ## 适配其他检测器

@@ -184,7 +191,7 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/deepsort
 ```
 #### 2.加载检测模型和ReID模型去推理:
 ```
-CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_xxx_yyy.yml --video_file={your video name}.mp4 --scaled=True --save_videos
+CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_xxx_yyy.yml --video_file=mot17_demo.mp4 --scaled=True --save_videos
 ```
 #### 3.导出检测模型和ReID模型:
 ```bash
@@ -195,7 +202,7 @@ CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid
 ```
 #### 4.使用导出的检测模型和ReID模型去部署:
 ```
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/xxx./ --reid_model_dir=output_inference/deepsort_yyy/ --video_file={your video name}.mp4 --device=GPU --scaled=True --save_mot_txts
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/xxx/ --reid_model_dir=output_inference/deepsort_yyy/ --video_file=mot17_demo.mp4 --device=GPU --scaled=True --save_mot_txts
 ```
 **注意:**
 - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE的YOLOv3则为False,如果使用通用检测模型则为True。
diff --git a/configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml b/configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml
index 972a1d16975a587a162ad5354b784dffdd473e60..0af80a7d899f02ac4b66c5191b2616ed1db1aa8e 100644
--- a/configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml
+++ b/configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml
@@ -92,7 +92,6 @@ PPYOLOEHead:
   grid_cell_offset: 0.5
   static_assigner_epoch: -1 # 100
   use_varifocal_loss: True
-  eval_input_size: [640, 640]
   loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
   static_assigner:
     name: ATSSAssigner
diff --git a/configs/mot/deepsort/deepsort_ppyoloe_resnet.yml b/configs/mot/deepsort/deepsort_ppyoloe_resnet.yml
index cde2cf23cf4ea81ce67f7dbd08b4fe1fc4b77e15..d9692304b055040bb22c49a2f90e05e4e7ba53eb 100644
--- a/configs/mot/deepsort/deepsort_ppyoloe_resnet.yml
+++ b/configs/mot/deepsort/deepsort_ppyoloe_resnet.yml
@@ -91,7 +91,6 @@ PPYOLOEHead:
   grid_cell_offset: 0.5
   static_assigner_epoch: -1 # 100
   use_varifocal_loss: True
-  eval_input_size: [640, 640]
   loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
   static_assigner:
     name: ATSSAssigner
diff --git a/configs/mot/deepsort/detector/README_cn.md b/configs/mot/deepsort/detector/README_cn.md
index 1a984b5c358f2c12e77e3d12396bba22bb46793c..6ebe7de7949d1db3d4c4f72db5ad8147f12e1f3d 100644
--- a/configs/mot/deepsort/detector/README_cn.md
+++ b/configs/mot/deepsort/detector/README_cn.md
@@ -26,11 +26,11 @@
 通过如下命令一键式启动训练和评估
 ```bash
-job_name=ppyolov2_r50vd_dcn_365e_640x640_mot17half
+job_name=ppyoloe_crn_l_36e_640x640_mot17half
 config=configs/mot/deepsort/detector/${job_name}.yml
 log_dir=log_dir/${job_name}
 # 1. training
-python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp --fleet
+python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp
 # 2. evaluation
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c ${config} -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/${job_name}.pdparams
+CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c ${config} -o weights=https://paddledet.bj.bcebos.com/models/mot/${job_name}.pdparams
 ```
diff --git a/configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml b/configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml
index 4852d6b7e21924156826575dcc10a00cfecb5f80..a0501222c9f35d657826fb525e54bd7f4f663ae4 100644
--- a/configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml
+++ b/configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml
@@ -6,6 +6,7 @@ weights: output/ppyoloe_crn_l_36e_640x640_mot17half/model_final
 log_iter: 20
 snapshot_epoch: 2

+# schedule configuration for fine-tuning
 epoch: 36
 LearningRate:
@@ -15,7 +16,7 @@ LearningRate:
     max_epochs: 43
   - !LinearWarmup
     start_factor: 0.001
-    steps: 100
+    epochs: 1

 OptimizerBuilder:
   optimizer:
@@ -25,9 +26,11 @@ OptimizerBuilder:
     factor: 0.0005
     type: L2

+
 TrainReader:
   batch_size: 8

+# detector configuration
 architecture: YOLOv3
 norm_type: sync_bn
@@ -62,7 +65,6 @@ PPYOLOEHead:
   grid_cell_offset: 0.5
   static_assigner_epoch: -1 # 100
   use_varifocal_loss: True
-  eval_input_size: [640, 640]
   loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
   static_assigner:
     name: ATSSAssigner
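> 编辑者注:上面的schedule改动把warmup由固定100步改为1个epoch,warmup结束后按`max_epochs: 43`做余弦退火。下面用纯Python近似画出这条学习率曲线;`base_lr`与每epoch步数是假设值,余弦公式为示意,实际以配置文件和PaddleDetection的CosineDecay实现为准:

```python
import math

base_lr = 0.001        # 假设值,实际以配置文件中的base_lr为准
steps_per_epoch = 500  # 假设值,取决于样本数与batch_size
warmup_epochs, max_epochs, start_factor = 1, 43, 0.001

def lr_at(step):
    warmup_steps = warmup_epochs * steps_per_epoch
    if step < warmup_steps:
        # LinearWarmup:从base_lr*start_factor线性升到base_lr
        factor = start_factor + (1.0 - start_factor) * step / warmup_steps
        return base_lr * factor
    # CosineDecay:按max_epochs余弦退火到0(示意)
    t = min(step / float(max_epochs * steps_per_epoch), 1.0)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * t))

for e in (0, 1, 18, 36):
    print('epoch %d: lr=%.6f' % (e, lr_at(e * steps_per_epoch)))
```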
diff --git a/deploy/pptracking/python/README.md b/deploy/pptracking/python/README.md
index 46ff59a317ca08b2ef34197f5502f858189a8080..dc70b767bfa8f78c1caa94e4e67cbb9c75afb826 100644
--- a/deploy/pptracking/python/README.md
+++ b/deploy/pptracking/python/README.md
@@ -65,13 +65,12 @@ python deploy/pptracking/python/mot_jde_infer.py --model_dir=output_inference/fa
 - bdd100k车辆跟踪和多类别demo视频可从此链接下载:`wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/bdd100k_demo.mp4`

+
 ## 2. 对DeepSORT模型的导出和预测
 ### 2.1 导出预测模型
 Step 1:导出检测模型
 ```bash
-# 导出PPYOLOv2行人检测模型
-CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/ppyolov2_r50vd_dcn_365e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyolov2_r50vd_dcn_365e_640x640_mot17half.pdparams
-# 或导出PPYOLOe行人检测模型
+# 导出PPYOLOe行人检测模型
 CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
 ```
@@ -88,45 +87,41 @@ CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid
 # 下载行人跟踪demo视频:
 wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4

-# 用导出的PPYOLOv2行人检测模型和PPLCNet ReID模型
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyolov2_r50vd_dcn_365e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --threshold=0.5 --save_mot_txts --save_images
-# 或用导出的PPYOLOe行人检测模型和PPLCNet ReID模型
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --threshold=0.5 --save_mot_txts --save_images
+# 用导出的PPYOLOE行人检测模型和PPLCNet ReID模型
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts --threshold=0.5
 ```

 ### 2.3 用导出的模型基于Python去预测车辆跟踪
 ```bash
-# 下载车辆检测PicoDet导出的模型:
-wget https://paddledet.bj.bcebos.com/models/mot/deepsort/picodet_l_640_aic21mtmct_vehicle.tar
-tar -xvf picodet_l_640_aic21mtmct_vehicle.tar
-# 或者车辆检测PP-YOLOv2导出的模型:
-wget https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle.tar
-tar -xvf ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle.tar
+# 下载车辆demo视频
+wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/bdd100k_demo.mp4
+
+# 下载车辆检测PPYOLOE导出的模型:
+wget https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip
+unzip mot_ppyoloe_l_36e_ppvehicle.zip

 # 下载车辆ReID导出的模型:
 wget https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet_vehicle.tar
 tar -xvf deepsort_pplcnet_vehicle.tar

-# 用导出的PicoDet车辆检测模型和PPLCNet车辆ReID模型
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=picodet_l_640_aic21mtmct_vehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --tracker_config=deploy/pptracking/python/tracker_config.yml --device=GPU --threshold=0.5 --video_file={your video}.mp4 --save_mot_txts --save_images
-
-# 用导出的PP-YOLOv2车辆检测模型和PPLCNet车辆ReID模型
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --tracker_config=deploy/pptracking/python/tracker_config.yml --device=GPU --threshold=0.5 --video_file={your video}.mp4 --save_mot_txts --save_images
+# 用导出的PPYOLOE车辆检测模型和PPLCNet车辆ReID模型
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=mot_ppyoloe_l_36e_ppvehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --tracker_config=deploy/pptracking/python/tracker_config.yml --device=GPU --threshold=0.5 --video_file=bdd100k_demo.mp4 --save_mot_txts --save_images
 ```

 **注意:**
+ - 运行前需要手动修改`tracker_config.yml`的跟踪器类型为`type: DeepSORTTracker`。
 - 跟踪模型是对视频进行预测,不支持单张图的预测,默认保存跟踪结果可视化后的视频,可添加`--save_mot_txts`(对每个视频保存一个txt)或`--save_images`表示保存跟踪结果可视化图片。
 - 跟踪结果txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`。
 - `--threshold`表示结果可视化的置信度阈值,默认为0.5,低于该阈值的结果会被过滤掉,为了可视化效果更佳,可根据实际情况自行修改。
 - DeepSORT算法不支持多类别跟踪,只支持单类别跟踪,且ReID模型最好是与检测模型同一类别的物体训练过的,比如行人跟踪最好使用行人ReID模型,车辆跟踪最好使用车辆ReID模型。
- - 需要手动修改`tracker_config.yml`的跟踪器类型为`type: DeepSORTTracker`。
+

 ## 3. 对ByteTrack模型的导出和预测
 ### 3.1 导出预测模型
 ```bash
 # 导出PPYOLOe行人检测模型
-CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
+CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
 ```

 ### 3.2 用导出的模型基于Python去预测行人跟踪
@@ -135,27 +130,27 @@ CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/dete
 # 下载行人跟踪demo视频:
 wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4

 # 用导出的PPYOLOe行人检测模型
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --threshold=0.5 --save_mot_txts --save_images
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts

 # 用导出的PPYOLOe行人检测模型和PPLCNet ReID模型
 python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --threshold=0.5 --save_mot_txts --save_images
 ```

 **注意:**
+ - 运行前需要确认`tracker_config.yml`的跟踪器类型为`type: JDETracker`。
 - ByteTrack模型是加载导出的检测器和单独配置的`--tracker_config`文件运行的,为了实时跟踪所以不需要reid模型,`--reid_model_dir`表示reid导出模型的路径,默认为空,加不加具体视效果而定;
 - 跟踪模型是对视频进行预测,不支持单张图的预测,默认保存跟踪结果可视化后的视频,可添加`--save_mot_txts`(对每个视频保存一个txt)或`--save_images`表示保存跟踪结果可视化图片。
 - 跟踪结果txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`。
 - `--threshold`表示结果可视化的置信度阈值,默认为0.5,低于该阈值的结果会被过滤掉,为了可视化效果更佳,可根据实际情况自行修改。
+
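> 编辑者注:上面`--threshold`的过滤逻辑与本PR对`mot_sde_infer.py`中postprocess的改动一致:导出模型输出的boxes每行为`[class_id, score, x1, y1, x2, y2]`,按第1列的得分过滤。最小示意如下:

```python
import numpy as np

# 导出模型的检测输出:每行[class_id, score, x1, y1, x2, y2]
boxes = np.array([[0., 0.90, 10., 10., 50., 120.],
                  [0., 0.30, 60., 10., 90., 100.]])
threshold = 0.5

keep_idx = boxes[:, 1] > threshold  # 第1列为置信度score
boxes = boxes[keep_idx]
print(len(boxes))  # 1,低分框在进入跟踪器之前就被过滤掉
```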

 ## 4. 跨境跟踪模型的导出和预测
 ### 4.1 导出预测模型
 Step 1:下载导出的检测模型
 ```bash
-wget https://paddledet.bj.bcebos.com/models/mot/deepsort/picodet_l_640_aic21mtmct_vehicle.tar
-tar -xvf picodet_l_640_aic21mtmct_vehicle.tar
-# 或者
-wget https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle.tar
-tar -xvf ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle.tar
+# 下载车辆检测PPYOLOE导出的模型:
+wget https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip
+unzip mot_ppyoloe_l_36e_ppvehicle.zip
 ```
 Step 2:下载导出的ReID模型
 ```bash
@@ -169,13 +164,10 @@ tar -xvf deepsort_pplcnet_vehicle.tar
 wget https://paddledet.bj.bcebos.com/data/mot/demo/mtmct-demo.tar
 tar -xvf mtmct-demo.tar

-# 用导出的PicoDet车辆检测模型和PPLCNet车辆ReID模型
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=picodet_l_640_aic21mtmct_vehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --mtmct_dir=mtmct-demo --mtmct_cfg=mtmct_cfg.yml --device=GPU --threshold=0.5 --save_mot_txts --save_images
-
-# 用导出的PP-YOLOv2车辆检测模型和PPLCNet车辆ReID模型
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --mtmct_dir=mtmct-demo --mtmct_cfg=mtmct_cfg.yml --device=GPU --threshold=0.5 --save_mot_txts --save_images
-
+# 用导出的PPYOLOE车辆检测模型和PPLCNet车辆ReID模型
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=mot_ppyoloe_l_36e_ppvehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --mtmct_dir=mtmct-demo --mtmct_cfg=mtmct_cfg.yml --device=GPU --threshold=0.5 --save_mot_txts --save_images
 ```
+
 **注意:**
 - 跟踪模型是对视频进行预测,不支持单张图的预测,默认保存跟踪结果可视化后的视频,可添加`--save_mot_txts`(对每个视频保存一个txt),或`--save_images`表示保存跟踪结果可视化图片。
 - 跨镜头跟踪结果txt文件每行信息是`camera_id,frame,id,x1,y1,w,h,-1,-1`。
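> 编辑者注:跨镜头结果与单镜头结果的差别只在每行开头多了`camera_id`。下面的小例子按上面给出的行格式统计每个相机下出现的目标ID数;结果文件路径为假设,以实际输出为准:

```python
from collections import defaultdict

ids_per_camera = defaultdict(set)
# 每行格式:camera_id,frame,id,x1,y1,w,h,-1,-1
with open('output/mtmct_result.txt') as f:   # 路径为假设
    for line in f:
        vals = line.strip().split(',')
        cam, tid = int(vals[0]), int(vals[2])
        ids_per_camera[cam].add(tid)

for cam, ids in sorted(ids_per_camera.items()):
    print('camera %d: %d identities' % (cam, len(ids)))
```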
@@ -190,6 +182,7 @@ python deploy/pptracking/python/mot_sde_infer.py --model_dir=ppyolov2_r50vd_dcn_
 | 参数 | 是否必须|含义 |
 |-------|-------|----------|
 | --model_dir | Yes| 上述导出的模型路径 |
+| --reid_model_dir | Option| ReID导出的模型路径 |
 | --image_file | Option | 需要预测的图片 |
 | --image_dir | Option | 要预测的图片文件夹路径 |
 | --video_file | Option | 需要预测的视频 |
@@ -203,8 +196,10 @@ python deploy/pptracking/python/mot_sde_infer.py --model_dir=ppyolov2_r50vd_dcn_
 | --enable_mkldnn | Option | CPU预测中是否开启MKLDNN加速,默认为False |
 | --cpu_threads | Option| 设置cpu线程数,默认为1 |
 | --trt_calib_mode | Option| TensorRT是否使用校准功能,默认为False。使用TensorRT的int8功能时,需设置为True,使用PaddleSlim量化后的模型时需要设置为False |
-| --do_entrance_counting | Option | 是否统计出入口流量,默认为False |
-| --draw_center_traj | Option | 是否绘制跟踪轨迹,默认为False |
+| --save_mot_txts | Option | 跟踪任务是否保存txt结果文件,默认为False |
+| --save_images | Option | 跟踪任务是否保存视频的可视化图片,默认为False |
+| --do_entrance_counting | Option | 跟踪任务是否统计出入口流量,默认为False |
+| --draw_center_traj | Option | 跟踪任务是否绘制跟踪轨迹,默认为False |
 | --mtmct_dir | Option | 需要进行MTMCT跨境头跟踪预测的图片文件夹路径,默认为None |
 | --mtmct_cfg | Option | 需要进行MTMCT跨境头跟踪预测的配置文件路径,默认为None |
diff --git a/deploy/pptracking/python/det_infer.py b/deploy/pptracking/python/det_infer.py
index 4a682093df16c913e443131bacba8e892f3e10c0..c52879453027e099d25cd0388083eb57d1f8aeea 100644
--- a/deploy/pptracking/python/det_infer.py
+++ b/deploy/pptracking/python/det_infer.py
@@ -32,7 +32,7 @@ sys.path.insert(0, parent_path)
 from benchmark_utils import PaddleInferBenchmark
 from picodet_postprocess import PicoDetPostProcess
-from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, decode_image
+from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, Pad, decode_image
 from mot.visualize import visualize_box_mask
 from mot_utils import argsparser, Timer, get_current_memory_mb
diff --git a/deploy/pptracking/python/mot_sde_infer.py b/deploy/pptracking/python/mot_sde_infer.py
index da1b3caed2e534d4e91dd9e18c032b30baa70d19..7b5c8852defeea0fc51bd6d45c25490a794011a2 100644
--- a/deploy/pptracking/python/mot_sde_infer.py
+++ b/deploy/pptracking/python/mot_sde_infer.py
@@ -186,7 +186,9 @@ class SDE_Detector(Detector):

     def postprocess(self, inputs, result):
         # postprocess output of predictor
-        np_boxes_num = result['boxes_num']
+        # filter boxes by detection score (column 1 of [class_id, score, x1, y1, x2, y2])
+        keep_idx = result['boxes'][:, 1] > self.threshold
+        result['boxes'] = result['boxes'][keep_idx]
+        np_boxes_num = [len(result['boxes'])]
         if np_boxes_num[0] <= 0:
             print('[WARNNING] No object detected.')
             result = {'boxes': np.zeros([0, 6]), 'boxes_num': [0]}
@@ -520,8 +522,8 @@ class SDE_Detector(Detector):
         # bs=1 in MOT model
         online_tlwhs, online_scores, online_ids = mot_results[0]

-        # NOTE: just implement flow statistic for one class
-        if num_classes == 1:
+        # flow statistic for one class, and only for bytetracker
+        if num_classes == 1 and not self.use_deepsort_tracker:
             result = (frame_id + 1, online_tlwhs[0], online_scores[0],
                       online_ids[0])
             statistic = flow_statistic(
diff --git a/deploy/pptracking/python/preprocess.py b/deploy/pptracking/python/preprocess.py
index 2df5df9c3c3dc0dcb90b0224bf0d8e022a47903e..427479c814d6b3250921ead6b7b2a07ea6352173 100644
--- a/deploy/pptracking/python/preprocess.py
+++ b/deploy/pptracking/python/preprocess.py
@@ -245,6 +245,34 @@ class LetterBoxResize(object):
         return im, im_info


+class Pad(object):
+    def __init__(self, size, fill_value=[114.0, 114.0, 114.0]):
+        """
+        Pad image to a specified size.
+        Args:
+            size (list[int]): image target size
+            fill_value (list[float]): rgb value of pad area, default (114.0, 114.0, 114.0)
+        """
+        super(Pad, self).__init__()
+        if isinstance(size, int):
+            size = [size, size]
+        self.size = size
+        self.fill_value = fill_value
+
+    def __call__(self, im, im_info):
+        im_h, im_w = im.shape[:2]
+        h, w = self.size
+        if h == im_h and w == im_w:
+            im = im.astype(np.float32)
+            return im, im_info
+
+        # pad on the bottom/right, assuming h >= im_h and w >= im_w
+        canvas = np.ones((h, w, 3), dtype=np.float32)
+        canvas *= np.array(self.fill_value, dtype=np.float32)
+        canvas[0:im_h, 0:im_w, :] = im.astype(np.float32)
+        im = canvas
+        return im, im_info
+
+
 def preprocess(im, preprocess_ops):
     # process image by preprocess_ops
     im_info = {
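> 编辑者注:上面新增的`Pad`预处理算子把图像放在画布左上角,右、下两侧用114(YOLOX常用的padding值)补到目标尺寸。下面的用法示意假设在`deploy/pptracking/python/`目录下运行,可以直接导入该文件:

```python
import numpy as np
from preprocess import Pad  # 假设可从deploy/pptracking/python/preprocess.py导入

im = np.random.randint(0, 255, (608, 1088, 3)).astype(np.float32)
op = Pad(size=[640, 1152])       # 目标尺寸[h, w]
im_out, _ = op(im, im_info={})
print(im_out.shape)              # (640, 1152, 3)
print(im_out[-1, -1])            # [114. 114. 114.],右下角为填充值
```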