Unverified commit 5d3e37d2, authored by George Ni, committed by GitHub

[MOT] update mot doc, mot.yml (#3233)

* update mot doc config, test=document_fix

* update citations, test=document_fix

* update title, test=document_fix
Parent f94013f0
configs/datasets/mot.yml
@@ -13,25 +13,15 @@ MOTDataZoo: {
  'demo': ['MOT16-02'],
}

+# for MOT training
TrainDataset:
  !MOTDataSet
    dataset_dir: dataset/mot
    image_lists: ['mot17.train', 'caltech.all', 'cuhksysu.train', 'prw.train', 'citypersons.train', 'eth.train']
    data_fields: ['image', 'gt_bbox', 'gt_class', 'gt_ide']

-# for detection or reid evaluation, no use in MOT evaluation
-EvalDataset:
-  !MOTDataSet
-    dataset_dir: dataset/mot
-    image_lists: ['citypersons.val', 'caltech.val'] # for detection evaluation
-    # image_lists: ['caltech.10k.val', 'cuhksysu.val', 'prw.val'] # for reid evaluation
-    data_fields: ['image', 'gt_bbox', 'gt_class', 'gt_ide']
-
-# for detection inference, no use in MOT inference
-TestDataset:
-  !ImageFolder
-    dataset_dir: dataset/mot
-
+# for MOT evaluation
+# If you want to change the MOT evaluation dataset, please modify 'task' and 'data_root'
EvalMOTDataset:
  !MOTImageFolder
    task: MOT16_train
@@ -39,7 +29,22 @@ EvalMOTDataset:
    data_root: MOT16/images/train
    keep_ori_im: False # set True if save visualization images or video

+# for MOT video inference
TestMOTDataset:
  !MOTVideoDataset
    dataset_dir: dataset/mot
    keep_ori_im: True # set True if save visualization images or video
+
+# for detection or reid evaluation, following the JDE paper, but no use in MOT evaluation
+EvalDataset:
+  !MOTDataSet
+    dataset_dir: dataset/mot
+    image_lists: ['citypersons.val', 'caltech.val'] # for detection evaluation
+    # image_lists: ['caltech.10k.val', 'cuhksysu.val', 'prw.val'] # for reid evaluation
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'gt_ide']
+
+# for detection inference, no use in MOT inference
+TestDataset:
+  !ImageFolder
+    dataset_dir: dataset/mot
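The `!MOTDataSet` / `!MOTImageFolder` markers in the config above are custom YAML tags that PaddleDetection's config system resolves into registered dataset classes, so a plain `yaml.safe_load` would reject them. As a rough sketch only (the `AnyTagLoader` class and `_any_tag` helper below are hypothetical, not part of the repo), the structure can still be inspected by mapping any unknown tag onto a plain dict:

```python
import yaml

# Hypothetical minimal loader: turn any "!SomeTag" node into a plain dict
# annotated with its tag name. PaddleDetection's real config system
# instantiates registered classes instead of doing this.
class AnyTagLoader(yaml.SafeLoader):
    pass

def _any_tag(loader, tag_suffix, node):
    if isinstance(node, yaml.MappingNode):
        data = loader.construct_mapping(node, deep=True)
    else:
        data = {}
    data["_type"] = tag_suffix  # remember which dataset class the tag named
    return data

AnyTagLoader.add_multi_constructor("!", _any_tag)

cfg_text = """
EvalMOTDataset:
  !MOTImageFolder
    task: MOT16_train
    dataset_dir: dataset/mot
    data_root: MOT16/images/train
    keep_ori_im: False
"""
cfg = yaml.load(cfg_text, Loader=AnyTagLoader)
print(cfg["EvalMOTDataset"]["_type"])  # MOTImageFolder
```

This only shows how the tagged blocks nest; the actual loading behavior lives in the project's config registry.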
@@ -244,13 +244,6 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/fairmot/fairmot_
## Citations
```
-@article{wang2019towards,
-  title={Towards Real-Time Multi-Object Tracking},
-  author={Wang, Zhongdao and Zheng, Liang and Liu, Yixuan and Wang, Shengjin},
-  journal={arXiv preprint arXiv:1909.12605},
-  year={2019}
-}
@inproceedings{Wojke2017simple,
  title={Simple Online and Realtime Tracking with a Deep Association Metric},
  author={Wojke, Nicolai and Bewley, Alex and Paulus, Dietrich},
@@ -277,4 +270,11 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/fairmot/fairmot_
  journal={arXiv preprint arXiv:1909.12605},
  year={2019}
}
+@article{zhang2020fair,
+  title={FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking},
+  author={Zhang, Yifu and Wang, Chunyu and Wang, Xinggang and Zeng, Wenjun and Liu, Wenyu},
+  journal={arXiv preprint arXiv:2004.01888},
+  year={2020}
+}
```
@@ -241,13 +241,6 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/fairmot/fairmot_
## Citations
```
-@article{wang2019towards,
-  title={Towards Real-Time Multi-Object Tracking},
-  author={Wang, Zhongdao and Zheng, Liang and Liu, Yixuan and Wang, Shengjin},
-  journal={arXiv preprint arXiv:1909.12605},
-  year={2019}
-}
@inproceedings{Wojke2017simple,
  title={Simple Online and Realtime Tracking with a Deep Association Metric},
  author={Wojke, Nicolai and Bewley, Alex and Paulus, Dietrich},
@@ -274,4 +267,11 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/fairmot/fairmot_
  journal={arXiv preprint arXiv:1909.12605},
  year={2019}
}
+@article{zhang2020fair,
+  title={FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking},
+  author={Zhang, Yifu and Wang, Chunyu and Wang, Xinggang and Zeng, Wenjun and Liu, Wenyu},
+  journal={arXiv preprint arXiv:2004.01888},
+  year={2020}
+}
```
English | [简体中文](README_cn.md)

-# DeepSORT (Simple Online and Realtime Tracking with a Deep Association Metric)
+# DeepSORT (Deep Cosine Metric Learning for Person Re-identification)

## Table of Contents
- [Introduction](#Introduction)
...
简体中文 | [English](README.md)

-# DeepSORT
+# DeepSORT (Deep Cosine Metric Learning for Person Re-identification)

## Contents
- [Introduction](#简介)
...
@@ -77,10 +77,10 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/fairmot/fairmot_
## Citations
```
-@article{wang2019towards,
-  title={Towards Real-Time Multi-Object Tracking},
-  author={Wang, Zhongdao and Zheng, Liang and Liu, Yixuan and Wang, Shengjin},
-  journal={arXiv preprint arXiv:1909.12605},
-  year={2019}
-}
+@article{zhang2020fair,
+  title={FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking},
+  author={Zhang, Yifu and Wang, Chunyu and Wang, Xinggang and Zeng, Wenjun and Liu, Wenyu},
+  journal={arXiv preprint arXiv:2004.01888},
+  year={2020}
+}
```
English | [简体中文](README_cn.md)

-# JDE (Joint Detection and Embedding)
+# JDE (Towards Real-Time Multi-Object Tracking)

## Table of Contents
- [Introduction](#Introduction)
...
@@ -10,7 +10,7 @@ English | [简体中文](README_cn.md)
## Introduction
-- [JDE](https://arxiv.org/abs/1909.12605) (Joint Detection and Embedding) learns the object detection task and appearance embedding task simutaneously in a shared neural network. And the detection results and the corresponding embeddings are also outputed at the same time. JDE original paper is based on an Anchor Base detector YOLOv3 , adding a new ReID branch to learn embeddings. The training process is constructed as a multi-task learning problem, taking into account both accuracy and speed.
+- [JDE](https://arxiv.org/abs/1909.12605) (Joint Detection and Embedding) learns the object detection task and the appearance embedding task simultaneously in a shared neural network, outputting detection results and their corresponding embeddings at the same time. The original JDE paper builds on the anchor-based detector YOLOv3, adding a new ReID branch to learn embeddings. Training is formulated as a multi-task learning problem that balances accuracy and speed.
<div align="center">
  <img src="../../../docs/images/mot16_jde.gif" width=500 />
...
@@ -35,6 +35,7 @@ English | [简体中文](README_cn.md)
| DarkNet53(paper) | 864x480 | 62.1 | 56.9 | 1608 | - | - | - | - | - |
| DarkNet53 | 864x480 | 63.2 | 57.7 | 1966 | 10070 | 55081 | - | [model](https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_864x480.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/jde/jde_darknet53_30e_864x480.yml) |
| DarkNet53 | 576x320 | 59.1 | 56.4 | 1911 | 10923 | 61789 | - | [model](https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_576x320.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/jde/jde_darknet53_30e_576x320.yml) |
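As a reading aid for the table above: the MOTA column follows the CLEAR MOT definition, combining misses (FN), false positives (FP), and identity switches (IDS) over the total number of ground-truth boxes. A minimal sketch of the formula, with made-up numbers since the GT total for these runs is not listed in the table:

```python
def mota(fn: int, fp: int, ids: int, num_gt: int) -> float:
    """CLEAR MOT accuracy: 1 - (FN + FP + IDS) / total ground-truth boxes."""
    return 1.0 - (fn + fp + ids) / num_gt

# Illustrative numbers only, not taken from the table above.
print(round(mota(fn=20, fp=10, ids=2, num_gt=100), 2))  # 0.68
```

Note that MOTA can go negative when errors outnumber ground-truth boxes, which is why it is reported as a percentage rather than clamped to [0, 1].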
**Notes:**
JDE used 8 GPUs for training with a mini-batch size of 4 per GPU, and was trained for 30 epochs.
...
@@ -59,6 +60,16 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/jde/jde_darknet53
# use the checkpoint saved during training
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml -o weights=output/jde_darknet53_30e_1088x608/model_final.pdparams
```
+**Notes:**
+The default evaluation dataset is the MOT-16 Train Set. To change the evaluation dataset, refer to the following snippet and modify `configs/datasets/mot.yml`:
+```
+EvalMOTDataset:
+  !MOTImageFolder
+    task: MOT17_train
+    dataset_dir: dataset/mot
+    data_root: MOT17/images/train
+    keep_ori_im: False # set True if save visualization images or video
+```
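Since switching the evaluation dataset only means editing the `task` and `data_root` keys, a tiny helper script can do it in place. This is a sketch, not repo tooling: `switch_eval_dataset` is a hypothetical name, and it uses plain text substitution so the custom `!MOTImageFolder` tag survives untouched.

```python
import re

def switch_eval_dataset(yml_text: str, task: str, data_root: str) -> str:
    """Rewrite the 'task' and 'data_root' values in a mot.yml-style config.

    Text-level substitution keeps custom YAML tags like !MOTImageFolder
    and comments intact, which a parse/re-dump round trip would lose.
    """
    yml_text = re.sub(r"(?m)^(\s*task:\s*).*$", r"\g<1>" + task, yml_text)
    yml_text = re.sub(r"(?m)^(\s*data_root:\s*).*$", r"\g<1>" + data_root, yml_text)
    return yml_text

original = """EvalMOTDataset:
  !MOTImageFolder
    task: MOT16_train
    dataset_dir: dataset/mot
    data_root: MOT16/images/train
    keep_ori_im: False
"""
updated = switch_eval_dataset(original, "MOT17_train", "MOT17/images/train")
print("task: MOT17_train" in updated)  # True
```

After editing the real `configs/datasets/mot.yml`, rerun the `tools/eval_mot.py` command above to evaluate on the new dataset.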
### 3. Inference
...
简体中文 | [English](README.md)

-# JDE (Joint Detection and Embedding)
+# JDE (Towards Real-Time Multi-Object Tracking)

## Contents
- [Introduction](#简介)
...
@@ -61,6 +61,16 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/jde/jde_darknet53
# use the checkpoint saved during training
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml -o weights=output/jde_darknet53_30e_1088x608/model_final.pdparams
```
+**Note:**
+Evaluation defaults to the MOT-16 Train Set. To switch the evaluation dataset, refer to the following snippet and modify `configs/datasets/mot.yml`:
+```
+EvalMOTDataset:
+  !MOTImageFolder
+    task: MOT17_train
+    dataset_dir: dataset/mot
+    data_root: MOT17/images/train
+    keep_ori_im: False # set True if save visualization images or video
+```
### 3. Inference
...