Unverified commit 0b1b74ad, authored by George Ni, committed by GitHub

[MOT] JDE: others (#2788)

* add config for jde and deepsort, add engine metric tools

* add mot_metrics

* fix configs engine and others

* fix configs others

* move configs

* fix ap_per_class, fix doc

* fix test_mode, seqs

* fix test_mode to metric

* fix JDE arch metric

* clean requirement

* add to requirement
Parent 0947171b
metric: MOTDet
num_classes: 1
MOTDataZoo: {
'MOT15_train': ['Venice-2', 'KITTI-13', 'KITTI-17', 'ETH-Bahnhof', 'ETH-Sunnyday', 'PETS09-S2L1', 'TUD-Campus', 'TUD-Stadtmitte', 'ADL-Rundle-6', 'ADL-Rundle-8', 'ETH-Pedcross2'],
'MOT16_train': ['MOT16-02', 'MOT16-04', 'MOT16-05', 'MOT16-09', 'MOT16-10', 'MOT16-11', 'MOT16-13'],
'MOT17_train': ['MOT17-02-SDP', 'MOT17-04-SDP', 'MOT17-05-SDP', 'MOT17-09-SDP', 'MOT17-10-SDP', 'MOT17-11-SDP', 'MOT17-13-SDP'],
'MOT20_train': ['MOT20-01', 'MOT20-02', 'MOT20-03', 'MOT20-05'],
'demo': ['MOT16-02'],
}
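For context, this map is what `tools/eval_mot.py` (shown later in this diff) indexes with the `EvalMOTDataset.task` key. A minimal sketch of that lookup, mirroring the `run()` function in that tool and assuming the track config path used in the README below:

```python
# Sketch of how MOTDataZoo is consumed: the task name selects the list
# of sequences to evaluate, and data_root is joined onto dataset_dir.
from ppdet.core.workspace import load_config

cfg = load_config('configs/mot/jde/jde_darknet53_30e_1088x608_track.yml')
task = cfg['EvalMOTDataset'].task            # e.g. 'MOT16_train'
seqs = cfg['MOTDataZoo'][task]               # ['MOT16-02', 'MOT16-04', ...]
data_root = '{}/{}'.format(cfg['EvalMOTDataset'].dataset_dir,
                           cfg['EvalMOTDataset'].data_root)
```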
TrainDataset:
!MOTDataSet
dataset_dir: dataset/mot
......@@ -19,8 +27,10 @@ TestDataset:
dataset_dir: dataset/mot
EvalMOTDataset:
!ImageFolder
!MOTImageFolder
task: MOT16_train
dataset_dir: dataset/mot
data_root: MOT16/images/train
keep_ori_im: False # set True to save visualization images or video
TestMOTDataset:
......
English | [简体中文](README_cn.md)
# JDE (Towards-Realtime-MOT)
## Table of Contents
- [Introduction](#Introduction)
- [Model Zoo](#Model_Zoo)
- [Getting Started](#Getting_Started)
- [Citations](#Citations)
## Introduction
[Joint Detection and Embedding](https://arxiv.org/abs/1909.12605) (JDE) is a fast, high-performance multiple-object tracker that learns the object detection task and the appearance embedding task simultaneously in a shared neural network.
JDE achieved 64.4 MOTA on the MOT-16 test set.
<div align="center">
<img src="../../../../docs/images/mot16_jde.gif" width=500 />
</div>
## Model Zoo
### JDE on MOT-16 training set
| backbone | input shape | MOTA | IDF1 | IDS | FP | FN | FPS | download | config |
| :-----------------| :------- | :----: | :----: | :---: | :----: | :---: | :---: |:---: | :---: |
| DarkNet53(paper) | 1088x608 | 74.8 | 67.3 | 1189 | 5558 | 21505 | 22.2 | ---- | ---- |
| DarkNet53 | 1088x608 | 73.2 | 69.4 | 1320 | 6613 | 21629 | - |[model](https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/jde/jde_darknet53_30e_1088x608.yml) |
**Notes:**
JDE was trained on 8 GPUs with a mini-batch size of 4 per GPU for 30 epochs.
## Getting Started
### 1. Training
Train JDE on 8 GPUs with the following command:
```bash
python -m paddle.distributed.launch --log_dir=./jde_darknet53_30e_1088x608/ --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml &>jde_darknet53_30e_1088x608.log 2>&1 &
```
### 2. Evaluation
Evaluate the detection module of JDE on the val dataset on a single GPU with the following commands:
```bash
# use weights released in PaddleDetection model zoo
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams
# use saved checkpoint in training
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml -o weights=output/jde_darknet53_30e_1088x608/model_final
```
Evaluate the ReID module of JDE on the val dataset on a single GPU with the following commands:
```bash
# use weights released in PaddleDetection model zoo
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml -o metric='ReID' weights=https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams
# use saved checkpoint in training
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml -o metric='ReID' weights=output/jde_darknet53_30e_1088x608/model_final
```
Evaluate the tracking performance of JDE on the val dataset on a single GPU with the following commands:
```bash
# use weights released in PaddleDetection model zoo
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/jde/jde_darknet53_30e_1088x608_track.yml -o metric='MOT' weights=https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams
# use saved checkpoint in training
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/jde/jde_darknet53_30e_1088x608_track.yml -o metric='MOT' weights=output/jde_darknet53_30e_1088x608/model_final
```
### 3. Inference
Run inference on images on a single GPU with the following commands. Use `--infer_img` to infer a single image and `--infer_dir` to infer all images in a directory.
```bash
# inference single image
CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/mot/jde/jde_darknet53_30e_1088x608_track.yml -o metric='MOT' weights=https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams --infer_img=./demo/000000014439.jpg
# inference all images in the directory
CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/mot/jde/jde_darknet53_30e_1088x608_track.yml -o metric='MOT' weights=https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams --infer_dir=./demo
```
Run inference on a video on a single GPU with the following command.
```bash
# inference on video
CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/jde/jde_darknet53_30e_1088x608_track.yml -o metric='MOT' weights=https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams --video_file={your video name}.mp4
```
## Citations
```
@article{wang2019towards,
title={Towards Real-Time Multi-Object Tracking},
author={Wang, Zhongdao and Zheng, Liang and Liu, Yixuan and Wang, Shengjin},
journal={arXiv preprint arXiv:1909.12605},
year={2019}
}
```
architecture: JDE
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/DarkNet53_pretrained.pdparams
find_unused_parameters: True
JDE:
detector: YOLOv3
reid: JDEEmbeddingHead
tracker: JDETracker
YOLOv3:
backbone: DarkNet
neck: YOLOv3FPN
yolo_head: YOLOv3Head
post_process: JDEBBoxPostProcess
for_mot: True
DarkNet:
depth: 53
return_idx: [2, 3, 4]
freeze_norm: True
YOLOv3FPN:
freeze_norm: True
YOLOv3Head:
anchors: [[128,384], [180,540], [256,640], [512,640],
[32,96], [45,135], [64,192], [90,271],
[8,24], [11,34], [16,48], [23,68]]
anchor_masks: [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
loss: JDEDetectionLoss
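The twelve anchors above are split across the three FPN levels by `anchor_masks`. A small plain-Python sketch of that grouping (the resulting per-level lists match the `Gt2JDETargetThres` reader config later in this diff, with the largest anchors on the stride-32 level):

```python
# Group the flat anchor list by anchor_masks; each group serves one FPN
# level (downsample ratios 32, 16, 8).
anchors = [[128, 384], [180, 540], [256, 640], [512, 640],
           [32, 96], [45, 135], [64, 192], [90, 271],
           [8, 24], [11, 34], [16, 48], [23, 68]]
anchor_masks = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]

per_level = [[anchors[i] for i in mask] for mask in anchor_masks]
# per_level[0] -> [[128, 384], [180, 540], ...] used at stride 32, and so on.
```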
JDEBBoxPostProcess:
decode:
name: JDEBox
conf_thresh: 0.3
downsample_ratio: 32
nms:
name: MultiClassNMS
keep_top_k: 500
score_threshold: 0.01
nms_threshold: 0.5
nms_top_k: 2000
normalized: true
JDEEmbeddingHead:
anchor_levels: 3
anchor_scales: 4
embedding_dim: 512
emb_loss: JDEEmbeddingLoss
jde_loss: JDELoss
JDETracker:
det_thresh: 0.3
track_buffer: 30
min_box_area: 200
motion: KalmanFilter
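As a rough guide to `track_buffer`: `_eval_seq_jde` (later in this diff) rescales it by the sequence frame rate to get the number of frames a lost track is kept alive. A hedged illustration, not extra config:

```python
# max_time_lost is derived per sequence as in _eval_seq_jde:
#   tracker.max_time_lost = int(frame_rate / 30.0 * tracker.track_buffer)
track_buffer = 30
for frame_rate in (14, 25, 30):
    print(frame_rate, int(frame_rate / 30.0 * track_buffer))  # -> 14, 25, 30
```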
worker_num: 2
TrainReader:
sample_transforms:
- Decode: {}
- AugmentHSV: {}
- LetterBoxResize: {target_size: [608, 1088]}
- RandomAffine: {}
- RandomFlip: {}
- BboxXYXY2XYWH: {}
- NormalizeBox: {}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_transforms:
- Gt2JDETargetThres:
anchor_masks: [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
anchors: [[[128,384], [180,540], [256,640], [512,640]],
[[32,96], [45,135], [64,192], [90,271]],
[[8,24], [11,34], [16,48], [23,68]]]
downsample_ratios: [32, 16, 8]
ide_thresh: 0.5
fg_thresh: 0.5
bg_thresh: 0.4
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- LetterBoxResize: {target_size: [608, 1088]}
- BboxXYXY2XYWH: {}
- NormalizeBox: {}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_transforms:
- Gt2JDETargetMax:
anchor_masks: [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
anchors: [[[128,384], [180,540], [256,640], [512,640]],
[[32,96], [45,135], [64,192], [90,271]],
[[8,24], [11,34], [16,48], [23,68]]]
downsample_ratios: [32, 16, 8]
max_iou_thresh: 0.60
- BboxCXCYWH2XYXY: {}
- Norm2PixelBbox: {}
batch_size: 1
drop_empty: false
TestReader:
inputs_def:
image_shape: [3, 608, 1088]
sample_transforms:
- Decode: {}
- LetterBoxResize: {target_size: [608, 1088]}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_size: 1
EvalMOTReader:
sample_transforms:
- Decode: {}
- LetterBoxResize: {target_size: [608, 1088]}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_size: 1
TestMOTReader:
inputs_def:
image_shape: [3, 608, 1088]
sample_transforms:
- Decode: {}
- LetterBoxResize: {target_size: [608, 1088]}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_size: 1
epoch: 30
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [15, 22]
use_warmup: True
- !BurninWarmup
steps: 1000
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0001
type: L2
_BASE_: [
'../../datasets/mot.yml',
'../../runtime.yml',
'_base_/optimizer_30e.yml',
'_base_/jde_darknet53.yml',
'_base_/jde_reader_1088x608.yml',
]
weights: output/jde_darknet53_30e_1088x608/model_final
JDE:
detector: YOLOv3
reid: JDEEmbeddingHead
tracker: JDETracker
metric: 'MOT'
YOLOv3:
backbone: DarkNet
neck: YOLOv3FPN
yolo_head: YOLOv3Head
post_process: JDEBBoxPostProcess
for_mot: True
YOLOv3Head:
anchors: [[128,384], [180,540], [256,640], [512,640],
[32,96], [45,135], [64,192], [90,271],
[8,24], [11,34], [16,48], [23,68]]
anchor_masks: [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
loss: JDEDetectionLoss
JDETracker:
det_thresh: 0.3
track_buffer: 30
min_box_area: 200
motion: KalmanFilter
JDEBBoxPostProcess:
decode:
name: JDEBox
conf_thresh: 0.5
downsample_ratio: 32
nms:
name: MultiClassNMS
keep_top_k: 500
score_threshold: 0.01
nms_threshold: 0.4
nms_top_k: 2000
normalized: true
......@@ -87,13 +87,8 @@ class DetDataset(Dataset):
return self.transform(roidb)
def check_or_download_dataset(self):
if isinstance(self.anno_path, list):
for path in self.anno_path:
self.dataset_dir = get_dataset_path(self.dataset_dir, path,
self.image_dir)
else:
self.dataset_dir = get_dataset_path(self.dataset_dir,
self.anno_path, self.image_dir)
self.dataset_dir = get_dataset_path(self.dataset_dir, self.anno_path,
self.image_dir)
def set_kwargs(self, **kwargs):
self.mixup_epoch = kwargs.get('mixup_epoch', -1)
......@@ -139,18 +134,19 @@ class ImageFolder(DetDataset):
def __init__(self,
dataset_dir=None,
image_dir=None,
anno_path=None,
sample_num=-1,
use_default_label=None,
keep_ori_im=False,
**kwargs):
super(ImageFolder, self).__init__(
dataset_dir,
image_dir,
anno_path,
sample_num=sample_num,
use_default_label=use_default_label)
self.keep_ori_im = keep_ori_im
self._imid2path = {}
self.roidbs = None
self.sample_num = sample_num
def check_or_download_dataset(self):
return
......@@ -182,8 +178,6 @@ class ImageFolder(DetDataset):
if self.sample_num > 0 and ct >= self.sample_num:
break
rec = {'im_id': np.array([ct]), 'im_file': image}
if self.keep_ori_im:
rec.update({'keep_ori_im': 1})
self._imid2path[ct] = image
ct += 1
records.append(rec)
......
......@@ -16,8 +16,11 @@ import os
import cv2
import numpy as np
from collections import OrderedDict
from .dataset import DetDataset
try:
from collections.abc import Sequence
except Exception:
from collections import Sequence
from .dataset import DetDataset, _make_dataset, _is_valid_file
from ppdet.core.workspace import register, serializable
from ppdet.utils.logger import setup_logger
logger = setup_logger(__name__)
......@@ -33,8 +36,7 @@ class MOTDataSet(DetDataset):
image_lists (str|list): mot data image lists, multi-source MOT dataset.
data_fields (list): key name of data dictionary, at least have 'image'.
sample_num (int): number of samples to load, -1 means all.
label_list (str): if use_default_label is False, will load
mapping between category and class index.
Notes:
The MOT dataset root directory is organized as follows:
dataset/mot
......@@ -72,17 +74,17 @@ class MOTDataSet(DetDataset):
dataset_dir=None,
image_lists=[],
data_fields=['image'],
sample_num=-1,
label_list=None):
sample_num=-1):
super(MOTDataSet, self).__init__(
dataset_dir=dataset_dir,
data_fields=data_fields,
sample_num=sample_num)
self.dataset_dir = dataset_dir
self.image_lists = image_lists
self.label_list = label_list
if isinstance(self.image_lists, str):
self.image_lists = [self.image_lists]
self.roidbs = None
self.cname2cid = None
def get_anno(self):
if self.image_lists == []:
......@@ -127,7 +129,7 @@ class MOTDataSet(DetDataset):
continue
else:
# self.img_files[data_name] each line following this:
# {self.dataset_dir}/MOT17/images/..
# {self.dataset_dir}/MOT17/images/...
first_path = self.img_files[data_name][0]
data_dir = first_path.replace(self.dataset_dir,
'').split('/')[1]
......@@ -183,21 +185,8 @@ class MOTDataSet(DetDataset):
logger.info('identity start index: {}'.format(self.tid_start_index))
logger.info('=' * 80)
# mapping category name to class id
# first_class:0, second_class:1, ...
records = []
if self.label_list:
label_list_path = os.path.join(self.dataset_dir, self.label_list)
if not os.path.exists(label_list_path):
raise ValueError("label_list {} does not exists".format(
label_list_path))
with open(label_list_path, 'r') as fr:
label_id = 0
for line in fr.readlines():
cname2cid[line.strip()] = label_id
label_id += 1
else:
cname2cid = mot_label()
cname2cid = mot_label()
for img_index in range(self.total_imgs):
for i, (k, v) in enumerate(self.img_start_index.items()):
......@@ -255,6 +244,71 @@ def mot_label():
return labels_map
@register
@serializable
class MOTImageFolder(DetDataset):
def __init__(self,
task,
dataset_dir=None,
data_root=None,
image_dir=None,
sample_num=-1,
keep_ori_im=False,
**kwargs):
super(MOTImageFolder, self).__init__(
dataset_dir, image_dir, sample_num=sample_num)
self.task = task
self.data_root = data_root
self.keep_ori_im = keep_ori_im
self._imid2path = {}
self.roidbs = None
def check_or_download_dataset(self):
return
def parse_dataset(self, ):
if not self.roidbs:
self.roidbs = self._load_images()
def _parse(self):
image_dir = self.image_dir
if not isinstance(image_dir, Sequence):
image_dir = [image_dir]
images = []
for im_dir in image_dir:
if os.path.isdir(im_dir):
im_dir = os.path.join(self.dataset_dir, im_dir)
images.extend(_make_dataset(im_dir))
elif os.path.isfile(im_dir) and _is_valid_file(im_dir):
images.append(im_dir)
return images
def _load_images(self):
images = self._parse()
ct = 0
records = []
for image in images:
assert image != '' and os.path.isfile(image), \
"Image {} not found".format(image)
if self.sample_num > 0 and ct >= self.sample_num:
break
rec = {'im_id': np.array([ct]), 'im_file': image}
if self.keep_ori_im:
rec.update({'keep_ori_im': 1})
self._imid2path[ct] = image
ct += 1
records.append(rec)
assert len(records) > 0, "No image file found"
return records
def get_imid2path(self):
return self._imid2path
def set_images(self, images):
self.image_dir = images
self.roidbs = self._load_images()
def _is_valid_video(f, extensions=('.mp4', '.avi', '.mov', '.rmvb', '.flv')):
return f.lower().endswith(extensions)
......
......@@ -14,6 +14,8 @@
from . import trainer
from .trainer import *
from . import tracker
from .tracker import *
from . import callbacks
from .callbacks import *
......@@ -22,5 +24,6 @@ from . import env
from .env import *
__all__ = trainer.__all__ \
+ tracker.__all__ \
+ callbacks.__all__ \
+ env.__all__
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import cv2
import glob
import paddle
import numpy as np
from ppdet.core.workspace import create
from ppdet.utils.checkpoint import load_weight, load_pretrain_weight
from ppdet.modeling.mot.utils import Timer, load_det_results
from ppdet.modeling.mot import visualization as mot_vis
from ppdet.metrics import Metric, MOTMetric
import ppdet.utils.stats as stats
from .callbacks import Callback, ComposeCallback
from .export_utils import _dump_infer_config
from ppdet.utils.logger import setup_logger
logger = setup_logger(__name__)
__all__ = ['Tracker']
class Tracker(object):
def __init__(self, cfg, mode='eval'):
self.cfg = cfg
assert mode.lower() in ['test', 'eval'], \
"mode should be 'test' or 'eval'"
self.mode = mode.lower()
self.optimizer = None
# build MOT data loader
self.dataset = cfg['{}MOTDataset'.format(self.mode.capitalize())]
# build model
self.model = create(cfg.architecture)
self.status = {}
self.start_epoch = 0
# initial default callbacks
self._init_callbacks()
# initial default metrics
self._init_metrics()
self._reset_metrics()
def _init_callbacks(self):
self._callbacks = []
self._compose_callback = None
def _init_metrics(self):
if self.mode in ['test']:
self._metrics = []
return
if self.cfg.metric == 'MOT':
self._metrics = [MOTMetric(), ]
else:
logger.warn("Metric not support for metric type {}".format(
self.cfg.metric))
self._metrics = []
def _reset_metrics(self):
for metric in self._metrics:
metric.reset()
def register_callbacks(self, callbacks):
callbacks = [h for h in list(callbacks) if h is not None]
for c in callbacks:
assert isinstance(c, Callback), \
"metrics shoule be instances of subclass of Metric"
self._callbacks.extend(callbacks)
self._compose_callback = ComposeCallback(self._callbacks)
def register_metrics(self, metrics):
metrics = [m for m in list(metrics) if m is not None]
for m in metrics:
assert isinstance(m, Metric), \
"metrics shoule be instances of subclass of Metric"
self._metrics.extend(metrics)
def load_weights_jde(self, weights):
load_weight(self.model, weights, self.optimizer)
def load_weights_sde(self, det_weights, reid_weights):
if self.model.detector:
load_weight(self.model.detector, det_weights, self.optimizer)
load_weight(self.model.reid, reid_weights, self.optimizer)
def _eval_seq_jde(self,
dataloader,
save_dir=None,
show_image=False,
frame_rate=30):
if save_dir:
if not os.path.exists(save_dir): os.makedirs(save_dir)
tracker = self.model.tracker
tracker.max_time_lost = int(frame_rate / 30.0 * tracker.track_buffer)
timer = Timer()
results = []
frame_id = 0
self.status['mode'] = 'track'
self.model.eval()
for step_id, data in enumerate(dataloader):
self.status['step_id'] = step_id
if frame_id % 40 == 0:
logger.info('Processing frame {} ({:.2f} fps)'.format(
frame_id, 1. / max(1e-5, timer.average_time)))
# forward
timer.tic()
online_targets = self.model(data)
online_tlwhs, online_ids = [], []
for t in online_targets:
tlwh = t.tlwh
tid = t.track_id
vertical = tlwh[2] / tlwh[3] > 1.6
if tlwh[2] * tlwh[3] > tracker.min_box_area and not vertical:
online_tlwhs.append(tlwh)
online_ids.append(tid)
timer.toc()
# save results
results.append((frame_id + 1, online_tlwhs, online_ids))
self.save_results(data, frame_id, online_ids, online_tlwhs,
timer.average_time, show_image, save_dir)
frame_id += 1
return results, frame_id, timer.average_time, timer.calls
def _eval_seq_sde(self,
dataloader,
save_dir=None,
show_image=False,
frame_rate=30,
det_file=''):
if save_dir:
if not os.path.exists(save_dir): os.makedirs(save_dir)
tracker = self.model.tracker
use_detector = False if not self.model.detector else True
timer = Timer()
results = []
frame_id = 0
self.status['mode'] = 'track'
self.model.eval()
self.model.reid.eval()
if not use_detector:
dets_list = load_det_results(det_file, len(dataloader))
logger.info('Finish loading detection results file {}.'.format(
det_file))
for step_id, data in enumerate(dataloader):
self.status['step_id'] = step_id
if frame_id % 40 == 0:
logger.info('Processing frame {} ({:.2f} fps)'.format(
frame_id, 1. / max(1e-5, timer.average_time)))
timer.tic()
if not use_detector:
timer.tic()
dets = dets_list[frame_id]
bbox_tlwh = paddle.to_tensor(dets['bbox'], dtype='float32')
pred_scores = paddle.to_tensor(dets['score'], dtype='float32')
if bbox_tlwh.shape[0] > 0:
pred_bboxes = paddle.concat(
(bbox_tlwh[:, 0:2],
bbox_tlwh[:, 2:4] + bbox_tlwh[:, 0:2]),
axis=1)
else:
pred_bboxes = []
pred_scores = []
data.update({
'pred_bboxes': pred_bboxes,
'pred_scores': pred_scores
})
# forward
timer.tic()
online_targets = self.model(data)
online_tlwhs = []
online_ids = []
for track in online_targets:
if not track.is_confirmed() or track.time_since_update > 1:
continue
tlwh = track.to_tlwh()
track_id = track.track_id
online_tlwhs.append(tlwh)
online_ids.append(track_id)
timer.toc()
# save results
results.append((frame_id + 1, online_tlwhs, online_ids))
self.save_results(data, frame_id, online_ids, online_tlwhs,
timer.average_time, show_image, save_dir)
frame_id += 1
return results, frame_id, timer.average_time, timer.calls
def mot_evaluate(self,
data_root,
seqs,
output_dir,
data_type='mot',
model_type='JDE',
save_images=False,
save_videos=False,
show_image=False,
det_results_dir=''):
if not os.path.exists(output_dir): os.makedirs(output_dir)
result_root = os.path.join(output_dir, 'mot_results')
if not os.path.exists(result_root): os.makedirs(result_root)
assert data_type in ['mot', 'kitti'], \
"data_type should be 'mot' or 'kitti'"
assert model_type in ['JDE', 'DeepSORT', 'FairMOT'], \
"model_type should be 'JDE', 'DeepSORT' or 'FairMOT'"
# run tracking
n_frame = 0
timer_avgs, timer_calls = [], []
for seq in seqs:
save_dir = os.path.join(output_dir, 'mot_outputs',
seq) if save_images or save_videos else None
logger.info('start seq: {}'.format(seq))
infer_dir = os.path.join(data_root, seq, 'img1')
images = self.get_infer_images(infer_dir)
self.dataset.set_images(images)
dataloader = create('EvalMOTReader')(self.dataset, 0)
result_filename = os.path.join(result_root, '{}.txt'.format(seq))
meta_info = open(os.path.join(data_root, seq, 'seqinfo.ini')).read()
frame_rate = int(meta_info[meta_info.find('frameRate') + 10:
meta_info.find('\nseqLength')])
if model_type in ['JDE', 'FairMOT']:
results, nf, ta, tc = self._eval_seq_jde(
dataloader,
save_dir=save_dir,
show_image=show_image,
frame_rate=frame_rate)
elif model_type in ['DeepSORT']:
results, nf, ta, tc = self._eval_seq_sde(
dataloader,
save_dir=save_dir,
show_image=show_image,
frame_rate=frame_rate,
det_file=os.path.join(det_results_dir,
'{}.txt'.format(seq)))
else:
raise ValueError(model_type)
self.write_mot_results(result_filename, results, data_type)
n_frame += nf
timer_avgs.append(ta)
timer_calls.append(tc)
if save_videos:
output_video_path = os.path.join(save_dir, '{}.mp4'.format(seq))
cmd_str = 'ffmpeg -f image2 -i {}/%05d.jpg -c:v copy {}'.format(
save_dir, output_video_path)
os.system(cmd_str)
logger.info('Video saved in {}.'.format(output_video_path))
logger.info('Evaluate seq: {}'.format(seq))
# update metrics
for metric in self._metrics:
metric.update(data_root, seq, data_type, result_root,
result_filename)
timer_avgs = np.asarray(timer_avgs)
timer_calls = np.asarray(timer_calls)
all_time = np.dot(timer_avgs, timer_calls)
avg_time = all_time / np.sum(timer_calls)
logger.info('Time elapsed: {:.2f} seconds, FPS: {:.2f}'.format(
all_time, 1.0 / avg_time))
# accumulate metric to log out
for metric in self._metrics:
metric.accumulate()
metric.log()
# reset metric states for metric may performed multiple times
self._reset_metrics()
def get_infer_images(self, infer_dir):
assert infer_dir is None or os.path.isdir(infer_dir), \
"{} is not a directory".format(infer_dir)
images = set()
assert os.path.isdir(infer_dir), \
"infer_dir {} is not a directory".format(infer_dir)
exts = ['jpg', 'jpeg', 'png', 'bmp']
exts += [ext.upper() for ext in exts]
for ext in exts:
images.update(glob.glob('{}/*.{}'.format(infer_dir, ext)))
images = list(images)
images.sort()
assert len(images) > 0, "no image found in {}".format(infer_dir)
logger.info("Found {} inference images in total.".format(len(images)))
return images
def mot_predict(self,
video_file,
output_dir,
data_type='mot',
model_type='JDE',
save_images=False,
save_videos=True,
show_image=False,
det_results_dir=''):
if not os.path.exists(output_dir): os.makedirs(output_dir)
result_root = os.path.join(output_dir, 'mot_results')
if not os.path.exists(result_root): os.makedirs(result_root)
assert data_type in ['mot', 'kitti'], \
"data_type should be 'mot' or 'kitti'"
assert model_type in ['JDE', 'DeepSORT', 'FairMOT'], \
"model_type should be 'JDE', 'DeepSORT' or 'FairMOT'"
# run tracking
seq = video_file.split('/')[-1].split('.')[0]
save_dir = os.path.join(output_dir, 'mot_outputs',
seq) if save_images or save_videos else None
logger.info('Starting tracking {}'.format(video_file))
self.dataset.set_video(video_file)
dataloader = create('TestMOTReader')(self.dataset, 0)
result_filename = os.path.join(result_root, '{}.txt'.format(seq))
frame_rate = self.dataset.frame_rate
if model_type in ['JDE', 'FairMOT']:
results, nf, ta, tc = self._eval_seq_jde(
dataloader,
save_dir=save_dir,
show_image=show_image,
frame_rate=frame_rate)
elif model_type in ['DeepSORT']:
results, nf, ta, tc = self._eval_seq_sde(
dataloader,
save_dir=save_dir,
show_image=show_image,
frame_rate=frame_rate,
det_file=os.path.join(det_results_dir, '{}.txt'.format(seq)))
else:
raise ValueError(model_type)
if save_videos:
output_video_path = os.path.join(save_dir, '{}.mp4'.format(seq))
cmd_str = 'ffmpeg -f image2 -i {}/%05d.jpg -c:v copy {}'.format(
save_dir, output_video_path)
os.system(cmd_str)
logger.info('Video saved in {}'.format(output_video_path))
def write_mot_results(self, filename, results, data_type='mot'):
if data_type in ['mot', 'mcmot', 'lab']:
save_format = '{frame},{id},{x1},{y1},{w},{h},1,-1,-1,-1\n'
elif data_type == 'kitti':
save_format = '{frame} {id} pedestrian 0 0 -10 {x1} {y1} {x2} {y2} -10 -10 -10 -1000 -1000 -1000 -10\n'
else:
raise ValueError(data_type)
with open(filename, 'w') as f:
for frame_id, tlwhs, track_ids in results:
if data_type == 'kitti':
frame_id -= 1
for tlwh, track_id in zip(tlwhs, track_ids):
if track_id < 0:
continue
x1, y1, w, h = tlwh
x2, y2 = x1 + w, y1 + h
line = save_format.format(
frame=frame_id,
id=track_id,
x1=x1,
y1=y1,
x2=x2,
y2=y2,
w=w,
h=h)
f.write(line)
logger.info('MOT results saved in {}'.format(filename))
def save_results(self, data, frame_id, online_ids, online_tlwhs,
average_time, show_image, save_dir):
if show_image or save_dir is not None:
assert 'ori_image' in data
img0 = data['ori_image'].numpy()[0]
online_im = mot_vis.plot_tracking(
img0,
online_tlwhs,
online_ids,
frame_id=frame_id,
fps=1. / average_time)
if show_image:
cv2.imshow('online_im', online_im)
if save_dir is not None:
cv2.imwrite(
os.path.join(save_dir, '{:05d}.jpg'.format(frame_id)),
online_im)
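For reference, a small sketch of what `write_mot_results` above emits per tracked box in the `'mot'` format (the numbers here are made up):

```python
# One result line per (frame, track) pair; the trailing 1,-1,-1,-1 are the
# fixed score and world-coordinate placeholders of the MOTChallenge format.
save_format = '{frame},{id},{x1},{y1},{w},{h},1,-1,-1,-1\n'
line = save_format.format(frame=1, id=3, x1=100.0, y1=50.0, w=40.0, h=120.0)
print(line, end='')   # 1,3,100.0,50.0,40.0,120.0,1,-1,-1,-1
```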
......@@ -33,6 +33,7 @@ from ppdet.optimizer import ModelEMA
from ppdet.core.workspace import create
from ppdet.utils.checkpoint import load_weight, load_pretrain_weight
from ppdet.utils.visualizer import visualize_results, save_result
from ppdet.metrics import JDEDetMetric, JDEReIDMetric
from ppdet.metrics import Metric, COCOMetric, VOCMetric, WiderFaceMetric, get_infer_results, KeyPointTopDownCOCOEval
from ppdet.data.source.category import get_categories
import ppdet.utils.stats as stats
......@@ -55,6 +56,16 @@ class Trainer(object):
self.optimizer = None
self.is_loaded_weights = False
# build data loader
self.dataset = cfg['{}Dataset'.format(self.mode.capitalize())]
if self.mode == 'train':
self.loader = create('{}Reader'.format(self.mode.capitalize()))(
self.dataset, cfg.worker_num)
if cfg.architecture == 'JDE' and self.mode == 'train':
cfg['JDEEmbeddingHead'][
'num_identifiers'] = self.dataset.total_identities
# build model
if 'model' not in self.cfg:
self.model = create(cfg.architecture)
......@@ -67,11 +78,6 @@ class Trainer(object):
self.ema = ModelEMA(
cfg['ema_decay'], self.model, use_thres_step=True)
# build data loader
self.dataset = cfg['{}Dataset'.format(self.mode.capitalize())]
if self.mode == 'train':
self.loader = create('{}Reader'.format(self.mode.capitalize()))(
self.dataset, cfg.worker_num)
# EvalDataset build with BatchSampler to evaluate in single device
# TODO: multi-device evaluate
if self.mode == 'eval':
......@@ -184,6 +190,10 @@ class Trainer(object):
len(eval_dataset), self.cfg.num_joints,
self.cfg.save_dir)
]
elif self.cfg.metric == 'MOTDet':
self._metrics = [JDEDetMetric(), ]
elif self.cfg.metric == 'ReID':
self._metrics = [JDEReIDMetric(), ]
else:
logger.warn("Metric not support for metric type {}".format(
self.cfg.metric))
......@@ -212,7 +222,10 @@ class Trainer(object):
if self.is_loaded_weights:
return
self.start_epoch = 0
load_pretrain_weight(self.model, weights)
if hasattr(self.model, 'detector'):
load_pretrain_weight(self.model.detector, weights)
else:
load_pretrain_weight(self.model, weights)
logger.debug("Load weights {} to start training".format(weights))
def resume_weights(self, weights):
......@@ -238,7 +251,10 @@ class Trainer(object):
self.optimizer = fleet.distributed_optimizer(
self.optimizer).user_defined_optimizer
elif self._nranks > 1:
model = paddle.DataParallel(self.model)
find_unused_parameters = self.cfg[
'find_unused_parameters'] if 'find_unused_parameters' in self.cfg else False
model = paddle.DataParallel(
self.model, find_unused_parameters=find_unused_parameters)
# initial fp16
if self.cfg.fp16:
......
......@@ -14,7 +14,10 @@
from . import metrics
from . import keypoint_metrics
from . import mot_metrics
from .metrics import *
from .mot_metrics import *
from .keypoint_metrics import *
__all__ = metrics.__all__ + keypoint_metrics.__all__
__all__ = metrics.__all__ + keypoint_metrics.__all__ + mot_metrics.__all__
......@@ -26,8 +26,13 @@ from ppdet.utils.logger import setup_logger
logger = setup_logger(__name__)
__all__ = [
'draw_pr_curve', 'bbox_area', 'jaccard_overlap', 'prune_zero_padding',
'DetectionMAP'
'draw_pr_curve',
'bbox_area',
'jaccard_overlap',
'prune_zero_padding',
'DetectionMAP',
'ap_per_class',
'compute_ap',
]
......@@ -304,3 +309,87 @@ class DetectionMAP(object):
accum_fp += 1 - int(pos)
accum_fp_list.append(accum_fp)
return accum_tp_list, accum_fp_list
def ap_per_class(tp, conf, pred_cls, target_cls):
"""
Computes the average precision, given the recall and precision curves.
Method originally from https://github.com/rafaelpadilla/Object-Detection-Metrics.
Args:
tp (list): True positives.
conf (list): Objectness value from 0-1.
pred_cls (list): Predicted object classes.
target_cls (list): Target object classes.
"""
tp, conf, pred_cls, target_cls = np.array(tp), np.array(conf), np.array(
pred_cls), np.array(target_cls)
# Sort by objectness
i = np.argsort(-conf)
tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
# Find unique classes
unique_classes = np.unique(np.concatenate((pred_cls, target_cls), 0))
# Create Precision-Recall curve and compute AP for each class
ap, p, r = [], [], []
for c in unique_classes:
i = pred_cls == c
n_gt = sum(target_cls == c) # Number of ground truth objects
n_p = sum(i) # Number of predicted objects
if (n_p == 0) and (n_gt == 0):
continue
elif (n_p == 0) or (n_gt == 0):
ap.append(0)
r.append(0)
p.append(0)
else:
# Accumulate FPs and TPs
fpc = np.cumsum(1 - tp[i])
tpc = np.cumsum(tp[i])
# Recall
recall_curve = tpc / (n_gt + 1e-16)
r.append(tpc[-1] / (n_gt + 1e-16))
# Precision
precision_curve = tpc / (tpc + fpc)
p.append(tpc[-1] / (tpc[-1] + fpc[-1]))
# AP from recall-precision curve
ap.append(compute_ap(recall_curve, precision_curve))
return np.array(ap), unique_classes.astype('int32'), np.array(r), np.array(
p)
def compute_ap(recall, precision):
"""
Computes the average precision, given the recall and precision curves.
Code originally from https://github.com/rbgirshick/py-faster-rcnn.
Args:
recall (list): The recall curve.
precision (list): The precision curve.
Returns:
The average precision as computed in py-faster-rcnn.
"""
# correct AP calculation
# first append sentinel values at the end
mrec = np.concatenate(([0.], recall, [1.]))
mpre = np.concatenate(([0.], precision, [0.]))
# compute the precision envelope
for i in range(mpre.size - 1, 0, -1):
mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
# to calculate area under PR curve, look for points
# where X axis (recall) changes value
i = np.where(mrec[1:] != mrec[:-1])[0]
# and sum (\Delta recall) * prec
ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
return ap
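A quick sanity check of `ap_per_class`/`compute_ap` on toy inputs (single class, where the middle detection is a false positive):

```python
import numpy as np

tp = [1, 0, 1]              # hit, miss, hit (already confidence-sorted here)
conf = [0.9, 0.8, 0.7]
pred_cls = [0, 0, 0]
target_cls = [0, 0]         # two ground-truth objects of class 0

ap, cls, r, p = ap_per_class(tp, conf, pred_cls, target_cls)
# recall curve [0.5, 0.5, 1.0], precision curve [1.0, 0.5, 0.667];
# the precision envelope integrates to 0.5*1.0 + 0.5*0.667.
print(ap)                   # [0.8333...]
```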
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import numpy as np
import copy
import motmetrics as mm
mm.lap.default_solver = 'lap'
__all__ = [
'read_mot_results',
'unzip_objs',
'MOTEvaluator',
]
def read_mot_results(filename, is_gt=False, is_ignore=False):
valid_labels = {1}
ignore_labels = {2, 7, 8, 12}
results_dict = dict()
if os.path.isfile(filename):
with open(filename, 'r') as f:
for line in f.readlines():
linelist = line.split(',')
if len(linelist) < 7:
continue
fid = int(linelist[0])
if fid < 1:
continue
results_dict.setdefault(fid, list())
box_size = float(linelist[4]) * float(linelist[5])
if is_gt:
if 'MOT16-' in filename or 'MOT17-' in filename:
label = int(float(linelist[7]))
mark = int(float(linelist[6]))
if mark == 0 or label not in valid_labels:
continue
score = 1
elif is_ignore:
if 'MOT16-' in filename or 'MOT17-' in filename:
label = int(float(linelist[7]))
vis_ratio = float(linelist[8])
if label not in ignore_labels and vis_ratio >= 0:
continue
else:
continue
score = 1
else:
score = float(linelist[6])
tlwh = tuple(map(float, linelist[2:6]))
target_id = int(linelist[1])
results_dict[fid].append((tlwh, target_id, score))
return results_dict
"""
labels={'ped', ... % 1
'person_on_vhcl', ... % 2
'car', ... % 3
'bicycle', ... % 4
'mbike', ... % 5
'non_mot_vhcl', ... % 6
'static_person', ... % 7
'distractor', ... % 8
'occluder', ... % 9
'occluder_on_grnd', ... % 10
'occluder_full', ... % 11
'reflection', ... % 12
'crowd' ... % 13
};
"""
def unzip_objs(objs):
if len(objs) > 0:
tlwhs, ids, scores = zip(*objs)
else:
tlwhs, ids, scores = [], [], []
tlwhs = np.asarray(tlwhs, dtype=float).reshape(-1, 4)
return tlwhs, ids, scores
class MOTEvaluator(object):
def __init__(self, data_root, seq_name, data_type):
self.data_root = data_root
self.seq_name = seq_name
self.data_type = data_type
self.load_annotations()
self.reset_accumulator()
def load_annotations(self):
assert self.data_type == 'mot'
gt_filename = os.path.join(self.data_root, self.seq_name, 'gt',
'gt.txt')
self.gt_frame_dict = read_mot_results(gt_filename, is_gt=True)
self.gt_ignore_frame_dict = read_mot_results(
gt_filename, is_ignore=True)
def reset_accumulator(self):
self.acc = mm.MOTAccumulator(auto_id=True)
def eval_frame(self, frame_id, trk_tlwhs, trk_ids, rtn_events=False):
# results
trk_tlwhs = np.copy(trk_tlwhs)
trk_ids = np.copy(trk_ids)
# gts
gt_objs = self.gt_frame_dict.get(frame_id, [])
gt_tlwhs, gt_ids = unzip_objs(gt_objs)[:2]
# ignore boxes
ignore_objs = self.gt_ignore_frame_dict.get(frame_id, [])
ignore_tlwhs = unzip_objs(ignore_objs)[0]
# remove ignored results
keep = np.ones(len(trk_tlwhs), dtype=bool)
iou_distance = mm.distances.iou_matrix(
ignore_tlwhs, trk_tlwhs, max_iou=0.5)
if len(iou_distance) > 0:
match_is, match_js = mm.lap.linear_sum_assignment(iou_distance)
match_is, match_js = map(lambda a: np.asarray(a, dtype=int), [match_is, match_js])
match_ious = iou_distance[match_is, match_js]
match_js = np.asarray(match_js, dtype=int)
match_js = match_js[np.logical_not(np.isnan(match_ious))]
keep[match_js] = False
trk_tlwhs = trk_tlwhs[keep]
trk_ids = trk_ids[keep]
# get distance matrix
iou_distance = mm.distances.iou_matrix(gt_tlwhs, trk_tlwhs, max_iou=0.5)
# acc
self.acc.update(gt_ids, trk_ids, iou_distance)
if rtn_events and iou_distance.size > 0 and hasattr(self.acc,
'last_mot_events'):
events = self.acc.last_mot_events # only supported by https://github.com/longcw/py-motmetrics
else:
events = None
return events
def eval_file(self, filename):
self.reset_accumulator()
result_frame_dict = read_mot_results(filename, is_gt=False)
frames = sorted(list(set(result_frame_dict.keys())))
for frame_id in frames:
trk_objs = result_frame_dict.get(frame_id, [])
trk_tlwhs, trk_ids = unzip_objs(trk_objs)[:2]
self.eval_frame(frame_id, trk_tlwhs, trk_ids, rtn_events=False)
return self.acc
@staticmethod
def get_summary(accs,
names,
metrics=('mota', 'num_switches', 'idp', 'idr', 'idf1',
'precision', 'recall')):
names = copy.deepcopy(names)
if metrics is None:
metrics = mm.metrics.motchallenge_metrics
metrics = copy.deepcopy(metrics)
mh = mm.metrics.create()
summary = mh.compute_many(
accs, metrics=metrics, names=names, generate_overall=True)
return summary
@staticmethod
def save_summary(summary, filename):
import pandas as pd
writer = pd.ExcelWriter(filename)
summary.to_excel(writer)
writer.save()
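Putting `MOTEvaluator` together, a hedged usage sketch; the paths are placeholders and assume the MOTChallenge layout `{data_root}/{seq}/gt/gt.txt`:

```python
data_root = 'dataset/mot/MOT16/images/train'     # placeholder path
seq = 'MOT16-02'

evaluator = MOTEvaluator(data_root, seq, data_type='mot')
acc = evaluator.eval_file('output/mot_results/MOT16-02.txt')  # placeholder
summary = MOTEvaluator.get_summary([acc], [seq])
print(summary)   # mota, num_switches, idp, idr, idf1, precision, recall
```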
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import paddle
import numpy as np
from scipy import interpolate
import paddle.nn.functional as F
from .map_utils import ap_per_class
from ppdet.modeling.bbox_utils import bbox_iou_np_expand
from .mot_eval_utils import MOTEvaluator
from .metrics import Metric
from ppdet.utils.logger import setup_logger
logger = setup_logger(__name__)
__all__ = ['JDEDetMetric', 'JDEReIDMetric', 'MOTMetric']
class JDEDetMetric(Metric):
def __init__(self, overlap_thresh=0.5):
self.overlap_thresh = overlap_thresh
self.reset()
def reset(self):
self.AP_accum = np.zeros(1)
self.AP_accum_count = np.zeros(1)
def update(self, inputs, outputs):
bboxes = outputs['bbox'][:, 2:].numpy()
scores = outputs['bbox'][:, 1].numpy()
labels = outputs['bbox'][:, 0].numpy()
bbox_lengths = outputs['bbox_num'].numpy()
if bboxes.shape[0] == 1 and bboxes.sum() == 0.0:
return
gt_boxes = inputs['gt_bbox'].numpy()[0]
gt_labels = inputs['gt_class'].numpy()[0]
if gt_labels.shape[0] == 0:
return
correct = []
detected = []
for i in range(bboxes.shape[0]):
obj_pred = 0
pred_bbox = bboxes[i].reshape(1, 4)
# Compute iou with target boxes
iou = bbox_iou_np_expand(pred_bbox, gt_boxes, x1y1x2y2=True)[0]
# Extract index of largest overlap
best_i = np.argmax(iou)
# If overlap exceeds threshold and classification is correct mark as correct
if iou[best_i] > self.overlap_thresh and obj_pred == gt_labels[
best_i] and best_i not in detected:
correct.append(1)
detected.append(best_i)
else:
correct.append(0)
# Compute Average Precision (AP) per class
target_cls = list(gt_labels.T[0])
AP, AP_class, R, P = ap_per_class(
tp=correct,
conf=scores,
pred_cls=np.zeros_like(scores),
target_cls=target_cls)
self.AP_accum_count += np.bincount(AP_class, minlength=1)
self.AP_accum += np.bincount(AP_class, minlength=1, weights=AP)
def accumulate(self):
logger.info("Accumulating evaluatation results...")
self.map_stat = self.AP_accum[0] / (self.AP_accum_count[0] + 1E-16)
def log(self):
map_stat = 100. * self.map_stat
logger.info("mAP({:.2f}) = {:.2f}%".format(self.overlap_thresh,
map_stat))
def get_results(self):
return self.map_stat
class JDEReIDMetric(Metric):
def __init__(self, far_levels=[1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1]):
self.far_levels = far_levels
self.reset()
def reset(self):
self.embedding = []
self.id_labels = []
self.eval_results = {}
def update(self, inputs, outputs):
for out in outputs:
feat, label = out[:-1].clone().detach(), int(out[-1])
if label != -1:
self.embedding.append(feat)
self.id_labels.append(label)
def accumulate(self):
logger.info("Computing pairwise similairity...")
assert len(self.embedding) == len(self.id_labels)
if len(self.embedding) < 1:
return None
embedding = paddle.stack(self.embedding, axis=0)
emb = F.normalize(embedding, axis=1).numpy()
pdist = np.matmul(emb, emb.T)
id_labels = np.array(self.id_labels, dtype='int32').reshape(-1, 1)
n = len(id_labels)
id_lbl = np.tile(id_labels, n).T
gt = id_lbl == id_lbl.T
up_triangle = np.where(np.triu(pdist) - np.eye(n) * pdist != 0)
pdist = pdist[up_triangle]
gt = gt[up_triangle]
# lazy import metrics here
from sklearn import metrics
far, tar, threshold = metrics.roc_curve(gt, pdist)
interp = interpolate.interp1d(far, tar)
tar_at_far = [interp(x) for x in self.far_levels]
for f, fa in enumerate(self.far_levels):
self.eval_results['TPR@FAR={:.7f}'.format(fa)] = ' {:.4f}'.format(
tar_at_far[f])
def log(self):
for k, v in self.eval_results.items():
logger.info('{}: {}'.format(k, v))
def get_results(self):
return self.eval_results
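A self-contained toy version of the TPR@FAR computation in `accumulate()` above, with random embeddings standing in for model outputs (numpy in place of paddle, and `np.triu_indices` in place of the masked upper triangle):

```python
import numpy as np
from scipy import interpolate
from sklearn import metrics

rng = np.random.RandomState(0)
emb = rng.randn(8, 4).astype('float32')
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # L2-normalize rows
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3]).reshape(-1, 1)

pdist = emb @ emb.T                                 # cosine similarities
gt = np.tile(labels, len(labels)) == np.tile(labels, len(labels)).T
iu = np.triu_indices(len(labels), k=1)              # unique pairs only
far, tar, _ = metrics.roc_curve(gt[iu], pdist[iu])
interp = interpolate.interp1d(far, tar)
print('TPR@FAR=0.1: {:.4f}'.format(float(interp(0.1))))
```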
class MOTMetric(Metric):
def __init__(self, save_summary=False):
self.save_summary = save_summary
self.MOTEvaluator = MOTEvaluator
self.result_root = None
self.reset()
def reset(self):
self.accs = []
self.seqs = []
def update(self, data_root, seq, data_type, result_root, result_filename):
evaluator = self.MOTEvaluator(data_root, seq, data_type)
self.accs.append(evaluator.eval_file(result_filename))
self.seqs.append(seq)
self.result_root = result_root
def accumulate(self):
import motmetrics as mm
import openpyxl
metrics = mm.metrics.motchallenge_metrics
mh = mm.metrics.create()
summary = self.MOTEvaluator.get_summary(self.accs, self.seqs, metrics)
self.strsummary = mm.io.render_summary(
summary,
formatters=mh.formatters,
namemap=mm.io.motchallenge_metric_names)
if self.save_summary:
self.MOTEvaluator.save_summary(
summary, os.path.join(self.result_root, 'summary.xlsx'))
def log(self):
print(self.strsummary)
def get_results(self):
return self.strsummary
......@@ -26,6 +26,7 @@ __all__ = ['JDE']
@register
class JDE(BaseArch):
__shared__ = ['metric']
"""
JDE network, see https://arxiv.org/abs/1909.12605v1
......@@ -33,7 +34,9 @@ class JDE(BaseArch):
detector (object): detector model instance
reid (object): reid model instance
tracker (object): tracker instance
test_mode (str): 'detection', 'embedding' or 'tracking'
metric (str): 'MOTDet' for training and detection evaluation, 'ReID'
for ReID embedding evaluation, or 'MOT' for multi object tracking
evaluation.
"""
__category__ = 'architecture'
......@@ -41,12 +44,12 @@ class JDE(BaseArch):
detector='YOLOv3',
reid='JDEEmbeddingHead',
tracker='JDETracker',
test_mode='detection'):
metric='MOTDet'):
super(JDE, self).__init__()
self.detector = detector
self.reid = reid
self.tracker = tracker
self.test_mode = test_mode
self.metric = metric
@classmethod
def from_config(cls, cfg, *args, **kwargs):
......@@ -74,19 +77,19 @@ class JDE(BaseArch):
loss_boxes)
return jde_losses
else:
if self.test_mode == 'detection':
if self.metric == 'MOTDet':
det_results = {
'bbox': det_outs['bbox'],
'bbox_num': det_outs['bbox_num'],
}
return det_results
elif self.test_mode == 'embedding':
elif self.metric == 'ReID':
emb_feats = det_outs['emb_feats']
embs_and_gts = self.reid(emb_feats, self.inputs, test_emb=True)
return embs_and_gts
elif self.test_mode == 'tracking':
elif self.metric == 'MOT':
emb_feats = det_outs['emb_feats']
emb_outs = self.reid(emb_feats, self.inputs)
......@@ -111,7 +114,8 @@ class JDE(BaseArch):
return online_targets
else:
raise ValueError("Unknown test_mode {}.".format(self.test_mode))
raise ValueError("Unknown metric {} for multi object tracking.".
format(self.metric))
def get_loss(self):
return self._forward()
......
......@@ -15,6 +15,7 @@
This code is borrow from https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/tracker/matching.py
"""
import lap
import scipy
import numpy as np
from scipy.spatial.distance import cdist
......@@ -55,7 +56,6 @@ def linear_assignment(cost_matrix, thresh):
(0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple(
range(cost_matrix.shape[1]))
matches, unmatched_a, unmatched_b = [], [], []
import lap
cost, x, y = lap.lapjv(cost_matrix, extend_cost=True, cost_limit=thresh)
for ix, mx in enumerate(x):
if mx >= 0:
......
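For intuition on the `lap.lapjv` call in `linear_assignment` above, a toy cost matrix (rows are tracks, columns are detections); `cost_limit` leaves expensive pairs unmatched:

```python
import numpy as np
import lap

cost = np.array([[0.1, 0.9],
                 [0.8, 0.2]], dtype=np.float32)
_, x, y = lap.lapjv(cost, extend_cost=True, cost_limit=0.5)
# x[i] = column assigned to row i (-1 if unmatched), y[j] = row for column j
print(x, y)   # [0 1] [0 1]
```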
......@@ -106,6 +106,7 @@ class PiecewiseDecay(object):
else:
# do not use LinearWarmup
boundary = [int(step_per_epoch) * i for i in self.milestones]
value = [base_lr]  # lr stays at base_lr during steps [0, boundary[0])
# self.values is set directly in the config
if self.values is not None:
......@@ -135,7 +136,7 @@ class LinearWarmup(object):
self.steps = steps
self.start_factor = start_factor
def __call__(self, base_lr):
def __call__(self, base_lr, step_per_epoch):
boundary = []
value = []
for i in range(self.steps + 1):
......@@ -149,6 +150,31 @@ class LinearWarmup(object):
return boundary, value
@serializable
class BurninWarmup(object):
"""
Warm up learning rate in burnin mode
Args:
steps (int): warm up steps
"""
def __init__(self, steps=1000):
super(BurninWarmup, self).__init__()
self.steps = steps
def __call__(self, base_lr, step_per_epoch):
boundary = []
value = []
burnin = min(self.steps, step_per_epoch)
for i in range(burnin + 1):
factor = (i * 1.0 / burnin)**4
lr = base_lr * factor
value.append(lr)
if i > 0:
boundary.append(i)
return boundary, value
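A worked example of the quartic burn-in above with `base_lr: 0.01` and `steps: 1000` from `optimizer_30e.yml`; `step_per_epoch` is a made-up value for illustration:

```python
base_lr, steps, step_per_epoch = 0.01, 1000, 5000   # step_per_epoch assumed
burnin = min(steps, step_per_epoch)
for i in (0, 250, 500, 750, 1000):
    print(i, base_lr * (i * 1.0 / burnin) ** 4)
# 0 -> 0.0, 250 -> 3.9e-05, 500 -> 6.25e-04, 750 -> 3.16e-03, 1000 -> 0.01
```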
@register
class LearningRate(object):
"""
......@@ -175,7 +201,7 @@ class LearningRate(object):
# TODO: split warmup & decay
# warmup
boundary, value = self.schedulers[1](self.base_lr)
boundary, value = self.schedulers[1](self.base_lr, step_per_epoch)
# decay
decay_lr = self.schedulers[0](self.base_lr, boundary, value,
step_per_epoch)
......
......@@ -89,6 +89,7 @@ DATASETS = {
'roadsign_coco': ([(
'https://paddlemodels.bj.bcebos.com/object_detection/roadsign_coco.tar',
'49ce5a9b5ad0d6266163cd01de4b018e', ), ], ['annotations', 'images']),
'mot': (),
'objects365': ()
}
......@@ -180,6 +181,16 @@ def get_dataset_path(path, annotation, image_dir):
"Please apply and download the dataset from "
"https://www.objects365.org/download.html".format(name))
data_dir = osp.join(DATASET_HOME, name)
if name == 'mot':
if osp.exists(path) or osp.exists(data_dir):
return data_dir
else:
raise NotImplementedError(
"Dataset {} cannot be downloaded automatically. "
"Please download and prepare the dataset following docs/tutorials/PrepareMOTDataSet.md".
format(name))
# For voc, only check dir VOCdevkit/VOC2012, VOCdevkit/VOC2007
if name in ['voc', 'fruit', 'roadsign_voc']:
exists = True
......@@ -206,7 +217,7 @@ def get_dataset_path(path, annotation, image_dir):
raise ValueError(
"Dataset {} is not valid and cannot parse dataset type "
"'{}' for automaticly downloading, which only supports "
"'voc' , 'coco', 'wider_face', 'fruit' and 'roadsign_voc' currently".
"'voc' , 'coco', 'wider_face', 'fruit', 'roadsign_voc' and 'mot' currently".
format(path, osp.split(path)[-1]))
......
......@@ -10,5 +10,7 @@ Cython
pycocotools
#xtcocotools==1.6 #only for crowdpose
setuptools>=42.0.0
#lap #for mot
#motmetrics #for mot
\ No newline at end of file
lap
sklearn
cython_bbox
motmetrics
\ No newline at end of file
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os, sys
# add python path of PaddleDetection to sys.path
parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
if parent_path not in sys.path:
sys.path.append(parent_path)
# ignore warning log
import warnings
warnings.filterwarnings('ignore')
import glob
import paddle
from paddle.distributed import ParallelEnv
from ppdet.core.workspace import load_config, merge_config
from ppdet.engine import Tracker
from ppdet.utils.check import check_gpu, check_version, check_config
from ppdet.utils.cli import ArgsParser
from ppdet.utils.logger import setup_logger
logger = setup_logger('eval')
def parse_args():
parser = ArgsParser()
parser.add_argument(
"--data_type",
type=str,
default='mot',
help='Data type of tracking dataset, should be in ["mot", "kitti"]')
parser.add_argument(
"--det_results_dir",
type=str,
default=None,
help="Directory name for detection results.")
parser.add_argument(
'--output_dir',
type=str,
default='output',
help='Directory name for output tracking results.')
parser.add_argument(
'--save_images',
action='store_true',
help='Save tracking results (image).')
parser.add_argument(
'--save_videos',
action='store_true',
help='Save tracking results (video).')
parser.add_argument(
'--show_image',
action='store_true',
help='Show tracking results (image).')
args = parser.parse_args()
return args
def run(FLAGS, cfg):
task = cfg['EvalMOTDataset'].task
dataset_dir = cfg['EvalMOTDataset'].dataset_dir
data_root = cfg['EvalMOTDataset'].data_root
data_root = '{}/{}'.format(dataset_dir, data_root)
seqs = cfg['MOTDataZoo'][task]
# build Tracker
tracker = Tracker(cfg, mode='eval')
# load weights
if cfg.architecture in ['DeepSORT']:
if cfg.det_weights != 'None':
tracker.load_weights_sde(cfg.det_weights, cfg.reid_weights)
else:
tracker.load_weights_sde(None, cfg.reid_weights)
else:
tracker.load_weights_jde(cfg.weights)
# inference
tracker.mot_evaluate(
data_root=data_root,
seqs=seqs,
data_type=FLAGS.data_type,
model_type=cfg.architecture,
output_dir=FLAGS.output_dir,
save_images=FLAGS.save_images,
save_videos=FLAGS.save_videos,
show_image=FLAGS.show_image,
det_results_dir=FLAGS.det_results_dir)
def main():
FLAGS = parse_args()
cfg = load_config(FLAGS.config)
merge_config(FLAGS.opt)
check_config(cfg)
check_gpu(cfg.use_gpu)
check_version()
place = 'gpu:{}'.format(ParallelEnv().dev_id) if cfg.use_gpu else 'cpu'
place = paddle.set_device(place)
run(FLAGS, cfg)
if __name__ == '__main__':
main()
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os, sys
# add python path of PaddleDetection to sys.path
parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
if parent_path not in sys.path:
sys.path.append(parent_path)
# ignore warning log
import warnings
warnings.filterwarnings('ignore')
import glob
import paddle
from paddle.distributed import ParallelEnv
from ppdet.core.workspace import load_config, merge_config
from ppdet.engine import Tracker
from ppdet.utils.check import check_gpu, check_version, check_config
from ppdet.utils.cli import ArgsParser
from ppdet.utils.logger import setup_logger
logger = setup_logger('train')
def parse_args():
parser = ArgsParser()
parser.add_argument(
'--video_file', type=str, default=None, help='Video name for tracking.')
parser.add_argument(
"--data_type",
type=str,
default='mot',
help='Data type of tracking dataset, should be in ["mot", "kitti"]')
parser.add_argument(
"--det_results_dir",
type=str,
default=None,
help="Directory name for detection results.")
parser.add_argument(
'--output_dir',
type=str,
default='output',
help='Directory name for output tracking results.')
parser.add_argument(
'--save_images',
action='store_true',
help='Save tracking results (image).')
parser.add_argument(
'--save_videos',
action='store_true',
help='Save tracking results (video).')
parser.add_argument(
'--show_image',
action='store_true',
help='Show tracking results (image).')
args = parser.parse_args()
return args
def run(FLAGS, cfg):
# build Tracker
tracker = Tracker(cfg, mode='test')
# load weights
if cfg.architecture in ['DeepSORT']:
if cfg.det_weights != 'None':
tracker.load_weights_sde(cfg.det_weights, cfg.reid_weights)
else:
tracker.load_weights_sde(None, cfg.reid_weights)
else:
tracker.load_weights_jde(cfg.weights)
# inference
tracker.mot_predict(
video_file=FLAGS.video_file,
data_type=FLAGS.data_type,
model_type=cfg.architecture,
output_dir=FLAGS.output_dir,
save_images=FLAGS.save_images,
save_videos=FLAGS.save_videos,
show_image=FLAGS.show_image,
det_results_dir=FLAGS.det_results_dir)
def main():
FLAGS = parse_args()
cfg = load_config(FLAGS.config)
merge_config(FLAGS.opt)
check_config(cfg)
check_gpu(cfg.use_gpu)
check_version()
place = 'gpu:{}'.format(ParallelEnv().dev_id) if cfg.use_gpu else 'cpu'
place = paddle.set_device(place)
run(FLAGS, cfg)
if __name__ == '__main__':
main()