Unverified · Commit f67c9b2e authored by YixinKristy, committed by GitHub

Merge branch 'PaddlePaddle:develop' into develop

......@@ -47,12 +47,11 @@ PP-Tracking provides a concise GUI for visualized prediction; for the tutorial please refer to [PP-Tracking
## Install Dependencies
Install all MOT-related dependencies in one step:
```
pip install lap sklearn motmetrics openpyxl cython_bbox
pip install lap sklearn motmetrics openpyxl
or
pip install -r requirements.txt
```
**Notes:**
- To install `cython_bbox` on Windows: `pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox`. See this [tutorial](https://stackoverflow.com/questions/60349980/is-there-a-way-to-install-cython-bbox-for-windows).
- Prediction requires [ffmpeg](https://ffmpeg.org/ffmpeg.html); on Linux (Ubuntu) you can install it directly with: `apt-get update && apt-get install -y ffmpeg`
......
......@@ -49,12 +49,11 @@ PP-Tracking supports GUI predict and deployment. Please refer to this [doc](http
## Installation
Install all the related dependencies for MOT:
```
pip install lap sklearn motmetrics openpyxl cython_bbox
pip install lap sklearn motmetrics openpyxl
or
pip install -r requirements.txt
```
**Notes:**
- Install `cython_bbox` for Windows: `pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox`. You can refer to this [tutorial](https://stackoverflow.com/questions/60349980/is-there-a-way-to-install-cython-bbox-for-windows).
- Please make sure that [ffmpeg](https://ffmpeg.org/ffmpeg.html) is installed first. On Linux (Ubuntu) you can install it directly with: `apt-get update && apt-get install -y ffmpeg`.
......
......@@ -86,7 +86,7 @@ PP-Tracking provides an AI Studio public project tutorial. Please refer to this
### Results on MOT-17 Half Set
| backbone | input shape | MOTA | IDF1 | IDS | FP | FN | FPS | download | config |
| :--------------| :------- | :----: | :----: | :----: | :----: | :----: | :------: | :----: |:-----: |
| DLA-34 | 1088x608 | 69.1 | 72.8 | 299 | 1957 | 14412 | - |[model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bytetracker.pdparams) | [config](./fairmot_dla34_30e_1088x608.yml) |
| DLA-34 | 1088x608 | 69.1 | 72.8 | 299 | 1957 | 14412 | - |[model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams) | [config](./fairmot_dla34_30e_1088x608.yml) |
| DLA-34 + BYTETracker| 1088x608 | 70.3 | 73.2 | 234 | 2176 | 13598 | - |[model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bytetracker.pdparams) | [config](./fairmot_dla34_30e_1088x608_bytetracker.yml) |
**Notes:**
......
......@@ -82,7 +82,7 @@ PP-Tracking provides an AI Studio public project tutorial; please refer to [PP-Tracking
### Results on MOT-17 Half Set
| backbone | input shape | MOTA | IDF1 | IDS | FP | FN | FPS | download | config |
| :--------------| :------- | :----: | :----: | :----: | :----: | :----: | :------: | :----: |:-----: |
| DLA-34 | 1088x608 | 69.1 | 72.8 | 299 | 1957 | 14412 | - |[model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bytetracker.pdparams) | [config](./fairmot_dla34_30e_1088x608.yml) |
| DLA-34 | 1088x608 | 69.1 | 72.8 | 299 | 1957 | 14412 | - |[model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams) | [config](./fairmot_dla34_30e_1088x608.yml) |
| DLA-34 + BYTETracker| 1088x608 | 70.3 | 73.2 | 234 | 2176 | 13598 | - |[model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bytetracker.pdparams) | [config](./fairmot_dla34_30e_1088x608_bytetracker.yml) |
......
......@@ -14,8 +14,18 @@ TrainDataset:
image_lists: ['mot17.half', 'caltech.all', 'cuhksysu.train', 'prw.train', 'citypersons.train', 'eth.train']
data_fields: ['image', 'gt_bbox', 'gt_class', 'gt_ide']
# for MOT evaluation
# If you want to change the MOT evaluation dataset, please modify 'data_root'
EvalMOTDataset:
!MOTImageFolder
dataset_dir: dataset/mot
data_root: MOT17/images/half
keep_ori_im: False # set to True to save visualization images/video, or when used in DeepSORT
JDETracker:
use_byte: True
match_thres: 0.8
conf_thres: 0.4
low_conf_thres: 0.2
min_box_area: 200
vertical_ratio: 1.6 # for pedestrian
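For context, a minimal sketch of how the BYTE thresholds in this config interact during association: detections are split by `conf_thres`/`low_conf_thres`, and a pairing is accepted when the IoU distance stays within `match_thres` (treated as a maximum matching cost, as in PaddleDetection's `linear_assignment`). The greedy matcher and all names below are simplified stand-ins for illustration, not the actual JDETracker code, which uses Hungarian assignment:
```
import numpy as np

# thresholds mirrored from the JDETracker config above
CONF_THRES, LOW_CONF_THRES, MATCH_THRES = 0.4, 0.2, 0.8

def iou_matrix(a, b):
    # pairwise IoU between (M, 4) and (K, 4) tlbr boxes
    lt = np.maximum(a[:, None, :2], b[None, :, :2])
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def byte_associate(track_boxes, dets):
    # dets: (N, 5) rows of [x1, y1, x2, y2, score]
    high = dets[dets[:, 4] >= CONF_THRES]               # stage-1 candidates
    low = dets[(dets[:, 4] >= LOW_CONF_THRES)
               & (dets[:, 4] < CONF_THRES)]             # stage-2 candidates
    matches, unmatched = [], list(range(len(track_boxes)))
    for stage, stage_dets in (("high", high), ("low", low)):
        if not unmatched or len(stage_dets) == 0:
            continue
        cost = 1.0 - iou_matrix(track_boxes[unmatched], stage_dets[:, :4])
        used, still_unmatched = set(), []
        for row, t_idx in enumerate(unmatched):
            col = int(np.argmin(cost[row]))
            # accept only if the IoU distance is within match_thres
            if cost[row, col] <= MATCH_THRES and col not in used:
                matches.append((t_idx, stage, col))     # col is stage-local
                used.add(col)
            else:
                still_unmatched.append(t_idx)
        unmatched = still_unmatched
    return matches, unmatched  # leftover tracks are marked lost
```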
......@@ -18,7 +18,7 @@
| :-------------| :-------- | :------- | :----: | :----: | :----: | :-----: |:------: |
| PathTrack | DLA-34 | 1088x608 | 44.9 | 59.3 | - |[model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_pathtrack.pdparams) | [config](./fairmot_dla34_30e_1088x608_pathtrack.yml) |
| VisDrone | DLA-34 | 1088x608 | 49.2 | 63.1 | - | [model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_visdrone_pedestrian.pdparams) | [config](./fairmot_dla34_30e_1088x608_visdrone_pedestrian.yml) |
| VisDrone | HRNetv2-W18| 1088x608 | 40.5 | 54.7 | - | [model](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_864x480_visdrone_pedestrian.pdparams) | [config](./fairmot_hrnetv2_w18_dlafpn_30e_864x480_visdrone_pedestrian.yml) |
| VisDrone | HRNetv2-W18| 1088x608 | 40.5 | 54.7 | - | [model](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_pedestrian.pdparams) | [config](./fairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_pedestrian.yml) |
| VisDrone | HRNetv2-W18| 864x480 | 38.6 | 50.9 | - | [model](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_864x480_visdrone_pedestrian.pdparams) | [config](./fairmot_hrnetv2_w18_dlafpn_30e_864x480_visdrone_pedestrian.yml) |
| VisDrone | HRNetv2-W18| 576x320 | 30.6 | 47.2 | - | [model](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320_visdrone_pedestrian.pdparams) | [config](./fairmot_hrnetv2_w18_dlafpn_30e_576x320_visdrone_pedestrian.yml) |
......@@ -124,8 +124,8 @@ month={Oct},}
@ARTICLE{9573394,
author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={Detection and Tracking Meet Drones Challenge},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={Detection and Tracking Meet Drones Challenge},
year={2021},
volume={},
number={},
......
This diff is collapsed.
......@@ -15,16 +15,15 @@
// for setprecision
#include <chrono>
#include <iomanip>
#include "include/object_detector.h"
using namespace paddle_infer;
#include "include/object_detector.h"
namespace PaddleDetection {
// Load Model and create model predictor
void ObjectDetector::LoadModel(const std::string& model_dir,
void ObjectDetector::LoadModel(const std::string &model_dir,
const int batch_size,
const std::string& run_mode) {
const std::string &run_mode) {
paddle_infer::Config config;
std::string prog_file = model_dir + OS_PATH_SEP + "model.pdmodel";
std::string params_file = model_dir + OS_PATH_SEP + "model.pdiparams";
......@@ -42,27 +41,22 @@ void ObjectDetector::LoadModel(const std::string& model_dir,
} else if (run_mode == "trt_int8") {
precision = paddle_infer::Config::Precision::kInt8;
} else {
printf(
"run_mode should be 'paddle', 'trt_fp32', 'trt_fp16' or "
"'trt_int8'");
printf("run_mode should be 'paddle', 'trt_fp32', 'trt_fp16' or "
"'trt_int8'");
}
// set tensorrt
config.EnableTensorRtEngine(1 << 30,
batch_size,
this->min_subgraph_size_,
precision,
false,
this->trt_calib_mode_);
config.EnableTensorRtEngine(1 << 30, batch_size, this->min_subgraph_size_,
precision, false, this->trt_calib_mode_);
// set use dynamic shape
if (this->use_dynamic_shape_) {
// set DynamicShsape for image tensor
// set DynamicShape for image tensor
const std::vector<int> min_input_shape = {
1, 3, this->trt_min_shape_, this->trt_min_shape_};
batch_size, 3, this->trt_min_shape_, this->trt_min_shape_};
const std::vector<int> max_input_shape = {
1, 3, this->trt_max_shape_, this->trt_max_shape_};
batch_size, 3, this->trt_max_shape_, this->trt_max_shape_};
const std::vector<int> opt_input_shape = {
1, 3, this->trt_opt_shape_, this->trt_opt_shape_};
batch_size, 3, this->trt_opt_shape_, this->trt_opt_shape_};
const std::map<std::string, std::vector<int>> map_min_input_shape = {
{"image", min_input_shape}};
const std::map<std::string, std::vector<int>> map_max_input_shape = {
......@@ -70,8 +64,8 @@ void ObjectDetector::LoadModel(const std::string& model_dir,
const std::map<std::string, std::vector<int>> map_opt_input_shape = {
{"image", opt_input_shape}};
config.SetTRTDynamicShapeInfo(
map_min_input_shape, map_max_input_shape, map_opt_input_shape);
config.SetTRTDynamicShapeInfo(map_min_input_shape, map_max_input_shape,
map_opt_input_shape);
std::cout << "TensorRT dynamic shape enabled" << std::endl;
}
}
......@@ -96,12 +90,11 @@ void ObjectDetector::LoadModel(const std::string& model_dir,
}
// Visualization of MaskDetector results
cv::Mat VisualizeResult(
const cv::Mat& img,
const std::vector<PaddleDetection::ObjectResult>& results,
const std::vector<std::string>& lables,
const std::vector<int>& colormap,
const bool is_rbox = false) {
cv::Mat
VisualizeResult(const cv::Mat &img,
const std::vector<PaddleDetection::ObjectResult> &results,
const std::vector<std::string> &lables,
const std::vector<int> &colormap, const bool is_rbox = false) {
cv::Mat vis_img = img.clone();
for (int i = 0; i < results.size(); ++i) {
// Configure color and text size
......@@ -142,24 +135,18 @@ cv::Mat VisualizeResult(
origin.y = results[i].rect[1];
// Configure text background
cv::Rect text_back = cv::Rect(results[i].rect[0],
results[i].rect[1] - text_size.height,
text_size.width,
text_size.height);
cv::Rect text_back =
cv::Rect(results[i].rect[0], results[i].rect[1] - text_size.height,
text_size.width, text_size.height);
// Draw text, and background
cv::rectangle(vis_img, text_back, roi_color, -1);
cv::putText(vis_img,
text,
origin,
font_face,
font_scale,
cv::Scalar(255, 255, 255),
thickness);
cv::putText(vis_img, text, origin, font_face, font_scale,
cv::Scalar(255, 255, 255), thickness);
}
return vis_img;
}
void ObjectDetector::Preprocess(const cv::Mat& ori_im) {
void ObjectDetector::Preprocess(const cv::Mat &ori_im) {
// Clone the image : keep the original mat for postprocess
cv::Mat im = ori_im.clone();
cv::cvtColor(im, im, cv::COLOR_BGR2RGB);
......@@ -168,9 +155,8 @@ void ObjectDetector::Preprocess(const cv::Mat& ori_im) {
void ObjectDetector::Postprocess(
const std::vector<cv::Mat> mats,
std::vector<PaddleDetection::ObjectResult>* result,
std::vector<int> bbox_num,
std::vector<float> output_data_,
std::vector<PaddleDetection::ObjectResult> *result,
std::vector<int> bbox_num, std::vector<float> output_data_,
bool is_rbox = false) {
result->clear();
int start_idx = 0;
......@@ -226,12 +212,11 @@ void ObjectDetector::Postprocess(
}
void ObjectDetector::Predict(const std::vector<cv::Mat> imgs,
const double threshold,
const int warmup,
const double threshold, const int warmup,
const int repeats,
std::vector<PaddleDetection::ObjectResult>* result,
std::vector<int>* bbox_num,
std::vector<double>* times) {
std::vector<PaddleDetection::ObjectResult> *result,
std::vector<int> *bbox_num,
std::vector<double> *times) {
auto preprocess_start = std::chrono::steady_clock::now();
int batch_size = imgs.size();
......@@ -239,7 +224,7 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> imgs,
std::vector<float> in_data_all;
std::vector<float> im_shape_all(batch_size * 2);
std::vector<float> scale_factor_all(batch_size * 2);
std::vector<const float*> output_data_list_;
std::vector<const float *> output_data_list_;
std::vector<int> out_bbox_num_data_;
// in_net img for each batch
......@@ -255,9 +240,8 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> imgs,
scale_factor_all[bs_idx * 2] = inputs_.scale_factor_[0];
scale_factor_all[bs_idx * 2 + 1] = inputs_.scale_factor_[1];
// TODO: reduce time cost
in_data_all.insert(
in_data_all.end(), inputs_.im_data_.begin(), inputs_.im_data_.end());
in_data_all.insert(in_data_all.end(), inputs_.im_data_.begin(),
inputs_.im_data_.end());
// collect in_net img
in_net_img_all[bs_idx] = inputs_.in_net_im_;
......@@ -276,10 +260,10 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> imgs,
pad_img.convertTo(pad_img, CV_32FC3);
std::vector<float> pad_data;
pad_data.resize(rc * rh * rw);
float* base = pad_data.data();
float *base = pad_data.data();
for (int i = 0; i < rc; ++i) {
cv::extractChannel(
pad_img, cv::Mat(rh, rw, CV_32FC1, base + i * rh * rw), i);
cv::extractChannel(pad_img,
cv::Mat(rh, rw, CV_32FC1, base + i * rh * rw), i);
}
in_data_all.insert(in_data_all.end(), pad_data.begin(), pad_data.end());
}
......@@ -290,7 +274,7 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> imgs,
auto preprocess_end = std::chrono::steady_clock::now();
// Prepare input tensor
auto input_names = predictor_->GetInputNames();
for (const auto& tensor_name : input_names) {
for (const auto &tensor_name : input_names) {
auto in_tensor = predictor_->GetInputHandle(tensor_name);
if (tensor_name == "image") {
int rh = inputs_.in_net_shape_[0];
......@@ -320,8 +304,8 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> imgs,
for (int j = 0; j < output_names.size(); j++) {
auto output_tensor = predictor_->GetOutputHandle(output_names[j]);
std::vector<int> output_shape = output_tensor->shape();
int out_num = std::accumulate(
output_shape.begin(), output_shape.end(), 1, std::multiplies<int>());
int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1,
std::multiplies<int>());
if (output_tensor->type() == paddle_infer::DataType::INT32) {
out_bbox_num_data_.resize(out_num);
output_tensor->CopyToCpu(out_bbox_num_data_.data());
......@@ -344,8 +328,8 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> imgs,
for (int j = 0; j < output_names.size(); j++) {
auto output_tensor = predictor_->GetOutputHandle(output_names[j]);
std::vector<int> output_shape = output_tensor->shape();
int out_num = std::accumulate(
output_shape.begin(), output_shape.end(), 1, std::multiplies<int>());
int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1,
std::multiplies<int>());
output_shape_list.push_back(output_shape);
if (output_tensor->type() == paddle_infer::DataType::INT32) {
out_bbox_num_data_.resize(out_num);
......@@ -371,22 +355,15 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> imgs,
if (i == config_.fpn_stride_.size()) {
reg_max = output_shape_list[i][2] / 4 - 1;
}
float* buffer = new float[out_tensor_list[i].size()];
memcpy(buffer,
&out_tensor_list[i][0],
float *buffer = new float[out_tensor_list[i].size()];
memcpy(buffer, &out_tensor_list[i][0],
out_tensor_list[i].size() * sizeof(float));
output_data_list_.push_back(buffer);
}
PaddleDetection::PicoDetPostProcess(
result,
output_data_list_,
config_.fpn_stride_,
inputs_.im_shape_,
inputs_.scale_factor_,
config_.nms_info_["score_threshold"].as<float>(),
config_.nms_info_["nms_threshold"].as<float>(),
num_class,
reg_max);
result, output_data_list_, config_.fpn_stride_, inputs_.im_shape_,
inputs_.scale_factor_, config_.nms_info_["score_threshold"].as<float>(),
config_.nms_info_["nms_threshold"].as<float>(), num_class, reg_max);
bbox_num->push_back(result->size());
} else {
is_rbox = output_shape_list[0][output_shape_list[0].size() - 1] % 10 == 0;
......
......@@ -35,6 +35,9 @@ class Result(object):
return self.res_dict[name]
return None
def clear(self, name):
self.res_dict[name].clear()
class DataCollector(object):
"""
......@@ -80,7 +83,6 @@ class DataCollector(object):
ids = int(mot_item[0])
if ids not in self.collector:
self.collector[ids] = copy.deepcopy(self.mots)
self.collector[ids]["frames"].append(frameid)
self.collector[ids]["rects"].append([mot_item[2:]])
if attr_res:
......
......@@ -297,10 +297,9 @@ def distill_idfeat(mot_res):
feature_new = feature_list
# take every other frame when more than 20 frames are available
if len(qualities_new) > 200:
skipf = 20
else:
skipf = max(10, len(qualities_new) // 10)
skipf = 1
if len(qualities_new) > 20:
skipf = 2
quality_skip = np.array(qualities_new[::skipf])
feature_skip = np.array(feature_new[::skipf])
......
......@@ -587,7 +587,7 @@ class PipePredictor(object):
if self.cfg['visual']:
self.action_visual_helper.update(action_res)
if self.with_mtmct:
if self.with_mtmct and frame_id % 10 == 0:
crop_input, img_qualities, rects = self.reid_predictor.crop_image_with_mot(
frame, mot_res)
if frame_id > self.warmup_frame:
......@@ -603,6 +603,8 @@ class PipePredictor(object):
"rects": rects
}
self.pipeline_res.update(reid_res_dict, 'reid')
else:
self.pipeline_res.clear('reid')
self.collector.append(frame_id, self.pipeline_res)
......
......@@ -26,7 +26,7 @@ warnings.filterwarnings("ignore")
__all__ = [
'merge_matches',
'linear_assignment',
'cython_bbox_ious',
'bbox_ious',
'iou_distance',
'embedding_distance',
'fuse_motion',
......@@ -68,22 +68,28 @@ def linear_assignment(cost_matrix, thresh):
return matches, unmatched_a, unmatched_b
def cython_bbox_ious(atlbrs, btlbrs):
ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float)
if ious.size == 0:
def bbox_ious(atlbrs, btlbrs):
boxes = np.ascontiguousarray(atlbrs, dtype=np.float64)
query_boxes = np.ascontiguousarray(btlbrs, dtype=np.float64)
N = boxes.shape[0]
K = query_boxes.shape[0]
ious = np.zeros((N, K), dtype=boxes.dtype)
if N * K == 0:
return ious
try:
import cython_bbox
except Exception as e:
print('cython_bbox not found, please install cython_bbox.'
'for example: `pip install cython_bbox`.')
exit()
ious = cython_bbox.bbox_overlaps(
np.ascontiguousarray(
atlbrs, dtype=np.float),
np.ascontiguousarray(
btlbrs, dtype=np.float))
for k in range(K):
box_area = ((query_boxes[k, 2] - query_boxes[k, 0] + 1) *
(query_boxes[k, 3] - query_boxes[k, 1] + 1))
for n in range(N):
iw = (min(boxes[n, 2], query_boxes[k, 2]) - max(
boxes[n, 0], query_boxes[k, 0]) + 1)
if iw > 0:
ih = (min(boxes[n, 3], query_boxes[k, 3]) - max(
boxes[n, 1], query_boxes[k, 1]) + 1)
if ih > 0:
ua = float((boxes[n, 2] - boxes[n, 0] + 1) * (boxes[
n, 3] - boxes[n, 1] + 1) + box_area - iw * ih)
ious[n, k] = iw * ih / ua
return ious
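As a quick sanity check on the pure-NumPy replacement, here is a vectorized twin of the loop-based `bbox_ious` above; both keep cython_bbox's inclusive (+1) pixel convention. The function name is ours, not part of the patch:
```
import numpy as np

def bbox_ious_vec(atlbrs, btlbrs):
    # vectorized equivalent of bbox_ious above, same inclusive (+1) convention
    a = np.ascontiguousarray(atlbrs, dtype=np.float64)
    b = np.ascontiguousarray(btlbrs, dtype=np.float64)
    if a.size == 0 or b.size == 0:
        return np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float64)
    iw = np.clip(np.minimum(a[:, None, 2], b[None, :, 2])
                 - np.maximum(a[:, None, 0], b[None, :, 0]) + 1, 0, None)
    ih = np.clip(np.minimum(a[:, None, 3], b[None, :, 3])
                 - np.maximum(a[:, None, 1], b[None, :, 1]) + 1, 0, None)
    inter = iw * ih
    area_a = (a[:, 2] - a[:, 0] + 1) * (a[:, 3] - a[:, 1] + 1)
    area_b = (b[:, 2] - b[:, 0] + 1) * (b[:, 3] - b[:, 1] + 1)
    return inter / (area_a[:, None] + area_b[None, :] - inter)

a = np.array([[0., 0., 9., 9.]])                      # a 10x10 box
b = np.array([[0., 0., 9., 9.], [5., 5., 14., 14.]])  # identical / half-offset
print(bbox_ious_vec(a, b))                            # [[1.0, 25/175 ≈ 0.1429]]
```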
......@@ -98,7 +104,7 @@ def iou_distance(atracks, btracks):
else:
atlbrs = [track.tlbr for track in atracks]
btlbrs = [track.tlbr for track in btracks]
_ious = cython_bbox_ious(atlbrs, btlbrs)
_ious = bbox_ious(atlbrs, btlbrs)
cost_matrix = 1 - _ious
return cost_matrix
......
......@@ -231,7 +231,7 @@ class Detector(object):
self.det_times.preprocess_time_s.end()
# model prediction
result = self.predict(repeats=repeats) # warmup
result = self.predict(repeats=50) # warmup
self.det_times.inference_time_s.start()
result = self.predict(repeats=repeats)
self.det_times.inference_time_s.end(repeats=repeats)
......@@ -296,7 +296,7 @@ class Detector(object):
if not os.path.exists(self.output_dir):
os.makedirs(self.output_dir)
out_path = os.path.join(self.output_dir, video_out_name)
fourcc = cv2.VideoWriter_fourcc(* 'mp4v')
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
index = 1
while (1):
......@@ -790,7 +790,7 @@ def main():
if FLAGS.image_dir is None and FLAGS.image_file is not None:
assert FLAGS.batch_size == 1, "batch_size should be 1, when image_file is not None"
img_list = get_test_images(FLAGS.image_dir, FLAGS.image_file)
detector.predict_image(img_list, FLAGS.run_benchmark, repeats=10)
detector.predict_image(img_list, FLAGS.run_benchmark, repeats=100)
if not FLAGS.run_benchmark:
detector.det_times.info(average=True)
else:
......
......@@ -306,11 +306,12 @@ class MCMOTEvaluator(object):
def load_annotations(self):
assert self.data_type == 'mcmot'
self.gt_filename = os.path.join(self.data_root, '../',
'sequences',
self.gt_filename = os.path.join(self.data_root, '../', 'sequences',
'{}.txt'.format(self.seq_name))
if not os.path.exists(self.gt_filename):
logger.warning("gt_filename '{}' of MCMOTEvaluator is not exist, so the MOTA will be -inf.")
logger.warning(
"gt_filename '{}' of MCMOTEvaluator does not exist, so the MOTA will be -INF.".
format(self.gt_filename))
def reset_accumulator(self):
import motmetrics as mm
......
......@@ -37,8 +37,11 @@ __all__ = ['MOTEvaluator', 'MOTMetric', 'JDEDetMetric', 'KITTIMOTMetric']
def read_mot_results(filename, is_gt=False, is_ignore=False):
valid_label = [1]
ignore_labels = [2, 7, 8, 12] # only in motchallenge datasets like 'MOT16'
logger.info("In MOT16/17 dataset the valid_label of ground truth is '{}', "
"in other dataset it should be '0' for single classs MOT.".format(valid_label[0]))
if is_gt:
logger.info(
"In MOT16/17 dataset the valid_label of ground truth is '{}', "
"in other dataset it should be '0' for single classs MOT.".format(
valid_label[0]))
results_dict = dict()
if os.path.isfile(filename):
with open(filename, 'r') as f:
......@@ -118,7 +121,9 @@ class MOTEvaluator(object):
gt_filename = os.path.join(self.data_root, self.seq_name, 'gt',
'gt.txt')
if not os.path.exists(gt_filename):
logger.warning("gt_filename '{}' of MOTEvaluator is not exist, so the MOTA will be -inf.")
logger.warning(
"gt_filename '{}' of MOTEvaluator does not exist, so the MOTA will be -INF.".
format(gt_filename))
self.gt_frame_dict = read_mot_results(gt_filename, is_gt=True)
self.gt_ignore_frame_dict = read_mot_results(
gt_filename, is_ignore=True)
......
......@@ -22,22 +22,23 @@ class BaseArch(nn.Layer):
self.fuse_norm = False
def load_meanstd(self, cfg_transform):
self.scale = 1.
self.mean = paddle.to_tensor([0.485, 0.456, 0.406]).reshape(
(1, 3, 1, 1))
self.std = paddle.to_tensor([0.229, 0.224, 0.225]).reshape((1, 3, 1, 1))
scale = 1.
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
for item in cfg_transform:
if 'NormalizeImage' in item:
self.mean = paddle.to_tensor(item['NormalizeImage'][
'mean']).reshape((1, 3, 1, 1))
self.std = paddle.to_tensor(item['NormalizeImage'][
'std']).reshape((1, 3, 1, 1))
mean = np.array(
item['NormalizeImage']['mean'], dtype=np.float32)
std = np.array(item['NormalizeImage']['std'], dtype=np.float32)
if item['NormalizeImage'].get('is_scale', True):
self.scale = 1. / 255.
scale = 1. / 255.
break
if self.data_format == 'NHWC':
self.mean = self.mean.reshape(1, 1, 1, 3)
self.std = self.std.reshape(1, 1, 1, 3)
self.scale = paddle.to_tensor(scale / std).reshape((1, 1, 1, 3))
self.bias = paddle.to_tensor(-mean / std).reshape((1, 1, 1, 3))
else:
self.scale = paddle.to_tensor(scale / std).reshape((1, 3, 1, 1))
self.bias = paddle.to_tensor(-mean / std).reshape((1, 3, 1, 1))
def forward(self, inputs):
if self.data_format == 'NHWC':
......@@ -46,7 +47,7 @@ class BaseArch(nn.Layer):
if self.fuse_norm:
image = inputs['image']
self.inputs['image'] = (image * self.scale - self.mean) / self.std
self.inputs['image'] = image * self.scale + self.bias
self.inputs['im_shape'] = inputs['im_shape']
self.inputs['scale_factor'] = inputs['scale_factor']
else:
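The intent of the `load_meanstd`/`forward` changes above is to fold per-channel normalization into a single fused multiply-add, using the identity (x·s − mean)/std = x·(s/std) + (−mean/std). A quick NumPy check of that identity, with shapes following the NCHW branch (variable names here are ours):
```
import numpy as np

x = (np.random.rand(1, 3, 8, 8) * 255.).astype(np.float32)  # dummy NCHW image
s = 1. / 255.
mean = np.array([0.485, 0.456, 0.406], np.float32).reshape(1, 3, 1, 1)
std = np.array([0.229, 0.224, 0.225], np.float32).reshape(1, 3, 1, 1)

old = (x * s - mean) / std          # previous fuse_norm formulation
scale, bias = s / std, -mean / std  # constants precomputed in load_meanstd
new = x * scale + bias              # single fused multiply-add per forward pass
print(np.allclose(old, new, atol=1e-5))  # True
```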
......@@ -66,8 +67,7 @@ class BaseArch(nn.Layer):
outs = []
for inp in inputs_list:
if self.fuse_norm:
self.inputs['image'] = (
inp['image'] * self.scale - self.mean) / self.std
self.inputs['image'] = inp['image'] * self.scale + self.bias
self.inputs['im_shape'] = inp['im_shape']
self.inputs['scale_factor'] = inp['scale_factor']
else:
......@@ -75,7 +75,7 @@ class BaseArch(nn.Layer):
outs.append(self.get_pred())
# multi-scale test
if len(outs)>1:
if len(outs) > 1:
out = self.merge_multi_scale_predictions(outs)
else:
out = outs[0]
......@@ -92,7 +92,9 @@ class BaseArch(nn.Layer):
keep_top_k = self.bbox_post_process.nms.keep_top_k
nms_threshold = self.bbox_post_process.nms.nms_threshold
else:
raise Exception("Multi scale test only supports CascadeRCNN, FasterRCNN and MaskRCNN for now")
raise Exception(
"Multi scale test only supports CascadeRCNN, FasterRCNN and MaskRCNN for now"
)
final_boxes = []
all_scale_outs = paddle.concat([o['bbox'] for o in outs]).numpy()
......@@ -101,9 +103,11 @@ class BaseArch(nn.Layer):
if np.count_nonzero(idxs) == 0:
continue
r = nms(all_scale_outs[idxs, 1:], nms_threshold)
final_boxes.append(np.concatenate([np.full((r.shape[0], 1), c), r], 1))
final_boxes.append(
np.concatenate([np.full((r.shape[0], 1), c), r], 1))
out = np.concatenate(final_boxes)
out = np.concatenate(sorted(out, key=lambda e: e[1])[-keep_top_k:]).reshape((-1, 6))
out = np.concatenate(sorted(
out, key=lambda e: e[1])[-keep_top_k:]).reshape((-1, 6))
out = {
'bbox': paddle.to_tensor(out),
'bbox_num': paddle.to_tensor(np.array([out.shape[0], ]))
......
......@@ -199,7 +199,11 @@ class ATSSAssigner(nn.Layer):
gt_bboxes.reshape([-1, 4]), assigned_gt_index.flatten(), axis=0)
assigned_bboxes = assigned_bboxes.reshape([batch_size, num_anchors, 4])
assigned_scores = F.one_hot(assigned_labels, self.num_classes)
assigned_scores = F.one_hot(assigned_labels, self.num_classes + 1)
ind = list(range(self.num_classes + 1))
ind.remove(bg_index)
assigned_scores = paddle.index_select(
assigned_scores, paddle.to_tensor(ind), axis=-1)
if pred_bboxes is not None:
# assigned iou
ious = batch_iou_similarity(gt_bboxes, pred_bboxes) * mask_positive
......
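The assigner, head, and loss changes in this commit all address the same pitfall: background anchors carry the label `bg_index` (equal to `num_classes`), which is out of range for a one-hot encoding of depth `num_classes`. Encoding with depth `num_classes + 1` and then dropping the background column leaves an all-zero target row for background anchors. A small NumPy illustration of the idea (paddle's `F.one_hot` plus `index_select`, or the `[..., :-1]` slice, behave analogously):
```
import numpy as np

num_classes, bg_index = 3, 3
labels = np.array([0, 2, bg_index])        # last anchor is background

one_hot = np.eye(num_classes + 1)[labels]  # depth num_classes + 1 is always in range
scores = np.delete(one_hot, bg_index, axis=-1)  # drop the background column
print(scores)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 0. 0.]]  <- background becomes an all-zero target
```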
......@@ -143,7 +143,11 @@ class TaskAlignedAssigner(nn.Layer):
gt_bboxes.reshape([-1, 4]), assigned_gt_index.flatten(), axis=0)
assigned_bboxes = assigned_bboxes.reshape([batch_size, num_anchors, 4])
assigned_scores = F.one_hot(assigned_labels, num_classes)
assigned_scores = F.one_hot(assigned_labels, num_classes + 1)
ind = list(range(num_classes + 1))
ind.remove(bg_index)
assigned_scores = paddle.index_select(
assigned_scores, paddle.to_tensor(ind), axis=-1)
# rescale alignment metrics
alignment_metrics *= mask_positive
max_metrics_per_instance = alignment_metrics.max(axis=-1, keepdim=True)
......
......@@ -331,7 +331,8 @@ class PPYOLOEHead(nn.Layer):
assigned_bboxes /= stride_tensor
# cls loss
if self.use_varifocal_loss:
one_hot_label = F.one_hot(assigned_labels, self.num_classes)
one_hot_label = F.one_hot(assigned_labels,
self.num_classes + 1)[..., :-1]
loss_cls = self._varifocal_loss(pred_scores, assigned_scores,
one_hot_label)
else:
......
......@@ -80,7 +80,7 @@ class DETRLoss(nn.Layer):
target_label = target_label.reshape([bs, num_query_objects])
if self.use_focal_loss:
target_label = F.one_hot(target_label,
self.num_classes + 1)[:, :, :-1]
self.num_classes + 1)[..., :-1]
return {
'loss_class': self.loss_coeff['class'] * sigmoid_focal_loss(
logits, target_label, num_gts / num_query_objects)
......
......@@ -26,7 +26,7 @@ warnings.filterwarnings("ignore")
__all__ = [
'merge_matches',
'linear_assignment',
'cython_bbox_ious',
'bbox_ious',
'iou_distance',
'embedding_distance',
'fuse_motion',
......@@ -68,22 +68,28 @@ def linear_assignment(cost_matrix, thresh):
return matches, unmatched_a, unmatched_b
def cython_bbox_ious(atlbrs, btlbrs):
ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float)
if ious.size == 0:
def bbox_ious(atlbrs, btlbrs):
boxes = np.ascontiguousarray(atlbrs, dtype=np.float64)
query_boxes = np.ascontiguousarray(btlbrs, dtype=np.float64)
N = boxes.shape[0]
K = query_boxes.shape[0]
ious = np.zeros((N, K), dtype=boxes.dtype)
if N * K == 0:
return ious
try:
import cython_bbox
except Exception as e:
print('cython_bbox not found, please install cython_bbox.'
'for example: `pip install cython_bbox`.')
raise e
ious = cython_bbox.bbox_overlaps(
np.ascontiguousarray(
atlbrs, dtype=np.float),
np.ascontiguousarray(
btlbrs, dtype=np.float))
for k in range(K):
box_area = ((query_boxes[k, 2] - query_boxes[k, 0] + 1) *
(query_boxes[k, 3] - query_boxes[k, 1] + 1))
for n in range(N):
iw = (min(boxes[n, 2], query_boxes[k, 2]) - max(
boxes[n, 0], query_boxes[k, 0]) + 1)
if iw > 0:
ih = (min(boxes[n, 3], query_boxes[k, 3]) - max(
boxes[n, 1], query_boxes[k, 1]) + 1)
if ih > 0:
ua = float((boxes[n, 2] - boxes[n, 0] + 1) * (boxes[
n, 3] - boxes[n, 1] + 1) + box_area - iw * ih)
ious[n, k] = iw * ih / ua
return ious
......@@ -98,7 +104,7 @@ def iou_distance(atracks, btracks):
else:
atlbrs = [track.tlbr for track in atracks]
btlbrs = [track.tlbr for track in btracks]
_ious = cython_bbox_ious(atlbrs, btlbrs)
_ious = bbox_ious(atlbrs, btlbrs)
cost_matrix = 1 - _ious
return cost_matrix
......