Unverified commit 7745eaf2, authored by Double_V, committed via GitHub

update vot code (#4338)

Parent commit: 7ec3ec11
# tracking: A Single Object Tracking Framework
## Introduction to Object Tracking
## Introduction
tracking is a single object tracking (Visual Object Tracking, VOT) library for video, built on Baidu's deep learning framework Paddle. Its overall design follows [pytracking](https://github.com/visionml/pytracking), whose clean architecture makes it easy to fold other trackers such as SiamFC, SiamRPN, and SiamMask into one framework, which simplifies unified experiments and comparisons later on.
tracking currently covers the mainstream object tracking models, including SiamFC, SiamRPN, SiamMask, and ATOM. It aims to give developers a set of convenient, efficient PaddlePaddle-based deep learning algorithms for object tracking, and the range of supported models will keep being extended.
## Code Directory Structure of the Tracking Library
## Code Directory Structure
```
......@@ -36,7 +36,7 @@ pytracking contains the tracking code
### Data Preparation
The training and test sets for object tracking are different, and today's best models are usually trained on multiple training sets. Commonly used datasets are listed below:
The training and test sets for object tracking are different, and today's best models are usually trained on multiple training sets.
The mainstream training datasets are:
- [VID](http://bvisionweb1.cs.unc.edu/ilsvrc2015/ILSVRC2015_VID.tar.gz)
......@@ -60,6 +60,8 @@ tracking runtime environment:
- python3
- PaddlePaddle 1.7
> Note: if you hit an error importing cmath, switch Python versions; python 3.6.8 or python 3.7.0 is recommended.
### Installing Dependencies
1. Install Paddle. Paddle 1.7 is required; if your installed version is older, please upgrade to Paddle 1.7 (a quick version check is sketched after this excerpt).
......
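A quick way to confirm the version requirement from Python is shown below; this is only a sanity-check sketch, assuming `paddle` is installed and exposes `__version__` as its release wheels do.
```python
# Sanity check of the installed Paddle version (not part of the original steps).
import paddle

print(paddle.__version__)  # expect a 1.7.x release for this codebase
```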
......@@ -19,9 +19,9 @@ CURRENT_DIR = osp.dirname(__file__)
sys.path.append(osp.join(CURRENT_DIR, '..'))
from pytracking.admin.environment import env_settings
from pytracking.pysot_toolkit.datasets import DatasetFactory
from pytracking.pysot_toolkit.evaluation import EAOBenchmark, AccuracyRobustnessBenchmark, OPEBenchmark
from pytracking.pysot_toolkit.utils.region import vot_overlap
from pytracking.pysot_toolkit.pysot.datasets import DatasetFactory
from pytracking.pysot_toolkit.pysot.evaluation import EAOBenchmark, AccuracyRobustnessBenchmark, OPEBenchmark
from pytracking.pysot_toolkit.pysot.utils.region import vot_overlap
parser = argparse.ArgumentParser(description='tracking evaluation')
......
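For context on how the relocated `pysot_toolkit.pysot` modules are typically used, here is a hedged sketch that follows the upstream pysot toolkit API; the dataset name, paths, and tracker name are placeholders, and the exact signatures in this port may differ.
```python
# Illustrative only: mirrors upstream pysot usage; all paths and names are placeholders.
from pytracking.pysot_toolkit.pysot.datasets import DatasetFactory
from pytracking.pysot_toolkit.pysot.evaluation import EAOBenchmark

# Build the benchmark dataset and point it at previously dumped tracker results.
dataset = DatasetFactory.create_dataset(
    name='VOT2018', dataset_root='/path/to/VOT2018', load_img=False)
dataset.set_tracker('/path/to/results', ['my_tracker'])

# Compute EAO for the named tracker(s).
benchmark = EAOBenchmark(dataset)
print(benchmark.eval(['my_tracker']))
```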
......@@ -394,7 +394,7 @@ class ConjugateGradient(ConjugateGradientBase):
fetch_list=[v.name for v in self.f0])
res = TensorList(res)
loss = self.problem.ip_output(res, res)
print('Paddle Loss: {}'.format(loss))
#print('Paddle Loss: {}'.format(loss))
class GaussNewtonCG(ConjugateGradientBase):
......@@ -614,7 +614,7 @@ class GaussNewtonCG(ConjugateGradientBase):
fetch_list=[v.name for v in self.f0])
res = TensorList(res)
loss = self.problem.ip_output(res, res)
print('Paddle Loss: {}'.format(loss))
#print('Paddle Loss: {}'.format(loss))
class GradientDescentL2:
......@@ -691,7 +691,7 @@ class GradientDescentL2:
fetch_list=[self.loss.name] + grad_names)
if self.debug:
loss = res[0]
print('Paddle Loss: {}'.format(loss))
#print('Paddle Loss: {}'.format(loss))
grad = TensorList(res[1:])
......
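The change above silences the diagnostic prints by commenting them out; an alternative, sketched here only as a suggestion and not part of this commit, is to route them through Python's standard logging so they can be switched on per run.
```python
# Sketch: debug-level logging in place of hand-commented print calls.
import logging

logger = logging.getLogger('pytracking.optimization')

def log_loss(loss):
    """Emit the loss only when debug logging is enabled for this logger."""
    logger.debug('Paddle Loss: %s', loss)

# Enable for a debugging run, e.g. in the script entry point:
# logging.basicConfig(level=logging.DEBUG)
```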
import numpy as np
from pytracking.features.deep import ResNet50
from pytracking.features.deep import ResNet18, ResNet50
from pytracking.features.extractor import MultiResolutionExtractor
from pytracking.utils import TrackerParams, FeatureParams
......@@ -42,7 +42,9 @@ def parameters():
params.train_skipping = 10 # How often to run training (every n-th frame)
# Online model parameters
deep_params.kernel_size = (4, 4) # Kernel size of filter
# deep_params.kernel_size = (4, 4)  # use (4, 4) once slice double grad is supported;
# until then, fall back to deep_params.kernel_size = (5, 5)
deep_params.kernel_size = (5, 5) # Kernel size of filter
deep_params.compressed_dim = 64 # Dimension output of projection matrix
deep_params.filter_reg = 1e-1 # Filter regularization factor
deep_params.projection_reg = 1e-4 # Projection regularization factor
......@@ -104,11 +106,8 @@ def parameters():
# Setup the feature extractor (which includes the IoUNet)
deep_fparams = FeatureParams(feature_params=[deep_params])
deep_feat = ResNet50(
net_path='/home/vis/bily/code/baidu/personal-code/libi-13/paddle_ATOMnet-ep0040',
output_layers=['block2'],
fparams=deep_fparams,
normalize_power=2)
deep_feat = ResNet18(
output_layers=['block2'], fparams=deep_fparams, normalize_power=2)
params.features = MultiResolutionExtractor([deep_feat])
params.vot_anno_conversion_type = 'preserve_area'
......
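For orientation, a parameter module like this one is normally loaded and its `parameters()` function called to build the tracker configuration. The import path below is an assumption based on the pytracking-style layout and may not match this repository exactly.
```python
# Hedged usage sketch; the module path 'pytracking.parameter.atom.default' is assumed.
import importlib

param_module = importlib.import_module('pytracking.parameter.atom.default')
params = param_module.parameters()

# Inspect a few of the settings assembled above.
print(params.train_skipping)  # e.g. 10: run online training every n-th frame
print(params.features)        # MultiResolutionExtractor wrapping the ResNet18 features
```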