openvinotoolkit / training_toolbox_caffe
Commit 789ff8dd (unverified)
Authored Sep 09, 2019 by Alexander Dokuchaev; committed via GitHub on Sep 09, 2019
Use relative paths in scripts arguments (#16)

* Add os.abspath
* Fix readme files
* Remove CI status

Parent: bc22a4bb
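The core of the change can be sketched as follows. This is a hypothetical illustration, not code from the repository: docker's `-v HOST:CONTAINER` flag requires an absolute host path, so a relative `--dir` argument must be expanded with `os.path.abspath` before it reaches `docker run`.

```python
import os

# Hypothetical variable names, for illustration only.
args_dir = "experiment/run_0"                                # relative, as a user might pass it
mount_before = "%s:/workspace" % args_dir                    # rejected by docker: host path not absolute
mount_after = "%s:/workspace" % os.path.abspath(args_dir)    # valid from any working directory

print(mount_after)
```

Because `os.path.abspath` resolves against the current working directory, the expansion must happen in the launcher script (where the user invoked it), not inside the container.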
Showing 9 changed files with 26 additions and 28 deletions (+26 −28)
| File | Changes |
| --- | --- |
| README.md | +0 −2 |
| README_AD.md | +8 −8 |
| README_AG.md | +1 −1 |
| README_CR.md | +5 −5 |
| README_FD.md | +5 −5 |
| README_PD.md | +2 −2 |
| models/evaluate.py | +2 −2 |
| models/mo_convert.py | +1 −1 |
| models/train.py | +2 −2 |
README.md

````diff
 # TTCF: Training Toolbox for Caffe
-[![Build Status](http://134.191.240.124/buildStatus/icon?job=caffe-toolbox/develop/trigger)](http://134.191.240.124/job/caffe-toolbox/job/develop/job/trigger/)
-
 This is a [BVLC Caffe](https://github.com/BVLC/caffe) fork that is intended for deployment multiple SSD-based detection models. It includes
 - action detection and action recognition models for smart classroom use-case, see [README_AD.md](README_AD.md),
 - person detection for smart classroom use-case, see [README_PD.md](README_PD.md),
````
README_AD.md

````diff
@@ -37,7 +37,7 @@ cd ./models/templates/person_detection_action_recognition_N_classes
 ### (optional) Prepare init weights from PD model
-1. Run docker in interactive sesion with mounted directory with WIDER dataset
+1. Run docker in interactive session with mounted directory with WIDER dataset
 ```Shell
 nvidia-docker --rm -it -v <path_to_folder_with_weights>:/workspace tccf bash
 ```
@@ -59,10 +59,10 @@ On next stage we should train the Action Recognition (AR) model which reuses det
 ```Shell
 cd ./models
-python train.py --model person_detection_action_recognition \ # name of model
+python3 train.py --model person_detection_action_recognition \ # name of model
        --weights action_detection_0005.caffemodel \ # initialize weights from 'init_weights' directory
        --data_dir <PATH_TO_DATA> \ # path to directory with dataset
        --work_dir <WORK_DIR> \ # directory to collect file from training process
        --gpu <GPU_ID>
 ```
@@ -72,7 +72,7 @@ To evaluate the quality of trained Action Recognition model on your test data yo
 1. Frame independent evaluation:
 ```Shell
-python evaluate.py --type ad \
+python3 evaluate.py --type ad \
        --dir <EXPERIMENT_DIR> \
        --data_dir <DATA_DIR> \
        --annotaion test_tasks.txt \
@@ -81,7 +81,7 @@ python evaluate.py --type ad \
 2. Event-based evaluation:
 ```Shell
-python evaluate.py --type ad_event \
+python3 evaluate.py --type ad_event \
        --dir <EXPERIMENT_DIR> \
        --data_dir <DATA_DIR> \
        --annotaion test_tasks.txt \
@@ -91,7 +91,7 @@ python evaluate.py --type ad_event \
 ### Export to IR format
 ```Shell
-python mo_convert.py --name action_recognition --type ad \
+python3 mo_convert.py --name action_recognition --type ad \
        --dir <EXPERIMENT_DIR> \
        --iter <ITERATION_NUM> \
        --data_type FP32
````
README_AG.md

````diff
@@ -27,7 +27,7 @@ python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/<DATA_VAL_FILE> images_db_val
 python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/<DATA_TEST_FILE> images_db_test
 ```
-3. Close docker session by 'alt+D' and check that you have `images_db_<subset>.hd5` and `images_db_<subset>_list.txt` files in <DATA_DIR>.
+3. Close docker session by `ctrl+D` and check that you have `images_db_<subset>.hd5` and `images_db_<subset>_list.txt` files in <DATA_DIR>.

 ## Model training and evaluation
````
README_CR.md

````diff
@@ -9,7 +9,7 @@ As an example of usage please download a small dataset from [here](https://downl
 To create LMDB files go to the '$CAFFE_ROOT/python/lmdb_utils/' directory and run the following scripts:
-1. Run docker in interactive sesion with mounted directory with WIDER dataset
+1. Run docker in interactive session with mounted directory with WIDER dataset
 ```Shell
 nvidia-docker run --rm -it --user=$(id -u) -v <DATA_DIR>:/data ttcf bash
 ```
@@ -22,7 +22,7 @@ python3 $CAFFE_ROOT/python/lmdb_utils/convert_to_voc_format.py /data/annotation_
 ```Shell
 bash $CAFFE_ROOT/python/lmdb_utils/create_cr_lmdb.sh
 ```
-4. Close docker session by 'alt+D' and check that you have lmdb files in <DATA_DIR>/lmdb.
+4. Close docker session by `ctrl+D` and check that you have lmdb files in <DATA_DIR>/lmdb.
 ###
@@ -32,7 +32,7 @@ On next stage we should train the Person-vehicle-bike crossroad (four class) det
 ```Shell
 cd ./models
-python train.py --model crossroad \
+python3 train.py --model crossroad \
        --weights person-vehicle-bike-detection-crossroad-0078.caffemodel \
        --data_dir <DATA_DIR> \
        --work_dir<WORK_DIR> \
@@ -44,7 +44,7 @@ python train.py --model crossroad \
 To evaluate the quality of trained Person-vehicle-bike crossroad detection model on your test data you can use provided scripts.
 ```Shell
-python evaluate.py --type cr \
+python3 evaluate.py --type cr \
        --dir <WORK_DIR>/crossroad/<EXPERIMENT_NUM> \
        --data_dir <DATA_DIR> \
        --annotation annotation_val_cvt.json \
@@ -54,7 +54,7 @@ python evaluate.py --type cr \
 ### Export to IR format
 ```Shell
-python mo_convert.py --type cr \
+python3 mo_convert.py --type cr \
        --name crossroad \
        --dir <WORK_DIR>/crossroad/<EXPERIMENT_NUM> \
        --iter <ITERATION_NUM> \
````
README_FD.md

````diff
@@ -8,7 +8,7 @@ The training procedure can be done using data in LMDB format. To launch training
 To create LMDB files go to the '$CAFFE_ROOT/python/lmdb_utils/' directory and run the following scripts:
-1. Run docker in interactive sesion with mounted directory with WIDER dataset
+1. Run docker in interactive session with mounted directory with WIDER dataset
 ```Shell
 nvidia-docker run --rm -it --user=$(id -u) -v <DATA_DIR>:/data ttcf bash
 ```
@@ -29,7 +29,7 @@ python3 $CAFFE_ROOT/python/lmdb_utils/xml_to_ssd.py --ssd_path /data --xml_path_
 bash $CAFFE_ROOT/python/lmdb_utils/create_wider_lmdb.sh
 ```
-5. Close docker session by 'alt+D' and check that you have lmdb files in <DATA_DIR>.
+5. Close docker session by `ctrl+D` and check that you have lmdb files in <DATA_DIR>.
 ###
@@ -39,7 +39,7 @@ On next stage we should train the Face Detection model. To do this follow next s
 ```Shell
 cd ./models
-python train.py --model face_detection \ # name of model
+python3 train.py --model face_detection \ # name of model
        --weights face-detection-retail-0044.caffemodel \ # initialize weights from 'init_weights' directory
        --data_dir <DATA_DIR> \ # path to directory with dataset
        --work_dir <WORK_DIR> \ # directory to collect file from training process
@@ -51,7 +51,7 @@ python train.py --model face_detection \ # name of mod
 To evaluate the quality of trained Face Detection model on your test data you can use provided scripts.
 ```Shell
-python evaluate.py --type fd \
+python3 evaluate.py --type fd \
        --dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> \
        --data_dir <DATA_DIR> \
        --annotation wider_val.xml \
@@ -61,7 +61,7 @@ python evaluate.py --type fd \
 ### Export to IR format
 ```Shell
-python mo_convert.py --name face_detection \
+python3 mo_convert.py --name face_detection \
        --dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> \
        --iter <ITERATION_NUM> \
        --data_type FP32
````
README_PD.md

````diff
@@ -13,7 +13,7 @@ On first stage you should train the SSD-based person (two class) detector. To do
 ```Shell
 cd ./models
-python train.py --model person_detection \ # name of model
+python3 train.py --model person_detection \ # name of model
        --weights person_detection_0022.caffemodel \ # initialize weights from 'init_weights' directory
        --data_dir <PATH_TO_DATA> \ # path to directory with dataset
        --work_dir <WORK_DIR> # directory to collect file from training process
@@ -28,7 +28,7 @@ Note: to get more accurate model it's recommended to use pre-training of backbon
 ```Shell
 cd ./models
-python mo_convert.py --name face_detection \
+python3 mo_convert.py --name face_detection \
        --dir <WORK_DIR>/person_detection/<EXPERIMENT_NUM> \
        --iter <INTERATION> \
        --data_type FP32
````
models/evaluate.py

````diff
@@ -102,8 +102,8 @@ def main():
         exec_bin, 'run', '--rm',
         '--name', container_name,
         '--user=%s:%s' % (os.getuid(), os.getgid()),
-        '-v', '%s:/workspace' % args.dir,
-        '-v', '%s:/data:ro' % args.data_dir,  # Mount directory with dataset
+        '-v', '%s:/workspace' % os.path.abspath(args.dir),
+        '-v', '%s:/data:ro' % os.path.abspath(args.data_dir),  # Mount directory with dataset
         args.image
     ]
````
models/mo_convert.py

````diff
@@ -75,7 +75,7 @@ def main():
     docker_command = [
         exec_bin, 'run', '--rm',
         '--user=%s:%s' % (os.getuid(), os.getgid()),
-        '-v', '%s:/workspace' % args.dir,
+        '-v', '%s:/workspace' % os.path.abspath(args.dir),
         args.image
     ]
````
models/train.py

````diff
@@ -73,8 +73,8 @@ def main():
         exec_bin, 'run', '--rm',
         '--user=%s:%s' % (os.getuid(), os.getgid()),
         '--name', container_name,  # Name of container
-        '-v', '%s:/workspace' % experiment_dir,  # Mout work directory
-        '-v', '%s:/data:ro' % args.data_dir,  # Mount directory with dataset
+        '-v', '%s:/workspace' % os.path.abspath(experiment_dir),  # Mout work directory
+        '-v', '%s:/data:ro' % os.path.abspath(args.data_dir),  # Mount directory with dataset
         '-v', '%s:/init_weights:ro' % os.path.abspath('../init_weights'),  # Mount directory with init weights
         args.image
     ]
````
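The pattern the three launcher scripts converge on after this commit can be sketched as a small helper. This is an assumed simplification (the function name and argument order are not from the repository): every host path passed to `-v` is run through `os.path.abspath`, so the scripts work regardless of where the user invokes them.

```python
import os

def build_docker_command(exec_bin, container_name, experiment_dir, data_dir, image):
    """Sketch of the post-commit pattern: absolutize every -v host path.

    Hypothetical helper for illustration; the real scripts build this list inline.
    """
    return [
        exec_bin, 'run', '--rm',
        '--user=%s:%s' % (os.getuid(), os.getgid()),   # run as the invoking user
        '--name', container_name,
        '-v', '%s:/workspace' % os.path.abspath(experiment_dir),        # work directory
        '-v', '%s:/data:ro' % os.path.abspath(data_dir),                # dataset, read-only
        '-v', '%s:/init_weights:ro' % os.path.abspath('../init_weights'),  # init weights
        image,
    ]

cmd = build_docker_command('docker', 'ttcf_train', './work/exp1', './data', 'ttcf')
```

Note that `os.path.abspath` only normalizes against the current working directory; it does not check that the path exists, so a typo in `--data_dir` still surfaces as an empty mount inside the container.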