Unverified commit 789ff8dd authored by Alexander Dokuchaev, committed by GitHub

Use relative paths in scripts arguments (#16)

* Add os.abspath

* Fix readme files

* Remove CI status
Parent commit: bc22a4bb
# TTCF: Training Toolbox for Caffe
This is a [BVLC Caffe](https://github.com/BVLC/caffe) fork intended for deploying multiple SSD-based detection models. It includes
- action detection and action recognition models for smart classroom use-case, see [README_AD.md](README_AD.md),
- person detection for smart classroom use-case, see [README_PD.md](README_PD.md),
......
@@ -37,7 +37,7 @@ cd ./models/templates/person_detection_action_recognition_N_classes
### (optional) Prepare init weights from PD model
1. Run docker in an interactive session with a mounted directory containing the WIDER dataset
```Shell
nvidia-docker --rm -it -v <path_to_folder_with_weights>:/workspace tccf bash
```
@@ -59,10 +59,10 @@ On next stage we should train the Action Recognition (AR) model which reuses det
```Shell
cd ./models
python3 train.py --model person_detection_action_recognition \ # name of model
--weights action_detection_0005.caffemodel \ # initialize weights from 'init_weights' directory
--data_dir <PATH_TO_DATA> \ # path to directory with dataset
--work_dir <WORK_DIR> \ # directory to collect file from training process
--gpu <GPU_ID>
```
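Per this commit, the launcher scripts now accept relative paths in their arguments by resolving them with `os.path.abspath` before use. A minimal sketch of that pattern (this is not the repository's actual `train.py`; the argument names mirror the README, everything else is illustrative), useful because the resolved paths are later handed to `docker run -v`, which needs absolute host paths:

```python
# Illustrative sketch: normalize user-supplied, possibly-relative paths
# so they can later be passed to `docker run -v` (absolute paths required).
import argparse
import os


def parse_args(argv=None):
    parser = argparse.ArgumentParser(description='Launch training in docker')
    parser.add_argument('--model', required=True)     # name of model
    parser.add_argument('--weights', required=True)   # init weights file
    parser.add_argument('--data_dir', required=True)  # dataset directory
    parser.add_argument('--work_dir', required=True)  # training output directory
    parser.add_argument('--gpu', default='0')
    args = parser.parse_args(argv)
    # Resolve relative paths against the current working directory.
    args.data_dir = os.path.abspath(args.data_dir)
    args.work_dir = os.path.abspath(args.work_dir)
    return args
```

With this normalization, `--data_dir ../data` and an equivalent absolute path behave identically downstream.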
@@ -72,7 +72,7 @@ To evaluate the quality of trained Action Recognition model on your test data yo
1. Frame-independent evaluation:
```Shell
python3 evaluate.py --type ad \
--dir <EXPERIMENT_DIR> \
--data_dir <DATA_DIR> \
--annotation test_tasks.txt \
@@ -81,7 +81,7 @@ python evaluate.py --type ad \
2. Event-based evaluation:
```Shell
python3 evaluate.py --type ad_event \
--dir <EXPERIMENT_DIR> \
--data_dir <DATA_DIR> \
--annotation test_tasks.txt \
@@ -91,7 +91,7 @@ python evaluate.py --type ad_event \
### Export to IR format
```Shell
python3 mo_convert.py --name action_recognition --type ad \
--dir <EXPERIMENT_DIR> \
--iter <ITERATION_NUM> \
--data_type FP32
......
@@ -27,7 +27,7 @@ python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/<DATA_VAL_FILE> images_db_val
python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/<DATA_TEST_FILE> images_db_test
```
3. Close the docker session with `ctrl+D` and check that you have the `images_db_<subset>.hd5` and `images_db_<subset>_list.txt` files in `<DATA_DIR>`.
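The check in step 3 can be scripted. A small sketch (the file names follow the README; the helper itself is illustrative, not part of the repository) that reports which expected HDF5 outputs are missing:

```python
# Sketch: verify that gen_hdf5_data.py produced the expected files for each
# subset before starting training.
import os


def check_hdf5_outputs(data_dir, subsets=('train', 'val', 'test')):
    """Return the list of expected files that are missing in data_dir."""
    missing = []
    for subset in subsets:
        for name in ('images_db_%s.hd5' % subset,
                     'images_db_%s_list.txt' % subset):
            if not os.path.isfile(os.path.join(data_dir, name)):
                missing.append(name)
    return missing
```

An empty return value means all subsets were generated.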
## Model training and evaluation
......
@@ -9,7 +9,7 @@ As an example of usage please download a small dataset from [here](https://downl
To create LMDB files, go to the `$CAFFE_ROOT/python/lmdb_utils/` directory and run the following scripts:
1. Run docker in an interactive session with a mounted directory containing the WIDER dataset
```Shell
nvidia-docker run --rm -it --user=$(id -u) -v <DATA_DIR>:/data ttcf bash
```
@@ -22,7 +22,7 @@ python3 $CAFFE_ROOT/python/lmdb_utils/convert_to_voc_format.py /data/annotation_
```Shell
bash $CAFFE_ROOT/python/lmdb_utils/create_cr_lmdb.sh
```
4. Close the docker session with `ctrl+D` and check that you have the lmdb files in `<DATA_DIR>/lmdb`.
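The lmdb check in step 4 can also be automated. An LMDB database is a directory containing a `data.mdb` file (usually alongside `lock.mdb`); since the database names produced by `create_cr_lmdb.sh` are script-specific, this hypothetical helper simply scans every subdirectory:

```python
# Sketch: sanity-check the LMDB output by scanning subdirectories of
# <DATA_DIR>/lmdb for directories that contain a data.mdb file.
import os


def find_lmdb_dirs(lmdb_root):
    """Return subdirectories of lmdb_root that look like LMDB databases."""
    found = []
    for entry in sorted(os.listdir(lmdb_root)):
        path = os.path.join(lmdb_root, entry)
        if os.path.isdir(path) and os.path.isfile(os.path.join(path, 'data.mdb')):
            found.append(entry)
    return found
```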
###
@@ -32,7 +32,7 @@ On next stage we should train the Person-vehicle-bike crossroad (four class) det
```Shell
cd ./models
python3 train.py --model crossroad \
--weights person-vehicle-bike-detection-crossroad-0078.caffemodel \
--data_dir <DATA_DIR> \
--work_dir <WORK_DIR> \
@@ -44,7 +44,7 @@ python train.py --model crossroad \
To evaluate the quality of the trained Person-vehicle-bike crossroad detection model on your test data, you can use the provided scripts.
```Shell
python3 evaluate.py --type cr \
--dir <WORK_DIR>/crossroad/<EXPERIMENT_NUM> \
--data_dir <DATA_DIR> \
--annotation annotation_val_cvt.json \
@@ -54,7 +54,7 @@ python evaluate.py --type cr \
### Export to IR format
```Shell
python3 mo_convert.py --type cr \
--name crossroad \
--dir <WORK_DIR>/crossroad/<EXPERIMENT_NUM> \
--iter <ITERATION_NUM> \
......
@@ -8,7 +8,7 @@ The training procedure can be done using data in LMDB format. To launch training
To create LMDB files, go to the `$CAFFE_ROOT/python/lmdb_utils/` directory and run the following scripts:
1. Run docker in an interactive session with a mounted directory containing the WIDER dataset
```Shell
nvidia-docker run --rm -it --user=$(id -u) -v <DATA_DIR>:/data ttcf bash
```
@@ -29,7 +29,7 @@ python3 $CAFFE_ROOT/python/lmdb_utils/xml_to_ssd.py --ssd_path /data --xml_path_
bash $CAFFE_ROOT/python/lmdb_utils/create_wider_lmdb.sh
```
5. Close the docker session with `ctrl+D` and check that you have the lmdb files in `<DATA_DIR>`.
###
@@ -39,7 +39,7 @@ On next stage we should train the Face Detection model. To do this follow next s
```Shell
cd ./models
python3 train.py --model face_detection \ # name of model
--weights face-detection-retail-0044.caffemodel \ # initialize weights from 'init_weights' directory
--data_dir <DATA_DIR> \ # path to directory with dataset
--work_dir <WORK_DIR> \ # directory to collect file from training process
@@ -51,7 +51,7 @@ python train.py --model face_detection \ # name of mod
To evaluate the quality of the trained Face Detection model on your test data, you can use the provided scripts.
```Shell
python3 evaluate.py --type fd \
--dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> \
--data_dir <DATA_DIR> \
--annotation wider_val.xml \
@@ -61,7 +61,7 @@ python evaluate.py --type fd \
### Export to IR format
```Shell
python3 mo_convert.py --name face_detection \
--dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> \
--iter <ITERATION_NUM> \
--data_type FP32
......
@@ -13,7 +13,7 @@ On first stage you should train the SSD-based person (two class) detector. To do
```Shell
cd ./models
python3 train.py --model person_detection \ # name of model
--weights person_detection_0022.caffemodel \ # initialize weights from 'init_weights' directory
--data_dir <PATH_TO_DATA> \ # path to directory with dataset
--work_dir <WORK_DIR> # directory to collect file from training process
@@ -28,7 +28,7 @@ Note: to get more accurate model it's recommended to use pre-training of backbon
```Shell
cd ./models
python3 mo_convert.py --name person_detection \
--dir <WORK_DIR>/person_detection/<EXPERIMENT_NUM> \
--iter <ITERATION_NUM> \
--data_type FP32
......
@@ -102,8 +102,8 @@ def main():
exec_bin, 'run', '--rm',
'--name', container_name,
'--user=%s:%s' % (os.getuid(), os.getgid()),
'-v', '%s:/workspace' % os.path.abspath(args.dir),
'-v', '%s:/data:ro' % os.path.abspath(args.data_dir), # Mount directory with dataset
args.image
]
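The change above wraps the mount sources in `os.path.abspath` because `docker run -v SRC:DST` expects `SRC` to be an absolute host path (a relative `SRC` is interpreted as a named volume, not a bind mount). A minimal sketch of the pattern (the helper name is illustrative, not part of the repository):

```python
# Sketch: build a docker `-v` value from a possibly-relative host path.
# os.path.abspath resolves whatever the user typed against the current
# working directory, so relative script arguments still mount correctly.
import os


def mount_arg(host_dir, container_dir, read_only=False):
    """Return a `-v`-style SRC:DST[:ro] string with an absolute SRC."""
    suffix = ':ro' if read_only else ''
    return '%s:%s%s' % (os.path.abspath(host_dir), container_dir, suffix)
```

For example, `mount_arg('../experiments/0001', '/workspace')` and `mount_arg('/data/wider', '/data', read_only=True)` both yield absolute-path mount arguments regardless of how the user spelled the directory.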
......
@@ -75,7 +75,7 @@ def main():
docker_command = [
exec_bin, 'run', '--rm',
'--user=%s:%s' % (os.getuid(), os.getgid()),
'-v', '%s:/workspace' % os.path.abspath(args.dir),
args.image
]
......
@@ -73,8 +73,8 @@ def main():
exec_bin, 'run', '--rm',
'--user=%s:%s' % (os.getuid(), os.getgid()),
'--name', container_name, # Name of container
'-v', '%s:/workspace' % os.path.abspath(experiment_dir), # Mount work directory
'-v', '%s:/data:ro' % os.path.abspath(args.data_dir), # Mount directory with dataset
'-v', '%s:/init_weights:ro' % os.path.abspath('../init_weights'), # Mount directory with init weights
args.image
]
......