Unverified commit 789ff8dd, authored by Alexander Dokuchaev, committed by GitHub

Use relative paths in scripts arguments (#16)

* Add os.abspath

* Fix readme files

* Remove CI status
Parent bc22a4bb
# TTCF: Training Toolbox for Caffe
This is a [BVLC Caffe](https://github.com/BVLC/caffe) fork intended for deploying multiple SSD-based detection models. It includes
- action detection and action recognition models for the smart classroom use case, see [README_AD.md](README_AD.md),
- person detection for the smart classroom use case, see [README_PD.md](README_PD.md),
...
@@ -37,7 +37,7 @@ cd ./models/templates/person_detection_action_recognition_N_classes
### (optional) Prepare init weights from PD model
1. Run docker in an interactive session with the weights directory mounted
```Shell
nvidia-docker run --rm -it -v <path_to_folder_with_weights>:/workspace ttcf bash
```
@@ -59,7 +59,7 @@ On next stage we should train the Action Recognition (AR) model which reuses det
```Shell
cd ./models
python3 train.py --model person_detection_action_recognition \  # name of the model
    --weights action_detection_0005.caffemodel \                # initialize weights from the 'init_weights' directory
    --data_dir <PATH_TO_DATA> \                                 # path to the directory with the dataset
    --work_dir <WORK_DIR> \                                     # directory to collect files from the training process
@@ -72,7 +72,7 @@ To evaluate the quality of trained Action Recognition model on your test data yo
1. Frame independent evaluation:
```Shell
python3 evaluate.py --type ad \
    --dir <EXPERIMENT_DIR> \
    --data_dir <DATA_DIR> \
    --annotation test_tasks.txt \
@@ -81,7 +81,7 @@ python evaluate.py --type ad \
2. Event-based evaluation:
```Shell
python3 evaluate.py --type ad_event \
    --dir <EXPERIMENT_DIR> \
    --data_dir <DATA_DIR> \
    --annotation test_tasks.txt \
@@ -91,7 +91,7 @@ python evaluate.py --type ad_event \
### Export to IR format
```Shell
python3 mo_convert.py --name action_recognition --type ad \
    --dir <EXPERIMENT_DIR> \
    --iter <ITERATION_NUM> \
    --data_type FP32
...
@@ -27,7 +27,7 @@ python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/<DATA_VAL_FILE> images_db_val
python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/<DATA_TEST_FILE> images_db_test
```
3. Close the docker session with `ctrl+D` and check that the `images_db_<subset>.hd5` and `images_db_<subset>_list.txt` files are in <DATA_DIR>.
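The check in step 3 can be automated; a minimal sketch, assuming a helper name of our own (`check_hdf5_outputs` and the three subset names are illustrative, not part of the toolbox):

```python
from pathlib import Path

def check_hdf5_outputs(data_dir, subsets=("train", "val", "test")):
    """Return the expected HDF5 artifacts that are missing from data_dir."""
    missing = []
    for subset in subsets:
        # Each subset should produce a .hd5 file and a matching list file.
        for name in ("images_db_%s.hd5" % subset, "images_db_%s_list.txt" % subset):
            if not (Path(data_dir) / name).is_file():
                missing.append(name)
    return missing
```

An empty return value means the data preparation step produced every expected file.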
## Model training and evaluation
...
@@ -9,7 +9,7 @@ As an example of usage please download a small dataset from [here](https://downl
To create LMDB files go to the '$CAFFE_ROOT/python/lmdb_utils/' directory and run the following scripts:
1. Run docker in an interactive session with the dataset directory mounted
```Shell
nvidia-docker run --rm -it --user=$(id -u) -v <DATA_DIR>:/data ttcf bash
```
@@ -22,7 +22,7 @@ python3 $CAFFE_ROOT/python/lmdb_utils/convert_to_voc_format.py /data/annotation_
```Shell
bash $CAFFE_ROOT/python/lmdb_utils/create_cr_lmdb.sh
```
4. Close the docker session with `ctrl+D` and check that the lmdb files are in <DATA_DIR>/lmdb.
###
@@ -32,7 +32,7 @@ On next stage we should train the Person-vehicle-bike crossroad (four class) det
```Shell
cd ./models
python3 train.py --model crossroad \
    --weights person-vehicle-bike-detection-crossroad-0078.caffemodel \
    --data_dir <DATA_DIR> \
    --work_dir <WORK_DIR> \
@@ -44,7 +44,7 @@ python train.py --model crossroad \
To evaluate the quality of the trained Person-vehicle-bike crossroad detection model on your test data you can use the provided scripts.
```Shell
python3 evaluate.py --type cr \
    --dir <WORK_DIR>/crossroad/<EXPERIMENT_NUM> \
    --data_dir <DATA_DIR> \
    --annotation annotation_val_cvt.json \
@@ -54,7 +54,7 @@ python evaluate.py --type cr \
### Export to IR format
```Shell
python3 mo_convert.py --type cr \
    --name crossroad \
    --dir <WORK_DIR>/crossroad/<EXPERIMENT_NUM> \
    --iter <ITERATION_NUM> \
...
@@ -8,7 +8,7 @@ The training procedure can be done using data in LMDB format. To launch training
To create LMDB files go to the '$CAFFE_ROOT/python/lmdb_utils/' directory and run the following scripts:
1. Run docker in an interactive session with the WIDER dataset directory mounted
```Shell
nvidia-docker run --rm -it --user=$(id -u) -v <DATA_DIR>:/data ttcf bash
```
@@ -29,7 +29,7 @@ python3 $CAFFE_ROOT/python/lmdb_utils/xml_to_ssd.py --ssd_path /data --xml_path_
bash $CAFFE_ROOT/python/lmdb_utils/create_wider_lmdb.sh
```
5. Close the docker session with `ctrl+D` and check that the lmdb files are in <DATA_DIR>.
###
@@ -39,7 +39,7 @@ On next stage we should train the Face Detection model. To do this follow next s
```Shell
cd ./models
python3 train.py --model face_detection \  # name of the model
    --weights face-detection-retail-0044.caffemodel \  # initialize weights from the 'init_weights' directory
    --data_dir <DATA_DIR> \                            # path to the directory with the dataset
    --work_dir <WORK_DIR> \                            # directory to collect files from the training process
@@ -51,7 +51,7 @@ python train.py --model face_detection \ # name of mod
To evaluate the quality of the trained Face Detection model on your test data you can use the provided scripts.
```Shell
python3 evaluate.py --type fd \
    --dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> \
    --data_dir <DATA_DIR> \
    --annotation wider_val.xml \
@@ -61,7 +61,7 @@ python evaluate.py --type fd \
### Export to IR format
```Shell
python3 mo_convert.py --name face_detection \
    --dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> \
    --iter <ITERATION_NUM> \
    --data_type FP32
...
@@ -13,7 +13,7 @@ On first stage you should train the SSD-based person (two class) detector. To do
```Shell
cd ./models
python3 train.py --model person_detection \  # name of the model
    --weights person_detection_0022.caffemodel \  # initialize weights from the 'init_weights' directory
    --data_dir <PATH_TO_DATA> \                   # path to the directory with the dataset
    --work_dir <WORK_DIR>                         # directory to collect files from the training process
@@ -28,7 +28,7 @@ Note: to get more accurate model it's recommended to use pre-training of backbon
```Shell
cd ./models
python3 mo_convert.py --name person_detection \
    --dir <WORK_DIR>/person_detection/<EXPERIMENT_NUM> \
    --iter <ITERATION_NUM> \
    --data_type FP32
...
@@ -102,8 +102,8 @@ def main():
        exec_bin, 'run', '--rm',
        '--name', container_name,
        '--user=%s:%s' % (os.getuid(), os.getgid()),
        '-v', '%s:/workspace' % os.path.abspath(args.dir),
        '-v', '%s:/data:ro' % os.path.abspath(args.data_dir),  # Mount the directory with the dataset
        args.image
    ]
...
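The `os.path.abspath` calls above exist because `docker run -v` expects absolute host paths, so relative script arguments must be expanded against the current working directory first. A minimal sketch of the idea (`volume_arg` is an illustrative helper, not part of the repository):

```python
import os

def volume_arg(host_dir, container_dir, read_only=False):
    """Build the value for a docker '-v' flag, expanding the host path.

    os.path.abspath resolves a relative path against the current working
    directory and leaves an already absolute path unchanged.
    """
    mount = "%s:%s" % (os.path.abspath(host_dir), container_dir)
    return mount + ":ro" if read_only else mount
```

With this helper both `volume_arg('./dataset', '/data', read_only=True)` and an already absolute path produce a mount specification that docker accepts.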
@@ -75,7 +75,7 @@ def main():
    docker_command = [
        exec_bin, 'run', '--rm',
        '--user=%s:%s' % (os.getuid(), os.getgid()),
        '-v', '%s:/workspace' % os.path.abspath(args.dir),
        args.image
    ]
...
@@ -73,8 +73,8 @@ def main():
        exec_bin, 'run', '--rm',
        '--user=%s:%s' % (os.getuid(), os.getgid()),
        '--name', container_name,  # Name of the container
        '-v', '%s:/workspace' % os.path.abspath(experiment_dir),  # Mount the work directory
        '-v', '%s:/data:ro' % os.path.abspath(args.data_dir),  # Mount the directory with the dataset
        '-v', '%s:/init_weights:ro' % os.path.abspath('../init_weights'),  # Mount the directory with the init weights
        args.image
    ]
...
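The scripts above hand such argument lists to the docker binary; a minimal sketch of how a list in this style can be assembled (the function name, image tag, and paths are placeholders, not repository code):

```python
import os

def build_docker_command(work_dir, data_dir, image, container_name="ttcf-train"):
    """Assemble a 'docker run' argument list in the style used above:
    every host path is made absolute before it reaches '-v'."""
    return [
        "docker", "run", "--rm",
        "--name", container_name,
        "--user=%s:%s" % (os.getuid(), os.getgid()),
        "-v", "%s:/workspace" % os.path.abspath(work_dir),
        "-v", "%s:/data:ro" % os.path.abspath(data_dir),
        image,
    ]

# The list form avoids shell quoting issues; it would be launched with
# subprocess.check_call(build_docker_command("./exp", "./dataset", "ttcf")).
```

Passing the command as a list rather than a single string keeps arguments with spaces intact and sidesteps shell interpolation entirely.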