# Face detection

## Data preparation

The training procedure uses data in LMDB format. To run training or evaluation on the WIDER Face dataset, download it from [the source](http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/), extract the images and annotations into a `<DATA_DIR>` folder, and use the provided scripts to convert the original annotations to LMDB format.
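Before converting, it may help to verify that `<DATA_DIR>` has the layout the conversion scripts expect. The subdirectory names below are inferred from the conversion commands later in this guide; this is a sanity-check sketch, not part of the provided tooling.

```Shell
# Sanity-check sketch: verify the extracted WIDER Face layout.
# Expected subdirectories are inferred from the conversion commands below.
check_wider_layout() {
  for sub in WIDER_train/images WIDER_val/images wider_face_split; do
    if [ ! -d "$1/$sub" ]; then
      echo "missing: $1/$sub"
      return 1
    fi
  done
  echo "layout ok"
}
```

Run `check_wider_layout <DATA_DIR>` before starting the conversion.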

### Create LMDB files

To create the LMDB files, go to the `$CAFFE_ROOT/python/lmdb_utils/` directory and run the following scripts:

1. Run Docker in an interactive session with the WIDER dataset directory mounted:
```Shell
nvidia-docker run --rm -it --user=$(id -u) -v <DATA_DIR>:/data ttcf bash
```

2. Convert the original annotations to XML format for both the train and val subsets:
```Shell
python3 $CAFFE_ROOT/python/lmdb_utils/wider_to_xml.py /data /data/WIDER_train/images/ /data/wider_face_split/wider_face_train_bbx_gt.txt train
python3 $CAFFE_ROOT/python/lmdb_utils/wider_to_xml.py /data /data/WIDER_val/images/ /data/wider_face_split/wider_face_val_bbx_gt.txt val
```

3. Convert the XML annotations to a set of XML files, one per image:
```Shell
python3 $CAFFE_ROOT/python/lmdb_utils/xml_to_ssd.py --ssd_path /data --xml_path_train /data/wider_train.xml --xml_path_val /data/wider_val.xml
```

4. Run the bash script to create the LMDB files:
```Shell
bash $CAFFE_ROOT/python/lmdb_utils/create_wider_lmdb.sh
```

5. Close the Docker session with `Ctrl+D` and check that the LMDB files are present in `<DATA_DIR>`.
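The exact names of the LMDB directories produced by `create_wider_lmdb.sh` are not documented here, so the final check in step 5 can be done with a loose sketch that just looks for anything matching `*lmdb*`:

```Shell
# Loose sketch: the LMDB directory names come from create_wider_lmdb.sh
# and are not documented here, so look for anything matching *lmdb*.
check_lmdb_output() {
  if find "$1" -maxdepth 2 -iname '*lmdb*' | grep -qi lmdb; then
    echo "LMDB output present"
  else
    echo "LMDB output missing"
  fi
}
```

For example, `check_lmdb_output <DATA_DIR>` should report the output as present once the conversion has finished.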


### Face Detection training
The next stage is to train the Face Detection model. To do so, run:
```Shell
cd ./models
# --model:    name of the model
# --weights:  weights to initialize training from (taken from the
#             'init_weights' directory)
# --data_dir: path to the directory with the dataset
# --work_dir: directory to collect files produced during training
# --gpu:      ID of the GPU to train on
python3 train.py --model face_detection \
                 --weights face-detection-retail-0044.caffemodel \
                 --data_dir <DATA_DIR> \
                 --work_dir <WORK_DIR> \
                 --gpu <GPU_ID>
```
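For long runs it can be handy to record the exact command line that was launched. A hypothetical convenience wrapper (not part of the provided tooling) prints the fully expanded invocation so it can be logged before running; it assumes `DATA_DIR`, `WORK_DIR`, and `GPU_ID` are set in the environment, with `GPU_ID` defaulting to 0 as an assumption:

```Shell
# Hypothetical helper: print the fully expanded training command so the
# exact flags can be logged. Assumes DATA_DIR, WORK_DIR and GPU_ID are
# set in the environment; the GPU_ID fallback to 0 is an assumption.
train_cmd() {
  echo python3 train.py --model face_detection \
       --weights face-detection-retail-0044.caffemodel \
       --data_dir "$DATA_DIR" --work_dir "$WORK_DIR" --gpu "${GPU_ID:-0}"
}
train_cmd               # inspect the expanded command
# eval "$(train_cmd)"   # then run it for real
```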

### Face Detection model evaluation
To evaluate the quality of the trained Face Detection model on your test data, you can use the provided scripts:

```Shell
python3 evaluate.py --type fd \
    --dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> \
    --data_dir <DATA_DIR> \
    --annotation wider_val.xml \
    --iter <ITERATION_NUM>
```
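Choosing the best snapshot usually means evaluating several iterations. A small hypothetical helper (not part of the provided scripts) runs a given command once per iteration number:

```Shell
# Hypothetical helper: run the given command once per snapshot iteration.
# Word-splitting on $cmd is intentional for this simple sketch.
for_each_iter() {
  cmd="$1"; shift
  for it in "$@"; do
    $cmd "$it"
  done
}
# Example (iteration numbers are placeholders for your own snapshots):
# for_each_iter "python3 evaluate.py --type fd --dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> --data_dir <DATA_DIR> --annotation wider_val.xml --iter" 10000 20000
```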

### Export to IR format

To export the trained model to IR format, run:
```Shell
python3 mo_convert.py --name face_detection \
    --dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> \
    --iter <ITERATION_NUM> \
    --data_type FP32
```
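A successful conversion yields an IR pair: an `.xml` topology description plus a `.bin` weights file. A quick sketch to confirm both were produced; the output directory is an assumption, so point it at wherever `mo_convert.py` writes its results:

```Shell
# Sketch: check that a directory contains a complete IR pair (.xml + .bin).
# The output location of mo_convert.py is an assumption; adjust as needed.
check_ir_pair() {
  if ls "$1"/*.xml >/dev/null 2>&1 && ls "$1"/*.bin >/dev/null 2>&1; then
    echo "IR pair present"
  else
    echo "IR pair missing"
  fi
}
```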

### Face Detection demo
You can use [this demo](https://github.com/opencv/open_model_zoo/tree/master/demos/interactive_face_detection_demo) to see how the resulting model performs.