Commit c53882a0 authored by Guanghua Yu, committed by qingqing01

Add face detection model link in README and support eval by single-scale (#3665)

Parent 89742766
@@ -39,7 +39,7 @@ Loads `wider_face` type dataset with directory structures like this:
```
- Download dataset manually:
-On the other hand, to download the WIDER FACE dataset, run the following commands:
+To download the WIDER FACE dataset, run the following commands:
```
cd dataset/wider_face && ./download.sh
```
@@ -84,13 +84,13 @@ optimized network structure.
#### mAP in WIDER FACE
-| Architecture | Type | Size | Img/gpu | Lr schd | Easy Set | Medium Set | Hard Set |
-|:------------:|:--------:|:----:|:-------:|:-------:|:--------:|:----------:|:--------:|
-| BlazeFace | Original | 640 | 8 | 32w | **0.915** | **0.892** | **0.797** |
-| BlazeFace | Lite | 640 | 8 | 32w | 0.909 | 0.885 | 0.781 |
-| BlazeFace | NAS | 640 | 8 | 32w | 0.837 | 0.807 | 0.658 |
-| FaceBoxes | Original | 640 | 8 | 32w | 0.875 | 0.848 | 0.568 |
-| FaceBoxes | Lite | 640 | 8 | 32w | 0.898 | 0.872 | 0.752 |
+| Architecture | Type | Size | Img/gpu | Lr schd | Easy Set | Medium Set | Hard Set | Download |
+|:------------:|:--------:|:----:|:-------:|:-------:|:---------:|:----------:|:---------:|:--------:|
+| BlazeFace | Original | 640 | 8 | 32w | **0.915** | **0.892** | **0.797** | [model](https://paddlemodels.bj.bcebos.com/object_detection/blazeface_original.tar) |
+| BlazeFace | Lite | 640 | 8 | 32w | 0.909 | 0.885 | 0.781 | [model](https://paddlemodels.bj.bcebos.com/object_detection/blazeface_lite.tar) |
+| BlazeFace | NAS | 640 | 8 | 32w | 0.837 | 0.807 | 0.658 | [model](https://paddlemodels.bj.bcebos.com/object_detection/blazeface_nas.tar) |
+| FaceBoxes | Original | 640 | 8 | 32w | 0.875 | 0.848 | 0.568 | [model](https://paddlemodels.bj.bcebos.com/object_detection/faceboxes_original.tar) |
+| FaceBoxes | Lite | 640 | 8 | 32w | 0.898 | 0.872 | 0.752 | [model](https://paddlemodels.bj.bcebos.com/object_detection/faceboxes_lite.tar) |

**NOTES:**
- Get mAP in `Easy/Medium/Hard Set` by multi-scale evaluation in `tools/face_eval.py`.
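The `Download` column added here links to pretrained model archives. A minimal, illustrative sketch for fetching and unpacking one of them (the URL is copied from the table above; the extraction location and any further wiring into evaluation are assumptions, not part of this commit):
```
# Fetch the BlazeFace (Original) weights listed in the table and unpack them locally
wget https://paddlemodels.bj.bcebos.com/object_detection/blazeface_original.tar
tar -xf blazeface_original.tar
```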
@@ -140,7 +140,11 @@ For details can refer to [Evaluation](#Evaluate-on-the-FDDB).
## Get Started
`Training` and `Inference` please refer to [GETTING_STARTED.md](../../docs/GETTING_STARTED.md)
-- **NOTES:** Currently we do not support evaluation in training.
+- **NOTES:**
+  - `BlazeFace` and `FaceBoxes` are trained on 4 GPUs with `batch_size=8` per GPU (total batch size of 32)
+  for 320,000 iterations. (If your GPU count is not 4, please refer to the training-parameter adjustment rules
+  in the [calculation rules](../../docs/GETTING_STARTED.md#faq) table; a launch sketch is given below.)
+  - Currently we do not support evaluation during training.
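A minimal launch sketch for the 4-GPU setting above. The config path mirrors the evaluation example in this README; `tools/train.py` and the `CUDA_VISIBLE_DEVICES` variable follow the usual PaddleDetection workflow described in [GETTING_STARTED.md](../../docs/GETTING_STARTED.md) and are not part of this change:
```
# Hedged sketch: train BlazeFace on 4 GPUs (adjust GPU ids and config to your setup)
export CUDA_VISIBLE_DEVICES=0,1,2,3
python -u tools/train.py -c configs/face_detection/blazeface.yml
```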
### Evaluation
```
@@ -152,9 +156,13 @@ python tools/face_eval.py -c configs/face_detection/blazeface.yml
- `-d` or `--dataset_dir`: Dataset path, same as dataset_dir of configs. Such as: `-d dataset/wider_face`.
- `-f` or `--output_eval`: Evaluation file directory, default is `output/pred`.
- `-e` or `--eval_mode`: Evaluation mode, include `widerface` and `fddb`, default is `widerface`.
+- `--multi_scale`: If this flag is added to the command, `multi_scale` evaluation is selected.
+Otherwise the default `single-scale` evaluation is used (example commands are sketched below).

After the evaluation is completed, the test result in txt format will be generated in `output/pred`,
-and then mAP will be calculated according to different data sets:
+and then mAP will be calculated according to the different datasets. If you set `--eval_mode=widerface`,
+it will [Evaluate on the WIDER FACE](#Evaluate-on-the-WIDER-FACE). If you set `--eval_mode=fddb`,
+it will [Evaluate on the FDDB](#Evaluate-on-the-FDDB).
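For concreteness, hedged example commands for the evaluation modes described above; the config and dataset paths echo the flags documented in this section and may need adapting to your setup:
```
# Single-scale evaluation on WIDER FACE (the new default)
python tools/face_eval.py -c configs/face_detection/blazeface.yml -d dataset/wider_face

# Multi-scale evaluation, as used for the mAP table above
python tools/face_eval.py -c configs/face_detection/blazeface.yml -d dataset/wider_face --multi_scale

# Evaluation on FDDB instead of WIDER FACE
python tools/face_eval.py -c configs/face_detection/blazeface.yml -e fddb
```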
#### Evaluate on the WIDER FACE
- Download the official evaluation script to evaluate the AP metrics:
@@ -175,6 +183,7 @@ matlab -nodesktop -nosplash -nojvm -r "run wider_eval.m;quit;"
```
#### Evaluate on the FDDB
+For details on the [FDDB dataset](http://vis-www.cs.umass.edu/fddb/), refer to FDDB's official website.
- Download the official dataset and evaluation script to evaluate the ROC metrics:
```
#external link to the Faces in the Wild data set
......
@@ -56,7 +56,8 @@ def face_eval_run(exe,
                  img_root_dir,
                  gt_file,
                  pred_dir='output/pred',
-                 eval_mode='widerface'):
+                 eval_mode='widerface',
+                 multi_scale=False):
    # load ground truth files
    with open(gt_file, 'r') as f:
        gt_lines = f.readlines()
@@ -76,8 +77,8 @@ def face_eval_run(exe,
        if eval_mode == 'fddb':
            image_path += '.jpg'
        image = Image.open(image_path).convert('RGB')
+        if multi_scale:
            shrink, max_shrink = get_shrink(image.size[1], image.size[0])
            det0 = detect_face(exe, compile_program, fetches, image, shrink)
            det1 = flip_test(exe, compile_program, fetches, image, shrink)
            [det2, det3] = multi_scale_test(exe, compile_program, fetches, image,
@@ -86,6 +87,8 @@ def face_eval_run(exe,
                                            max_shrink)
            det = np.row_stack((det0, det1, det2, det3, det4))
            dets = bbox_vote(det)
+        else:
+            dets = detect_face(exe, compile_program, fetches, image, 1)
        if eval_mode == 'widerface':
            save_widerface_bboxes(image_path, dets, pred_dir)
        else:
@@ -261,7 +264,8 @@ def main():
        img_root_dir,
        gt_file,
        pred_dir=pred_dir,
-        eval_mode=FLAGS.eval_mode)
+        eval_mode=FLAGS.eval_mode,
+        multi_scale=FLAGS.multi_scale)
if __name__ == '__main__':
@@ -285,5 +289,10 @@ if __name__ == '__main__':
        type=str,
        help="Evaluation mode, include `widerface` and `fddb`, default is `widerface`."
    )
+    parser.add_argument(
+        "--multi_scale",
+        action='store_true',
+        default=False,
+        help="Enable multi-scale evaluation; by default, single-scale evaluation is used.")
    FLAGS = parser.parse_args()
    main()