Commit 4f7bf849 authored by Liufang Sang, committed by whs

[PaddleSlim] rm yolov3 detection config file and fix doc details (#3567)

Parent e9b5c69d
@@ -37,24 +37,73 @@
- config: the detection library's configuration file, which specifies training hyperparameters, dataset information, etc.
- slim_file: the PaddleSlim configuration file; see the [configuration file description](#配置文件说明)
You can run this example with the following commands.
step1: Set the GPU device
```
export CUDA_VISIBLE_DEVICES=0
```
step2: Start training

Train on 8 cards using the configuration file provided by PaddleDetection:
```
python compress.py \
-s yolov3_mobilenet_v1_slim.yaml \
-c ../../configs/yolov3_mobilenet_v1_voc.yml \
-d "../../dataset/voc" \
-o max_iters=258 \
LearningRate.base_lr=0.0001 \
LearningRate.schedulers='[!PiecewiseDecay {gamma: 0.1, milestones: [258, 516]}]' \
pretrain_weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_voc.tar \
YoloTrainFeed.batch_size=64
```
>The `max_iters` option is overridden on the command line because training in PaddleDetection iterates in units of `batch` and has no notion of an `epoch`, while PaddleSlim needs to know which `epoch` training has reached. Therefore `max_iters` must be set to the number of `batch`es in one `epoch`.
To change the number of training cards, adjust the following parameters in the configuration file `yolov3_mobilenet_v1_voc.yml`:
- **max_iters:** the number of batches in one `epoch`; set it to `total_num / batch_size`, where `total_num` is the total number of training samples and `batch_size` is the total batch size across all cards.
- **YoloTrainFeed.batch_size:** when using DataLoader, the batch size on a single card; when using an ordinary reader, the total batch size across all cards. The batch size is limited by GPU memory.
- **LearningRate.base_lr:** adjust `base_lr` according to the total multi-card `batch_size`; the two are positively correlated and can simply be scaled proportionally.
- **LearningRate.schedulers.PiecewiseDecay.milestones:** adjust according to the change in batch size.
- **LearningRate.schedulers.PiecewiseDecay.LinearWarmup.steps:** adjust according to the change in batch size.
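To make the arithmetic above concrete, here is a small sketch. It rests on stated assumptions, not on the official tooling: `slim_overrides` is a hypothetical helper, `total_num` is back-derived as 258 × 64 = 16512 from the 8-card settings above rather than read from the dataset, the learning rate is scaled linearly with the total batch size, and the decay milestones are simply placed at one and two epochs:

```python
# Sketch only: derive command-line overrides for a given total batch size.
# Assumptions: total_num is back-derived from the 8-card reference settings
# (max_iters=258 at total batch 64), base_lr scales linearly with the total
# batch size, and decay milestones sit at 1 and 2 epochs.

def slim_overrides(total_num, total_batch_size, ref_batch_size=64, ref_lr=0.0001):
    max_iters = total_num // total_batch_size             # batches per epoch
    base_lr = ref_lr * total_batch_size / ref_batch_size  # proportional LR scaling
    milestones = [max_iters, 2 * max_iters]               # decay at epochs 1 and 2
    return max_iters, base_lr, milestones

total_num = 258 * 64  # 16512, implied by the 8-card settings above

print(slim_overrides(total_num, 64))  # (258, 0.0001, [258, 516])  - 8 cards, total batch 64
print(slim_overrides(total_num, 32))  # (516, 5e-05, [516, 1032])  - total batch halved
```

Halving the total batch size doubles the batches per epoch and halves the learning rate, which matches the direction of the 2-card adjustments shown below.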
The following is a 4-card training example, overriding parameters in `yolov3_mobilenet_v1_voc.yml` on the command line:
```
python compress.py \
-s yolov3_mobilenet_v1_slim.yaml \
-c ../../configs/yolov3_mobilenet_v1_voc.yml \
-d "../../dataset/voc" \
-o max_iters=258 \
LearningRate.base_lr=0.0001 \
LearningRate.schedulers='[!PiecewiseDecay {gamma: 0.1, milestones: [258, 516]}]' \
pretrain_weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_voc.tar \
YoloTrainFeed.batch_size=64
```
The following is a 2-card training example. Constrained by GPU memory, the per-card `batch_size` stays the same while the total `batch_size` is halved; `base_lr` is reduced accordingly, the number of batches in one epoch increases, and the learning-rate schedule must be adjusted to match:
```
python compress.py \
-s yolov3_mobilenet_v1_slim.yaml \
-c ../../configs/yolov3_mobilenet_v1_voc.yml \
-d "../../dataset/voc" \
-o max_iters=516 \
LearningRate.base_lr=0.00005 \
LearningRate.schedulers='[!PiecewiseDecay {gamma: 0.1, milestones: [516, 1012]}]' \
pretrain_weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_voc.tar \
YoloTrainFeed.batch_size=32
```
Run `python compress.py --help` to see the configurable parameters.
Run `python ../../tools/configure.py ${option_name} help` to see how to override parameters in the configuration file `yolov3_mobilenet_v1_voc.yml` from the command line.
### Model structure during training
This section comes from the [quantization low-level API introduction](https://github.com/PaddlePaddle/models/tree/develop/PaddleSlim/quant_low_level_api#1-%E9%87%8F%E5%8C%96%E8%AE%AD%E7%BB%83low-level-apis%E4%BB%8B%E7%BB%8D).
@@ -128,7 +177,8 @@ python ../eval.py \
--model_path ${checkpoint_path}/${epoch_id}/eval_model/ \
--model_name __model__ \
--params_name __params__ \
-c ../../configs/yolov3_mobilenet_v1_voc.yml \
-d "../../dataset/voc"
```
After evaluation, select the model from the epoch with the best results and use the script <a href='./freeze.py'>slim/quantization/freeze.py</a> to convert it into the three model types introduced above: the FP32 model, the int8 model, and the mobile model. The parameters to configure are:
@@ -153,7 +203,8 @@ python ../eval.py \
--model_path ${float_model_path} \
--model_name model \
--params_name weights \
-c ../../configs/yolov3_mobilenet_v1_voc.yml \
-d "../../dataset/voc"
```
## Inference
@@ -169,7 +220,7 @@ python ../infer.py \
--model_path ${save_path}/float \
--model_name model \
--params_name weights \
-c ../../configs/yolov3_mobilenet_v1_voc.yml \
--infer_dir ../../demo
```
......
@@ -46,7 +46,7 @@ from ppdet.data.data_feed import create_reader
from ppdet.utils.eval_utils import parse_fetches, eval_results
from ppdet.utils.stats import TrainingStats
from ppdet.utils.cli import ArgsParser, print_total_cfg
from ppdet.utils.check import check_gpu, check_version
import ppdet.utils.checkpoint as checkpoint
from ppdet.modeling.model_input import create_feed
@@ -77,7 +77,7 @@ def eval_run(exe, compile_program, reader, keys, values, cls, test_feed):
'im_size': data['im_size']}
outs = exe.run(compile_program,
feed=feed_data,
fetch_list=[values[0]],
return_numpy=False)
outs.append(data['gt_box'])
outs.append(data['gt_label'])
@@ -118,8 +118,8 @@ def main():
# check if set use_gpu=True in paddlepaddle cpu version
check_gpu(cfg.use_gpu)
# print_total_cfg(cfg)
#check_version()
if cfg.use_gpu:
devices_num = fluid.core.get_cuda_device_count()
else:
@@ -156,7 +156,7 @@ def main():
optimizer.minimize(loss)
train_reader = create_reader(train_feed, cfg.max_iters,
FLAGS.dataset_dir)
train_loader.set_sample_list_generator(train_reader, place)
@@ -220,7 +220,6 @@ def main():
best_box_ap_list.append(box_ap_stats[0])
elif box_ap_stats[0] > best_box_ap_list[0]:
best_box_ap_list[0] = box_ap_stats[0]
logger.info("Best test box ap: {}".format(
best_box_ap_list[0]))
return best_box_ap_list[0]
......
architecture: YOLOv3
train_feed: YoloTrainFeed
eval_feed: YoloEvalFeed
test_feed: YoloTestFeed
use_gpu: true
max_iters: 1000
log_smooth_window: 20
save_dir: output
snapshot_iter: 2000
metric: VOC
map_type: 11point
pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_voc.tar
weights: output/yolov3_mobilenet_v1_voc/model_final
num_classes: 20
YOLOv3:
  backbone: MobileNet
  yolo_head: YOLOv3Head

MobileNet:
  norm_type: sync_bn
  norm_decay: 0.
  conv_group_scale: 1
  with_extra_blocks: false

YOLOv3Head:
  anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
  anchors: [[10, 13], [16, 30], [33, 23],
            [30, 61], [62, 45], [59, 119],
            [116, 90], [156, 198], [373, 326]]
  norm_decay: 0.
  ignore_thresh: 0.7
  label_smooth: false
  nms:
    background_label: -1
    keep_top_k: 100
    nms_threshold: 0.45
    nms_top_k: 1000
    normalized: false
    score_threshold: 0.01

LearningRate:
  base_lr: 0.0001
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones:
    - 1000
    - 2000
  #- !LinearWarmup
  #  start_factor: 0.
  #  steps: 1000

OptimizerBuilder:
  optimizer:
    momentum: 0.9
    type: Momentum
  regularizer:
    factor: 0.0005
    type: L2

YoloTrainFeed:
  batch_size: 8
  dataset:
    dataset_dir: ../../dataset/voc
    annotation: VOCdevkit/VOC_all/ImageSets/Main/train.txt
    image_dir: VOCdevkit/VOC_all/JPEGImages
    use_default_label: true
  num_workers: 8
  bufsize: 128
  use_process: true
  mixup_epoch: 250

YoloEvalFeed:
  batch_size: 8
  image_shape: [3, 608, 608]
  dataset:
    dataset_dir: ../../dataset/voc
    annotation: VOCdevkit/VOC_all/ImageSets/Main/val.txt
    image_dir: VOCdevkit/VOC_all/JPEGImages
    use_default_label: true

YoloTestFeed:
  batch_size: 1
  image_shape: [3, 608, 608]
  dataset:
    use_default_label: true
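The `PiecewiseDecay` entry in the config above (gamma 0.1, milestones 1000 and 2000) can be illustrated with a minimal sketch of the decay rule. This is an illustration only, not PaddleDetection's actual scheduler implementation, and `piecewise_lr` is a hypothetical name:

```python
# Minimal illustration (not the real scheduler): the learning rate is
# multiplied by gamma each time the iteration count passes a milestone.
def piecewise_lr(iteration, base_lr=0.0001, gamma=0.1, milestones=(1000, 2000)):
    lr = base_lr
    for m in milestones:
        if iteration >= m:
            lr *= gamma
    return lr

print(piecewise_lr(500))   # base_lr, before the first milestone
print(piecewise_lr(1500))  # decayed by 10x after iteration 1000
print(piecewise_lr(2500))  # decayed by 100x after iteration 2000
```

This is why the training examples above rescale the milestones together with `max_iters`: the decay points are expressed in iterations, so they shift when the batches per epoch change.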