Unverified · Commit 78e6a016 authored by whs, committed by GitHub

Fix doc of pruning yolov3 demo in PaddleDetection. (#3561)

Parent e09a4af0
@@ -7,7 +7,8 @@
This demo uses the [convolution channel pruning strategy](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/docs/tutorial.md#2-%E5%8D%B7%E7%A7%AF%E6%A0%B8%E5%89%AA%E8%A3%81%E5%8E%9F%E7%90%86) provided by PaddleSlim to compress models in the detection library.
Before reading this demo, it is recommended that you first be familiar with the following:
- <a href="../..README_cn.md">Regular training in the detection library</a>
- <a href="../../README_cn.md">Regular training in the detection library</a>
- [Data preparation for detection models](https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/PaddleDetection/docs/INSTALL_cn.md#%E6%95%B0%E6%8D%AE%E9%9B%86)
- [PaddleSlim usage documentation](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/docs/usage.md)
@@ -109,6 +110,7 @@ python compress.py \
-s yolov3_mobilenet_v1_slim.yaml \
-c ../../configs/yolov3_mobilenet_v1_voc.yml \
-o max_iters=258 \
YoloTrainFeed.batch_size=64 \
-d "../../dataset/voc"
```
@@ -117,7 +119,7 @@ python compress.py \
If you want to change the number of training cards (GPUs), adjust the following parameters in the config file `yolov3_mobilenet_v1_voc.yml` (a worked sketch follows this list):
- **max_iters:** the number of batches in one `epoch`; set it to `total_num / batch_size`, where `total_num` is the total number of training samples and `batch_size` is the total batch size across all cards.
- **YoloTrainFeed.batch_size:** the batch size on a single card, limited by GPU memory.
- **YoloTrainFeed.batch_size:** when a DataLoader is used, this is the batch size on a single card; when a plain reader is used, it is the total `batch_size` across all cards. `batch_size` is limited by GPU memory.
- **LearningRate.base_lr:** adjust `base_lr` according to the total `batch_size` across all cards; the two are positively correlated, so a simple proportional adjustment works.
- **LearningRate.schedulers.PiecewiseDecay.milestones:** adjust according to the change in batch size.
- **LearningRate.schedulers.PiecewiseDecay.LinearWarmup.steps:** adjust according to the change in batch size.
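
As a worked illustration of the rules above, here is a minimal standalone Python sketch that derives override values from a sample count and batch size. It is not part of the repository; the reference values used in it (16551 VOC 07+12 trainval samples, base_lr 0.001, milestones `[55000, 62000]`, warmup steps 1000, total batch size 64) are assumptions for illustration, not values read from the actual config file.

```python
# Hypothetical helper (illustration only): derive config overrides when the
# total batch size changes, following the rules listed above.

def scaled_overrides(total_num, new_batch, ref_batch, ref_lr, ref_milestones, ref_warmup):
    """Return adjusted max_iters, base_lr, milestones and warmup steps."""
    scale = ref_batch / new_batch                 # > 1 when the total batch shrinks
    return {
        "max_iters": total_num // new_batch,      # batches per epoch
        "base_lr": ref_lr / scale,                # lr scales with the total batch size
        "milestones": [int(m * scale) for m in ref_milestones],
        "warmup_steps": int(ref_warmup * scale),
    }

if __name__ == "__main__":
    # Assumed: 16551 VOC 07+12 trainval images, reference total batch size 64.
    print(scaled_overrides(16551, 64, 64, 0.001, [55000, 62000], 1000))
    # max_iters -> 258, matching the batch_size=64 examples in this document.
    print(scaled_overrides(16551, 32, 64, 0.001, [55000, 62000], 1000))
    # max_iters -> 517 (the document uses 516) and milestones/warmup doubled,
    # in line with the example further below that uses YoloTrainFeed.batch_size=32.
```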
@@ -130,7 +132,7 @@ python compress.py \
-s yolov3_mobilenet_v1_slim.yaml \
-c ../../configs/yolov3_mobilenet_v1_voc.yml \
-o max_iters=258 \
-o YoloTrainFeed.batch_size = 16 \
YoloTrainFeed.batch_size=64 \
-d "../../dataset/voc"
```
@@ -140,9 +142,9 @@ python compress.py \
-s yolov3_mobilenet_v1_slim.yaml \
-c ../../configs/yolov3_mobilenet_v1_voc.yml \
-o max_iters=516 \
-o LeaningRate.base_lr=0.005 \ # 0.001 /2
-o YoloTrainFeed.batch_size = 16 \
-o LearningRate.schedulers='[!PiecewiseDecay {gamma: 0.1, milestones: [110000, 124000]}, !LinearWarmup {start_factor: 0., steps: 2000}]' \
LearningRate.base_lr=0.005 \
YoloTrainFeed.batch_size=32 \
LearningRate.schedulers='[!PiecewiseDecay {gamma: 0.1, milestones: [110000, 124000]}, !LinearWarmup {start_factor: 0., steps: 2000}]' \
-d "../../dataset/voc"
```
@@ -189,11 +191,9 @@ python compress.py \
### MobileNetV1-YOLO-V3
| FLOPS |top1_acc/top5_acc| model_size |Paddle Fluid inference time(ms)| Paddle Lite inference time(ms)|
| FLOPS |Box AP| model_size |Paddle Fluid inference time(ms)| Paddle Lite inference time(ms)|
|---|---|---|---|---|
|baseline|- |- |- |-|
|-10%|- |- |- |-|
|-30%|- |- |- |-|
|-50%|- |- |- |-|
|baseline|76.2 |93M |- |-|
|-50%|69.48 |51M |- |-|
## FAQ
@@ -148,7 +148,7 @@ def main():
optimizer.minimize(loss)
train_reader = create_reader(train_feed, cfg.max_iters * devices_num,
train_reader = create_reader(train_feed, cfg.max_iters,
FLAGS.dataset_dir)
train_loader.set_sample_list_generator(train_reader, place)
@@ -207,7 +206,6 @@ def main():
best_box_ap_list.append(box_ap_stats[0])
elif box_ap_stats[0] > best_box_ap_list[0]:
best_box_ap_list[0] = box_ap_stats[0]
checkpoint.save(exe, train_prog, os.path.join(save_dir,"best_model"))
logger.info("Best test box ap: {}".format(
best_box_ap_list[0]))
return best_box_ap_list[0]