Commit c320457d
Authored on Sep 23, 2020 by Double_V; committed via GitHub on Sep 23, 2020

Merge pull request #810 from yukavio/develop

update bash of slim pruning

Parents: 115b5175, 2d07a0bc

3 changed files with 8 additions and 7 deletions (+8 −7):

- deploy/slim/prune/README.md (+3 −3)
- deploy/slim/prune/README_en.md (+3 −3)
- deploy/slim/prune/pruning_and_finetune.py (+2 −1)
deploy/slim/prune/README.md

````diff
@@ -51,14 +51,14 @@ python setup.py install
 Enter the PaddleOCR root directory and run the following command to perform sensitivity-analysis training on the model:
 
 ```bash
-python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights="your trained model" Global.test_batch_size_per_card=1
+python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db_v1.1.yml -o Global.pretrain_weights="your trained model" Global.test_batch_size_per_card=1
 ```
 
 ### 4. Model pruning training
 
 The pruning ratio of each network layer is decided by the sensitivity file obtained earlier. In the implementation, to retain as many of the low-level features extracted from the image as possible, we skip the 4 convolutional layers of the backbone closest to the input. Likewise, to reduce the performance loss caused by pruning, we used the sensitivity table from the earlier sensitivity analysis to hand-pick some [network layers](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/prune/pruning_and_finetune.py#L41) with little redundancy that are sensitive to pruning (i.e., layers that suffer a large performance loss even at a low pruning ratio), and these layers are also skipped during the subsequent pruning. Finetuning after pruning follows the original training strategy of the OCR detection model.
 
 ```bash
-python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
+python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db_v1.1.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
 ```
 
 By comparison, you can see that the model saved after pruning training is smaller.
@@ -66,7 +66,7 @@ python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -
 After obtaining the model saved by pruning training, we can export it as an inference_model:
 
 ```bash
-python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
+python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db_v1.1.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
 ```
 
 For prediction and deployment of the inference model, see:
````
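The layer-selection rule described in the pruning section above can be sketched in plain Python. The helper below is an illustrative reconstruction, not the script itself: it drops the protected layers (the hand-picked sensitive ones plus the first four backbone convolutions, `conv1`..`conv4` as in pruning_and_finetune.py) from a toy sensitivity table, leaving only the layers eligible for pruning. The table values, the `conv10` skip entry, and the exact name-matching rule are assumptions for the sketch.

```python
def filter_sensitivities(sen, skip_list, back_bone_list):
    """Drop protected layers from the sensitivity table so that only
    prunable layers remain. Mirrors the skip logic described above;
    the matching rule in the real script may differ."""
    for name in skip_list:
        if name in sen:          # tolerate names absent from the table
            sen.pop(name)
    for name in back_bone_list:
        for key in list(sen.keys()):
            if key == name:      # assumed exact-name match
                sen.pop(key)
    return sen

# Toy sensitivity table: layer name -> {pruning_ratio: accuracy_loss}.
sen = {
    "conv2": {0.1: 0.02},   # one of the 4 backbone convs near the input
    "conv7": {0.1: 0.01},   # redundant layer, a good pruning candidate
    "conv10": {0.1: 0.30},  # hypothetical hand-picked sensitive layer
}
back_bone_list = ['conv' + str(x) for x in range(1, 5)]  # as in the script
skip_list = ["conv10"]

remaining = filter_sensitivities(sen, skip_list, back_bone_list)
print(sorted(remaining))  # only 'conv7' is left for pruning
```

In the real script the remaining table is what the pruning ratios are computed from, so popping a layer is what protects it from being pruned.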
deploy/slim/prune/README_en.md

````diff
@@ -55,7 +55,7 @@ Enter the PaddleOCR root directory,perform sensitivity analysis on the model w
 ```bash
-python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
+python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db_v1.1.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
 ```
@@ -67,7 +67,7 @@ python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Gl
 ```bash
-python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
+python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db_v1.1.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
 ```
@@ -76,7 +76,7 @@ python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -
 We can export the pruned model as inference_model for deployment:
 
 ```bash
-python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
+python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db_v1.1.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
 ```
 
 Reference for prediction and deployment of inference model:
````
deploy/slim/prune/pruning_and_finetune.py

```diff
@@ -92,7 +92,8 @@ def main():
     sen = load_sensitivities("sensitivities_0.data")
     for i in skip_list:
-        sen.pop(i)
+        if i in sen.keys():
+            sen.pop(i)
     back_bone_list = ['conv' + str(x) for x in range(1, 5)]
     for i in back_bone_list:
         for key in list(sen.keys()):
```
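The guard added in the hunk above matters because a one-argument `dict.pop` raises `KeyError` when a name in `skip_list` is missing from the loaded sensitivity table, aborting the script. A minimal reproduction with a toy table (the layer names here are made up):

```python
# Before this commit: popping a layer name absent from the sensitivity
# table raised KeyError and stopped the pruning script.
sen = {"conv7": 0.01}
try:
    sen.pop("conv10")          # unguarded pop, as in the old code
    raised = False
except KeyError:
    raised = True              # the failure mode the commit fixes

# After this commit: check membership first, as the new code does.
sen = {"conv7": 0.01, "conv9": 0.02}
for i in ["conv9", "conv10"]:
    if i in sen.keys():
        sen.pop(i)

print(raised, sorted(sen))  # True ['conv7']
```

An equivalent idiom would be `sen.pop(i, None)`, which never raises; the commit's explicit membership check instead keeps the original `pop` call intact.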