PaddlePaddle / PaddleSlim, commit 191face9 (unverified)

Update Imagenet Demo (#1216)

Authored by Chang Xu on Jul 01, 2022; committed via GitHub on Jul 01, 2022.
Parent commit: 91fcce52
Showing 5 changed files with 12 additions and 306 deletions (+12 -306):

- demo/auto_compression/image_classification/README.md (+7 -3)
- demo/auto_compression/image_classification/configs/MobileNetV1/qat_dis.yaml (+2 -2)
- demo/auto_compression/image_classification/postprocess.py (+0 -189)
- demo/auto_compression/image_classification/preprocess.py (+0 -110)
- demo/auto_compression/image_classification/run.py (+3 -2)
demo/auto_compression/image_classification/README.md
@@ -91,7 +91,7 @@ tar -xf MobileNetV1_infer.tar
#### 3.4 Run auto compression and export the model

The distillation + quantization auto-compression demo is launched with the run.py script, which calls the `paddleslim.auto_compression.AutoCompression` interface to run quantization-aware training with distillation on the model. Fill in the model path, dataset path, and the distillation, quantization, and training parameters in the config file; once that is done, auto compression can be started.
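The config file drives everything run.py does, so it helps to peek at its sections before launching. A minimal sketch, assuming the YAML layout used by this demo (the `Distillation` section name is an assumption; `Global`, `TrainConfig`, and `Quantization` appear elsewhere in this commit):

```python
# Inspect the auto-compression config before launching run.py.
import yaml

with open("configs/MobileNetV1/qat_dis.yaml") as f:  # path from this demo
    all_config = yaml.safe_load(f)

# Each top-level section feeds one part of AutoCompression:
# Global -> model/dataset paths, Distillation -> distillation losses (assumed name),
# Quantization -> quantization strategy, TrainConfig -> epochs / learning rate.
for section in ("Global", "Distillation", "Quantization", "TrainConfig"):
    print(section, all_config.get(section))
```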
**Single-GPU launch**
@@ -110,11 +110,13 @@ python -m paddle.distributed.launch run.py --save_dir='./save_quant_mobilev1/' -
```
Multi-GPU training (distributed training) splits the training job across multiple trainer nodes, each handling data loading, forward computation, and backward gradient computation, and uploading its gradients to the server node. After receiving the gradients from all trainers, the server aggregates them, updates the parameters, and sends the updated parameters back to the trainers for the next round of training. One multi-GPU iteration trains on `batch size * num gpus` samples: with a single-GPU `batch size` of 32, one iteration covers 32 samples, whereas four GPUs with a per-GPU `batch size` of 32 cover 128 samples per iteration.
Note that `learning rate` scales linearly with `batch size`. Here a single-GPU `batch size` of 32 corresponds to a `learning rate` of 0.015, so if the `batch size` is cut by 4x to 8, the `learning rate` must also be divided by 4; with multiple GPUs and a per-GPU `batch size` of 32, the `learning rate` must be multiplied by the number of GPUs. In short, changing the `batch size` or the number of training GPUs requires a matching change to the `learning rate`.
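The linear scaling rule above fits in a one-line helper. The numbers mirror the example in this README (base batch size 32, base learning rate 0.015); the function name is just for illustration:

```python
def scaled_lr(batch_size, num_gpus=1, base_lr=0.015, base_batch_size=32):
    """Linear scaling: learning rate grows with the total samples per step."""
    return base_lr * (batch_size * num_gpus) / base_batch_size

print(scaled_lr(32, num_gpus=1))  # 0.015   (the single-GPU setting above)
print(scaled_lr(8, num_gpus=1))   # 0.00375 (batch size cut by 4x -> lr / 4)
print(scaled_lr(32, num_gpus=4))  # 0.06    (4 GPUs -> lr * 4)
```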
## 4. Inference deployment

#### 4.1 Python inference

With the inference model prepared, run prediction with the following command:

```shell
python infer.py -c configs/infer.yaml
```
@@ -133,7 +135,9 @@ python infer.py -c configs/infer.yaml
Note:
- Mind the model's input data size; some models require adjusting the parameters `PreProcess.resize_short` and `PreProcess.resize`.
- To speed up model evaluation, enable `TensorRT` acceleration when evaluating on `GPU`, and enable `MKL-DNN` acceleration when evaluating on `CPU` (a rough example follows this list).
- To use the TensorRT prediction engine, install a Paddle build compiled with `WITH_TRT=ON`; download it from the [Python inference library](https://paddleinference.paddlepaddle.org.cn/master/user_guides/download_lib.html#python).
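The demo's infer.py reads these switches from configs/infer.yaml, but for reference, here is a rough sketch of how they are typically enabled with the Paddle Inference Python API; the model file names below are placeholders:

```python
import paddle.inference as paddle_infer

# Placeholder model/params paths; substitute the exported inference model.
config = paddle_infer.Config("inference.pdmodel", "inference.pdiparams")

use_gpu = True
if use_gpu:
    config.enable_use_gpu(500, 0)        # 500 MB initial GPU memory on card 0
    config.enable_tensorrt_engine(       # TensorRT acceleration for GPU evaluation
        workspace_size=1 << 30,
        max_batch_size=1,
        min_subgraph_size=3,
        precision_mode=paddle_infer.PrecisionType.Float32,
        use_static=False,
        use_calib_mode=False)
else:
    config.disable_gpu()
    config.enable_mkldnn()               # MKL-DNN acceleration for CPU evaluation
    config.set_cpu_math_library_num_threads(10)

predictor = paddle_infer.create_predictor(config)
```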
#### 4.2 PaddleLite on-device deployment

For on-device deployment with PaddleLite, refer to:
demo/auto_compression/image_classification/configs/MobileNetV1/qat_dis.yaml
@@ -15,8 +15,8 @@ Quantization:
```diff
   use_pact: true
   activation_bits: 8
   is_full_quantize: false
-  activation_quantize_type: range_abs_max
-  weight_quantize_type: channel_wise_abs_max
+  activation_quantize_type: moving_average_abs_max
+  weight_quantize_type: abs_max
   not_quant_pattern:
   - skip_quant
   quantize_op_types:
```
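For intuition on the two activation quantization types swapped in this hunk: `abs_max` derives the quantization scale from the current tensor's maximum absolute value, while `moving_average_abs_max` smooths that statistic across batches with an exponential moving average. A small NumPy sketch of the idea (the decay value is illustrative, not taken from PaddleSlim):

```python
import numpy as np

def abs_max_scale(x):
    # scale from the current tensor only
    return np.abs(x).max()

def moving_average_abs_max_scale(x, state, decay=0.9):
    # exponential moving average of per-batch abs-max values
    return decay * state + (1 - decay) * np.abs(x).max()

state = 0.0
for _ in range(100):
    activation = np.random.randn(8, 64).astype("float32")
    state = moving_average_abs_max_scale(activation, state)

print("current-batch abs_max:", abs_max_scale(activation))
print("moving-average abs_max:", state)
```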
demo/auto_compression/image_classification/postprocess.py
@@ -53,34 +53,6 @@ class PostProcesser(object):
```python
        return rtn


class ThreshOutput(object):
    def __init__(self, threshold, label_0="0", label_1="1"):
        self.threshold = threshold
        self.label_0 = label_0
        self.label_1 = label_1

    def __call__(self, x, file_names=None):
        y = []
        for idx, probs in enumerate(x):
            score = probs[1]
            if score < self.threshold:
                result = {
                    "class_ids": [0],
                    "scores": [1 - score],
                    "label_names": [self.label_0]
                }
            else:
                result = {
                    "class_ids": [1],
                    "scores": [score],
                    "label_names": [self.label_1]
                }
            if file_names is not None:
                result["file_name"] = file_names[idx]
            y.append(result)
        return y


class Topk(object):
    def __init__(self, topk=1, class_id_map_file=None):
        assert isinstance(topk, (int, ))
```
@@ -138,14 +110,6 @@ class Topk(object):
```python
        return y


class MultiLabelTopk(Topk):
    def __init__(self, topk=1, class_id_map_file=None):
        super().__init__()

    def __call__(self, x, file_names=None):
        return super().__call__(x, file_names, multilabel=True)


class SavePreLabel(object):
    def __init__(self, save_dir):
        if save_dir is None:
```
@@ -165,156 +129,3 @@ class SavePreLabel(object):
```python
        output_dir = self.save_dir(str(id))
        os.makedirs(output_dir, exist_ok=True)
        shutil.copy(image_file, output_dir)


class Binarize(object):
    def __init__(self, method="round"):
        self.method = method
        self.unit = np.array([[128, 64, 32, 16, 8, 4, 2, 1]]).T

    def __call__(self, x, file_names=None):
        if self.method == "round":
            x = np.round(x + 1).astype("uint8") - 1
        if self.method == "sign":
            x = ((np.sign(x) + 1) / 2).astype("uint8")
        embedding_size = x.shape[1]
        assert embedding_size % 8 == 0, "The Binary index only support vectors with sizes multiple of 8"
        byte = np.zeros([x.shape[0], embedding_size // 8], dtype=np.uint8)
        for i in range(embedding_size // 8):
            byte[:, i:i + 1] = np.dot(x[:, i * 8:(i + 1) * 8], self.unit)
        return byte


class PersonAttribute(object):
    def __init__(self, threshold=0.5, glasses_threshold=0.3,
                 hold_threshold=0.6):
        self.threshold = threshold
        self.glasses_threshold = glasses_threshold
        self.hold_threshold = hold_threshold

    def __call__(self, batch_preds, file_names=None):
        # postprocess output of predictor
        age_list = ['AgeLess18', 'Age18-60', 'AgeOver60']
        direct_list = ['Front', 'Side', 'Back']
        bag_list = ['HandBag', 'ShoulderBag', 'Backpack']
        upper_list = ['UpperStride', 'UpperLogo', 'UpperPlaid', 'UpperSplice']
        lower_list = [
            'LowerStripe', 'LowerPattern', 'LongCoat', 'Trousers', 'Shorts',
            'Skirt&Dress'
        ]
        batch_res = []
        for res in batch_preds:
            res = res.tolist()
            label_res = []
            # gender
            gender = 'Female' if res[22] > self.threshold else 'Male'
            label_res.append(gender)
            # age
            age = age_list[np.argmax(res[19:22])]
            label_res.append(age)
            # direction
            direction = direct_list[np.argmax(res[23:])]
            label_res.append(direction)
            # glasses
            glasses = 'Glasses: '
            if res[1] > self.glasses_threshold:
                glasses += 'True'
            else:
                glasses += 'False'
            label_res.append(glasses)
            # hat
            hat = 'Hat: '
            if res[0] > self.threshold:
                hat += 'True'
            else:
                hat += 'False'
            label_res.append(hat)
            # hold obj
            hold_obj = 'HoldObjectsInFront: '
            if res[18] > self.hold_threshold:
                hold_obj += 'True'
            else:
                hold_obj += 'False'
            label_res.append(hold_obj)
            # bag
            bag = bag_list[np.argmax(res[15:18])]
            bag_score = res[15 + np.argmax(res[15:18])]
            bag_label = bag if bag_score > self.threshold else 'No bag'
            label_res.append(bag_label)
            # upper
            upper_res = res[4:8]
            upper_label = 'Upper:'
            sleeve = 'LongSleeve' if res[3] > res[2] else 'ShortSleeve'
            upper_label += ' {}'.format(sleeve)
            for i, r in enumerate(upper_res):
                if r > self.threshold:
                    upper_label += ' {}'.format(upper_list[i])
            label_res.append(upper_label)
            # lower
            lower_res = res[8:14]
            lower_label = 'Lower: '
            has_lower = False
            for i, l in enumerate(lower_res):
                if l > self.threshold:
                    lower_label += ' {}'.format(lower_list[i])
                    has_lower = True
            if not has_lower:
                lower_label += ' {}'.format(lower_list[np.argmax(lower_res)])
            label_res.append(lower_label)
            # shoe
            shoe = 'Boots' if res[14] > self.threshold else 'No boots'
            label_res.append(shoe)

            threshold_list = [0.5] * len(res)
            threshold_list[1] = self.glasses_threshold
            threshold_list[18] = self.hold_threshold
            pred_res = (np.array(res) > np.array(threshold_list)
                        ).astype(np.int8).tolist()
            batch_res.append({"attributes": label_res, "output": pred_res})
        return batch_res


class VehicleAttribute(object):
    def __init__(self, color_threshold=0.5, type_threshold=0.5):
        self.color_threshold = color_threshold
        self.type_threshold = type_threshold
        self.color_list = [
            "yellow", "orange", "green", "gray", "red", "blue", "white",
            "golden", "brown", "black"
        ]
        self.type_list = [
            "sedan", "suv", "van", "hatchback", "mpv", "pickup", "bus",
            "truck", "estate"
        ]

    def __call__(self, batch_preds, file_names=None):
        # postprocess output of predictor
        batch_res = []
        for res in batch_preds:
            res = res.tolist()
            label_res = []
            color_idx = np.argmax(res[:10])
            type_idx = np.argmax(res[10:])
            if res[color_idx] >= self.color_threshold:
                color_info = f"Color: ({self.color_list[color_idx]}, prob: {res[color_idx]})"
            else:
                color_info = "Color unknown"
            if res[type_idx + 10] >= self.type_threshold:
                type_info = f"Type: ({self.type_list[type_idx]}, prob: {res[type_idx + 10]})"
            else:
                type_info = "Type unknown"
            label_res = f"{color_info}, {type_info}"

            threshold_list = [self.color_threshold] * 10 + [self.type_threshold] * 9
            pred_res = (np.array(res) > np.array(threshold_list)
                        ).astype(np.int8).tolist()
            batch_res.append({"attributes": label_res, "output": pred_res})
        return batch_res
```
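For reference, a quick illustration of what the removed `ThreshOutput` postprocessor (reconstructed above) returned for a binary classifier; the probabilities and file names are made up:

```python
import numpy as np

post = ThreshOutput(threshold=0.6, label_0="negative", label_1="positive")
probs = np.array([[0.9, 0.1],    # score 0.1 <  0.6 -> class 0, score 1 - 0.1
                  [0.2, 0.8]])   # score 0.8 >= 0.6 -> class 1, score 0.8
print(post(probs, file_names=["a.jpg", "b.jpg"]))
# [{'class_ids': [0], 'scores': [0.9], 'label_names': ['negative'], 'file_name': 'a.jpg'},
#  {'class_ids': [1], 'scores': [0.8], 'label_names': ['positive'], 'file_name': 'b.jpg'}]
```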
demo/auto_compression/image_classification/preprocess.py
@@ -26,8 +26,6 @@ import numpy as np
```python
import importlib

from PIL import Image

#from python.det_preprocess import DetNormalizeImage, DetPadStride, DetPermute, DetResize


def create_operators(params):
    """
```
@@ -100,33 +98,6 @@ class OperatorParamError(ValueError):
```python
    pass


class DecodeImage(object):
    """ decode image """

    def __init__(self, to_rgb=True, to_np=False, channel_first=False):
        self.to_rgb = to_rgb
        self.to_np = to_np  # to numpy
        self.channel_first = channel_first  # only enabled when to_np is True

    def __call__(self, img):
        if six.PY2:
            assert type(img) is str and len(
                img) > 0, "invalid input 'img' in DecodeImage"
        else:
            assert type(img) is bytes and len(
                img) > 0, "invalid input 'img' in DecodeImage"
        data = np.frombuffer(img, dtype='uint8')
        img = cv2.imdecode(data, 1)
        if self.to_rgb:
            assert img.shape[2] == 3, 'invalid shape of image[%s]' % (
                img.shape)
            img = img[:, :, ::-1]
        if self.channel_first:
            img = img.transpose((2, 0, 1))
        return img


class ResizeImage(object):
    """ resize image """
```
@@ -188,87 +159,6 @@ class CropImage(object):
```python
        return img[h_start:h_end, w_start:w_end, :]


class RandCropImage(object):
    """ random crop image """

    def __init__(self, size, scale=None, ratio=None, interpolation=None,
                 backend="cv2"):
        if type(size) is int:
            self.size = (size, size)  # (h, w)
        else:
            self.size = size

        self.scale = [0.08, 1.0] if scale is None else scale
        self.ratio = [3. / 4., 4. / 3.] if ratio is None else ratio

        self._resize_func = UnifiedResize(
            interpolation=interpolation, backend=backend)

    def __call__(self, img):
        size = self.size
        scale = self.scale
        ratio = self.ratio

        aspect_ratio = math.sqrt(random.uniform(*ratio))
        w = 1. * aspect_ratio
        h = 1. / aspect_ratio

        img_h, img_w = img.shape[:2]

        bound = min((float(img_w) / img_h) / (w**2),
                    (float(img_h) / img_w) / (h**2))
        scale_max = min(scale[1], bound)
        scale_min = min(scale[0], bound)

        target_area = img_w * img_h * random.uniform(scale_min, scale_max)
        target_size = math.sqrt(target_area)
        w = int(target_size * w)
        h = int(target_size * h)

        i = random.randint(0, img_w - w)
        j = random.randint(0, img_h - h)

        img = img[j:j + h, i:i + w, :]

        return self._resize_func(img, size)


class RandFlipImage(object):
    """ random flip image
        flip_code:
            1: Flipped Horizontally
            0: Flipped Vertically
            -1: Flipped Horizontally & Vertically
    """

    def __init__(self, flip_code=1):
        assert flip_code in [-1, 0, 1
                             ], "flip_code should be a value in [-1, 0, 1]"
        self.flip_code = flip_code

    def __call__(self, img):
        if random.randint(0, 1) == 1:
            return cv2.flip(img, self.flip_code)
        else:
            return img


class AutoAugment(object):
    def __init__(self):
        self.policy = ImageNetPolicy()

    def __call__(self, img):
        from PIL import Image
        img = np.ascontiguousarray(img)
        img = Image.fromarray(img)
        img = self.policy(img)
        img = np.asarray(img)


class NormalizeImage(object):
    """ normalize image such as substract mean, divide std
    """
```
demo/auto_compression/image_classification/run.py
@@ -120,7 +120,8 @@ def main():
```diff
     assert "Global" in all_config, f"Key 'Global' not found in config file. \n{all_config}"
     global_config = all_config["Global"]
     gpu_num = paddle.distributed.get_world_size()
-    if all_config['TrainConfig']['learning_rate'][
+    if isinstance(all_config['TrainConfig']['learning_rate'],
+                  dict) and all_config['TrainConfig']['learning_rate'][
             'type'] == 'CosineAnnealingDecay':
         step = int(math.ceil(
```
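The added `isinstance` guard matters because `TrainConfig.learning_rate` can apparently be either a plain float or a dict describing a scheduler, and indexing a float with `['type']` raises a TypeError. A tiny sketch of the pattern (the config values and the nested `learning_rate` key are made up for illustration):

```python
def pick_lr_schedule(train_config):
    lr = train_config['learning_rate']
    # Only dict-style configs carry a scheduler type such as CosineAnnealingDecay.
    if isinstance(lr, dict) and lr['type'] == 'CosineAnnealingDecay':
        return 'cosine schedule, base lr {}'.format(lr['learning_rate'])
    return 'constant lr {}'.format(lr)

print(pick_lr_schedule({'learning_rate': 0.015}))
print(pick_lr_schedule({'learning_rate': {'type': 'CosineAnnealingDecay',
                                          'learning_rate': 0.015}}))
```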