PaddlePaddle / PaddleClas, commit f64188e4 (unverified)

Merge branch 'PaddlePaddle:develop' into develop

Authored on Aug 18, 2021 by lilithzhou; committed via GitHub on Aug 18, 2021.
Parents: 8d40db24, 4ae4a4a9

Showing 42 changed files with 474 additions and 132 deletions (+474, -132).
Changed files (42):

- README_ch.md (+1, -1)
- docs/images/faq/momentum.jpeg (+0, -0)
- docs/zh_CN/application/mainbody_detection.md (+19, -1)
- docs/zh_CN/faq_series/faq_2021_s2.md (+218, -73)
- docs/zh_CN/tutorials/getting_started.md (+1, -1)
- docs/zh_CN/tutorials/quick_start_professional.md (+2, -2)
- docs/zh_CN_tmp/.gitkeep (+0, -0)
- docs/zh_CN_tmp/advanced_tutorials/.gitkeep (+0, -0)
- docs/zh_CN_tmp/algorithm_introduction/.gitkeep (+0, -0)
- docs/zh_CN_tmp/data_preparation/.gitkeep (+0, -0)
- docs/zh_CN_tmp/faq_series/.gitkeep (+0, -0)
- docs/zh_CN_tmp/image_recognition_pipeline/.gitkeep (+0, -0)
- docs/zh_CN_tmp/inference_deployment/.gitkeep (+0, -0)
- docs/zh_CN_tmp/installation/.gitkeep (+0, -0)
- docs/zh_CN_tmp/introduction/.gitkeep (+0, -0)
- docs/zh_CN_tmp/models_training/.gitkeep (+0, -0)
- docs/zh_CN_tmp/quick_start/.gitkeep (+0, -0)
- ppcls/arch/backbone/base/theseus_layer.py (+70, -29)
- ppcls/arch/backbone/legendary_models/vgg.py (+5, -2)
- ppcls/arch/backbone/model_zoo/resnext101_wsl.py (+1, -1)
- ppcls/configs/ImageNet/GhostNet/GhostNet_x0_5.yaml (+3, -1)
- ppcls/configs/ImageNet/GhostNet/GhostNet_x1_0.yaml (+3, -1)
- ppcls/configs/ImageNet/GhostNet/GhostNet_x1_3.yaml (+3, -1)
- ppcls/configs/ImageNet/MobileNetV1/MobileNetV1.yaml (+1, -1)
- ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_25.yaml (+1, -1)
- ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_5.yaml (+1, -1)
- ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_75.yaml (+1, -1)
- ppcls/configs/ImageNet/MobileNetV2/MobileNetV2.yaml (+1, -1)
- ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_25.yaml (+1, -1)
- ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_5.yaml (+1, -1)
- ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_75.yaml (+1, -1)
- ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x1_5.yaml (+1, -1)
- ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x2_0.yaml (+1, -1)
- ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_swish.yaml (+129, -0)
- ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_25.yaml (+1, -1)
- ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_33.yaml (+1, -1)
- ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_5.yaml (+1, -1)
- ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x1_0.yaml (+1, -1)
- ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x1_5.yaml (+1, -1)
- ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x2_0.yaml (+1, -1)
- ppcls/engine/trainer.py (+2, -2)
- tests/test.sh (+1, -1)
README_ch.md (+1, -1)

Hunk @@ -8,7 +8,7 @@: in the **近期更新** (recent updates) list of the Chinese README, the entry "2021.07.08, 07.27: added 26 [FAQ](docs/zh_CN/faq_series/faq_2021_s2.md) entries" is replaced by "2021.08.11: updated 7 [FAQ](docs/zh_CN/faq_series/faq_2021_s2.md) entries." The following entries are unchanged context:

- 2021.06.29: added the Swin Transformer model series, reaching up to 87.2% Top-1 accuracy on the ImageNet-1k dataset; training, prediction, and evaluation are supported, along with whl-package deployment, and the pretrained models can be downloaded [here](docs/zh_CN/models/models_intro.md).
- 2021.06.22, 23, 24: the PaddleClas R&D team gave a three-day live course with in-depth technical explanations. Replay: [https://aistudio.baidu.com/aistudio/course/introduce/24519](https://aistudio.baidu.com/aistudio/course/introduce/24519)
- 2021.06.16: PaddleClas v2.2 released, integrating metric learning, vector retrieval, and other components; adding four image recognition applications (product, cartoon character, vehicle, and logo recognition) and 30 pretrained models from the LeViT, Twins, TNT, DLA, HarDNet, and RedNet series.
docs/images/faq/momentum.jpeg (new file, mode 100644, 1.1 MB)
docs/zh_CN/application/mainbody_detection.md (+19, -1)

Hunk @@ -167,4 +167,22 @@ python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml

Unchanged context: "For more model-export tutorials, please refer to [EXPORT_MODEL](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md)."

Removed: "After exporting the model, in the mainbody detection and recognition task you can change the detection model path to this inference model path to run prediction. For a quick image-recognition walkthrough, see the [image recognition quick start tutorial](../tutorials/quick_start_recognition.md)."

Added:

Finally, the directory `inference/ppyolov2_r50vd_dcn_365e_coco` contains the files `inference.pdiparams`, `inference.pdiparams.info`, and `inference.pdmodel`, where `inference.pdiparams` is the saved inference model weights and `inference.pdmodel` is the saved inference model structure.

After exporting the model, in the mainbody detection and recognition task you can change the detection model path to this inference model path to run prediction. Taking product recognition as an example, its configuration file is [inference_product.yaml](../../../deploy/configs/inference_product.yaml); set its `Global.det_inference_model_dir` field to the directory of the exported mainbody-detection inference model and follow the [image recognition quick start tutorial](../tutorials/quick_start_recognition.md) to complete the product detection and recognition pipeline.

### FAQ

#### Q: Can other mainbody detection model architectures be used?

* A: Yes, but the current detection preprocessing only matches the preprocessing of the YOLO series, so we recommend training with YOLO-series models first. If you want to use Faster R-CNN or another series, you need to adjust the preprocessing logic to follow PaddleDetection's data preprocessing. If you have such a need or run into problems, feel free to open an issue or give feedback in the user group.

#### Q: Can the prediction resolution of mainbody detection be changed?

* A: Yes, but note two things:
  * The mainbody detection model shipped with PaddleClas was trained at a 640x640 resolution, so prediction also defaults to 640x640; predicting at other resolutions will reduce accuracy somewhat.
  * When exporting the model, it is recommended to change the export resolution as well, so that the export and prediction resolutions stay consistent.
docs/zh_CN/faq_series/faq_2021_s2.md (+218, -73)

This diff is collapsed on the page and its content is not shown.
docs/zh_CN/tutorials/getting_started.md (+1, -1)

Hunk @@ -244,7 +244,7 @@: in the prediction command, the final override changes from `-o PostProcess.class_id_map_file=None` to `-o PostProcess.Topk.class_id_map_file=None`:

```shell
python3 python/predict_cls.py \
    -c configs/inference_cls.yaml \
    -o Global.infer_imgs=../dataset/flowers102/jpg/image_00001.jpg \
    -o Global.inference_model_dir=../inference/ \
    -o PostProcess.Topk.class_id_map_file=None
```

The "其中:" ("where:") explanation that follows the command is unchanged.
docs/zh_CN/tutorials/quick_start_professional.md (+2, -2)

Hunk @@ -128,7 +128,7 @@ python3 -m paddle.distributed.launch \
Context: PaddleClas ships its self-developed SSLD knowledge distillation scheme; see the [knowledge distillation chapter](../advanced_tutorials/distillation/distillation.md) for details. This subsection trains MobileNetV3_large_x1_0 with knowledge distillation, using the ResNet50_vd model trained in section `2.1.2` as the teacher. First, save that ResNet50_vd model to a dedicated directory with the script below; the `mkdir pretrained` line is the one marked as changed in this hunk.

```shell
mkdir pretrained
cp -r output_CIFAR/ResNet50_vd/best_model.pdparams ./pretrained/
```

Hunk @@ -256,5 +256,5 @@ PreProcess:
As in getting_started.md, the final override of the prediction command changes from `-o PostProcess.class_id_map_file=None` to `-o PostProcess.Topk.class_id_map_file=None`:

```shell
python3 python/predict_cls.py \
    -c configs/inference_cls.yaml \
    -o Global.infer_imgs=../dataset/CIFAR100/test/0/0001.png \
    -o PostProcess.Topk.class_id_map_file=None
```
docs/zh_CN_tmp/ placeholders (11 new files, mode 100644, +0, -0 each)

Eleven empty .gitkeep files are added: docs/zh_CN_tmp/.gitkeep, docs/zh_CN_tmp/advanced_tutorials/.gitkeep, docs/zh_CN_tmp/algorithm_introduction/.gitkeep, docs/zh_CN_tmp/data_preparation/.gitkeep, docs/zh_CN_tmp/faq_series/.gitkeep, docs/zh_CN_tmp/image_recognition_pipeline/.gitkeep, docs/zh_CN_tmp/inference_deployment/.gitkeep, docs/zh_CN_tmp/installation/.gitkeep, docs/zh_CN_tmp/introduction/.gitkeep, docs/zh_CN_tmp/models_training/.gitkeep, docs/zh_CN_tmp/quick_start/.gitkeep.
ppcls/arch/backbone/base/theseus_layer.py (+70, -29)

This file reworks how intermediate results are collected. In hunks @@ -12,15 +12,9 @@ and @@ -38,33 +32,43 @@, the old `TheseusLayer` took `return_patterns` in its constructor, exposed a bare `forward` that stored a caller-supplied `res_dict`, and registered hooks in a private `_update_res`:

```python
class TheseusLayer(nn.Layer):
    def __init__(self, *args, return_patterns=None, **kwargs):
        super(TheseusLayer, self).__init__()
        self.res_dict = None
        if return_patterns is not None:
            self._update_res(return_patterns)

    def forward(self, *input, res_dict=None, **kwargs):
        if res_dict is not None:
            self.res_dict = res_dict

    # stop doesn't work when stop layer has a parallel branch.
    def stop_after(self, stop_layer_name: str):
        ...
        return after_stop

    def _update_res(self, return_layers):
        for layer_i in self._sub_layers:
            layer_name = self._sub_layers[layer_i].full_name()
            for return_pattern in return_layers:
                if return_layers is not None and re.match(return_pattern, layer_name):
                    self._sub_layers[layer_i].register_forward_post_hook(
                        self._save_sub_res_hook)

    def _save_sub_res_hook(self, layer, input, output):
        self.res_dict[layer.full_name()] = output

    def replace_sub(self, layer_name_pattern, replace_function, recursive=True):
        for k in self._sub_layers.keys():
            layer_name = self._sub_layers[k].full_name()
            if re.match(layer_name_pattern, layer_name):
                self._sub_layers[k] = replace_function(self._sub_layers[k])
            if recursive:
                if isinstance(self._sub_layers[k], TheseusLayer):
                    self._sub_layers[k].replace_sub(
                        layer_name_pattern, replace_function, recursive)
                elif isinstance(self._sub_layers[k], nn.Sequential) or isinstance(
                        self._sub_layers[k], nn.LayerList):
                    for kk in self._sub_layers[k]._sub_layers.keys():
                        self._sub_layers[k]._sub_layers[kk].replace_sub(
                            layer_name_pattern, replace_function, recursive)
                else:
                    pass
```

After the change, `__init__` no longer takes `return_patterns`, the bare `forward` is dropped, `res_dict` starts as an empty dict, `_update_res` becomes the public `update_res` that wraps `nn.Sequential`/`nn.LayerList` children and propagates `res_dict` recursively, and `replace_sub` iterates `layer_i`/`layer_j` instead of `k`/`kk`:

```python
class TheseusLayer(nn.Layer):
    def __init__(self, *args, **kwargs):
        super(TheseusLayer, self).__init__()
        self.res_dict = {}

    # stop doesn't work when stop layer has a parallel branch.
    def stop_after(self, stop_layer_name: str):
        ...
        return after_stop

    def update_res(self, return_patterns):
        if not return_patterns or isinstance(self, WrapLayer):
            return
        for layer_i in self._sub_layers:
            layer_name = self._sub_layers[layer_i].full_name()
            if isinstance(self._sub_layers[layer_i],
                          (nn.Sequential, nn.LayerList)):
                self._sub_layers[layer_i] = wrap_theseus(self._sub_layers[layer_i])
                self._sub_layers[layer_i].res_dict = self.res_dict
                self._sub_layers[layer_i].update_res(return_patterns)
            else:
                for return_pattern in return_patterns:
                    if re.match(return_pattern, layer_name):
                        if not isinstance(self._sub_layers[layer_i], TheseusLayer):
                            self._sub_layers[layer_i] = wrap_theseus(
                                self._sub_layers[layer_i])
                        self._sub_layers[layer_i].register_forward_post_hook(
                            self._sub_layers[layer_i]._save_sub_res_hook)
                        self._sub_layers[layer_i].res_dict = self.res_dict
            if isinstance(self._sub_layers[layer_i], TheseusLayer):
                self._sub_layers[layer_i].res_dict = self.res_dict
                self._sub_layers[layer_i].update_res(return_patterns)

    def _save_sub_res_hook(self, layer, input, output):
        self.res_dict[layer.full_name()] = output

    def replace_sub(self, layer_name_pattern, replace_function, recursive=True):
        for layer_i in self._sub_layers:
            layer_name = self._sub_layers[layer_i].full_name()
            if re.match(layer_name_pattern, layer_name):
                self._sub_layers[layer_i] = replace_function(self._sub_layers[layer_i])
            if recursive:
                if isinstance(self._sub_layers[layer_i], TheseusLayer):
                    self._sub_layers[layer_i].replace_sub(
                        layer_name_pattern, replace_function, recursive)
                elif isinstance(self._sub_layers[layer_i],
                                (nn.Sequential, nn.LayerList)):
                    for layer_j in self._sub_layers[layer_i]._sub_layers:
                        self._sub_layers[layer_i]._sub_layers[layer_j].replace_sub(
                            layer_name_pattern, replace_function, recursive)
                else:
                    pass
```

Hunk @@ -78,3 +82,40 @@ then appends the new `WrapLayer` helper and `wrap_theseus` factory after the commented `replace_function` example (context lines `return new_conv` and `'''`):

```python
class WrapLayer(TheseusLayer):
    def __init__(self, sub_layer):
        super(WrapLayer, self).__init__()
        self.sub_layer = sub_layer
        self.name = sub_layer.full_name()

    def full_name(self):
        return self.name

    def forward(self, *inputs, **kwargs):
        return self.sub_layer(*inputs, **kwargs)

    def update_res(self, return_patterns):
        if not return_patterns or not isinstance(self.sub_layer,
                                                 (nn.Sequential, nn.LayerList)):
            return
        for layer_i in self.sub_layer._sub_layers:
            if isinstance(self.sub_layer._sub_layers[layer_i],
                          (nn.Sequential, nn.LayerList)):
                self.sub_layer._sub_layers[layer_i] = wrap_theseus(
                    self.sub_layer._sub_layers[layer_i])
                self.sub_layer._sub_layers[layer_i].res_dict = self.res_dict
                self.sub_layer._sub_layers[layer_i].update_res(return_patterns)
            layer_name = self.sub_layer._sub_layers[layer_i].full_name()
            for return_pattern in return_patterns:
                if re.match(return_pattern, layer_name):
                    self.sub_layer._sub_layers[layer_i].res_dict = self.res_dict
                    self.sub_layer._sub_layers[layer_i].register_forward_post_hook(
                        self._sub_layers[layer_i]._save_sub_res_hook)
            if isinstance(self.sub_layer._sub_layers[layer_i], TheseusLayer):
                self.sub_layer._sub_layers[layer_i].update_res(return_patterns)


def wrap_theseus(sub_layer):
    wrapped_layer = WrapLayer(sub_layer)
    return wrapped_layer
```
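The mechanism above hinges on Paddle's forward post-hooks: sub-layers whose `full_name()` matches one of the `return_patterns` regexes get a hook that writes their output into a shared `res_dict`. The sketch below is a minimal, self-contained illustration of that hook pattern; the `TinyNet` class and its layer names are invented for the example and are not part of PaddleClas.

```python
import re

import paddle
import paddle.nn as nn


class TinyNet(nn.Layer):
    """Toy network: sub-layers matching return_patterns report their outputs."""

    def __init__(self, return_patterns=None):
        super().__init__()
        self.block1 = nn.Linear(8, 8)
        self.block2 = nn.Linear(8, 4)
        self.res_dict = {}
        for name, sub in self.named_sublayers():
            if return_patterns and any(re.match(p, name) for p in return_patterns):
                # after every forward pass, store this sub-layer's output
                sub.register_forward_post_hook(self._make_hook(name))

    def _make_hook(self, name):
        def hook(layer, inputs, output):
            self.res_dict[name] = output
        return hook

    def forward(self, x):
        return self.block2(self.block1(x))


net = TinyNet(return_patterns=["block1"])
logits = net(paddle.randn([2, 8]))
print(sorted(net.res_dict.keys()))  # ['block1']
```

The new `WrapLayer` exists because `nn.Sequential` and `nn.LayerList` containers are not `TheseusLayer` instances themselves; wrapping them lets the same hook-and-`res_dict` plumbing recurse into their children.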
ppcls/arch/backbone/legendary_models/vgg.py (+5, -2)

```diff
@@ -111,7 +111,7 @@ class VGGNet(TheseusLayer):
         model: nn.Layer. Specific VGG model depends on args.
     """
 
-    def __init__(self, config, stop_grad_layers=0, class_num=1000):
+    def __init__(self, config, stop_grad_layers=0, class_num=1000, return_patterns=None):
         super().__init__()
 
         self.stop_grad_layers = stop_grad_layers
@@ -138,7 +138,7 @@ class VGGNet(TheseusLayer):
         self.fc2 = Linear(4096, 4096)
         self.fc3 = Linear(4096, class_num)
 
-    def forward(self, inputs):
+    def forward(self, inputs, res_dict=None):
         x = self.conv_block_1(inputs)
         x = self.conv_block_2(x)
         x = self.conv_block_3(x)
@@ -152,6 +152,9 @@ class VGGNet(TheseusLayer):
         x = self.relu(x)
         x = self.drop(x)
         x = self.fc3(x)
+        if self.res_dict and res_dict is not None:
+            for res_key in list(self.res_dict):
+                res_dict[res_key] = self.res_dict.pop(res_key)
         return x
```
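The three added lines in `forward` hand the hook-collected results back to the caller. Below is a minimal stand-alone sketch of that handoff; the `Demo` class is hypothetical and not PaddleClas code, it only mirrors the drain-into-caller's-dict move shown in the diff.

```python
import paddle
import paddle.nn as nn


class Demo(nn.Layer):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
        self.res_dict = {}  # normally filled by forward post-hooks

    def forward(self, x, res_dict=None):
        out = self.fc(x)
        self.res_dict["fc"] = out  # stand-in for what the hooks would record
        if self.res_dict and res_dict is not None:
            # same move as the diff: drain self.res_dict into the caller's dict
            for res_key in list(self.res_dict):
                res_dict[res_key] = self.res_dict.pop(res_key)
        return out


collected = {}
y = Demo()(paddle.randn([1, 4]), res_dict=collected)
print(sorted(collected.keys()))  # ['fc']
```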
ppcls/arch/backbone/model_zoo/resnext101_wsl.py (+1, -1)

The pretrained-weights URL for ResNeXt101_32x16d_wsl is corrected:

```diff
@@ -12,7 +12,7 @@ MODEL_URLS = {
     "ResNeXt101_32x8d_wsl":
     "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNeXt101_32x8d_wsl_pretrained.pdparams",
     "ResNeXt101_32x16d_wsl":
-    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNeXt101_32x8_wsl_pretrained.pdparams",
+    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNeXt101_32x16_wsl_pretrained.pdparams",
     "ResNeXt101_32x32d_wsl":
     "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNeXt101_32x32d_wsl_pretrained.pdparams",
     "ResNeXt101_32x48d_wsl":
```
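A fix like this can be caught mechanically. The snippet below is a hypothetical helper, not part of PaddleClas, that flags URLs whose scale tag does not match the model key; the two entries are copied from the hunk above.

```python
# Flag MODEL_URLS entries whose URL does not contain the model's scale tag.
MODEL_URLS = {
    "ResNeXt101_32x8d_wsl":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNeXt101_32x8d_wsl_pretrained.pdparams",
    "ResNeXt101_32x16d_wsl":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNeXt101_32x16_wsl_pretrained.pdparams",
}

for name, url in MODEL_URLS.items():
    tag = name.split("_")[1]  # e.g. "32x8d" or "32x16d"
    if tag not in url and tag.rstrip("d") not in url:
        print(f"suspicious URL for {name}: {url}")
```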
ppcls/configs/ImageNet/GhostNet/GhostNet_x0_5.yaml, GhostNet_x1_0.yaml, GhostNet_x1_3.yaml (+3, -1 each)

All three GhostNet configs receive the same change: `epsilon: 0.1` (label smoothing) is added to the training CELoss, `warmup_epoch: 5` is added to the cosine learning-rate schedule, and the L2 regularization coefficient changes from 0.0004 to 0.00004:

```diff
@@ -24,6 +24,7 @@ Loss:
   Train:
     - CELoss:
         weight: 1.0
+        epsilon: 0.1
   Eval:
     - CELoss:
         weight: 1.0
@@ -35,9 +36,10 @@ Optimizer:
   lr:
     name: Cosine
     learning_rate: 0.8
+    warmup_epoch: 5
   regularizer:
     name: 'L2'
-    coeff: 0.0004
+    coeff: 0.00004
```
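The added `epsilon: 0.1` turns on label smoothing in the cross-entropy loss. As a quick reminder of what that does to the targets, here is a plain NumPy sketch, independent of PaddleClas:

```python
import numpy as np


def smooth_labels(one_hot, epsilon=0.1):
    """Blend one-hot targets toward the uniform distribution."""
    num_classes = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / num_classes


one_hot = np.eye(4)[[2]]       # class 2 of 4
print(smooth_labels(one_hot))  # [[0.025 0.025 0.925 0.025]]
```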
ppcls/configs/ImageNet/MobileNetV1/MobileNetV1.yaml, MobileNetV1_x0_25.yaml, MobileNetV1_x0_5.yaml, MobileNetV1_x0_75.yaml (+1, -1 each)

All four MobileNetV1 configs change only the L2 regularization coefficient, from 0.0003 to 0.00003 (the hunk starts at line 41 in MobileNetV1.yaml and at line 39 in the three scaled variants):

```diff
@@ -41,7 +41,7 @@ Optimizer:
     values: [0.1, 0.01, 0.001, 0.0001]
   regularizer:
     name: 'L2'
-    coeff: 0.0003
+    coeff: 0.00003
```
ppcls/configs/ImageNet/MobileNetV2/MobileNetV2.yaml, MobileNetV2_x0_25.yaml, MobileNetV2_x0_5.yaml, MobileNetV2_x0_75.yaml, MobileNetV2_x1_5.yaml, MobileNetV2_x2_0.yaml (+1, -1 each)

The six MobileNetV2 configs likewise divide the L2 coefficient by ten: 0.0003 becomes 0.00003 in MobileNetV2_x0_25 and MobileNetV2_x0_5, and 0.0004 becomes 0.00004 in the other four. The hunk starts at line 39 in MobileNetV2.yaml and at line 37 in the scaled variants:

```diff
@@ -37,7 +37,7 @@ Optimizer:
     learning_rate: 0.045
   regularizer:
     name: 'L2'
-    coeff: 0.0004
+    coeff: 0.00004
```
ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_swish.yaml (new file, mode 100644, +129)

```yaml
# global configs
Global:
  checkpoints: null
  pretrained_model: null
  output_dir: ./output/
  device: gpu
  save_interval: 1
  eval_during_train: True
  eval_interval: 1
  epochs: 240
  print_batch_step: 10
  use_visualdl: False
  # used for static mode and model export
  image_shape: [3, 224, 224]
  save_inference_dir: ./inference

# model architecture
Arch:
  name: ShuffleNetV2_swish
  class_num: 1000

# loss function config for traing/eval process
Loss:
  Train:
    - CELoss:
        weight: 1.0
  Eval:
    - CELoss:
        weight: 1.0

Optimizer:
  name: Momentum
  momentum: 0.9
  lr:
    name: Cosine
    learning_rate: 0.5
    warmup_epoch: 5
  regularizer:
    name: 'L2'
    coeff: 0.00004

# data loader for train and eval
DataLoader:
  Train:
    dataset:
      name: ImageNetDataset
      image_root: ./dataset/ILSVRC2012/
      cls_label_path: ./dataset/ILSVRC2012/train_list.txt
      transform_ops:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - RandCropImage:
            size: 224
        - RandFlipImage:
            flip_code: 1
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
    sampler:
      name: DistributedBatchSampler
      batch_size: 256
      drop_last: False
      shuffle: True
    loader:
      num_workers: 4
      use_shared_memory: True

  Eval:
    dataset:
      name: ImageNetDataset
      image_root: ./dataset/ILSVRC2012/
      cls_label_path: ./dataset/ILSVRC2012/val_list.txt
      transform_ops:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - ResizeImage:
            resize_short: 256
        - CropImage:
            size: 224
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
    sampler:
      name: DistributedBatchSampler
      batch_size: 64
      drop_last: False
      shuffle: False
    loader:
      num_workers: 4
      use_shared_memory: True

Infer:
  infer_imgs: docs/images/whl/demo.jpg
  batch_size: 10
  transforms:
    - DecodeImage:
        to_rgb: True
        channel_first: False
    - ResizeImage:
        resize_short: 256
    - CropImage:
        size: 224
    - NormalizeImage:
        scale: 1.0/255.0
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
    - ToCHWImage:
  PostProcess:
    name: Topk
    topk: 5
    class_id_map_file: ppcls/utils/imagenet1k_label_list.txt

Metric:
  Train:
    - TopkAcc:
        topk: [1, 5]
  Eval:
    - TopkAcc:
        topk: [1, 5]
```
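For readers unfamiliar with these config fields, the `NormalizeImage` entry amounts to scaling pixels to [0, 1], subtracting the ImageNet channel means, and dividing by the channel standard deviations. The snippet below is only an illustration of that arithmetic, not PaddleClas code:

```python
import numpy as np

scale = 1.0 / 255.0
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

pixel = np.array([128, 128, 128], dtype=np.float32)  # one RGB pixel
normalized = (pixel * scale - mean) / std
print(normalized)  # roughly [0.074, 0.205, 0.426]
```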
ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_25.yaml, ShuffleNetV2_x0_33.yaml, ShuffleNetV2_x0_5.yaml, ShuffleNetV2_x1_0.yaml, ShuffleNetV2_x1_5.yaml, ShuffleNetV2_x2_0.yaml (+1, -1 each)

Each existing ShuffleNetV2 config changes only the L2 regularization coefficient: 0.0003 becomes 0.00003 in x0_25, x0_33, and x0_5, and 0.0004 becomes 0.00004 in x1_0, x1_5, and x2_0 (the hunk below is shown for an x1_0-style config):

```diff
@@ -38,7 +38,7 @@ Optimizer:
     warmup_epoch: 5
   regularizer:
     name: 'L2'
-    coeff: 0.0004
+    coeff: 0.00004
```
ppcls/engine/trainer.py (+2, -2)

Two call sites stop invoking `self.model` directly and go through `self.forward` instead:

```diff
@@ -588,7 +588,7 @@ class Trainer(object):
             if len(batch) == 3:
                 has_unique_id = True
                 batch[2] = batch[2].reshape([-1, 1]).astype("int64")
-            out = self.model(batch[0], batch[1])
+            out = self.forward(batch)
             batch_feas = out["features"]
             # do norm
@@ -653,7 +653,7 @@ class Trainer(object):
                 image_file_list.append(image_file)
                 if len(batch_data) >= batch_size or idx == len(image_list) - 1:
                     batch_tensor = paddle.to_tensor(batch_data)
-                    out = self.model(batch_tensor)
+                    out = self.forward([batch_tensor])
                     if isinstance(out, list):
                         out = out[0]
                     result = postprocess_func(out, image_file_list)
```
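The diff only shows the call sites, not the `forward` helper they now share; the real helper lives elsewhere in trainer.py. The sketch below is one plausible shape for such a wrapper, written as a hypothetical stand-in to show why routing both paths through a single `forward(batch)` is convenient:

```python
class TrainerForwardSketch:
    """Hypothetical stand-in, not the real ppcls Trainer: a forward(batch)
    helper that unifies the two call sites changed in this hunk."""

    def __init__(self, model):
        self.model = model

    def forward(self, batch):
        # batch is a sequence: images first, labels second when present
        if len(batch) > 1:
            return self.model(batch[0], batch[1])
        return self.model(batch[0])


# usage mirroring the two changed call sites:
trainer = TrainerForwardSketch(model=lambda *xs: xs)  # dummy model for the sketch
out_eval = trainer.forward(["images", "labels"])      # was: self.model(batch[0], batch[1])
out_infer = trainer.forward(["batch_tensor"])         # was: self.model(batch_tensor)
```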
tests/test.sh (+1, -1)

In `func_inference()`, the condition that skips a GPU test combination changes from checking `${precision} = "False"` to `${precision} = "True"`:

```diff
@@ -185,7 +185,7 @@ function func_inference(){
     elif [ ${use_gpu} = "True" ] || [ ${use_gpu} = "gpu" ]; then
         for use_trt in ${use_trt_list[*]}; do
             for precision in ${precision_list[*]}; do
-                if [ ${precision} = "False" ] && [ ${use_trt} = "False" ]; then
+                if [ ${precision} = "True" ] && [ ${use_trt} = "False" ]; then
                     continue
                 fi
                 if [[ ${use_trt} = "False" || ${precision} =~ "int8" ]] && [ ${_flag_quant} = "True" ]; then
```