PaddlePaddle / PaddleClas — commit 1180a55a (unverified)
Authored by Wei Shengyu on Jun 19, 2021; committed via GitHub on Jun 19, 2021.
Merge branch 'PaddlePaddle:develop' into develop
Parents: 1c55e08a, 713b47e5
Showing 8 changed files with 141 additions and 2,860 deletions (+141, −2860).
docs/zh_CN/ImageNet_models_cn.md  +141 −60
ppcls/arch/backbone/model_zoo/hrnet.py  +0 −716
ppcls/arch/backbone/model_zoo/inception_v3.py  +0 −503
ppcls/arch/backbone/model_zoo/mobilenet_v1.py  +0 −288
ppcls/arch/backbone/model_zoo/mobilenet_v3.py  +0 −389
ppcls/arch/backbone/model_zoo/resnet.py  +0 −343
ppcls/arch/backbone/model_zoo/resnet_vd.py  +0 −382
ppcls/arch/backbone/model_zoo/vgg.py  +0 −179
docs/zh_CN/ImageNet_models_cn.md (view file @ 1180a55a)
@@ -30,27 +30,26 @@

| Model | Top-1 Acc | Reference<br>Top-1 Acc | Acc gain | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download link |
|---------------------|-----------|-----------|---------------|----------------|-----------|----------|-----------|-----------------------------------|
-| ResNet34_vd_ssld | 0.797 | 0.760 | 0.037 | 2.434 | 6.222 | 7.39 | 21.82 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet34_vd_ssld_pretrained.pdparams) |
+| ResNet34_vd_ssld | 0.797 | 0.760 | 0.037 | 2.434 | 6.222 | 7.39 | 21.82 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet34_vd_ssld_pretrained.pdparams) |
-| ResNet50_vd_<br>ssld | 0.824 | 0.791 | 0.033 | 3.531 | 8.090 | 8.67 | 25.58 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_vd_ssld_pretrained.pdparams) |
-| ResNet50_vd_<br>ssld_v2 | 0.830 | 0.792 | 0.039 | 3.531 | 8.090 | 8.67 | 25.58 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_vd_ssld_v2_pretrained.pdparams) |
+| ResNet50_vd_<br>ssld | 0.830 | 0.792 | 0.039 | 3.531 | 8.090 | 8.67 | 25.58 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet50_vd_ssld_pretrained.pdparams) |
-| ResNet101_vd_<br>ssld | 0.837 | 0.802 | 0.035 | 6.117 | 13.762 | 16.1 | 44.57 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet101_vd_ssld_pretrained.pdparams) |
+| ResNet101_vd_<br>ssld | 0.837 | 0.802 | 0.035 | 6.117 | 13.762 | 16.1 | 44.57 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet101_vd_ssld_pretrained.pdparams) |
| Res2Net50_vd_<br>26w_4s_ssld | 0.831 | 0.798 | 0.033 | 4.527 | 9.657 | 8.37 | 25.06 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/Res2Net50_vd_26w_4s_ssld_pretrained.pdparams) |
| Res2Net101_vd_<br>26w_4s_ssld | 0.839 | 0.806 | 0.033 | 8.087 | 17.312 | 16.67 | 45.22 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/Res2Net101_vd_26w_4s_ssld_pretrained.pdparams) |
| Res2Net200_vd_<br>26w_4s_ssld | 0.851 | 0.812 | 0.049 | 14.678 | 32.350 | 31.49 | 76.21 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/Res2Net200_vd_26w_4s_ssld_pretrained.pdparams) |
-| HRNet_W18_C_ssld | 0.812 | 0.769 | 0.043 | 7.406 | 13.297 | 4.14 | 21.29 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W18_C_ssld_pretrained.pdparams) |
+| HRNet_W18_C_ssld | 0.812 | 0.769 | 0.043 | 7.406 | 13.297 | 4.14 | 21.29 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W18_C_ssld_pretrained.pdparams) |
-| HRNet_W48_C_ssld | 0.836 | 0.790 | 0.046 | 13.707 | 34.435 | 34.58 | 77.47 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W48_C_ssld_pretrained.pdparams) |
+| HRNet_W48_C_ssld | 0.836 | 0.790 | 0.046 | 13.707 | 34.435 | 34.58 | 77.47 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W48_C_ssld_pretrained.pdparams) |
-| SE_HRNet_W64_C_ssld | 0.848 | - | - | 31.697 | 94.995 | 57.83 | 128.97 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/SE_HRNet_W64_C_ssld_pretrained.pdparams) |
+| SE_HRNet_W64_C_ssld | 0.848 | - | - | 31.697 | 94.995 | 57.83 | 128.97 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/SE_HRNet_W64_C_ssld_pretrained.pdparams) |

* On-device knowledge distillation models

| Model | Top-1 Acc | Reference<br>Top-1 Acc | Acc gain | SD855 time(ms)<br>bs=1 | Flops(G) | Params(M) | Model size(M) | Download link |
|---------------------|-----------|-----------|---------------|----------------|-----------|----------|-----------|-----------------------------------|
-| MobileNetV1_<br>ssld | 0.779 | 0.710 | 0.069 | 32.523 | 1.11 | 4.19 | 16 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV1_ssld_pretrained.pdparams) |
+| MobileNetV1_<br>ssld | 0.779 | 0.710 | 0.069 | 32.523 | 1.11 | 4.19 | 16 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV1_ssld_pretrained.pdparams) |
| MobileNetV2_<br>ssld | 0.767 | 0.722 | 0.045 | 23.318 | 0.6 | 3.44 | 14 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV2_ssld_pretrained.pdparams) |
-| MobileNetV3_<br>small_x0_35_ssld | 0.556 | 0.530 | 0.026 | 2.635 | 0.026 | 1.66 | 6.9 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x0_35_ssld_pretrained.pdparams) |
+| MobileNetV3_<br>small_x0_35_ssld | 0.556 | 0.530 | 0.026 | 2.635 | 0.026 | 1.66 | 6.9 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_small_x0_35_ssld_pretrained.pdparams) |
-| MobileNetV3_<br>large_x1_0_ssld | 0.790 | 0.753 | 0.036 | 19.308 | 0.45 | 5.47 | 21 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x1_0_ssld_pretrained.pdparams) |
+| MobileNetV3_<br>large_x1_0_ssld | 0.790 | 0.753 | 0.036 | 19.308 | 0.45 | 5.47 | 21 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_large_x1_0_ssld_pretrained.pdparams) |
-| MobileNetV3_small_<br>x1_0_ssld | 0.713 | 0.682 | 0.031 | 6.546 | 0.123 | 2.94 | 12 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x1_0_ssld_pretrained.pdparams) |
+| MobileNetV3_small_<br>x1_0_ssld | 0.713 | 0.682 | 0.031 | 6.546 | 0.123 | 2.94 | 12 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_small_x1_0_ssld_pretrained.pdparams) |
| GhostNet_<br>x1_3_ssld | 0.794 | 0.757 | 0.037 | 19.983 | 0.44 | 7.3 | 29 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/GhostNet_x1_3_ssld_pretrained.pdparams) |
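For most of the rows above, the only change is that the pretrained-weight URL gains a `legendary_models/` path segment. A minimal sketch (not part of this commit) of fetching one of the updated checkpoints and opening its state dict with Paddle, assuming network access and an installed `paddle`; the local filename is arbitrary:

```python
import urllib.request
import paddle

# Updated URL from the table above (note the legendary_models/ segment).
URL = ("https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/"
       "legendary_models/ResNet34_vd_ssld_pretrained.pdparams")

local_path = "ResNet34_vd_ssld_pretrained.pdparams"
urllib.request.urlretrieve(URL, local_path)   # download the checkpoint
state_dict = paddle.load(local_path)          # dict: parameter name -> tensor
print(len(state_dict), "parameters")
```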
@@ -63,23 +62,21 @@ Accuracy and speed metrics of the ResNet and ResNet_vd series models are shown in the table below; for more about…

| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download link |
|---------------------|-----------|-----------|------------------|------------------|----------|-----------|-----------------------------------|
-| ResNet18 | 0.7098 | 0.8992 | 1.45606 | 3.56305 | 3.66 | 11.69 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet18_pretrained.pdparams) |
+| ResNet18 | 0.7098 | 0.8992 | 1.45606 | 3.56305 | 3.66 | 11.69 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet18_pretrained.pdparams) |
-| ResNet18_vd | 0.7226 | 0.9080 | 1.54557 | 3.85363 | 4.14 | 11.71 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet18_vd_pretrained.pdparams) |
+| ResNet18_vd | 0.7226 | 0.9080 | 1.54557 | 3.85363 | 4.14 | 11.71 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet18_vd_pretrained.pdparams) |
-| ResNet34 | 0.7457 | 0.9214 | 2.34957 | 5.89821 | 7.36 | 21.8 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet34_pretrained.pdparams) |
+| ResNet34 | 0.7457 | 0.9214 | 2.34957 | 5.89821 | 7.36 | 21.8 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet34_pretrained.pdparams) |
-| ResNet34_vd | 0.7598 | 0.9298 | 2.43427 | 6.22257 | 7.39 | 21.82 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet34_vd_pretrained.pdparams) |
+| ResNet34_vd | 0.7598 | 0.9298 | 2.43427 | 6.22257 | 7.39 | 21.82 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet34_vd_pretrained.pdparams) |
-| ResNet34_vd_ssld | 0.7972 | 0.9490 | 2.43427 | 6.22257 | 7.39 | 21.82 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet34_vd_ssld_pretrained.pdparams) |
+| ResNet34_vd_ssld | 0.7972 | 0.9490 | 2.43427 | 6.22257 | 7.39 | 21.82 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet34_vd_ssld_pretrained.pdparams) |
-| ResNet50 | 0.7650 | 0.9300 | 3.47712 | 7.84421 | 8.19 | 25.56 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_pretrained.pdparams) |
+| ResNet50 | 0.7650 | 0.9300 | 3.47712 | 7.84421 | 8.19 | 25.56 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet50_pretrained.pdparams) |
| ResNet50_vc | 0.7835 | 0.9403 | 3.52346 | 8.10725 | 8.67 | 25.58 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_vc_pretrained.pdparams) |
-| ResNet50_vd | 0.7912 | 0.9444 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_vd_pretrained.pdparams) |
+| ResNet50_vd | 0.7912 | 0.9444 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet50_vd_pretrained.pdparams) |
-| ResNet50_vd_v2 | 0.7984 | 0.9493 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_vd_v2_pretrained.pdparams) |
-| ResNet101 | 0.7756 | 0.9364 | 6.07125 | 13.40573 | 15.52 | 44.55 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet101_pretrained.pdparams) |
+| ResNet101 | 0.7756 | 0.9364 | 6.07125 | 13.40573 | 15.52 | 44.55 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet101_pretrained.pdparams) |
-| ResNet101_vd | 0.8017 | 0.9497 | 6.11704 | 13.76222 | 16.1 | 44.57 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet101_vd_pretrained.pdparams) |
+| ResNet101_vd | 0.8017 | 0.9497 | 6.11704 | 13.76222 | 16.1 | 44.57 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet101_vd_pretrained.pdparams) |
-| ResNet152 | 0.7826 | 0.9396 | 8.50198 | 19.17073 | 23.05 | 60.19 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet152_pretrained.pdparams) |
+| ResNet152 | 0.7826 | 0.9396 | 8.50198 | 19.17073 | 23.05 | 60.19 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet152_pretrained.pdparams) |
-| ResNet152_vd | 0.8059 | 0.9530 | 8.54376 | 19.52157 | 23.53 | 60.21 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet152_vd_pretrained.pdparams) |
+| ResNet152_vd | 0.8059 | 0.9530 | 8.54376 | 19.52157 | 23.53 | 60.21 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet152_vd_pretrained.pdparams) |
-| ResNet200_vd | 0.8093 | 0.9533 | 10.80619 | 25.01731 | 30.53 | 74.74 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet200_vd_pretrained.pdparams) |
+| ResNet200_vd | 0.8093 | 0.9533 | 10.80619 | 25.01731 | 30.53 | 74.74 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet200_vd_pretrained.pdparams) |
-| ResNet50_vd_<br>ssld | 0.8239 | 0.9610 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_vd_ssld_pretrained.pdparams) |
-| ResNet50_vd_<br>ssld_v2 | 0.8300 | 0.9640 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_vd_ssld_v2_pretrained.pdparams) |
+| ResNet50_vd_<br>ssld | 0.8300 | 0.9640 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet50_vd_ssld_pretrained.pdparams) |
-| ResNet101_vd_<br>ssld | 0.8373 | 0.9669 | 6.11704 | 13.76222 | 16.1 | 44.57 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet101_vd_ssld_pretrained.pdparams) |
+| ResNet101_vd_<br>ssld | 0.8373 | 0.9669 | 6.11704 | 13.76222 | 16.1 | 44.57 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet101_vd_ssld_pretrained.pdparams) |

<a name="移动端系列"></a>
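A small cross-check of the distillation tables: the "Acc gain" column is the distilled (ssld) Top-1 minus the reference Top-1. Using the higher-precision Top-1 values from the ResNet table just above (values copied, not recomputed from logs), the differences come out close to the 0.037 / 0.039 / 0.035 gains listed earlier:

```python
# (model, ssld Top-1, non-ssld Top-1) copied from the ResNet table above.
rows = [
    ("ResNet34_vd_ssld", 0.7972, 0.7598),
    ("ResNet50_vd_ssld", 0.8300, 0.7912),
    ("ResNet101_vd_ssld", 0.8373, 0.8017),
]
for name, ssld_acc, ref_acc in rows:
    # Matches the "Acc gain" column up to rounding of the published digits.
    print(f"{name}: acc gain ~= {ssld_acc - ref_acc:.4f}")
```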
@@ -89,11 +86,11 @@ Accuracy and speed metrics of the mobile-series models are shown in the table below; for more about…

| Model | Top-1 Acc | Top-5 Acc | SD855 time(ms)<br>bs=1 | Flops(G) | Params(M) | Model size(M) | Download link |
|----------------------------------|-----------|-----------|------------------------|----------|-----------|---------|-----------------------------------|
-| MobileNetV1_<br>x0_25 | 0.5143 | 0.7546 | 3.21985 | 0.07 | 0.46 | 1.9 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV1_x0_25_pretrained.pdparams) |
+| MobileNetV1_<br>x0_25 | 0.5143 | 0.7546 | 3.21985 | 0.07 | 0.46 | 1.9 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV1_x0_25_pretrained.pdparams) |
-| MobileNetV1_<br>x0_5 | 0.6352 | 0.8473 | 9.579599 | 0.28 | 1.31 | 5.2 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV1_x0_5_pretrained.pdparams) |
+| MobileNetV1_<br>x0_5 | 0.6352 | 0.8473 | 9.579599 | 0.28 | 1.31 | 5.2 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV1_x0_5_pretrained.pdparams) |
-| MobileNetV1_<br>x0_75 | 0.6881 | 0.8823 | 19.436399 | 0.63 | 2.55 | 10 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV1_x0_75_pretrained.pdparams) |
+| MobileNetV1_<br>x0_75 | 0.6881 | 0.8823 | 19.436399 | 0.63 | 2.55 | 10 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV1_x0_75_pretrained.pdparams) |
-| MobileNetV1 | 0.7099 | 0.8968 | 32.523048 | 1.11 | 4.19 | 16 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV1_pretrained.pdparams) |
+| MobileNetV1 | 0.7099 | 0.8968 | 32.523048 | 1.11 | 4.19 | 16 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV1_pretrained.pdparams) |
-| MobileNetV1_<br>ssld | 0.7789 | 0.9394 | 32.523048 | 1.11 | 4.19 | 16 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV1_ssld_pretrained.pdparams) |
+| MobileNetV1_<br>ssld | 0.7789 | 0.9394 | 32.523048 | 1.11 | 4.19 | 16 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV1_ssld_pretrained.pdparams) |
| MobileNetV2_<br>x0_25 | 0.5321 | 0.7652 | 3.79925 | 0.05 | 1.5 | 6.1 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV2_x0_25_pretrained.pdparams) |
| MobileNetV2_<br>x0_5 | 0.6503 | 0.8572 | 8.7021 | 0.17 | 1.93 | 7.8 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV2_x0_5_pretrained.pdparams) |
| MobileNetV2_<br>x0_75 | 0.6983 | 0.8901 | 15.531351 | 0.35 | 2.58 | 10 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV2_x0_75_pretrained.pdparams) |
@@ -101,19 +98,19 @@

| MobileNetV2_<br>x1_5 | 0.7412 | 0.9167 | 45.623848 | 1.32 | 6.76 | 26 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV2_x1_5_pretrained.pdparams) |
| MobileNetV2_<br>x2_0 | 0.7523 | 0.9258 | 74.291649 | 2.32 | 11.13 | 43 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV2_x2_0_pretrained.pdparams) |
| MobileNetV2_<br>ssld | 0.7674 | 0.9339 | 23.317699 | 0.6 | 3.44 | 14 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV2_ssld_pretrained.pdparams) |
-| MobileNetV3_<br>large_x1_25 | 0.7641 | 0.9295 | 28.217701 | 0.714 | 7.44 | 29 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x1_25_pretrained.pdparams) |
+| MobileNetV3_<br>large_x1_25 | 0.7641 | 0.9295 | 28.217701 | 0.714 | 7.44 | 29 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_large_x1_25_pretrained.pdparams) |
-| MobileNetV3_<br>large_x1_0 | 0.7532 | 0.9231 | 19.30835 | 0.45 | 5.47 | 21 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x1_0_pretrained.pdparams) |
+| MobileNetV3_<br>large_x1_0 | 0.7532 | 0.9231 | 19.30835 | 0.45 | 5.47 | 21 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_large_x1_0_pretrained.pdparams) |
-| MobileNetV3_<br>large_x0_75 | 0.7314 | 0.9108 | 13.5646 | 0.296 | 3.91 | 16 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_75_pretrained.pdparams) |
+| MobileNetV3_<br>large_x0_75 | 0.7314 | 0.9108 | 13.5646 | 0.296 | 3.91 | 16 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_large_x0_75_pretrained.pdparams) |
-| MobileNetV3_<br>large_x0_5 | 0.6924 | 0.8852 | 7.49315 | 0.138 | 2.67 | 11 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_5_pretrained.pdparams) |
+| MobileNetV3_<br>large_x0_5 | 0.6924 | 0.8852 | 7.49315 | 0.138 | 2.67 | 11 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_large_x0_5_pretrained.pdparams) |
-| MobileNetV3_<br>large_x0_35 | 0.6432 | 0.8546 | 5.13695 | 0.077 | 2.1 | 8.6 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_35_pretrained.pdparams) |
+| MobileNetV3_<br>large_x0_35 | 0.6432 | 0.8546 | 5.13695 | 0.077 | 2.1 | 8.6 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_large_x0_35_pretrained.pdparams) |
-| MobileNetV3_<br>small_x1_25 | 0.7067 | 0.8951 | 9.2745 | 0.195 | 3.62 | 14 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x1_25_pretrained.pdparams) |
+| MobileNetV3_<br>small_x1_25 | 0.7067 | 0.8951 | 9.2745 | 0.195 | 3.62 | 14 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_small_x1_25_pretrained.pdparams) |
-| MobileNetV3_<br>small_x1_0 | 0.6824 | 0.8806 | 6.5463 | 0.123 | 2.94 | 12 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x1_0_pretrained.pdparams) |
+| MobileNetV3_<br>small_x1_0 | 0.6824 | 0.8806 | 6.5463 | 0.123 | 2.94 | 12 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_small_x1_0_pretrained.pdparams) |
-| MobileNetV3_<br>small_x0_75 | 0.6602 | 0.8633 | 5.28435 | 0.088 | 2.37 | 9.6 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x0_75_pretrained.pdparams) |
+| MobileNetV3_<br>small_x0_75 | 0.6602 | 0.8633 | 5.28435 | 0.088 | 2.37 | 9.6 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_small_x0_75_pretrained.pdparams) |
-| MobileNetV3_<br>small_x0_5 | 0.5921 | 0.8152 | 3.35165 | 0.043 | 1.9 | 7.8 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x0_5_pretrained.pdparams) |
+| MobileNetV3_<br>small_x0_5 | 0.5921 | 0.8152 | 3.35165 | 0.043 | 1.9 | 7.8 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_small_x0_5_pretrained.pdparams) |
-| MobileNetV3_<br>small_x0_35 | 0.5303 | 0.7637 | 2.6352 | 0.026 | 1.66 | 6.9 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x0_35_pretrained.pdparams) |
+| MobileNetV3_<br>small_x0_35 | 0.5303 | 0.7637 | 2.6352 | 0.026 | 1.66 | 6.9 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_small_x0_35_pretrained.pdparams) |
-| MobileNetV3_<br>small_x0_35_ssld | 0.5555 | 0.7771 | 2.6352 | 0.026 | 1.66 | 6.9 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x0_35_ssld_pretrained.pdparams) |
+| MobileNetV3_<br>small_x0_35_ssld | 0.5555 | 0.7771 | 2.6352 | 0.026 | 1.66 | 6.9 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_small_x0_35_ssld_pretrained.pdparams) |
-| MobileNetV3_<br>large_x1_0_ssld | 0.7896 | 0.9448 | 19.30835 | 0.45 | 5.47 | 21 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x1_0_ssld_pretrained.pdparams) |
+| MobileNetV3_<br>large_x1_0_ssld | 0.7896 | 0.9448 | 19.30835 | 0.45 | 5.47 | 21 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_large_x1_0_ssld_pretrained.pdparams) |
-| MobileNetV3_small_<br>x1_0_ssld | 0.7129 | 0.9010 | 6.5463 | 0.123 | 2.94 | 12 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x1_0_ssld_pretrained.pdparams) |
+| MobileNetV3_small_<br>x1_0_ssld | 0.7129 | 0.9010 | 6.5463 | 0.123 | 2.94 | 12 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV3_small_x1_0_ssld_pretrained.pdparams) |
| ShuffleNetV2 | 0.6880 | 0.8845 | 10.941 | 0.28 | 2.26 | 9 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ShuffleNetV2_x1_0_pretrained.pdparams) |
| ShuffleNetV2_<br>x0_25 | 0.4990 | 0.7379 | 2.329 | 0.03 | 0.6 | 2.7 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ShuffleNetV2_x0_25_pretrained.pdparams) |
| ShuffleNetV2_<br>x0_33 | 0.5373 | 0.7705 | 2.64335 | 0.04 | 0.64 | 2.8 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ShuffleNetV2_x0_33_pretrained.pdparams) |
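Tables like the mobile-series one above are typically consumed by picking the most accurate model under a latency budget. A small illustrative sketch over a few rows copied from the table (the 12 ms budget is an arbitrary example, not a recommendation from the docs):

```python
# (model, Top-1 Acc, SD855 time(ms) bs=1, Params(M)) copied from the table above.
candidates = [
    ("MobileNetV3_large_x1_0", 0.7532, 19.30835, 5.47),
    ("MobileNetV3_small_x1_0", 0.6824, 6.5463, 2.94),
    ("MobileNetV3_small_x0_35", 0.5303, 2.6352, 1.66),
    ("ShuffleNetV2", 0.6880, 10.941, 2.26),
]

budget_ms = 12.0
feasible = [c for c in candidates if c[2] <= budget_ms]
best = max(feasible, key=lambda c: c[1])
print("best under", budget_ms, "ms:", best[0])   # ShuffleNetV2 for this subset
```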
@@ -191,16 +188,16 @@ Accuracy and speed metrics of the HRNet series models are shown in the table below; for more about this series…

| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download link |
|-------------|-----------|-----------|------------------|------------------|----------|-----------|-----------------------------------|
-| HRNet_W18_C | 0.7692 | 0.9339 | 7.40636 | 13.29752 | 4.14 | 21.29 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W18_C_pretrained.pdparams) |
+| HRNet_W18_C | 0.7692 | 0.9339 | 7.40636 | 13.29752 | 4.14 | 21.29 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W18_C_pretrained.pdparams) |
-| HRNet_W18_C_ssld | 0.81162 | 0.95804 | 7.40636 | 13.29752 | 4.14 | 21.29 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W18_C_ssld_pretrained.pdparams) |
+| HRNet_W18_C_ssld | 0.81162 | 0.95804 | 7.40636 | 13.29752 | 4.14 | 21.29 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W18_C_ssld_pretrained.pdparams) |
-| HRNet_W30_C | 0.7804 | 0.9402 | 9.57594 | 17.35485 | 16.23 | 37.71 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W30_C_pretrained.pdparams) |
+| HRNet_W30_C | 0.7804 | 0.9402 | 9.57594 | 17.35485 | 16.23 | 37.71 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W30_C_pretrained.pdparams) |
-| HRNet_W32_C | 0.7828 | 0.9424 | 9.49807 | 17.72921 | 17.86 | 41.23 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W32_C_pretrained.pdparams) |
+| HRNet_W32_C | 0.7828 | 0.9424 | 9.49807 | 17.72921 | 17.86 | 41.23 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W32_C_pretrained.pdparams) |
-| HRNet_W40_C | 0.7877 | 0.9447 | 12.12202 | 25.68184 | 25.41 | 57.55 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W40_C_pretrained.pdparams) |
+| HRNet_W40_C | 0.7877 | 0.9447 | 12.12202 | 25.68184 | 25.41 | 57.55 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W40_C_pretrained.pdparams) |
-| HRNet_W44_C | 0.7900 | 0.9451 | 13.19858 | 32.25202 | 29.79 | 67.06 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W44_C_pretrained.pdparams) |
+| HRNet_W44_C | 0.7900 | 0.9451 | 13.19858 | 32.25202 | 29.79 | 67.06 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W44_C_pretrained.pdparams) |
-| HRNet_W48_C | 0.7895 | 0.9442 | 13.70761 | 34.43572 | 34.58 | 77.47 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W48_C_pretrained.pdparams) |
+| HRNet_W48_C | 0.7895 | 0.9442 | 13.70761 | 34.43572 | 34.58 | 77.47 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W48_C_pretrained.pdparams) |
-| HRNet_W48_C_ssld | 0.8363 | 0.9682 | 13.70761 | 34.43572 | 34.58 | 77.47 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W48_C_ssld_pretrained.pdparams) |
+| HRNet_W48_C_ssld | 0.8363 | 0.9682 | 13.70761 | 34.43572 | 34.58 | 77.47 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W48_C_ssld_pretrained.pdparams) |
-| HRNet_W64_C | 0.7930 | 0.9461 | 17.57527 | 47.9533 | 57.83 | 128.06 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W64_C_pretrained.pdparams) |
+| HRNet_W64_C | 0.7930 | 0.9461 | 17.57527 | 47.9533 | 57.83 | 128.06 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/HRNet_W64_C_pretrained.pdparams) |
-| SE_HRNet_W64_C_ssld | 0.8475 | 0.9726 | 31.69770 | 94.99546 | 57.83 | 128.97 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/SE_HRNet_W64_C_ssld_pretrained.pdparams) |
+| SE_HRNet_W64_C_ssld | 0.8475 | 0.9726 | 31.69770 | 94.99546 | 57.83 | 128.97 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/SE_HRNet_W64_C_ssld_pretrained.pdparams) |

<a name="Inception系列"></a>
@@ -216,7 +213,7 @@ Accuracy and speed metrics of the Inception series models are shown in the table below; for more about…

| Xception65 | 0.8100 | 0.9549 | 7.26158 | 25.88778 | 25.95 | 35.48 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/Xception65_pretrained.pdparams) |
| Xception65_deeplab | 0.8032 | 0.9449 | 7.60208 | 26.03699 | 27.37 | 39.52 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/Xception65_deeplab_pretrained.pdparams) |
| Xception71 | 0.8111 | 0.9545 | 8.72457 | 31.55549 | 31.77 | 37.28 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/Xception71_pretrained.pdparams) |
-| InceptionV3 | 0.7914 | 0.9459 | 6.64054 | 13.53630 | 11.46 | 23.83 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/InceptionV3_pretrained.pdparams) |
+| InceptionV3 | 0.7914 | 0.9459 | 6.64054 | 13.53630 | 11.46 | 23.83 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/InceptionV3_pretrained.pdparams) |
| InceptionV4 | 0.8077 | 0.9526 | 12.99342 | 25.23416 | 24.57 | 42.68 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/InceptionV4_pretrained.pdparams) |
@@ -352,6 +349,90 @@ ViT (Vision Transformer) and DeiT (Data-efficient Image Transformers) series

[1]: Pretrained on the ImageNet22k dataset, then obtained by transfer learning on ImageNet1k.

+<a name="LeViT系列"></a>
+### LeViT series
+Accuracy and speed metrics for the LeViT series models are shown in the table below; for more details see the [LeViT series model docs](./models/LeViT.md).
+| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(M) | Params(M) | Download link |
+| ---------- | --------- | --------- | ---------------- | ---------------- | -------- | --------- | ------------------------------------------------------------ |
+| LeViT_128S | 0.7598 | 0.9269 | | | 305 | 7.8 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/LeViT_128S_pretrained.pdparams) |
+| LeViT_128 | 0.7810 | 0.9371 | | | 406 | 9.2 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/LeViT_128_pretrained.pdparams) |
+| LeViT_192 | 0.7934 | 0.9446 | | | 658 | 11 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/LeViT_192_pretrained.pdparams) |
+| LeViT_256 | 0.8085 | 0.9497 | | | 1120 | 19 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/LeViT_256_pretrained.pdparams) |
+| LeViT_384 | 0.8191 | 0.9551 | | | 2353 | 39 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/LeViT_384_pretrained.pdparams) |
+**Note**: The accuracy gap relative to Reference comes from differences in data preprocessing and from not using the distilled head as the output.
+<a name="Twins系列"></a>
+### Twins series
+Accuracy and speed metrics for the Twins series models are shown in the table below; for more details see the [Twins series model docs](./models/Twins.md).
+| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download link |
+| ---------- | --------- | --------- | ---------------- | ---------------- | -------- | --------- | ------------------------------------------------------------ |
+| pcpvt_small | 0.8082 | 0.9552 | | | 3.7 | 24.1 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/pcpvt_small_pretrained.pdparams) |
+| pcpvt_base | 0.8242 | 0.9619 | | | 6.4 | 43.8 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/pcpvt_base_pretrained.pdparams) |
+| pcpvt_large | 0.8273 | 0.9650 | | | 9.5 | 60.9 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/pcpvt_large_pretrained.pdparams) |
+| alt_gvt_small | 0.8140 | 0.9546 | | | 2.8 | 24 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/alt_gvt_small_pretrained.pdparams) |
+| alt_gvt_base | 0.8294 | 0.9621 | | | 8.3 | 56 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/alt_gvt_base_pretrained.pdparams) |
+| alt_gvt_large | 0.8331 | 0.9642 | | | 14.8 | 99.2 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/alt_gvt_large_pretrained.pdparams) |
+**Note**: The accuracy gap relative to Reference comes from differences in data preprocessing.
+<a name="HarDNet系列"></a>
+### HarDNet series
+Accuracy and speed metrics for the HarDNet series models are shown in the table below; for more details see the [HarDNet series model docs](./models/HarDNet.md).
+| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download link |
+| ---------- | --------- | --------- | ---------------- | ---------------- | -------- | --------- | ------------------------------------------------------------ |
+| HarDNet39_ds | 0.7133 | 0.8998 | | | 0.4 | 3.5 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HarDNet39_ds_pretrained.pdparams) |
+| HarDNet68_ds | 0.7362 | 0.9152 | | | 0.8 | 4.2 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HarDNet68_ds_pretrained.pdparams) |
+| HarDNet68 | 0.7546 | 0.9265 | | | 4.3 | 17.6 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HarDNet68_pretrained.pdparams) |
+| HarDNet85 | 0.7744 | 0.9355 | | | 9.1 | 36.7 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HarDNet85_pretrained.pdparams) |
+<a name="DLA系列"></a>
+### DLA series
+Accuracy and speed metrics for the DLA series models are shown in the table below; for more details see the [DLA series model docs](./models/DLA.md).
+| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download link |
+| ---------- | --------- | --------- | ---------------- | ---------------- | -------- | --------- | ------------------------------------------------------------ |
+| DLA102 | 0.7893 | 0.9452 | | | 7.2 | 33.3 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DLA102_pretrained.pdparams) |
+| DLA102x2 | 0.7885 | 0.9445 | | | 9.3 | 41.4 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DLA102x2_pretrained.pdparams) |
+| DLA102x | 0.781 | 0.9400 | | | 5.9 | 26.4 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DLA102x_pretrained.pdparams) |
+| DLA169 | 0.7809 | 0.9409 | | | 11.6 | 53.5 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DLA169_pretrained.pdparams) |
+| DLA34 | 0.7603 | 0.9298 | | | 3.1 | 15.8 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DLA34_pretrained.pdparams) |
+| DLA46_c | 0.6321 | 0.853 | | | 0.5 | 1.3 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DLA46_c_pretrained.pdparams) |
+| DLA60 | 0.7610 | 0.9292 | | | 4.2 | 22.0 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DLA60_pretrained.pdparams) |
+| DLA60x_c | 0.6645 | 0.8754 | | | 0.6 | 1.3 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DLA60x_c_pretrained.pdparams) |
+| DLA60x | 0.7753 | 0.9378 | | | 3.5 | 17.4 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DLA60x_pretrained.pdparams) |
+<a name="RedNet系列"></a>
+### RedNet series
+Accuracy and speed metrics for the RedNet series models are shown in the table below; for more details see the [RedNet series model docs](./models/RedNet.md).
+| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download link |
+| ---------- | --------- | --------- | ---------------- | ---------------- | -------- | --------- | ------------------------------------------------------------ |
+| RedNet26 | 0.7595 | 0.9319 | | | 1.7 | 9.2 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RedNet26_pretrained.pdparams) |
+| RedNet38 | 0.7747 | 0.9356 | | | 2.2 | 12.4 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RedNet38_pretrained.pdparams) |
+| RedNet50 | 0.7833 | 0.9417 | | | 2.7 | 15.5 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RedNet50_pretrained.pdparams) |
+| RedNet101 | 0.7894 | 0.9436 | | | 4.7 | 25.7 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RedNet101_pretrained.pdparams) |
+| RedNet152 | 0.7917 | 0.9440 | | | 6.8 | 34.0 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RedNet152_pretrained.pdparams) |
+<a name="TNT系列"></a>
+### TNT series
+Accuracy and speed metrics for the TNT series models are shown in the table below; for more details see the [TNT series model docs](./models/TNT.md).
+| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download link |
+| ---------- | --------- | --------- | ---------------- | ---------------- | -------- | --------- | ------------------------------------------------------------ |
+| TNT_small | 0.8121 | 0.9563 | | | 5.2 | 23.8 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/TNT_small_pretrained.pdparams) | |
+**Note**: In the TNT model's data preprocessing, `mean` and `std` in `NormalizeImage` are both 0.5.

<a name="其他模型"></a>
### Other models
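The TNT note above matters when reusing these checkpoints: unlike the other families, which use the standard ImageNet statistics, TNT normalizes with mean and std of 0.5 per channel. A hedged sketch of the two normalization choices side by side (the exact YAML keys used in PaddleClas configs may differ; this only illustrates the values):

```python
from paddle.vision import transforms as T

# Standard ImageNet normalization used by most models in these tables.
imagenet_norm = T.Normalize(mean=[0.485, 0.456, 0.406],
                            std=[0.229, 0.224, 0.225],
                            data_format="CHW")

# TNT: NormalizeImage with mean = std = 0.5 for every channel.
tnt_norm = T.Normalize(mean=[0.5, 0.5, 0.5],
                       std=[0.5, 0.5, 0.5],
                       data_format="CHW")
```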
@@ -364,8 +445,8 @@ ViT (Vision Transformer) and DeiT (Data-efficient Image Transformers) series

| AlexNet | 0.567 | 0.792 | 1.44993 | 2.46696 | 1.370 | 61.090 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/AlexNet_pretrained.pdparams) |
| SqueezeNet1_0 | 0.596 | 0.817 | 0.96736 | 2.53221 | 1.550 | 1.240 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/SqueezeNet1_0_pretrained.pdparams) |
| SqueezeNet1_1 | 0.601 | 0.819 | 0.76032 | 1.877 | 0.690 | 1.230 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/SqueezeNet1_1_pretrained.pdparams) |
-| VGG11 | 0.693 | 0.891 | 3.90412 | 9.51147 | 15.090 | 132.850 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/VGG11_pretrained.pdparams) |
+| VGG11 | 0.693 | 0.891 | 3.90412 | 9.51147 | 15.090 | 132.850 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/VGG11_pretrained.pdparams) |
-| VGG13 | 0.700 | 0.894 | 4.64684 | 12.61558 | 22.480 | 133.030 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/VGG13_pretrained.pdparams) |
+| VGG13 | 0.700 | 0.894 | 4.64684 | 12.61558 | 22.480 | 133.030 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/VGG13_pretrained.pdparams) |
-| VGG16 | 0.720 | 0.907 | 5.61769 | 16.40064 | 30.810 | 138.340 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/VGG16_pretrained.pdparams) |
+| VGG16 | 0.720 | 0.907 | 5.61769 | 16.40064 | 30.810 | 138.340 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/VGG16_pretrained.pdparams) |
-| VGG19 | 0.726 | 0.909 | 6.65221 | 20.4334 | 39.130 | 143.650 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/VGG19_pretrained.pdparams) |
+| VGG19 | 0.726 | 0.909 | 6.65221 | 20.4334 | 39.130 | 143.650 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/VGG19_pretrained.pdparams) |
| DarkNet53 | 0.780 | 0.941 | 4.10829 | 12.1714 | 18.580 | 41.600 | [Download](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DarkNet53_pretrained.pdparams) |
ppcls/arch/backbone/model_zoo/hrnet.py — deleted (file mode 100644 → 0), view file @ 1c55e08a
# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import paddle
from paddle import ParamAttr
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.nn import Conv2D, BatchNorm, Linear
from paddle.nn import AdaptiveAvgPool2D, MaxPool2D, AvgPool2D
from paddle.nn.initializer import Uniform

import math

from ppcls.utils.save_load import load_dygraph_pretrain, load_dygraph_pretrain_from_url

MODEL_URLS = {
    "HRNet_W18_C":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W18_C_pretrained.pdparams",
    "HRNet_W30_C":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W30_C_pretrained.pdparams",
    "HRNet_W32_C":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W32_C_pretrained.pdparams",
    "HRNet_W40_C":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W40_C_pretrained.pdparams",
    "HRNet_W44_C":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W44_C_pretrained.pdparams",
    "HRNet_W48_C":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W48_C_pretrained.pdparams",
    "HRNet_W64_C":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/HRNet_W64_C_pretrained.pdparams",
}

__all__ = list(MODEL_URLS.keys())
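`MODEL_URLS` doubles as the registry of exported names: `__all__` is derived from its keys, and the `load_dygraph_pretrain_from_url` helper imported above is presumably what consumed these URLs when `pretrained=True` (its exact signature lives in `ppcls.utils.save_load`, not shown here). A trivial usage sketch, assuming this module is importable:

```python
# List the HRNet variants this (now removed) module exposed and the
# checkpoint URL each name maps to.
for model_name in __all__:
    print(f"{model_name}: {MODEL_URLS[model_name]}")
```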
class ConvBNLayer(nn.Layer):
    def __init__(self,
                 num_channels,
                 num_filters,
                 filter_size,
                 stride=1,
                 groups=1,
                 act="relu",
                 name=None):
        super(ConvBNLayer, self).__init__()

        self._conv = Conv2D(
            in_channels=num_channels,
            out_channels=num_filters,
            kernel_size=filter_size,
            stride=stride,
            padding=(filter_size - 1) // 2,
            groups=groups,
            weight_attr=ParamAttr(name=name + "_weights"),
            bias_attr=False)
        bn_name = name + '_bn'
        self._batch_norm = BatchNorm(
            num_filters,
            act=act,
            param_attr=ParamAttr(name=bn_name + '_scale'),
            bias_attr=ParamAttr(bn_name + '_offset'),
            moving_mean_name=bn_name + '_mean',
            moving_variance_name=bn_name + '_variance')

    def forward(self, input):
        y = self._conv(input)
        y = self._batch_norm(y)
        return y
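`ConvBNLayer` is the conv + BatchNorm (+ optional activation) unit everything else in this file is built from; "same" padding comes from `(filter_size - 1) // 2`. A quick shape check, as an illustrative snippet run against this module (the layer name `"demo"` is arbitrary):

```python
import paddle

layer = ConvBNLayer(
    num_channels=3, num_filters=64, filter_size=3, stride=2, name="demo")
x = paddle.randn([1, 3, 224, 224])
y = layer(x)
print(y.shape)   # [1, 64, 112, 112]: the stride-2 conv halves H and W
```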
class Layer1(nn.Layer):
    def __init__(self, num_channels, has_se=False, name=None):
        super(Layer1, self).__init__()

        self.bottleneck_block_list = []

        for i in range(4):
            bottleneck_block = self.add_sublayer(
                "bb_{}_{}".format(name, i + 1),
                BottleneckBlock(
                    num_channels=num_channels if i == 0 else 256,
                    num_filters=64,
                    has_se=has_se,
                    stride=1,
                    downsample=True if i == 0 else False,
                    name=name + '_' + str(i + 1)))
            self.bottleneck_block_list.append(bottleneck_block)

    def forward(self, input):
        conv = input
        for block_func in self.bottleneck_block_list:
            conv = block_func(conv)
        return conv
class TransitionLayer(nn.Layer):
    def __init__(self, in_channels, out_channels, name=None):
        super(TransitionLayer, self).__init__()

        num_in = len(in_channels)
        num_out = len(out_channels)
        out = []
        self.conv_bn_func_list = []
        for i in range(num_out):
            residual = None
            if i < num_in:
                if in_channels[i] != out_channels[i]:
                    residual = self.add_sublayer(
                        "transition_{}_layer_{}".format(name, i + 1),
                        ConvBNLayer(
                            num_channels=in_channels[i],
                            num_filters=out_channels[i],
                            filter_size=3,
                            name=name + '_layer_' + str(i + 1)))
            else:
                residual = self.add_sublayer(
                    "transition_{}_layer_{}".format(name, i + 1),
                    ConvBNLayer(
                        num_channels=in_channels[-1],
                        num_filters=out_channels[i],
                        filter_size=3,
                        stride=2,
                        name=name + '_layer_' + str(i + 1)))
            self.conv_bn_func_list.append(residual)

    def forward(self, input):
        outs = []
        for idx, conv_bn_func in enumerate(self.conv_bn_func_list):
            if conv_bn_func is None:
                outs.append(input[idx])
            else:
                if idx < len(input):
                    outs.append(conv_bn_func(input[idx]))
                else:
                    outs.append(conv_bn_func(input[-1]))
        return outs
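`TransitionLayer` is what widens the resolution pyramid between stages: existing branches are passed through (or re-projected when the channel counts differ) and an extra stride-2 branch is appended from the lowest-resolution input. A shape sketch of the stem-to-stage-2 transition that `HRNet.__init__` builds later (`in_channels=[256]`, `out_channels=channels_2`); the name `"tr_demo"` is arbitrary:

```python
import paddle

tr = TransitionLayer(in_channels=[256], out_channels=[18, 36], name="tr_demo")
feat = [paddle.randn([1, 256, 56, 56])]
outs = tr(feat)
print([o.shape for o in outs])
# [[1, 18, 56, 56], [1, 36, 28, 28]]: one same-resolution branch, one new stride-2 branch
```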
class Branches(nn.Layer):
    def __init__(self,
                 block_num,
                 in_channels,
                 out_channels,
                 has_se=False,
                 name=None):
        super(Branches, self).__init__()

        self.basic_block_list = []

        for i in range(len(out_channels)):
            self.basic_block_list.append([])
            for j in range(block_num):
                in_ch = in_channels[i] if j == 0 else out_channels[i]
                basic_block_func = self.add_sublayer(
                    "bb_{}_branch_layer_{}_{}".format(name, i + 1, j + 1),
                    BasicBlock(
                        num_channels=in_ch,
                        num_filters=out_channels[i],
                        has_se=has_se,
                        name=name + '_branch_layer_' + str(i + 1) + '_' +
                        str(j + 1)))
                self.basic_block_list[i].append(basic_block_func)

    def forward(self, inputs):
        outs = []
        for idx, input in enumerate(inputs):
            conv = input
            basic_block_list = self.basic_block_list[idx]
            for basic_block_func in basic_block_list:
                conv = basic_block_func(conv)
            outs.append(conv)
        return outs
class BottleneckBlock(nn.Layer):
    def __init__(self,
                 num_channels,
                 num_filters,
                 has_se,
                 stride=1,
                 downsample=False,
                 name=None):
        super(BottleneckBlock, self).__init__()

        self.has_se = has_se
        self.downsample = downsample

        self.conv1 = ConvBNLayer(
            num_channels=num_channels,
            num_filters=num_filters,
            filter_size=1,
            act="relu",
            name=name + "_conv1", )
        self.conv2 = ConvBNLayer(
            num_channels=num_filters,
            num_filters=num_filters,
            filter_size=3,
            stride=stride,
            act="relu",
            name=name + "_conv2")
        self.conv3 = ConvBNLayer(
            num_channels=num_filters,
            num_filters=num_filters * 4,
            filter_size=1,
            act=None,
            name=name + "_conv3")

        if self.downsample:
            self.conv_down = ConvBNLayer(
                num_channels=num_channels,
                num_filters=num_filters * 4,
                filter_size=1,
                act=None,
                name=name + "_downsample")

        if self.has_se:
            self.se = SELayer(
                num_channels=num_filters * 4,
                num_filters=num_filters * 4,
                reduction_ratio=16,
                name='fc' + name)

    def forward(self, input):
        residual = input
        conv1 = self.conv1(input)
        conv2 = self.conv2(conv1)
        conv3 = self.conv3(conv2)

        if self.downsample:
            residual = self.conv_down(input)

        if self.has_se:
            conv3 = self.se(conv3)

        y = paddle.add(x=residual, y=conv3)
        y = F.relu(y)
        return y
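`BottleneckBlock` follows the familiar 1x1 → 3x3 → 1x1 residual pattern with a 4x channel expansion, so `downsample=True` is needed whenever the input channel count differs from `num_filters * 4`. A shape sketch for the stage-1 configuration used in `Layer1` (the name `"bb_demo"` is arbitrary):

```python
import paddle

block = BottleneckBlock(
    num_channels=64, num_filters=64, has_se=False, downsample=True, name="bb_demo")
x = paddle.randn([1, 64, 56, 56])
y = block(x)
print(y.shape)   # [1, 256, 56, 56]: channels expanded 4x, resolution unchanged
```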
class BasicBlock(nn.Layer):
    def __init__(self,
                 num_channels,
                 num_filters,
                 stride=1,
                 has_se=False,
                 downsample=False,
                 name=None):
        super(BasicBlock, self).__init__()

        self.has_se = has_se
        self.downsample = downsample

        self.conv1 = ConvBNLayer(
            num_channels=num_channels,
            num_filters=num_filters,
            filter_size=3,
            stride=stride,
            act="relu",
            name=name + "_conv1")
        self.conv2 = ConvBNLayer(
            num_channels=num_filters,
            num_filters=num_filters,
            filter_size=3,
            stride=1,
            act=None,
            name=name + "_conv2")

        if self.downsample:
            self.conv_down = ConvBNLayer(
                num_channels=num_channels,
                num_filters=num_filters * 4,
                filter_size=1,
                act="relu",
                name=name + "_downsample")

        if self.has_se:
            self.se = SELayer(
                num_channels=num_filters,
                num_filters=num_filters,
                reduction_ratio=16,
                name='fc' + name)

    def forward(self, input):
        residual = input
        conv1 = self.conv1(input)
        conv2 = self.conv2(conv1)

        if self.downsample:
            residual = self.conv_down(input)

        if self.has_se:
            conv2 = self.se(conv2)

        y = paddle.add(x=residual, y=conv2)
        y = F.relu(y)
        return y
class SELayer(nn.Layer):
    def __init__(self, num_channels, num_filters, reduction_ratio, name=None):
        super(SELayer, self).__init__()

        self.pool2d_gap = AdaptiveAvgPool2D(1)

        self._num_channels = num_channels

        med_ch = int(num_channels / reduction_ratio)
        stdv = 1.0 / math.sqrt(num_channels * 1.0)
        self.squeeze = Linear(
            num_channels,
            med_ch,
            weight_attr=ParamAttr(
                initializer=Uniform(-stdv, stdv), name=name + "_sqz_weights"),
            bias_attr=ParamAttr(name=name + '_sqz_offset'))

        stdv = 1.0 / math.sqrt(med_ch * 1.0)
        self.excitation = Linear(
            med_ch,
            num_filters,
            weight_attr=ParamAttr(
                initializer=Uniform(-stdv, stdv), name=name + "_exc_weights"),
            bias_attr=ParamAttr(name=name + '_exc_offset'))

    def forward(self, input):
        pool = self.pool2d_gap(input)
        pool = paddle.squeeze(pool, axis=[2, 3])
        squeeze = self.squeeze(pool)
        squeeze = F.relu(squeeze)
        excitation = self.excitation(squeeze)
        excitation = F.sigmoid(excitation)
        excitation = paddle.unsqueeze(excitation, axis=[2, 3])
        out = input * excitation
        return out
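`SELayer` is a standard squeeze-and-excitation gate: global average pooling, a bottleneck pair of `Linear` layers whose width is set by `reduction_ratio`, and a per-channel sigmoid scale multiplied back onto the input. Illustrative usage (arbitrary name and shapes):

```python
import paddle

se = SELayer(num_channels=256, num_filters=256, reduction_ratio=16, name="se_demo")
x = paddle.randn([1, 256, 56, 56])
y = se(x)
print(y.shape)   # [1, 256, 56, 56]: same shape, each channel rescaled by a factor in (0, 1)
```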
class Stage(nn.Layer):
    def __init__(self,
                 num_channels,
                 num_modules,
                 num_filters,
                 has_se=False,
                 multi_scale_output=True,
                 name=None):
        super(Stage, self).__init__()

        self._num_modules = num_modules

        self.stage_func_list = []
        for i in range(num_modules):
            if i == num_modules - 1 and not multi_scale_output:
                stage_func = self.add_sublayer(
                    "stage_{}_{}".format(name, i + 1),
                    HighResolutionModule(
                        num_channels=num_channels,
                        num_filters=num_filters,
                        has_se=has_se,
                        multi_scale_output=False,
                        name=name + '_' + str(i + 1)))
            else:
                stage_func = self.add_sublayer(
                    "stage_{}_{}".format(name, i + 1),
                    HighResolutionModule(
                        num_channels=num_channels,
                        num_filters=num_filters,
                        has_se=has_se,
                        name=name + '_' + str(i + 1)))

            self.stage_func_list.append(stage_func)

    def forward(self, input):
        out = input
        for idx in range(self._num_modules):
            out = self.stage_func_list[idx](out)
        return out
class HighResolutionModule(nn.Layer):
    def __init__(self,
                 num_channels,
                 num_filters,
                 has_se=False,
                 multi_scale_output=True,
                 name=None):
        super(HighResolutionModule, self).__init__()

        self.branches_func = Branches(
            block_num=4,
            in_channels=num_channels,
            out_channels=num_filters,
            has_se=has_se,
            name=name)

        self.fuse_func = FuseLayers(
            in_channels=num_filters,
            out_channels=num_filters,
            multi_scale_output=multi_scale_output,
            name=name)

    def forward(self, input):
        out = self.branches_func(input)
        out = self.fuse_func(out)
        return out
class FuseLayers(nn.Layer):
    def __init__(self,
                 in_channels,
                 out_channels,
                 multi_scale_output=True,
                 name=None):
        super(FuseLayers, self).__init__()

        self._actual_ch = len(in_channels) if multi_scale_output else 1
        self._in_channels = in_channels

        self.residual_func_list = []
        for i in range(self._actual_ch):
            for j in range(len(in_channels)):
                residual_func = None
                if j > i:
                    residual_func = self.add_sublayer(
                        "residual_{}_layer_{}_{}".format(name, i + 1, j + 1),
                        ConvBNLayer(
                            num_channels=in_channels[j],
                            num_filters=out_channels[i],
                            filter_size=1,
                            stride=1,
                            act=None,
                            name=name + '_layer_' + str(i + 1) + '_' +
                            str(j + 1)))
                    self.residual_func_list.append(residual_func)
                elif j < i:
                    pre_num_filters = in_channels[j]
                    for k in range(i - j):
                        if k == i - j - 1:
                            residual_func = self.add_sublayer(
                                "residual_{}_layer_{}_{}_{}".format(
                                    name, i + 1, j + 1, k + 1),
                                ConvBNLayer(
                                    num_channels=pre_num_filters,
                                    num_filters=out_channels[i],
                                    filter_size=3,
                                    stride=2,
                                    act=None,
                                    name=name + '_layer_' + str(i + 1) + '_' +
                                    str(j + 1) + '_' + str(k + 1)))
                            pre_num_filters = out_channels[i]
                        else:
                            residual_func = self.add_sublayer(
                                "residual_{}_layer_{}_{}_{}".format(
                                    name, i + 1, j + 1, k + 1),
                                ConvBNLayer(
                                    num_channels=pre_num_filters,
                                    num_filters=out_channels[j],
                                    filter_size=3,
                                    stride=2,
                                    act="relu",
                                    name=name + '_layer_' + str(i + 1) + '_' +
                                    str(j + 1) + '_' + str(k + 1)))
                            pre_num_filters = out_channels[j]
                        self.residual_func_list.append(residual_func)

    def forward(self, input):
        outs = []
        residual_func_idx = 0
        for i in range(self._actual_ch):
            residual = input[i]
            for j in range(len(self._in_channels)):
                if j > i:
                    y = self.residual_func_list[residual_func_idx](input[j])
                    residual_func_idx += 1

                    y = F.upsample(y, scale_factor=2**(j - i), mode="nearest")
                    residual = paddle.add(x=residual, y=y)
                elif j < i:
                    y = input[j]
                    for k in range(i - j):
                        y = self.residual_func_list[residual_func_idx](y)
                        residual_func_idx += 1

                    residual = paddle.add(x=residual, y=y)

            residual = F.relu(residual)
            outs.append(residual)

        return outs
class
LastClsOut
(
nn
.
Layer
):
def
__init__
(
self
,
num_channel_list
,
has_se
,
num_filters_list
=
[
32
,
64
,
128
,
256
],
name
=
None
):
super
(
LastClsOut
,
self
).
__init__
()
self
.
func_list
=
[]
for
idx
in
range
(
len
(
num_channel_list
)):
func
=
self
.
add_sublayer
(
"conv_{}_conv_{}"
.
format
(
name
,
idx
+
1
),
BottleneckBlock
(
num_channels
=
num_channel_list
[
idx
],
num_filters
=
num_filters_list
[
idx
],
has_se
=
has_se
,
downsample
=
True
,
name
=
name
+
'conv_'
+
str
(
idx
+
1
)))
self
.
func_list
.
append
(
func
)
def
forward
(
self
,
inputs
):
outs
=
[]
for
idx
,
input
in
enumerate
(
inputs
):
out
=
self
.
func_list
[
idx
](
input
)
outs
.
append
(
out
)
return
outs
class
HRNet
(
nn
.
Layer
):
def
__init__
(
self
,
width
=
18
,
has_se
=
False
,
class_dim
=
1000
):
super
(
HRNet
,
self
).
__init__
()
self
.
width
=
width
self
.
has_se
=
has_se
self
.
channels
=
{
18
:
[[
18
,
36
],
[
18
,
36
,
72
],
[
18
,
36
,
72
,
144
]],
30
:
[[
30
,
60
],
[
30
,
60
,
120
],
[
30
,
60
,
120
,
240
]],
32
:
[[
32
,
64
],
[
32
,
64
,
128
],
[
32
,
64
,
128
,
256
]],
40
:
[[
40
,
80
],
[
40
,
80
,
160
],
[
40
,
80
,
160
,
320
]],
44
:
[[
44
,
88
],
[
44
,
88
,
176
],
[
44
,
88
,
176
,
352
]],
48
:
[[
48
,
96
],
[
48
,
96
,
192
],
[
48
,
96
,
192
,
384
]],
60
:
[[
60
,
120
],
[
60
,
120
,
240
],
[
60
,
120
,
240
,
480
]],
64
:
[[
64
,
128
],
[
64
,
128
,
256
],
[
64
,
128
,
256
,
512
]]
}
self
.
_class_dim
=
class_dim
channels_2
,
channels_3
,
channels_4
=
self
.
channels
[
width
]
num_modules_2
,
num_modules_3
,
num_modules_4
=
1
,
4
,
3
self
.
conv_layer1_1
=
ConvBNLayer
(
num_channels
=
3
,
num_filters
=
64
,
filter_size
=
3
,
stride
=
2
,
act
=
'relu'
,
name
=
"layer1_1"
)
self
.
conv_layer1_2
=
ConvBNLayer
(
num_channels
=
64
,
num_filters
=
64
,
filter_size
=
3
,
stride
=
2
,
act
=
'relu'
,
name
=
"layer1_2"
)
self
.
la1
=
Layer1
(
num_channels
=
64
,
has_se
=
has_se
,
name
=
"layer2"
)
self
.
tr1
=
TransitionLayer
(
in_channels
=
[
256
],
out_channels
=
channels_2
,
name
=
"tr1"
)
self
.
st2
=
Stage
(
num_channels
=
channels_2
,
num_modules
=
num_modules_2
,
num_filters
=
channels_2
,
has_se
=
self
.
has_se
,
name
=
"st2"
)
self
.
tr2
=
TransitionLayer
(
in_channels
=
channels_2
,
out_channels
=
channels_3
,
name
=
"tr2"
)
self
.
st3
=
Stage
(
num_channels
=
channels_3
,
num_modules
=
num_modules_3
,
num_filters
=
channels_3
,
has_se
=
self
.
has_se
,
name
=
"st3"
)
self
.
tr3
=
TransitionLayer
(
in_channels
=
channels_3
,
out_channels
=
channels_4
,
name
=
"tr3"
)
self
.
st4
=
Stage
(
num_channels
=
channels_4
,
num_modules
=
num_modules_4
,
num_filters
=
channels_4
,
has_se
=
self
.
has_se
,
name
=
"st4"
)
# classification
num_filters_list
=
[
32
,
64
,
128
,
256
]
self
.
last_cls
=
LastClsOut
(
num_channel_list
=
channels_4
,
has_se
=
self
.
has_se
,
num_filters_list
=
num_filters_list
,
name
=
"cls_head"
,
)
last_num_filters
=
[
256
,
512
,
1024
]
self
.
cls_head_conv_list
=
[]
for
idx
in
range
(
3
):
self
.
cls_head_conv_list
.
append
(
self
.
add_sublayer
(
"cls_head_add{}"
.
format
(
idx
+
1
),
ConvBNLayer
(
num_channels
=
num_filters_list
[
idx
]
*
4
,
num_filters
=
last_num_filters
[
idx
],
filter_size
=
3
,
stride
=
2
,
name
=
"cls_head_add"
+
str
(
idx
+
1
))))
self
.
conv_last
=
ConvBNLayer
(
num_channels
=
1024
,
num_filters
=
2048
,
filter_size
=
1
,
stride
=
1
,
name
=
"cls_head_last_conv"
)
self
.
pool2d_avg
=
AdaptiveAvgPool2D
(
1
)
stdv
=
1.0
/
math
.
sqrt
(
2048
*
1.0
)
self
.
out
=
Linear
(
2048
,
class_dim
,
weight_attr
=
ParamAttr
(
initializer
=
Uniform
(
-
stdv
,
stdv
),
name
=
"fc_weights"
),
bias_attr
=
ParamAttr
(
name
=
"fc_offset"
))
def
forward
(
self
,
input
):
conv1
=
self
.
conv_layer1_1
(
input
)
conv2
=
self
.
conv_layer1_2
(
conv1
)
la1
=
self
.
la1
(
conv2
)
tr1
=
self
.
tr1
([
la1
])
st2
=
self
.
st2
(
tr1
)
tr2
=
self
.
tr2
(
st2
)
st3
=
self
.
st3
(
tr2
)
tr3
=
self
.
tr3
(
st3
)
st4
=
self
.
st4
(
tr3
)
last_cls
=
self
.
last_cls
(
st4
)
y
=
last_cls
[
0
]
for
idx
in
range
(
3
):
y
=
paddle
.
add
(
last_cls
[
idx
+
1
],
self
.
cls_head_conv_list
[
idx
](
y
))
y
=
self
.
conv_last
(
y
)
y
=
self
.
pool2d_avg
(
y
)
y
=
paddle
.
reshape
(
y
,
shape
=
[
-
1
,
y
.
shape
[
1
]])
y
=
self
.
out
(
y
)
return
y
def
_load_pretrained
(
pretrained
,
model
,
model_url
,
use_ssld
=
False
):
if
pretrained
is
False
:
pass
elif
pretrained
is
True
:
load_dygraph_pretrain_from_url
(
model
,
model_url
,
use_ssld
=
use_ssld
)
elif
isinstance
(
pretrained
,
str
):
load_dygraph_pretrain
(
model
,
pretrained
)
else
:
raise
RuntimeError
(
"pretrained type is not available. Please use `string` or `boolean` type."
)
def
HRNet_W18_C
(
pretrained
=
False
,
use_ssld
=
False
,
**
kwarg
):
model
=
HRNet
(
width
=
18
,
**
kwarg
)
_load_pretrained
(
pretrained
,
model
,
MODEL_URLS
[
"HRNet_W18_C"
],
use_ssld
=
use_ssld
)
return
model
def
HRNet_W30_C
(
pretrained
=
False
,
use_ssld
=
False
,
**
kwarg
):
model
=
HRNet
(
width
=
30
,
**
kwarg
)
_load_pretrained
(
pretrained
,
model
,
MODEL_URLS
[
"HRNet_W30_C"
],
use_ssld
=
use_ssld
)
return
model
def
HRNet_W32_C
(
pretrained
=
False
,
use_ssld
=
False
,
**
kwarg
):
model
=
HRNet
(
width
=
32
,
**
kwarg
)
_load_pretrained
(
pretrained
,
model
,
MODEL_URLS
[
"HRNet_W32_C"
],
use_ssld
=
use_ssld
)
return
model
def
HRNet_W40_C
(
pretrained
=
False
,
use_ssld
=
False
,
**
kwarg
):
model
=
HRNet
(
width
=
40
,
**
kwarg
)
_load_pretrained
(
pretrained
,
model
,
MODEL_URLS
[
"HRNet_W40_C"
],
use_ssld
=
use_ssld
)
return
model
def
HRNet_W44_C
(
pretrained
=
False
,
use_ssld
=
False
,
**
kwarg
):
model
=
HRNet
(
width
=
44
,
**
kwarg
)
_load_pretrained
(
pretrained
,
model
,
MODEL_URLS
[
"HRNet_W44_C"
],
use_ssld
=
use_ssld
)
return
model
def
HRNet_W48_C
(
pretrained
=
False
,
use_ssld
=
False
,
**
kwarg
):
model
=
HRNet
(
width
=
48
,
**
kwarg
)
_load_pretrained
(
pretrained
,
model
,
MODEL_URLS
[
"HRNet_W48_C"
],
use_ssld
=
use_ssld
)
return
model
def
HRNet_W64_C
(
pretrained
=
False
,
use_ssld
=
False
,
**
kwarg
):
model
=
HRNet
(
width
=
64
,
**
kwarg
)
_load_pretrained
(
pretrained
,
model
,
MODEL_URLS
[
"HRNet_W64_C"
],
use_ssld
=
use_ssld
)
return
model
def SE_HRNet_W64_C(pretrained=False, use_ssld=False, **kwarg):
    # enable squeeze-and-excitation blocks for the SE variant
    model = HRNet(width=64, has_se=True, **kwarg)
    _load_pretrained(pretrained, model, MODEL_URLS["SE_HRNet_W64_C"],
                     use_ssld=use_ssld)
    return model
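
A minimal usage sketch for the factory functions above (an illustration only, assuming the pre-commit module path, since this file is removed by this change, and using a random tensor in place of a real preprocessed image):

    import paddle
    from ppcls.arch.backbone.model_zoo.hrnet import HRNet_W18_C

    model = HRNet_W18_C(pretrained=False)  # pretrained=True downloads the weights from MODEL_URLS
    x = paddle.rand([1, 3, 224, 224])      # NCHW input at ImageNet-style resolution
    logits = model(x)                      # shape [1, 1000] with the default class_dim
    print(logits.shape)

The wider variants (W30 to W64) only change the `width` passed to `HRNet`, which selects a row of the `channels` table.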
ppcls/arch/backbone/model_zoo/inception_v3.py (deleted, file mode 100644 → 0)
# copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import paddle
from paddle import ParamAttr
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.nn import Conv2D, BatchNorm, Linear, Dropout
from paddle.nn import AdaptiveAvgPool2D, MaxPool2D, AvgPool2D
from paddle.nn.initializer import Uniform
import math

from ppcls.utils.save_load import load_dygraph_pretrain, load_dygraph_pretrain_from_url

MODEL_URLS = {
    "InceptionV3":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/InceptionV3_pretrained.pdparams"
}

__all__ = list(MODEL_URLS.keys())


class ConvBNLayer(nn.Layer):
    def __init__(self, num_channels, num_filters, filter_size, stride=1,
                 padding=0, groups=1, act="relu", name=None):
        super(ConvBNLayer, self).__init__()

        self.conv = Conv2D(
            in_channels=num_channels, out_channels=num_filters,
            kernel_size=filter_size, stride=stride, padding=padding,
            groups=groups, weight_attr=ParamAttr(name=name + "_weights"),
            bias_attr=False)
        self.batch_norm = BatchNorm(
            num_filters, act=act,
            param_attr=ParamAttr(name=name + "_bn_scale"),
            bias_attr=ParamAttr(name=name + "_bn_offset"),
            moving_mean_name=name + "_bn_mean",
            moving_variance_name=name + "_bn_variance")

    def forward(self, inputs):
        y = self.conv(inputs)
        y = self.batch_norm(y)
        return y


class InceptionStem(nn.Layer):
    def __init__(self):
        super(InceptionStem, self).__init__()
        self.conv_1a_3x3 = ConvBNLayer(
            num_channels=3, num_filters=32, filter_size=3, stride=2,
            act="relu", name="conv_1a_3x3")
        self.conv_2a_3x3 = ConvBNLayer(
            num_channels=32, num_filters=32, filter_size=3, stride=1,
            act="relu", name="conv_2a_3x3")
        self.conv_2b_3x3 = ConvBNLayer(
            num_channels=32, num_filters=64, filter_size=3, padding=1,
            act="relu", name="conv_2b_3x3")
        self.maxpool = MaxPool2D(kernel_size=3, stride=2, padding=0)
        self.conv_3b_1x1 = ConvBNLayer(
            num_channels=64, num_filters=80, filter_size=1,
            act="relu", name="conv_3b_1x1")
        self.conv_4a_3x3 = ConvBNLayer(
            num_channels=80, num_filters=192, filter_size=3,
            act="relu", name="conv_4a_3x3")

    def forward(self, x):
        y = self.conv_1a_3x3(x)
        y = self.conv_2a_3x3(y)
        y = self.conv_2b_3x3(y)
        y = self.maxpool(y)
        y = self.conv_3b_1x1(y)
        y = self.conv_4a_3x3(y)
        y = self.maxpool(y)
        return y


class InceptionA(nn.Layer):
    def __init__(self, num_channels, pool_features, name=None):
        super(InceptionA, self).__init__()
        self.branch1x1 = ConvBNLayer(
            num_channels=num_channels, num_filters=64, filter_size=1,
            act="relu", name="inception_a_branch1x1_" + name)
        self.branch5x5_1 = ConvBNLayer(
            num_channels=num_channels, num_filters=48, filter_size=1,
            act="relu", name="inception_a_branch5x5_1_" + name)
        self.branch5x5_2 = ConvBNLayer(
            num_channels=48, num_filters=64, filter_size=5, padding=2,
            act="relu", name="inception_a_branch5x5_2_" + name)
        self.branch3x3dbl_1 = ConvBNLayer(
            num_channels=num_channels, num_filters=64, filter_size=1,
            act="relu", name="inception_a_branch3x3dbl_1_" + name)
        self.branch3x3dbl_2 = ConvBNLayer(
            num_channels=64, num_filters=96, filter_size=3, padding=1,
            act="relu", name="inception_a_branch3x3dbl_2_" + name)
        self.branch3x3dbl_3 = ConvBNLayer(
            num_channels=96, num_filters=96, filter_size=3, padding=1,
            act="relu", name="inception_a_branch3x3dbl_3_" + name)
        self.branch_pool = AvgPool2D(
            kernel_size=3, stride=1, padding=1, exclusive=False)
        self.branch_pool_conv = ConvBNLayer(
            num_channels=num_channels, num_filters=pool_features,
            filter_size=1, act="relu",
            name="inception_a_branch_pool_" + name)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)
        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)
        branch3x3dbl = self.branch3x3dbl_1(x)
        branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
        branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)
        branch_pool = self.branch_pool(x)
        branch_pool = self.branch_pool_conv(branch_pool)
        outputs = paddle.concat(
            [branch1x1, branch5x5, branch3x3dbl, branch_pool], axis=1)
        return outputs


class InceptionB(nn.Layer):
    def __init__(self, num_channels, name=None):
        super(InceptionB, self).__init__()
        self.branch3x3 = ConvBNLayer(
            num_channels=num_channels, num_filters=384, filter_size=3,
            stride=2, act="relu", name="inception_b_branch3x3_" + name)
        self.branch3x3dbl_1 = ConvBNLayer(
            num_channels=num_channels, num_filters=64, filter_size=1,
            act="relu", name="inception_b_branch3x3dbl_1_" + name)
        self.branch3x3dbl_2 = ConvBNLayer(
            num_channels=64, num_filters=96, filter_size=3, padding=1,
            act="relu", name="inception_b_branch3x3dbl_2_" + name)
        self.branch3x3dbl_3 = ConvBNLayer(
            num_channels=96, num_filters=96, filter_size=3, stride=2,
            act="relu", name="inception_b_branch3x3dbl_3_" + name)
        self.branch_pool = MaxPool2D(kernel_size=3, stride=2)

    def forward(self, x):
        branch3x3 = self.branch3x3(x)
        branch3x3dbl = self.branch3x3dbl_1(x)
        branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
        branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)
        branch_pool = self.branch_pool(x)
        outputs = paddle.concat(
            [branch3x3, branch3x3dbl, branch_pool], axis=1)
        return outputs


class InceptionC(nn.Layer):
    def __init__(self, num_channels, channels_7x7, name=None):
        super(InceptionC, self).__init__()
        self.branch1x1 = ConvBNLayer(
            num_channels=num_channels, num_filters=192, filter_size=1,
            act="relu", name="inception_c_branch1x1_" + name)
        self.branch7x7_1 = ConvBNLayer(
            num_channels=num_channels, num_filters=channels_7x7,
            filter_size=1, stride=1, act="relu",
            name="inception_c_branch7x7_1_" + name)
        self.branch7x7_2 = ConvBNLayer(
            num_channels=channels_7x7, num_filters=channels_7x7,
            filter_size=(1, 7), stride=1, padding=(0, 3), act="relu",
            name="inception_c_branch7x7_2_" + name)
        self.branch7x7_3 = ConvBNLayer(
            num_channels=channels_7x7, num_filters=192,
            filter_size=(7, 1), stride=1, padding=(3, 0), act="relu",
            name="inception_c_branch7x7_3_" + name)
        self.branch7x7dbl_1 = ConvBNLayer(
            num_channels=num_channels, num_filters=channels_7x7,
            filter_size=1, act="relu",
            name="inception_c_branch7x7dbl_1_" + name)
        self.branch7x7dbl_2 = ConvBNLayer(
            num_channels=channels_7x7, num_filters=channels_7x7,
            filter_size=(7, 1), padding=(3, 0), act="relu",
            name="inception_c_branch7x7dbl_2_" + name)
        self.branch7x7dbl_3 = ConvBNLayer(
            num_channels=channels_7x7, num_filters=channels_7x7,
            filter_size=(1, 7), padding=(0, 3), act="relu",
            name="inception_c_branch7x7dbl_3_" + name)
        self.branch7x7dbl_4 = ConvBNLayer(
            num_channels=channels_7x7, num_filters=channels_7x7,
            filter_size=(7, 1), padding=(3, 0), act="relu",
            name="inception_c_branch7x7dbl_4_" + name)
        self.branch7x7dbl_5 = ConvBNLayer(
            num_channels=channels_7x7, num_filters=192,
            filter_size=(1, 7), padding=(0, 3), act="relu",
            name="inception_c_branch7x7dbl_5_" + name)
        self.branch_pool = AvgPool2D(
            kernel_size=3, stride=1, padding=1, exclusive=False)
        self.branch_pool_conv = ConvBNLayer(
            num_channels=num_channels, num_filters=192, filter_size=1,
            act="relu", name="inception_c_branch_pool_" + name)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)
        branch7x7 = self.branch7x7_1(x)
        branch7x7 = self.branch7x7_2(branch7x7)
        branch7x7 = self.branch7x7_3(branch7x7)
        branch7x7dbl = self.branch7x7dbl_1(x)
        branch7x7dbl = self.branch7x7dbl_2(branch7x7dbl)
        branch7x7dbl = self.branch7x7dbl_3(branch7x7dbl)
        branch7x7dbl = self.branch7x7dbl_4(branch7x7dbl)
        branch7x7dbl = self.branch7x7dbl_5(branch7x7dbl)
        branch_pool = self.branch_pool(x)
        branch_pool = self.branch_pool_conv(branch_pool)
        outputs = paddle.concat(
            [branch1x1, branch7x7, branch7x7dbl, branch_pool], axis=1)
        return outputs


class InceptionD(nn.Layer):
    def __init__(self, num_channels, name=None):
        super(InceptionD, self).__init__()
        self.branch3x3_1 = ConvBNLayer(
            num_channels=num_channels, num_filters=192, filter_size=1,
            act="relu", name="inception_d_branch3x3_1_" + name)
        self.branch3x3_2 = ConvBNLayer(
            num_channels=192, num_filters=320, filter_size=3, stride=2,
            act="relu", name="inception_d_branch3x3_2_" + name)
        self.branch7x7x3_1 = ConvBNLayer(
            num_channels=num_channels, num_filters=192, filter_size=1,
            act="relu", name="inception_d_branch7x7x3_1_" + name)
        self.branch7x7x3_2 = ConvBNLayer(
            num_channels=192, num_filters=192, filter_size=(1, 7),
            padding=(0, 3), act="relu",
            name="inception_d_branch7x7x3_2_" + name)
        self.branch7x7x3_3 = ConvBNLayer(
            num_channels=192, num_filters=192, filter_size=(7, 1),
            padding=(3, 0), act="relu",
            name="inception_d_branch7x7x3_3_" + name)
        self.branch7x7x3_4 = ConvBNLayer(
            num_channels=192, num_filters=192, filter_size=3, stride=2,
            act="relu", name="inception_d_branch7x7x3_4_" + name)
        self.branch_pool = MaxPool2D(kernel_size=3, stride=2)

    def forward(self, x):
        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)
        branch7x7x3 = self.branch7x7x3_1(x)
        branch7x7x3 = self.branch7x7x3_2(branch7x7x3)
        branch7x7x3 = self.branch7x7x3_3(branch7x7x3)
        branch7x7x3 = self.branch7x7x3_4(branch7x7x3)
        branch_pool = self.branch_pool(x)
        outputs = paddle.concat(
            [branch3x3, branch7x7x3, branch_pool], axis=1)
        return outputs


class InceptionE(nn.Layer):
    def __init__(self, num_channels, name=None):
        super(InceptionE, self).__init__()
        self.branch1x1 = ConvBNLayer(
            num_channels=num_channels, num_filters=320, filter_size=1,
            act="relu", name="inception_e_branch1x1_" + name)
        self.branch3x3_1 = ConvBNLayer(
            num_channels=num_channels, num_filters=384, filter_size=1,
            act="relu", name="inception_e_branch3x3_1_" + name)
        self.branch3x3_2a = ConvBNLayer(
            num_channels=384, num_filters=384, filter_size=(1, 3),
            padding=(0, 1), act="relu",
            name="inception_e_branch3x3_2a_" + name)
        self.branch3x3_2b = ConvBNLayer(
            num_channels=384, num_filters=384, filter_size=(3, 1),
            padding=(1, 0), act="relu",
            name="inception_e_branch3x3_2b_" + name)
        self.branch3x3dbl_1 = ConvBNLayer(
            num_channels=num_channels, num_filters=448, filter_size=1,
            act="relu", name="inception_e_branch3x3dbl_1_" + name)
        self.branch3x3dbl_2 = ConvBNLayer(
            num_channels=448, num_filters=384, filter_size=3, padding=1,
            act="relu", name="inception_e_branch3x3dbl_2_" + name)
        self.branch3x3dbl_3a = ConvBNLayer(
            num_channels=384, num_filters=384, filter_size=(1, 3),
            padding=(0, 1), act="relu",
            name="inception_e_branch3x3dbl_3a_" + name)
        self.branch3x3dbl_3b = ConvBNLayer(
            num_channels=384, num_filters=384, filter_size=(3, 1),
            padding=(1, 0), act="relu",
            name="inception_e_branch3x3dbl_3b_" + name)
        self.branch_pool = AvgPool2D(
            kernel_size=3, stride=1, padding=1, exclusive=False)
        self.branch_pool_conv = ConvBNLayer(
            num_channels=num_channels, num_filters=192, filter_size=1,
            act="relu", name="inception_e_branch_pool_" + name)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)
        branch3x3 = self.branch3x3_1(x)
        branch3x3 = [
            self.branch3x3_2a(branch3x3),
            self.branch3x3_2b(branch3x3),
        ]
        branch3x3 = paddle.concat(branch3x3, axis=1)
        branch3x3dbl = self.branch3x3dbl_1(x)
        branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
        branch3x3dbl = [
            self.branch3x3dbl_3a(branch3x3dbl),
            self.branch3x3dbl_3b(branch3x3dbl),
        ]
        branch3x3dbl = paddle.concat(branch3x3dbl, axis=1)
        branch_pool = self.branch_pool(x)
        branch_pool = self.branch_pool_conv(branch_pool)
        outputs = paddle.concat(
            [branch1x1, branch3x3, branch3x3dbl, branch_pool], axis=1)
        return outputs


class Inception_V3(nn.Layer):
    def __init__(self, class_dim=1000):
        super(Inception_V3, self).__init__()
        self.inception_a_list = [[192, 256, 288], [32, 64, 64]]
        self.inception_c_list = [[768, 768, 768, 768], [128, 160, 160, 192]]

        self.inception_stem = InceptionStem()
        self.inception_block_list = []
        for i in range(len(self.inception_a_list[0])):
            inception_a = self.add_sublayer(
                "inception_a_" + str(i + 1),
                InceptionA(
                    self.inception_a_list[0][i],
                    self.inception_a_list[1][i],
                    name=str(i + 1)))
            self.inception_block_list.append(inception_a)

        inception_b = self.add_sublayer("nception_b_1",
                                        InceptionB(288, name="1"))
        self.inception_block_list.append(inception_b)

        for i in range(len(self.inception_c_list[0])):
            inception_c = self.add_sublayer(
                "inception_c_" + str(i + 1),
                InceptionC(
                    self.inception_c_list[0][i],
                    self.inception_c_list[1][i],
                    name=str(i + 1)))
            self.inception_block_list.append(inception_c)

        inception_d = self.add_sublayer("inception_d_1",
                                        InceptionD(768, name="1"))
        self.inception_block_list.append(inception_d)

        inception_e = self.add_sublayer("inception_e_1",
                                        InceptionE(1280, name="1"))
        self.inception_block_list.append(inception_e)
        inception_e = self.add_sublayer("inception_e_2",
                                        InceptionE(2048, name="2"))
        self.inception_block_list.append(inception_e)

        self.gap = AdaptiveAvgPool2D(1)
        self.drop = Dropout(p=0.2, mode="downscale_in_infer")
        stdv = 1.0 / math.sqrt(2048 * 1.0)
        self.out = Linear(
            2048, class_dim,
            weight_attr=ParamAttr(
                initializer=Uniform(-stdv, stdv), name="fc_weights"),
            bias_attr=ParamAttr(name="fc_offset"))

    def forward(self, x):
        y = self.inception_stem(x)
        for inception_block in self.inception_block_list:
            y = inception_block(y)
        y = self.gap(y)
        y = paddle.reshape(y, shape=[-1, 2048])
        y = self.drop(y)
        y = self.out(y)
        return y


def _load_pretrained(pretrained, model, model_url, use_ssld=False):
    if pretrained is False:
        pass
    elif pretrained is True:
        load_dygraph_pretrain_from_url(model, model_url, use_ssld=use_ssld)
    elif isinstance(pretrained, str):
        load_dygraph_pretrain(model, pretrained)
    else:
        raise RuntimeError(
            "pretrained type is not available. Please use `string` or `boolean` type."
        )


def InceptionV3(pretrained=False, use_ssld=False, **kwargs):
    model = Inception_V3(**kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["InceptionV3"],
                     use_ssld=use_ssld)
    return model
ppcls/arch/backbone/model_zoo/mobilenet_v1.py (deleted, file mode 100644 → 0)
# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import paddle
from paddle import ParamAttr
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.nn import Conv2D, BatchNorm, Linear, Dropout
from paddle.nn import AdaptiveAvgPool2D, MaxPool2D, AvgPool2D
from paddle.nn.initializer import KaimingNormal
import math

from ppcls.utils.save_load import load_dygraph_pretrain, load_dygraph_pretrain_from_url

MODEL_URLS = {
    "MobileNetV1_x0_25":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV1_x0_25_pretrained.pdparams",
    "MobileNetV1_x0_5":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV1_x0_5_pretrained.pdparams",
    "MobileNetV1_x0_75":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV1_x0_75_pretrained.pdparams",
    "MobileNetV1":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV1_pretrained.pdparams"
}

__all__ = list(MODEL_URLS.keys())


class ConvBNLayer(nn.Layer):
    def __init__(self, num_channels, filter_size, num_filters, stride,
                 padding, channels=None, num_groups=1, act='relu', name=None):
        super(ConvBNLayer, self).__init__()

        self._conv = Conv2D(
            in_channels=num_channels, out_channels=num_filters,
            kernel_size=filter_size, stride=stride, padding=padding,
            groups=num_groups,
            weight_attr=ParamAttr(
                initializer=KaimingNormal(), name=name + "_weights"),
            bias_attr=False)

        self._batch_norm = BatchNorm(
            num_filters, act=act,
            param_attr=ParamAttr(name + "_bn_scale"),
            bias_attr=ParamAttr(name + "_bn_offset"),
            moving_mean_name=name + "_bn_mean",
            moving_variance_name=name + "_bn_variance")

    def forward(self, inputs):
        y = self._conv(inputs)
        y = self._batch_norm(y)
        return y


class DepthwiseSeparable(nn.Layer):
    def __init__(self, num_channels, num_filters1, num_filters2, num_groups,
                 stride, scale, name=None):
        super(DepthwiseSeparable, self).__init__()

        self._depthwise_conv = ConvBNLayer(
            num_channels=num_channels,
            num_filters=int(num_filters1 * scale),
            filter_size=3, stride=stride, padding=1,
            num_groups=int(num_groups * scale), name=name + "_dw")

        self._pointwise_conv = ConvBNLayer(
            num_channels=int(num_filters1 * scale),
            filter_size=1,
            num_filters=int(num_filters2 * scale),
            stride=1, padding=0, name=name + "_sep")

    def forward(self, inputs):
        y = self._depthwise_conv(inputs)
        y = self._pointwise_conv(y)
        return y


class MobileNet(nn.Layer):
    def __init__(self, scale=1.0, class_dim=1000):
        super(MobileNet, self).__init__()
        self.scale = scale
        self.block_list = []

        self.conv1 = ConvBNLayer(
            num_channels=3, filter_size=3, channels=3,
            num_filters=int(32 * scale), stride=2, padding=1, name="conv1")

        conv2_1 = self.add_sublayer(
            "conv2_1",
            sublayer=DepthwiseSeparable(
                num_channels=int(32 * scale), num_filters1=32,
                num_filters2=64, num_groups=32, stride=1, scale=scale,
                name="conv2_1"))
        self.block_list.append(conv2_1)

        conv2_2 = self.add_sublayer(
            "conv2_2",
            sublayer=DepthwiseSeparable(
                num_channels=int(64 * scale), num_filters1=64,
                num_filters2=128, num_groups=64, stride=2, scale=scale,
                name="conv2_2"))
        self.block_list.append(conv2_2)

        conv3_1 = self.add_sublayer(
            "conv3_1",
            sublayer=DepthwiseSeparable(
                num_channels=int(128 * scale), num_filters1=128,
                num_filters2=128, num_groups=128, stride=1, scale=scale,
                name="conv3_1"))
        self.block_list.append(conv3_1)

        conv3_2 = self.add_sublayer(
            "conv3_2",
            sublayer=DepthwiseSeparable(
                num_channels=int(128 * scale), num_filters1=128,
                num_filters2=256, num_groups=128, stride=2, scale=scale,
                name="conv3_2"))
        self.block_list.append(conv3_2)

        conv4_1 = self.add_sublayer(
            "conv4_1",
            sublayer=DepthwiseSeparable(
                num_channels=int(256 * scale), num_filters1=256,
                num_filters2=256, num_groups=256, stride=1, scale=scale,
                name="conv4_1"))
        self.block_list.append(conv4_1)

        conv4_2 = self.add_sublayer(
            "conv4_2",
            sublayer=DepthwiseSeparable(
                num_channels=int(256 * scale), num_filters1=256,
                num_filters2=512, num_groups=256, stride=2, scale=scale,
                name="conv4_2"))
        self.block_list.append(conv4_2)

        for i in range(5):
            conv5 = self.add_sublayer(
                "conv5_" + str(i + 1),
                sublayer=DepthwiseSeparable(
                    num_channels=int(512 * scale), num_filters1=512,
                    num_filters2=512, num_groups=512, stride=1, scale=scale,
                    name="conv5_" + str(i + 1)))
            self.block_list.append(conv5)

        conv5_6 = self.add_sublayer(
            "conv5_6",
            sublayer=DepthwiseSeparable(
                num_channels=int(512 * scale), num_filters1=512,
                num_filters2=1024, num_groups=512, stride=2, scale=scale,
                name="conv5_6"))
        self.block_list.append(conv5_6)

        conv6 = self.add_sublayer(
            "conv6",
            sublayer=DepthwiseSeparable(
                num_channels=int(1024 * scale), num_filters1=1024,
                num_filters2=1024, num_groups=1024, stride=1, scale=scale,
                name="conv6"))
        self.block_list.append(conv6)

        self.pool2d_avg = AdaptiveAvgPool2D(1)

        self.out = Linear(
            int(1024 * scale), class_dim,
            weight_attr=ParamAttr(
                initializer=KaimingNormal(), name="fc7_weights"),
            bias_attr=ParamAttr(name="fc7_offset"))

    def forward(self, inputs):
        y = self.conv1(inputs)
        for block in self.block_list:
            y = block(y)
        y = self.pool2d_avg(y)
        y = paddle.flatten(y, start_axis=1, stop_axis=-1)
        y = self.out(y)
        return y


def _load_pretrained(pretrained, model, model_url, use_ssld=False):
    if pretrained is False:
        pass
    elif pretrained is True:
        load_dygraph_pretrain_from_url(model, model_url, use_ssld=use_ssld)
    elif isinstance(pretrained, str):
        load_dygraph_pretrain(model, pretrained)
    else:
        raise RuntimeError(
            "pretrained type is not available. Please use `string` or `boolean` type."
        )


def MobileNetV1_x0_25(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNet(scale=0.25, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV1_x0_25"],
                     use_ssld=use_ssld)
    return model


def MobileNetV1_x0_5(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNet(scale=0.5, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV1_x0_5"],
                     use_ssld=use_ssld)
    return model


def MobileNetV1_x0_75(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNet(scale=0.75, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV1_x0_75"],
                     use_ssld=use_ssld)
    return model


def MobileNetV1(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNet(scale=1.0, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV1"],
                     use_ssld=use_ssld)
    return model
ppcls/arch/backbone/model_zoo/mobilenet_v3.py (deleted, file mode 100644 → 0)
# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import paddle
from paddle import ParamAttr
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.nn.functional import hardswish, hardsigmoid
from paddle.nn import Conv2D, BatchNorm, Linear, Dropout
from paddle.nn import AdaptiveAvgPool2D, MaxPool2D, AvgPool2D
from paddle.regularizer import L2Decay
import math

from ppcls.utils.save_load import load_dygraph_pretrain, load_dygraph_pretrain_from_url

MODEL_URLS = {
    "MobileNetV3_small_x0_35":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x0_35_pretrained.pdparams",
    "MobileNetV3_small_x0_5":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x0_5_pretrained.pdparams",
    "MobileNetV3_small_x0_75":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x0_75_pretrained.pdparams",
    "MobileNetV3_small_x1_0":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x1_0_pretrained.pdparams",
    "MobileNetV3_small_x1_25":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_small_x1_25_pretrained.pdparams",
    "MobileNetV3_large_x0_35":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_35_pretrained.pdparams",
    "MobileNetV3_large_x0_5":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_5_pretrained.pdparams",
    "MobileNetV3_large_x0_75":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_75_pretrained.pdparams",
    "MobileNetV3_large_x1_0":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x1_0_pretrained.pdparams",
    "MobileNetV3_large_x1_25":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x1_25_pretrained.pdparams"
}

__all__ = list(MODEL_URLS.keys())


def make_divisible(v, divisor=8, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v
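

# A quick worked example of make_divisible (values assumed here only to
# illustrate the rounding rule; they are not special in the code):
#   make_divisible(16 * 0.35) -> max(8, int(5.6 + 4) // 8 * 8) = 8, and
#   8 >= 0.9 * 5.6, so the x0_35 models keep 8 channels for that layer.
#   make_divisible(960 * 1.25) -> 1200, already a multiple of 8, unchanged.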
class MobileNetV3(nn.Layer):
    def __init__(self, scale=1.0, model_name="small", dropout_prob=0.2,
                 class_dim=1000):
        super(MobileNetV3, self).__init__()

        inplanes = 16
        if model_name == "large":
            self.cfg = [
                # k, exp, c, se, nl, s,
                [3, 16, 16, False, "relu", 1],
                [3, 64, 24, False, "relu", 2],
                [3, 72, 24, False, "relu", 1],
                [5, 72, 40, True, "relu", 2],
                [5, 120, 40, True, "relu", 1],
                [5, 120, 40, True, "relu", 1],
                [3, 240, 80, False, "hardswish", 2],
                [3, 200, 80, False, "hardswish", 1],
                [3, 184, 80, False, "hardswish", 1],
                [3, 184, 80, False, "hardswish", 1],
                [3, 480, 112, True, "hardswish", 1],
                [3, 672, 112, True, "hardswish", 1],
                [5, 672, 160, True, "hardswish", 2],
                [5, 960, 160, True, "hardswish", 1],
                [5, 960, 160, True, "hardswish", 1],
            ]
            self.cls_ch_squeeze = 960
            self.cls_ch_expand = 1280
        elif model_name == "small":
            self.cfg = [
                # k, exp, c, se, nl, s,
                [3, 16, 16, True, "relu", 2],
                [3, 72, 24, False, "relu", 2],
                [3, 88, 24, False, "relu", 1],
                [5, 96, 40, True, "hardswish", 2],
                [5, 240, 40, True, "hardswish", 1],
                [5, 240, 40, True, "hardswish", 1],
                [5, 120, 48, True, "hardswish", 1],
                [5, 144, 48, True, "hardswish", 1],
                [5, 288, 96, True, "hardswish", 2],
                [5, 576, 96, True, "hardswish", 1],
                [5, 576, 96, True, "hardswish", 1],
            ]
            self.cls_ch_squeeze = 576
            self.cls_ch_expand = 1280
        else:
            raise NotImplementedError(
                "mode[{}_model] is not implemented!".format(model_name))

        self.conv1 = ConvBNLayer(
            in_c=3, out_c=make_divisible(inplanes * scale), filter_size=3,
            stride=2, padding=1, num_groups=1, if_act=True, act="hardswish",
            name="conv1")

        self.block_list = []
        i = 0
        inplanes = make_divisible(inplanes * scale)
        for (k, exp, c, se, nl, s) in self.cfg:
            block = self.add_sublayer(
                "conv" + str(i + 2),
                ResidualUnit(
                    in_c=inplanes,
                    mid_c=make_divisible(scale * exp),
                    out_c=make_divisible(scale * c),
                    filter_size=k, stride=s, use_se=se, act=nl,
                    name="conv" + str(i + 2)))
            self.block_list.append(block)
            inplanes = make_divisible(scale * c)
            i += 1

        self.last_second_conv = ConvBNLayer(
            in_c=inplanes,
            out_c=make_divisible(scale * self.cls_ch_squeeze),
            filter_size=1, stride=1, padding=0, num_groups=1, if_act=True,
            act="hardswish", name="conv_last")

        self.pool = AdaptiveAvgPool2D(1)

        self.last_conv = Conv2D(
            in_channels=make_divisible(scale * self.cls_ch_squeeze),
            out_channels=self.cls_ch_expand, kernel_size=1, stride=1,
            padding=0, weight_attr=ParamAttr(name="last_1x1_conv_weights"),
            bias_attr=False)

        self.dropout = Dropout(p=dropout_prob, mode="downscale_in_infer")

        self.out = Linear(
            self.cls_ch_expand, class_dim,
            weight_attr=ParamAttr("fc_weights"),
            bias_attr=ParamAttr(name="fc_offset"))

    def forward(self, inputs):
        x = self.conv1(inputs)

        for block in self.block_list:
            x = block(x)

        x = self.last_second_conv(x)
        x = self.pool(x)

        x = self.last_conv(x)
        x = hardswish(x)
        x = self.dropout(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.out(x)

        return x


class ConvBNLayer(nn.Layer):
    def __init__(self, in_c, out_c, filter_size, stride, padding,
                 num_groups=1, if_act=True, act=None, use_cudnn=True,
                 name=""):
        super(ConvBNLayer, self).__init__()
        self.if_act = if_act
        self.act = act
        self.conv = Conv2D(
            in_channels=in_c, out_channels=out_c, kernel_size=filter_size,
            stride=stride, padding=padding, groups=num_groups,
            weight_attr=ParamAttr(name=name + "_weights"), bias_attr=False)
        self.bn = BatchNorm(
            num_channels=out_c, act=None,
            param_attr=ParamAttr(
                name=name + "_bn_scale", regularizer=L2Decay(0.0)),
            bias_attr=ParamAttr(
                name=name + "_bn_offset", regularizer=L2Decay(0.0)),
            moving_mean_name=name + "_bn_mean",
            moving_variance_name=name + "_bn_variance")

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        if self.if_act:
            if self.act == "relu":
                x = F.relu(x)
            elif self.act == "hardswish":
                x = hardswish(x)
            else:
                print("The activation function is selected incorrectly.")
                exit()
        return x


class ResidualUnit(nn.Layer):
    def __init__(self, in_c, mid_c, out_c, filter_size, stride, use_se,
                 act=None, name=''):
        super(ResidualUnit, self).__init__()
        self.if_shortcut = stride == 1 and in_c == out_c
        self.if_se = use_se

        self.expand_conv = ConvBNLayer(
            in_c=in_c, out_c=mid_c, filter_size=1, stride=1, padding=0,
            if_act=True, act=act, name=name + "_expand")
        self.bottleneck_conv = ConvBNLayer(
            in_c=mid_c, out_c=mid_c, filter_size=filter_size, stride=stride,
            padding=int((filter_size - 1) // 2), num_groups=mid_c,
            if_act=True, act=act, name=name + "_depthwise")
        if self.if_se:
            self.mid_se = SEModule(mid_c, name=name + "_se")
        self.linear_conv = ConvBNLayer(
            in_c=mid_c, out_c=out_c, filter_size=1, stride=1, padding=0,
            if_act=False, act=None, name=name + "_linear")

    def forward(self, inputs):
        x = self.expand_conv(inputs)
        x = self.bottleneck_conv(x)
        if self.if_se:
            x = self.mid_se(x)
        x = self.linear_conv(x)
        if self.if_shortcut:
            x = paddle.add(inputs, x)
        return x


class SEModule(nn.Layer):
    def __init__(self, channel, reduction=4, name=""):
        super(SEModule, self).__init__()
        self.avg_pool = AdaptiveAvgPool2D(1)
        self.conv1 = Conv2D(
            in_channels=channel, out_channels=channel // reduction,
            kernel_size=1, stride=1, padding=0,
            weight_attr=ParamAttr(name=name + "_1_weights"),
            bias_attr=ParamAttr(name=name + "_1_offset"))
        self.conv2 = Conv2D(
            in_channels=channel // reduction, out_channels=channel,
            kernel_size=1, stride=1, padding=0,
            weight_attr=ParamAttr(name + "_2_weights"),
            bias_attr=ParamAttr(name=name + "_2_offset"))

    def forward(self, inputs):
        outputs = self.avg_pool(inputs)
        outputs = self.conv1(outputs)
        outputs = F.relu(outputs)
        outputs = self.conv2(outputs)
        outputs = hardsigmoid(outputs, slope=0.2, offset=0.5)
        return paddle.multiply(x=inputs, y=outputs)


def _load_pretrained(pretrained, model, model_url, use_ssld=False):
    if pretrained is False:
        pass
    elif pretrained is True:
        load_dygraph_pretrain_from_url(model, model_url, use_ssld=use_ssld)
    elif isinstance(pretrained, str):
        load_dygraph_pretrain(model, pretrained)
    else:
        raise RuntimeError(
            "pretrained type is not available. Please use `string` or `boolean` type."
        )


def MobileNetV3_small_x0_35(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNetV3(model_name="small", scale=0.35, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV3_small_x0_35"],
                     use_ssld=use_ssld)
    return model


def MobileNetV3_small_x0_5(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNetV3(model_name="small", scale=0.5, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV3_small_x0_5"],
                     use_ssld=use_ssld)
    return model


def MobileNetV3_small_x0_75(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNetV3(model_name="small", scale=0.75, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV3_small_x0_75"],
                     use_ssld=use_ssld)
    return model


def MobileNetV3_small_x1_0(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNetV3(model_name="small", scale=1.0, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV3_small_x1_0"],
                     use_ssld=use_ssld)
    return model


def MobileNetV3_small_x1_25(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNetV3(model_name="small", scale=1.25, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV3_small_x1_25"],
                     use_ssld=use_ssld)
    return model


def MobileNetV3_large_x0_35(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNetV3(model_name="large", scale=0.35, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV3_large_x0_35"],
                     use_ssld=use_ssld)
    return model


def MobileNetV3_large_x0_5(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNetV3(model_name="large", scale=0.5, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV3_large_x0_5"],
                     use_ssld=use_ssld)
    return model


def MobileNetV3_large_x0_75(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNetV3(model_name="large", scale=0.75, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV3_large_x0_75"],
                     use_ssld=use_ssld)
    return model


def MobileNetV3_large_x1_0(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNetV3(model_name="large", scale=1.0, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV3_large_x1_0"],
                     use_ssld=use_ssld)
    return model


def MobileNetV3_large_x1_25(pretrained=False, use_ssld=False, **kwargs):
    model = MobileNetV3(model_name="large", scale=1.25, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["MobileNetV3_large_x1_25"],
                     use_ssld=use_ssld)
    return model
ppcls/arch/backbone/model_zoo/resnet.py (deleted, file mode 100644 → 0)
# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import paddle
from paddle import ParamAttr
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.nn import Conv2D, BatchNorm, Linear, Dropout
from paddle.nn import AdaptiveAvgPool2D, MaxPool2D, AvgPool2D
from paddle.nn.initializer import Uniform
import math

from ppcls.utils.save_load import load_dygraph_pretrain, load_dygraph_pretrain_from_url

MODEL_URLS = {
    "ResNet18":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet18_pretrained.pdparams",
    "ResNet34":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet34_pretrained.pdparams",
    "ResNet50":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_pretrained.pdparams",
    "ResNet101":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet101_pretrained.pdparams",
    "ResNet152":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet152_pretrained.pdparams",
}

__all__ = list(MODEL_URLS.keys())


class ConvBNLayer(nn.Layer):
    def __init__(self, num_channels, num_filters, filter_size, stride=1,
                 groups=1, act=None, name=None, data_format="NCHW"):
        super(ConvBNLayer, self).__init__()

        self._conv = Conv2D(
            in_channels=num_channels, out_channels=num_filters,
            kernel_size=filter_size, stride=stride,
            padding=(filter_size - 1) // 2, groups=groups,
            weight_attr=ParamAttr(name=name + "_weights"),
            bias_attr=False, data_format=data_format)
        if name == "conv1":
            bn_name = "bn_" + name
        else:
            bn_name = "bn" + name[3:]
        self._batch_norm = BatchNorm(
            num_filters, act=act,
            param_attr=ParamAttr(name=bn_name + "_scale"),
            bias_attr=ParamAttr(bn_name + "_offset"),
            moving_mean_name=bn_name + "_mean",
            moving_variance_name=bn_name + "_variance",
            data_layout=data_format)

    def forward(self, inputs):
        y = self._conv(inputs)
        y = self._batch_norm(y)
        return y


class BottleneckBlock(nn.Layer):
    def __init__(self, num_channels, num_filters, stride, shortcut=True,
                 name=None, data_format="NCHW"):
        super(BottleneckBlock, self).__init__()

        self.conv0 = ConvBNLayer(
            num_channels=num_channels, num_filters=num_filters,
            filter_size=1, act="relu", name=name + "_branch2a",
            data_format=data_format)
        self.conv1 = ConvBNLayer(
            num_channels=num_filters, num_filters=num_filters,
            filter_size=3, stride=stride, act="relu",
            name=name + "_branch2b", data_format=data_format)
        self.conv2 = ConvBNLayer(
            num_channels=num_filters, num_filters=num_filters * 4,
            filter_size=1, act=None, name=name + "_branch2c",
            data_format=data_format)

        if not shortcut:
            self.short = ConvBNLayer(
                num_channels=num_channels, num_filters=num_filters * 4,
                filter_size=1, stride=stride, name=name + "_branch1",
                data_format=data_format)

        self.shortcut = shortcut

        self._num_channels_out = num_filters * 4

    def forward(self, inputs):
        y = self.conv0(inputs)
        conv1 = self.conv1(y)
        conv2 = self.conv2(conv1)

        if self.shortcut:
            short = inputs
        else:
            short = self.short(inputs)

        y = paddle.add(x=short, y=conv2)
        y = F.relu(y)
        return y


class BasicBlock(nn.Layer):
    def __init__(self, num_channels, num_filters, stride, shortcut=True,
                 name=None, data_format="NCHW"):
        super(BasicBlock, self).__init__()
        self.stride = stride
        self.conv0 = ConvBNLayer(
            num_channels=num_channels, num_filters=num_filters,
            filter_size=3, stride=stride, act="relu",
            name=name + "_branch2a", data_format=data_format)
        self.conv1 = ConvBNLayer(
            num_channels=num_filters, num_filters=num_filters,
            filter_size=3, act=None, name=name + "_branch2b",
            data_format=data_format)

        if not shortcut:
            self.short = ConvBNLayer(
                num_channels=num_channels, num_filters=num_filters,
                filter_size=1, stride=stride, name=name + "_branch1",
                data_format=data_format)

        self.shortcut = shortcut

    def forward(self, inputs):
        y = self.conv0(inputs)
        conv1 = self.conv1(y)

        if self.shortcut:
            short = inputs
        else:
            short = self.short(inputs)
        y = paddle.add(x=short, y=conv1)
        y = F.relu(y)
        return y


class ResNet(nn.Layer):
    def __init__(self, layers=50, class_dim=1000, input_image_channel=3,
                 data_format="NCHW"):
        super(ResNet, self).__init__()

        self.layers = layers
        self.data_format = data_format
        self.input_image_channel = input_image_channel

        supported_layers = [18, 34, 50, 101, 152]
        assert layers in supported_layers, \
            "supported layers are {} but input layer is {}".format(
                supported_layers, layers)

        if layers == 18:
            depth = [2, 2, 2, 2]
        elif layers == 34 or layers == 50:
            depth = [3, 4, 6, 3]
        elif layers == 101:
            depth = [3, 4, 23, 3]
        elif layers == 152:
            depth = [3, 8, 36, 3]
        num_channels = [64, 256, 512, 1024] if layers >= 50 else [64, 64, 128, 256]
        num_filters = [64, 128, 256, 512]

        self.conv = ConvBNLayer(
            num_channels=self.input_image_channel, num_filters=64,
            filter_size=7, stride=2, act="relu", name="conv1",
            data_format=self.data_format)
        self.pool2d_max = MaxPool2D(
            kernel_size=3, stride=2, padding=1,
            data_format=self.data_format)

        self.block_list = []
        if layers >= 50:
            for block in range(len(depth)):
                shortcut = False
                for i in range(depth[block]):
                    if layers in [101, 152] and block == 2:
                        if i == 0:
                            conv_name = "res" + str(block + 2) + "a"
                        else:
                            conv_name = "res" + str(block + 2) + "b" + str(i)
                    else:
                        conv_name = "res" + str(block + 2) + chr(97 + i)
                    bottleneck_block = self.add_sublayer(
                        conv_name,
                        BottleneckBlock(
                            num_channels=num_channels[block]
                            if i == 0 else num_filters[block] * 4,
                            num_filters=num_filters[block],
                            stride=2 if i == 0 and block != 0 else 1,
                            shortcut=shortcut, name=conv_name,
                            data_format=self.data_format))
                    self.block_list.append(bottleneck_block)
                    shortcut = True
        else:
            for block in range(len(depth)):
                shortcut = False
                for i in range(depth[block]):
                    conv_name = "res" + str(block + 2) + chr(97 + i)
                    basic_block = self.add_sublayer(
                        conv_name,
                        BasicBlock(
                            num_channels=num_channels[block]
                            if i == 0 else num_filters[block],
                            num_filters=num_filters[block],
                            stride=2 if i == 0 and block != 0 else 1,
                            shortcut=shortcut, name=conv_name,
                            data_format=self.data_format))
                    self.block_list.append(basic_block)
                    shortcut = True

        self.pool2d_avg = AdaptiveAvgPool2D(1, data_format=self.data_format)

        self.pool2d_avg_channels = num_channels[-1] * 2

        stdv = 1.0 / math.sqrt(self.pool2d_avg_channels * 1.0)

        self.out = Linear(
            self.pool2d_avg_channels, class_dim,
            weight_attr=ParamAttr(
                initializer=Uniform(-stdv, stdv), name="fc_0.w_0"),
            bias_attr=ParamAttr(name="fc_0.b_0"))

    def forward(self, inputs):
        with paddle.static.amp.fp16_guard():
            if self.data_format == "NHWC":
                inputs = paddle.tensor.transpose(inputs, [0, 2, 3, 1])
                inputs.stop_gradient = True
            y = self.conv(inputs)
            y = self.pool2d_max(y)
            for block in self.block_list:
                y = block(y)
            y = self.pool2d_avg(y)
            y = paddle.reshape(y, shape=[-1, self.pool2d_avg_channels])
            y = self.out(y)
            return y


def _load_pretrained(pretrained, model, model_url, use_ssld=False):
    if pretrained is False:
        pass
    elif pretrained is True:
        load_dygraph_pretrain_from_url(model, model_url, use_ssld=use_ssld)
    elif isinstance(pretrained, str):
        load_dygraph_pretrain(model, pretrained)
    else:
        raise RuntimeError(
            "pretrained type is not available. Please use `string` or `boolean` type."
        )


def ResNet18(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet(layers=18, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["ResNet18"],
                     use_ssld=use_ssld)
    return model


def ResNet34(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet(layers=34, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["ResNet34"],
                     use_ssld=use_ssld)
    return model


def ResNet50(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet(layers=50, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["ResNet50"],
                     use_ssld=use_ssld)
    return model


def ResNet101(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet(layers=101, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["ResNet101"],
                     use_ssld=use_ssld)
    return model


def ResNet152(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet(layers=152, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["ResNet152"],
                     use_ssld=use_ssld)
    return model
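
As a rough sanity check of the `layers` argument (ResNet50 above means `depth = [3, 4, 6, 3]` bottleneck stages), a sketch assuming a Paddle 2.x environment and the pre-commit module path, since this file is removed by this change:

    import paddle
    from ppcls.arch.backbone.model_zoo.resnet import ResNet50

    model = ResNet50(pretrained=False)
    # paddle.summary prints per-layer output shapes and parameter counts
    paddle.summary(model, (1, 3, 224, 224))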
ppcls/arch/backbone/model_zoo/resnet_vd.py (deleted, file mode 100644 → 0)
# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from
__future__
import
absolute_import
from
__future__
import
division
from
__future__
import
print_function
import
numpy
as
np
import
paddle
from
paddle
import
ParamAttr
import
paddle.nn
as
nn
import
paddle.nn.functional
as
F
from
paddle.nn
import
Conv2D
,
BatchNorm
,
Linear
,
Dropout
from
paddle.nn
import
AdaptiveAvgPool2D
,
MaxPool2D
,
AvgPool2D
from
paddle.nn.initializer
import
Uniform
import
math
from
ppcls.utils.save_load
import
load_dygraph_pretrain
,
load_dygraph_pretrain_from_url
MODEL_URLS
=
{
"ResNet18_vd"
:
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet18_vd_pretrained.pdparams"
,
"ResNet34_vd"
:
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet34_vd_pretrained.pdparams"
,
"ResNet50_vd"
:
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_vd_pretrained.pdparams"
,
"ResNet101_vd"
:
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet101_vd_pretrained.pdparams"
,
"ResNet152_vd"
:
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet152_vd_pretrained.pdparams"
,
"ResNet200_vd"
:
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet200_vd_pretrained.pdparams"
,
}
__all__
=
list
(
MODEL_URLS
.
keys
())
class
ConvBNLayer
(
nn
.
Layer
):
def
__init__
(
self
,
num_channels
,
num_filters
,
filter_size
,
stride
=
1
,
groups
=
1
,
is_vd_mode
=
False
,
act
=
None
,
lr_mult
=
1.0
,
name
=
None
):
super
(
ConvBNLayer
,
self
).
__init__
()
self
.
is_vd_mode
=
is_vd_mode
self
.
_pool2d_avg
=
AvgPool2D
(
kernel_size
=
2
,
stride
=
2
,
padding
=
0
,
ceil_mode
=
True
)
self
.
_conv
=
Conv2D
(
in_channels
=
num_channels
,
out_channels
=
num_filters
,
kernel_size
=
filter_size
,
stride
=
stride
,
padding
=
(
filter_size
-
1
)
//
2
,
groups
=
groups
,
weight_attr
=
ParamAttr
(
name
=
name
+
"_weights"
,
learning_rate
=
lr_mult
),
bias_attr
=
False
)
if
name
==
"conv1"
:
bn_name
=
"bn_"
+
name
else
:
bn_name
=
"bn"
+
name
[
3
:]
self
.
_batch_norm
=
BatchNorm
(
num_filters
,
act
=
act
,
param_attr
=
ParamAttr
(
name
=
bn_name
+
'_scale'
,
learning_rate
=
lr_mult
),
bias_attr
=
ParamAttr
(
bn_name
+
'_offset'
,
learning_rate
=
lr_mult
),
moving_mean_name
=
bn_name
+
'_mean'
,
moving_variance_name
=
bn_name
+
'_variance'
)
def
forward
(
self
,
inputs
):
if
self
.
is_vd_mode
:
inputs
=
self
.
_pool2d_avg
(
inputs
)
y
=
self
.
_conv
(
inputs
)
y
=
self
.
_batch_norm
(
y
)
return
y
class
BottleneckBlock
(
nn
.
Layer
):
def
__init__
(
self
,
num_channels
,
num_filters
,
stride
,
shortcut
=
True
,
if_first
=
False
,
lr_mult
=
1.0
,
name
=
None
):
super
(
BottleneckBlock
,
self
).
__init__
()
self
.
conv0
=
ConvBNLayer
(
num_channels
=
num_channels
,
num_filters
=
num_filters
,
filter_size
=
1
,
act
=
'relu'
,
lr_mult
=
lr_mult
,
name
=
name
+
"_branch2a"
)
self
.
conv1
=
ConvBNLayer
(
num_channels
=
num_filters
,
num_filters
=
num_filters
,
filter_size
=
3
,
stride
=
stride
,
act
=
'relu'
,
lr_mult
=
lr_mult
,
name
=
name
+
"_branch2b"
)
self
.
conv2
=
ConvBNLayer
(
num_channels
=
num_filters
,
num_filters
=
num_filters
*
4
,
filter_size
=
1
,
act
=
None
,
lr_mult
=
lr_mult
,
name
=
name
+
"_branch2c"
)
if
not
shortcut
:
self
.
short
=
ConvBNLayer
(
num_channels
=
num_channels
,
num_filters
=
num_filters
*
4
,
filter_size
=
1
,
stride
=
1
,
is_vd_mode
=
False
if
if_first
else
True
,
lr_mult
=
lr_mult
,
name
=
name
+
"_branch1"
)
self
.
shortcut
=
shortcut
def
forward
(
self
,
inputs
):
y
=
self
.
conv0
(
inputs
)
conv1
=
self
.
conv1
(
y
)
conv2
=
self
.
conv2
(
conv1
)
if
self
.
shortcut
:
short
=
inputs
else
:
short
=
self
.
short
(
inputs
)
y
=
paddle
.
add
(
x
=
short
,
y
=
conv2
)
y
=
F
.
relu
(
y
)
return
y
class BasicBlock(nn.Layer):
    def __init__(self,
                 num_channels,
                 num_filters,
                 stride,
                 shortcut=True,
                 if_first=False,
                 lr_mult=1.0,
                 name=None):
        super(BasicBlock, self).__init__()
        self.stride = stride
        self.conv0 = ConvBNLayer(
            num_channels=num_channels,
            num_filters=num_filters,
            filter_size=3,
            stride=stride,
            act='relu',
            lr_mult=lr_mult,
            name=name + "_branch2a")
        self.conv1 = ConvBNLayer(
            num_channels=num_filters,
            num_filters=num_filters,
            filter_size=3,
            act=None,
            lr_mult=lr_mult,
            name=name + "_branch2b")

        if not shortcut:
            self.short = ConvBNLayer(
                num_channels=num_channels,
                num_filters=num_filters,
                filter_size=1,
                stride=1,
                is_vd_mode=False if if_first else True,
                lr_mult=lr_mult,
                name=name + "_branch1")

        self.shortcut = shortcut

    def forward(self, inputs):
        y = self.conv0(inputs)
        conv1 = self.conv1(y)

        if self.shortcut:
            short = inputs
        else:
            short = self.short(inputs)
        y = paddle.add(x=short, y=conv1)
        y = F.relu(y)
        return y
class ResNet_vd(nn.Layer):
    def __init__(self,
                 layers=50,
                 class_dim=1000,
                 lr_mult_list=[1.0, 1.0, 1.0, 1.0, 1.0]):
        super(ResNet_vd, self).__init__()

        self.layers = layers
        supported_layers = [18, 34, 50, 101, 152, 200]
        assert layers in supported_layers, \
            "supported layers are {} but input layer is {}".format(
                supported_layers, layers)

        self.lr_mult_list = lr_mult_list
        assert isinstance(self.lr_mult_list, (
            list, tuple
        )), "lr_mult_list should be in (list, tuple) but got {}".format(
            type(self.lr_mult_list))
        assert len(self.lr_mult_list
                   ) == 5, "lr_mult_list length should be 5 but got {}".format(
                       len(self.lr_mult_list))

        if layers == 18:
            depth = [2, 2, 2, 2]
        elif layers == 34 or layers == 50:
            depth = [3, 4, 6, 3]
        elif layers == 101:
            depth = [3, 4, 23, 3]
        elif layers == 152:
            depth = [3, 8, 36, 3]
        elif layers == 200:
            depth = [3, 12, 48, 3]
        num_channels = [64, 256, 512,
                        1024] if layers >= 50 else [64, 64, 128, 256]
        num_filters = [64, 128, 256, 512]

        self.conv1_1 = ConvBNLayer(
            num_channels=3,
            num_filters=32,
            filter_size=3,
            stride=2,
            act='relu',
            lr_mult=self.lr_mult_list[0],
            name="conv1_1")
        self.conv1_2 = ConvBNLayer(
            num_channels=32,
            num_filters=32,
            filter_size=3,
            stride=1,
            act='relu',
            lr_mult=self.lr_mult_list[0],
            name="conv1_2")
        self.conv1_3 = ConvBNLayer(
            num_channels=32,
            num_filters=64,
            filter_size=3,
            stride=1,
            act='relu',
            lr_mult=self.lr_mult_list[0],
            name="conv1_3")
        self.pool2d_max = MaxPool2D(kernel_size=3, stride=2, padding=1)

        self.block_list = []
        if layers >= 50:
            for block in range(len(depth)):
                shortcut = False
                for i in range(depth[block]):
                    if layers in [101, 152, 200] and block == 2:
                        if i == 0:
                            conv_name = "res" + str(block + 2) + "a"
                        else:
                            conv_name = "res" + str(block + 2) + "b" + str(i)
                    else:
                        conv_name = "res" + str(block + 2) + chr(97 + i)
                    bottleneck_block = self.add_sublayer(
                        'bb_%d_%d' % (block, i),
                        BottleneckBlock(
                            num_channels=num_channels[block]
                            if i == 0 else num_filters[block] * 4,
                            num_filters=num_filters[block],
                            stride=2 if i == 0 and block != 0 else 1,
                            shortcut=shortcut,
                            if_first=block == i == 0,
                            lr_mult=self.lr_mult_list[block + 1],
                            name=conv_name))
                    self.block_list.append(bottleneck_block)
                    shortcut = True
        else:
            for block in range(len(depth)):
                shortcut = False
                for i in range(depth[block]):
                    conv_name = "res" + str(block + 2) + chr(97 + i)
                    basic_block = self.add_sublayer(
                        'bb_%d_%d' % (block, i),
                        BasicBlock(
                            num_channels=num_channels[block]
                            if i == 0 else num_filters[block],
                            num_filters=num_filters[block],
                            stride=2 if i == 0 and block != 0 else 1,
                            shortcut=shortcut,
                            if_first=block == i == 0,
                            name=conv_name,
                            lr_mult=self.lr_mult_list[block + 1]))
                    self.block_list.append(basic_block)
                    shortcut = True

        self.pool2d_avg = AdaptiveAvgPool2D(1)

        self.pool2d_avg_channels = num_channels[-1] * 2

        stdv = 1.0 / math.sqrt(self.pool2d_avg_channels * 1.0)

        self.out = Linear(
            self.pool2d_avg_channels,
            class_dim,
            weight_attr=ParamAttr(
                initializer=Uniform(-stdv, stdv), name="fc_0.w_0"),
            bias_attr=ParamAttr(name="fc_0.b_0"))

    def forward(self, inputs):
        y = self.conv1_1(inputs)
        y = self.conv1_2(y)
        y = self.conv1_3(y)
        y = self.pool2d_max(y)
        for block in self.block_list:
            y = block(y)
        y = self.pool2d_avg(y)
        y = paddle.reshape(y, shape=[-1, self.pool2d_avg_channels])
        y = self.out(y)
        return y
def _load_pretrained(pretrained, model, model_url, use_ssld=False):
    if pretrained is False:
        pass
    elif pretrained is True:
        load_dygraph_pretrain_from_url(model, model_url, use_ssld=use_ssld)
    elif isinstance(pretrained, str):
        load_dygraph_pretrain(model, pretrained)
    else:
        raise RuntimeError(
            "pretrained type is not available. Please use `string` or `boolean` type."
        )


def ResNet18_vd(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet_vd(layers=18, **kwargs)
    _load_pretrained(
        pretrained, model, MODEL_URLS["ResNet18_vd"], use_ssld=use_ssld)
    return model


def ResNet34_vd(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet_vd(layers=34, **kwargs)
    _load_pretrained(
        pretrained, model, MODEL_URLS["ResNet34_vd"], use_ssld=use_ssld)
    return model


def ResNet50_vd(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet_vd(layers=50, **kwargs)
    _load_pretrained(
        pretrained, model, MODEL_URLS["ResNet50_vd"], use_ssld=use_ssld)
    return model


def ResNet101_vd(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet_vd(layers=101, **kwargs)
    _load_pretrained(
        pretrained, model, MODEL_URLS["ResNet101_vd"], use_ssld=use_ssld)
    return model


def ResNet152_vd(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet_vd(layers=152, **kwargs)
    _load_pretrained(
        pretrained, model, MODEL_URLS["ResNet152_vd"], use_ssld=use_ssld)
    return model


def ResNet200_vd(pretrained=False, use_ssld=False, **kwargs):
    model = ResNet_vd(layers=200, **kwargs)
    _load_pretrained(
        pretrained, model, MODEL_URLS["ResNet200_vd"], use_ssld=use_ssld)
    return model
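As a quick sanity check of the builders above, here is a minimal usage sketch (assuming the `ppcls` package is importable under the module path shown; the 1x3x224x224 input is just the usual ImageNet shape and is not enforced by this file):

import paddle
# Hypothetical usage of the ResNet_vd builders defined in this file.
from ppcls.arch.backbone.model_zoo.resnet_vd import ResNet50_vd

# Build ResNet50_vd without downloading pretrained weights.
model = ResNet50_vd(pretrained=False, class_dim=1000)
model.eval()

# Dummy ImageNet-sized batch in NCHW layout.
x = paddle.randn([1, 3, 224, 224])
logits = model(x)  # shape: [1, 1000]
print(logits.shape)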
ppcls/arch/backbone/model_zoo/vgg.py
deleted
100644 → 0
Browse file @
1c55e08a
import paddle
from paddle import ParamAttr
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.nn import Conv2D, BatchNorm, Linear, Dropout
from paddle.nn import AdaptiveAvgPool2D, MaxPool2D, AvgPool2D

from ppcls.utils.save_load import load_dygraph_pretrain, load_dygraph_pretrain_from_url

MODEL_URLS = {
    "VGG11":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/VGG11_pretrained.pdparams",
    "VGG13":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/VGG13_pretrained.pdparams",
    "VGG16":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/VGG16_pretrained.pdparams",
    "VGG19":
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/VGG19_pretrained.pdparams",
}

__all__ = list(MODEL_URLS.keys())
class ConvBlock(nn.Layer):
    def __init__(self, input_channels, output_channels, groups, name=None):
        super(ConvBlock, self).__init__()

        self.groups = groups
        self._conv_1 = Conv2D(
            in_channels=input_channels,
            out_channels=output_channels,
            kernel_size=3,
            stride=1,
            padding=1,
            weight_attr=ParamAttr(name=name + "1_weights"),
            bias_attr=False)
        if groups == 2 or groups == 3 or groups == 4:
            self._conv_2 = Conv2D(
                in_channels=output_channels,
                out_channels=output_channels,
                kernel_size=3,
                stride=1,
                padding=1,
                weight_attr=ParamAttr(name=name + "2_weights"),
                bias_attr=False)
        if groups == 3 or groups == 4:
            self._conv_3 = Conv2D(
                in_channels=output_channels,
                out_channels=output_channels,
                kernel_size=3,
                stride=1,
                padding=1,
                weight_attr=ParamAttr(name=name + "3_weights"),
                bias_attr=False)
        if groups == 4:
            self._conv_4 = Conv2D(
                in_channels=output_channels,
                out_channels=output_channels,
                kernel_size=3,
                stride=1,
                padding=1,
                weight_attr=ParamAttr(name=name + "4_weights"),
                bias_attr=False)

        self._pool = MaxPool2D(kernel_size=2, stride=2, padding=0)

    def forward(self, inputs):
        x = self._conv_1(inputs)
        x = F.relu(x)
        if self.groups == 2 or self.groups == 3 or self.groups == 4:
            x = self._conv_2(x)
            x = F.relu(x)
        if self.groups == 3 or self.groups == 4:
            x = self._conv_3(x)
            x = F.relu(x)
        if self.groups == 4:
            x = self._conv_4(x)
            x = F.relu(x)
        x = self._pool(x)
        return x
class VGGNet(nn.Layer):
    def __init__(self, layers=11, stop_grad_layers=0, class_dim=1000):
        super(VGGNet, self).__init__()

        self.layers = layers
        self.stop_grad_layers = stop_grad_layers
        self.vgg_configure = {
            11: [1, 1, 2, 2, 2],
            13: [2, 2, 2, 2, 2],
            16: [2, 2, 3, 3, 3],
            19: [2, 2, 4, 4, 4]
        }
        assert self.layers in self.vgg_configure.keys(), \
            "supported layers are {} but input layer is {}".format(
                self.vgg_configure.keys(), layers)
        self.groups = self.vgg_configure[self.layers]

        self._conv_block_1 = ConvBlock(3, 64, self.groups[0], name="conv1_")
        self._conv_block_2 = ConvBlock(64, 128, self.groups[1], name="conv2_")
        self._conv_block_3 = ConvBlock(128, 256, self.groups[2], name="conv3_")
        self._conv_block_4 = ConvBlock(256, 512, self.groups[3], name="conv4_")
        self._conv_block_5 = ConvBlock(512, 512, self.groups[4], name="conv5_")

        for idx, block in enumerate([
                self._conv_block_1, self._conv_block_2, self._conv_block_3,
                self._conv_block_4, self._conv_block_5
        ]):
            if self.stop_grad_layers >= idx + 1:
                for param in block.parameters():
                    param.trainable = False

        self._drop = Dropout(p=0.5, mode="downscale_in_infer")
        self._fc1 = Linear(
            7 * 7 * 512,
            4096,
            weight_attr=ParamAttr(name="fc6_weights"),
            bias_attr=ParamAttr(name="fc6_offset"))
        self._fc2 = Linear(
            4096,
            4096,
            weight_attr=ParamAttr(name="fc7_weights"),
            bias_attr=ParamAttr(name="fc7_offset"))
        self._out = Linear(
            4096,
            class_dim,
            weight_attr=ParamAttr(name="fc8_weights"),
            bias_attr=ParamAttr(name="fc8_offset"))

    def forward(self, inputs):
        x = self._conv_block_1(inputs)
        x = self._conv_block_2(x)
        x = self._conv_block_3(x)
        x = self._conv_block_4(x)
        x = self._conv_block_5(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self._fc1(x)
        x = F.relu(x)
        x = self._drop(x)
        x = self._fc2(x)
        x = F.relu(x)
        x = self._drop(x)
        x = self._out(x)
        return x
def _load_pretrained(pretrained, model, model_url, use_ssld=False):
    if pretrained is False:
        pass
    elif pretrained is True:
        load_dygraph_pretrain_from_url(model, model_url, use_ssld=use_ssld)
    elif isinstance(pretrained, str):
        load_dygraph_pretrain(model, pretrained)
    else:
        raise RuntimeError(
            "pretrained type is not available. Please use `string` or `boolean` type."
        )
def VGG11(pretrained=False, use_ssld=False, **kwargs):
    model = VGGNet(layers=11, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["VGG11"], use_ssld=use_ssld)
    return model


def VGG13(pretrained=False, use_ssld=False, **kwargs):
    model = VGGNet(layers=13, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["VGG13"], use_ssld=use_ssld)
    return model


def VGG16(pretrained=False, use_ssld=False, **kwargs):
    model = VGGNet(layers=16, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["VGG16"], use_ssld=use_ssld)
    return model


def VGG19(pretrained=False, use_ssld=False, **kwargs):
    model = VGGNet(layers=19, **kwargs)
    _load_pretrained(pretrained, model, MODEL_URLS["VGG19"], use_ssld=use_ssld)
    return model
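Similarly, a minimal usage sketch for the VGG builders (assuming the corrected `pretrained=False, use_ssld=False, **kwargs` signatures shown above and an importable `ppcls` package; the input shape is illustrative only):

import paddle
# Hypothetical usage of the VGG builders defined in this file.
from ppcls.arch.backbone.model_zoo.vgg import VGG16

# Build VGG16 without pretrained weights and without freezing any conv block.
model = VGG16(pretrained=False, stop_grad_layers=0, class_dim=1000)
model.eval()

# 224x224 inputs reduce to the 7x7x512 feature map that fc6 expects.
x = paddle.randn([1, 3, 224, 224])
logits = model(x)  # shape: [1, 1000]
print(logits.shape)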