别团等shy哥发育 / Tensorflow Deep Learning
Commit e63ec6d3
Written on Aug 17, 2022 by 别团等shy哥发育
MobileNetV1/V2 flower recognition
Parent: 18440b7d

3 changed files with 99 additions and 79 deletions (+99 −79)
经典网络/MobileNet/MobileNet-V1花朵识别.ipynb  +0 −0
经典网络/MobileNet/MobileNet-v2花朵识别.ipynb  +0 −0
经典网络/MobileNet/MobileNetV2.py  +99 −79
经典网络/MobileNet/MobileNet-V1花朵识别.ipynb  (new file, mode 100644) @ e63ec6d3
This diff is collapsed.

经典网络/MobileNet/MobileNet-v2花朵识别.ipynb  (new file, mode 100644) @ e63ec6d3
This diff is collapsed.
经典网络/MobileNet/MobileNetV2.py @ e63ec6d3
...
...
@@ -11,8 +11,10 @@ from tensorflow.keras.models import Model
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D, Add, ZeroPadding2D, GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.layers import MaxPooling2D, Activation, DepthwiseConv2D, Input, GlobalMaxPooling2D
from tensorflow.keras.layers import ReLU
from tensorflow.keras.applications import imagenet_utils
from tensorflow.keras.applications.imagenet_utils import decode_predictions
# from tensorflow.keras.utils.data_utils import get_file
# TODO Change path to v1.1
...
...
@@ -49,23 +51,95 @@ def correct_pad(inputs, kernel_size):
def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    # Round v to the nearest multiple of 8. E.g. for v=30 the adjusted
    # filter count is ((30 + 4) // 8) * 8 = 32
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # If rounding dropped the value by more than 10%, add one divisor back
    if new_v < 0.9 * v:
        new_v += divisor
    # Return the adjusted number of filters
    return new_v
def _inverted_res_block(inputs, expansion, stride, alpha, filters, block_id):
    # Last dimension of the input shape (the channel count)
    in_channels = backend.int_shape(inputs)[-1]
    # The width multiplier alpha scales the pointwise filter count;
    # int() truncates in case alpha is fractional
    pointwise_conv_filters = int(filters * alpha)
    # The feature-map counts in the architecture (32, 16, 24, 64, ...) are all
    # multiples of 8, so the scaled pointwise filter count must be rounded to a
    # multiple of 8 as well
    pointwise_filters = _make_divisible(pointwise_conv_filters, 8)
    # Start with x = inputs so that x still carries the input when part 1 is skipped
    x = inputs
    prefix = 'block_{}_'.format(block_id)
    # Part 1, expansion: raise the channel dimension
    if block_id:
        # Expand: a 1x1 convolution increases the channel count
        x = Conv2D(expansion * in_channels,  # filter count: channels grow by a factor of `expansion`
                   kernel_size=1,            # 1x1 kernel
                   strides=1,                # stride 1
                   padding='same',           # feature-map size unchanged
                   use_bias=False,           # no bias needed before batch normalization
                   activation=None,
                   name=prefix + 'expand')(x)
        # Batch normalization
        x = BatchNormalization(epsilon=1e-3,
                               momentum=0.999,
                               name=prefix + 'expand_BN')(x)
        # ReLU6 activation
        # x = Activation(relu6, name=prefix + 'expand_relu')(x)
        x = ReLU(6.0, name=prefix + 'expand_relu')(x)
    else:
        prefix = 'expanded_conv_'
    if stride == 2:
        x = ZeroPadding2D(padding=correct_pad(x, 3),
                          name=prefix + 'pad')(x)
    # Part 2: depthwise separable convolution
    x = DepthwiseConv2D(kernel_size=3,
                        strides=stride,
                        activation=None,
                        use_bias=False,
                        padding='same' if stride == 1 else 'valid',
                        name=prefix + 'depthwise')(x)
    x = BatchNormalization(epsilon=1e-3,
                           momentum=0.999,
                           name=prefix + 'depthwise_BN')(x)
    x = Activation(relu6, name=prefix + 'depthwise_relu')(x)
    # Part 3: a 1x1 convolution projects the channels back down to compress the
    # features; no ReLU here, so the compressed features are not destroyed
    x = Conv2D(pointwise_filters,  # filter count (channel count) drops back to the target width
               kernel_size=1,
               padding='same',
               use_bias=False,
               activation=None,
               name=prefix + 'project')(x)
    x = BatchNormalization(epsilon=1e-3,
                           momentum=0.999,
                           name=prefix + 'project_BN')(x)
    # Residual connection: short-circuit input to output
    # Connect the residual branch only when input channels == output channels and stride == 1
    if in_channels == pointwise_filters and stride == 1:
        # Element-wise addition of input and output tensors; shapes must match exactly
        return Add(name=prefix + 'add')([inputs, x])
    return x
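The shortcut condition in part 3 can be traced without TensorFlow. This sketch (an illustration assuming alpha=1.0, where every filter count is already a multiple of 8) walks the block configuration used by MobileNetV2 below and reports which blocks receive the residual Add:

```python
# (filters, stride) for block_id 0..16, matching the _inverted_res_block calls
blocks = [(16, 1), (24, 2), (24, 1), (32, 2), (32, 1), (32, 1),
          (64, 2), (64, 1), (64, 1), (64, 1),
          (96, 1), (96, 1), (96, 1),
          (160, 2), (160, 1), (160, 1), (320, 1)]

in_channels = 32  # channels coming out of the stem convolution
shortcut_ids = []
for block_id, (filters, stride) in enumerate(blocks):
    # Residual connection only when the input and output shapes match exactly
    if in_channels == filters and stride == 1:
        shortcut_ids.append(block_id)
    in_channels = filters

print(shortcut_ids)  # [2, 4, 5, 7, 8, 9, 11, 12, 14, 15]
```

Only the second and later blocks of each stage qualify: the first block of a stage either strides or changes the channel count, so its shortcut is dropped.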
# MobileNetV2 network structure
def MobileNetV2(input_shape=(224, 224, 3),
                alpha=1.0,
                include_top=True,
                weights='imagenet',
                classes=1000):
    # Restrict the width multiplier to its supported values
    if alpha not in [0.5, 0.75, 1.0, 1.3]:
        raise ValueError('alpha should use 0.5, 0.75, 1.0, 1.3')

    rows = input_shape[0]
    img_input = Input(shape=input_shape)

    # Stem
    # 224,224,3 -> 112,112,32
    # Scale the first convolution's filter count (32 * alpha) so it is divisible by 8
    first_block_filters = _make_divisible(32 * alpha, 8)
    x = ZeroPadding2D(padding=correct_pad(img_input, 3),
                      name='Conv1_pad')(img_input)
...
...
@@ -82,34 +156,35 @@ def MobileNetV2(input_shape=(224, 224, 3),
    x = Activation(relu6, name='Conv1_relu')(x)

    # 112,112,32 -> 112,112,16
    # The first residual block has expansion=1, so no channel expansion is needed
    # and part 1's standard convolution is skipped; hence block_id=0
    x = _inverted_res_block(x, filters=16, alpha=alpha, stride=1, expansion=1, block_id=0)
    # 112,112,16 -> 56,56,24
    x = _inverted_res_block(x, filters=24, alpha=alpha, stride=2, expansion=6, block_id=1)
    x = _inverted_res_block(x, filters=24, alpha=alpha, stride=1, expansion=6, block_id=2)
    # 56,56,24 -> 28,28,32
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=2, expansion=6, block_id=3)
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1, expansion=6, block_id=4)
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1, expansion=6, block_id=5)
    # 28,28,32 -> 14,14,64
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=2, expansion=6, block_id=6)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, expansion=6, block_id=7)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, expansion=6, block_id=8)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, expansion=6, block_id=9)
    # 14,14,64 -> 14,14,96
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, expansion=6, block_id=10)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, expansion=6, block_id=11)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, expansion=6, block_id=12)
    # 14,14,96 -> 7,7,160
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=2, expansion=6, block_id=13)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, expansion=6, block_id=14)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, expansion=6, block_id=15)
    # 7,7,160 -> 7,7,320
    x = _inverted_res_block(x, filters=320, alpha=alpha, stride=1, expansion=6, block_id=16)

    if alpha > 1.0:
        last_block_filters = _make_divisible(1280 * alpha, 8)
...
...
@@ -125,7 +200,7 @@ def MobileNetV2(input_shape=(224, 224, 3),
                           momentum=0.999,
                           name='Conv_1_bn')(x)
    x = Activation(relu6, name='out_relu')(x)

    # [7,7,1280] -> [None,1280]
    x = GlobalAveragePooling2D()(x)
    x = Dense(classes, activation='softmax',
              use_bias=True, name='Logits')(x)
...
...
@@ -155,68 +230,13 @@ def MobileNetV2(input_shape=(224, 224, 3),
    return model
def preprocess_input(x):
    # Scale pixel values from [0, 255] to [-1, 1]
    x /= 255.
    x -= 0.5
    x *= 2.
    return x


if __name__ == '__main__':
    model = MobileNetV2(input_shape=(224, 224, 3))
    model.summary()
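The `preprocess_input` scaling maps [0, 255] pixel values onto [-1, 1]; a quick endpoint check (an illustration assuming numpy is available, with the function restated):

```python
import numpy as np

def preprocess_input(x):
    # Same scaling as above: [0, 255] -> [0, 1] -> [-0.5, 0.5] -> [-1, 1]
    x /= 255.
    x -= 0.5
    x *= 2.
    return x

print(preprocess_input(np.array([0., 127.5, 255.])))  # [-1.  0.  1.]
```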
...
...