PaddlePaddle
PaddleClas
Commit 5a15c165
Authored Aug 28, 2020 by littletomatodonkey

add dygraph load and static load

Parent: 95ed78e2
Showing 14 changed files with 235 additions and 103 deletions (+235, -103)
configs/quick_start/CSPResNet50.yaml (+70, -0)
configs/quick_start/MobileNetV3_large_x1_0_finetune.yaml (+1, -0)
configs/quick_start/MobileNetV3_large_x1_0_ssld_finetune.yaml (+73, -0)
configs/quick_start/R50_vd_distill_MV3_large_x1_0.yaml (+3, -0)
configs/quick_start/ResNet50_vd.yaml (+3, -0)
configs/quick_start/ResNet50_vd_ssld_finetune.yaml (+1, -0)
configs/quick_start/ResNet50_vd_ssld_random_erasing_finetune.yaml (+1, -0)
ppcls/modeling/architectures/__init__.py (+2, -0)
ppcls/modeling/architectures/distillation_models.py (+22, -19)
ppcls/utils/logger.py (+0, -3)
ppcls/utils/save_load.py (+39, -67)
tools/infer/infer.py (+17, -13)
tools/program.py (+2, -0)
tools/train.py (+1, -1)
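The heart of this commit is the new ability in ppcls/utils/save_load.py to map a static-graph weight dict (keyed by parameter names such as `conv2d_0.w_0`) onto a dygraph state dict (keyed by structured attribute names). A minimal framework-free sketch of that remapping, with plain dicts standing in for Paddle state dicts (all names and values here are hypothetical):

```python
# Sketch of the static-to-dygraph remapping introduced in save_load.py.
# Plain dicts stand in for fluid state dicts; a dygraph parameter is
# modeled as {"name": <static param name>, "value": <current weight>}.

def remap_static_to_dygraph(pre_state_dict, model_dict):
    """Map static-graph weights (keyed by parameter name) onto dygraph
    state-dict keys; fall back to the model's own weight when a name
    is missing from the pretrained dict."""
    param_state_dict = {}
    for key, param in model_dict.items():
        weight_name = param["name"]  # dygraph param knows its static name
        if weight_name in pre_state_dict:
            param_state_dict[key] = pre_state_dict[weight_name]
        else:
            param_state_dict[key] = param["value"]
    return param_state_dict

# toy example (hypothetical names)
model_dict = {
    "conv1.weight": {"name": "conv2d_0.w_0", "value": "random_init"},
    "fc.bias": {"name": "fc_0.b_0", "value": "zeros"},
}
pre_state_dict = {"conv2d_0.w_0": "pretrained_conv_w"}  # fc bias missing
remapped = remap_static_to_dygraph(pre_state_dict, model_dict)
# remapped == {"conv1.weight": "pretrained_conv_w", "fc.bias": "zeros"}
```

The real `load_dygraph_pretrain` (further below) does the same walk over `model.state_dict()` and then calls `model.set_dict(...)`.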
configs/quick_start/CSPResNet50.yaml (new file, mode 100644)

mode: 'train'
ARCHITECTURE:
    name: 'CSPResNet50'
pretrained_model: ""
model_save_dir: "./output/"
classes_num: 1000
total_images: 1020
save_interval: 1
validate: False
valid_interval: 1
epochs: 1
topk: 5
image_shape: [3, 224, 224]

LEARNING_RATE:
    function: 'Cosine'
    params:
        lr: 0.0125

OPTIMIZER:
    function: 'Momentum'
    params:
        momentum: 0.9
    regularizer:
        function: 'L2'
        factor: 0.00001

TRAIN:
    batch_size: 32
    num_workers: 4
    file_list: "./dataset/flowers102/train_list.txt"
    data_dir: "./dataset/flowers102/"
    shuffle_seed: 0
    transforms:
        - DecodeImage:
            to_rgb: True
            to_np: False
            channel_first: False
        - RandCropImage:
            size: 224
        - RandFlipImage:
            flip_code: 1
        - NormalizeImage:
            scale: 1./255.
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
        - ToCHWImage:

VALID:
    batch_size: 20
    num_workers: 4
    file_list: "./dataset/flowers102/val_list.txt"
    data_dir: "./dataset/flowers102/"
    shuffle_seed: 0
    transforms:
        - DecodeImage:
            to_rgb: True
            to_np: False
            channel_first: False
        - ResizeImage:
            resize_short: 256
        - CropImage:
            size: 224
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
        - ToCHWImage:
configs/quick_start/MobileNetV3_large_x1_0_finetune.yaml

@@ -2,6 +2,7 @@ mode: 'train'
 ARCHITECTURE:
     name: 'MobileNetV3_large_x1_0'
 pretrained_model: "./pretrained/MobileNetV3_large_x1_0_pretrained"
+load_static_weights: True
 model_save_dir: "./output/"
 classes_num: 102
 total_images: 1020
 ...
configs/quick_start/MobileNetV3_large_x1_0_ssld_finetune.yaml (new file, mode 100644)

mode: 'train'
ARCHITECTURE:
    name: 'MobileNetV3_large_x1_0'
    params:
        lr_mult_list: [0.25, 0.25, 0.5, 0.5, 0.75]
pretrained_model: "./pretrained/MobileNetV3_large_x1_0_ssld_pretrained"
load_static_weights: True
model_save_dir: "./output/"
classes_num: 102
total_images: 1020
save_interval: 1
validate: True
valid_interval: 1
epochs: 20
topk: 5
image_shape: [3, 224, 224]

LEARNING_RATE:
    function: 'Cosine'
    params:
        lr: 0.00375

OPTIMIZER:
    function: 'Momentum'
    params:
        momentum: 0.9
    regularizer:
        function: 'L2'
        factor: 0.000001

TRAIN:
    batch_size: 32
    num_workers: 4
    file_list: "./dataset/flowers102/train_list.txt"
    data_dir: "./dataset/flowers102/"
    shuffle_seed: 0
    transforms:
        - DecodeImage:
            to_rgb: True
            to_np: False
            channel_first: False
        - RandCropImage:
            size: 224
        - RandFlipImage:
            flip_code: 1
        - NormalizeImage:
            scale: 1./255.
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
        - ToCHWImage:

VALID:
    batch_size: 20
    num_workers: 4
    file_list: "./dataset/flowers102/val_list.txt"
    data_dir: "./dataset/flowers102/"
    shuffle_seed: 0
    transforms:
        - DecodeImage:
            to_rgb: True
            to_np: False
            channel_first: False
        - ResizeImage:
            resize_short: 256
        - CropImage:
            size: 224
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
        - ToCHWImage:
configs/quick_start/R50_vd_distill_MV3_large_x1_0.yaml

@@ -5,6 +5,9 @@ ARCHITECTURE:
 pretrained_model:
     - "./pretrained/flowers102_R50_vd_final/ppcls"
     - "./pretrained/MobileNetV3_large_x1_0_pretrained/"
+load_static_weights:
+    - False
+    - True
 model_save_dir: "./output/"
 classes_num: 102
 total_images: 7169
 ...
configs/quick_start/ResNet50_vd.yaml

 mode: 'train'
 ARCHITECTURE:
     name: 'ResNet50_vd'
+checkpoints: ""
 pretrained_model: ""
+load_static_weights: True
 model_save_dir: "./output/"
 classes_num: 102
 total_images: 1020
 ...
configs/quick_start/ResNet50_vd_ssld_finetune.yaml

@@ -4,6 +4,7 @@ ARCHITECTURE:
     params:
         lr_mult_list: [0.1, 0.1, 0.2, 0.2, 0.3]
 pretrained_model: "./pretrained/ResNet50_vd_ssld_pretrained"
+load_static_weights: True
 model_save_dir: "./output/"
 classes_num: 102
 total_images: 1020
 ...
configs/quick_start/ResNet50_vd_ssld_random_erasing_finetune.yaml

@@ -4,6 +4,7 @@ ARCHITECTURE:
     params:
         lr_mult_list: [0.1, 0.1, 0.2, 0.2, 0.3]
 pretrained_model: "./pretrained/ResNet50_vd_ssld_pretrained"
+load_static_weights: True
 model_save_dir: "./output/"
 classes_num: 102
 total_images: 1020
 ...
ppcls/modeling/architectures/__init__.py

@@ -28,3 +28,5 @@ from .mobilenet_v1 import MobileNetV1_x0_25, MobileNetV1_x0_5, MobileNetV1_x0_75
 from .mobilenet_v2 import MobileNetV2_x0_25, MobileNetV2_x0_5, MobileNetV2_x0_75, MobileNetV2, MobileNetV2_x1_5, MobileNetV2_x2_0
 from .mobilenet_v3 import MobileNetV3_small_x0_35, MobileNetV3_small_x0_5, MobileNetV3_small_x0_75, MobileNetV3_small_x1_0, MobileNetV3_small_x1_25, MobileNetV3_large_x0_35, MobileNetV3_large_x0_5, MobileNetV3_large_x0_75, MobileNetV3_large_x1_0, MobileNetV3_large_x1_25
 from .shufflenet_v2 import ShuffleNetV2_x0_25, ShuffleNetV2_x0_33, ShuffleNetV2_x0_5, ShuffleNetV2, ShuffleNetV2_x1_5, ShuffleNetV2_x2_0, ShuffleNetV2_swish
+from .distillation_models import ResNet50_vd_distill_MobileNetV3_large_x1_0
\ No newline at end of file
ppcls/modeling/architectures/distillation_models.py

 #copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
 #
 #Licensed under the Apache License, Version 2.0 (the "License");
 #you may not use this file except in compliance with the License.
 #You may obtain a copy of the License at
 #
 #    http://www.apache.org/licenses/LICENSE-2.0
 #
 #Unless required by applicable law or agreed to in writing, software
 #distributed under the License is distributed on an "AS IS" BASIS,
 #WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 #See the License for the specific language governing permissions and
 #limitations under the License.
 from __future__ import absolute_import
 from __future__ import division
 ...
@@ -32,17 +32,20 @@ __all__ = [
 ]

-class ResNet50_vd_distill_MobileNetV3_large_x1_0():
-    def net(self, input, class_dim=1000):
-        # student
-        student = MobileNetV3_large_x1_0()
-        out_student = student.net(input, class_dim=class_dim)
-        # teacher
-        teacher = ResNet50_vd()
-        out_teacher = teacher.net(input, class_dim=class_dim)
-        out_teacher.stop_gradient = True
-        return out_teacher, out_student
+class ResNet50_vd_distill_MobileNetV3_large_x1_0(fluid.dygraph.Layer):
+    def __init__(self, class_dim=1000, **args):
+        super(ResNet50_vd_distill_MobileNetV3_large_x1_0, self).__init__()
+        self.teacher = ResNet50_vd(class_dim=class_dim, **args)
+        self.student = MobileNetV3_large_x1_0(class_dim=class_dim, **args)
+
+    def forward(self, input):
+        teacher_label = self.teacher(input)
+        student_label = self.student(input)
+        return teacher_label, student_label

 class ResNeXt101_32x16d_wsl_distill_ResNet50_vd():
 ...
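The refactor above turns the distillation model from a static `net()` builder into a dygraph Layer that owns both sub-models and runs them in one forward pass. The same wrapper pattern, sketched without Paddle (the `Teacher`/`Student` classes here are hypothetical stand-ins for ResNet50_vd and MobileNetV3_large_x1_0):

```python
# Framework-free sketch of the teacher/student wrapper pattern that
# distillation_models.py now follows as a dygraph Layer. Teacher and
# Student are toy stand-ins, not the real backbones.

class Teacher:
    def __call__(self, x):
        return [v * 2 for v in x]  # pretend "logits"

class Student:
    def __call__(self, x):
        return [v + 1 for v in x]

class DistillModel:
    """Owns both sub-models; one forward returns both outputs, mirroring
    ResNet50_vd_distill_MobileNetV3_large_x1_0.forward."""

    def __init__(self):
        self.teacher = Teacher()
        self.student = Student()

    def __call__(self, x):
        return self.teacher(x), self.student(x)

teacher_out, student_out = DistillModel()([1, 2])
# teacher_out == [2, 4], student_out == [2, 3]
```

Holding the teacher as a named attribute (`self.teacher`) is what lets `init_model` target `net.teacher` when loading only the teacher's pretrained weights.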
ppcls/utils/logger.py

@@ -16,9 +16,6 @@ import logging
 import os
 import datetime
-from imp import reload
-
-reload(logging)

 logging.basicConfig(
     level=logging.INFO,
     format="%(asctime)s %(levelname)s: %(message)s",
 ...
ppcls/utils/save_load.py

@@ -26,7 +26,7 @@ import paddle.fluid as fluid
 from ppcls.utils import logger

-__all__ = ['init_model', 'save_model']
+__all__ = ['init_model', 'save_model', 'load_dygraph_pretrain']


 def _mkdir_if_not_exist(path):
 ...
@@ -45,71 +45,35 @@ def _mkdir_if_not_exist(path):
             raise OSError('Failed to mkdir {}'.format(path))


-def _load_state(path):
-    if os.path.exists(path + '.pdopt'):
-        # XXX another hack to ignore the optimizer state
-        tmp = tempfile.mkdtemp()
-        dst = os.path.join(tmp, os.path.basename(os.path.normpath(path)))
-        shutil.copy(path + '.pdparams', dst + '.pdparams')
-        state = fluid.io.load_program_state(dst)
-        shutil.rmtree(tmp)
-    else:
-        state = fluid.io.load_program_state(path)
-    return state
-
-
-def load_params(exe, prog, path, ignore_params=None):
-    """
-    Load model from the given path.
-    Args:
-        exe (fluid.Executor): The fluid.Executor object.
-        prog (fluid.Program): load weight to which Program object.
-        path (string): URL string or loca model path.
-        ignore_params (list): ignore variable to load when finetuning.
-            It can be specified by finetune_exclude_pretrained_params
-            and the usage can refer to the document
-            docs/advanced_tutorials/TRANSFER_LEARNING.md
-    """
-    if not (os.path.isdir(path) or os.path.exists(path + '.pdparams')):
-        raise ValueError("Model pretrain path {} does not "
-                         "exists.".format(path))
-    logger.info(
-        logger.coloring('Loading parameters from {}...'.format(path),
-                        'HEADER'))
-    ignore_set = set()
-    state = _load_state(path)
-    # ignore the parameter which mismatch the shape
-    # between the model and pretrain weight.
-    all_var_shape = {}
-    for block in prog.blocks:
-        for param in block.all_parameters():
-            all_var_shape[param.name] = param.shape
-    ignore_set.update([
-        name for name, shape in all_var_shape.items()
-        if name in state and shape != state[name].shape
-    ])
-    if ignore_params:
-        all_var_names = [var.name for var in prog.list_vars()]
-        ignore_list = filter(
-            lambda var: any([re.match(name, var) for name in ignore_params]),
-            all_var_names)
-        ignore_set.update(list(ignore_list))
-    if len(ignore_set) > 0:
-        for k in ignore_set:
-            if k in state:
-                logger.warning(
-                    'variable {} is already excluded automatically'.format(k))
-                del state[k]
-    fluid.io.set_program_state(prog, state)
+def load_dygraph_pretrain(
+        model,
+        path=None,
+        load_static_weights=False, ):
+    if not (os.path.isdir(path) or os.path.exists(path + '.pdparams')):
+        raise ValueError("Model pretrain path {} does not "
+                         "exists.".format(path))
+    if load_static_weights:
+        pre_state_dict = fluid.load_program_state(path)
+        param_state_dict = {}
+        model_dict = model.state_dict()
+        for key in model_dict.keys():
+            weight_name = model_dict[key].name
+            print("dyg key: {}, weight_name: {}".format(key, weight_name))
+            if weight_name in pre_state_dict.keys():
+                print('Load weight: {}, shape: {}'.format(
+                    weight_name, pre_state_dict[weight_name].shape))
+                param_state_dict[key] = pre_state_dict[weight_name]
+            else:
+                param_state_dict[key] = model_dict[key]
+        model.set_dict(param_state_dict)
+        return
+    param_state_dict, optim_state_dict = fluid.load_dygraph(path)
+    model.set_dict(param_state_dict)
+    return


-def init_model(config, net, optimizer):
+def init_model(config, net, optimizer=None):
     """
     load model from checkpoint or pretrained_model
     """
 ...
@@ -128,13 +92,21 @@ def init_model(config, net, optimizer):
         return

     pretrained_model = config.get('pretrained_model')
+    load_static_weights = config.get('load_static_weights', False)
+    use_distillation = config.get('use_distillation', False)
     if pretrained_model:
         if not isinstance(pretrained_model, list):
             pretrained_model = [pretrained_model]
-        # TODO: load pretrained_model
-        raise NotImplementedError
-        for pretrain in pretrained_model:
-            load_params(exe, program, pretrain)
+        if not isinstance(load_static_weights, list):
+            load_static_weights = [load_static_weights] * len(pretrained_model)
+        for idx, pretrained in enumerate(pretrained_model):
+            load_static = load_static_weights[idx]
+            model = net
+            if use_distillation and not load_static:
+                model = net.teacher
+            load_dygraph_pretrain(
+                model, path=pretrained, load_static_weights=load_static)
         logger.info(
             logger.coloring("Finish initing model from {}".format(
                 pretrained_model), "HEADER"))
 ...
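`init_model` now pairs every entry of `pretrained_model` with its own `load_static_weights` flag, broadcasting a scalar flag across the list; this is what lets the distillation config load the teacher's dygraph weights and the student's static weights in one pass. A small sketch of just that pairing logic (paths below are the ones from R50_vd_distill_MV3_large_x1_0.yaml):

```python
# Sketch of the flag-broadcasting logic in init_model: a scalar
# load_static_weights is expanded to one flag per pretrained path.

def pair_pretrained(pretrained_model, load_static_weights=False):
    """Return (path, load_static) pairs, broadcasting a scalar flag."""
    if not isinstance(pretrained_model, list):
        pretrained_model = [pretrained_model]
    if not isinstance(load_static_weights, list):
        load_static_weights = [load_static_weights] * len(pretrained_model)
    return list(zip(pretrained_model, load_static_weights))

# distillation case: dygraph teacher weights, static student weights
pairs = pair_pretrained(
    ["./pretrained/flowers102_R50_vd_final/ppcls",
     "./pretrained/MobileNetV3_large_x1_0_pretrained/"],
    [False, True])
```

Each pair then drives one `load_dygraph_pretrain` call, targeting `net.teacher` when distillation is on and the weights are dygraph-format.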
tools/infer/infer.py

@@ -18,6 +18,8 @@ import numpy as np
 import paddle.fluid as fluid

 from ppcls.modeling import architectures
+from ppcls.utils.save_load import load_dygraph_pretrain
+

 def parse_args():
     def str2bool(v):
 ...
@@ -28,9 +30,11 @@ def parse_args():
     parser.add_argument("-m", "--model", type=str)
     parser.add_argument("-p", "--pretrained_model", type=str)
     parser.add_argument("--use_gpu", type=str2bool, default=True)
+    parser.add_argument("--load_static_weights", type=str2bool, default=True)
     return parser.parse_args()


 def create_operators():
     size = 224
     img_mean = [0.485, 0.456, 0.406]
 ...
@@ -66,21 +70,19 @@ def main():
     args = parse_args()
     operators = create_operators()
     # assign the place
     if args.use_gpu:
         gpu_id = fluid.dygraph.parallel.Env().dev_id
         place = fluid.CUDAPlace(gpu_id)
     else:
         place = fluid.CPUPlace()

-    pre_weights_dict = fluid.load_program_state(args.pretrained_model)
     with fluid.dygraph.guard(place):
         net = architectures.__dict__[args.model]()
         data = preprocess(args.image_file, operators)
         data = np.expand_dims(data, axis=0)
         data = fluid.dygraph.to_variable(data)
-        dy_weights_dict = net.state_dict()
-        pre_weights_dict_new = {}
-        for key in dy_weights_dict:
-            weights_name = dy_weights_dict[key].name
-            pre_weights_dict_new[key] = pre_weights_dict[weights_name]
-        net.set_dict(pre_weights_dict_new)
+        load_dygraph_pretrain(net, args.pretrained_model,
+                              args.load_static_weights)
         net.eval()
         outputs = net(data)
         outputs = fluid.layers.softmax(outputs)
 ...
@@ -89,9 +91,11 @@ def main():
     probs = postprocess(outputs)
     rank = 1
     for idx, prob in probs:
         print("top{:d}, class id: {:d}, probability: {:.4f}".format(
             rank, idx, prob))
         rank += 1
+    return
+

 if __name__ == "__main__":
     main()
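The new `--load_static_weights` option reuses infer.py's existing `str2bool` helper, whose body is not shown in this diff; the sketch below assumes a typical implementation of such a helper:

```python
# Hypothetical str2bool: infer.py defines its own helper inside parse_args,
# and its exact body is not part of this diff. This is a common pattern
# for boolean CLI flags with argparse.
import argparse

def str2bool(v):
    return str(v).lower() in ("true", "t", "1", "yes")

parser = argparse.ArgumentParser()
parser.add_argument("--load_static_weights", type=str2bool, default=True)

# explicit argv list so the sketch does not read sys.argv
args = parser.parse_args(["--load_static_weights", "false"])
# args.load_static_weights is False; omitting the flag yields the default True
```

Note that plain `type=bool` would not work here, since `bool("false")` is `True`; a parsing helper like this is why the flag accepts textual values.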
tools/program.py

@@ -71,6 +71,8 @@ def create_model(architecture, classes_num):
     """
     name = architecture["name"]
     params = architecture.get("params", {})
+    print(name)
+    print(params)
     return architectures.__dict__[name](class_dim=classes_num, **params)
 ...
tools/train.py

@@ -102,7 +102,7 @@ def main(args):
             config.model_save_dir,
             config.ARCHITECTURE["name"])
-        save_model(net, optimizer, model_path,
-                   "best_model_in_epoch_" + str(epoch_id))
+        save_model(net, optimizer, model_path, "best_model")
     # 3. save the persistable model
     if epoch_id % config.save_interval == 0:
 ...