Commit 1a5a29d0 (unverified)
Authored Aug 26, 2020 by michaelowenliu; committed via GitHub on Aug 26, 2020.

Merge pull request #1 from PaddlePaddle/develop

Develop

Parents: 7fd281e5, 96b1dfa1
Showing 26 changed files with 681 additions and 167 deletions (+681, -167).
contrib/SpatialEmbeddings/README.md  +63 -0
contrib/SpatialEmbeddings/config.py  +24 -0
contrib/SpatialEmbeddings/data/kitti/0007/kitti_0007_000512.png  +0 -0
contrib/SpatialEmbeddings/data/kitti/0007/kitti_0007_000518.png  +0 -0
contrib/SpatialEmbeddings/data/test.txt  +2 -0
contrib/SpatialEmbeddings/download_SpatialEmbeddings_kitti.py  +32 -0
contrib/SpatialEmbeddings/imgs/kitti_0007_000518_ori.png  +0 -0
contrib/SpatialEmbeddings/imgs/kitti_0007_000518_pred.png  +0 -0
contrib/SpatialEmbeddings/infer.py  +109 -0
contrib/SpatialEmbeddings/utils/__init__.py  +0 -0
contrib/SpatialEmbeddings/utils/data_util.py  +87 -0
contrib/SpatialEmbeddings/utils/palette.py  +38 -0
contrib/SpatialEmbeddings/utils/util.py  +47 -0
dygraph/benchmark/deeplabv3p.py  +23 -18
dygraph/benchmark/hrnet.py  +22 -17
dygraph/core/infer.py  +3 -3
dygraph/core/train.py  +61 -59
dygraph/core/val.py  +18 -18
dygraph/infer.py  +1 -1
dygraph/models/hrnet.py  +5 -6
dygraph/train.py  +22 -17
dygraph/utils/__init__.py  +2 -1
dygraph/utils/get_environ_info.py  +113 -0
dygraph/utils/logger.py  +0 -0
dygraph/utils/utils.py  +8 -26
dygraph/val.py  +1 -1
contrib/SpatialEmbeddings/README.md (new file, mode 100644)

# SpatialEmbeddings

## Model Overview

This is a proposal-free instance segmentation model that runs fast enough for real time while keeping accuracy high, which makes it suitable for real-time scenarios such as autonomous driving.
The model is trained on the MOTS dataset from KITTI and is the segmentation component of the paper "Segment as Points for Efficient Online Multi-Object Tracking and Segmentation" ([paper](https://arxiv.org/pdf/2007.01550.pdf)).

## KITTI MOTS Metrics

On the KITTI MOTS validation set: AP 0.76, AP_50% 0.915.

## Usage

### 1. Model download

Run the following command to download and extract the SpatialEmbeddings inference model:
```
python download_SpatialEmbeddings_kitti.py
```
Or download it manually from this [link](https://paddleseg.bj.bcebos.com/models/SpatialEmbeddings_kitti.tar) and extract it.

### 2. Data download

Download the MOTS competition data from the KITTI website ([link](https://www.vision.rwth-aachen.de/page/mots)).
After downloading, extract it into the ./data directory and generate a test.txt listing the paths of the validation images, as sketched below.
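A minimal sketch of one way to generate test.txt. The exact directory layout after extraction is an assumption here; adjust the walk root and extension to match your copy of the data:

```python
# Hypothetical helper, not part of the repository: walk ./data/kitti and
# write image paths relative to ./data into data/test.txt.
import os

with open('data/test.txt', 'w') as f:
    for root, _, files in os.walk('data/kitti'):
        for name in sorted(files):
            if name.endswith('.png'):
                rel = os.path.relpath(os.path.join(root, name), 'data')
                f.write(rel + '\n')
```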
### 3. Quick inference

Inference on GPU:
```
python -u infer.py --use_gpu
```
Inference on CPU:
```
python -u infer.py
```
Detailed settings such as the data and model paths are configured in config.py.

### 4. Example results

Original image:

![](imgs/kitti_0007_000518_ori.png)

Prediction:

![](imgs/kitti_0007_000518_pred.png)

## References

**Paper**

*Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth*

**Code**

https://github.com/davyneven/SpatialEmbeddings
contrib/SpatialEmbeddings/config.py (new file, mode 100644)

```python
# -*- coding: utf-8 -*-
from utils.util import AttrDict, merge_cfg_from_args, get_arguments
import os

args = get_arguments()
cfg = AttrDict()

# Directory containing the images to predict
cfg.data_dir = "data"
# File listing the names of the images to predict
cfg.data_list_file = os.path.join("data", "test.txt")
# Path to load the model from
cfg.model_path = 'SpatialEmbeddings_kitti'
# Directory where prediction results are saved
cfg.vis_dir = "result"
# Number of sigma channels
cfg.n_sigma = 2
# Seed (center-point) threshold
cfg.threshold = 0.94
# Minimum pixel count per instance
cfg.min_pixel = 160

merge_cfg_from_args(args, cfg)
```
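A minimal sketch of how these defaults interact with the command line: merge_cfg_from_args (defined in utils/util.py below) copies every non-None parsed argument into cfg, so a flag such as --use_gpu ends up as cfg.use_gpu. The parse_args call here is illustrative only:

```python
# Illustrative only: simulate `python infer.py --use_gpu`.
import argparse
from utils.util import AttrDict, merge_cfg_from_args

parser = argparse.ArgumentParser()
parser.add_argument('--use_gpu', action='store_true')
args = parser.parse_args(['--use_gpu'])

cfg = AttrDict()
merge_cfg_from_args(args, cfg)
print(cfg.use_gpu)  # True
```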
contrib/SpatialEmbeddings/data/kitti/0007/kitti_0007_000512.png (new file, mode 100755, binary, 952.5 KB)
contrib/SpatialEmbeddings/data/kitti/0007/kitti_0007_000518.png (new file, mode 100755, binary, 960.0 KB)
contrib/SpatialEmbeddings/data/test.txt (new file, mode 100644)

```
kitti/0007/kitti_0007_000512.png
kitti/0007/kitti_0007_000518.png
```
contrib/SpatialEmbeddings/download_SpatialEmbeddings_kitti.py (new file, mode 100644)

```python
# coding: utf8
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
TEST_PATH = os.path.join(LOCAL_PATH, "..", "..", "test")
sys.path.append(TEST_PATH)

from test_utils import download_file_and_uncompress

if __name__ == "__main__":
    download_file_and_uncompress(
        url='https://paddleseg.bj.bcebos.com/models/SpatialEmbeddings_kitti.tar',
        savepath=LOCAL_PATH,
        extrapath=LOCAL_PATH,
        extraname='SpatialEmbeddings_kitti')

    print("Pretrained Model download success!")
```
contrib/SpatialEmbeddings/imgs/kitti_0007_000518_ori.png (new file, mode 100755, binary, 960.0 KB)
contrib/SpatialEmbeddings/imgs/kitti_0007_000518_pred.png (new file, mode 100644, binary, 1.7 KB)
contrib/SpatialEmbeddings/infer.py (new file, mode 100644)

```python
# -*- coding: utf-8 -*-
import os

import numpy as np
from utils.util import get_arguments
from utils.palette import get_palette
from utils.data_util import Cluster, pad_img
from PIL import Image as PILImage
import importlib
import paddle.fluid as fluid

args = get_arguments()
config = importlib.import_module('config')
cfg = getattr(config, 'cfg')
cluster = Cluster()


# Dataset class for inference
class TestDataSet():
    def __init__(self):
        self.data_dir = cfg.data_dir
        self.data_list_file = cfg.data_list_file
        self.data_list = self.get_data_list()
        self.data_num = len(self.data_list)

    def get_data_list(self):
        # Build the list of image paths to predict
        data_list = []
        data_file_handler = open(self.data_list_file, 'r')
        for line in data_file_handler:
            img_name = line.strip()
            if len(img_name.split('.')) == 1:
                img_name = img_name + '.jpg'
            img_path = os.path.join(self.data_dir, img_name)
            data_list.append(img_path)
        return data_list

    def preprocess(self, img):
        # Preprocessing: pad H and W up to multiples of 32 with edge values,
        # scale to [0, 1], and reorder HWC -> NCHW with a batch dimension.
        h, w = img.shape[:2]
        h_new = (h // 32 + 1 if h % 32 != 0 else h // 32) * 32
        w_new = (w // 32 + 1 if w % 32 != 0 else w // 32) * 32
        img = np.pad(img, ((0, h_new - h), (0, w_new - w), (0, 0)), 'edge')
        img = img.astype(np.float32) / 255.0
        img = img.transpose((2, 0, 1))
        img = np.expand_dims(img, axis=0)
        return img

    def get_data(self, index):
        # Fetch one image together with its name prefix and original shape
        img_path = self.data_list[index]
        img = np.array(PILImage.open(img_path))
        if img is None:
            # Guard against an unreadable image; keep the return arity
            # consistent with the success path.
            return None, img_path, None

        img_name = img_path.split(os.sep)[-1]
        name_prefix = img_name.replace('.' + img_name.split('.')[-1], '')
        img_shape = img.shape[:2]
        img_process = self.preprocess(img)

        return img_process, name_prefix, img_shape


def infer():
    if not os.path.exists(cfg.vis_dir):
        os.makedirs(cfg.vis_dir)

    place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace()
    exe = fluid.Executor(place)

    # Load the inference model
    test_prog, feed_name, fetch_list = fluid.io.load_inference_model(
        dirname=cfg.model_path, executor=exe, params_filename='__params__')

    # Load the inference dataset
    test_dataset = TestDataSet()
    data_num = test_dataset.data_num

    for idx in range(data_num):
        # Fetch the data
        image, im_name, im_shape = test_dataset.get_data(idx)
        if image is None:
            print(im_name, 'is None')
            continue

        # Run inference
        output = exe.run(program=test_prog,
                         feed={feed_name[0]: image},
                         fetch_list=fetch_list)
        instance_map, predictions = cluster.cluster(
            output[0][0],
            n_sigma=cfg.n_sigma,
            min_pixel=cfg.min_pixel,
            threshold=cfg.threshold)

        # Save the prediction: crop padding away and color instance ids
        instance_map = pad_img(instance_map, image.shape[2:])
        instance_map = instance_map[:im_shape[0], :im_shape[1]]
        output_im = PILImage.fromarray(np.asarray(instance_map, dtype=np.uint8))
        palette = get_palette(len(predictions) + 1)
        output_im.putpalette(palette)
        result_path = os.path.join(cfg.vis_dir, im_name + '.png')
        output_im.save(result_path)

        if (idx + 1) % 100 == 0:
            print('%d processed' % (idx + 1))

    print('%d processed done' % (idx + 1))

    return 0


if __name__ == "__main__":
    infer()
```
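The saved result is a palettized PNG whose pixel values are instance ids, with 0 as background. A minimal sketch of reading one back (the path assumes the default cfg.vis_dir and the sample image above):

```python
import numpy as np
from PIL import Image

pred = np.array(Image.open('result/kitti_0007_000518.png'))
print('instances found:', int(pred.max()))  # highest id = instance count
```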
contrib/SpatialEmbeddings/utils/__init__.py (new file, mode 100644, empty)
contrib/SpatialEmbeddings/utils/data_util.py (new file, mode 100644)

```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import numpy as np
from PIL import Image as PILImage


def sigmoid_np(x):
    return 1 / (1 + np.exp(-x))


class Cluster:
    def __init__(self, ):
        # Fixed 2-D coordinate grid (2 x 1024 x 2048) covering the largest
        # supported input; sliced down to the actual H x W in cluster().
        xm = np.repeat(
            np.linspace(0, 2, 2048)[np.newaxis, np.newaxis, :], 1024, axis=1)
        ym = np.repeat(
            np.linspace(0, 1, 1024)[np.newaxis, :, np.newaxis], 2048, axis=2)
        self.xym = np.vstack((xm, ym))

    def cluster(self, prediction, n_sigma=1, min_pixel=160, threshold=0.5):
        height, width = prediction.shape[1:3]
        xym_s = self.xym[:, 0:height, 0:width]

        # Channel layout: [0:2] spatial offsets, [2:2+n_sigma] sigma,
        # [2+n_sigma] seed map.
        spatial_emb = np.tanh(prediction[0:2]) + xym_s
        sigma = prediction[2:2 + n_sigma]
        seed_map = sigmoid_np(prediction[2 + n_sigma:2 + n_sigma + 1])

        instance_map = np.zeros((height, width), np.float32)
        instances = []
        count = 1
        mask = seed_map > 0.5

        if mask.sum() > min_pixel:
            spatial_emb_masked = spatial_emb[np.repeat(
                mask, spatial_emb.shape[0], 0)].reshape(2, -1)
            sigma_masked = sigma[np.repeat(mask, n_sigma,
                                           0)].reshape(n_sigma, -1)
            seed_map_masked = seed_map[mask].reshape(1, -1)

            unclustered = np.ones(mask.sum(), np.float32)
            instance_map_masked = np.zeros(mask.sum(), np.float32)

            # Greedily grow one instance at a time around the highest-scoring
            # unclustered seed until too few candidate pixels remain.
            while (unclustered.sum() > min_pixel):
                seed = (seed_map_masked * unclustered).argmax().item()
                seed_score = (seed_map_masked * unclustered).max().item()
                if seed_score < threshold:
                    break
                center = spatial_emb_masked[:, seed:seed + 1]
                unclustered[seed] = 0
                s = np.exp(sigma_masked[:, seed:seed + 1] * 10)
                dist = np.exp(-1 * np.sum(
                    (spatial_emb_masked - center)**2 * s, 0))

                proposal = (dist > 0.5).squeeze()

                if proposal.sum() > min_pixel:
                    if unclustered[proposal].sum() / proposal.sum() > 0.5:
                        instance_map_masked[proposal.squeeze()] = count
                        instance_mask = np.zeros((height, width), np.float32)
                        instance_mask[mask.squeeze()] = proposal
                        instances.append({
                            'mask': (instance_mask.squeeze() * 255).astype(np.uint8),
                            'score': seed_score
                        })
                        count += 1

                unclustered[proposal] = 0

            instance_map[mask.squeeze()] = instance_map_masked

        return instance_map, instances


def pad_img(img, dst_shape, mode='constant'):
    img_h, img_w = img.shape[:2]
    dst_h, dst_w = dst_shape
    pad_shape = ((0, max(0, dst_h - img_h)), (0, max(0, dst_w - img_w)))
    return np.pad(img, pad_shape, mode)


def save_for_eval(predictions, infer_shape, im_shape, vis_dir, im_name):
    txt_file = os.path.join(vis_dir, im_name + '.txt')
    with open(txt_file, 'w') as f:
        for id, pred in enumerate(predictions):
            save_name = im_name + '_{:02d}.png'.format(id)
            pred_mask = pad_img(pred['mask'], infer_shape)
            pred_mask = pred_mask[:im_shape[0], :im_shape[1]]
            im = PILImage.fromarray(pred_mask)
            im.save(os.path.join(vis_dir, save_name))
            cl = 26  # Cityscapes-style label id for 'car'
            score = pred['score']
            f.writelines("{} {} {:.02f}\n".format(save_name, cl, score))
```
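The clustering entry point takes the raw network output directly. A minimal usage sketch, assuming a (2 + n_sigma + 1)-channel prediction of shape (C, H, W) with H ≤ 1024 and W ≤ 2048 (the coordinate grid built in Cluster.__init__ caps the input size); the random array stands in for a real model output:

```python
import numpy as np
from utils.data_util import Cluster

# Fake network output: [offset_x, offset_y, sigma_0, sigma_1, seed] channels.
prediction = np.random.randn(5, 384, 1248).astype(np.float32)

cluster = Cluster()
instance_map, instances = cluster.cluster(
    prediction, n_sigma=2, min_pixel=160, threshold=0.94)

print(instance_map.shape)                     # (384, 1248); 0 = background
print([inst['score'] for inst in instances])  # per-instance seed scores
```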
contrib/SpatialEmbeddings/utils/palette.py (new file, mode 100644)

```python
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
## Created by: RainbowSecret
## Microsoft Research
## yuyua@microsoft.com
## Copyright (c) 2018
##
## This source code is licensed under the MIT-style license found in the
## LICENSE file in the root directory of this source tree
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import cv2


def get_palette(num_cls):
    """ Returns the color map for visualizing the segmentation mask.
    Args:
        num_cls: Number of classes
    Returns:
        The color map
    """
    n = num_cls
    palette = [0] * (n * 3)
    for j in range(0, n):
        lab = j
        palette[j * 3 + 0] = 0
        palette[j * 3 + 1] = 0
        palette[j * 3 + 2] = 0
        i = 0
        # Spread the bits of the class id over the R, G and B channels
        # (the classic PASCAL VOC palette construction).
        while lab:
            palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i))
            palette[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i))
            palette[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i))
            i += 1
            lab >>= 3
    return palette
```
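A minimal usage sketch: get_palette returns a flat [R0, G0, B0, R1, G1, B1, ...] list that can be attached to a PIL image whose pixel values are instance ids, which is how infer.py colors its output. The file name here is illustrative:

```python
import numpy as np
from PIL import Image
from utils.palette import get_palette

instance_map = np.random.randint(0, 4, (100, 100), dtype=np.uint8)  # ids 0..3
im = Image.fromarray(instance_map)  # grayscale image of instance ids
im.putpalette(get_palette(4))       # one RGB triple per id; converts to 'P' mode
im.save('colored_instances.png')    # hypothetical output path
```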
contrib/SpatialEmbeddings/utils/util.py (new file, mode 100644)

```python
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import argparse
import os


def get_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--use_gpu", action="store_true", help="Use gpu or cpu to test.")
    parser.add_argument(
        '--example', type=str, help='RoadLine, HumanSeg or ACE2P')

    return parser.parse_args()


class AttrDict(dict):
    def __init__(self, *args, **kwargs):
        super(AttrDict, self).__init__(*args, **kwargs)

    def __getattr__(self, name):
        if name in self.__dict__:
            return self.__dict__[name]
        elif name in self:
            return self[name]
        else:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if name in self.__dict__:
            self.__dict__[name] = value
        else:
            self[name] = value


def merge_cfg_from_args(args, cfg):
    """Merge config keys, values in args into the global config."""
    for k, v in vars(args).items():
        d = cfg
        try:
            value = eval(v)
        except:
            value = v
        if value is not None:
            cfg[k] = value
```
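A minimal sketch of the AttrDict behavior the config module relies on: values written as attributes land in the underlying dict and vice versa:

```python
from utils.util import AttrDict

cfg = AttrDict()
cfg.model_path = 'SpatialEmbeddings_kitti'  # attribute write stores a dict entry
print(cfg['model_path'])                    # key read sees it
cfg['threshold'] = 0.94                     # key write
print(cfg.threshold)                        # attribute read sees it
```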
dygraph/benchmark/deeplabv3p.py (modified)

```diff
@@ -21,6 +21,7 @@ from dygraph.datasets import DATASETS
 import dygraph.transforms as T
 from dygraph.models import MODELS
 from dygraph.utils import get_environ_info
+from dygraph.utils import logger
 from dygraph.core import train
@@ -60,11 +61,11 @@ def parse_args():
         default=[512, 512],
         type=int)
     parser.add_argument(
-        '--num_epochs',
-        dest='num_epochs',
-        help='Number epochs for training',
+        '--iters',
+        dest='iters',
+        help='iters for training',
         type=int,
-        default=100)
+        default=10000)
     parser.add_argument(
         '--batch_size',
         dest='batch_size',
@@ -90,9 +91,9 @@ def parse_args():
         type=str,
         default=None)
     parser.add_argument(
-        '--save_interval_epochs',
-        dest='save_interval_epochs',
-        help='The interval epochs for save a model snapshot',
+        '--save_interval_iters',
+        dest='save_interval_iters',
+        help='The interval iters for save a model snapshot',
         type=int,
         default=5)
     parser.add_argument(
@@ -113,9 +114,9 @@ def parse_args():
         help='Eval while training',
         action='store_true')
     parser.add_argument(
-        '--log_steps',
-        dest='log_steps',
-        help='Display logging information at every log_steps',
+        '--log_iters',
+        dest='log_iters',
+        help='Display logging information at every log_iters',
         default=10,
         type=int)
     parser.add_argument(
@@ -129,8 +130,13 @@ def parse_args():
 def main(args):
     env_info = get_environ_info()
+    info = ['{}: {}'.format(k, v) for k, v in env_info.items()]
+    info = '\n'.join(['\n', format('Environment Information', '-^48s')] +
+                     info + ['-' * 48])
+    logger.info(info)
+
     places = fluid.CUDAPlace(ParallelEnv().dev_id) \
-        if env_info['place'] == 'cuda' and fluid.is_compiled_with_cuda() \
+        if env_info['Paddle compiled with cuda'] and env_info['GPUs used'] \
         else fluid.CPUPlace()
 
     if args.dataset not in DATASETS:
@@ -155,7 +161,7 @@ def main(args):
     eval_dataset = None
     if args.do_eval:
         eval_transforms = T.Compose(
-            [T.Padding((2049, 1025)),
+            [T.Resize(args.input_size),
              T.Normalize()])
         eval_dataset = dataset(
             dataset_root=args.dataset_root,
@@ -170,11 +176,10 @@ def main(args):
     # Creat optimizer
     # todo, may less one than len(loader)
-    num_steps_each_epoch = len(train_dataset) // (
+    num_iters_each_epoch = len(train_dataset) // (
         args.batch_size * ParallelEnv().nranks)
-    decay_step = args.num_epochs * num_steps_each_epoch
     lr_decay = fluid.layers.polynomial_decay(
-        args.learning_rate, decay_step, end_learning_rate=0, power=0.9)
+        args.learning_rate, args.iters, end_learning_rate=0, power=0.9)
     optimizer = fluid.optimizer.Momentum(
         lr_decay,
         momentum=0.9,
@@ -188,12 +193,12 @@ def main(args):
         eval_dataset=eval_dataset,
         optimizer=optimizer,
         save_dir=args.save_dir,
-        num_epochs=args.num_epochs,
+        iters=args.iters,
         batch_size=args.batch_size,
         pretrained_model=args.pretrained_model,
         resume_model=args.resume_model,
-        save_interval_epochs=args.save_interval_epochs,
-        log_steps=args.log_steps,
+        save_interval_iters=args.save_interval_iters,
+        log_iters=args.log_iters,
         num_classes=train_dataset.num_classes,
         num_workers=args.num_workers,
         use_vdl=args.use_vdl)
```
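The learning-rate hunk above replaces the epoch-derived decay horizon with the explicit --iters value. A minimal sketch of the resulting schedule, mirroring what fluid.layers.polynomial_decay computes with end_learning_rate=0 and power=0.9 (a plain-Python re-derivation, not the Paddle op itself):

```python
def poly_lr(base_lr, step, decay_steps, power=0.9, end_lr=0.0):
    # Polynomial decay: lr shrinks from base_lr to end_lr over decay_steps.
    step = min(step, decay_steps)
    return (base_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(poly_lr(0.01, 0, 10000))      # 0.01 at iteration 0
print(poly_lr(0.01, 5000, 10000))   # ~0.0054 halfway
print(poly_lr(0.01, 10000, 10000))  # 0.0 at the final iteration
```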
dygraph/benchmark/hrnet.py (modified)

```diff
@@ -21,6 +21,7 @@ from dygraph.datasets import DATASETS
 import dygraph.transforms as T
 from dygraph.models import MODELS
 from dygraph.utils import get_environ_info
+from dygraph.utils import logger
 from dygraph.core import train
@@ -60,11 +61,11 @@ def parse_args():
         default=[512, 512],
         type=int)
     parser.add_argument(
-        '--num_epochs',
-        dest='num_epochs',
-        help='Number epochs for training',
+        '--iters',
+        dest='iters',
+        help='iters for training',
        type=int,
-        default=100)
+        default=10000)
     parser.add_argument(
         '--batch_size',
         dest='batch_size',
@@ -90,9 +91,9 @@ def parse_args():
         type=str,
         default=None)
     parser.add_argument(
-        '--save_interval_epochs',
-        dest='save_interval_epochs',
-        help='The interval epochs for save a model snapshot',
+        '--save_interval_iters',
+        dest='save_interval_iters',
+        help='The interval iters for save a model snapshot',
         type=int,
         default=5)
     parser.add_argument(
@@ -113,9 +114,9 @@ def parse_args():
         help='Eval while training',
         action='store_true')
     parser.add_argument(
-        '--log_steps',
-        dest='log_steps',
-        help='Display logging information at every log_steps',
+        '--log_iters',
+        dest='log_iters',
+        help='Display logging information at every log_iters',
         default=10,
         type=int)
     parser.add_argument(
@@ -129,8 +130,13 @@ def parse_args():
 def main(args):
     env_info = get_environ_info()
+    info = ['{}: {}'.format(k, v) for k, v in env_info.items()]
+    info = '\n'.join(['\n', format('Environment Information', '-^48s')] +
+                     info + ['-' * 48])
+    logger.info(info)
+
     places = fluid.CUDAPlace(ParallelEnv().dev_id) \
-        if env_info['place'] == 'cuda' and fluid.is_compiled_with_cuda() \
+        if env_info['Paddle compiled with cuda'] and env_info['GPUs used'] \
         else fluid.CPUPlace()
 
     if args.dataset not in DATASETS:
@@ -168,11 +174,10 @@ def main(args):
     # Creat optimizer
     # todo, may less one than len(loader)
-    num_steps_each_epoch = len(train_dataset) // (
+    num_iters_each_epoch = len(train_dataset) // (
         args.batch_size * ParallelEnv().nranks)
-    decay_step = args.num_epochs * num_steps_each_epoch
     lr_decay = fluid.layers.polynomial_decay(
-        args.learning_rate, decay_step, end_learning_rate=0, power=0.9)
+        args.learning_rate, args.iters, end_learning_rate=0, power=0.9)
     optimizer = fluid.optimizer.Momentum(
         lr_decay,
         momentum=0.9,
@@ -186,12 +191,12 @@ def main(args):
         eval_dataset=eval_dataset,
         optimizer=optimizer,
         save_dir=args.save_dir,
-        num_epochs=args.num_epochs,
+        iters=args.iters,
         batch_size=args.batch_size,
         pretrained_model=args.pretrained_model,
         resume_model=args.resume_model,
-        save_interval_epochs=args.save_interval_epochs,
-        log_steps=args.log_steps,
+        save_interval_iters=args.save_interval_iters,
+        log_iters=args.log_iters,
         num_classes=train_dataset.num_classes,
         num_workers=args.num_workers,
         use_vdl=args.use_vdl)
```
dygraph/core/infer.py (modified)

```diff
@@ -21,7 +21,7 @@ import cv2
 import tqdm
 
 from dygraph import utils
-import dygraph.utils.logging as logging
+import dygraph.utils.logger as logger
 
 def mkdir(path):
@@ -39,7 +39,7 @@ def infer(model, test_dataset=None, model_dir=None, save_dir='output'):
     added_saved_dir = os.path.join(save_dir, 'added')
     pred_saved_dir = os.path.join(save_dir, 'prediction')
 
-    logging.info("Start to predict...")
+    logger.info("Start to predict...")
     for im, im_info, im_path in tqdm.tqdm(test_dataset):
         im = to_variable(im)
         pred, _ = model(im)
@@ -56,7 +56,7 @@ def infer(model, test_dataset=None, model_dir=None, save_dir='output'):
             raise Exception("Unexpected info '{}' in im_info".format(
                 info[0]))
-        im_file = im_path.replace(test_dataset.data_dir, '')
+        im_file = im_path.replace(test_dataset.dataset_root, '')
         if im_file[0] == '/':
             im_file = im_file[1:]
 
         # save added image
```
dygraph/core/train.py (modified)

```diff
@@ -19,7 +19,7 @@ from paddle.fluid.dygraph.parallel import ParallelEnv
 from paddle.fluid.io import DataLoader
 from paddle.incubate.hapi.distributed import DistributedBatchSampler
 
-import dygraph.utils.logging as logging
+import dygraph.utils.logger as logger
 from dygraph.utils import load_pretrained_model
 from dygraph.utils import resume
 from dygraph.utils import Timer, calculate_eta
@@ -32,21 +32,21 @@ def train(model,
           eval_dataset=None,
           optimizer=None,
           save_dir='output',
-          num_epochs=100,
+          iters=10000,
           batch_size=2,
           pretrained_model=None,
           resume_model=None,
-          save_interval_epochs=1,
-          log_steps=10,
+          save_interval_iters=1000,
+          log_iters=10,
           num_classes=None,
           num_workers=8,
           use_vdl=False):
     ignore_index = model.ignore_index
     nranks = ParallelEnv().nranks
 
-    start_epoch = 0
+    start_iter = 0
     if resume_model is not None:
-        start_epoch = resume(model, optimizer, resume_model)
+        start_iter = resume(model, optimizer, resume_model)
     elif pretrained_model is not None:
         load_pretrained_model(model, pretrained_model)
@@ -75,16 +75,19 @@ def train(model,
     timer = Timer()
     avg_loss = 0.0
-    steps_per_epoch = len(batch_sampler)
-    total_steps = steps_per_epoch * (num_epochs - start_epoch)
-    num_steps = 0
+    iters_per_epoch = len(batch_sampler)
     best_mean_iou = -1.0
-    best_model_epoch = -1
+    best_model_iter = -1
     train_reader_cost = 0.0
     train_batch_cost = 0.0
-    for epoch in range(start_epoch, num_epochs):
-        timer.start()
-        for step, data in enumerate(loader):
+    timer.start()
+
+    iter = 0
+    while iter < iters:
+        for data in loader:
+            iter += 1
+            if iter > iters:
+                break
             train_reader_cost += timer.elapsed_time()
             images = data[0]
             labels = data[1].astype('int64')
@@ -101,64 +104,63 @@ def train(model,
             model.clear_gradients()
             avg_loss += loss.numpy()[0]
             lr = optimizer.current_step_lr()
-            num_steps += 1
             train_batch_cost += timer.elapsed_time()
-            if num_steps % log_steps == 0 and ParallelEnv().local_rank == 0:
-                avg_loss /= log_steps
-                avg_train_reader_cost = train_reader_cost / log_steps
-                avg_train_batch_cost = train_batch_cost / log_steps
+            if (iter) % log_iters == 0 and ParallelEnv().local_rank == 0:
+                avg_loss /= log_iters
+                avg_train_reader_cost = train_reader_cost / log_iters
+                avg_train_batch_cost = train_batch_cost / log_iters
                 train_reader_cost = 0.0
                 train_batch_cost = 0.0
-                remain_steps = total_steps - num_steps
-                eta = calculate_eta(remain_steps, avg_train_batch_cost)
-                logging.info(
-                    "[TRAIN] Epoch={}/{}, Step={}/{}, loss={:.4f}, lr={:.6f}, batch_cost={:.4f}, reader_cost={:.4f} | ETA {}"
-                    .format(epoch + 1, num_epochs, step + 1, steps_per_epoch,
+                remain_iters = iters - iter
+                eta = calculate_eta(remain_iters, avg_train_batch_cost)
+                logger.info(
                    "[TRAIN] epoch={}, iter={}/{}, loss={:.4f}, lr={:.6f}, batch_cost={:.4f}, reader_cost={:.4f} | ETA {}"
+                    .format((iter - 1) // iters_per_epoch + 1, iter, iters,
                             avg_loss * nranks, lr, avg_train_batch_cost,
                             avg_train_reader_cost, eta))
                 if use_vdl:
                     log_writer.add_scalar('Train/loss', avg_loss * nranks,
-                                          num_steps)
-                    log_writer.add_scalar('Train/lr', lr, num_steps)
+                                          iter)
+                    log_writer.add_scalar('Train/lr', lr, iter)
                     log_writer.add_scalar('Train/batch_cost',
-                                          avg_train_batch_cost, num_steps)
+                                          avg_train_batch_cost, iter)
                     log_writer.add_scalar('Train/reader_cost',
-                                          avg_train_reader_cost, num_steps)
+                                          avg_train_reader_cost, iter)
                 avg_loss = 0.0
             timer.restart()
 
-        if ((epoch + 1) % save_interval_epochs == 0
-                or epoch + 1 == num_epochs) and ParallelEnv().local_rank == 0:
-            current_save_dir = os.path.join(save_dir,
-                                            "epoch_{}".format(epoch + 1))
-            if not os.path.isdir(current_save_dir):
-                os.makedirs(current_save_dir)
-            fluid.save_dygraph(model.state_dict(),
-                               os.path.join(current_save_dir, 'model'))
-            fluid.save_dygraph(optimizer.state_dict(),
-                               os.path.join(current_save_dir, 'model'))
-
-            if eval_dataset is not None:
-                mean_iou, avg_acc = evaluate(
-                    model,
-                    eval_dataset,
-                    model_dir=current_save_dir,
-                    num_classes=num_classes,
-                    ignore_index=ignore_index,
-                    epoch_id=epoch + 1)
-                if mean_iou > best_mean_iou:
-                    best_mean_iou = mean_iou
-                    best_model_epoch = epoch + 1
-                    best_model_dir = os.path.join(save_dir, "best_model")
-                    fluid.save_dygraph(model.state_dict(),
-                                       os.path.join(best_model_dir, 'model'))
-                logging.info(
-                    'Current evaluated best model in eval_dataset is epoch_{}, miou={:4f}'
-                    .format(best_model_epoch, best_mean_iou))
-
-                if use_vdl:
-                    log_writer.add_scalar('Evaluate/mIoU', mean_iou, epoch + 1)
-                    log_writer.add_scalar('Evaluate/aAcc', avg_acc, epoch + 1)
-                model.train()
+            if (iter % save_interval_iters == 0
+                    or iter == iters) and ParallelEnv().local_rank == 0:
+                current_save_dir = os.path.join(save_dir,
+                                                "iter_{}".format(iter))
+                if not os.path.isdir(current_save_dir):
+                    os.makedirs(current_save_dir)
+                fluid.save_dygraph(model.state_dict(),
+                                   os.path.join(current_save_dir, 'model'))
+                fluid.save_dygraph(optimizer.state_dict(),
+                                   os.path.join(current_save_dir, 'model'))
+
+                if eval_dataset is not None:
+                    mean_iou, avg_acc = evaluate(
+                        model,
+                        eval_dataset,
+                        model_dir=current_save_dir,
+                        num_classes=num_classes,
+                        ignore_index=ignore_index,
+                        iter_id=iter)
+                    if mean_iou > best_mean_iou:
+                        best_mean_iou = mean_iou
+                        best_model_iter = iter
+                        best_model_dir = os.path.join(save_dir, "best_model")
+                        fluid.save_dygraph(
+                            model.state_dict(),
+                            os.path.join(best_model_dir, 'model'))
+                    logger.info(
+                        'Current evaluated best model in eval_dataset is iter_{}, miou={:4f}'
+                        .format(best_model_iter, best_mean_iou))
+
+                    if use_vdl:
+                        log_writer.add_scalar('Evaluate/mIoU', mean_iou, iter)
+                        log_writer.add_scalar('Evaluate/aAcc', avg_acc, iter)
+                    model.train()
     if use_vdl:
         log_writer.close()
```
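The core of this rewrite is the switch from a nested epoch/step loop to a single iteration counter that cycles the loader. A minimal stand-alone sketch of that control flow (not the repository's code):

```python
def iterate(loader, iters, start_iter=0):
    # Cycle the loader until exactly `iters` batches have been consumed.
    it = start_iter
    while it < iters:
        for data in loader:      # each full pass over the loader is one epoch
            it += 1
            if it > iters:
                break
            yield it, data       # stand-in for one training step

for it, batch in iterate(['a', 'b', 'c'], iters=7):
    print(it, (it - 1) // 3 + 1, batch)  # iteration, derived epoch, batch
```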
dygraph/core/val.py (modified)

```diff
@@ -20,7 +20,7 @@ import cv2
 from paddle.fluid.dygraph.base import to_variable
 import paddle.fluid as fluid
-import dygraph.utils.logging as logging
+import dygraph.utils.logger as logger
 from dygraph.utils import ConfusionMatrix
 from dygraph.utils import Timer, calculate_eta
@@ -30,22 +30,22 @@ def evaluate(model,
              model_dir=None,
              num_classes=None,
              ignore_index=255,
-             epoch_id=None):
+             iter_id=None):
     ckpt_path = os.path.join(model_dir, 'model')
     para_state_dict, opti_state_dict = fluid.load_dygraph(ckpt_path)
     model.set_dict(para_state_dict)
     model.eval()
 
-    total_steps = len(eval_dataset)
+    total_iters = len(eval_dataset)
     conf_mat = ConfusionMatrix(num_classes, streaming=True)
 
-    logging.info(
-        "Start to evaluating(total_samples={}, total_steps={})...".format(
-            len(eval_dataset), total_steps))
+    logger.info(
+        "Start to evaluating(total_samples={}, total_iters={})...".format(
+            len(eval_dataset), total_iters))
     timer = Timer()
     timer.start()
-    for step, (im, im_info, label) in tqdm.tqdm(
-            enumerate(eval_dataset), total=total_steps):
+    for iter, (im, im_info, label) in tqdm.tqdm(
+            enumerate(eval_dataset), total=total_iters):
         im = to_variable(im)
         pred, _ = model(im)
         pred = pred.numpy().astype('float32')
@@ -67,19 +67,19 @@ def evaluate(model,
         conf_mat.calculate(pred=pred, label=label, ignore=mask)
         _, iou = conf_mat.mean_iou()
 
-        time_step = timer.elapsed_time()
-        remain_step = total_steps - step - 1
-        logging.debug(
-            "[EVAL] Epoch={}, Step={}/{}, iou={:4f}, sec/step={:.4f} | ETA {}".
-            format(epoch_id, step + 1, total_steps, iou, time_step,
-                   calculate_eta(remain_step, time_step)))
+        time_iter = timer.elapsed_time()
+        remain_iter = total_iters - iter - 1
+        logger.debug(
+            "[EVAL] iter_id={}, iter={}/{}, iou={:4f}, sec/iter={:.4f} | ETA {}".
+            format(iter_id, iter + 1, total_iters, iou, time_iter,
+                   calculate_eta(remain_iter, time_iter)))
         timer.restart()
 
     category_iou, miou = conf_mat.mean_iou()
     category_acc, macc = conf_mat.accuracy()
-    logging.info("[EVAL] #Images={} mAcc={:.4f} mIoU={:.4f}".format(
+    logger.info("[EVAL] #Images={} mAcc={:.4f} mIoU={:.4f}".format(
         len(eval_dataset), macc, miou))
-    logging.info("[EVAL] Category IoU: " + str(category_iou))
-    logging.info("[EVAL] Category Acc: " + str(category_acc))
-    logging.info("[EVAL] Kappa:{:.4f} ".format(conf_mat.kappa()))
+    logger.info("[EVAL] Category IoU: " + str(category_iou))
+    logger.info("[EVAL] Category Acc: " + str(category_acc))
+    logger.info("[EVAL] Kappa:{:.4f} ".format(conf_mat.kappa()))
 
     return miou, macc
```
dygraph/infer.py (modified)

```diff
@@ -84,7 +84,7 @@ def parse_args():
 def main(args):
     env_info = get_environ_info()
     places = fluid.CUDAPlace(ParallelEnv().dev_id) \
-        if env_info['place'] == 'cuda' and fluid.is_compiled_with_cuda() \
+        if env_info['Paddle compiled with cuda'] and env_info['GPUs used'] \
         else fluid.CPUPlace()
 
     if args.dataset not in DATASETS:
```
dygraph/models/hrnet.py (modified)

```diff
@@ -216,26 +216,25 @@ class ConvBNLayer(fluid.dygraph.Layer):
             stride=stride,
             padding=(filter_size - 1) // 2,
             groups=groups,
-            act=None,
             param_attr=ParamAttr(
                 initializer=Normal(scale=0.001), name=name + "_weights"),
             bias_attr=False)
 
         bn_name = name + '_bn'
         self._batch_norm = BatchNorm(
             num_filters,
-            act=act,
-            param_attr=ParamAttr(
+            weight_attr=ParamAttr(
                 name=bn_name + '_scale',
                 initializer=fluid.initializer.Constant(1.0)),
             bias_attr=ParamAttr(
                 bn_name + '_offset',
-                initializer=fluid.initializer.Constant(0.0)),
-            moving_mean_name=bn_name + '_mean',
-            moving_variance_name=bn_name + '_variance')
+                initializer=fluid.initializer.Constant(0.0)))
+        self.act = act
 
     def forward(self, input):
         y = self._conv(input)
         y = self._batch_norm(y)
+        if self.act == 'relu':
+            y = fluid.layers.relu(y)
         return y
```
dygraph/train.py (modified)

```diff
@@ -22,6 +22,7 @@ import dygraph.transforms as T
 #from dygraph.models import MODELS
 from dygraph.cvlibs import manager
 from dygraph.utils import get_environ_info
+from dygraph.utils import logger
 from dygraph.core import train
@@ -61,11 +62,11 @@ def parse_args():
         default=[512, 512],
         type=int)
     parser.add_argument(
-        '--num_epochs',
-        dest='num_epochs',
-        help='Number epochs for training',
+        '--iters',
+        dest='iters',
+        help='iters for training',
         type=int,
-        default=100)
+        default=10000)
     parser.add_argument(
         '--batch_size',
         dest='batch_size',
@@ -91,9 +92,9 @@ def parse_args():
         type=str,
         default=None)
     parser.add_argument(
-        '--save_interval_epochs',
-        dest='save_interval_epochs',
-        help='The interval epochs for save a model snapshot',
+        '--save_interval_iters',
+        dest='save_interval_iters',
+        help='The interval iters for save a model snapshot',
         type=int,
         default=5)
     parser.add_argument(
@@ -114,9 +115,9 @@ def parse_args():
         help='Eval while training',
         action='store_true')
     parser.add_argument(
-        '--log_steps',
-        dest='log_steps',
-        help='Display logging information at every log_steps',
+        '--log_iters',
+        dest='log_iters',
+        help='Display logging information at every log_iters',
         default=10,
         type=int)
     parser.add_argument(
@@ -130,8 +131,13 @@ def parse_args():
 def main(args):
     env_info = get_environ_info()
+    info = ['{}: {}'.format(k, v) for k, v in env_info.items()]
+    info = '\n'.join(['\n', format('Environment Information', '-^48s')] +
+                     info + ['-' * 48])
+    logger.info(info)
+
     places = fluid.CUDAPlace(ParallelEnv().dev_id) \
-        if env_info['place'] == 'cuda' and fluid.is_compiled_with_cuda() \
+        if env_info['Paddle compiled with cuda'] and env_info['GPUs used'] \
         else fluid.CPUPlace()
 
     if args.dataset not in DATASETS:
@@ -166,11 +172,10 @@ def main(args):
     # Creat optimizer
     # todo, may less one than len(loader)
-    num_steps_each_epoch = len(train_dataset) // (
+    num_iters_each_epoch = len(train_dataset) // (
         args.batch_size * ParallelEnv().nranks)
-    decay_step = args.num_epochs * num_steps_each_epoch
     lr_decay = fluid.layers.polynomial_decay(
-        args.learning_rate, decay_step, end_learning_rate=0, power=0.9)
+        args.learning_rate, args.iters, end_learning_rate=0, power=0.9)
     optimizer = fluid.optimizer.Momentum(
         lr_decay,
         momentum=0.9,
@@ -184,12 +189,12 @@ def main(args):
         eval_dataset=eval_dataset,
         optimizer=optimizer,
         save_dir=args.save_dir,
-        num_epochs=args.num_epochs,
+        iters=args.iters,
         batch_size=args.batch_size,
         pretrained_model=args.pretrained_model,
         resume_model=args.resume_model,
-        save_interval_epochs=args.save_interval_epochs,
-        log_steps=args.log_steps,
+        save_interval_iters=args.save_interval_iters,
+        log_iters=args.log_iters,
         num_classes=train_dataset.num_classes,
         num_workers=args.num_workers,
         use_vdl=args.use_vdl)
```
dygraph/utils/__init__.py (modified)

```diff
@@ -12,8 +12,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from . import logging
+from . import logger
 from . import download
 from .metrics import ConfusionMatrix
 from .utils import *
 from .timer import Timer, calculate_eta
+from .get_environ_info import get_environ_info
```
dygraph/utils/get_environ_info.py (new file, mode 100644)

```python
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
from collections import OrderedDict
import subprocess
import glob

import paddle
import paddle.fluid as fluid
import cv2

IS_WINDOWS = sys.platform == 'win32'


def _find_cuda_home():
    '''Finds the CUDA install path. It refers to the implementation of
    pytorch <https://github.com/pytorch/pytorch/blob/master/torch/utils/cpp_extension.py>.
    '''
    # Guess #1
    cuda_home = os.environ.get('CUDA_HOME') or os.environ.get('CUDA_PATH')
    if cuda_home is None:
        # Guess #2
        try:
            which = 'where' if IS_WINDOWS else 'which'
            nvcc = subprocess.check_output([which,
                                            'nvcc']).decode().rstrip('\r\n')
            cuda_home = os.path.dirname(os.path.dirname(nvcc))
        except Exception:
            # Guess #3
            if IS_WINDOWS:
                cuda_homes = glob.glob(
                    'C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v*.*')
                if len(cuda_homes) == 0:
                    cuda_home = ''
                else:
                    cuda_home = cuda_homes[0]
            else:
                cuda_home = '/usr/local/cuda'
            if not os.path.exists(cuda_home):
                cuda_home = None
    return cuda_home


def _get_nvcc_info(cuda_home):
    if cuda_home is not None and os.path.isdir(cuda_home):
        try:
            nvcc = os.path.join(cuda_home, 'bin/nvcc')
            nvcc = subprocess.check_output(
                "{} -V".format(nvcc), shell=True).decode()
            nvcc = nvcc.strip().split('\n')[-1]
        except subprocess.SubprocessError:
            nvcc = "Not Available"
    else:
        # Fix for the original code, which would return an unbound `nvcc`
        # when no CUDA home is found.
        nvcc = "Not Available"
    return nvcc


def _get_gpu_info():
    try:
        gpu_info = subprocess.check_output(['nvidia-smi',
                                            '-L']).decode().strip()
        gpu_info = gpu_info.split('\n')
        for i in range(len(gpu_info)):
            gpu_info[i] = ' '.join(gpu_info[i].split(' ')[:4])
    except:
        gpu_info = ' Can not get GPU information. Please make sure CUDA have been installed successfully.'
    return gpu_info


def get_environ_info():
    """collect environment information"""
    env_info = {}
    env_info['System Platform'] = sys.platform
    if env_info['System Platform'] == 'linux':
        lsb_v = subprocess.check_output(['lsb_release', '-v']).decode().strip()
        env_info['LSB'] = [
            lsb_v.replace('\t', ' '),
            subprocess.check_output(['lsb_release',
                                     '-d']).decode().strip().replace('\t', ' ')
        ]

    env_info['Python'] = sys.version.replace('\n', '')

    compiled_with_cuda = paddle.fluid.is_compiled_with_cuda()
    env_info['Paddle compiled with cuda'] = compiled_with_cuda
    if compiled_with_cuda:
        cuda_home = _find_cuda_home()
        env_info['NVCC'] = _get_nvcc_info(cuda_home)
        gpu_nums = fluid.core.get_cuda_device_count()
        env_info['GPUs used'] = gpu_nums
        env_info['CUDA_VISIBLE_DEVICES'] = os.environ.get(
            'CUDA_VISIBLE_DEVICES')
        env_info['GPU'] = _get_gpu_info()

    gcc = subprocess.check_output(['gcc', '--version']).decode()
    gcc = gcc.strip().split('\n')[0]
    env_info['GCC'] = gcc

    env_info['PaddlePaddle'] = paddle.__version__
    env_info['OpenCV'] = cv2.__version__

    return env_info
```
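A minimal usage sketch; get_environ_info is exported from dygraph.utils by the __init__.py change above, and the training scripts in this commit print it the same way:

```python
from dygraph.utils import get_environ_info

env_info = get_environ_info()
for k, v in env_info.items():
    print('{}: {}'.format(k, v))  # e.g. 'Paddle compiled with cuda: True'
```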
dygraph/utils/logging.py → dygraph/utils/logger.py (file moved)
dygraph/utils/utils.py (modified)

```diff
@@ -18,7 +18,7 @@ import math
 import cv2
 import paddle.fluid as fluid
 
-from . import logging
+from . import logger
 
 def seconds_to_hms(seconds):
@@ -29,27 +29,9 @@ def seconds_to_hms(seconds):
     return hms_str
 
-def get_environ_info():
-    info = dict()
-    info['place'] = 'cpu'
-    info['num'] = int(os.environ.get('CPU_NUM', 1))
-    if os.environ.get('CUDA_VISIBLE_DEVICES', None) != "":
-        if hasattr(fluid.core, 'get_cuda_device_count'):
-            gpu_num = 0
-            try:
-                gpu_num = fluid.core.get_cuda_device_count()
-            except:
-                os.environ['CUDA_VISIBLE_DEVICES'] = ''
-                pass
-            if gpu_num > 0:
-                info['place'] = 'cuda'
-                info['num'] = fluid.core.get_cuda_device_count()
-    return info
-
 def load_pretrained_model(model, pretrained_model):
     if pretrained_model is not None:
-        logging.info('Load pretrained model from {}'.format(pretrained_model))
+        logger.info('Load pretrained model from {}'.format(pretrained_model))
         if os.path.exists(pretrained_model):
             ckpt_path = os.path.join(pretrained_model, 'model')
             try:
@@ -62,10 +44,10 @@ def load_pretrained_model(model, pretrained_model):
             num_params_loaded = 0
             for k in keys:
                 if k not in para_state_dict:
-                    logging.warning("{} is not in pretrained model".format(k))
+                    logger.warning("{} is not in pretrained model".format(k))
                 elif list(para_state_dict[k].shape) != list(
                         model_state_dict[k].shape):
-                    logging.warning(
+                    logger.warning(
                         "[SKIP] Shape of pretrained params {} doesn't match.(Pretrained: {}, Actual: {})"
                         .format(k, para_state_dict[k].shape,
                                 model_state_dict[k].shape))
@@ -73,7 +55,7 @@ def load_pretrained_model(model, pretrained_model):
                     model_state_dict[k] = para_state_dict[k]
                     num_params_loaded += 1
             model.set_dict(model_state_dict)
-            logging.info("There are {}/{} varaibles are loaded.".format(
+            logger.info("There are {}/{} varaibles are loaded.".format(
                 num_params_loaded, len(model_state_dict)))
 
         else:
@@ -81,12 +63,12 @@ def load_pretrained_model(model, pretrained_model):
                 'The pretrained model directory is not Found: {}'.format(
                     pretrained_model))
     else:
-        logging.info('No pretrained model to load, train from scratch')
+        logger.info('No pretrained model to load, train from scratch')
 
 def resume(model, optimizer, resume_model):
     if resume_model is not None:
-        logging.info('Resume model from {}'.format(resume_model))
+        logger.info('Resume model from {}'.format(resume_model))
         if os.path.exists(resume_model):
             resume_model = os.path.normpath(resume_model)
             ckpt_path = os.path.join(resume_model, 'model')
@@ -102,7 +84,7 @@ def resume(model, optimizer, resume_model):
                 'The resume model directory is not Found: {}'.format(
                     resume_model))
     else:
-        logging.info('No model need to resume')
+        logger.info('No model need to resume')
 
 def visualize(image, result, save_dir=None, weight=0.6):
```
dygraph/val.py (modified)

```diff
@@ -72,7 +72,7 @@ def parse_args():
 def main(args):
     env_info = get_environ_info()
     places = fluid.CUDAPlace(ParallelEnv().dev_id) \
-        if env_info['place'] == 'cuda' and fluid.is_compiled_with_cuda() \
+        if env_info['Paddle compiled with cuda'] and env_info['GPUs used'] \
         else fluid.CPUPlace()
 
     if args.dataset not in DATASETS:
```