Commit 777e0fd1
Authored May 29, 2020 by yangyongjie

Merge branch 'deeplabv3' of https://gitee.com/zhouyaqiang0/mindspore into deeplabv3

Parents: 2065383e, db7b728d

Showing 5 changed files with 74 additions and 6 deletions (+74 -6)
Changed files:
    model_zoo/deeplabv3/README.md              +69  -0
    model_zoo/deeplabv3/evaluation.py           +1  -1
    model_zoo/deeplabv3/readme.txt              +0  -0
    model_zoo/deeplabv3/scripts/run_eval.sh     +2  -2
    model_zoo/deeplabv3/train.py                +2  -3
model_zoo/deeplabv3/README.md (new file, mode 0 → 100644)
# Deeplab-V3 Example

## Description

- This is an example of training DeepLab-V3 with the PASCAL VOC 2012 dataset in MindSpore.
- Paper: Rethinking Atrous Convolution for Semantic Image Segmentation.
  Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam
## Requirements

- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the VOC 2012 dataset for training.

> Note: if you are running a fine-tuning or evaluation task, prepare the corresponding checkpoint file.
## Running the Example

### Training

- Set options in config.py.
- Run `run_standalone_train.sh` for non-distributed training.

```bash
sh scripts/run_standalone_train.sh DEVICE_ID EPOCH_SIZE DATA_DIR
```
- Run `run_distribute_train.sh` for distributed training.

```bash
sh scripts/run_distribute_train.sh DEVICE_NUM EPOCH_SIZE DATA_DIR MINDSPORE_HCCL_CONFIG_PATH
```
### Evaluation

- Set options in evaluation_config.py. Make sure that 'data_file' and 'finetune_ckpt' are set to your own paths.
- Run `run_eval.sh` for evaluation.

```bash
sh scripts/run_eval.sh DEVICE_ID DATA_DIR
```
## Options and Parameters

Parameters of the Deeplab-V3 model and options for training are set in config.py.

### Options:

```
config.py:
    learning_rate               Learning rate, default is 0.0014.
    weight_decay                Weight decay, default is 5e-5.
    momentum                    Momentum, default is 0.97.
    crop_size                   Image crop size [height, width] during training, default is 513.
    eval_scales                 The scales to resize images for evaluation, default is [0.5, 0.75, 1.0, 1.25, 1.5, 1.75].
    output_stride               The ratio of input to output spatial resolution, default is 16.
    ignore_label                Ignore label value, default is 255.
    seg_num_classes             Number of semantic classes, including the background class if it exists
                                (foreground classes + 1 background class in the PASCAL VOC 2012 dataset), default is 21.
    fine_tune_batch_norm        Whether to fine-tune the batch norm parameters, default is False.
    atrous_rates                Atrous rates for atrous spatial pyramid pooling, default is None.
    decoder_output_stride       The ratio of input to output spatial resolution when employing a decoder
                                to refine segmentation results, default is None.
    image_pyramid               Input scales for multi-scale feature extraction, default is None.
```
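The option list above can be sketched as a plain config object. This is a minimal stdlib illustration of the documented defaults; the container (`SimpleNamespace`) is an assumption for this sketch, and the repo's actual config.py may use a different structure.

```python
from types import SimpleNamespace

# Illustrative config object with the defaults documented above.
# Attribute names follow the option list; the container is an assumption.
config = SimpleNamespace(
    learning_rate=0.0014,
    weight_decay=5e-5,
    momentum=0.97,
    crop_size=513,
    eval_scales=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
    output_stride=16,
    ignore_label=255,
    seg_num_classes=21,          # 20 foreground classes + 1 background in PASCAL VOC 2012
    fine_tune_batch_norm=False,
    atrous_rates=None,
    decoder_output_stride=None,
    image_pyramid=None,
)

print(config.seg_num_classes)  # → 21
```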
### Parameters:

```
Parameters for dataset and network:
    distribute                  Run distributed training, default is false.
    epoch_size                  Epoch size, default is 6.
    batch_size                  Batch size of the input dataset, default is 2.
    data_url                    Train/evaluation data url, required.
    checkpoint_url              Checkpoint path, default is None.
    enable_save_ckpt            Enable saving checkpoints, default is true.
    save_checkpoint_steps       Save checkpoint steps, default is 1000.
    save_checkpoint_num         Number of checkpoints to save, default is 1.
```
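These parameters map onto command-line flags in the training script. A minimal argparse sketch mirroring the defaults listed above (a subset of the flags, for illustration; the exact parser lives in train.py):

```python
import argparse

# Sketch of the documented command-line flags; names and defaults
# follow the parameter list above, not necessarily the full parser.
parser = argparse.ArgumentParser(description="Deeplabv3 training")
parser.add_argument("--distribute", type=str, default="false",
                    help="Run distribute, default is false.")
parser.add_argument("--epoch_size", type=int, default=6, help="Epoch size.")
parser.add_argument("--batch_size", type=int, default=2, help="Batch size.")
parser.add_argument("--data_url", required=True, help="Train data url")
parser.add_argument("--checkpoint_url", default=None, help="Checkpoint path")

args = parser.parse_args(["--data_url", "/path/voc2012"])
print(args.epoch_size)  # → 6
```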
model_zoo/deeplabv3/evaluation.py

```diff
@@ -28,7 +28,7 @@ parser = argparse.ArgumentParser(description="Deeplabv3 evaluation")
 parser.add_argument('--epoch_size', type=int, default=2, help='Epoch size.')
 parser.add_argument("--device_id", type=int, default=0, help="Device id, default is 0.")
 parser.add_argument('--batch_size', type=int, default=2, help='Batch size.')
-parser.add_argument('--data_url', required=True, default=None, help='Train data url')
+parser.add_argument('--data_url', required=True, default=None, help='Evaluation data url')
 parser.add_argument('--checkpoint_url', default=None, help='Checkpoint path')
 args_opt = parser.parse_args()
```
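One side note on the changed line: with `required=True`, the `default=None` is dead weight, because argparse rejects a missing `--data_url` before any default could apply. A quick illustration:

```python
import argparse

# With required=True, default=None never takes effect: a missing
# --data_url is an error, and a supplied one overrides the default.
parser = argparse.ArgumentParser(description="Deeplabv3 evaluation")
parser.add_argument('--data_url', required=True, default=None,
                    help='Evaluation data url')

args = parser.parse_args(['--data_url', '/path/voc2012'])
print(args.data_url)  # → /path/voc2012
```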
model_zoo/deeplabv3/readme.txt (deleted, mode 100644 → 0)
model_zoo/deeplabv3/scripts/run_eval.sh

```diff
@@ -15,8 +15,8 @@
 # ============================================================================
 echo "=============================================================================================================="
 echo "Please run the scipt as: "
-echo "bash run_eval.sh DEVICE_ID EPOCH_SIZE DATA_DIR"
-echo "for example: bash run_eval.sh 0 /path/zh-wiki/ "
+echo "bash run_eval.sh DEVICE_ID DATA_DIR"
+echo "for example: bash run_eval.sh /path/zh-wiki/ "
 echo "=============================================================================================================="
 DEVICE_ID=$1
```
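The new usage takes two positional arguments, DEVICE_ID and DATA_DIR (note that the updated example line still omits DEVICE_ID). A hypothetical argument guard matching the two-argument form — not part of the actual script — could look like:

```shell
#!/bin/bash
# Hypothetical guard for the new two-argument usage of run_eval.sh;
# the function name and messages are illustrative only.
usage_check() {
  if [ "$#" -ne 2 ]; then
    echo "usage: bash run_eval.sh DEVICE_ID DATA_DIR" >&2
    return 1
  fi
  local device_id=$1
  local data_dir=$2
  echo "evaluating on device ${device_id} with data ${data_dir}"
}

usage_check 0 /path/voc2012
```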
model_zoo/deeplabv3/train.py

```diff
@@ -27,13 +27,12 @@ from src.config import config
 parser = argparse.ArgumentParser(description="Deeplabv3 training")
 parser.add_argument("--distribute", type=str, default="false", help="Run distribute, default is false.")
-parser.add_argument('--epoch_size', type=int, default=2, help='Epoch size.')
+parser.add_argument('--epoch_size', type=int, default=6, help='Epoch size.')
 parser.add_argument('--batch_size', type=int, default=2, help='Batch size.')
 parser.add_argument('--data_url', required=True, default=None, help='Train data url')
 parser.add_argument("--device_id", type=int, default=0, help="Device id, default is 0.")
 parser.add_argument('--checkpoint_url', default=None, help='Checkpoint path')
 parser.add_argument("--enable_save_ckpt", type=str, default="true", help="Enable save checkpoint, default is true.")
 parser.add_argument('--max_checkpoint_num', type=int, default=5, help='Max checkpoint number.')
 parser.add_argument("--save_checkpoint_steps", type=int, default=1000, help="Save checkpoint steps, default is 1000.")
 parser.add_argument("--save_checkpoint_num", type=int, default=1, help="Save checkpoint numbers, default is 1.")
 args_opt = parser.parse_args()
@@ -80,7 +79,7 @@ if __name__ == "__main__":
                                  keep_checkpoint_max=args_opt.save_checkpoint_num)
     ckpoint_cb = ModelCheckpoint(prefix='checkpoint_deeplabv3', config=config_ck)
     callback.append(ckpoint_cb)
-    net = deeplabv3_resnet50(config.seg_num_classes, [args_opt.batch_size, 3, args_opt.crop_size, args_opt.crop_size],
+    net = deeplabv3_resnet50(config.seg_num_classes, [args_opt.batch_size, 3, args_opt.crop_size, args_opt.crop_size],
                              infer_scale_sizes=config.eval_scales, atrous_rates=config.atrous_rates,
                              decoder_output_stride=config.decoder_output_stride, output_stride=config.output_stride,
                              fine_tune_batch_norm=config.fine_tune_batch_norm, image_pyramid=config.image_pyramid)
```

(The removed and added `net = deeplabv3_resnet50(...)` lines render identically above; the page was captured with whitespace changes hidden, so the difference appears to be whitespace only.)