magicwindyyd / mindspore · Commits
Forked from MindSpore / mindspore
Commit 0b1ae674
Authored on Apr 09, 2020 by jinyaohui; committed by 高东海 on Apr 10, 2020

modify comment

Parent: c0e2a63f
Showing 4 changed files with 7 additions and 7 deletions
example/yolov3_coco2017/train.py        +1 -1
mindspore/ccsrc/transform/convert.cc    +1 -1
mindspore/nn/wrap/loss_scale.py         +1 -1
tests/ut/python/utils/test_callback.py  +4 -4
example/yolov3_coco2017/train.py

@@ -67,7 +67,7 @@ if __name__ == '__main__':
     parser.add_argument("--distribute", type=bool, default=False, help="Run distribute, default is false.")
     parser.add_argument("--device_id", type=int, default=0, help="Device id, default is 0.")
     parser.add_argument("--device_num", type=int, default=1, help="Use device nums, default is 1.")
-    parser.add_argument("--mode", type=str, default="sink", help="Run sink mode or non-sink mode, default is sink")
+    parser.add_argument("--mode", type=str, default="sink", help="Run sink mode or not, default is sink")
     parser.add_argument("--epoch_size", type=int, default=10, help="Epoch size, default is 10")
     parser.add_argument("--batch_size", type=int, default=32, help="Batch size, default is 32.")
     parser.add_argument("--checkpoint_path", type=str, default="", help="Checkpoint file path")
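For context, a minimal sketch of how a --mode flag like the one above is typically consumed. Only the flags that appear in the hunk are taken from the commit; the reduction to a dataset_sink_mode boolean and the print statement are illustrative assumptions, not code from this repository.

import argparse

# Trimmed-down, hypothetical version of the argument handling shown in the
# diff above; only the flags that appear in the hunk come from the source file.
parser = argparse.ArgumentParser(description="YOLOv3 train.")
parser.add_argument("--mode", type=str, default="sink", help="Run sink mode or not, default is sink")
parser.add_argument("--epoch_size", type=int, default=10, help="Epoch size, default is 10")
args = parser.parse_args()

# Assumption: the string flag is reduced to a boolean that selects between
# dataset sink mode and normal (non-sink) mode for the training loop.
dataset_sink_mode = args.mode == "sink"
print("epochs:", args.epoch_size, "dataset_sink_mode:", dataset_sink_mode)

With the default --mode sink this yields dataset_sink_mode=True; any other value selects the normal (non-sink) path, matching the wording this commit introduces in the other files.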
mindspore/ccsrc/transform/convert.cc

@@ -449,7 +449,7 @@ void DfGraphConvertor::InitLoopVar(std::vector<ge::Operator> *init_input) {
   if (ConfigManager::GetInstance().dataset_mode() == DS_SINK_MODE) {
     value = ConfigManager::GetInstance().iter_num();
   } else {
-    MS_LOG(INFO) << "Run with non-sink mode, the iterator number will always be 1";
+    MS_LOG(INFO) << "Run with normal(non-sink) mode, the iterator number will always be 1";
     value = 1;
     ConfigManager::GetInstance().set_iter_num(value);
   }
mindspore/nn/wrap/loss_scale.py

@@ -51,7 +51,7 @@ class DynamicLossScaleUpdateCell(Cell):
     In every training step, the loss scaling value will be updated by loss scaling value/`scale_factor`
     when there is overflow. And it will be increased by loss scaling value * `scale_factor` if there is no
     overflow for a continuous `scale_window` steps. This cell is used for Graph mode training in which all
-    logic will be executed on device side(Another training mode is non-sink mode in which some logic will be
+    logic will be executed on device side(Another training mode is normal(non-sink) mode in which some logic will be
     executed on host).

     Args:
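The docstring above spells out the dynamic loss-scale update rule in prose. Below is a plain-Python sketch of that rule for illustration only; it is not the DynamicLossScaleUpdateCell implementation, and the function name, the good_steps counter, and the default scale_factor/scale_window values are assumptions.

def update_loss_scale(loss_scale, overflow, good_steps, scale_factor=2, scale_window=1000):
    """Apply the update rule described in the docstring above.

    On overflow the loss scale is divided by scale_factor; after scale_window
    consecutive steps without overflow it is multiplied by scale_factor.
    Returns the new (loss_scale, good_steps) pair.
    """
    if overflow:
        return loss_scale / scale_factor, 0
    good_steps += 1
    if good_steps >= scale_window:
        return loss_scale * scale_factor, 0
    return loss_scale, good_steps


# Example: one overflow step, then enough clean steps to grow the scale back.
scale, steps = 2.0 ** 16, 0
scale, steps = update_loss_scale(scale, overflow=True, good_steps=steps)   # scale halves
for _ in range(1000):
    scale, steps = update_loss_scale(scale, overflow=False, good_steps=steps)
print(scale)  # back to 2.0 ** 16 after scale_window clean steps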
tests/ut/python/utils/test_callback.py

@@ -112,8 +112,8 @@ def test_save_checkpoint():
     os.remove('./test_files/test_ckpt-model.pkl')


-def test_loss_monitor_sink_model():
-    """Test loss monitor sink model."""
+def test_loss_monitor_sink_mode():
+    """Test loss monitor sink mode."""
     cb_params = _InternalCallbackParam()
     cb_params.cur_epoch_num = 4
     cb_params.cur_step_num = 2

@@ -131,8 +131,8 @@ def test_loss_monitor_sink_model():
     callbacklist.end(run_context)


-def test_loss_monitor_feed_model():
-    """Test loss monitor non-sink mode."""
+def test_loss_monitor_normal_mode():
+    """Test loss monitor normal(non-sink) mode."""
     cb_params = _InternalCallbackParam()
     run_context = RunContext(cb_params)
     loss_cb = LossMonitor(1)