MindSpore / docs · Commit af1ad154

Authored on Aug 27, 2020 by mindspore-ci-bot; committed via Gitee on Aug 27, 2020

!758 remove name arg from gradoperation

Merge pull request !758 from riemann_penn/remove_name_arg_from_gradoperation

Parents: 7cf140bd, 4350508c

Showing 10 changed files with 12 additions and 12 deletions (+12 −12)
tutorials/notebook/debugging_in_pynative_mode.ipynb                 +1 −1
tutorials/notebook/linear_regression.ipynb                          +1 −1
tutorials/source_en/advanced_use/debugging_in_pynative_mode.md      +2 −2
tutorials/source_en/use/custom_operator.md                          +1 −1
tutorials/source_zh_cn/advanced_use/debugging_in_pynative_mode.md   +2 −2
tutorials/source_zh_cn/advanced_use/gradient_accumulation.md        +1 −1
tutorials/source_zh_cn/quick_start/linear_regression.md             +1 −1
tutorials/source_zh_cn/use/custom_operator.md                       +1 −1
tutorials/tutorial_code/gradient_accumulation/train.py              +1 −1
tutorials/tutorial_code/linear_regression.py                        +1 −1
tutorials/notebook/debugging_in_pynative_mode.ipynb

@@ -373,7 +373,7 @@
 "\n",
 "    def construct(self, x, label):\n",
 "        weights = self.weights\n",
-"        return C.GradOperation('get_by_list', get_by_list=True)(self.network, weights)(x, label)"
+"        return C.GradOperation(get_by_list=True)(self.network, weights)(x, label)"
 ]
 },
 {
tutorials/notebook/linear_regression.ipynb

@@ -524,7 +524,7 @@
 "\n",
 "    def construct(self, data, label):\n",
 "        weights = self.weights\n",
-"        return C.GradOperation('get_by_list', get_by_list=True) \\\n",
+"        return C.GradOperation(get_by_list=True) \\\n",
 "                (self.network, weights)(data, label)\n"
 ]
 },
tutorials/source_en/advanced_use/debugging_in_pynative_mode.md

@@ -262,7 +262,7 @@ def mul(x, y):
     return x * y
 
 def mainf(x, y):
-    return C.GradOperation('get_all', get_all=True)(mul)(x, y)
+    return C.GradOperation(get_all=True)(mul)(x, y)
 
 print(mainf(Tensor(1, mstype.int32), Tensor(2, mstype.int32)))
 ```

@@ -357,7 +357,7 @@ class GradWrap(nn.Cell):
     def construct(self, x, label):
         weights = self.weights
-        return C.GradOperation('get_by_list', get_by_list=True)(self.network, weights)(x, label)
+        return C.GradOperation(get_by_list=True)(self.network, weights)(x, label)
 
 net = LeNet5()
 optimizer = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.1, 0.9)
tutorials/source_en/use/custom_operator.md

@@ -232,7 +232,7 @@ def test_grad_net():
     x = np.array([1.0, 4.0, 9.0]).astype(np.float32)
     sens = np.array([1.0, 1.0, 1.0]).astype(np.float32)
     square = Net()
-    grad = C.GradOperation('grad_with_sens', sens_param=True)
+    grad = C.GradOperation(sens_param=True)
     dx = grad(square)(Tensor(x), Tensor(sens))
     print("x: ", x)
     print("dx: ", dx)
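For context, the sens_param flow this hunk updates can be exercised without the tutorial's custom CusSquare operator; the sketch below substitutes the built-in ops.Square (that substitution is an assumption, made so the snippet stands alone):

```python
import numpy as np
from mindspore import Tensor, nn
from mindspore.ops import composite as C
from mindspore.ops import operations as P

class Net(nn.Cell):
    """Squares its input; the tutorial uses a custom CusSquare op here."""
    def __init__(self):
        super(Net, self).__init__()
        self.square = P.Square()  # substituted for CusSquare (assumption)

    def construct(self, x):
        return self.square(x)

x = np.array([1.0, 4.0, 9.0]).astype(np.float32)
sens = np.array([1.0, 1.0, 1.0]).astype(np.float32)

# sens_param=True adds a "sensitivity" input that scales the output
# gradient; note the former 'grad_with_sens' name argument is gone.
grad = C.GradOperation(sens_param=True)
dx = grad(Net())(Tensor(x), Tensor(sens))  # d(x^2)/dx * sens = 2x
print("x: ", x)
print("dx: ", dx)
```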
tutorials/source_zh_cn/advanced_use/debugging_in_pynative_mode.md

@@ -264,7 +264,7 @@ def mul(x, y):
     return x * y
 
 def mainf(x, y):
-    return C.GradOperation('get_all', get_all=True)(mul)(x, y)
+    return C.GradOperation(get_all=True)(mul)(x, y)
 
 print(mainf(Tensor(1, mstype.int32), Tensor(2, mstype.int32)))
 ```

@@ -359,7 +359,7 @@ class GradWrap(nn.Cell):
     def construct(self, x, label):
         weights = self.weights
-        return C.GradOperation('get_by_list', get_by_list=True)(self.network, weights)(x, label)
+        return C.GradOperation(get_by_list=True)(self.network, weights)(x, label)
 
 net = LeNet5()
 optimizer = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.1, 0.9)
tutorials/source_zh_cn/advanced_use/gradient_accumulation.md

@@ -100,7 +100,7 @@ class TrainForwardBackward(Cell):
         self.weights = ParameterTuple(network.trainable_params())
         self.optimizer = optimizer
         self.grad_sum = grad_sum
-        self.grad = C.GradOperation('grad', get_by_list=True, sens_param=True)
+        self.grad = C.GradOperation(get_by_list=True, sens_param=True)
         self.sens = sens
         self.hyper_map = C.HyperMap()
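The flag pair touched here is the general recipe for gradient accumulation: get_by_list=True differentiates with respect to a ParameterTuple of weights instead of the network inputs, and sens_param=True adds the scaling input. A minimal standalone sketch of that recipe under the new signature (the one-weight TinyNet is illustrative, not from the tutorial):

```python
import numpy as np
from mindspore import Parameter, ParameterTuple, Tensor, nn
from mindspore.ops import composite as C

class TinyNet(nn.Cell):
    """y = w * x with a single learnable weight; illustrative only."""
    def __init__(self):
        super(TinyNet, self).__init__()
        self.w = Parameter(Tensor(np.array([2.0], np.float32)), name='w')

    def construct(self, x):
        return self.w * x

net = TinyNet()
weights = ParameterTuple(net.trainable_params())
# Was: C.GradOperation('grad', get_by_list=True, sens_param=True)
grad_by_list = C.GradOperation(get_by_list=True, sens_param=True)

x = Tensor(np.array([3.0], np.float32))
sens = Tensor(np.array([1.0], np.float32))
# Returns one gradient per parameter in `weights`:
# d(w*x)/dw * sens = x = [3.0]
grads = grad_by_list(net, weights)(x, sens)
print(grads)
```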
tutorials/source_zh_cn/quick_start/linear_regression.md

@@ -297,7 +297,7 @@ class GradWrap(nn.Cell):
     def construct(self, data, label):
         weights = self.weights
-        return C.GradOperation('get_by_list', get_by_list=True) \
+        return C.GradOperation(get_by_list=True) \
             (self.network, weights)(data, label)
 ```
tutorials/source_zh_cn/use/custom_operator.md

@@ -232,7 +232,7 @@ def test_grad_net():
     x = np.array([1.0, 4.0, 9.0]).astype(np.float32)
     sens = np.array([1.0, 1.0, 1.0]).astype(np.float32)
     square = Net()
-    grad = C.GradOperation('grad_with_sens', sens_param=True)
+    grad = C.GradOperation(sens_param=True)
     dx = grad(square)(Tensor(x), Tensor(sens))
     print("x: ", x)
     print("dx: ", dx)
tutorials/tutorial_code/gradient_accumulation/train.py

@@ -41,7 +41,7 @@ class TrainForwardBackward(Cell):
         self.weights = ParameterTuple(network.trainable_params())
         self.optimizer = optimizer
         self.grad_sum = grad_sum
-        self.grad = C.GradOperation('grad', get_by_list=True, sens_param=True)
+        self.grad = C.GradOperation(get_by_list=True, sens_param=True)
         self.sens = sens
         self.hyper_map = C.HyperMap()
tutorials/tutorial_code/linear_regression.py

@@ -41,7 +41,7 @@ class GradWrap(nn.Cell):
     def construct(self, data, label):
         weights = self.weights
-        return C.GradOperation('get_by_list', get_by_list=True) \
+        return C.GradOperation(get_by_list=True) \
             (self.network, weights)(data, label)
 
 # Initializing model functions