Commit 95f02396: "modify comments"

mindspore (forked from MindSpore / mindspore)
Authored on Mar 23, 2020 by 万万没想到; committed by 高东海 on Apr 08, 2020
Parent: d4295465
Showing 1 changed file with 9 additions and 9 deletions:

mindspore/ops/operations/nn_ops.py (+9, -9)
@@ -1219,14 +1219,14 @@ class ApplyMomentum(PrimitiveWithInfer):
         gradient_scale (float): The scale of the gradient. Default: 1.0.

     Inputs:
-        - **variable** (Tensor) - Weights to be update.
+        - **variable** (Tensor) - Weights to be updated.
         - **accumulation** (Tensor) - Accumulated gradient value by moment weight.
         - **learning_rate** (float) - Learning rate.
         - **gradient** (Tensor) - Gradients.
         - **momentum** (float) - Momentum.

     Outputs:
-        Tensor, parameters to be update.
+        Tensor, parameters to be updated.

     Examples:
         >>> net = ResNet50()
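A minimal usage sketch for ApplyMomentum (not part of this commit): the call follows the input order listed in the docstring above (variable, accumulation, learning_rate, gradient, momentum); shapes and values are illustrative assumptions, and depending on backend and execution mode these primitives may need to run inside an nn.Cell.

    import numpy as np
    import mindspore
    from mindspore import Tensor, Parameter
    from mindspore.ops import operations as P

    apply_momentum = P.ApplyMomentum()
    variable = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="variable")
    accumulation = Parameter(Tensor(np.zeros([2, 2]).astype(np.float32)), name="accumulation")
    gradient = Tensor(np.full([2, 2], 0.1).astype(np.float32))
    # Input order per the docstring: variable, accumulation, learning_rate, gradient, momentum
    output = apply_momentum(variable, accumulation, 0.01, gradient, 0.9)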
@@ -1318,15 +1318,15 @@ class SGD(PrimitiveWithInfer):
         nesterov (bool): Enable Nesterov momentum. Default: False.

     Inputs:
-        - **parameters** (Tensor) - Parameters to be update.
+        - **parameters** (Tensor) - Parameters to be updated.
         - **gradient** (Tensor) - Gradients.
         - **learning_rate** (Tensor) - Learning rate. e.g. Tensor(0.1, mindspore.float32).
-        - **accum** (Tensor) - Accum(velocity) to be update.
+        - **accum** (Tensor) - Accum(velocity) to be updated.
         - **momentum** (Tensor) - Momentum. e.g. Tensor(0.1, mindspore.float32).
         - **stat** (Tensor) - States to be updated with the same shape as gradient.

     Outputs:
-        Tensor, parameters to be update.
+        Tensor, parameters to be updated.
     """

     @prim_attr_register
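A sketch for SGD along the same lines (not from the commit): the input order and the Tensor-typed learning_rate/momentum follow the docstring above; everything else is an illustrative assumption.

    import numpy as np
    import mindspore
    from mindspore import Tensor, Parameter
    from mindspore.ops import operations as P

    sgd = P.SGD(nesterov=False)
    parameters = Parameter(Tensor(np.ones([4]).astype(np.float32)), name="parameters")
    gradient = Tensor(np.full([4], 0.1).astype(np.float32))
    learning_rate = Tensor(0.1, mindspore.float32)
    accum = Parameter(Tensor(np.zeros([4]).astype(np.float32)), name="accum")
    momentum = Tensor(0.9, mindspore.float32)
    stat = Tensor(np.ones([4]).astype(np.float32))  # same shape as gradient, per the docstring
    output = sgd(parameters, gradient, learning_rate, accum, momentum, stat)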
@@ -2141,7 +2141,7 @@ class Adam(PrimitiveWithInfer):
             If False, updates the gradients without using NAG. Default: False.

     Inputs:
-        - **var** (Tensor) - Weights to be update.
+        - **var** (Tensor) - Weights to be updated.
         - **m** (Tensor) - The 1st moment vector in the updating formula.
         - **v** (Tensor) - the 2nd moment vector in the updating formula.
         - **beta1_power** (float) - :math:`beta_1^t` in the updating formula.
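A sketch for Adam as well. Note the hunk above is cut off after beta1_power, so the trailing inputs used here (beta2_power, lr, beta1, beta2, epsilon, grad) are assumed from the conventional Adam update, not taken from this diff.

    import numpy as np
    import mindspore
    from mindspore import Tensor, Parameter
    from mindspore.ops import operations as P

    adam = P.Adam()
    var = Parameter(Tensor(np.ones([2]).astype(np.float32)), name="var")
    m = Parameter(Tensor(np.zeros([2]).astype(np.float32)), name="m")  # 1st moment vector
    v = Parameter(Tensor(np.zeros([2]).astype(np.float32)), name="v")  # 2nd moment vector
    grad = Tensor(np.full([2], 0.1).astype(np.float32))
    # beta1_power, beta2_power, lr, beta1, beta2, epsilon as scalar inputs are assumptions
    output = adam(var, m, v, 0.9, 0.999, 0.001, 0.9, 0.999, 1e-8, grad)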
@@ -2251,8 +2251,8 @@ class SparseApplyAdagrad(PrimitiveWithInfer):
         use_locking (bool): If True, updating of the var and accum tensors will be protected. Default: False.

     Inputs:
-        - **var** (Tensor) - Variable to be update. The type must be float32.
-        - **accum** (Tensor) - Accum to be update. The shape must be the same as `var`'s shape,
+        - **var** (Tensor) - Variable to be updated. The type must be float32.
+        - **accum** (Tensor) - Accum to be updated. The shape must be the same as `var`'s shape,
           the type must be float32.
         - **grad** (Tensor) - Gradient. The shape must be the same as `var`'s shape
           except first dimension, the type must be float32.
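A sketch for SparseApplyAdagrad: the hunk shows var, accum, and grad, while the int32 indices input and the lr constructor argument are assumptions based on the usual sparse-apply pattern, not on this diff.

    import numpy as np
    import mindspore
    from mindspore import Tensor, Parameter
    from mindspore.ops import operations as P

    sparse_apply_adagrad = P.SparseApplyAdagrad(lr=0.01)  # lr as a constructor argument is an assumption
    var = Parameter(Tensor(np.ones([3, 2]).astype(np.float32)), name="var")
    accum = Parameter(Tensor(np.zeros([3, 2]).astype(np.float32)), name="accum")
    grad = Tensor(np.full([2, 2], 0.1).astype(np.float32))  # same as var's shape except first dimension
    indices = Tensor(np.array([0, 2]).astype(np.int32))     # assumed: rows of var/accum to update
    output = sparse_apply_adagrad(var, accum, grad, indices)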
@@ -2372,7 +2372,7 @@ class LARSUpdate(PrimitiveWithInfer):
         use_clip (bool): Whether to use clip operation for calculating the local learning rate. Default: False.

     Inputs:
-        - **weight** (Tensor) - The weight to be update.
+        - **weight** (Tensor) - The weight to be updated.
         - **gradient** (Tensor) - The gradient of weight, which has the same shape and dtype with weight.
         - **norm_weight** (Tensor) - A scalar tensor, representing the square sum of weight.
         - **norm_gradient** (Tensor) - A scalar tensor, representing the square sum of gradient.
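Finally, a sketch for LARSUpdate: weight, gradient, and the two square-sum scalars follow the docstring above, while the trailing weight_decay and learning_rate inputs are assumptions (the hunk is cut off before them).

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    lars = P.LARSUpdate(use_clip=False)
    weight_np = np.ones([2, 2]).astype(np.float32)
    grad_np = np.full([2, 2], 0.1).astype(np.float32)
    norm_weight = Tensor(float(np.square(weight_np).sum()), mindspore.float32)   # square sum of weight
    norm_gradient = Tensor(float(np.square(grad_np).sum()), mindspore.float32)   # square sum of gradient
    # weight_decay and learning_rate as trailing scalar inputs are assumptions
    delta = lars(Tensor(weight_np), Tensor(grad_np), norm_weight, norm_gradient, 1e-4, 0.01)
    # the resulting delta would then be applied to the weight by a separate optimizer step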