Commit 98044a83
Authored: Sep 03, 2020 by lihongkang
Parent: bdf2082a
Repository: 正统之独孤求败/mindspore (fork of MindSpore/mindspore)

fix bugs

Showing 4 changed files with 7 additions and 5 deletions (+7 −5):

    mindspore/nn/optim/adam.py              +1 −1
    mindspore/nn/optim/lazyadam.py          +1 −1
    mindspore/ops/operations/array_ops.py   +3 −3
    mindspore/ops/operations/math_ops.py    +2 −0
mindspore/nn/optim/adam.py @ 98044a83

@@ -220,7 +220,7 @@ class Adam(Optimizer):
     >>> group_params = [{'params': conv_params, 'weight_decay': 0.01},
     >>>                 {'params': no_conv_params, 'lr': 0.01},
     >>>                 {'order_params': net.trainable_params()}]
-    >>> optm = nn.Adam(group_params, learning_rate=0.1, weight_decay=0.0)
+    >>> optim = nn.Adam(group_params, learning_rate=0.1, weight_decay=0.0)
     >>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01.
     >>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0.
     >>> # The final parameters order in which the optimizer will be followed is the value of 'order_params'.
mindspore/nn/optim/lazyadam.py @ 98044a83

@@ -168,7 +168,7 @@ class LazyAdam(Optimizer):
     >>> group_params = [{'params': conv_params, 'weight_decay': 0.01},
     >>>                 {'params': no_conv_params, 'lr': 0.01},
     >>>                 {'order_params': net.trainable_params()}]
-    >>> opt = nn.LazyAdam(group_params, learning_rate=0.1, weight_decay=0.0)
+    >>> optim = nn.LazyAdam(group_params, learning_rate=0.1, weight_decay=0.0)
     >>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01.
     >>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0.
     >>> # The final parameters order in which the optimizer will be followed is the value of 'order_params'.
mindspore/ops/operations/array_ops.py @ 98044a83

@@ -3013,12 +3013,12 @@ class DepthToSpace(PrimitiveWithInfer):
     This is the reverse operation of SpaceToDepth.
-    The depth of output tensor is :math:`input\_depth / (block\_size * block\_size)`.
     The output tensor's `height` dimension is :math:`height * block\_size`.
     The output tensor's `weight` dimension is :math:`weight * block\_size`.
+    The depth of output tensor is :math:`input\_depth / (block\_size * block\_size)`.
     The input tensor's depth must be divisible by `block_size * block_size`.
     The data format is "NCHW".

@@ -3029,7 +3029,7 @@ class DepthToSpace(PrimitiveWithInfer):
     - **x** (Tensor) - The target tensor. It must be a 4-D tensor.

     Outputs:
-        Tensor, the same type as `x`.
+        Tensor, has the same shape and dtype as the 'x'.

     Examples:
         >>> x = Tensor(np.random.rand(1,12,1,1), mindspore.float32)
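As context for the doc change above, DepthToSpace's shape relationship (output depth is input depth divided by block_size², while height and width are multiplied by block_size, in NCHW layout) can be sketched in NumPy. The reshape/transpose order below is one plausible element-ordering convention, shown only to illustrate the shapes; it is not necessarily MindSpore's exact kernel behavior.

```python
import numpy as np

def depth_to_space_nchw(x, block_size):
    """Rearrange depth (channel) data into spatial blocks, NCHW layout.

    Output depth is C / block_size**2; height and width grow by block_size.
    The element-ordering convention used here is illustrative only.
    """
    n, c, h, w = x.shape
    b = block_size
    assert c % (b * b) == 0, "input depth must be divisible by block_size**2"
    y = x.reshape(n, b, b, c // (b * b), h, w)
    y = y.transpose(0, 3, 4, 1, 5, 2)           # -> N, C', H, b, W, b
    return y.reshape(n, c // (b * b), h * b, w * b)

# Mirrors the docstring example: (1, 12, 1, 1) -> (1, 3, 2, 2) with block_size=2.
x = np.random.rand(1, 12, 1, 1).astype(np.float32)
print(depth_to_space_nchw(x, 2).shape)  # (1, 3, 2, 2)
```

Note how the example input shape (1, 12, 1, 1) from the docstring is consistent with the divisibility constraint: 12 is divisible by 2 * 2.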
mindspore/ops/operations/math_ops.py @ 98044a83

@@ -741,6 +741,7 @@ class CumSum(PrimitiveWithInfer):
     Inputs:
         - **input** (Tensor) - The input tensor to accumulate.
         - **axis** (int) - The axis to accumulate the tensor's value. Only constant value is allowed.
+          Must be in the range [-rank(input), rank(input)).

     Outputs:
         Tensor, the shape of the output tensor is consistent with the input tensor's.
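The axis constraint added here follows Python's negative-index convention: for a rank-r input, valid axes lie in [-r, r), with negative values counting from the last dimension. NumPy's `cumsum` accepts the same range, so it serves as a quick sketch:

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]], dtype=np.float32)

# rank(x) == 2, so valid axes lie in [-2, 2): i.e. -2, -1, 0, 1.
print(np.cumsum(x, axis=1))   # accumulate along each row
print(np.cumsum(x, axis=-1))  # -1 refers to the same (last) axis
```

Passing an axis outside this range (e.g. 2 or -3 for a rank-2 input) is an error, which is what the added documentation line makes explicit.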
@@ -1764,6 +1765,7 @@ class Div(_MathBinaryOp):
     >>> input_y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
     >>> div = P.Div()
     >>> div(input_x, input_y)
+    [-1.3, 2.5, 2.0]
     """

    def infer_value(self, x, y):
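The `input_x` line of the Div example is not visible in this excerpt. As a hedged illustration, a hypothetical `input_x` of [-4.0, 5.0, 6.0] divided elementwise by the shown `input_y` of [3.0, 2.0, 3.0] would be consistent with the expected output added in this commit (with -4/3 ≈ -1.3 rounded for display):

```python
import numpy as np

# input_x is elided in the diff above; [-4.0, 5.0, 6.0] is a hypothetical
# choice consistent with the expected output [-1.3, 2.5, 2.0] (rounded).
input_x = np.array([-4.0, 5.0, 6.0], dtype=np.float32)
input_y = np.array([3.0, 2.0, 3.0], dtype=np.float32)
print(input_x / input_y)  # elementwise division
```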