magicwindyyd / mindspore (forked from MindSpore / mindspore, in sync with the fork source)
Commit 76e544fa

Authored on Sep 08, 2020 by mindspore-ci-bot; committed via Gitee on Sep 08, 2020.

!5894 enhance ops API comment part3

Merge pull request !5894 from Simson/push-to-opensource

Parents: 65c5663c, e7f3a283
11 changed files with 174 additions and 171 deletions (+174 -171)
mindspore/nn/layer/conv.py                    +9  -9
mindspore/ops/operations/_grad_ops.py         +12 -12
mindspore/ops/operations/array_ops.py         +9  -9
mindspore/ops/operations/control_ops.py       +6  -5
mindspore/ops/operations/debug_ops.py         +4  -4
mindspore/ops/operations/image_ops.py         +3  -3
mindspore/ops/operations/math_ops.py          +40 -40
mindspore/ops/operations/nn_ops.py            +83 -81
mindspore/ops/operations/other_ops.py         +2  -2
mindspore/ops/operations/random_ops.py        +2  -2
mindspore/parallel/algo_parameter_config.py   +4  -4
mindspore/nn/layer/conv.py
...
...
@@ -111,14 +111,14 @@ class Conv2d(_Conv):
     2D convolution layer.

     Applies a 2D convolution over an input tensor which is typically of shape :math:`(N, C_{in}, H_{in}, W_{in})`,
-    where :math:`N` is batch size and :math:`C_{in}` is channel number. For each batch of shape
-    :math:`(C_{in}, H_{in}, W_{in})`, the formula is defined as:
+    where :math:`N` is batch size, :math:`C_{in}` is channel number, and :math:`H_{in}, W_{in}` are height and width.
+    For each batch of shape :math:`(C_{in}, H_{in}, W_{in})`, the formula is defined as:

     .. math::

         out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,

-    where :math:`ccor` is the cross correlation operator, :math:`C_{in}` is the input channel number, :math:`j` ranges
+    where :math:`ccor` is the cross-correlation operator, :math:`C_{in}` is the input channel number, :math:`j` ranges
     from :math:`0` to :math:`C_{out} - 1`, :math:`W_{ij}` corresponds to the :math:`i`-th channel of the :math:`j`-th
     filter and :math:`out_{j}` corresponds to the :math:`j`-th channel of the output. :math:`W_{ij}` is a slice
     of kernel and it has shape :math:`(\text{ks_h}, \text{ks_w})`, where :math:`\text{ks_h}` and
...
...
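The cross-correlation formula in the hunk above can be sketched in plain Python, with nested lists standing in for tensors (hypothetical helper names, stride 1, no padding; a semantic illustration only, not MindSpore's implementation):

```python
def ccor2d(x, w):
    """2D cross-correlation: slide w over x without flipping the kernel."""
    kh, kw = len(w), len(w[0])
    oh, ow = len(x) - kh + 1, len(x[0]) - kw + 1
    return [[sum(x[i + u][j + v] * w[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(ow)] for i in range(oh)]

def conv2d_out_channel(x_chw, w_chw, bias=0.0):
    """One output channel j: out_j = sum_i ccor(W_ij, X_i) + b_j over C_in channels."""
    acc = None
    for x, w in zip(x_chw, w_chw):
        y = ccor2d(x, w)
        acc = y if acc is None else [[a + b for a, b in zip(ra, rb)]
                                     for ra, rb in zip(acc, y)]
    return [[v + bias for v in row] for row in acc]
```

With a 1x1 all-ones kernel per channel, one output channel is simply the sum of the input channels, which makes the formula easy to check by hand.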
@@ -162,8 +162,8 @@ class Conv2d(_Conv):
             Tensor borders. `padding` should be greater than or equal to 0.
         padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. If `padding` is one integer,
-            the padding of top, bottom, left and right is the same, equal to padding. If `padding` is a tuple
-            with four integers, the padding of top, bottom, left and right will be equal to padding[0],
+            the paddings of top, bottom, left and right are the same, equal to padding. If `padding` is a tuple
+            with four integers, the paddings of top, bottom, left and right will be equal to padding[0],
             padding[1], padding[2], and padding[3] accordingly. Default: 0.
         dilation (Union[int, tuple[int]]): The data type is int or a tuple of 2 integers. Specifies the dilation rate
             to use for dilated convolution. If set to be :math:`k > 1`, there will
...
...
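The padding convention documented in the hunk above (one integer pads all four sides equally; a 4-tuple is read as top, bottom, left, right) can be sketched as a small hypothetical helper, together with the usual output-length arithmetic for dilation 1:

```python
def normalize_padding(padding):
    """Expand `padding` to (top, bottom, left, right) per the docstring's rule."""
    if isinstance(padding, int):
        if padding < 0:
            raise ValueError("padding should be greater than or equal to 0")
        return (padding, padding, padding, padding)
    if isinstance(padding, tuple) and len(padding) == 4:
        return padding
    raise ValueError("padding must be an int or a tuple of four ints")

def output_length(size, pad_before, pad_after, kernel, stride=1):
    """Spatial output length along one dimension in 'pad' mode (dilation 1)."""
    return (size + pad_before + pad_after - kernel) // stride + 1
```

For a 5-pixel side, a 3x3 kernel, and padding 1 on both sides, the output side stays 5 — the familiar "same"-style arithmetic.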
@@ -472,8 +472,8 @@ class Conv2dTranspose(_Conv):
             - valid: Adopted the way of discarding.
         padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. If `padding` is one integer,
-            the padding of top, bottom, left and right is the same, equal to padding. If `padding` is a tuple
-            with four integers, the padding of top, bottom, left and right will be equal to padding[0],
+            the paddings of top, bottom, left and right are the same, equal to padding. If `padding` is a tuple
+            with four integers, the paddings of top, bottom, left and right will be equal to padding[0],
             padding[1], padding[2], and padding[3] accordingly. Default: 0.
         dilation (Union[int, tuple[int]]): The data type is int or a tuple of 2 integers. Specifies the dilation rate
             to use for dilated convolution. If set to be :math:`k > 1`, there will
...
...
@@ -856,8 +856,8 @@ class DepthwiseConv2d(Cell):
             Tensor borders. `padding` should be greater than or equal to 0.
         padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. If `padding` is one integer,
-            the padding of top, bottom, left and right is the same, equal to padding. If `padding` is a tuple
-            with four integers, the padding of top, bottom, left and right will be equal to padding[0],
+            the paddings of top, bottom, left and right are the same, equal to padding. If `padding` is a tuple
+            with four integers, the paddings of top, bottom, left and right will be equal to padding[0],
             padding[1], padding[2], and padding[3] accordingly. Default: 0.
         dilation (Union[int, tuple[int]]): The data type is int or a tuple of 2 integers. Specifies the dilation rate
             to use for dilated convolution. If set to be :math:`k > 1`, there will
...
...
mindspore/ops/operations/_grad_ops.py
...
...
@@ -284,12 +284,12 @@ class Conv2DBackpropFilter(PrimitiveWithInfer):
     Args:
         out_channel (int): The dimensionality of the output space.
         kernel_size (Union[int, tuple[int]]): The size of the convolution window.
-        pad_mode (str): "valid", "same", "pad" the mode to fill padding. Default: "valid".
-        pad (int): The pad value to fill. Default: 0.
-        mode (int): 0 Math convolution, 1 cross-correlation convolution,
+        pad_mode (str): Modes to fill padding. It could be "valid", "same", or "pad". Default: "valid".
+        pad (int): The pad value to be filled. Default: 0.
+        mode (int): Modes for different convolutions. 0 Math convolution, 1 cross-correlation convolution,
             2 deconvolution, 3 depthwise convolution. Default: 1.
-        stride (tuple): The stride to apply conv filter. Default: (1, 1).
-        dilation (tuple): Specifies the dilation rate to use for dilated convolution. Default: (1, 1, 1, 1).
+        stride (tuple): The stride to be applied to the convolution filter. Default: (1, 1).
+        dilation (tuple): Specifies the dilation rate to be used for the dilated convolution. Default: (1, 1, 1, 1).
         group (int): Splits input into groups. Default: 1.

     Returns:
...
...
@@ -349,12 +349,12 @@ class DepthwiseConv2dNativeBackpropFilter(PrimitiveWithInfer):
     Args:
         channel_multiplier (int): The multiplier for the original output conv.
         kernel_size (int or tuple): The size of the conv kernel.
-        mode (int): 0 Math convolution, 1 cross-correlation convolution,
+        mode (int): Modes for different convolutions. 0 Math convolution, 1 cross-correlation convolution,
             2 deconvolution, 3 depthwise convolution. Default: 3.
         pad_mode (str): The mode to fill padding which can be: "valid", "same" or "pad". Default: "valid".
-        pad (int): The pad value to fill. Default: 0.
+        pad (int): The pad value to be filled. Default: 0.
         pads (tuple): The pad list like (top, bottom, left, right). Default: (0, 0, 0, 0).
-        stride (int): The stride to apply conv filter. Default: 1.
+        stride (int): The stride to be applied to the convolution filter. Default: 1.
         dilation (int): Specifies the space to use between kernel elements. Default: 1.
         group (int): Splits input into groups. Default: 1.
...
...
@@ -410,12 +410,12 @@ class DepthwiseConv2dNativeBackpropInput(PrimitiveWithInfer):
     Args:
         channel_multiplier (int): The multiplier for the original output conv.
         kernel_size (int or tuple): The size of the conv kernel.
-        mode (int): 0 Math convolution, 1 cross-correlation convolution,
+        mode (int): Modes for different convolutions. 0 Math convolution, 1 cross-correlation convolution,
             2 deconvolution, 3 depthwise convolution. Default: 3.
-        pad_mode (str): "valid", "same", "pad" the mode to fill padding. Default: "valid".
-        pad (int): the pad value to fill. Default: 0.
+        pad_mode (str): Modes to fill padding. It could be "valid", "same", or "pad". Default: "valid".
+        pad (int): The pad value to be filled. Default: 0.
         pads (tuple): The pad list like (top, bottom, left, right). Default: (0, 0, 0, 0).
-        stride (int): the stride to apply conv filter. Default: 1.
+        stride (int): The stride to be applied to the convolution filter. Default: 1.
         dilation (int): Specifies the space to use between kernel elements. Default: 1.
         group (int): Splits input into groups. Default: 1.
...
...
mindspore/ops/operations/array_ops.py
...
...
@@ -292,7 +292,7 @@ class IsSubClass(PrimitiveWithInfer):
     Check whether one type is sub class of another type.

     Inputs:
-        - **sub_type** (mindspore.dtype) - The type to be check. Only constant value is allowed.
+        - **sub_type** (mindspore.dtype) - The type to be checked. Only constant value is allowed.
         - **type_** (mindspore.dtype) - The target type. Only constant value is allowed.

     Outputs:
...
...
@@ -326,7 +326,7 @@ class IsInstance(PrimitiveWithInfer):
     Check whether an object is an instance of a target type.

     Inputs:
-        - **inst** (Any Object) - The instance to be check. Only constant value is allowed.
+        - **inst** (Any Object) - The instance to be checked. Only constant value is allowed.
         - **type_** (mindspore.dtype) - The target type. Only constant value is allowed.

     Outputs:
...
...
@@ -1100,7 +1100,7 @@ class InvertPermutation(PrimitiveWithInfer):
         Only constant value is allowed.

     Outputs:
-        tuple[int]. the lenth is same as input.
+        tuple[int]. It has the same length as the input.

     Examples:
         >>> invert = P.InvertPermutation()
...
...
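The documented output (a tuple with the same length as the input) is the inverse permutation; a minimal pure-Python sketch of the semantics (illustrative only):

```python
def invert_permutation(perm):
    """Return the inverse permutation q, where q[perm[i]] = i."""
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = i
    return tuple(out)
```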
@@ -2355,15 +2355,15 @@ class DiagPart(PrimitiveWithInfer):

 class Eye(PrimitiveWithInfer):
     """
-    Creates a tensor with ones on the diagonal and zeros elsewhere.
+    Creates a tensor with ones on the diagonal and the rest of the elements zero.

     Inputs:
-        - **n** (int) - Number of rows of returned tensor
-        - **m** (int) - Number of columns of returned tensor
+        - **n** (int) - The number of rows of returned tensor
+        - **m** (int) - The number of columns of returned tensor
         - **t** (mindspore.dtype) - MindSpore's dtype, The data type of the returned tensor.

     Outputs:
-        Tensor, a tensor with ones on the diagonal and zeros elsewhere.
+        Tensor, a tensor with ones on the diagonal and the rest of the elements zero.

     Examples:
         >>> eye = P.Eye()
...
...
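The semantics are those of an identity-like (possibly rectangular) matrix; a one-line sketch with nested lists, illustrative only:

```python
def eye(n, m):
    """n-by-m matrix with ones on the main diagonal and zeros elsewhere."""
    return [[1 if i == j else 0 for j in range(m)] for i in range(n)]
```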
@@ -3453,8 +3453,8 @@ class InplaceUpdate(PrimitiveWithInfer):
     Inputs:
         - **x** (Tensor) - A tensor which to be inplace updated. It can be one of the following data types:
-          float32, float16, int32.
-        - **v** (Tensor) - A tensor of the same type as `x`. Same dimension size as `x` except
+          float32, float16 and int32.
+        - **v** (Tensor) - A tensor with the same type as `x` and the same dimension size as `x` except
           the first dimension, which must be the same as the size of `indices`.

     Outputs:
...
...
mindspore/ops/operations/control_ops.py
...
...
@@ -26,23 +26,24 @@ class ControlDepend(Primitive):
     Adds control dependency relation between source and destination operation.

     In many cases, we need to control the execution order of operations. ControlDepend is designed for this.
-    ControlDepend will indicate the execution engine to run the operations in specific order. ControlDepend
+    ControlDepend will instruct the execution engine to run the operations in a specific order. ControlDepend
     tells the engine that the destination operations should depend on the source operation which means the source
     operations should be executed before the destination.

     Note:
         This operation does not work in `PYNATIVE_MODE`.

     Args:
-        depend_mode (int): Use 0 for normal depend, 1 for depend on operations that used the parameter. Default: 0.
+        depend_mode (int): Use 0 for a normal dependency relation. Use 1 to depend on operations which use Parameter
+            as their input. Default: 0.

     Inputs:
         - **src** (Any) - The source input. It can be a tuple of operations output or a single operation output. We do
           not concern about the input data, but concern about the operation that generates the input data.
-          If `depend_mode = 1` is specified and the source input is parameter, we will try to find the operations that
+          If `depend_mode` is 1 and the source input is Parameter, we will try to find the operations that
           used the parameter as input.
         - **dst** (Any) - The destination input. It can be a tuple of operations output or a single operation output.
           We do not concern about the input data, but concern about the operation that generates the input data.
-          If `depend_mode = 1` is specified and the source input is parameter, we will try to find the operations that
+          If `depend_mode` is 1 and the source input is Parameter, we will try to find the operations that
           used the parameter as input.

     Outputs:
...
...
@@ -80,7 +81,7 @@ class GeSwitch(PrimitiveWithInfer):
     """
     Adds control switch to data.

-    Switch data to flow into false or true branch depend on the condition. If the condition is true,
+    Switch data flows into false or true branch depending on the condition. If the condition is true,
     the true branch will be activated, or vice versa.

     Inputs:
...
...
mindspore/ops/operations/debug_ops.py
...
...
@@ -248,14 +248,14 @@ class InsertGradientOf(PrimitiveWithInfer):

 class HookBackward(PrimitiveWithInfer):
     """
-    Used as tag to hook gradient in intermediate variables. Note that this function
+    This operation is used as a tag to hook gradient in intermediate variables. Note that this function
     is only supported in Pynative Mode.

     Note:
         The hook function should be defined like `hook_fn(grad) -> Tensor or None`,
-        which grad is the gradient passed to the primitive and gradient may be
-        modified and passed to nex primitive. the difference between hook function and
-        callback of InsertGradientOf is that hook function is executed in python
+        where grad is the gradient passed to the primitive and gradient may be
+        modified and passed to next primitive. The difference between a hook function and
+        callback of InsertGradientOf is that a hook function is executed in the python
         environment while callback will be parsed and added to the graph.

     Args:
...
...
mindspore/ops/operations/image_ops.py
...
...
@@ -29,9 +29,9 @@ class CropAndResize(PrimitiveWithInfer):
     In case that the output shape depends on crop_size, the crop_size should be constant.

     Args:
-        method (str): An optional string specifying the sampling method for resizing.
-            It can be either "bilinear" or "nearest" and default to "bilinear".
-        extrapolation_value (float): An optional float defaults to 0. Value used for extrapolation, when applicable.
+        method (str): An optional string that specifies the sampling method for resizing.
+            It can be either "bilinear" or "nearest". Default: "bilinear".
+        extrapolation_value (float): An optional float value used for extrapolation, if applicable. Default: 0.

     Inputs:
         - **x** (Tensor) - The input image must be a 4-D tensor of shape [batch, image_height, image_width, depth].
...
...
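As a rough illustration of the two documented sampling methods, here is a minimal single-channel sketch (hypothetical helper names; the real operator works on normalized boxes over 4-D tensors):

```python
def bilinear_sample(img, y, x):
    """Bilinear interpolation at fractional coordinates (y, x) on a 2D grid."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(img) - 1)
    x1 = min(x0 + 1, len(img[0]) - 1)
    dy, dx = y - y0, x - x0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bottom = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bottom * dy

def nearest_sample(img, y, x):
    """Nearest-neighbour sampling: round the coordinates to the closest pixel."""
    return img[min(round(y), len(img) - 1)][min(round(x), len(img[0]) - 1)]
```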
mindspore/ops/operations/math_ops.py
...
...
@@ -122,7 +122,7 @@ class TensorAdd(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -957,11 +957,11 @@ class InplaceAdd(PrimitiveWithInfer):
     Args:
         indices (Union[int, tuple]): Indices into the left-most dimension of x, and determines which rows of x
-            to add with v. It is a int or tuple, whose value is in [0, the first dimension size of x).
+            to add with v. It is an integer or a tuple, whose value is in [0, the first dimension size of x).

     Inputs:
         - **input_x** (Tensor) - The first input is a tensor whose data type is float16, float32 or int32.
-        - **input_v** (Tensor) - The second input is a tensor who has the same dimension sizes as x except
+        - **input_v** (Tensor) - The second input is a tensor that has the same dimension sizes as x except
           the first dimension, which must be the same as indices's size. It has the same data type with `input_x`.

     Outputs:
...
...
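The `indices`/`input_v` contract above can be sketched over nested lists (an illustrative stand-in, not the primitive itself): each index selects a row of `x`, and the matching row of `v` is added to it.

```python
def inplace_add(x, indices, v):
    """Return a copy of x (list of rows) with v's rows added at the given indices."""
    if isinstance(indices, int):
        indices = (indices,)
    assert len(indices) == len(v), "v's first dimension must match len(indices)"
    out = [row[:] for row in x]
    for idx, row in zip(indices, v):
        out[idx] = [a + b for a, b in zip(out[idx], row)]
    return out
```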
@@ -1015,7 +1015,7 @@ class InplaceSub(PrimitiveWithInfer):
     Args:
         indices (Union[int, tuple]): Indices into the left-most dimension of x, and determines which rows of x
-            to sub with v. It is a int or tuple, whose value is in [0, the first dimension size of x).
+            to subtract with v. It is an integer or a tuple, whose value is in [0, the first dimension size of x).

     Inputs:
         - **input_x** (Tensor) - The first input is a tensor whose data type is float16, float32 or int32.
...
...
@@ -1076,7 +1076,7 @@ class Sub(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1115,7 +1115,7 @@ class Mul(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1154,7 +1154,7 @@ class SquaredDifference(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1341,7 +1341,7 @@ class Pow(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1453,11 +1453,11 @@ class HistogramFixedWidth(PrimitiveWithInfer):
     Args:
         dtype (string): An optional attribute. Must be one of the following types: "int32", "int64". Default: "int32".
-        nbins (int): Number of histogram bins, the type is positive integer.
+        nbins (int): The number of histogram bins, the type is a positive integer.

     Inputs:
         - **x** (Tensor) - Numeric Tensor. Must be one of the following types: int32, float32, float16.
-        - **range** (Tensor) - Must have the same type as x. Shape [2] Tensor of same dtype as x.
+        - **range** (Tensor) - Must have the same data type as `x`, and the shape is [2].
           x <= range[0] will be mapped to hist[0], x >= range[1] will be mapped to hist[-1].

     Outputs:
...
...
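The binning rule above (equal-width bins over `[range[0], range[1])`, with out-of-range values clamping to the edge bins) can be sketched in plain Python (hypothetical helper, not the operator itself):

```python
def histogram_fixed_width(values, value_range, nbins):
    """Count values into nbins equal-width bins; clamp out-of-range to edge bins."""
    lo, hi = value_range
    width = (hi - lo) / nbins
    hist = [0] * nbins
    for v in values:
        if v <= lo:
            b = 0                                  # x <= range[0] -> hist[0]
        elif v >= hi:
            b = nbins - 1                          # x >= range[1] -> hist[-1]
        else:
            b = min(int((v - lo) / width), nbins - 1)
        hist[b] += 1
    return hist
```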
@@ -1593,7 +1593,7 @@ class Erfc(PrimitiveWithInfer):
     Computes the complementary error function of `input_x` element-wise.

     Inputs:
-        - **input_x** (Tensor) - The input tensor. The data type mast be float16 or float32.
+        - **input_x** (Tensor) - The input tensor. The data type must be float16 or float32.

     Outputs:
         Tensor, has the same shape and dtype as the `input_x`.
...
...
@@ -1627,7 +1627,7 @@ class Minimum(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1666,7 +1666,7 @@ class Maximum(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1705,7 +1705,7 @@ class RealDiv(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1744,13 +1744,13 @@ class Div(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
           a bool or a tensor whose data type is number or bool.
-        - **input_y** (Union[Tensor, Number, bool]) - When the first input is a tensor, The second input
-          could be a number or a bool, or a tensor whose data type is number or bool. When the first input
+        - **input_y** (Union[Tensor, Number, bool]) - When the first input is a tensor, The second input
+          could be a number, a bool, or a tensor whose data type is number or bool. When the first input
           is a number or a bool, the second input should be a tensor whose data type is number or bool.

     Outputs:
...
...
@@ -1758,7 +1758,7 @@ class Div(_MathBinaryOp):
     and the data type is the one with high precision or high digits among the two inputs.

     Raises:
-        ValueError: When `input_x` and `input_y` are not the same dtype.
+        ValueError: When `input_x` and `input_y` do not have the same dtype.

     Examples:
         >>> input_x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
...
...
@@ -1786,7 +1786,7 @@ class DivNoNan(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1799,7 +1799,7 @@ class DivNoNan(_MathBinaryOp):
     and the data type is the one with high precision or high digits among the two inputs.

     Raises:
-        ValueError: When `input_x` and `input_y` are not the same dtype.
+        ValueError: When `input_x` and `input_y` do not have the same dtype.

     Examples:
         >>> input_x = Tensor(np.array([-1.0, 0., 1.0, 5.0, 6.0]), mindspore.float32)
...
...
@@ -1822,14 +1822,14 @@ class DivNoNan(_MathBinaryOp):

 class FloorDiv(_MathBinaryOp):
     """
-    Divide the first input tensor by the second input tensor element-wise and rounds down to the closest integer.
+    Divide the first input tensor by the second input tensor element-wise and round down to the closest integer.

     Inputs of `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
     The inputs must be two tensors or one tensor and one scalar.
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1860,7 +1860,7 @@ class TruncateDiv(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1890,7 +1890,7 @@ class TruncateMod(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -1918,7 +1918,7 @@ class Mod(_MathBinaryOp):
     Inputs of `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
     The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors,
     both dtypes cannot be bool, and the shapes of them could be broadcast. When the inputs are one tensor
-    and one scalar, the scalar only could be a constant.
+    and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number]) - The first input is a number or a tensor whose data type is number.
...
...
@@ -1953,7 +1953,7 @@ class Floor(PrimitiveWithInfer):
     Round a tensor down to the closest integer element-wise.

     Inputs:
-        - **input_x** (Tensor) - The input tensor. It's element data type must be float.
+        - **input_x** (Tensor) - The input tensor. Its element data type must be float.

     Outputs:
         Tensor, has the same shape as `input_x`.
...
...
@@ -1979,14 +1979,14 @@ class Floor(PrimitiveWithInfer):

 class FloorMod(_MathBinaryOp):
     """
-    Compute element-wise remainder of division.
+    Compute the remainder of division element-wise.

     Inputs of `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
     The inputs must be two tensors or one tensor and one scalar.
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
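The Floor*/Truncate* family above differs only in rounding: FloorDiv and FloorMod round the quotient toward negative infinity, while TruncateDiv and TruncateMod round toward zero. Scalar Python makes the distinction concrete (an illustrative sketch; exact for the small values used here, though float division limits very large inputs):

```python
import math

def floor_div(a, b):
    return math.floor(a / b)            # quotient rounded toward negative infinity

def floor_mod(a, b):
    return a - floor_div(a, b) * b      # remainder takes the sign of the divisor

def truncate_div(a, b):
    return int(a / b)                   # quotient rounded toward zero

def truncate_mod(a, b):
    return a - truncate_div(a, b) * b   # remainder takes the sign of the dividend
```

The two families agree for positive operands and differ only when the signs differ.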
@@ -2045,7 +2045,7 @@ class Xdivy(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -2079,7 +2079,7 @@ class Xlogy(_MathBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -2241,7 +2241,7 @@ class Equal(_LogicBinaryOp):
     Inputs of `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
     The inputs must be two tensors or one tensor and one scalar.
     When the inputs are two tensors, the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number]) - The first input is a number or
...
...
@@ -2356,7 +2356,7 @@ class NotEqual(_LogicBinaryOp):
     Inputs of `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
     The inputs must be two tensors or one tensor and one scalar.
     When the inputs are two tensors, the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -2393,7 +2393,7 @@ class Greater(_LogicBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -2430,7 +2430,7 @@ class GreaterEqual(_LogicBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -2467,7 +2467,7 @@ class Less(_LogicBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -2504,7 +2504,7 @@ class LessEqual(_LogicBinaryOp):
     When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast.
-    When the inputs are one tensor and one scalar, the scalar only could be a constant.
+    When the inputs are one tensor and one scalar, the scalar could only be a constant.

     Inputs:
         - **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...
...
@@ -2570,7 +2570,7 @@ class LogicalAnd(_LogicBinaryOp):
     The inputs must be two tensors or one tensor and one bool.
     When the inputs are two tensors, the shapes of them could be broadcast,
     and the data types of them should be bool.
-    When the inputs are one tensor and one bool, the bool object only could be a constant,
+    When the inputs are one tensor and one bool, the bool object could only be a constant,
     and the data type of the tensor should be bool.

     Inputs:
...
...
@@ -2601,7 +2601,7 @@ class LogicalOr(_LogicBinaryOp):
     The inputs must be two tensors or one tensor and one bool.
     When the inputs are two tensors, the shapes of them could be broadcast,
     and the data types of them should be bool.
-    When the inputs are one tensor and one bool, the bool object only could be a constant,
+    When the inputs are one tensor and one bool, the bool object could only be a constant,
     and the data type of the tensor should be bool.

     Inputs:
...
...
@@ -2626,7 +2626,7 @@ class LogicalOr(_LogicBinaryOp):

 class IsNan(PrimitiveWithInfer):
     """
-    Judging which elements are nan for each position
+    Judge which elements are nan for each position.

     Inputs:
         - **input_x** (Tensor) - The input tensor.
...
...
@@ -2682,7 +2682,7 @@ class IsInf(PrimitiveWithInfer):

 class IsFinite(PrimitiveWithInfer):
     """
-    Judging which elements are finite for each position
+    Judge which elements are finite for each position.

     Inputs:
         - **input_x** (Tensor) - The input tensor.
...
...
@@ -2713,7 +2713,7 @@ class IsFinite(PrimitiveWithInfer):

 class FloatStatus(PrimitiveWithInfer):
     """
-    Determine if the elements contains nan, inf or -inf. `0` for normal, `1` for overflow.
+    Determine if the elements contain Not a Number(NaN), infinite or negative infinite. 0 for normal, 1 for overflow.

     Inputs:
         - **input_x** (Tensor) - The input tensor. The data type must be float16 or float32.
...
...
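The three predicates above can be mimicked elementwise with the standard `math` module (a semantic sketch only; the primitives operate on whole tensors):

```python
import math

def is_nan(xs):
    """IsNan semantics: True where the element is NaN."""
    return [math.isnan(x) for x in xs]

def is_finite(xs):
    """IsFinite semantics: True where the element is neither NaN nor +/-inf."""
    return [math.isfinite(x) for x in xs]

def float_status(xs):
    """FloatStatus semantics: [0.0] if every element is finite, else [1.0]."""
    return [0.0] if all(math.isfinite(x) for x in xs) else [1.0]
```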
mindspore/ops/operations/nn_ops.py
...
...
@@ -657,7 +657,7 @@ class FusedBatchNormEx(PrimitiveWithInfer):
         - **variance** (Tensor) - variance value, Tensor of shape :math:`(C,)`, data type: float32.

     Outputs:
-        Tuple of 6 Tensor, the normalized input, the updated parameters and reserve.
+        Tuple of 6 Tensors, the normalized input, the updated parameters and reserve.

         - **output_x** (Tensor) - The input of FusedBatchNormEx, same type and shape as the `input_x`.
         - **updated_scale** (Tensor) - Updated parameter scale, Tensor of shape :math:`(C,)`, data type: float32.
...
...
@@ -870,13 +870,13 @@ class Conv2D(PrimitiveWithInfer):
     Args:
         out_channel (int): The dimension of the output.
         kernel_size (Union[int, tuple[int]]): The kernel size of the 2D convolution.
-        mode (int): 0 Math convolution, 1 cross-correlation convolution,
+        mode (int): Modes for different convolutions. 0 Math convolution, 1 cross-correlation convolution,
             2 deconvolution, 3 depthwise convolution. Default: 1.
-        pad_mode (str): "valid", "same", "pad" the mode to fill padding. Default: "valid".
-        pad (Union(int, tuple[int])): The pad value to fill. Default: 0. If `pad` is one integer, the padding of
-            top, bottom, left and right is same, equal to pad. If `pad` is tuple with four integer, the padding
-            of top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3] with corresponding.
-        stride (Union(int, tuple[int])): The stride to apply conv filter. Default: 1.
+        pad_mode (str): Modes to fill padding. It could be "valid", "same", or "pad". Default: "valid".
+        pad (Union(int, tuple[int])): The pad value to be filled. Default: 0. If `pad` is an integer, the paddings of
+            top, bottom, left and right are the same, equal to pad. If `pad` is a tuple of four integers, the
+            padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly.
+        stride (Union(int, tuple[int])): The stride to be applied to the convolution filter. Default: 1.
         dilation (Union(int, tuple[int])): Specify the space to use between kernel elements. Default: 1.
         group (int): Split input into groups. Default: 1.
...
...
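The `pad_mode`, `pad`, `stride` and `dilation` parameters documented in the Conv2D hunk above interact through the conventional convolution output-size arithmetic. The following is a minimal sketch of that arithmetic for one spatial dimension, assuming the usual "same"/"valid" conventions (this is the textbook formula, not code extracted from the operator):

```python
import math

def conv_out_size(size, kernel, stride=1, dilation=1, pad_mode="valid", pad=0):
    """Spatial output size for one dimension under the usual pad_mode rules."""
    # Effective kernel extent once dilation spreads the taps apart.
    eff_k = dilation * (kernel - 1) + 1
    if pad_mode == "same":
        # "same": output spatial size only depends on the stride.
        return math.ceil(size / stride)
    if pad_mode == "valid":
        pad = 0  # "valid" ignores any explicit pad value.
    # "pad": `pad` cells added on each side of this dimension.
    return (size + 2 * pad - eff_k) // stride + 1
```

For a 3x3 kernel at stride 1, "same" keeps a 32-wide input at 32, "valid" shrinks it to 30, and `pad_mode="pad", pad=1` restores 32.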
@@ -997,25 +997,26 @@ class DepthwiseConv2dNative(PrimitiveWithInfer):
     Given an input tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})` where :math:`N` is the batch size and a
     filter tensor with kernel size :math:`(ks_{h}, ks_{w})`, containing :math:`C_{in} * \text{channel_multiplier}`
     convolutional filters of depth 1; it applies different filters to each input channel (channel_multiplier channels
-    for each with default value 1), then concatenates the results together. The output has
+    for each input channel has the default value 1), then concatenates the results together. The output has
     :math:`\text{in_channels} * \text{channel_multiplier}` channels.

     Args:
-        channel_multiplier (int): The multipiler for the original output conv. Its value must be greater than 0.
-        kernel_size (Union[int, tuple[int]]): The size of the conv kernel.
-        mode (int): 0 Math convolution, 1 cross-correlation convolution ,
+        channel_multiplier (int): The multipiler for the original output convolution. Its value must be greater than 0.
+        kernel_size (Union[int, tuple[int]]): The size of the convolution kernel.
+        mode (int): Modes for different convolutions. 0 Math convolution, 1 cross-correlation convolution ,
             2 deconvolution, 3 depthwise convolution. Default: 3.
-        pad_mode (str): "valid", "same", "pad" the mode to fill padding. Default: "valid".
-        pad (Union[int, tuple[int]]): The pad value to fill. If `pad` is one integer, the padding of
-            top, bottom, left and right is same, equal to pad. If `pad` is tuple with four integer, the padding
-            of top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3] with corresponding. Default: 0.
-        stride (Union[int, tuple[int]]): The stride to apply conv filter. Default: 1.
-        dilation (Union[int, tuple[int]]): Specifies the dilation rate to use for dilated convolution. Default: 1.
+        pad_mode (str): Modes to fill padding. It could be "valid", "same", or "pad". Default: "valid".
+        pad (Union[int, tuple[int]]): The pad value to be filled. If `pad` is an integer, the paddings of
+            top, bottom, left and right are the same, equal to pad. If `pad` is a tuple of four integers, the padding
+            of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly. Default: 0.
+        stride (Union[int, tuple[int]]): The stride to be applied to the convolution filter. Default: 1.
+        dilation (Union[int, tuple[int]]): Specifies the dilation rate to be used for the dilated convolution.
+            Default: 1.
         group (int): Splits input into groups. Default: 1.

     Inputs:
         - **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
-        - **weight** (Tensor) - Set size of kernel is :math:`(K_1, K_2)`, then the shape is
+        - **weight** (Tensor) - Set the size of kernel as :math:`(K_1, K_2)`, then the shape is
             :math:`(K, C_{in}, K_1, K_2)`, `K` must be 1.

     Outputs:
...
@@ -1398,14 +1399,15 @@ class Conv2DBackpropInput(PrimitiveWithInfer):
     Args:
         out_channel (int): The dimensionality of the output space.
         kernel_size (Union[int, tuple[int]]): The size of the convolution window.
-        pad_mode (str): "valid", "same", "pad" the mode to fill padding. Default: "valid".
-        pad (Union[int, tuple[int]]): The pad value to fill. Default: 0. If `pad` is one integer, the padding of
-            top, bottom, left and right is same, equal to pad. If `pad` is tuple with four integer, the padding
-            of top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3] with corresponding.
-        mode (int): 0 Math convolutiuon, 1 cross-correlation convolution ,
+        pad_mode (str): Modes to fill padding. It could be "valid", "same", or "pad". Default: "valid".
+        pad (Union[int, tuple[int]]): The pad value to be filled. Default: 0. If `pad` is an integer, the paddings of
+            top, bottom, left and right are the same, equal to pad. If `pad` is a tuple of four integers, the
+            padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly.
+        mode (int): Modes for different convolutions. 0 Math convolutiuon, 1 cross-correlation convolution ,
             2 deconvolution, 3 depthwise convolution. Default: 1.
-        stride (Union[int. tuple[int]]): The stride to apply conv filter. Default: 1.
-        dilation (Union[int. tuple[int]]): Specifies the dilation rate to use for dilated convolution. Default: 1.
+        stride (Union[int. tuple[int]]): The stride to be applied to the convolution filter. Default: 1.
+        dilation (Union[int. tuple[int]]): Specifies the dilation rate to be used for the dilated convolution.
+            Default: 1.
         group (int): Splits input into groups. Default: 1.

     Returns:
...
@@ -1842,7 +1844,7 @@ class L2Loss(PrimitiveWithInfer):
 class DataFormatDimMap(PrimitiveWithInfer):
     """
-    Returns the dimension index in the destination data format given the one in the source data format.
+    Returns the dimension index in the destination data format given in the source data format.

     Args:
         src_format (string): An optional value for source data format. Default: 'NHWC'.
...
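The dimension-index mapping that the DataFormatDimMap docstring above describes can be sketched by matching the axis letter of the source layout against the destination layout. A minimal sketch assuming the docstring's default source format 'NHWC' and destination 'NCHW' (the real operator works on tensors of indices; plain integers stand in here):

```python
def data_format_dim_map(dim, src_format="NHWC", dst_format="NCHW"):
    # Find which axis of dst_format carries the same letter (N/C/H/W)
    # as axis `dim` of src_format; negative indices wrap as usual.
    letter = src_format[dim % len(src_format)]
    return dst_format.index(letter)
```

For NHWC to NCHW this maps indices [0, 1, 2, 3] to [0, 2, 3, 1]: N stays first, H and W shift right, C moves to position 1.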
@@ -2336,7 +2338,7 @@ class DropoutDoMask(PrimitiveWithInfer):
     Inputs:
         - **input_x** (Tensor) - The input tensor.
         - **mask** (Tensor) - The mask to be applied on `input_x`, which is the output of `DropoutGenMask`. And the
-          shape of `input_x` must be same as the value of `DropoutGenMask`'s input `shape`. If input wrong `mask`,
+          shape of `input_x` must be the same as the value of `DropoutGenMask`'s input `shape`. If input wrong `mask`,
           the output of `DropoutDoMask` are unpredictable.
         - **keep_prob** (Tensor) - The keep rate, between 0 and 1, e.g. keep_prob = 0.9,
           means dropping out 10% of input units. The value of `keep_prob` is the same as the input `keep_prob` of
...
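The `keep_prob` arithmetic described above ("keep_prob = 0.9 means dropping out 10% of input units") follows the standard masked-dropout convention: dropped units are zeroed and kept units are rescaled by 1/keep_prob. A hedged sketch (the real operator consumes a bit-packed mask produced by DropoutGenMask; a plain 0/1 list stands in for it here):

```python
def dropout_do_mask(x, mask, keep_prob):
    # Zero out dropped units and rescale kept ones by 1/keep_prob so the
    # expected value of each unit is unchanged after dropout.
    scale = 1.0 / keep_prob
    return [xi * mi * scale for xi, mi in zip(x, mask)]
```

With keep_prob = 0.5, kept units are doubled so that the expectation over random masks matches the original activation.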
@@ -2494,10 +2496,10 @@ class Gelu(PrimitiveWithInfer):
     Gaussian Error Linear Units activation function.

-    GeLU is described in the paper `Gaussian Error Linear Units (GELUs) <https://arxiv.org/abs/1606.08415>`_
-    And also please refer to `BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
+    GeLU is described in the paper `Gaussian Error Linear Units (GELUs) <https://arxiv.org/abs/1606.08415>`_.
+    And also please refer to `BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
     <https://arxiv.org/abs/1810.04805>`_.

-    Defined as follows:
+    Gelu is defined as follows:

     .. math::
         \text{output} = 0.5 * x * (1 + erf(x / \sqrt{2})),
...
@@ -2505,7 +2507,7 @@ class Gelu(PrimitiveWithInfer):
     where :math:`erf` is the "Gauss error function" .

     Inputs:
-        - **input_x** (Tensor) - Input to compute the Gelu. With data type of float16 or float32.
+        - **input_x** (Tensor) - Input to compute the Gelu with data type of float16 or float32.

     Outputs:
         Tensor, with the same type and shape as input.
...
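The formula in the Gelu docstring above, output = 0.5 * x * (1 + erf(x / sqrt(2))), can be checked numerically with the standard library's `math.erf` (a scalar sketch, not the tensor operator):

```python
import math

def gelu(x):
    # output = 0.5 * x * (1 + erf(x / sqrt(2))), as in the docstring formula.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```

The sketch makes the asymptotic behavior visible: gelu(x) approaches x for large positive x and 0 for large negative x, with gelu(0) = 0.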
@@ -2534,8 +2536,8 @@ class GetNext(PrimitiveWithInfer):
     Returns the next element in the dataset queue.

     Note:
-        GetNext op needs to be associated with network and also depends on the init_dataset interface,
-        it can't be used directly as a single op.
+        The GetNext operation needs to be associated with network and it also depends on the init_dataset interface,
+        it can't be used directly as a single operation.
         For details, please refer to `nn.DataWrapper` source code.

     Args:
...
@@ -3057,7 +3059,7 @@ class Adam(PrimitiveWithInfer):
 class FusedSparseAdam(PrimitiveWithInfer):
     r"""
-    Merge the duplicate value of the gradient and then updates parameters by Adaptive Moment Estimation (Adam)
+    Merge the duplicate value of the gradient and then update parameters by Adaptive Moment Estimation (Adam)
     algorithm. This operator is used when the gradient is sparse.

     The Adam algorithm is proposed in `Adam: A Method for Stochastic Optimization <https://arxiv.org/abs/1412.6980>`_.
...
@@ -3092,22 +3094,22 @@ class FusedSparseAdam(PrimitiveWithInfer):
             If true, update the gradients without using NAG. Default: False.

     Inputs:
-        - **var** (Parameter) - Parameters to be updated. With float32 data type.
-        - **m** (Parameter) - The 1st moment vector in the updating formula, has the same type as `var`. With
-          float32 data type.
-        - **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients
-          with the same type as `var`. With float32 data type.
-        - **beta1_power** (Tensor) - :math:`beta_1^t` in the updating formula. With float32 data type.
-        - **beta2_power** (Tensor) - :math:`beta_2^t` in the updating formula. With float32 data type.
+        - **var** (Parameter) - Parameters to be updated with float32 data type.
+        - **m** (Parameter) - The 1st moment vector in the updating formula, has the same type as `var` with
+          float32 data type.
+        - **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients, has the same type as
+          `var` with float32 data type.
+        - **beta1_power** (Tensor) - :math:`beta_1^t` in the updating formula with float32 data type.
+        - **beta2_power** (Tensor) - :math:`beta_2^t` in the updating formula with float32 data type.
         - **lr** (Tensor) - :math:`l` in the updating formula. With float32 data type.
-        - **beta1** (Tensor) - The exponential decay rate for the 1st moment estimations. With float32 data type.
-        - **beta2** (Tensor) - The exponential decay rate for the 2nd moment estimations. With float32 data type.
-        - **epsilon** (Tensor) - Term added to the denominator to improve numerical stability. With float32 data type.
-        - **gradient** (Tensor) - Gradient value. With float32 data type.
-        - **indices** (Tensor) - Gradient indices. With int32 data type.
+        - **beta1** (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type.
+        - **beta2** (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type.
+        - **epsilon** (Tensor) - Term added to the denominator to improve numerical stability with float32 data type.
+        - **gradient** (Tensor) - Gradient value with float32 data type.
+        - **indices** (Tensor) - Gradient indices with int32 data type.

     Outputs:
-        Tuple of 3 Tensor, this operator will update the input parameters directly, the outputs are useless.
+        Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

         - **var** (Tensor) - A Tensor with shape (1,).
         - **m** (Tensor) - A Tensor with shape (1,).
...
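The phrase "merge the duplicate value of the gradient and then update parameters" in the FusedSparseAdam docstring can be sketched as a two-step procedure: sum the gradient rows that share an index, then run a standard Adam step only on those rows. This is a hedged illustration using the textbook Adam formulas (Kingma & Ba), with scalar per-index parameters instead of tensor rows; it is not the fused operator's implementation:

```python
def merge_duplicate_indices(indices, grads):
    # Sum gradient values that target the same parameter index.
    merged = {}
    for i, g in zip(indices, grads):
        merged[i] = merged.get(i, 0.0) + g
    return merged

def sparse_adam_step(var, m, v, grads, indices, lr=0.01,
                     beta1=0.9, beta2=0.999, eps=1e-8, t=1):
    # Only the touched indices are updated; all other rows are left alone.
    for i, g in merge_duplicate_indices(indices, grads).items():
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g * g
        m_hat = m[i] / (1 - beta1 ** t)   # bias-corrected 1st moment
        v_hat = v[i] / (1 - beta2 ** t)   # bias-corrected 2nd moment
        var[i] -= lr * m_hat / (v_hat ** 0.5 + eps)
    return var
```

Two gradients for the same index behave exactly like one merged gradient, which is the point of the merging pre-step.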
@@ -3189,7 +3191,7 @@ class FusedSparseAdam(PrimitiveWithInfer):
 class FusedSparseLazyAdam(PrimitiveWithInfer):
     r"""
-    Merge the duplicate value of the gradient and then updates parameters by Adaptive Moment Estimation (Adam)
+    Merge the duplicate value of the gradient and then update parameters by Adaptive Moment Estimation (Adam)
     algorithm. This operator is used when the gradient is sparse. The behavior is not equivalent to the
     original Adam algorithm, as only the current indices parameters will be updated.
...
@@ -3225,22 +3227,22 @@ class FusedSparseLazyAdam(PrimitiveWithInfer):
             If true, update the gradients without using NAG. Default: False.

     Inputs:
-        - **var** (Parameter) - Parameters to be updated. With float32 data type.
-        - **m** (Parameter) - The 1st moment vector in the updating formula, has the same type as `var`. With
-          float32 data type.
-        - **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients
-          with the same type as `var`. With float32 data type.
-        - **beta1_power** (Tensor) - :math:`beta_1^t` in the updating formula. With float32 data type.
-        - **beta2_power** (Tensor) - :math:`beta_2^t` in the updating formula. With float32 data type.
-        - **lr** (Tensor) - :math:`l` in the updating formula. With float32 data type.
-        - **beta1** (Tensor) - The exponential decay rate for the 1st moment estimations. With float32 data type.
-        - **beta2** (Tensor) - The exponential decay rate for the 2nd moment estimations. With float32 data type.
-        - **epsilon** (Tensor) - Term added to the denominator to improve numerical stability. With float32 data type.
-        - **gradient** (Tensor) - Gradient value. With float32 data type.
-        - **indices** (Tensor) - Gradient indices. With int32 data type.
+        - **var** (Parameter) - Parameters to be updated with float32 data type.
+        - **m** (Parameter) - The 1st moment vector in the updating formula, has the same type as `var` with
+          float32 data type.
+        - **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients, has the same type as
+          `var` with float32 data type.
+        - **beta1_power** (Tensor) - :math:`beta_1^t` in the updating formula with float32 data type.
+        - **beta2_power** (Tensor) - :math:`beta_2^t` in the updating formula with float32 data type.
+        - **lr** (Tensor) - :math:`l` in the updating formula with float32 data type.
+        - **beta1** (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type.
+        - **beta2** (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type.
+        - **epsilon** (Tensor) - Term added to the denominator to improve numerical stability with float32 data type.
+        - **gradient** (Tensor) - Gradient value with float32 data type.
+        - **indices** (Tensor) - Gradient indices with int32 data type.

     Outputs:
-        Tuple of 3 Tensor, this operator will update the input parameters directly, the outputs are useless.
+        Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

         - **var** (Tensor) - A Tensor with shape (1,).
         - **m** (Tensor) - A Tensor with shape (1,).
...
@@ -3418,7 +3420,7 @@ class FusedSparseFtrl(PrimitiveWithInfer):
 class FusedSparseProximalAdagrad(PrimitiveWithInfer):
     r"""
-    Merge the duplicate value of the gradient and then Updates relevant entries according to the proximal adagrad
+    Merge the duplicate value of the gradient and then update relevant entries according to the proximal adagrad
     algorithm.

     .. math::
...
@@ -3434,7 +3436,7 @@ class FusedSparseProximalAdagrad(PrimitiveWithInfer):
         RuntimeError exception will be thrown when the data type conversion of Parameter is required.

     Args:
-        use_locking (bool): If true, the var and accumulation tensors will be protected from being updated.
+        use_locking (bool): If true, the variable and accumulation tensors will be protected from being updated.
             Default: False.

     Inputs:
...
@@ -3448,7 +3450,7 @@ class FusedSparseProximalAdagrad(PrimitiveWithInfer):
           must be int32.

     Outputs:
-        Tuple of 2 Tensor, this operator will update the input parameters directly, the outputs are useless.
+        Tuple of 2 Tensors, this operator will update the input parameters directly, the outputs are useless.

         - **var** (Tensor) - A Tensor with shape (1,).
         - **accum** (Tensor) - A Tensor with shape (1,).
...
@@ -3524,9 +3526,9 @@ class KLDivLoss(PrimitiveWithInfer):
     .. math::
         \ell(x, y) = \begin{cases}
-        L, & \text{if reduction} = \text{'none';}\\
-        \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\
-        \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.}
+        L, & \text{if reduction} = \text{`none';}\\
+        \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\
+        \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.}
         \end{cases}

     Args:
...
@@ -3535,10 +3537,10 @@ class KLDivLoss(PrimitiveWithInfer):
     Inputs:
         - **input_x** (Tensor) - The input Tensor. The data type must be float32.
-        - **input_y** (Tensor) - The label Tensor which has same shape as `input_x`. The data type must be float32.
+        - **input_y** (Tensor) - The label Tensor which has the same shape as `input_x`. The data type must be float32.

     Outputs:
-        Tensor or Scalar, if `reduction` is 'none', then output is a tensor and same shape as `input_x`.
+        Tensor or Scalar, if `reduction` is 'none', then output is a tensor and has the same shape as `input_x`.
         Otherwise it is a scalar.

     Examples:
...
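The reduction cases in the KLDivLoss formula above ('none', 'mean', 'sum') can be sketched over a list of pointwise terms. The pointwise term used here, L_i = y_i * (log y_i - x_i) with x holding log-probabilities, is the conventional KL-divergence loss formulation and is an assumption — this hunk only shows the reduction cases, not the operator's pointwise formula:

```python
import math

def kl_div_loss(x, y, reduction="mean"):
    # Assumed pointwise term: L_i = y_i * (log(y_i) - x_i),
    # where x holds log-probabilities and y holds probabilities.
    L = [yi * (math.log(yi) - xi) for xi, yi in zip(x, y)]
    if reduction == "none":
        return L            # same shape as the input
    if reduction == "sum":
        return sum(L)       # scalar
    return sum(L) / len(L)  # "mean": scalar
```

When the prediction already matches the label exactly, every term is zero under all three reductions.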
@@ -5151,15 +5153,15 @@ class SparseApplyFtrlV2(PrimitiveWithInfer):
 class ConfusionMulGrad(PrimitiveWithInfer):
     """
-    `output0` is the result of which input0 dot multily input1.
-    `output1` is the result of which input0 dot multily input1, then reducesum it.
+    `output0` is the dot product result of input0 and input1.
+    `output1` is the dot product result of input0 and input1, then apply the reducesum operation on it.

     Args:
         axis (Union[int, tuple[int], list[int]]): The dimensions to reduce.
             Default:(), reduce all dimensions. Only constant value is allowed.
         keep_dims (bool):
-            - If true, keep these reduced dimensions and the length is 1.
+            - If true, keep these reduced dimensions and the length as 1.
             - If false, don't keep these dimensions. Default:False.

     Inputs:
...
@@ -5167,8 +5169,8 @@ class ConfusionMulGrad(PrimitiveWithInfer):
         - **input_1** (Tensor) - The input Tensor.
         - **input_2** (Tensor) - The input Tensor.

-    outputs:
-        - **output_0** (Tensor) - The same shape with `input0`.
+    Outputs:
+        - **output_0** (Tensor) - The same shape as `input0`.
         - **output_1** (Tensor)

             - If axis is (), and keep_dims is false, the output is a 0-D array representing
...
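The two outputs described in the ConfusionMulGrad hunks can be sketched over flat lists. Since output_0 keeps the shape of input0, the "dot product" in the docstring is an element-wise multiply; output_1 is its reduce-sum with the axis=() default (the `keep_dims` behavior is not modeled in this sketch):

```python
def confusion_mul_grad(input1, input2):
    # output0: element-wise product, same shape as the inputs.
    output0 = [a * b for a, b in zip(input1, input2)]
    # output1: reduce-sum of the product over all dimensions
    # (the axis=() default described in the Args section).
    output1 = sum(output0)
    return output0, output1
```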
@@ -5462,7 +5464,7 @@ class BasicLSTMCell(PrimitiveWithInfer):
         - **w** (Tensor) - Weight. Tensor of shape (`input_size + hidden_size`, `4 x hidden_size`).
           The data type must be float16 or float32.
         - **b** (Tensor) - Bias. Tensor of shape (`4 x hidden_size`).
-          The data type must be same as `c`.
+          The data type must be the same as `c`.

     Outputs:
         - **ct** (Tensor) - Forward :math:`c_t` cache at moment `t`. Tensor of shape (`batch_size`, `hidden_size`).
...
@@ -5532,18 +5534,18 @@ class BasicLSTMCell(PrimitiveWithInfer):
 class InTopK(PrimitiveWithInfer):
     r"""
-    Says whether the targets are in the top `k` predictions.
+    Whether the targets are in the top `k` predictions.

     Args:
-        k (int): Special the number of top elements to look at for computing precision.
+        k (int): Specify the number of top elements to be used for computing precision.

     Inputs:
-        - **x1** (Tensor) - A 2D Tensor define the predictions of a batch of samples with float16 or float32 data type.
-        - **x2** (Tensor) - A 1D Tensor define the labels of a batch of samples with int32 data type.
+        - **x1** (Tensor) - A 2D Tensor defines the predictions of a batch of samples with float16 or float32 data type.
+        - **x2** (Tensor) - A 1D Tensor defines the labels of a batch of samples with int32 data type.

     Outputs:
-        Tensor, which is 1 dimension of type bool and has same shape with `x2`. for label of sample `i` in `x2`,
-        if label in first `k` predictions for sample `i` in `x1`, then the value is True else False.
+        Tensor has 1 dimension of type bool and the same shape with `x2`. For labeling sample `i` in `x2`,
+        if the label in the first `k` predictions for sample `i` is in `x1`, then the value is True, otherwise False.

     Examples:
         >>> x1 = Tensor(np.array([[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]]), mindspore.float32)
...
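The semantics described in the InTopK Outputs paragraph can be checked against the docstring's own example values with a plain-Python sketch (tie-breaking between equal scores is an assumption of this sketch, not documented by the hunk):

```python
def in_top_k(x1, x2, k):
    result = []
    for predictions, label in zip(x1, x2):
        # Indices of the k largest prediction scores for this sample.
        top_k = sorted(range(len(predictions)),
                       key=lambda i: predictions[i], reverse=True)[:k]
        result.append(label in top_k)
    return result

# The example values from the docstring above.
x1 = [[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]]
x2 = [1, 3]
```

With k = 3, label 1 is among the top-3 score indices of the first row (indices 1, 4, 2) while label 3 is not among those of the second row (indices 1, 4, 0), so the result is [True, False].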
mindspore/ops/operations/other_ops.py
...
@@ -244,7 +244,7 @@ class IOU(PrimitiveWithInfer):
     Args:
         mode (string): The mode is used to specify the calculation method,
-            now support 'iou' (intersection over union) or 'iof'
+            now supporting 'iou' (intersection over union) or 'iof'
             (intersection over foreground) mode. Default: 'iou'.

     Inputs:
...
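The two modes named in the IOU hunk differ only in the denominator: 'iou' normalizes the intersection by the union of both boxes, 'iof' by the foreground area alone. A sketch for a single box pair, assuming corner-format boxes (x1, y1, x2, y2) and treating the second box as the foreground — both of these are assumptions of the sketch, not stated in the hunk:

```python
def box_overlap(anchor, gt, mode="iou"):
    ax1, ay1, ax2, ay2 = anchor
    gx1, gy1, gx2, gy2 = gt
    # Intersection rectangle, clamped to zero when the boxes are disjoint.
    iw = max(0.0, min(ax2, gx2) - max(ax1, gx1))
    ih = max(0.0, min(ay2, gy2) - max(ay1, gy1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    if mode == "iof":
        # Intersection over foreground: normalize by one box's area.
        return inter / area_g
    # Intersection over union: normalize by the combined area.
    return inter / (area_a + area_g - inter)
```

Two unit-offset 2x2 boxes overlap in a 1x1 square: IoU is 1/7 (intersection 1 over union 7), IoF is 1/4.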
@@ -350,7 +350,7 @@ class Partial(Primitive):
 class Depend(Primitive):
     """
-    Depend is used for process side-effect operations.
+    Depend is used for processing side-effect operations.

     Inputs:
         - **value** (Tensor) - the real value to return for depend operator.
...
mindspore/ops/operations/random_ops.py
...
@@ -131,9 +131,9 @@ class Gamma(PrimitiveWithInfer):
     Inputs:
         - **shape** (tuple) - The shape of random tensor to be generated. Only constant value is allowed.
         - **alpha** (Tensor) - The α distribution parameter.
-          It is also known as the shape parameter. With float32 data type.
+          It is also known as the shape parameter with float32 data type.
         - **beta** (Tensor) - The β distribution parameter.
-          It is also known as the scale parameter. With float32 data type.
+          It is also known as the scale parameter with float32 data type.

     Outputs:
         Tensor. The shape should be the broadcasted shape of Input "shape" and shapes of alpha and beta.
...
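The α (shape) and β (scale) reading of the Gamma docstring above matches the standard library's `random.gammavariate(alpha, beta)` parameterization, under which the distribution mean is alpha * beta. A tiny pure-Python sketch (reading β as scale is an assumption carried over from the docstring wording; this is not the MindSpore operator):

```python
import random

def gamma_samples(n, alpha, beta, seed=0):
    rng = random.Random(seed)
    # gammavariate(alpha, beta): shape alpha, scale beta, mean alpha * beta.
    return [rng.gammavariate(alpha, beta) for _ in range(n)]

samples = gamma_samples(2000, alpha=3.0, beta=2.0)
mean = sum(samples) / len(samples)
```

With alpha = 3 and beta = 2 the sample mean should land near 6; all draws are strictly positive.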
mindspore/parallel/algo_parameter_config.py
...
@@ -130,14 +130,14 @@ def set_algo_parameters(**kwargs):
     Set algo parameter config.

     Note:
-        Attribute name is needed.
+        The attribute name is required.

     Args:
-        tensor_slice_align_enable (bool): Whether checking tensor slice shape for MatMul. Default: False
+        tensor_slice_align_enable (bool): Whether to check the shape of tensor slice of MatMul. Default: False
         tensor_slice_align_size (int): The minimum tensor slice shape of MatMul, the value must be in [1, 1024].
             Default: 16
         fully_use_devices (bool): Whether ONLY generating strategies that fully use all available devices. Default: True
-        elementwise_op_strategy_follow (bool): Whether the elementwise operator have the same strategies as its
+        elementwise_op_strategy_follow (bool): Whether the elementwise operator has the same strategies as its
             subsequent operators. Default: False

     Raises:
...
@@ -155,7 +155,7 @@ def get_algo_parameters(attr_key):
     Get algo parameter config attributes.

     Note:
-        Return value according to the attribute value.
+        Returns the specified attribute value.

     Args:
         attr_key (str): The key of the attribute.
...