Commit 76e544fa authored by mindspore-ci-bot, committed by Gitee

!5894 enhance ops API comment part3

Merge pull request !5894 from Simson/push-to-opensource
@@ -111,14 +111,14 @@ class Conv2d(_Conv):
2D convolution layer.
Applies a 2D convolution over an input tensor which is typically of shape :math:`(N, C_{in}, H_{in}, W_{in})`,
where :math:`N` is the batch size, :math:`C_{in}` is the channel number, and :math:`H_{in}` and :math:`W_{in}` are the height and width.
For each batch of shape :math:`(C_{in}, H_{in}, W_{in})`, the formula is defined as:
.. math::
    out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,
where :math:`ccor` is the cross-correlation operator, :math:`C_{in}` is the input channel number, :math:`j` ranges
from :math:`0` to :math:`C_{out} - 1`, :math:`W_{ij}` corresponds to the :math:`i`-th channel of the :math:`j`-th
filter and :math:`out_{j}` corresponds to the :math:`j`-th channel of the output. :math:`W_{ij}` is a slice
of the kernel and has shape :math:`(\text{ks_h}, \text{ks_w})`, where :math:`\text{ks_h}` and
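The per-channel cross-correlation and summation in the formula above can be sketched in plain NumPy. This is an illustrative reference with stride 1 and no padding, not the MindSpore kernel; `ccor2d` and `conv2d_single` are hypothetical helper names.

```python
import numpy as np

def ccor2d(w, x):
    """Valid-mode cross-correlation of a 2-D kernel w over a 2-D plane x."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(w * x[r:r + kh, c:c + kw])
    return out

def conv2d_single(x, weight, bias):
    """One batch: x is (C_in, H, W), weight is (C_out, C_in, ks_h, ks_w),
    bias is (C_out,). Implements out_j = sum_i ccor(W_ij, X_i) + b_j."""
    c_out = weight.shape[0]
    planes = [sum(ccor2d(weight[j, i], x[i]) for i in range(x.shape[0])) + bias[j]
              for j in range(c_out)]
    return np.stack(planes)
```

With a 2-channel 3x3 input and a single all-ones 2x2 filter, each output value is the sum of a 2x2 window across both channels.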
@@ -162,8 +162,8 @@ class Conv2d(_Conv):
Tensor borders. `padding` should be greater than or equal to 0.
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. If `padding` is one integer,
the paddings of top, bottom, left and right are the same, equal to padding. If `padding` is a tuple
with four integers, the paddings of top, bottom, left and right will be equal to padding[0],
padding[1], padding[2], and padding[3] respectively. Default: 0.
dilation (Union[int, tuple[int]]): The data type is int or a tuple of 2 integers. Specifies the dilation rate
to use for dilated convolution. If set to :math:`k > 1`, there will
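For the "pad" mode, the output spatial size follows the standard convolution arithmetic once padding, stride and dilation are accounted for. A small sketch (`conv_out_size` is a hypothetical helper, not part of the MindSpore API):

```python
from math import floor

def conv_out_size(size, kernel, stride=1, pad=(0, 0), dilation=1):
    """Output length along one spatial axis in "pad" mode, using the standard
    convolution arithmetic (a hypothetical helper, not MindSpore API)."""
    effective_kernel = dilation * (kernel - 1) + 1  # kernel span after dilation
    return floor((size + pad[0] + pad[1] - effective_kernel) / stride) + 1
```

Note how a dilation of :math:`k > 1` widens the kernel span and shrinks the output, while symmetric padding of 1 keeps a 3-wide kernel output at the input size.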
@@ -472,8 +472,8 @@ class Conv2dTranspose(_Conv):
- valid: Adopts the way of discarding.
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. If `padding` is one integer,
the paddings of top, bottom, left and right are the same, equal to padding. If `padding` is a tuple
with four integers, the paddings of top, bottom, left and right will be equal to padding[0],
padding[1], padding[2], and padding[3] respectively. Default: 0.
dilation (Union[int, tuple[int]]): The data type is int or a tuple of 2 integers. Specifies the dilation rate
to use for dilated convolution. If set to :math:`k > 1`, there will
@@ -856,8 +856,8 @@ class DepthwiseConv2d(Cell):
Tensor borders. `padding` should be greater than or equal to 0.
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. If `padding` is one integer,
the paddings of top, bottom, left and right are the same, equal to padding. If `padding` is a tuple
with four integers, the paddings of top, bottom, left and right will be equal to padding[0],
padding[1], padding[2], and padding[3] respectively. Default: 0.
dilation (Union[int, tuple[int]]): The data type is int or a tuple of 2 integers. Specifies the dilation rate
to use for dilated convolution. If set to :math:`k > 1`, there will
......
@@ -284,12 +284,12 @@ class Conv2DBackpropFilter(PrimitiveWithInfer):
Args:
out_channel (int): The dimensionality of the output space.
kernel_size (Union[int, tuple[int]]): The size of the convolution window.
pad_mode (str): The mode to fill padding. It can be "valid", "same" or "pad". Default: "valid".
pad (int): The pad value to be filled. Default: 0.
mode (int): Modes for different convolutions. 0: math convolution, 1: cross-correlation convolution,
2: deconvolution, 3: depthwise convolution. Default: 1.
stride (tuple): The stride to be applied to the convolution filter. Default: (1, 1).
dilation (tuple): Specifies the dilation rate to be used for the dilated convolution. Default: (1, 1, 1, 1).
group (int): Splits input into groups. Default: 1.
Returns:
@@ -349,12 +349,12 @@ class DepthwiseConv2dNativeBackpropFilter(PrimitiveWithInfer):
Args:
channel_multiplier (int): The multiplier for the original output convolution.
kernel_size (int or tuple): The size of the convolution kernel.
mode (int): Modes for different convolutions. 0: math convolution, 1: cross-correlation convolution,
2: deconvolution, 3: depthwise convolution. Default: 3.
pad_mode (str): The mode to fill padding. It can be "valid", "same" or "pad". Default: "valid".
pad (int): The pad value to be filled. Default: 0.
pads (tuple): The pad list like (top, bottom, left, right). Default: (0, 0, 0, 0).
stride (int): The stride to be applied to the convolution filter. Default: 1.
dilation (int): Specifies the space to use between kernel elements. Default: 1.
group (int): Splits input into groups. Default: 1.
@@ -410,12 +410,12 @@ class DepthwiseConv2dNativeBackpropInput(PrimitiveWithInfer):
Args:
channel_multiplier (int): The multiplier for the original output convolution.
kernel_size (int or tuple): The size of the convolution kernel.
mode (int): Modes for different convolutions. 0: math convolution, 1: cross-correlation convolution,
2: deconvolution, 3: depthwise convolution. Default: 3.
pad_mode (str): The mode to fill padding. It can be "valid", "same" or "pad". Default: "valid".
pad (int): The pad value to be filled. Default: 0.
pads (tuple): The pad list like (top, bottom, left, right). Default: (0, 0, 0, 0).
stride (int): The stride to be applied to the convolution filter. Default: 1.
dilation (int): Specifies the space to use between kernel elements. Default: 1.
group (int): Splits input into groups. Default: 1.
......
@@ -292,7 +292,7 @@ class IsSubClass(PrimitiveWithInfer):
Check whether one type is a subclass of another type.
Inputs:
- **sub_type** (mindspore.dtype) - The type to be checked. Only constant value is allowed.
- **type_** (mindspore.dtype) - The target type. Only constant value is allowed.
Outputs:
@@ -326,7 +326,7 @@ class IsInstance(PrimitiveWithInfer):
Check whether an object is an instance of a target type.
Inputs:
- **inst** (Any Object) - The instance to be checked. Only constant value is allowed.
- **type_** (mindspore.dtype) - The target type. Only constant value is allowed.
Outputs:
@@ -1100,7 +1100,7 @@ class InvertPermutation(PrimitiveWithInfer):
Only constant value is allowed.
Outputs:
tuple[int]. It has the same length as the input.
Examples:
>>> invert = P.InvertPermutation()
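The behavior can be mimicked with a short NumPy sketch: the output places each index `i` at position `perm[i]`, so it is the inverse permutation. This is an illustrative analogue (`invert_permutation` is a hypothetical helper), not the primitive's implementation.

```python
import numpy as np

def invert_permutation(perm):
    """Return the inverse permutation: out[perm[i]] = i."""
    out = np.empty(len(perm), dtype=int)
    out[list(perm)] = np.arange(len(perm))  # scatter positions by value
    return tuple(out.tolist())
```

Applying it twice recovers the original permutation.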
@@ -2355,15 +2355,15 @@ class DiagPart(PrimitiveWithInfer):
class Eye(PrimitiveWithInfer):
"""
Creates a tensor with ones on the diagonal and zeros elsewhere.
Inputs:
- **n** (int) - The number of rows of the returned tensor.
- **m** (int) - The number of columns of the returned tensor.
- **t** (mindspore.dtype) - MindSpore's dtype, the data type of the returned tensor.
Outputs:
Tensor, a tensor with ones on the diagonal and zeros elsewhere.
Examples:
>>> eye = P.Eye()
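The `n`/`m` semantics above match a plain identity-like matrix builder; a minimal NumPy sketch of the same idea (illustrative only, not the primitive):

```python
import numpy as np

def eye(n, m, dtype=np.float32):
    """n-by-m matrix with ones on the main diagonal, zeros elsewhere."""
    out = np.zeros((n, m), dtype=dtype)
    for i in range(min(n, m)):  # diagonal length is min(n, m)
        out[i, i] = 1
    return out
```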
@@ -3453,8 +3453,8 @@ class InplaceUpdate(PrimitiveWithInfer):
Inputs:
- **x** (Tensor) - A tensor to be updated in place. It can be one of the following data types:
float32, float16 and int32.
- **v** (Tensor) - A tensor with the same type as `x` and the same dimension sizes as `x` except
the first dimension, which must be the same as the size of `indices`.
Outputs:
......
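The update semantics described above (rows of `x` selected by `indices` are replaced by the corresponding rows of `v`) can be sketched out of place with NumPy; `inplace_update` is a hypothetical helper, not the MindSpore op:

```python
import numpy as np

def inplace_update(x, v, indices):
    """Return a copy of x where row x[indices[k]] is replaced by v[k]."""
    out = x.copy()
    for k, idx in enumerate(indices):
        out[idx] = v[k]  # v's first dimension matches len(indices)
    return out
```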
@@ -26,23 +26,24 @@ class ControlDepend(Primitive):
Adds a control dependency relation between source and destination operations.
In many cases, we need to control the execution order of operations. ControlDepend is designed for this.
ControlDepend will instruct the execution engine to run the operations in a specific order. ControlDepend
tells the engine that the destination operations should depend on the source operation, which means the source
operations should be executed before the destination.
Note:
This operation does not work in `PYNATIVE_MODE`.
Args:
depend_mode (int): Use 0 for a normal dependency relation. Use 1 to depend on operations that take a Parameter
as input. Default: 0.
Inputs:
- **src** (Any) - The source input. It can be a tuple of operation outputs or a single operation output. We are
not concerned with the input data, but with the operation that generates the input data.
If `depend_mode` is 1 and the source input is a Parameter, we will try to find the operations that
use the Parameter as input.
- **dst** (Any) - The destination input. It can be a tuple of operation outputs or a single operation output.
We are not concerned with the input data, but with the operation that generates the input data.
If `depend_mode` is 1 and the source input is a Parameter, we will try to find the operations that
use the Parameter as input.
Outputs:
@@ -80,7 +81,7 @@ class GeSwitch(PrimitiveWithInfer):
"""
Adds a control switch to data.
Switches data to flow into the false or true branch depending on the condition. If the condition is true,
the true branch will be activated, and vice versa.
Inputs:
......
@@ -248,14 +248,14 @@ class InsertGradientOf(PrimitiveWithInfer):
class HookBackward(PrimitiveWithInfer):
"""
This operation is used as a tag to hook the gradient of intermediate variables. Note that this function
is only supported in Pynative Mode.
Note:
The hook function should be defined like `hook_fn(grad) -> Tensor or None`,
where grad is the gradient passed to the primitive, and the gradient may be
modified and passed to the next primitive. The difference between a hook function and the
callback of InsertGradientOf is that a hook function is executed in the Python
environment while the callback will be parsed and added to the graph.
Args:
......
@@ -29,9 +29,9 @@ class CropAndResize(PrimitiveWithInfer):
Since the output shape depends on crop_size, crop_size should be constant.
Args:
method (str): An optional string that specifies the sampling method for resizing.
It can be either "bilinear" or "nearest". Default: "bilinear".
extrapolation_value (float): An optional float value used for extrapolation, if applicable. Default: 0.
Inputs:
- **x** (Tensor) - The input image must be a 4-D tensor of shape [batch, image_height, image_width, depth].
......
@@ -122,7 +122,7 @@ class TensorAdd(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar can only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
@@ -957,11 +957,11 @@ class InplaceAdd(PrimitiveWithInfer):
Args:
indices (Union[int, tuple]): Indices into the left-most dimension of x, and determines which rows of x
to add with v. It is an integer or a tuple, whose values are in [0, the first dimension size of x).
Inputs:
- **input_x** (Tensor) - The first input is a tensor whose data type is float16, float32 or int32.
- **input_v** (Tensor) - The second input is a tensor that has the same dimension sizes as x except
the first dimension, which must be the same as the size of indices. It has the same data type as `input_x`.
Outputs:
@@ -1015,7 +1015,7 @@ class InplaceSub(PrimitiveWithInfer):
Args:
indices (Union[int, tuple]): Indices into the left-most dimension of x, and determines which rows of x
to subtract v from. It is an integer or a tuple, whose values are in [0, the first dimension size of x).
Inputs:
- **input_x** (Tensor) - The first input is a tensor whose data type is float16, float32 or int32.
@@ -1076,7 +1076,7 @@ class Sub(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar can only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
@@ -1115,7 +1115,7 @@ class Mul(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar can only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
@@ -1154,7 +1154,7 @@ class SquaredDifference(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar can only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
@@ -1341,7 +1341,7 @@ class Pow(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar can only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
@@ -1453,11 +1453,11 @@ class HistogramFixedWidth(PrimitiveWithInfer):
Args:
dtype (string): An optional attribute. Must be one of the following types: "int32", "int64". Default: "int32".
nbins (int): The number of histogram bins; must be a positive integer.
Inputs:
- **x** (Tensor) - Numeric Tensor. Must be one of the following types: int32, float32, float16.
- **range** (Tensor) - Must have the same data type as `x`, and the shape is [2].
x <= range[0] will be mapped to hist[0], x >= range[1] will be mapped to hist[-1].
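The edge-bin mapping described above can be sketched with NumPy (an illustrative analogue, not the MindSpore kernel): bins have fixed width `(hi - lo) / nbins`, and values outside the range are clipped into the first or last bin.

```python
import numpy as np

def histogram_fixed_width(x, range_, nbins):
    """Count values into nbins fixed-width bins over [range_[0], range_[1])."""
    lo, hi = float(range_[0]), float(range_[1])
    width = (hi - lo) / nbins
    idx = np.floor((np.asarray(x, dtype=float) - lo) / width).astype(int)
    idx = np.clip(idx, 0, nbins - 1)  # x <= lo -> bin 0, x >= hi -> last bin
    return np.bincount(idx, minlength=nbins)
```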
Outputs:
@@ -1593,7 +1593,7 @@ class Erfc(PrimitiveWithInfer):
Computes the complementary error function of `input_x` element-wise.
Inputs:
- **input_x** (Tensor) - The input tensor. The data type must be float16 or float32.
Outputs:
Tensor, has the same shape and dtype as `input_x`.
@@ -1627,7 +1627,7 @@ class Minimum(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar can only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
@@ -1666,7 +1666,7 @@ class Maximum(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar can only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
@@ -1705,7 +1705,7 @@ class RealDiv(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar can only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
@@ -1744,13 +1744,13 @@ class Div(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar can only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
a bool or a tensor whose data type is number or bool.
- **input_y** (Union[Tensor, Number, bool]) - When the first input is a tensor, the second input
could be a number, a bool, or a tensor whose data type is number or bool. When the first input
is a number or a bool, the second input should be a tensor whose data type is number or bool.
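The implicit-conversion note above can be illustrated with NumPy's type promotion, which follows the same principle of taking the higher-precision dtype of the two inputs. This is a sketch of the idea only; MindSpore's exact conversion rules may differ in detail.

```python
import numpy as np

# Dividing a float16 tensor by a float32 tensor promotes the result to float32,
# the higher-precision dtype of the two inputs.
a = np.array([-4.0, 5.0, 6.0], dtype=np.float16)
b = np.array([3.0, 2.0, 3.0], dtype=np.float32)
result = a / b
```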
Outputs:
@@ -1758,7 +1758,7 @@ class Div(_MathBinaryOp):
and the data type is the one with higher precision or more digits among the two inputs.
Raises:
ValueError: When `input_x` and `input_y` do not have the same dtype.
Examples:
>>> input_x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
@@ -1786,7 +1786,7 @@ class DivNoNan(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar can only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
@@ -1799,7 +1799,7 @@ class DivNoNan(_MathBinaryOp):
and the data type is the one with higher precision or more digits among the two inputs.
Raises:
ValueError: When `input_x` and `input_y` do not have the same dtype.
Examples:
>>> input_x = Tensor(np.array([-1.0, 0., 1.0, 5.0, 6.0]), mindspore.float32)
...@@ -1822,14 +1822,14 @@ class DivNoNan(_MathBinaryOp):
class FloorDiv(_MathBinaryOp):
"""
Divide the first input tensor by the second input tensor element-wise and round down to the closest integer.
The inputs `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
The inputs must be two tensors or one tensor and one scalar.
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
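Rounding down means toward negative infinity, which NumPy's `floor_divide` mirrors exactly (a host-side sketch of the semantics; `floor_div` is an illustrative name, not the MindSpore API):

```python
import numpy as np

def floor_div(x, y):
    # Element-wise division, rounded down toward negative infinity.
    return np.floor_divide(np.asarray(x), np.asarray(y))
```

Note that `floor_div(-1, 3)` is `-1`, not `0`, because the true quotient `-0.33...` rounds down.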
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -1860,7 +1860,7 @@ class TruncateDiv(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -1890,7 +1890,7 @@ class TruncateMod(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
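TruncateDiv and TruncateMod round toward zero, so the remainder keeps the sign of the dividend. NumPy's `trunc`/`fmod` pair mirrors this convention (a sketch of the semantics only; the function names are illustrative):

```python
import numpy as np

def truncate_div(x, y):
    # Quotient of division, rounded toward zero.
    return np.trunc(np.asarray(x, np.float32) / np.asarray(y, np.float32))

def truncate_mod(x, y):
    # Remainder of truncated division; keeps the sign of the dividend.
    return np.fmod(np.asarray(x, np.float32), np.asarray(y, np.float32))
```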
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -1918,7 +1918,7 @@ class Mod(_MathBinaryOp):
The inputs `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast. When the inputs are one tensor
and one scalar, the scalar could only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number]) - The first input is a number or a tensor whose data type is number.
...@@ -1953,7 +1953,7 @@ class Floor(PrimitiveWithInfer):
Round a tensor down to the closest integer element-wise.
Inputs:
- **input_x** (Tensor) - The input tensor. Its element data type must be float.
Outputs:
Tensor, has the same shape as `input_x`.
...@@ -1979,14 +1979,14 @@ class Floor(PrimitiveWithInfer):
class FloorMod(_MathBinaryOp):
"""
Compute the remainder of division element-wise.
The inputs `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
The inputs must be two tensors or one tensor and one scalar.
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
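Unlike TruncateMod, the floored remainder keeps the sign of the divisor, which matches NumPy's `mod` (illustrative host-side sketch; `floor_mod` is a stand-in name):

```python
import numpy as np

def floor_mod(x, y):
    # Remainder of floored division; keeps the sign of the divisor.
    return np.mod(np.asarray(x), np.asarray(y))
```

Compare `floor_mod(-1, 3) == 2` with the truncated remainder `-1`.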
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -2045,7 +2045,7 @@ class Xdivy(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
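Xdivy returns `x / y`, except that the result is 0 wherever `x` is 0, even if `y` is also 0. A NumPy sketch of this convention (illustrative only, not the MindSpore API):

```python
import numpy as np

def xdivy(x, y):
    # x / y element-wise, but 0 wherever x is 0 (even if y is also 0).
    x = np.asarray(x, dtype=np.float32)
    y = np.asarray(y, dtype=np.float32)
    out = np.zeros(np.broadcast(x, y).shape, dtype=np.float32)
    np.divide(x, y, out=out, where=(x != 0))
    return out
```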
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -2079,7 +2079,7 @@ class Xlogy(_MathBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
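Xlogy computes `x * log(y)` with the convention that the result is 0 wherever `x` is 0, so `log(0)` is never materialized at those positions. A host-side NumPy sketch (illustrative names and convention, mirroring the common `xlogy` semantics):

```python
import numpy as np

def xlogy(x, y):
    # x * log(y) element-wise, with 0 * log(y) defined as 0 even when y == 0.
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    safe_y = np.where(x == 0, 1.0, y)  # avoid evaluating log(0) where the result is forced to 0
    return np.where(x == 0, 0.0, x * np.log(safe_y))
```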
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -2241,7 +2241,7 @@ class Equal(_LogicBinaryOp):
The inputs `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
The inputs must be two tensors or one tensor and one scalar.
When the inputs are two tensors, their shapes can be broadcast.
When the inputs are one tensor and one scalar, the scalar could only be a constant.
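The broadcast comparison behavior shared by Equal, NotEqual, Greater and the other comparison operators can be sketched with NumPy (a stand-in for the device op; a scalar operand is compared against every element):

```python
import numpy as np

def equal(x, y):
    # Broadcast element-wise equality; returns a bool tensor.
    return np.equal(np.asarray(x), np.asarray(y))
```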
Inputs:
- **input_x** (Union[Tensor, Number]) - The first input is a number or
...@@ -2356,7 +2356,7 @@ class NotEqual(_LogicBinaryOp):
The inputs `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
The inputs must be two tensors or one tensor and one scalar.
When the inputs are two tensors, their shapes can be broadcast.
When the inputs are one tensor and one scalar, the scalar could only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -2393,7 +2393,7 @@ class Greater(_LogicBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -2430,7 +2430,7 @@ class GreaterEqual(_LogicBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -2467,7 +2467,7 @@ class Less(_LogicBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -2504,7 +2504,7 @@ class LessEqual(_LogicBinaryOp):
When the inputs are two tensors,
their dtypes cannot both be bool, and their shapes can be broadcast.
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
Inputs:
- **input_x** (Union[Tensor, Number, bool]) - The first input is a number or
...@@ -2570,7 +2570,7 @@ class LogicalAnd(_LogicBinaryOp):
The inputs must be two tensors or one tensor and one bool.
When the inputs are two tensors, their shapes can be broadcast,
and their data types should both be bool.
When the inputs are one tensor and one bool, the bool object could only be a constant,
and the data type of the tensor should be bool.
Inputs:
...@@ -2601,7 +2601,7 @@ class LogicalOr(_LogicBinaryOp):
The inputs must be two tensors or one tensor and one bool.
When the inputs are two tensors, their shapes can be broadcast,
and their data types should both be bool.
When the inputs are one tensor and one bool, the bool object could only be a constant,
and the data type of the tensor should be bool.
Inputs:
...@@ -2626,7 +2626,7 @@ class LogicalOr(_LogicBinaryOp):
class IsNan(PrimitiveWithInfer):
"""
Judge which elements are NaN for each position.
Inputs:
- **input_x** (Tensor) - The input tensor.
...@@ -2682,7 +2682,7 @@ class IsInf(PrimitiveWithInfer):
class IsFinite(PrimitiveWithInfer):
"""
Judge which elements are finite for each position.
Inputs:
- **input_x** (Tensor) - The input tensor.
...@@ -2713,7 +2713,7 @@ class IsFinite(PrimitiveWithInfer):
class FloatStatus(PrimitiveWithInfer):
"""
Determine whether the elements contain Not a Number (NaN), infinity or negative infinity. 0 for normal, 1 for overflow.
Inputs:
- **input_x** (Tensor) - The input tensor. The data type must be float16 or float32.
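The overflow flag can be sketched on the host with NumPy (illustrative only; `float_status` is a stand-in name, and the real op produces its flag on device):

```python
import numpy as np

def float_status(x):
    # Shape-(1,) result: 0 if every element is finite, 1 if any is NaN or +/-inf.
    flag = 0.0 if np.isfinite(np.asarray(x)).all() else 1.0
    return np.array([flag], dtype=np.float32)
```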
...
...@@ -657,7 +657,7 @@ class FusedBatchNormEx(PrimitiveWithInfer):
- **variance** (Tensor) - variance value, Tensor of shape :math:`(C,)`, data type: float32.
Outputs:
Tuple of 6 Tensors, the normalized input, the updated parameters and reserve.
- **output_x** (Tensor) - The output of FusedBatchNormEx, same type and shape as the `input_x`.
- **updated_scale** (Tensor) - Updated parameter scale, Tensor of shape :math:`(C,)`, data type: float32.
...@@ -870,13 +870,13 @@ class Conv2D(PrimitiveWithInfer):
Args:
out_channel (int): The dimension of the output.
kernel_size (Union[int, tuple[int]]): The kernel size of the 2D convolution.
mode (int): Modes for different convolutions. 0 for math convolution, 1 for cross-correlation convolution,
2 for deconvolution, 3 for depthwise convolution. Default: 1.
pad_mode (str): Modes to fill padding. It could be "valid", "same", or "pad". Default: "valid".
pad (Union(int, tuple[int])): The pad value to be filled. Default: 0. If `pad` is an integer, the paddings of
top, bottom, left and right are the same, equal to pad. If `pad` is a tuple of four integers, the
padding of top, bottom, left and right equals pad[0], pad[1], pad[2], and pad[3] correspondingly.
stride (Union(int, tuple[int])): The stride to be applied to the convolution filter. Default: 1.
dilation (Union(int, tuple[int])): Specify the space to use between kernel elements. Default: 1.
group (int): Splits input into groups. Default: 1.
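How `pad_mode`, `stride` and `dilation` interact to determine the output spatial size can be sketched as follows (`conv_out_len` is a hypothetical helper, assuming `pad` is the total padding added along one axis and is only honored when `pad_mode == "pad"`):

```python
import math

def conv_out_len(size, kernel, stride=1, dilation=1, pad_mode="valid", pad=0):
    # Output length along one spatial axis of a 2D convolution.
    k_eff = dilation * (kernel - 1) + 1  # effective kernel extent after dilation
    if pad_mode == "same":
        return math.ceil(size / stride)
    if pad_mode == "valid":
        return math.ceil((size - k_eff + 1) / stride)
    return (size + pad - k_eff) // stride + 1  # pad_mode == "pad"
```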
...@@ -997,25 +997,26 @@ class DepthwiseConv2dNative(PrimitiveWithInfer):
Given an input tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})` where :math:`N` is the batch size and a
filter tensor with kernel size :math:`(ks_{h}, ks_{w})`, containing :math:`C_{in} * \text{channel_multiplier}`
convolutional filters of depth 1; it applies different filters to each input channel (channel_multiplier channels
per input channel, 1 by default), then concatenates the results together. The output has
:math:`\text{in_channels} * \text{channel_multiplier}` channels.
Args:
channel_multiplier (int): The multiplier for the original output convolution. Its value must be greater than 0.
kernel_size (Union[int, tuple[int]]): The size of the convolution kernel.
mode (int): Modes for different convolutions. 0 for math convolution, 1 for cross-correlation convolution,
2 for deconvolution, 3 for depthwise convolution. Default: 3.
pad_mode (str): Modes to fill padding. It could be "valid", "same", or "pad". Default: "valid".
pad (Union[int, tuple[int]]): The pad value to be filled. If `pad` is an integer, the paddings of
top, bottom, left and right are the same, equal to pad. If `pad` is a tuple of four integers, the padding
of top, bottom, left and right equals pad[0], pad[1], pad[2], and pad[3] correspondingly. Default: 0.
stride (Union[int, tuple[int]]): The stride to be applied to the convolution filter. Default: 1.
dilation (Union[int, tuple[int]]): Specifies the dilation rate to be used for the dilated convolution.
Default: 1.
group (int): Splits input into groups. Default: 1.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **weight** (Tensor) - Set the size of the kernel as :math:`(K_1, K_2)`, then the shape is
:math:`(K, C_{in}, K_1, K_2)`, where `K` must be 1.
Outputs:
...@@ -1398,14 +1399,15 @@ class Conv2DBackpropInput(PrimitiveWithInfer):
Args:
out_channel (int): The dimensionality of the output space.
kernel_size (Union[int, tuple[int]]): The size of the convolution window.
pad_mode (str): Modes to fill padding. It could be "valid", "same", or "pad". Default: "valid".
pad (Union[int, tuple[int]]): The pad value to be filled. Default: 0. If `pad` is an integer, the paddings of
top, bottom, left and right are the same, equal to pad. If `pad` is a tuple of four integers, the
padding of top, bottom, left and right equals pad[0], pad[1], pad[2], and pad[3] correspondingly.
mode (int): Modes for different convolutions. 0 for math convolution, 1 for cross-correlation convolution,
2 for deconvolution, 3 for depthwise convolution. Default: 1.
stride (Union[int, tuple[int]]): The stride to be applied to the convolution filter. Default: 1.
dilation (Union[int, tuple[int]]): Specifies the dilation rate to be used for the dilated convolution.
Default: 1.
group (int): Splits input into groups. Default: 1.
Returns:
...@@ -1842,7 +1844,7 @@ class L2Loss(PrimitiveWithInfer):
class DataFormatDimMap(PrimitiveWithInfer):
"""
Returns the dimension index in the destination data format given the one in the source data format.
Args:
src_format (string): An optional value for the source data format. Default: 'NHWC'.
...@@ -2336,7 +2338,7 @@ class DropoutDoMask(PrimitiveWithInfer):
Inputs:
- **input_x** (Tensor) - The input tensor.
- **mask** (Tensor) - The mask to be applied on `input_x`, which is the output of `DropoutGenMask`. And the
shape of `input_x` must be the same as the value of `DropoutGenMask`'s input `shape`. If the input `mask`
is wrong, the output of `DropoutDoMask` is unpredictable.
- **keep_prob** (Tensor) - The keep rate, between 0 and 1, e.g. keep_prob = 0.9,
which means dropping out 10% of input units. The value of `keep_prob` is the same as the input `keep_prob` of
Gaussian Error Linear Units activation function.
GeLU is described in the paper `Gaussian Error Linear Units (GELUs) <https://arxiv.org/abs/1606.08415>`_.
Also refer to `BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
<https://arxiv.org/abs/1810.04805>`_.
Gelu is defined as follows:
.. math::
\text{output} = 0.5 * x * (1 + erf(x / \sqrt{2})),
...@@ -2505,7 +2507,7 @@ class Gelu(PrimitiveWithInfer):
where :math:`erf` is the "Gauss error function".
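The exact (erf-based) form of the formula above can be evaluated on the host for reference (a NumPy/`math.erf` sketch, not the device kernel):

```python
import math
import numpy as np

def gelu(x):
    # Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2))).
    x = np.asarray(x, dtype=np.float64)
    erf = np.vectorize(math.erf)
    return 0.5 * x * (1.0 + erf(x / math.sqrt(2.0)))
```

For example, `gelu(1.0)` equals the standard normal CDF at 1, roughly 0.8413.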
Inputs:
- **input_x** (Tensor) - The input to compute the Gelu, with data type of float16 or float32.
Outputs:
Tensor, with the same type and shape as the input.
...@@ -2534,8 +2536,8 @@ class GetNext(PrimitiveWithInfer):
Returns the next element in the dataset queue.
Note:
The GetNext operation needs to be associated with a network and it also depends on the init_dataset interface;
it can't be used directly as a single operation.
For details, please refer to the `nn.DataWrapper` source code.
Args:
...@@ -3057,7 +3059,7 @@ class Adam(PrimitiveWithInfer):
class FusedSparseAdam(PrimitiveWithInfer):
r"""
Merge the duplicate value of the gradient and then update parameters by the Adaptive Moment Estimation (Adam)
algorithm. This operator is used when the gradient is sparse.
The Adam algorithm is proposed in `Adam: A Method for Stochastic Optimization <https://arxiv.org/abs/1412.6980>`_.
...@@ -3092,22 +3094,22 @@ class FusedSparseAdam(PrimitiveWithInfer):
If true, update the gradients without using NAG. Default: False.
Inputs:
- **var** (Parameter) - Parameters to be updated with float32 data type.
- **m** (Parameter) - The 1st moment vector in the updating formula, has the same type as `var` with
float32 data type.
- **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients, has the same type as
`var` with float32 data type.
- **beta1_power** (Tensor) - :math:`beta_1^t` in the updating formula with float32 data type.
- **beta2_power** (Tensor) - :math:`beta_2^t` in the updating formula with float32 data type.
- **lr** (Tensor) - :math:`l` in the updating formula with float32 data type.
- **beta1** (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type.
- **beta2** (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type.
- **epsilon** (Tensor) - Term added to the denominator to improve numerical stability with float32 data type.
- **gradient** (Tensor) - Gradient value with float32 data type.
- **indices** (Tensor) - Gradient indices with int32 data type.
Outputs:
Tuple of 3 Tensors, this operator will update the input parameters directly; the outputs are useless.
- **var** (Tensor) - A Tensor with shape (1,).
- **m** (Tensor) - A Tensor with shape (1,).
...@@ -3189,7 +3191,7 @@ class FusedSparseAdam(PrimitiveWithInfer):
class FusedSparseLazyAdam(PrimitiveWithInfer):
r"""
Merge the duplicate value of the gradient and then update parameters by the Adaptive Moment Estimation (Adam)
algorithm. This operator is used when the gradient is sparse. The behavior is not equivalent to the
original Adam algorithm, as only the current indices parameters will be updated.
...@@ -3225,22 +3227,22 @@ class FusedSparseLazyAdam(PrimitiveWithInfer):
If true, update the gradients without using NAG. Default: False.
Inputs:
- **var** (Parameter) - Parameters to be updated with float32 data type.
- **m** (Parameter) - The 1st moment vector in the updating formula, has the same type as `var` with
float32 data type.
- **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients, has the same type as
`var` with float32 data type.
- **beta1_power** (Tensor) - :math:`beta_1^t` in the updating formula with float32 data type.
- **beta2_power** (Tensor) - :math:`beta_2^t` in the updating formula with float32 data type.
- **lr** (Tensor) - :math:`l` in the updating formula with float32 data type.
- **beta1** (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type.
- **beta2** (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type.
- **epsilon** (Tensor) - Term added to the denominator to improve numerical stability with float32 data type.
- **gradient** (Tensor) - Gradient value with float32 data type.
- **indices** (Tensor) - Gradient indices with int32 data type.
Outputs: Outputs:
Tuple of 3 Tensor, this operator will update the input parameters directly, the outputs are useless. Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.
- **var** (Tensor) - A Tensor with shape (1,). - **var** (Tensor) - A Tensor with shape (1,).
- **m** (Tensor) - A Tensor with shape (1,). - **m** (Tensor) - A Tensor with shape (1,).
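The "lazy" variant described above touches only the rows named in `indices`, unlike dense Adam, which decays `m` and `v` for every row. A minimal pure-Python sketch of that update rule (the helper name and flat-list layout are assumptions for illustration, not the operator's real implementation):

```python
import math

def fused_sparse_lazy_adam(var, m, v, beta1_power, beta2_power,
                           lr, beta1, beta2, eps, grad, indices):
    # Bias-corrected learning rate, as in standard Adam.
    lr_t = lr * math.sqrt(1 - beta2_power) / (1 - beta1_power)
    # Lazy update: only rows listed in `indices` are modified.
    for g, i in zip(grad, indices):
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g * g
        var[i] -= lr_t * m[i] / (math.sqrt(v[i]) + eps)
    return var, m, v
```

Rows not named in `indices` keep stale `m` and `v`, which is exactly the deviation from the original Adam algorithm that the note above warns about.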
...@@ -3418,7 +3420,7 @@ class FusedSparseFtrl(PrimitiveWithInfer):
class FusedSparseProximalAdagrad(PrimitiveWithInfer):
r"""
Merge the duplicate value of the gradient and then update relevant entries according to the proximal adagrad
algorithm.
.. math::
...@@ -3434,7 +3436,7 @@ class FusedSparseProximalAdagrad(PrimitiveWithInfer):
A RuntimeError exception will be thrown when data type conversion of a Parameter is required.
Args:
use_locking (bool): If true, the variable and accumulation tensors will be protected from being updated.
Default: False.
Inputs:
...@@ -3448,7 +3450,7 @@ class FusedSparseProximalAdagrad(PrimitiveWithInfer):
must be int32.
Outputs:
Tuple of 2 Tensors, this operator will update the input parameters directly; the outputs are useless.
- **var** (Tensor) - A Tensor with shape (1,).
- **accum** (Tensor) - A Tensor with shape (1,).
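The proximal adagrad rule can be sketched in pure Python as below; it assumes duplicate indices have already been merged (their gradients summed), per the docstring's first sentence. The function name and flat-list layout are illustrative assumptions:

```python
import math

def sparse_proximal_adagrad(var, accum, lr, l1, l2, grad, indices):
    for g, i in zip(grad, indices):
        accum[i] += g * g
        lr_t = lr / math.sqrt(accum[i])
        prox = var[i] - lr_t * g  # plain adagrad step
        if l1 > 0:
            # soft-thresholding (proximal operator of the L1 term)
            var[i] = math.copysign(max(abs(prox) - lr_t * l1, 0.0), prox) / (1 + lr_t * l2)
        else:
            var[i] = prox / (1 + lr_t * l2)
    return var, accum
```

With `l1 = l2 = 0` this degenerates to ordinary sparse adagrad, which is a handy sanity check.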
...@@ -3524,9 +3526,9 @@ class KLDivLoss(PrimitiveWithInfer):
.. math::
\ell(x, y) = \begin{cases}
L, & \text{if reduction} = \text{`none';}\\
\operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\
\operatorname{sum}(L), & \text{if reduction} = \text{`sum'.}
\end{cases}
Args:
...@@ -3535,10 +3537,10 @@ class KLDivLoss(PrimitiveWithInfer):
Inputs:
- **input_x** (Tensor) - The input Tensor. The data type must be float32.
- **input_y** (Tensor) - The label Tensor, which has the same shape as `input_x`. The data type must be float32.
Outputs:
Tensor or Scalar, if `reduction` is 'none', then the output is a tensor with the same shape as `input_x`.
Otherwise it is a scalar.
Examples:
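The three `reduction` branches of the formula above can be sketched in pure Python. This assumes the conventional KL-divergence pointwise term :math:`L_i = y_i (\log y_i - x_i)` with `input_x` holding log-probabilities; both are assumptions of this sketch, not a restatement of the operator's kernel:

```python
import math

def kl_div_loss(input_x, input_y, reduction="mean"):
    # pointwise loss: L_i = y_i * (log(y_i) - x_i)
    losses = [y * (math.log(y) - x) for x, y in zip(input_x, input_y)]
    if reduction == "none":
        return losses              # same shape as input_x
    if reduction == "sum":
        return sum(losses)         # scalar
    if reduction == "mean":
        return sum(losses) / len(losses)  # scalar
    raise ValueError("reduction must be 'none', 'mean' or 'sum'")
```

Note the output-shape contract stated above: a tensor for `'none'`, a scalar otherwise.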
...@@ -5151,15 +5153,15 @@ class SparseApplyFtrlV2(PrimitiveWithInfer):
class ConfusionMulGrad(PrimitiveWithInfer):
"""
`output0` is the element-wise product of `input0` and `input1`.
`output1` is the element-wise product of `input0` and `input1`, reduced by summation along `axis`.
Args:
axis (Union[int, tuple[int], list[int]]): The dimensions to reduce.
Default: (), reduce all dimensions. Only constant value is allowed.
keep_dims (bool):
- If true, keep these reduced dimensions with length 1.
- If false, don't keep these dimensions. Default: False.
Inputs:
...@@ -5167,8 +5169,8 @@ class ConfusionMulGrad(PrimitiveWithInfer):
- **input_1** (Tensor) - The input Tensor.
- **input_2** (Tensor) - The input Tensor.
Outputs:
- **output_0** (Tensor) - The same shape as `input0`.
- **output_1** (Tensor)
- If axis is () and keep_dims is false, the output is a 0-D array representing
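For the simplest case (1-D inputs, `axis=()` so all dimensions are reduced), the two outputs described above can be sketched as follows; the function name is illustrative and the `keep_dims` handling is omitted since a 0-D result is returned either way for 1-D inputs:

```python
def confusion_mul_grad(input_0, input_1):
    # output_0: element-wise product, same shape as input_0
    output_0 = [a * b for a, b in zip(input_0, input_1)]
    # output_1: the product reduced by summation over all dimensions
    output_1 = sum(output_0)
    return output_0, output_1
```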
...@@ -5462,7 +5464,7 @@ class BasicLSTMCell(PrimitiveWithInfer):
- **w** (Tensor) - Weight. Tensor of shape (`input_size + hidden_size`, `4 x hidden_size`).
The data type must be float16 or float32.
- **b** (Tensor) - Bias. Tensor of shape (`4 x hidden_size`).
The data type must be the same as `c`.
Outputs:
- **ct** (Tensor) - Forward :math:`c_t` cache at moment `t`. Tensor of shape (`batch_size`, `hidden_size`).
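The weight shape (`input_size + hidden_size`, `4 x hidden_size`) reflects that the input and previous hidden state are concatenated and projected onto four gates at once. A single-sample pure-Python sketch of one LSTM step — the gate ordering (input, forget, candidate, output) is an assumption of this sketch, as the actual kernel's layout is not stated here:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def basic_lstm_cell(x, h, c, w, b):
    # x: input_size, h/c: hidden_size,
    # w: (input_size + hidden_size) x (4 * hidden_size), b: 4 * hidden_size
    hidden = len(h)
    xh = x + h  # concatenate input and previous hidden state
    gates = [sum(xh[i] * w[i][j] for i in range(len(xh))) + b[j]
             for j in range(4 * hidden)]
    i_g = [sigmoid(v) for v in gates[0:hidden]]            # input gate
    f_g = [sigmoid(v) for v in gates[hidden:2 * hidden]]   # forget gate
    g_g = [math.tanh(v) for v in gates[2 * hidden:3 * hidden]]  # candidate
    o_g = [sigmoid(v) for v in gates[3 * hidden:4 * hidden]]    # output gate
    ct = [f * cc + i * g for f, cc, i, g in zip(f_g, c, i_g, g_g)]
    ht = [o * math.tanh(v) for o, v in zip(o_g, ct)]
    return ct, ht
```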
...@@ -5532,18 +5534,18 @@ class BasicLSTMCell(PrimitiveWithInfer):
class InTopK(PrimitiveWithInfer):
r"""
Determines whether the targets are in the top `k` predictions.
Args:
k (int): Specifies the number of top elements to be used for computing precision.
Inputs:
- **x1** (Tensor) - A 2D Tensor that defines the predictions of a batch of samples with float16 or float32 data type.
- **x2** (Tensor) - A 1D Tensor that defines the labels of a batch of samples with int32 data type.
Outputs:
A 1-D Tensor of type bool with the same shape as `x2`. For the label of sample `i` in `x2`,
if it is among the first `k` predictions for sample `i` in `x1`, the value is True; otherwise it is False.
Examples:
>>> x1 = Tensor(np.array([[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]]), mindspore.float32)
...
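The semantics above reduce to a per-row membership test. A minimal pure-Python sketch (tie-breaking between equal scores is left unspecified here, as it is in the docstring):

```python
def in_top_k(x1, x2, k):
    result = []
    for preds, label in zip(x1, x2):
        # indices of the k highest-scoring classes in this row
        top = sorted(range(len(preds)), key=lambda i: preds[i], reverse=True)[:k]
        result.append(label in top)
    return result
```

For the example inputs above with `k=3`, label 1 is among row 0's top-3 scores (8, 7, 5) while label 3 is not among row 1's (9, 5, 4).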
...@@ -244,7 +244,7 @@ class IOU(PrimitiveWithInfer):
Args:
mode (string): The mode is used to specify the calculation method,
now supporting 'iou' (intersection over union) or 'iof'
(intersection over foreground) mode. Default: 'iou'.
Inputs:
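The difference between the two modes is only the denominator: 'iou' divides the intersection area by the union, 'iof' by the area of the second (foreground) box. A single-box-pair sketch, assuming `(x1, y1, x2, y2)` corner coordinates without the pixel-offset convention some detection kernels use:

```python
def iou(box1, box2, mode="iou"):
    # intersection rectangle
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    # 'iou': union area; 'iof': foreground (second) box area
    denom = area1 + area2 - inter if mode == "iou" else area2
    return inter / denom if denom else 0.0
```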
...@@ -350,7 +350,7 @@ class Partial(Primitive):
class Depend(Primitive):
"""
Depend is used for processing side-effect operations.
Inputs:
- **value** (Tensor) - The real value to return for the Depend operator.
...
...@@ -131,9 +131,9 @@ class Gamma(PrimitiveWithInfer):
Inputs:
- **shape** (tuple) - The shape of the random tensor to be generated. Only constant value is allowed.
- **alpha** (Tensor) - The α distribution parameter.
It is also known as the shape parameter, with float32 data type.
- **beta** (Tensor) - The β distribution parameter.
It is also known as the scale parameter, with float32 data type.
Outputs:
Tensor. The shape is the broadcast of the input `shape` and the shapes of `alpha` and `beta`.
...
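For intuition, sampling from a Gamma distribution with scalar α (shape) and β parameters can be sketched with the standard library; whether β acts as a scale or a rate is an assumption here, since the docstring only names it the scale parameter:

```python
import random

def gamma_sample(shape, alpha, beta, seed=0):
    # random.gammavariate(alpha, beta) treats beta as a scale parameter
    rng = random.Random(seed)
    n = 1
    for d in shape:
        n *= d  # total number of elements for the requested shape
    return [rng.gammavariate(alpha, beta) for _ in range(n)]
```

Gamma samples are strictly positive, which the test below exploits as a sanity check.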
...@@ -130,14 +130,14 @@ def set_algo_parameters(**kwargs):
Set the algo parameter config.
Note:
The attribute name is required.
Args:
tensor_slice_align_enable (bool): Whether to check the shape of the tensor slice of MatMul. Default: False
tensor_slice_align_size (int): The minimum tensor slice shape of MatMul; the value must be in [1, 1024].
Default: 16
fully_use_devices (bool): Whether to ONLY generate strategies that fully use all available devices. Default: True
elementwise_op_strategy_follow (bool): Whether the elementwise operator has the same strategies as its
subsequent operators. Default: False
Raises:
...@@ -155,7 +155,7 @@ def get_algo_parameters(attr_key):
Get algo parameter config attributes.
Note:
Returns the specified attribute value.
Args:
attr_key (str): The key of the attribute.
...