Commit f7998b10 authored by liuxiao93

Modified some APIs related to the float16 and float32 data types.

Parent 01158763
......@@ -1423,7 +1423,9 @@ class UnsortedSegmentMin(PrimitiveWithInfer):
Inputs:
- **input_x** (Tensor) - The shape is :math:`(x_1, x_2, ..., x_R)`.
The data type should be float16, float32 or int32.
- **segment_ids** (Tensor) - A `1-D` tensor whose shape is :math:`(x_1)`, the value should be >= 0.
The data type must be int32.
- **num_segments** (int) - The value specifies the number of distinct `segment_ids`.
Outputs:
......@@ -2410,7 +2412,7 @@ class GatherNd(PrimitiveWithInfer):
Inputs:
- **input_x** (Tensor) - The target tensor to gather values.
- **indices** (Tensor) - The index tensor.
- **indices** (Tensor) - The index tensor, with int data type.
Outputs:
Tensor, has the same type as `input_x` and the shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].
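For illustration, a minimal sketch of how the output shape formula works (assuming `P` is `mindspore.ops.operations`, `Tensor` is `mindspore.Tensor` and `np` is numpy; the values are illustrative only):
>>> # indices has shape (2, 2); its last dimension indexes into both axes of input_x.
>>> input_x = Tensor(np.array([[-0.1, 0.3], [0.4, 0.5]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> output = P.GatherNd()(input_x, indices)
>>> # output shape = indices_shape[:-1] + x_shape[indices_shape[-1]:] = (2,) + () = (2,)
>>> # output values: [-0.1, 0.5]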
......@@ -2807,7 +2809,7 @@ class ScatterNonAliasingAdd(_ScatterNdOp):
This operation outputs the `input_x` after the update is done, which makes it convenient to use the updated value.
Inputs:
- **input_x** (Parameter) - The target parameter.
- **input_x** (Parameter) - The target parameter. The data type should be float16, float32 or int32.
- **indices** (Tensor) - The index to do the add operation, whose data type should be mindspore.int32.
- **updates** (Tensor) - The tensor doing the add operation with `input_x`,
the data type is the same as `input_x`, the shape is `indices_shape[:-1] + x_shape[indices_shape[-1]:]`.
......
......@@ -943,9 +943,9 @@ class InplaceAdd(PrimitiveWithInfer):
to add with v. It is an int or tuple, whose value is in [0, the first dimension size of x).
Inputs:
- **input_x** (Tensor) - The first input is a tensor whose data type is number.
- **input_x** (Tensor) - The first input is a tensor whose data type is float16, float32 or int32.
- **input_v** (Tensor) - The second input is a tensor who has the same dimension sizes as x except
the first dimension, which must be the same as indices's size.
the first dimension, which must be the same as the size of indices. It has the same data type as `input_x`.
Outputs:
Tensor, has the same shape and dtype as input.
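A minimal InplaceAdd sketch showing how `indices`, `input_x` and `input_v` relate (assuming `P` is `mindspore.ops.operations`; the values are illustrative only):
>>> input_x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 0.5], [1.0, 1.0]]), mindspore.float32)
>>> inplace_add = P.InplaceAdd(indices=(0, 1))
>>> output = inplace_add(input_x, input_v)
>>> # rows 0 and 1 of input_x get input_v added: [[1.5, 2.5], [4.0, 5.0], [5.0, 6.0]]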
......@@ -1001,9 +1001,9 @@ class InplaceSub(PrimitiveWithInfer):
to sub with v. It is an int or tuple, whose value is in [0, the first dimension size of x).
Inputs:
- **input_x** (Tensor) - The first input is a tensor whose data type is number.
- **input_x** (Tensor) - The first input is a tensor whose data type is float16, float32 or int32.
- **input_v** (Tensor) - The second input is a tensor who has the same dimension sizes as x except
the first dimension, which must be the same as indices's size.
the first dimension, which must be the same as the size of indices. It has the same data type as `input_x`.
Outputs:
Tensor, has the same shape and dtype as input.
......@@ -1403,7 +1403,7 @@ class Expm1(PrimitiveWithInfer):
Returns exponential then minus 1 of a tensor element-wise.
Inputs:
- **input_x** (Tensor) - The input tensor.
- **input_x** (Tensor) - The input tensor. With float16 or float32 data type.
Outputs:
Tensor, has the same shape as the `input_x`.
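For reference, a minimal Expm1 sketch (assuming `P` is `mindspore.ops.operations`; the values are illustrative only):
>>> input_x = Tensor(np.array([0.0, 1.0]), mindspore.float32)
>>> output = P.Expm1()(input_x)
>>> # output is approximately [0.0, 1.7182817], i.e. exp(x) - 1 element-wise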
......@@ -1425,6 +1425,7 @@ class Expm1(PrimitiveWithInfer):
def infer_dtype(self, x_type):
validator.check_subclass("x", x_type, mstype.tensor, self.name)
validator.check_tensor_type_same({"x": x_type}, [mstype.float16, mstype.float32], self.name)
return x_type
......@@ -1515,7 +1516,7 @@ class Log1p(PrimitiveWithInfer):
Returns the natural logarithm of one plus the input tensor element-wise.
Inputs:
- **input_x** (Tensor) - The input tensor.
- **input_x** (Tensor) - The input tensor. With float16 or float32 data type.
Outputs:
Tensor, has the same shape as the `input_x`.
......@@ -1536,6 +1537,7 @@ class Log1p(PrimitiveWithInfer):
def infer_dtype(self, x):
validator.check_subclass("x", x, mstype.tensor, self.name)
validator.check_tensor_type_same({"x": x}, [mstype.float16, mstype.float32], self.name)
return x
......@@ -1544,7 +1546,7 @@ class Erf(PrimitiveWithInfer):
Computes the Gauss error function of `input_x` element-wise.
Inputs:
- **input_x** (Tensor) - The input tensor.
- **input_x** (Tensor) - The input tensor. The data type must be float16 or float32.
Outputs:
Tensor, has the same shape and dtype as the `input_x`.
......@@ -1574,7 +1576,7 @@ class Erfc(PrimitiveWithInfer):
Computes the complementary error function of `input_x` element-wise.
Inputs:
- **input_x** (Tensor) - The input tensor.
- **input_x** (Tensor) - The input tensor. The data type must be float16 or float32.
Outputs:
Tensor, has the same shape and dtype as the `input_x`.
......@@ -1674,6 +1676,7 @@ class Maximum(_MathBinaryOp):
return Tensor(out)
return None
class RealDiv(_MathBinaryOp):
"""
Divide the first input tensor by the second input tensor in floating-point type element-wise.
......@@ -1923,7 +1926,7 @@ class Floor(PrimitiveWithInfer):
Round a tensor down to the closest integer element-wise.
Inputs:
- **input_x** (Tensor) - The input tensor. Its element data type must be float.
- **input_x** (Tensor) - The input tensor. Its element data type must be float.
Outputs:
Tensor, has the same shape as `input_x`.
......@@ -1981,7 +1984,7 @@ class Ceil(PrimitiveWithInfer):
Round a tensor up to the closest integer element-wise.
Inputs:
- **input_x** (Tensor) - The input tensor. Its element data type must be float.
- **input_x** (Tensor) - The input tensor. Its element data type must be float16 or float32.
Outputs:
Tensor, has the same shape as `input_x`.
......@@ -2001,7 +2004,7 @@ class Ceil(PrimitiveWithInfer):
return x_shape
def infer_dtype(self, x_dtype):
validator.check_tensor_type_same({"x": x_dtype}, mstype.float_type, self.name)
validator.check_tensor_type_same({"x": x_dtype}, [mstype.float16, mstype.float32], self.name)
return x_dtype
......@@ -2666,7 +2669,7 @@ class FloatStatus(PrimitiveWithInfer):
Determines whether the elements contain nan, inf or -inf. `0` for normal, `1` for overflow.
Inputs:
- **input_x** (Tensor) - The input tensor.
- **input_x** (Tensor) - The input tensor. The data type must be float16 or float32.
Outputs:
Tensor, has the shape of `(1,)`, and has the same dtype of input `mindspore.dtype.float32` or
......@@ -2731,6 +2734,7 @@ class NPUGetFloatStatus(PrimitiveWithInfer):
Inputs:
- **input_x** (Tensor) - The output tensor of `NPUAllocFloatStatus`.
The data type must be float16 or float32.
Outputs:
Tensor, has the same shape as `input_x`. All the elements in the tensor will be zero.
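For context, a rough sketch of the overflow-check pattern in which the NPU float-status operators are typically combined (not part of this change; real networks also insert dependency-control operations so these side-effecting steps are not reordered):
>>> init = P.NPUAllocFloatStatus()()
>>> cleared = P.NPUClearFloatStatus()(init)
>>> # ... run the float16/float32 computation that may overflow ...
>>> _ = P.NPUGetFloatStatus()(init)      # writes the overflow flags into `init` as a side effect
>>> flag_sum = P.ReduceSum()(init, (0,))
>>> # a non-zero flag_sum indicates that nan, inf or -inf occurred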
......@@ -2755,7 +2759,7 @@ class NPUGetFloatStatus(PrimitiveWithInfer):
return [8]
def infer_dtype(self, x_dtype):
validator.check_tensor_type_same({'x': x_dtype}, [mstype.float32], self.name)
validator.check_tensor_type_same({'x': x_dtype}, [mstype.float16, mstype.float32], self.name)
return mstype.float32
......@@ -2771,6 +2775,7 @@ class NPUClearFloatStatus(PrimitiveWithInfer):
Inputs:
- **input_x** (Tensor) - The output tensor of `NPUAllocFloatStatus`.
The data type must be float16 or float32.
Outputs:
Tensor, has the same shape as `input_x`. All the elements in the tensor will be zero.
......@@ -2797,7 +2802,7 @@ class NPUClearFloatStatus(PrimitiveWithInfer):
return [8]
def infer_dtype(self, x_dtype):
validator.check_tensor_type_same({'x': x_dtype}, [mstype.float32], self.name)
validator.check_tensor_type_same({'x': x_dtype}, [mstype.float16, mstype.float32], self.name)
return mstype.float32
......@@ -2932,6 +2937,7 @@ class NMSWithMask(PrimitiveWithInfer):
`N` is the number of input bounding boxes. Every bounding box
contains 5 values, the first 4 values are the coordinates of bounding
box, and the last value is the score of this bounding box.
The data type must be float16 or float32.
Outputs:
tuple[Tensor], tuple of three tensors, they are selected_boxes, selected_idx and selected_mask.
......@@ -3186,12 +3192,13 @@ class Atan2(_MathBinaryOp):
[[0. 0.7853982]]
"""
class SquareSumAll(PrimitiveWithInfer):
"""
Returns square sum all of a tensor element-wise
Inputs:
- **input_x1** (Tensor) - The input tensor.
- **input_x1** (Tensor) - The input tensor. The data type must be float16 or float32.
- **input_x2** (Tensor) - The input tensor, which has the same type and shape as `input_x1`.
Note:
......@@ -3227,7 +3234,7 @@ class BitwiseAnd(_BitwiseBinaryOp):
Returns bitwise `and` of two tensors element-wise.
Inputs:
- **input_x1** (Tensor) - The input tensor with int or uint type.
- **input_x1** (Tensor) - The input tensor with int16 or uint16 data type.
- **input_x2** (Tensor) - The input tensor with same type as the `input_x1`.
Outputs:
......@@ -3247,7 +3254,7 @@ class BitwiseOr(_BitwiseBinaryOp):
Returns bitwise `or` of two tensors element-wise.
Inputs:
- **input_x1** (Tensor) - The input tensor with int or uint type.
- **input_x1** (Tensor) - The input tensor with int16 or uint16 data type.
- **input_x2** (Tensor) - The input tensor with same type as the `input_x1`.
Outputs:
......@@ -3267,7 +3274,7 @@ class BitwiseXor(_BitwiseBinaryOp):
Returns bitwise `xor` of two tensors element-wise.
Inputs:
- **input_x1** (Tensor) - The input tensor with int or uint type.
- **input_x1** (Tensor) - The input tensor with int16 or uint16 data type.
- **input_x2** (Tensor) - The input tensor with same type as the `input_x1`.
Outputs:
......@@ -3405,7 +3412,7 @@ class Eps(PrimitiveWithInfer):
Creates a tensor filled with `input_x` dtype minimum val.
Inputs:
- **input_x** (Tensor) - Input tensor.
- **input_x** (Tensor) - Input tensor. The data type must be float16 or float32.
Outputs:
Tensor, has the same type and shape as `input_x`, but filled with `input_x` dtype minimum val.
......
......@@ -112,7 +112,7 @@ class Softmax(PrimitiveWithInfer):
axis (Union[int, tuple]): The axis to do the Softmax operation. Default: -1.
Inputs:
- **logits** (Tensor) - The input of Softmax.
- **logits** (Tensor) - The input of Softmax, with float16 or float32 data type.
Outputs:
Tensor, with the same type and shape as the logits.
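A minimal Softmax sketch (assuming `P` is `mindspore.ops.operations`; the values are illustrative only):
>>> logits = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> softmax = P.Softmax(axis=-1)
>>> output = softmax(logits)
>>> # output is approximately [0.090, 0.245, 0.665] and sums to 1 along the last axis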
......@@ -142,6 +142,7 @@ class Softmax(PrimitiveWithInfer):
def infer_dtype(self, logits):
validator.check_subclass("logits", logits, mstype.tensor, self.name)
validator.check_tensor_type_same({"logits": logits}, mstype.float_type, self.name)
return logits
......@@ -162,7 +163,7 @@ class LogSoftmax(PrimitiveWithInfer):
axis (int): The axis to do the Log softmax operation. Default: -1.
Inputs:
- **logits** (Tensor) - The input of Log Softmax.
- **logits** (Tensor) - The input of Log Softmax, with float16 or float32 data type.
Outputs:
Tensor, with the same type and shape as the logits.
......@@ -185,6 +186,7 @@ class LogSoftmax(PrimitiveWithInfer):
def infer_dtype(self, logits):
validator.check_subclass("logits", logits, mstype.tensor, self.name)
validator.check_tensor_type_same({"logits": logits}, mstype.float_type, self.name)
return logits
......@@ -298,7 +300,7 @@ class ReLU6(PrimitiveWithInfer):
It returns :math:`\min(\max(0,x), 6)` element-wise.
Inputs:
- **input_x** (Tensor) - The input tensor.
- **input_x** (Tensor) - The input tensor. With float16 or float32 data type.
Outputs:
Tensor, with the same type and shape as the `input_x`.
......@@ -430,7 +432,7 @@ class HSwish(PrimitiveWithInfer):
where :math:`x_{i}` is the :math:`i`-th slice along the given dim of the input Tensor.
Inputs:
- **input_data** (Tensor) - The input of HSwish.
- **input_data** (Tensor) - The input of HSwish, data type should be float16 or float32.
Outputs:
Tensor, with the same type and shape as the `input_data`.
......@@ -465,7 +467,7 @@ class Sigmoid(PrimitiveWithInfer):
where :math:`x_i` is the element of the input.
Inputs:
- **input_x** (Tensor) - The input of Sigmoid.
- **input_x** (Tensor) - The input of Sigmoid, data type should be float16 or float32.
Outputs:
Tensor, with the same type and shape as the input_x.
......@@ -503,7 +505,7 @@ class HSigmoid(PrimitiveWithInfer):
where :math:`x_{i}` is the :math:`i`-th slice along the given dim of the input Tensor.
Inputs:
- **input_data** (Tensor) - The input of HSigmoid.
- **input_data** (Tensor) - The input of HSigmoid, data type should be float16 or float32.
Outputs:
Tensor, with the same type and shape as the `input_data`.
......@@ -687,11 +689,11 @@ class BatchNorm(PrimitiveWithInfer):
epsilon (float): A small value added for numerical stability. Default: 1e-5.
Inputs:
- **input_x** (Tensor) - Tensor of shape :math:`(N, C)`.
- **scale** (Tensor) - Tensor of shape :math:`(C,)`.
- **bias** (Tensor) - Tensor of shape :math:`(C,)`.
- **mean** (Tensor) - Tensor of shape :math:`(C,)`.
- **variance** (Tensor) - Tensor of shape :math:`(C,)`.
- **input_x** (Tensor) - Tensor of shape :math:`(N, C)`, with float16 or float32 data type.
- **scale** (Tensor) - Tensor of shape :math:`(C,)`, with float16 or float32 data type.
- **bias** (Tensor) - Tensor of shape :math:`(C,)`, with the same data type as `scale`.
- **mean** (Tensor) - Tensor of shape :math:`(C,)`, with float16 or float32 data type.
- **variance** (Tensor) - Tensor of shape :math:`(C,)`, with the same data type as `mean`.
Outputs:
Tuple of 5 Tensor, the normalized inputs and the updated parameters.
......@@ -1165,6 +1167,7 @@ class MaxPoolWithArgmax(_Pool):
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Data type should be float16 or float32.
Outputs:
Tuple of 2 Tensor, the maxpool result and where max values from.
......@@ -1446,7 +1449,7 @@ class TopK(PrimitiveWithInfer):
be sorted by the values in descending order. Default: False.
Inputs:
- **input_x** (Tensor) - Input to be computed.
- **input_x** (Tensor) - Input to be computed, data type should be float16, float32 or int32.
- **k** (int) - Number of top elements to be computed along the last dimension, constant input is needed.
Outputs:
......@@ -1498,8 +1501,8 @@ class SoftmaxCrossEntropyWithLogits(PrimitiveWithInfer):
loss_{ij} = -\sum_j{Y_{ij} * ln(p_{ij})}
Inputs:
- **logits** (Tensor) - Input logits, with shape :math:`(N, C)`.
- **labels** (Tensor) - Ground truth labels, with shape :math:`(N, C)`.
- **logits** (Tensor) - Input logits, with shape :math:`(N, C)`. Data type should be float16 or float32.
- **labels** (Tensor) - Ground truth labels, with shape :math:`(N, C)`. Has the same data type as `logits`.
Outputs:
Tuple of 2 Tensor, the loss shape is `(N,)`, and the dlogits with the same shape as `logits`.
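A minimal usage sketch (assuming `P` is `mindspore.ops.operations`; the values are illustrative only):
>>> logits = Tensor(np.array([[2.0, 1.0, 0.1]]), mindspore.float32)
>>> labels = Tensor(np.array([[1.0, 0.0, 0.0]]), mindspore.float32)
>>> loss, dlogits = P.SoftmaxCrossEntropyWithLogits()(logits, labels)
>>> # loss has shape (1,); dlogits has the same shape as logits, i.e. (1, 3)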
......@@ -1549,8 +1552,9 @@ class SparseSoftmaxCrossEntropyWithLogits(PrimitiveWithInfer):
is_grad (bool): If it's true, this operation returns the computed gradient. Default: False.
Inputs:
- **logits** (Tensor) - Input logits, with shape :math:`(N, C)`.
- **logits** (Tensor) - Input logits, with shape :math:`(N, C)`. Data type should be float16 or float32.
- **labels** (Tensor) - Ground truth labels, with shape :math:`(N)`.
Data type should be int32 or int64.
Outputs:
Tensor, if `is_grad` is False, the output tensor is the value of loss which is a scalar tensor;
......@@ -1592,11 +1596,14 @@ class ApplyMomentum(PrimitiveWithInfer):
gradient_scale (float): The scale of the gradient. Default: 1.0.
Inputs:
- **variable** (Tensor) - Weights to be updated.
- **accumulation** (Tensor) - Accumulated gradient value by moment weight.
- **learning_rate** (float) - Learning rate.
- **gradient** (Tensor) - Gradients.
- **momentum** (float) - Momentum.
- **variable** (Parameter) - Weights to be updated. Data type should be float.
- **accumulation** (Parameter) - Accumulated gradient value by moment weight.
Has the same data type as `variable`.
- **learning_rate** (Union[Number, Tensor]) - The learning rate value, should be a float number or
a scalar tensor with float data type.
- **gradient** (Tensor) - Gradients, has the same data type as `variable`.
- **momentum** (Union[Number, Tensor]) - Momentum, should be a float number or
a scalar tensor with float data type.
Outputs:
Tensor, parameters to be updated.
......@@ -1658,7 +1665,7 @@ class SmoothL1Loss(PrimitiveWithInfer):
quadratic to linear. Default: 1.0.
Inputs:
- **prediction** (Tensor) - Predict data.
- **prediction** (Tensor) - Predict data. Data type should be float16 or float32.
- **target** (Tensor) - Ground truth data, with the same type and shape as `prediction`.
Outputs:
......@@ -1700,7 +1707,7 @@ class L2Loss(PrimitiveWithInfer):
:math:`nelement(x)` represents the number of elements of `input_x`.
Inputs:
- **input_x** (Tensor) - A input Tensor.
- **input_x** (Tensor) - An input Tensor. Data type should be float16 or float32.
Outputs:
Tensor. Has the same dtype as `input_x`. The output tensor is the value of loss which is a scalar tensor.
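A small worked example of the loss formula (assuming `P` is `mindspore.ops.operations`; the values are illustrative only):
>>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float16)
>>> loss = P.L2Loss()(input_x)
>>> # loss = sum(x ** 2) / 2 = (1 + 4 + 9) / 2 = 7.0, a scalar tensor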
......@@ -1765,6 +1772,7 @@ class DataFormatDimMap(PrimitiveWithInfer):
validator.check_tensor_type_same({"x": x_type}, valid_types, self.name)
return x_type
class RNNTLoss(PrimitiveWithInfer):
"""
Computes the RNNTLoss and its gradient with respect to the softmax outputs.
......@@ -1773,7 +1781,7 @@ class RNNTLoss(PrimitiveWithInfer):
blank_label (int): blank label. Default: 0.
Inputs:
- **acts** (Tensor[float32]) - Tensor of shape :math:`(B, T, U, V)`.
- **acts** (Tensor) - Tensor of shape :math:`(B, T, U, V)`. Data type should be float16 or float32.
- **labels** (Tensor[int32]) - Tensor of shape :math:`(B, U-1)`.
- **input_lengths** (Tensor[int32]) - Tensor of shape :math:`(B,)`.
- **label_lengths** (Tensor[int32]) - Tensor of shape :math:`(B,)`.
......@@ -1791,6 +1799,7 @@ class RNNTLoss(PrimitiveWithInfer):
>>> rnnt_loss = P.RNNTLoss(blank_label=blank)
>>> costs, grads = rnnt_loss(Tensor(acts), Tensor(labels), Tensor(input_length), Tensor(label_length))
"""
@prim_attr_register
def __init__(self, blank_label=0):
validator.check_value_type('blank_label', blank_label, [int], self.name)
......@@ -1814,7 +1823,7 @@ class RNNTLoss(PrimitiveWithInfer):
validator.check_subclass("labels_type", labels_type, mstype.tensor, self.name)
validator.check_subclass("input_length_type", input_length_type, mstype.tensor, self.name)
validator.check_subclass("label_length_type", label_length_type, mstype.tensor, self.name)
validator.check_tensor_type_same({"acts_type": acts_type}, [mstype.float32], self.name)
validator.check_tensor_type_same({"acts_type": acts_type}, [mstype.float32, mstype.float16], self.name)
validator.check_tensor_type_same({"labels_type": labels_type}, [mstype.int32], self.name)
validator.check_tensor_type_same({"input_length_type": input_length_type}, [mstype.int32], self.name)
validator.check_tensor_type_same({"label_length_type": label_length_type}, [mstype.int32], self.name)
......@@ -1837,12 +1846,14 @@ class SGD(PrimitiveWithInfer):
nesterov (bool): Enable Nesterov momentum. Default: False.
Inputs:
- **parameters** (Tensor) - Parameters to be updated. Their data type can be list or tuple.
- **gradient** (Tensor) - Gradients.
- **learning_rate** (Tensor) - Learning rate. Must be float value. e.g. Tensor(0.1, mindspore.float32).
- **accum** (Tensor) - Accum(velocity) to be updated.
- **momentum** (Tensor) - Momentum. e.g. Tensor(0.1, mindspore.float32).
- **stat** (Tensor) - States to be updated with the same shape as gradient.
- **parameters** (Tensor) - Parameters to be updated. With float16 or float32 data type.
- **gradient** (Tensor) - Gradients. With float16 or float32 data type.
- **learning_rate** (Tensor) - Learning rate, a scalar tensor with float16 or float32 data type.
e.g. Tensor(0.1, mindspore.float32)
- **accum** (Tensor) - Accum(velocity) to be updated. With float16 or float32 data type.
- **momentum** (Tensor) - Momentum, a scalar tensor with float16 or float32 data type.
e.g. Tensor(0.1, mindspore.float32).
- **stat** (Tensor) - States to be updated with the same shape as gradient. With float16 or float32 data type.
Outputs:
Tensor, parameters to be updated.
......@@ -1920,7 +1931,8 @@ class ApplyRMSProp(PrimitiveWithInfer):
- **var** (Tensor) - Weights to be updated.
- **mean_square** (Tensor) - Mean square gradients, must have the same type as `var`.
- **moment** (Tensor) - Delta of `var`, must have the same type as `var`.
- **learning_rate** (Union[Number, Tensor]) - Learning rate.
- **learning_rate** (Union[Number, Tensor]) - Learning rate. Should be a float number or
a scalar tensor with float16 or float32 data type.
- **grad** (Tensor) - Gradients, must have the same type as `var`.
- **decay** (float) - Decay rate. Only constant value is allowed.
- **momentum** (float) - Momentum. Only constant value is allowed.
......@@ -2016,7 +2028,8 @@ class ApplyCenteredRMSProp(PrimitiveWithInfer):
- **mean_square** (Tensor) - Mean square gradients, must have the same type as `var`.
- **moment** (Tensor) - Delta of `var`, must have the same type as `var`.
- **grad** (Tensor) - Gradients, must have the same type as `var`.
- **learning_rate** (Union[Number, Tensor]) - Learning rate.
- **learning_rate** (Union[Number, Tensor]) - Learning rate. Should be a float number or
a scalar tensor with float16 or float32 data type.
- **decay** (float) - Decay rate.
- **momentum** (float) - Momentum.
- **epsilon** (float) - Ridge term.
......@@ -2144,7 +2157,7 @@ class L2Normalize(PrimitiveWithInfer):
epsilon (float): A small value added for numerical stability. Default: 1e-4.
Inputs:
- **input_x** (Tensor) - Input to compute the normalization.
- **input_x** (Tensor) - Input to compute the normalization. Data type should be float16 or float32.
Outputs:
Tensor, with the same type and shape as the input.
......@@ -2173,6 +2186,7 @@ class L2Normalize(PrimitiveWithInfer):
def infer_dtype(self, input_x):
validator.check_subclass("x", input_x, mstype.tensor, self.name)
validator.check_tensor_type_same({"input_x": input_x}, [mstype.float16, mstype.float32], self.name)
return input_x
......@@ -2327,9 +2341,11 @@ class OneHot(PrimitiveWithInfer):
Inputs:
- **indices** (Tensor) - A tensor of indices. Tensor of shape :math:`(X_0, \ldots, X_n)`.
Data type must be int32.
- **depth** (int) - A scalar defining the depth of the one hot dimension.
- **on_value** (Tensor) - A value to fill in output when `indices[j] = i`.
- **on_value** (Tensor) - A value to fill in output when `indices[j] = i`. With data type of float16 or float32.
- **off_value** (Tensor) - A value to fill in output when `indices[j] != i`.
Has the same data type as `on_value`.
Outputs:
Tensor, one_hot tensor. Tensor of shape :math:`(X_0, \ldots, X_{axis}, \text{depth} ,X_{axis+1}, \ldots, X_n)`.
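A minimal OneHot sketch tying the inputs together (assuming `P` is `mindspore.ops.operations`; the values are illustrative only):
>>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
>>> depth = 3
>>> on_value = Tensor(1.0, mindspore.float32)
>>> off_value = Tensor(0.0, mindspore.float32)
>>> output = P.OneHot()(indices, depth, on_value, off_value)
>>> # output is a (3, 3) tensor: on_value at the matching positions, off_value elsewhere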
......@@ -2383,7 +2399,7 @@ class Gelu(PrimitiveWithInfer):
where :math:`erf` is the "Gauss error function" .
Inputs:
- **input_x** (Tensor) - Input to compute the Gelu.
- **input_x** (Tensor) - Input to compute the Gelu. With data type of float16 or float32.
Outputs:
Tensor, with the same type and shape as input.
......@@ -2465,8 +2481,9 @@ class PReLU(PrimitiveWithInfer):
Inputs:
- **input_x** (Tensor) - Float tensor, representing the output of the previous layer.
With data type of float16 or float32.
- **weight** (Tensor) - Float Tensor, w > 0, for which only two shapes are legitimate:
1 or the number of channels at input.
1 or the number of channels at input. With data type of float16 or float32.
Outputs:
Tensor, with the same type as `input_x`.
......@@ -2795,9 +2812,9 @@ class ROIAlign(PrimitiveWithInfer):
Inputs:
- **features** (Tensor) - The input features, whose shape should be `(N, C, H, W)`.
- **rois** (Tensor) - The shape is `(rois_n, 5)`. `rois_n` represents the number of RoI. The size of
the second dimension should be `5` and the `5` colunms are
`(image_index, top_left_x, top_left_y, bottom_right_x, bottom_right_y)`. `image_index` represents the
- **rois** (Tensor) - The shape is `(rois_n, 5)`. With data type of float16 or float32.
`rois_n` represents the number of RoIs. The size of the second dimension should be `5` and the `5` columns
are `(image_index, top_left_x, top_left_y, bottom_right_x, bottom_right_y)`. `image_index` represents the
index of image. `top_left_x` and `top_left_y` represent the `x, y` coordinates of the top left corner
of corresponding RoI, respectively. `bottom_right_x` and `bottom_right_y` represent the `x, y`
coordinates of the bottom right corner of corresponding RoI, respectively.
......@@ -2832,6 +2849,9 @@ class ROIAlign(PrimitiveWithInfer):
return [rois_shape[0], inputs_shape[1], self.pooled_height, self.pooled_width]
def infer_dtype(self, inputs_type, rois_type):
valid_types = (mstype.float16, mstype.float32)
validator.check_tensor_type_same({"inputs_type": inputs_type}, valid_types, self.name)
validator.check_tensor_type_same({"rois_type": rois_type}, valid_types, self.name)
return inputs_type
......@@ -2876,7 +2896,7 @@ class Adam(PrimitiveWithInfer):
- **beta1** (float) - The exponential decay rate for the 1st moment estimates.
- **beta2** (float) - The exponential decay rate for the 2nd moment estimates.
- **epsilon** (float) - Term added to the denominator to improve numerical stability.
- **gradient** (Tensor) - Gradients.
- **gradient** (Tensor) - Gradients. Has the same type as `var`.
Outputs:
Tuple of 3 Tensor, the updated parameters.
......@@ -3371,6 +3391,7 @@ class FusedSparseProximalAdagrad(PrimitiveWithInfer):
validator.check_tensor_type_same({'indices': indices_dtype}, valid_types, self.name)
return var_dtype, accum_dtype
class KLDivLoss(PrimitiveWithInfer):
r"""
Computes the Kullback-Leibler divergence between the target and the output.
......@@ -3442,6 +3463,7 @@ class KLDivLoss(PrimitiveWithInfer):
validator.check_tensor_type_same(args, valid_types, self.name)
return x_type
class BinaryCrossEntropy(PrimitiveWithInfer):
r"""
Computes the Binary Cross Entropy between the target and the output.
......@@ -3468,10 +3490,10 @@ class BinaryCrossEntropy(PrimitiveWithInfer):
Its value should be one of 'none', 'mean', 'sum'. Default: 'mean'.
Inputs:
- **input_x** (Tensor) - The input Tensor.
- **input_y** (Tensor) - The label Tensor which has same shape as `input_x`.
- **input_x** (Tensor) - The input Tensor. The data type should be float16 or float32.
- **input_y** (Tensor) - The label Tensor, which has the same shape and data type as `input_x`.
- **weight** (Tensor, optional) - A rescaling weight applied to the loss of each batch element.
And it should have same shape as `input_x`. Default: None.
And it should have the same shape and data type as `input_x`. Default: None.
Outputs:
Tensor or Scalar, if `reduction` is 'none', then output is a tensor and same shape as `input_x`.
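A minimal BinaryCrossEntropy sketch (assuming `P` is `mindspore.ops.operations`; the values are illustrative only):
>>> input_x = Tensor(np.array([0.2, 0.7, 0.9]), mindspore.float32)
>>> input_y = Tensor(np.array([0.0, 1.0, 1.0]), mindspore.float32)
>>> weight = Tensor(np.ones(3), mindspore.float32)
>>> loss = P.BinaryCrossEntropy(reduction='mean')(input_x, input_y, weight)
>>> # with reduction='mean', loss is a scalar tensor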
......@@ -3841,12 +3863,13 @@ class ApplyAdagradV2(PrimitiveWithInfer):
update_slots (bool): If `True`, `accum` will be updated. Default: True.
Inputs:
- **var** (Parameter) - Variable to be updated. With float32 data type.
- **var** (Parameter) - Variable to be updated. With float16 or float32 data type.
- **accum** (Parameter) - Accum to be updated. The shape and dtype should be the same as `var`.
With float32 data type.
- **lr** (Union[Number, Tensor]) - The learning rate value, should be scalar. With float32 data type.
With float16 or float32 data type.
- **lr** (Union[Number, Tensor]) - The learning rate value, should be a float number or
a scalar tensor with float16 or float32 data type.
- **grad** (Tensor) - A tensor for gradient. The shape and dtype should be the same as `var`.
With float32 data type.
With float16 or float32 data type.
Outputs:
Tuple of 2 Tensor, the updated parameters.
......@@ -3898,8 +3921,8 @@ class ApplyAdagradV2(PrimitiveWithInfer):
def infer_dtype(self, var_dtype, accum_dtype, lr_dtype, grad_dtype):
args = {'var': var_dtype, 'accum': accum_dtype, 'grad': grad_dtype}
validator.check_tensor_type_same(args, [mstype.float32], self.name)
validator.check_scalar_or_tensor_type_same({'lr': lr_dtype}, [mstype.float32], self.name)
validator.check_tensor_type_same(args, [mstype.float16, mstype.float32], self.name)
validator.check_scalar_or_tensor_type_same({'lr': lr_dtype}, [mstype.float16, mstype.float32], self.name)
return var_dtype, accum_dtype
......@@ -3918,11 +3941,10 @@ class SparseApplyAdagrad(PrimitiveWithInfer):
use_locking (bool): If True, updating of the var and accum tensors will be protected. Default: False.
Inputs:
- **var** (Parameter) - Variable to be updated. The type must be float32.
- **accum** (Parameter) - Accum to be updated. The shape must be the same as `var`'s shape,
the type must be float32.
- **grad** (Tensor) - Gradient. The shape must be the same as `var`'s shape
except first dimension, the type must be float32.
- **var** (Parameter) - Variable to be updated. The data type must be float16 or float32.
- **accum** (Parameter) - Accum to be updated. The shape and dtype should be the same as `var`.
- **grad** (Tensor) - Gradient. The shape must be the same as `var`'s shape except first dimension.
Has the same data type as `var`.
- **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`.
The shape of `indices` must be the same as `grad` in first dimension, the type must be int32.
......@@ -3978,7 +4000,7 @@ class SparseApplyAdagrad(PrimitiveWithInfer):
def infer_dtype(self, var_type, accum_type, grad_type, indices_type):
args = {'var': var_type, 'accum': accum_type, 'grad': grad_type}
validator.check_tensor_type_same(args, (mstype.float32,), self.name)
validator.check_tensor_type_same(args, [mstype.float16, mstype.float32], self.name)
validator.check_tensor_type_same({'indices': indices_type}, [mstype.int32], self.name)
return var_type, accum_type
......@@ -3999,11 +4021,10 @@ class SparseApplyAdagradV2(PrimitiveWithInfer):
update_slots (bool): If `True`, the computation logic will be different to `False`. Default: True.
Inputs:
- **var** (Parameter) - Variable to be updated. The type must be float32.
- **accum** (Parameter) - Accum to be updated. The shape must be the same as `var`'s shape,
the type must be float32.
- **grad** (Tensor) - Gradient. The shape must be the same as `var`'s shape except first dimension,
the type must be float32.
- **var** (Parameter) - Variable to be updated. The data type must be float16 or float32.
- **accum** (Parameter) - Accum to be updated. The shape and dtype should be the same as `var`.
- **grad** (Tensor) - Gradient. The shape must be the same as `var`'s shape except first dimension.
Has the same data type as `var`.
- **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`.
The shape of `indices` must be the same as `grad` in first dimension, the type must be int32.
......@@ -4060,7 +4081,7 @@ class SparseApplyAdagradV2(PrimitiveWithInfer):
def infer_dtype(self, var_type, accum_type, grad_type, indices_type):
args = {'var': var_type, 'accum': accum_type, 'grad': grad_type}
validator.check_tensor_type_same(args, [mstype.float32], self.name)
validator.check_tensor_type_same(args, [mstype.float16, mstype.float32], self.name)
validator.check_tensor_type_same({'indices': indices_type}, [mstype.int32], self.name)
return var_type, accum_type
......@@ -4176,12 +4197,16 @@ class SparseApplyProximalAdagrad(PrimitiveWithInfer):
use_locking (bool): If True, updating of the var and accum tensors will be protected. Default: False.
Inputs:
- **var** (Parameter) - Variable tensor to be updated. The data type must be float32.
- **var** (Parameter) - Variable tensor to be updated. The data type must be float16 or float32.
- **accum** (Parameter) - Variable tensor to be updated. Has the same dtype as `var`.
- **lr** (Union[Number, Tensor]) - The learning rate value. The data type must be float32.
- **l1** (Union[Number, Tensor]) - l1 regularization strength. The data type must be float32.
- **l2** (Union[Number, Tensor]) - l2 regularization strength. The data type must be float32.
- **grad** (Tensor) - A tensor of the same type as `var`, for the gradient. The data type must be float32.
- **lr** (Union[Number, Tensor]) - The learning rate value. It should be a float number or
a scalar tensor with float16 or float32 data type.
- **l1** (Union[Number, Tensor]) - l1 regularization strength. It should be a float number or
a scalar tensor with float16 or float32 data type.
- **l2** (Union[Number, Tensor]) - l2 regularization strength. It should be a float number or
a scalar tensor with float16 or float32 data type.
- **grad** (Tensor) - A tensor of the same type as `var`, for the gradient.
The data type must be float16 or float32.
- **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`.
Outputs:
......@@ -4236,10 +4261,10 @@ class SparseApplyProximalAdagrad(PrimitiveWithInfer):
def infer_dtype(self, var_dtype, accum_dtype, lr_dtype, l1_dtype, l2_dtype, grad_dtype, indices_dtype):
args = {'var': var_dtype, 'accum': accum_dtype, 'grad': grad_dtype}
validator.check_tensor_type_same(args, [mstype.float32], self.name)
validator.check_scalar_or_tensor_type_same({"lr": lr_dtype}, [mstype.float32], self.name)
validator.check_scalar_or_tensor_type_same({"l1": l1_dtype}, [mstype.float32], self.name)
validator.check_scalar_or_tensor_type_same({"l2": l2_dtype}, [mstype.float32], self.name)
validator.check_tensor_type_same(args, [mstype.float16, mstype.float32], self.name)
validator.check_scalar_or_tensor_type_same({"lr": lr_dtype}, [mstype.float16, mstype.float32], self.name)
validator.check_scalar_or_tensor_type_same({"l1": l1_dtype}, [mstype.float16, mstype.float32], self.name)
validator.check_scalar_or_tensor_type_same({"l2": l2_dtype}, [mstype.float16, mstype.float32], self.name)
valid_types = [mstype.int16, mstype.int32, mstype.int64,
mstype.uint16, mstype.uint32, mstype.uint64]
validator.check_tensor_type_same({'indices': indices_dtype}, valid_types, self.name)
......@@ -4674,18 +4699,19 @@ class ApplyFtrl(PrimitiveWithInfer):
use_locking (bool): Use locks for update operation if True . Default: False.
Inputs:
- **var** (Tensor) - The variable to be updated.
- **accum** (Tensor) - The accum to be updated, must be same type and shape as `var`.
- **linear** (Tensor) - The linear to be updated, must be same type and shape as `var`.
- **grad** (Tensor) - Gradient.
- **var** (Parameter) - The variable to be updated. The data type should be float16 or float32.
- **accum** (Parameter) - The accum to be updated, must be same type and shape as `var`.
- **linear** (Parameter) - The linear to be updated, must be same type and shape as `var`.
- **grad** (Tensor) - Gradient. The data type should be float16 or float32.
- **lr** (Union[Number, Tensor]) - The learning rate value, must be positive. Default: 0.001.
It should be a float number or a scalar tensor with float16 or float32 data type.
- **l1** (Union[Number, Tensor]) - l1 regularization strength, must be greater than or equal to zero.
Default: 0.0.
Default: 0.0. It should be a float number or a scalar tensor with float16 or float32 data type.
- **l2** (Union[Number, Tensor]) - l2 regularization strength, must be greater than or equal to zero.
Default: 0.0.
Default: 0.0. It should be a float number or a scalar tensor with float16 or float32 data type.
- **lr_power** (Union[Number, Tensor]) - Learning rate power controls how the learning rate decreases
during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.
Default: -0.5.
Default: -0.5. It should be a float number or a scalar tensor with float16 or float32 data type.
Outputs:
Tensor, representing the updated var.
......@@ -4764,7 +4790,7 @@ class SparseApplyFtrl(PrimitiveWithInfer):
use_locking (bool): Use locks for update operation if True . Default: False.
Inputs:
- **var** (Parameter) - The variable to be updated. The data type must be float32.
- **var** (Parameter) - The variable to be updated. The data type must be float16 or float32.
- **accum** (Parameter) - The accum to be updated, must be same type and shape as `var`.
- **linear** (Parameter) - The linear to be updated, must be same type and shape as `var`.
- **grad** (Tensor) - A tensor of the same type as `var`, for the gradient.
......@@ -4833,7 +4859,7 @@ class SparseApplyFtrl(PrimitiveWithInfer):
def infer_dtype(self, var_dtype, accum_dtype, linear_dtype, grad_dtype, indices_dtype):
args = {"var_dtype": var_dtype, "accum_dtype": accum_dtype,
"linear_dtype": linear_dtype, "grad_dtype": grad_dtype}
validator.check_tensor_type_same(args, [mstype.float32], self.name)
validator.check_tensor_type_same(args, [mstype.float16, mstype.float32], self.name)
validator.check_tensor_type_same({"indices_dtype": indices_dtype}, [mstype.int32], self.name)
return var_dtype, accum_dtype, linear_dtype
......@@ -4852,7 +4878,7 @@ class SparseApplyFtrlV2(PrimitiveWithInfer):
use_locking (bool): If `True`, updating of the var and accum tensors will be protected. Default: False.
Inputs:
- **var** (Parameter) - The variable to be updated. The data type must be float32.
- **var** (Parameter) - The variable to be updated. The data type must be float16 or float32.
- **accum** (Parameter) - The accum to be updated, must be same type and shape as `var`.
- **linear** (Parameter) - The linear to be updated, must be same type and shape as `var`.
- **grad** (Tensor) - A tensor of the same type as `var`, for the gradient.
......@@ -4925,7 +4951,7 @@ class SparseApplyFtrlV2(PrimitiveWithInfer):
def infer_dtype(self, var_dtype, accum_dtype, linear_dtype, grad_dtype, indices_dtype):
args = {"var_dtype": var_dtype, "accum_dtype": accum_dtype,
"linear_dtype": linear_dtype, "grad_dtype": grad_dtype}
validator.check_tensor_type_same(args, [mstype.float32], self.name)
validator.check_tensor_type_same(args, [mstype.float16, mstype.float32], self.name)
validator.check_tensor_type_same({"indices_dtype": indices_dtype}, [mstype.int32], self.name)
return var_dtype, accum_dtype, linear_dtype
......@@ -5253,7 +5279,7 @@ class InTopK(PrimitiveWithInfer):
k (int): Specifies the number of top elements to look at for computing precision.
Inputs:
- **x1** (Tensor) - A 2D Tensor define the predictions of a batch of samples with float32 data type.
- **x1** (Tensor) - A 2D Tensor that defines the predictions of a batch of samples with float16 or float32 data type.
- **x2** (Tensor) - A 1D Tensor that defines the labels of a batch of samples with int32 data type.
Outputs:
......@@ -5274,7 +5300,7 @@ class InTopK(PrimitiveWithInfer):
validator.check_value_type("k", k, [int], self.name)
def infer_dtype(self, x1_dtype, x2_dtype):
validator.check_tensor_type_same({"x1": x1_dtype}, (mstype.float32,), self.name)
validator.check_tensor_type_same({"x1": x1_dtype}, (mstype.float16, mstype.float32,), self.name)
validator.check_tensor_type_same({"x2": x2_dtype}, (mstype.int32,), self.name)
return mstype.tensor_type(mstype.bool_)
......@@ -5328,6 +5354,7 @@ class LRN(PrimitiveWithInfer):
validator.check_integer("x_shape", len(x_shape), 4, Rel.EQ, self.name)
return x_shape
class CTCLossV2(PrimitiveWithInfer):
r"""
Calculates the CTC(Connectionist Temporal Classification) loss. Also calculates the gradient.
......
......@@ -181,8 +181,9 @@ class CheckValid(PrimitiveWithInfer):
Checks whether the bounding box crosses the data and the data border.
Inputs:
- **bboxes** (Tensor) - Bounding boxes tensor with shape (N, 4).
- **bboxes** (Tensor) - Bounding boxes tensor with shape (N, 4). Data type should be float16 or float32.
- **img_metas** (Tensor) - Raw image size information, format (height, width, ratio).
Data type should be float16 or float32.
Outputs:
Tensor, the validation results, with bool data type.
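A minimal CheckValid sketch (assuming `P` is `mindspore.ops.operations`; the values are illustrative only):
>>> bboxes = Tensor(np.array([[3.0, 5.0, 20.0, 30.0]]), mindspore.float16)
>>> img_metas = Tensor(np.array([100.0, 120.0, 1.0]), mindspore.float16)
>>> valid = P.CheckValid()(bboxes, img_metas)
>>> # valid has shape (1,) and bool data type, one flag per bounding box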
......@@ -220,6 +221,9 @@ class CheckValid(PrimitiveWithInfer):
return bboxes_shape[:-1]
def infer_dtype(self, bboxes_type, metas_type):
valid_type = [mstype.float32, mstype.float16]
validator.check_tensor_type_same({"bboxes_type": bboxes_type}, valid_type, self.name)
validator.check_tensor_type_same({"metas_type": metas_type}, valid_type, self.name)
return mstype.bool_
......@@ -242,12 +246,12 @@ class IOU(PrimitiveWithInfer):
Inputs:
- **anchor_boxes** (Tensor) - Anchor boxes, tensor of shape (N, 4). "N" indicates the number of anchor boxes,
and the value "4" refers to "x0", "x1", "y0", and "y1". Data type must be float16.
and the value "4" refers to "x0", "x1", "y0", and "y1". Data type must be float16 or float32.
- **gt_boxes** (Tensor) - Ground truth boxes, tensor of shape (M, 4). "M" indicates the number of ground
truth boxes, and the value "4" refers to "x0", "x1", "y0", and "y1". Data type must be float16.
truth boxes, and the value "4" refers to "x0", "x1", "y0", and "y1". Data type must be float16 or float32.
Outputs:
Tensor, the 'iou' values, tensor of shape (M, N), with data type float16.
Tensor, the 'iou' values, tensor of shape (M, N), with the same data type as `anchor_boxes`.
Raises:
KeyError: When `mode` is not 'iou' or 'iof'.
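A minimal IOU sketch (assuming `P` is `mindspore.ops.operations`; shapes follow the description above and the values are random, for illustration only):
>>> anchor_boxes = Tensor(np.random.rand(4, 4), mindspore.float16)
>>> gt_boxes = Tensor(np.random.rand(3, 4), mindspore.float16)
>>> iou = P.IOU(mode='iou')
>>> overlaps = iou(anchor_boxes, gt_boxes)
>>> # overlaps has shape (3, 4): one value per (gt_box, anchor_box) pair, same dtype as anchor_boxes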
......@@ -274,6 +278,9 @@ class IOU(PrimitiveWithInfer):
return iou
def infer_dtype(self, anchor_boxes, gt_boxes):
valid_type = [mstype.float32, mstype.float16]
validator.check_tensor_type_same({"anchor_boxes": anchor_boxes}, valid_type, self.name)
validator.check_tensor_type_same({"gt_boxes": gt_boxes}, valid_type, self.name)
return anchor_boxes
......