An operator integrating the open source Warp-CTC library (https://github.com/baidu-research/warp-ctc)
to compute Connectionist Temporal Classification (CTC) loss.
It can be used as a combined softmax-with-CTC operator, since a native softmax activation
is integrated into the Warp-CTC library to normalize the values in each row of the input tensor.
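For reference (a standard formula, not part of the original docstring): given an input :math:`x` of length :math:`T` and a label sequence :math:`l`, the loss is the CTC negative log-likelihood of Graves et al. (2006),

.. math::

    \text{loss}(x, l) = -\log p(l \mid x) = -\log \sum_{\pi \in \mathcal{B}^{-1}(l)} \prod_{t=1}^{T} y^{t}_{\pi_t},

where :math:`y^t_k` is the softmax-normalized probability of class :math:`k` at timestep :math:`t`, and :math:`\mathcal{B}` maps an alignment :math:`\pi` to a label sequence by collapsing repeated labels and removing blanks.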
Parameters:
...
@@ -967,7 +954,7 @@ def ctc_loss(log_probs,
Returns:
    Tensor, the Connectionist Temporal Classification (CTC) loss between ``log_probs`` and ``labels``. If :attr:`reduction` is ``'none'``, the shape of the loss is [batch_size]; otherwise, the shape of the loss is [1]. The data type is the same as ``log_probs``.
Examples:
    .. code-block:: python
...
@@ -1012,18 +999,18 @@ def ctc_loss(log_probs,
    input_lengths = paddle.to_tensor(input_lengths)
    label_lengths = paddle.to_tensor(label_lengths)

    loss = F.ctc_loss(log_probs, labels,
                      input_lengths,
                      label_lengths,
                      blank=0,
                      reduction='none')
    print(loss.numpy())  # [3.9179852 2.9076521]

    loss = F.ctc_loss(log_probs, labels,
                      input_lengths,
                      label_lengths,
                      blank=0,
                      reduction='mean')
    print(loss.numpy())  # [1.1376063]
"""
"""
...
@@ -1071,8 +1058,8 @@ def cross_entropy(input,
Parameters:
    input (Tensor): Input tensor, the data type is float32 or float64. Shape is
        (N, C), where C is the number of classes; if the shape is more than 2D, it
        is (N, C, D1, D2, ..., Dk), k >= 1.
    label (Tensor): Label tensor, the data type is int64. Shape is (N), where each
        value satisfies 0 <= label[i] <= C-1; if the shape is more than 2D, it is
        (N, D1, D2, ..., Dk), k >= 1.
    weight (Tensor, optional): Weight tensor, a manual rescaling weight given
...
@@ -1105,7 +1092,7 @@ def cross_entropy(input,
    weight = paddle.to_tensor(weight_data)
    loss = paddle.nn.functional.cross_entropy(input=input, label=label, weight=weight)
    print(loss.numpy())
"""
"""
    if not in_dygraph_mode():
        fluid.data_feeder.check_variable_and_dtype(
...
@@ -1124,7 +1111,7 @@ def cross_entropy(input,
        raise ValueError(
            "The 'weight' is not a Variable, please convert it to a Variable.")