Unverified commit d5de7886, authored by Nyakku Shigure, committed by GitHub

[Docs][en] adjust code example format (#44679)

* add name attribute to code-block, test=document_fix

* remove redundant labels, test=document_fix

* remove redundant labels (from upstream), test=document_fix

* more COPY-FROM (try multiple code example), test=document_fix

* empty commit, try to trigger PR-CI-build

* fix some `Examples:` format issues

* fix some ci errors
Parent ffc8defa
......@@ -590,7 +590,7 @@ class ChainTransform(Transform):
class ExpTransform(Transform):
r"""Exponent transformation with mapping :math:`y = \exp(x)`.
- Exapmles:
+ Examples:
.. code-block:: python
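# NOTE: the example body is collapsed in this diff view; the lines below
# are a minimal usage sketch (assuming the paddle.distribution.ExpTransform
# API), not the original example.
import paddle

exp = paddle.distribution.ExpTransform()
x = paddle.to_tensor([1., 2., 3.])
print(exp.forward(x))               # y = exp(x)
print(exp.inverse(exp.forward(x)))  # recovers x = log(y)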
......@@ -1169,7 +1169,7 @@ class StickBreakingTransform(Transform):
class TanhTransform(Transform):
r"""Tanh transformation with mapping :math:`y = \tanh(x)`.
- Examples
+ Examples:
.. code-block:: python
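# NOTE: the example body is collapsed in this diff view; a minimal sketch,
# assuming the paddle.distribution.TanhTransform API:
import paddle

tanh = paddle.distribution.TanhTransform()
x = paddle.to_tensor([1., 2., 3.])
print(tanh.forward(x))                # y = tanh(x)
print(tanh.inverse(tanh.forward(x)))  # recovers x = atanh(y)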
......
......@@ -413,7 +413,7 @@ class Subset(Dataset):
Returns:
Dataset: A Dataset which is the subset of the original dataset.
- Example code:
+ Examples:
.. code-block:: python
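# NOTE: the example body is collapsed in this diff view; a minimal sketch,
# assuming the paddle.io.Subset API:
import paddle

# keep elements 0 and 2 of the dataset [1, 2, 3]
subset = paddle.io.Subset(dataset=range(1, 4), indices=[0, 2])
print(list(subset))  # [1, 3]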
......@@ -452,10 +452,10 @@ def random_split(dataset, lengths, generator=None):
lengths (sequence): lengths of splits to be produced
generator (Generator, optional): Generator used for the random permutation. Default is None then the DefaultGenerator is used in manual_seed().
Returns:
Datasets: A list of subset Datasets, which are the non-overlapping subsets of the original Dataset.
- Example code:
+ Examples:
.. code-block:: python
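# NOTE: the example body is collapsed in this diff view; a minimal sketch,
# assuming the paddle.io.random_split API:
import paddle

# split a 10-element dataset into non-overlapping subsets of 3 and 7
splits = paddle.io.random_split(range(10), [3, 7])
print(len(splits))     # 2
print(len(splits[0]))  # 3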
......
......@@ -485,8 +485,9 @@ def grad(outputs,
inside `inputs`, and the i-th returned Tensor is the sum of gradients of
`outputs` with respect to the i-th `inputs`.
- Examples 1:
+ Examples:
.. code-block:: python
+ :name: code-example-1
import paddle
......@@ -519,8 +520,8 @@ def grad(outputs,
print(test_dygraph_grad(create_graph=False)) # [2.]
print(test_dygraph_grad(create_graph=True)) # [4.]
- Examples 2:
.. code-block:: python
+ :name: code-example-2
import paddle
......
......@@ -98,6 +98,26 @@ class Layer(object):
Returns:
None
Examples:
.. code-block:: python
import paddle
class MyLayer(paddle.nn.Layer):
def __init__(self):
super(MyLayer, self).__init__()
self._linear = paddle.nn.Linear(1, 1)
self._dropout = paddle.nn.Dropout(p=0.5)
def forward(self, input):
temp = self._linear(input)
temp = self._dropout(temp)
return temp
x = paddle.randn([10, 1], 'float32')
mylayer = MyLayer()
mylayer.eval() # set mylayer._dropout to eval mode
out = mylayer(x)
mylayer.train() # set mylayer._dropout to train mode
out = mylayer(x)
"""
def __init__(self, name_scope=None, dtype="float32"):
......
......@@ -1187,8 +1187,9 @@ class Executor(object):
results are spliced together in dimension 0 for the same Tensor values
(Tensors in fetch_list) on different devices.
- Examples 1:
+ Examples:
.. code-block:: python
+ :name: code-example-1
import paddle
import numpy
......@@ -1215,9 +1216,10 @@ class Executor(object):
print(array_val)
# [array([0.02153828], dtype=float32)]
- Examples 2:
.. code-block:: python
+ :name: code-example-2
# required: gpu
import paddle
import numpy as np
......@@ -1265,7 +1267,7 @@ class Executor(object):
print("The merged prediction shape: {}".format(
np.array(merged_prediction).shape))
print(merged_prediction)
- # Out:
# The unmerged prediction shape: (2, 3, 2)
# [array([[-0.37620035, -0.19752218],
......
......@@ -1187,7 +1187,7 @@ def calculate_gain(nonlinearity, param=None):
Examples:
.. code-block:: python
:name: code-example1
import paddle
gain = paddle.nn.initializer.calculate_gain('tanh') # 5.0 / 3
gain = paddle.nn.initializer.calculate_gain('leaky_relu', param=1.0) # 1.0 = math.sqrt(2.0 / (1+param^2))
......
......@@ -54,7 +54,6 @@ def set_config(config=None):
Examples:
.. code-block:: python
- :name: auto-tuning
import paddle
import json
......
......@@ -601,7 +601,6 @@ def rrelu(x, lower=1. / 8., upper=1. / 3., training=True, name=None):
Examples:
.. code-block:: python
- :name: rrelu-example
import paddle
import paddle.nn.functional as F
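# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch based on the rrelu signature above might continue as follows.
x = paddle.to_tensor([[-2.0, 0.5], [1.0, -1.0]])
out = F.rrelu(x, lower=1. / 8., upper=1. / 3., training=True)
print(out)  # negative entries are scaled by a random slope in [1/8, 1/3]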
......
......@@ -2110,7 +2110,7 @@ def cross_entropy(input,
Return the average value of the previous results
.. math::
    loss = \sum_{j} loss_j / N
where N is the number of samples and C is the number of categories.
......@@ -2119,21 +2119,21 @@ def cross_entropy(input,
1. Hard labels (soft_label = False)
.. math::
    loss = \sum_{j} loss_j / \sum_{j} weight[label_j]
2. Soft labels (soft_label = True)
.. math::
    loss = \sum_{j} loss_j / \sum_{j} \left( \sum_{i} weight[label_i] \right)
Parameters:
- **input** (Tensor)
Input tensor, the data type is float32, float64. Shape is
:math:`[N_1, N_2, ..., N_k, C]`, where C is the number of classes and ``k >= 1``.
Note:
......@@ -2141,7 +2141,7 @@ def cross_entropy(input,
output of softmax operator, which will produce incorrect results.
2. when use_softmax=False, it expects the output of softmax operator.
- **label** (Tensor)
1. If soft_label=False, the shape is
......@@ -2209,10 +2209,11 @@ def cross_entropy(input,
2. if soft_label = True, the dimension of return value is :math:`[N_1, N_2, ..., N_k, 1]` .
- Example1(hard labels):
+ Examples:
.. code-block:: python
+ # hard labels
import paddle
paddle.seed(99999)
N=100
......@@ -2229,11 +2230,9 @@ def cross_entropy(input,
label)
print(dy_ret.numpy()) #[5.41993642]
- Example2(soft labels):
.. code-block:: python
+ # soft labels
import paddle
paddle.seed(99999)
axis = -1
......@@ -2900,7 +2899,6 @@ def cosine_embedding_loss(input1,
Examples:
.. code-block:: python
- :name: code-example1
import paddle
......
......@@ -1296,26 +1296,25 @@ def adaptive_avg_pool1d(x, output_size, name=None):
Tensor: The result of 1D adaptive average pooling. Its data type is the same as the input.
Examples:
.. code-block:: python
- :name: adaptive_avg_pool1d-example
# average adaptive pool1d
# suppose the input data has shape [N, C, L] and `output_size` is m or [m];
# the output shape is [N, C, m]. Adaptive pooling divides the L dimension
# of the input into m grids evenly and performs pooling in each
# grid to get the output.
# adaptive avg pool performs calculations as follows:
#
# for i in range(m):
#     lstart = floor(i * L / m)
#     lend = ceil((i + 1) * L / m)
#     output[:, :, i] = sum(input[:, :, lstart: lend]) / (lend - lstart)
#
import paddle
import paddle.nn.functional as F

data = paddle.uniform([1, 3, 32])
pool_out = F.adaptive_avg_pool1d(data, output_size=16)
# pool_out shape: [1, 3, 16]
"""
pool_type = 'avg'
if not in_dynamic_mode():
......
......@@ -367,7 +367,6 @@ def pixel_unshuffle(x, downscale_factor, data_format="NCHW", name=None):
Examples:
.. code-block:: python
- :name: pixel_unshuffle-example
import paddle
import paddle.nn.functional as F
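# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch based on the pixel_unshuffle signature above might continue:
x = paddle.randn([2, 1, 12, 12])
out = F.pixel_unshuffle(x, downscale_factor=3)
print(out.shape)  # [2, 9, 4, 4]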
......@@ -424,7 +423,6 @@ def channel_shuffle(x, groups, data_format="NCHW", name=None):
Examples:
.. code-block:: python
- :name: channel_shuffle-example
import paddle
import paddle.nn.functional as F
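# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch based on the channel_shuffle signature above might continue:
x = paddle.randn([1, 8, 4, 4])
out = F.channel_shuffle(x, groups=4)
print(out.shape)  # [1, 8, 4, 4]; channels are reordered across the 4 groups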
......
......@@ -26,7 +26,7 @@ class Constant(ConstantInitializer):
Examples:
.. code-block:: python
:name: code-example1
import paddle
import paddle.nn as nn
......
......@@ -72,7 +72,6 @@ class TruncatedNormal(TruncatedNormalInitializer):
Examples:
.. code-block:: python
- :name: initializer_TruncatedNormal-example
import paddle
......
......@@ -30,7 +30,6 @@ class Uniform(UniformInitializer):
Examples:
.. code-block:: python
- :name: initializer_Uniform-example
import paddle
......
......@@ -41,7 +41,6 @@ class XavierNormal(XavierInitializer):
Examples:
.. code-block:: python
- :name: initializer_XavierNormal-example
import paddle
......@@ -97,7 +96,6 @@ class XavierUniform(XavierInitializer):
Examples:
.. code-block:: python
- :name: initializer_XavierUniform-example
import paddle
......
......@@ -486,7 +486,6 @@ class RReLU(Layer):
Examples:
.. code-block:: python
- :name: RReLU-example
import paddle
......
......@@ -295,7 +295,7 @@ class CrossEntropyLoss(Layer):
- **input** (Tensor)
Input tensor, the data type is float32, float64. Shape is
:math:`[N_1, N_2, ..., N_k, C]`, where C is the number of classes and ``k >= 1``.
Note:
......@@ -303,7 +303,7 @@ class CrossEntropyLoss(Layer):
output of softmax operator, which will produce incorrect results.
2. when use_softmax=False, it expects the output of softmax operator.
- **label** (Tensor)
......@@ -313,7 +313,7 @@ class CrossEntropyLoss(Layer):
2. If soft_label=True, the shape and data type should be same with ``input`` ,
and the sum of the labels for each sample should be 1.
- **output** (Tensor)
Return the softmax cross_entropy loss of ``input`` and ``label``.
......@@ -328,10 +328,11 @@ class CrossEntropyLoss(Layer):
2. if soft_label = True, the dimension of return value is :math:`[N_1, N_2, ..., N_k, 1]` .
- Example1(hard labels):
+ Examples:
.. code-block:: python
+ # hard labels
import paddle
paddle.seed(99999)
N=100
......@@ -348,11 +349,9 @@ class CrossEntropyLoss(Layer):
label)
print(dy_ret.numpy()) #[5.41993642]
- Example2(soft labels):
.. code-block:: python
+ # soft labels
import paddle
paddle.seed(99999)
axis = -1
......@@ -1435,7 +1434,6 @@ class CosineEmbeddingLoss(Layer):
Examples:
.. code-block:: python
- :name: code-example1
import paddle
......
......@@ -644,7 +644,7 @@ class AdaptiveAvgPool1D(Layer):
Examples:
.. code-block:: python
:name: AdaptiveAvgPool1D-example
# average adaptive pool1d
# suppose the input data has shape [N, C, L] and `output_size` is m or [m];
# the output shape is [N, C, m]. Adaptive pooling divides the L dimension
......
......@@ -110,7 +110,6 @@ class PixelUnshuffle(Layer):
Examples:
.. code-block:: python
- :name: PixelUnshuffle-example
import paddle
import paddle.nn as nn
......@@ -173,7 +172,6 @@ class ChannelShuffle(Layer):
Examples:
.. code-block:: python
- :name: ChannelShuffle-example
import paddle
import paddle.nn as nn
......
......@@ -173,7 +173,6 @@ def export_chrome_tracing(dir_name: str,
The return value can be used as the parameter ``on_trace_ready`` in :ref:`Profiler <api_paddle_profiler_Profiler>`.
.. code-block:: python
- :name: code-example1
# required: gpu
import paddle.profiler as profiler
......@@ -224,7 +223,6 @@ def export_protobuf(dir_name: str,
The return value can be used as the parameter ``on_trace_ready`` in :ref:`Profiler <api_paddle_profiler_Profiler>`.
.. code-block:: python
- :name: code-example1
# required: gpu
import paddle.profiler as profiler
......
......@@ -138,7 +138,6 @@ def load_profiler_result(filename: str):
Examples:
.. code-block:: python
- :name: code-example1
# required: gpu
import paddle.profiler as profiler
......
......@@ -423,6 +423,25 @@ def save_to_file(path, content):
content(bytes): Content to write.
Returns:
None
Examples:
.. code-block:: python
import paddle
paddle.enable_static()
path_prefix = "./infer_model"
# A user-defined network; softmax regression is used here as an example.
image = paddle.static.data(name='img', shape=[None, 28, 28], dtype='float32')
label = paddle.static.data(name='label', shape=[None, 1], dtype='int64')
predict = paddle.static.nn.fc(image, 10, activation='softmax')
loss = paddle.nn.functional.cross_entropy(predict, label)
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
# Serialize the parameters.
serialized_params = paddle.static.serialize_persistables([image], [predict], exe)
# Save the serialized parameters to a file.
params_path = path_prefix + ".params"
paddle.static.save_to_file(params_path, serialized_params)
"""
if not isinstance(content, bytes):
......@@ -675,6 +694,28 @@ def load_from_file(path):
path(str): Path of an existing file.
Returns:
bytes: Content of file.
Examples:
.. code-block:: python
import paddle
paddle.enable_static()
path_prefix = "./infer_model"
# A user-defined network; softmax regression is used here as an example.
image = paddle.static.data(name='img', shape=[None, 28, 28], dtype='float32')
label = paddle.static.data(name='label', shape=[None, 1], dtype='int64')
predict = paddle.static.nn.fc(image, 10, activation='softmax')
loss = paddle.nn.functional.cross_entropy(predict, label)
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
# Serialize the parameters.
serialized_params = paddle.static.serialize_persistables([image], [predict], exe)
# Save the serialized parameters to a file.
params_path = path_prefix + ".params"
paddle.static.save_to_file(params_path, serialized_params)
# Load the serialized parameters back from the file.
serialized_params_copy = paddle.static.load_from_file(params_path)
"""
with open(path, 'rb') as f:
data = f.read()
......
......@@ -178,7 +178,6 @@ def logspace(start, stop, num, base=10.0, dtype=None, name=None):
Examples:
.. code-block:: python
- :name: logspace-example
import paddle
data = paddle.logspace(0, 10, 5, 2, 'float32')
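# NOTE: a sketch of the expected result, continuing the collapsed example:
# 5 points from base**0 to base**10 with base 2
print(data)  # [1., 5.65685, 32., 181.01933, 1024.]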
......@@ -492,25 +491,24 @@ def ones(shape, dtype=None, name=None):
Examples:
.. code-block:: python
- :name: ones-example
import paddle
# default dtype for ones OP
data1 = paddle.ones(shape=[3, 2])
# [[1. 1.]
# [1. 1.]
# [1. 1.]]
data2 = paddle.ones(shape=[2, 2], dtype='int32')
# [[1 1]
# [1 1]]
# shape is a Tensor
shape = paddle.full(shape=[2], dtype='int32', fill_value=2)
data3 = paddle.ones(shape=shape, dtype='int32')
# [[1 1]
# [1 1]]
"""
if dtype is None:
dtype = 'float32'
......@@ -713,30 +711,29 @@ def full(shape, fill_value, dtype=None, name=None):
Examples:
.. code-block:: python
- :name: code-example1
import paddle

data1 = paddle.full(shape=[2,1], fill_value=0, dtype='int64')
#[[0]
# [0]]

# attr shape is a list which contains Tensor.
positive_2 = paddle.full([1], 2, "int32")
data3 = paddle.full(shape=[1, positive_2], dtype='float32', fill_value=1.5)
# [[1.5 1.5]]

# attr shape is a Tensor.
shape = paddle.full([2], 2, "int32")
data4 = paddle.full(shape=shape, dtype='bool', fill_value=True)
# [[True True]
# [True True]]

# attr fill_value is a Tensor.
val = paddle.full([1], 2.0, "float32")
data5 = paddle.full(shape=[2,1], fill_value=val, dtype='float32')
# [[2.0]
# [2.0]]
"""
if dtype is None:
......@@ -1110,57 +1107,59 @@ def diagflat(x, offset=0, name=None):
Examples:
.. code-block:: python
:name: code-example-1
import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.diagflat(x)
print(y.numpy())
# [[1 0 0]
# [0 2 0]
# [0 0 3]]

y = paddle.diagflat(x, offset=1)
print(y.numpy())
# [[0 1 0 0]
# [0 0 2 0]
# [0 0 0 3]
# [0 0 0 0]]

y = paddle.diagflat(x, offset=-1)
print(y.numpy())
# [[0 0 0 0]
# [1 0 0 0]
# [0 2 0 0]
# [0 0 3 0]]
.. code-block:: python
:name: code-example-2
import paddle

x = paddle.to_tensor([[1, 2], [3, 4]])
y = paddle.diagflat(x)
print(y.numpy())
# [[1 0 0 0]
# [0 2 0 0]
# [0 0 3 0]
# [0 0 0 4]]

y = paddle.diagflat(x, offset=1)
print(y.numpy())
# [[0 1 0 0 0]
# [0 0 2 0 0]
# [0 0 0 3 0]
# [0 0 0 0 4]
# [0 0 0 0 0]]

y = paddle.diagflat(x, offset=-1)
print(y.numpy())
# [[0 0 0 0 0]
# [1 0 0 0 0]
# [0 2 0 0 0]
# [0 0 3 0 0]
# [0 0 0 4 0]]
"""
padding_value = 0
if paddle.in_dynamic_mode():
......@@ -1240,47 +1239,49 @@ def diag(x, offset=0, padding_value=0, name=None):
Examples:
.. code-block:: python
:name: code-example-1
import paddle

paddle.disable_static()
x = paddle.to_tensor([1, 2, 3])
y = paddle.diag(x)
print(y.numpy())
# [[1 0 0]
# [0 2 0]
# [0 0 3]]

y = paddle.diag(x, offset=1)
print(y.numpy())
# [[0 1 0 0]
# [0 0 2 0]
# [0 0 0 3]
# [0 0 0 0]]

y = paddle.diag(x, padding_value=6)
print(y.numpy())
# [[1 6 6]
# [6 2 6]
# [6 6 3]]
.. code-block:: python
:name: code-example-2
import paddle

paddle.disable_static()
x = paddle.to_tensor([[1, 2, 3], [4, 5, 6]])
y = paddle.diag(x)
print(y.numpy())
# [1 5]

y = paddle.diag(x, offset=1)
print(y.numpy())
# [2 6]

y = paddle.diag(x, offset=-1)
print(y.numpy())
# [4]
"""
if in_dygraph_mode():
return _C_ops.final_state_diag(x, offset, padding_value)
......@@ -1485,18 +1486,17 @@ def assign(x, output=None):
Examples:
.. code-block:: python
:name: assign-example
import paddle
import numpy as np
data = paddle.full(shape=[3, 2], fill_value=2.5, dtype='float64') # [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
array = np.array([[1, 1],
[3, 4],
[1, 3]]).astype(np.int64)
result1 = paddle.zeros(shape=[3, 3], dtype='float32')
paddle.assign(array, result1) # result1 = [[1, 1], [3 4], [1, 3]]
result2 = paddle.assign(data) # result2 = [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
result3 = paddle.assign(np.array([[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]], dtype='float32')) # result3 = [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
"""
input = x
helper = LayerHelper('assign', **locals())
......@@ -1777,7 +1777,6 @@ def tril_indices(row, col, offset=0, dtype='int64'):
Examples:
.. code-block:: python
- :name: tril_indices-example
import paddle
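# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch based on the tril_indices signature above might continue:
data = paddle.tril_indices(4, 4, 0)
print(data)
# [[0, 1, 1, 2, 2, 2, 3, 3, 3, 3],
#  [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]]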
......
......@@ -3267,8 +3267,7 @@ def corrcoef(x, rowvar=True, name=None):
Examples:
.. code-block:: python
- :name: code-example1
import paddle
xt = paddle.rand((3,4))
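# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch, assuming the paddle.linalg.corrcoef API, might continue:
print(paddle.linalg.corrcoef(xt))  # 3x3 correlation matrix (rows as variables)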
......
......@@ -619,7 +619,6 @@ def crop(x, shape=None, offsets=None, name=None):
Examples:
.. code-block:: python
- :name: code-example1
import paddle
x = paddle.to_tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
......@@ -1975,7 +1974,7 @@ def squeeze(x, axis=None, name=None):
Examples:
.. code-block:: python
:name: code-example1
import paddle
x = paddle.rand([5, 1, 10])
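# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch might continue as follows.
output = paddle.squeeze(x, axis=1)
print(x.shape)       # [5, 1, 10]
print(output.shape)  # [5, 10]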
......@@ -2187,7 +2186,7 @@ def unique(x,
Examples:
.. code-block:: python
:name: code-example1
import paddle
x = paddle.to_tensor([2, 3, 3, 1, 5, 3])
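# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch might continue as follows.
unique = paddle.unique(x)
print(unique)  # [1, 2, 3, 5]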
......@@ -3219,7 +3218,6 @@ def reshape(x, shape, name=None):
Examples:
.. code-block:: python
- :name: code-example1
import paddle
......@@ -4185,7 +4183,6 @@ def take_along_axis(arr, indices, axis):
Examples:
.. code-block:: python
- :name: code-example1
import paddle
......@@ -4251,7 +4248,6 @@ def put_along_axis(arr, indices, values, axis, reduce='assign'):
Examples:
.. code-block:: python
- :name: code-example1
import paddle
......
......@@ -288,7 +288,6 @@ def multiplex(inputs, index, name=None):
Examples:
.. code-block:: python
- :name: code-example1
import paddle
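# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch based on the multiplex signature above might continue:
img1 = paddle.to_tensor([[1., 2.], [3., 4.]])
img2 = paddle.to_tensor([[5., 6.], [7., 8.]])
index = paddle.to_tensor([[1], [0]], dtype='int32')
res = paddle.multiplex([img1, img2], index)
print(res)  # [[5., 6.], [3., 4.]] -- row i is taken from inputs[index[i]]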
......@@ -1376,7 +1375,6 @@ def count_nonzero(x, axis=None, keepdim=False, name=None):
Examples:
.. code-block:: python
- :name: count_nonzero-example
import paddle
# x is a 2-D Tensor:
......@@ -1468,7 +1466,7 @@ def add_n(inputs, name=None):
Examples:
.. code-block:: python
:name: code-example1
import paddle
input0 = paddle.to_tensor([[1, 2, 3], [4, 5, 6]], dtype='float32')
......@@ -4629,7 +4627,6 @@ def heaviside(x, y, name=None):
Examples:
.. code-block:: python
- :name: heaviside-example
import paddle
x = paddle.to_tensor([-0.5, 0, 0.5])
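# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch based on the heaviside signature above might continue:
y = paddle.to_tensor([1.0])
out = paddle.heaviside(x, y)
print(out)  # [0., 1., 1.]  (y supplies the value at x == 0)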
......@@ -4662,7 +4659,7 @@ def frac(x, name=None):
Tensor: The output Tensor of frac.
Examples:
- .. code-block:: Python
+ .. code-block:: python
import paddle
import numpy as np
......
......@@ -47,7 +47,6 @@ def bernoulli(x, name=None):
Examples:
.. code-block:: python
- :name: bernoulli-example
import paddle
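# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch, assuming the paddle.bernoulli API, might continue:
paddle.seed(100)
x = paddle.rand([2, 3])    # entries in [0, 1) act as probabilities
out = paddle.bernoulli(x)  # 0/1 samples with P(out == 1) = x
print(out)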
......
......@@ -225,7 +225,7 @@ def argmin(x, axis=None, keepdim=False, dtype="int64", name=None):
Examples:
.. code-block:: python
:name: code-example1
import paddle
x = paddle.to_tensor([[5,8,9,5],
......@@ -447,7 +447,6 @@ def sort(x, axis=-1, descending=False, name=None):
Examples:
.. code-block:: python
- :name: code-example1
import paddle
......@@ -849,7 +848,7 @@ def topk(x, k, axis=None, largest=True, sorted=True, name=None):
Examples:
.. code-block:: python
:name: code-example1
import paddle
data_1 = paddle.to_tensor([1, 4, 5, 7])
......
......@@ -272,7 +272,6 @@ def nanmedian(x, axis=None, keepdim=True, name=None):
Examples:
.. code-block:: python
- :name: nanmedian-example
import paddle
x = paddle.to_tensor([[float('nan'), 2. , 3. ], [0. , 1. , 2. ]])
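# NOTE: the rest of this example is collapsed in the diff; a minimal
# sketch based on the nanmedian signature above might continue:
y = paddle.nanmedian(x)  # median over all non-NaN elements
print(y)  # 2.0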
......
......@@ -1075,8 +1075,7 @@ def psroi_pool(x, boxes, boxes_num, output_size, spatial_scale=1.0, name=None):
Examples:
.. code-block:: python
- :name: code-example1
import paddle
x = paddle.uniform([2, 490, 28, 28], dtype='float32')
boxes = paddle.to_tensor([[1, 5, 8, 10], [4, 2, 6, 7], [12, 12, 19, 21]], dtype='float32')
......@@ -1144,7 +1143,7 @@ class PSRoIPool(Layer):
Examples:
.. code-block:: python
:name: code-example1
import paddle
psroi_module = paddle.vision.ops.PSRoIPool(7, 1.0)
......@@ -1350,7 +1349,7 @@ def roi_align(x,
Examples:
.. code-block:: python
:name: code-example1
import paddle
from paddle.vision.ops import roi_align
......@@ -1426,7 +1425,7 @@ class RoIAlign(Layer):
Examples:
.. code-block:: python
:name: code-example1
import paddle
from paddle.vision.ops import RoIAlign
......