Unverified · Commit 9b39af3f authored by whs, committed by GitHub

Fix docs in 2.0 API (#29081)

* 1. grid_sample
1.1 fix has_print
2. conv1d_transpose
2.1 fix code_example error
3. conv1d
4. affine_grid
4.1 has_print
4.2 has_disable_static
5. Conv1DTranspose
5.1 fix code_example error
5.2 has_disable_static
6. Conv1d
6.1 code_example
6.2 has_disable_static
Parent bb64efb1
@@ -182,7 +182,7 @@ def conv1d(x,
.. math::
- Out = \sigma (W \\ast X + b)
+ Out = \sigma (W \ast X + b)
Where:
@@ -209,7 +209,7 @@ def conv1d(x,
.. math::
- L_{out}&= \\frac{(L_{in} + 2 * padding - (dilation * (L_f - 1) + 1))}{stride} + 1
+ L_{out} = \frac{(L_{in} + 2 * padding - (dilation * (L_f - 1) + 1))}{stride} + 1
Args:
x (Tensor): The input is 3-D Tensor with shape [N, C, L], the data type
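The output-length formula above can be sanity-checked with a few lines of plain Python (a sketch; `conv1d_out_len` is a made-up helper name, not part of the Paddle API):

```python
def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
    """Output length of a 1-D convolution per the formula above (floor division)."""
    return (l_in + 2 * padding - (dilation * (kernel_size - 1) + 1)) // stride + 1

# e.g. an input of length 4 with a size-3 kernel, stride 1, no padding:
print(conv1d_out_len(4, 3))  # 2
```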
@@ -583,7 +583,7 @@ def conv1d_transpose(x,
.. math::
- Out = \sigma (W \\ast X + b)
+ Out = \sigma (W \ast X + b)
Where:
@@ -693,13 +693,12 @@ def conv1d_transpose(x,
x=np.array([[[4, 0, 9, 7],
[8, 0, 9, 2,]]]).astype(np.float32)
# shape: (2, 1, 2)
- y=np.array([[[7, 0]],
+ w=np.array([[[7, 0]],
[[4, 2]]]).astype(np.float32)
x_var = paddle.to_tensor(x)
w_var = paddle.to_tensor(w)
y_var = F.conv1d_transpose(x_var, w_var)
- y_np = y_var.numpy()
- print y_np
+ print(y_var)
# [[[60. 16. 99. 75. 4.]]]
"""
@@ -86,7 +86,6 @@ def affine_grid(theta, out_shape, align_corners=True, name=None):
import paddle
import paddle.nn.functional as F
import numpy as np
- paddle.disable_static()
# theta shape = [1, 2, 3]
theta = np.array([[[-0.7, -0.4, 0.3],
[ 0.6, 0.5, 1.5]]]).astype("float32")
@@ -95,7 +94,7 @@ def affine_grid(theta, out_shape, align_corners=True, name=None):
theta_t,
[1, 2, 3, 3],
align_corners=False)
- print(y_t.numpy())
+ print(y_t)
#[[[[ 1.0333333 0.76666665]
# [ 0.76666665 1.0999999 ]
@@ -270,7 +269,7 @@ def grid_sample(x,
mode='bilinear',
padding_mode='border',
align_corners=True)
- print(y_t.numpy())
+ print(y_t)
# output shape = [1, 1, 3, 4]
# [[[[ 0.34 0.016 0.086 -0.448]
@@ -163,25 +163,40 @@ class Conv1D(_ConvNd):
If bias attribution and activation type are provided, bias is added to the
output of the convolution, and the corresponding activation function is
applied to the final result.
- For each input :math:`X`, the equation is:
+ For each input :math:`X` , the equation is:
.. math::
- Out = \\sigma (W \\ast X + b)
+ Out = \sigma (W \\ast X + b)
Where:
* :math:`X`: Input value, a ``Tensor`` with 'NCL' format or 'NLC' format.
* :math:`W`: Filter value, a ``Tensor`` with shape [MCK] .
* :math:`\\ast`: Convolution operation.
* :math:`b`: Bias value, a 2-D ``Tensor`` with shape [M, 1].
* :math:`\\sigma`: Activation function.
* :math:`Out`: Output value, the shape of :math:`Out` and :math:`X` may be different.
Example:
- Input:
Input shape: :math:`(N, C_{in}, L_{in})`
Kernel shape: :math:`(C_{out}, C_{in}, K)`
- Output:
Output shape: :math:`(N, C_{out}, L_{out})`
Where
.. math::
L_{out}&= \\frac{(L_{in} + 2 * padding - (dilation * (L_f - 1) + 1))}{stride} + 1
Parameters:
in_channels(int): The number of channels in the input image.
out_channels(int): The number of filter. It is as same as the output
@@ -218,17 +233,21 @@ class Conv1D(_ConvNd):
If it is set to None or one attribute of ParamAttr, conv1d
will create ParamAttr as bias_attr. If the Initializer of the bias_attr
is not set, the bias is initialized zero. Default: None.
Attribute:
**weight** (Parameter): the learnable weights of filter of this layer.
**bias** (Parameter or None): the learnable bias of this layer.
Shape:
- x: 3-D tensor with shape: (batch, in_channels, length) or (batch, length, in_channels).
- output: 3-D tensor with same shape as input x.
Raises:
None
Examples:
.. code-block:: python
import paddle
from paddle.nn import Conv1D
import numpy as np
@@ -242,13 +261,11 @@ class Conv1D(_ConvNd):
[[0, 3, 4],
[2, 9, 7],
[5, 6, 8]]]).astype(np.float32)
- paddle.disable_static()
x_t = paddle.to_tensor(x)
conv = Conv1D(3, 2, 3)
conv.weight.set_value(w)
y_t = conv(x_t)
- y_np = y_t.numpy()
- print(y_np)
+ print(y_t)
# [[[133. 238.]
# [160. 211.]]]
"""
@@ -401,14 +418,9 @@ class Conv1DTranspose(_ConvNd):
**bias** (Parameter or None): the learnable bias of this layer.
Shape:
- - x(Tensor): 3-D tensor with shape (batch, in_channels, length) when data_format is
-     "NCL" or shape (batch, length, in_channels) when data_format is "NLC".
- - output_size(int|tuple|list, optional): The output image size. If output size is a
-     tuple, it must contain one integer, (feature_length). None if use
-     kernel_size, padding, output_padding and stride to calculate output_size.
-     If output_size and kernel_size are specified at the same time, They
-     should follow the formula above. Default: None. output_size and kernel_size
-     should not be None at the same time.
+ - x(Tensor): 3-D tensor with shape (batch, in_channels, length) when data_format is "NCL" or shape (batch, length, in_channels) when data_format is "NLC".
+ - output_size(int|tuple|list, optional): The output image size. If output size is a tuple, it must contain one integer, (feature_length). None if use kernel_size, padding, output_padding and stride to calculate output_size. If output_size and kernel_size are specified at the same time, They should follow the formula above. Default: None. output_size and kernel_size should not be None at the same time.
- output(Tensor): 3-D tensor with same shape as input x.
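The relationship between `output_size` and the other parameters can be sketched with the usual transposed-convolution length formula (`conv1d_transpose_out_len` is a made-up helper assuming the standard formula, not a Paddle API):

```python
def conv1d_transpose_out_len(l_in, kernel_size, stride=1, padding=0,
                             output_padding=0, dilation=1):
    """Default output length of a 1-D transposed convolution."""
    return ((l_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + 1 + output_padding)

# With an input of length 4 and a size-2 kernel (all other parameters default),
# the output has length 5, matching the docstring example's 5-element result:
print(conv1d_transpose_out_len(4, 2))  # 5
```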
Examples:
@@ -418,7 +430,6 @@ class Conv1DTranspose(_ConvNd):
from paddle.nn import Conv1DTranspose
import numpy as np
- paddle.disable_static()
# shape: (1, 2, 4)
x=np.array([[[4, 0, 9, 7],
[8, 0, 9, 2]]]).astype(np.float32)
@@ -429,8 +440,7 @@ class Conv1DTranspose(_ConvNd):
conv = Conv1DTranspose(2, 1, 2)
conv.weight.set_value(y)
y_t = conv(x_t)
- y_np = y_t.numpy()
- print y_np
+ print(y_t)
# [[[60. 16. 99. 75. 4.]]]
"""