Commit 980499fa authored by F fengjiayi

fix errors

Parent 29bf727e
@@ -204,8 +204,6 @@ void Pool2dOpMaker::Make() {
// TODO(dzhwinter): need to register layout transform function
AddComment(R"DOC(
Pool2d Operator.
The pooling2d operation calculates the output based on the input,
pooling_type, ksize, strides, and paddings parameters.
Input(X) and output(Out) are in NCHW format, where N is batch size, C is the
@@ -215,18 +213,27 @@ These two elements represent height and width, respectively.
The input(X) size and output(Out) size may be different.
Example:
Input:
X shape: $(N, C, H_{in}, W_{in})$
Output:
Out shape: $(N, C, H_{out}, W_{out})$
For ceil_mode = false:
$$
- H_{out} = \frac{(H_{in} - ksize[0] + 2 * paddings[0])}{strides[0]} + 1 \\
+ H_{out} = \frac{(H_{in} - ksize[0] + 2 * paddings[0])}{strides[0]} + 1
+ $$
+ $$
W_{out} = \frac{(W_{in} - ksize[1] + 2 * paddings[1])}{strides[1]} + 1
$$
For ceil_mode = true:
$$
- H_{out} = \frac{(H_{in} - ksize[0] + 2 * paddings[0] + strides[0] - 1)}{strides[0]} + 1 \\
+ H_{out} = \frac{(H_{in} - ksize[0] + 2 * paddings[0] + strides[0] - 1)}{strides[0]} + 1
+ $$
+ $$
W_{out} = \frac{(W_{in} - ksize[1] + 2 * paddings[1] + strides[1] - 1)}{strides[1]} + 1
$$
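(A quick numeric check of the two formulas above; the helper below is a
sketch for illustration only, not part of this commit:)

.. code-block:: python

    def pool_out_size(in_size, ksize, padding, stride, ceil_mode):
        # Numerator shared by both formulas; ceil_mode adds (stride - 1)
        # before the integer division, which turns floor into ceil.
        num = in_size - ksize + 2 * padding
        if ceil_mode:
            num += stride - 1
        return num // stride + 1

    # H_in = 8, ksize[0] = 3, paddings[0] = 0, strides[0] = 2:
    assert pool_out_size(8, 3, 0, 2, ceil_mode=False) == 3  # floor(5/2) + 1
    assert pool_out_size(8, 3, 0, 2, ceil_mode=True) == 4   # ceil(5/2) + 1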
......
@@ -753,9 +753,9 @@ def lod_tensor_to_array(x, table):
This function splits a LoDTensor into a LoDTensorArray according to its LoD
information. LoDTensorArray is an alias of C++ std::vector<LoDTensor> in
- Paddle. The generated LoDTensorArray of this function can be further read
- or written by 'read_from_array()' and 'write_to_array()' operators. However,
- this function is generally an internal component of Paddle 'DynamicRNN'.
+ PaddlePaddle. The generated LoDTensorArray of this function can be further read
+ or written by `read_from_array()` and `write_to_array()` operators. However,
+ this function is generally an internal component of PaddlePaddle `DynamicRNN`.
Users should not use it directly.
Args:
@@ -763,11 +763,10 @@ def lod_tensor_to_array(x, table):
table (ParamAttr|list): The variable that stores the level of lod
which is ordered by sequence length in
descending order. It is generally generated
- by 'layers.lod_rank_table()' API.
+ by `layers.lod_rank_table()` API.
Returns:
- Variable: The LoDTensorArray that has been converted from the input
- tensor.
+ Variable: The LoDTensorArray that has been converted from the input tensor.
Examples:
.. code-block:: python
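        # (Example body elided by the diff view; a hedged sketch of typical
        # usage, assuming the fluid APIs this docstring already names:)
        x = fluid.layers.data(name='x', shape=[10], dtype='float32', lod_level=1)
        table = fluid.layers.lod_rank_table(x, level=0)
        array = fluid.layers.lod_tensor_to_array(x, table)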
@@ -1579,24 +1578,26 @@ def reorder_lod_tensor_by_rank(x, rank_table):
def is_empty(x, cond=None, **ignored):
"""
- Test whether an Variable is empty.
+ Test whether a Variable is empty.
Args:
x (Variable): The Variable to be tested.
cond (Variable|None): Output parameter. Returns the test result
- of given 'x'.
+ of given 'x'. Default: None
Returns:
- Variable: The tensor variable storing the test result of 'x'.
+ Variable: A bool scalar. True if 'x' is an empty Variable.
Raises:
TypeError: If input cond is not a variable, or cond's dtype is
- not bool
+ not bool.
Examples:
.. code-block:: python
- less = fluid.layers.is_empty(x=input)
+ res = fluid.layers.is_empty(x=input)
+ # or:
+ fluid.layers.is_empty(x=input, cond=res)
"""
helper = LayerHelper("is_empty", **locals())
if cond is None:
......
@@ -572,6 +572,32 @@ def parallel(reader):
def read_file(file_obj):
"""
+ Read data from a file object.
+ A file object is also a Variable. It can be a raw file object generated by
+ `fluid.layers.open_files()` or a decorated one generated by
+ `fluid.layers.double_buffer()` and so on.
+ Args:
+     file_obj(Variable): The file object from where to read data.
+ Returns:
+     Tuple[Variable]: Data read from the given file object.
+ Examples:
+     .. code-block:: python
+         data_file = fluid.layers.open_files(
+             filenames=['mnist.recordio'],
+             shapes=[(-1, 784), (-1, 1)],
+             lod_levels=[0, 0],
+             dtypes=["float32", "int64"])
+         data_file = fluid.layers.double_buffer(
+             fluid.layers.batch(data_file, batch_size=64))
+         input, label = fluid.layers.read_file(data_file)
"""
helper = LayerHelper('read_file')
out = [
helper.create_tmp_variable(
......
@@ -90,7 +90,7 @@ def exponential_decay(learning_rate, decay_steps, decay_rate, staircase=False):
Default: False
Returns:
- The decayed learning rate
+ Variable: The decayed learning rate
Examples:
.. code-block:: python
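        # (Example body elided by the diff view; a hedged usage sketch with
        # illustrative hyper-parameter values, not part of this commit:)
        base_lr = 0.1
        sgd_optimizer = fluid.optimizer.SGD(
            learning_rate=fluid.layers.exponential_decay(
                learning_rate=base_lr,
                decay_steps=10000,
                decay_rate=0.5,
                staircase=True))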
@@ -167,7 +167,7 @@ def inverse_time_decay(learning_rate, decay_steps, decay_rate, staircase=False):
Default: False
Returns:
- The decayed learning rate
+ Variable: The decayed learning rate
Examples:
.. code-block:: python
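        # (Example body elided by the diff view; a hedged usage sketch with
        # illustrative values, not part of this commit:)
        lr = fluid.layers.inverse_time_decay(
            learning_rate=0.1,
            decay_steps=10000,
            decay_rate=0.5,
            staircase=True)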
......
@@ -151,7 +151,7 @@ def fc(input,
name (str, default None): The name of this layer.
Returns:
- A tensor variable storing the transformation result.
+ Variable: The transformation result.
Raises:
ValueError: If rank of the input tensor is less than 2.
@@ -159,8 +159,7 @@
Examples:
.. code-block:: python
- data = fluid.layers.data(
-     name="data", shape=[32, 32], dtype="float32")
+ data = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=data, size=1000, act="tanh")
"""
@@ -1543,21 +1542,24 @@ def pool2d(input,
${comment}
Args:
- input (Variable): ${input_comment}
+ input (Variable): The input tensor of pooling operator. The format of
+     input tensor is NCHW, where N is batch size, C is the number of
+     channels, H is the height of the feature, and W is the width of
+     the feature.
pool_size (int): The side length of pooling windows. All pooling
windows are squares with pool_size on a side.
- pool_type (str): ${pooling_type_comment}
+ pool_type: ${pooling_type_comment}
pool_stride (int): stride of the pooling layer.
pool_padding (int): padding size.
- global_pooling (bool): ${global_pooling_comment}
- use_cudnn (bool): ${use_cudnn_comment}
- ceil_mode (bool): ${ceil_mode_comment}
- use_mkldnn (bool): ${use_mkldnn_comment}
+ global_pooling: ${global_pooling_comment}
+ use_cudnn: ${use_cudnn_comment}
+ ceil_mode: ${ceil_mode_comment}
+ use_mkldnn: ${use_mkldnn_comment}
name (str|None): A name for this layer (optional). If set None, the
layer will be named automatically.
Returns:
- Variable: output of pool2d layer.
+ Variable: The pooling result.
Raises:
ValueError: If 'pool_type' is neither "max" nor "avg".
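(The hunk above documents pool2d's arguments but shows no usage; a minimal
sketch, not part of this commit, with illustrative names and shapes:)

.. code-block:: python

    data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
    pool_out = fluid.layers.pool2d(input=data,
                                   pool_size=2,
                                   pool_type='max',
                                   pool_stride=2)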
@@ -2764,6 +2766,27 @@ def topk(input, k, name=None):
If the input is a Tensor with higher rank, this operator computes the top k
entries along the last dimension.
+ For example:
+     .. code-block:: text
+         If:
+             input = [[5, 4, 2, 3],
+                      [9, 7, 10, 25],
+                      [6, 2, 10, 1]]
+             k = 2
+         Then:
+             The first output:
+             values = [[5, 4],
+                       [10, 25],
+                       [6, 10]]
+             The second output:
+             indices = [[0, 1],
+                        [2, 3],
+                        [0, 2]]
Args:
input(Variable): The input variable which can be a vector or Tensor with
higher rank.
@@ -2774,10 +2797,10 @@ def topk(input, k, name=None):
Default: None
Returns:
- values(Variable): The k largest elements along each last dimensional
-     slice.
- indices(Variable): The indices of values within the last dimension of
-     input.
+ Tuple[Variable]: A tuple with two elements. Each element is a Variable.
+     The first one is k largest elements along each last
+     dimensional slice. The second one is indices of values
+     within the last dimension of input.
Raises:
ValueError: If k < 1 or k is not less than the last dimension of input
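(A hedged usage sketch matching the text example above; variable names and
shapes are illustrative, not part of this commit:)

.. code-block:: python

    x = fluid.layers.data(name='x', shape=[4], dtype='float32')
    values, indices = fluid.layers.topk(x, k=2)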
......
@@ -159,20 +159,21 @@ def concat(input, axis=0, name=None):
def sums(input, out=None):
"""This function performs the sum operation on the input and returns the
"""
This function performs the sum operation on the input and returns the
result as the output.
Args:
input (Variable|list): The input tensor that has the elements
that need to be summed up.
- out (Variable|None): Output parameter. Returns the sum result.
+ out (Variable|None): Output parameter. The sum result.
+     Default: None
Returns:
Variable: The sum of the input. The same as the argument 'out'.
Examples:
- .. code-block::python
+ .. code-block:: python
tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
......