Unverified commit 0329ee74, authored by tensor-tang, committed by GitHub

Merge pull request #11497 from tensor-tang/doc

Doc update
@@ -252,15 +252,14 @@ class SoftShrinkOpMaker : public framework::OpProtoAndCheckerMaker {
   AddOutput("Out", "Output of Softshrink operator");
   AddAttr<float>("lambda", "non-negative offset").SetDefault(0.5f);
   AddComment(R"DOC(
-Softshrink Activation Operator.
+:strong:`Softshrink Activation Operator`

-$$
-out = \begin{cases}
-    x - \lambda, \text{if } x > \lambda \\
-    x + \lambda, \text{if } x < -\lambda \\
-    0, \text{otherwise}
-    \end{cases}
-$$
+.. math::
+    out = \begin{cases}
+        x - \lambda, \text{if } x > \lambda \\
+        x + \lambda, \text{if } x < -\lambda \\
+        0, \text{otherwise}
+    \end{cases}
 )DOC");
 }
...
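For reference, the softshrink formula in this hunk maps directly onto a few lines of NumPy. This is an illustrative sketch of the math only, not PaddlePaddle code:

    import numpy as np

    def softshrink(x, lam=0.5):
        # out = x - lambda if x > lambda; x + lambda if x < -lambda; else 0
        return np.where(x > lam, x - lam, np.where(x < -lam, x + lam, 0.0))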
@@ -33,12 +33,10 @@ class MeanOp : public framework::OperatorWithKernel {
 class MeanOpMaker : public framework::OpProtoAndCheckerMaker {
  public:
   void Make() override {
-    AddInput("X", "The input of mean op");
-    AddOutput("Out", "The output of mean op").Reuse("X");
+    AddInput("X", "(Tensor) The input of mean op");
+    AddOutput("Out", "(Tensor) The output of mean op").Reuse("X");
     AddComment(R"DOC(
-Mean Operator.
-
-Out is a scalar which is the mean of all elements in X.
+Mean Operator calculates the mean of all elements in X.
 )DOC");
   }
...
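The reworded comment keeps the same semantics: Out is a scalar, the mean over every element of X regardless of its shape. As a NumPy illustration (not the operator's implementation):

    import numpy as np

    x = np.arange(6, dtype=np.float32).reshape(2, 3)
    out = x.mean()  # mean over all elements of X -> scalar 2.5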
@@ -707,7 +707,7 @@ def lod_rank_table(x, level=0):
         .. code-block:: python

             x = fluid.layers.data(name='x', shape=[10],
                                   dtype='float32', lod_level=1)
             out = layers.lod_rank_table(x=x, level=0)
     """
     helper = LayerHelper("lod_rank_table", **locals())
...
@@ -541,6 +541,9 @@ def __create_unshared_decorated_reader__(op_type, reader, attrs, name=None):
 def shuffle(reader, buffer_size):
+    """
+    Shuffle the reader.
+    """
     return __create_unshared_decorated_reader__(
         'create_shuffle_reader', reader, {'buffer_size': int(buffer_size)})
...
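The new docstring is terse; what buffer_size does can be sketched with a plain-Python decorated reader. This analogue is hypothetical and only mimics the create_shuffle_reader behavior, it is not the actual implementation:

    import random

    def shuffle_sketch(reader, buffer_size):
        # Buffer up to buffer_size items, shuffle the buffer, then yield it.
        def decorated():
            buf = []
            for item in reader():
                buf.append(item)
                if len(buf) >= buffer_size:
                    random.shuffle(buf)
                    yield from buf
                    buf = []
            random.shuffle(buf)  # flush whatever is left at the end
            yield from buf
        return decorated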
@@ -225,11 +225,11 @@ def embedding(input,
             have two elements which indicate the size of the dictionary of
             embeddings and the size of each embedding vector respectively.
         is_sparse(bool): The flag indicating whether to use sparse update.
-        is_distributed (bool): Whether to run lookup table from remote parameter server.
+        is_distributed(bool): Whether to run lookup table from remote parameter server.
         padding_idx(int|long|None): If :attr:`None`, it makes no effect to lookup.
             Otherwise the given :attr:`padding_idx` indicates padding the output
             with zeros whenever lookup encounters it in :attr:`input`. If
-            :math:`padding_idx < 0`, the padding_idx to use in lookup is
+            :math:`padding_idx < 0`, the :attr:`padding_idx` to use in lookup is
             :math:`size[0] + dim`.
         param_attr(ParamAttr): Parameters for this layer
         dtype(np.dtype|core.VarDesc.VarType|str): The type of data : float32, float_16, int etc
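The padding_idx behavior being clarified here can be mimicked in NumPy. A minimal sketch under two assumptions: the lookup zeroes output rows matching padding_idx, and a negative padding_idx wraps to size[0] + padding_idx; the helper name is hypothetical:

    import numpy as np

    def lookup(table, ids, padding_idx=None):
        # table: [vocab_size, dim] embedding matrix; ids: int array of indices
        if padding_idx is not None and padding_idx < 0:
            padding_idx = table.shape[0] + padding_idx  # assumed negative-index wrap
        out = table[ids].copy()
        if padding_idx is not None:
            out[np.asarray(ids) == padding_idx] = 0.0  # padded rows become zeros
        return out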
@@ -1235,14 +1235,17 @@ def conv2d(input,
            act=None,
            name=None):
     """
+    **Convlution2D Layer**
+
     The convolution2D layer calculates the output based on the input, filter
-    and strides, paddings, dilations, groups parameters. Input(Input) and
-    Output(Output) are in NCHW format. Where N is batch size, C is the number of
+    and strides, paddings, dilations, groups parameters. Input and
+    Output are in NCHW format, where N is batch size, C is the number of
     channels, H is the height of the feature, and W is the width of the feature.
-    The details of convolution layer, please refer UFLDL's `convolution,
-    <http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/>`_ .
+    Filter is in MCHW format, where M is the number of output image channels,
+    C is the number of input image channels, H is the height of the filter,
+    and W is the width of the filter. If the groups is greater than 1,
+    C will equal the number of input image channels divided by the groups.
+    Please refer to UFLDL's `convolution
+    <http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/>`_
+    for more detials.
     If bias attribution and activation type are provided, bias is added to the
     output of the convolution, and the corresponding activation function is
     applied to the final result.
@@ -1253,15 +1256,14 @@ def conv2d(input,
         Out = \sigma (W \\ast X + b)
-    In the above equation:
+    Where:
     * :math:`X`: Input value, a tensor with NCHW format.
     * :math:`W`: Filter value, a tensor with MCHW format.
     * :math:`\\ast`: Convolution operation.
     * :math:`b`: Bias value, a 2-D tensor with shape [M, 1].
     * :math:`\\sigma`: Activation function.
-    * :math:`Out`: Output value, the shape of :math:`Out` and :math:`X` may be
-      different.
+    * :math:`Out`: Output value, the shape of :math:`Out` and :math:`X` may be different.
     Example:
@@ -1272,6 +1274,7 @@ def conv2d(input,
           Filter shape: :math:`(C_{out}, C_{in}, H_f, W_f)`
         - Output:
+
           Output shape: :math:`(N, C_{out}, H_{out}, W_{out})`
     Where
@@ -1283,7 +1286,7 @@ def conv2d(input,
     Args:
         input (Variable): The input image with [N, C, H, W] format.
         num_filters(int): The number of filter. It is as same as the output
             image channel.
         filter_size (int|tuple|None): The filter size. If filter_size is a tuple,
             it must contain two integers, (filter_size_H, filter_size_W).
@@ -1306,7 +1309,8 @@ def conv2d(input,
         bias_attr (ParamAttr): Bias parameter for the Conv2d layer. Default: None
         use_cudnn (bool): Use cudnn kernel or not, it is valid only when the cudnn
             library is installed. Default: True
-        use_mkldnn (bool): Use mkldnn kernels or not.
+        use_mkldnn (bool): Use mkldnn kernels or not, it is valid only when compiled
+            with mkldnn library. Default: False
         act (str): Activation type. Default: None
         name (str|None): A name for this layer(optional). If set None, the layer
             will be named automatically.
...
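The "Where" block collapsed out of this diff carries the output-size formula; assuming the standard convolution arithmetic, it can be checked with a small helper (illustrative only, not fluid API):

    def conv2d_out_size(in_size, filter_size, padding=0, stride=1, dilation=1):
        # H_out = (H_in + 2*padding - dilation*(filter_size - 1) - 1) // stride + 1
        return (in_size + 2 * padding - dilation * (filter_size - 1) - 1) // stride + 1

    # A 3x3 filter with padding=1 and stride=1 preserves spatial size:
    assert conv2d_out_size(32, 3, padding=1, stride=1) == 32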
@@ -201,6 +201,7 @@ def assign(input, output):
     Examples:
         .. code-block:: python
+
             out = fluid.layers.create_tensor(dtype='float32')
             hidden = fluid.layers.fc(input=data, size=10)
             fluid.layers.assign(hidden, out)
...