Polish documents for fluid operators.
Created by: pkuyym
We need to polish the documentation of the fluid operators. This issue takes `fc` as an example to show the documentation specification.
`python/paddle/v2/fluid/layers.py#fc`:

```python
def fc(input,
       size,
       num_flatten_dims=1,
       param_attr=None,
       bias_attr=None,
       act=None,
       name=None,
       main_program=None,
       startup_program=None):
    """Fully Connected Layer. This layer accepts multiple inputs and applies
    a linear transformation to each input. If an activation type is provided,
    the corresponding nonlinear transformation is applied afterwards. For each
    input :math:`X`, the equation is:

    .. math::

        Out = Act(WX + b)

    In the above equation:

    * :math:`X`: Input value, a tensor with rank at least 2.
    * :math:`W`: Weight, a 2-D tensor with shape [M, N].
    * :math:`b`: Bias, a 2-D tensor with shape [M, 1].
    * :math:`Act`: Activation function.
    * :math:`Out`: Output value, same shape as :math:`X`.

    All the input variables are passed in as local variables to the LayerHelper
    constructor.

    Args:
        input (Variable|list): The input values, each value is a tensor with
            rank at least 2.
        size (int): The output size, an integer value.
        num_flatten_dims (int): Column number of the input.
        param_attr (ParamAttr|list): The parameters/weights of the FC layer.
        bias_attr (ParamAttr|list): The bias parameter.
        act (str): Activation type.
        name (str): Name/alias of the function.
        main_program (Program): The main program calling this.
        startup_program (Program): The startup program.

    Returns:
        Variable: The tensor variable storing the transformation and \
            non-linearity activation result.

    Raises:
        ValueError: If the rank of the input tensor is less than 2.

    Examples:
        .. code-block:: python

            data = fluid.layers.data(name='data', shape=[32, 32], dtype='float32')
            fc = fluid.layers.fc(input=data, size=1000, act="tanh")
    """
```
The rendered HTML page for the docstring above can be previewed as described in the next section.
How to preview

After refining the documents in `layers.py`, we need to preview the HTML page. Here I list some key tips:

- Go to the build directory.
- Please make sure `WITH_DOC=1` and `sphinx==1.5.6`, then run ``make -j `nproc` && python -m SimpleHTTPServer $PORT_NUM``.
- Assume that PaddlePaddle is compiled on a computer whose IP is `$IP`. We can visit `$IP:$PORT_NUM/doc/en/html/api/v2/fluid/layers.html` to check the preview.
- Add a link in `doc/api/v2/fluid/layers.rst`.
URLs
- Docs of Sphinx: http://www.sphinx-doc.org/en/stable/contents.html
- How to insert code: http://www.sphinx-doc.org/en/stable/markup/code.html
- How to insert math equations: http://www.sphinx-doc.org/en/stable/ext/math.html
- Previous discussion: https://github.com/PaddlePaddle/Paddle/issues/6160
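For the operators listed below, a bare docstring skeleton distilled from the `fc` example may help keep the style consistent. The operator name `some_op`, its argument, and the equation are hypothetical placeholders; only the section layout and the Sphinx directives come from the `fc` docstring above.

```python
def some_op(input, name=None):
    """<One-sentence summary of the operator>. <Short description of the
    transformation it applies, in the same style as the fc docstring.>

    .. math::

        Out = f(X)

    Args:
        input (Variable): <Description of the input, including the expected rank>.
        name (str): <Name/alias of the function>.

    Returns:
        Variable: <Description of the output variable>.

    Raises:
        ValueError: <Condition under which the error is raised>.

    Examples:
        .. code-block:: python

            out = fluid.layers.some_op(input=data)
    """
```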
Operators that need polishing
Please create an issue first before doing any polishing.
- fc @pkuyym @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/6806
- embedding @qingqing01 @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/6806
- dynamic_lstm @kuke https://github.com/PaddlePaddle/Paddle/pull/7640
- gru_unit @sidgoyal78
- data @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6858
- concat @nhzlx @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/6855
- sums @wanghaoshuang @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6857
- linear_chain_crf @lcy-seso
- assign @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/6855
- split_lod_tensor @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6859
- merge_lod_tensor @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6859
- cos_sim @lcy-seso
- cross_entropy @kuke https://github.com/PaddlePaddle/Paddle/pull/7018
- square_error_cost @sidgoyal78 #6862
- accuracy @wanghaoshuang #7091
- sequence_conv
- conv2d @chengduoZH #6850
- sequence_pool @luotao1 #6777 (closed)
- pool2d @nhzlx
- batch_norm @sidgoyal78
- beam_search_decode
- lstm
- lod_rank_table @pkuyym https://github.com/PaddlePaddle/Paddle/issues/7024
- max_sequence_len @pkuyym #7023 (closed)
- topk @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6861
- lod_tensor_to_array @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6807
- array_to_lod_tensor @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6807
- fill_constant @abhinavarora https://github.com/PaddlePaddle/Paddle/commit/ebe4425
- fill_constant_batch_size_like @abhinavarora https://github.com/PaddlePaddle/Paddle/commit/ebe4425
- ones @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/7150
- zeros @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/7150
- increment @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6807
- array_write @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6820
- create_array @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6807
- less_than @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/6816
- array_read @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6853
- shrink_memory
- array_length @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6817
- conv2d_transpose @chengduoZH https://github.com/PaddlePaddle/Paddle/pull/6920
- seq_expand @pkuyym https://github.com/PaddlePaddle/Paddle/issues/6590
- lstm_unit @pkuyym https://github.com/PaddlePaddle/Paddle/issues/6581
- reduce_sum @guoshengCS
- reduce_mean @guoshengCS
- reduce_max @guoshengCS
- reduce_min @guoshengCS