Status: Closed
Opened December 12, 2017 by saxon_zh (Guest) · 36 of 45 tasks completed

Polish documents for fluid operators.

Created by: pkuyym

We have to polish the documents for fluid operators. This issue takes fc as an example to show the documentation specification.

python/paddle/v2/fluid/layers.py#fc

def fc(input,
       size,
       num_flatten_dims=1,
       param_attr=None,
       bias_attr=None,
       act=None,
       name=None,
       main_program=None,
       startup_program=None):
    """Fully Connected Layer. This layer accepts multiple inputs and applies a
    linear transformation to each input. If an activation type is provided, the
    corresponding nonlinear transformation is then applied. For each input
    :math:`X`, the equation is:

    .. math::

        Out = Act(WX + b)

    In the above equation:

        * :math:`X`: Input value, a tensor with rank at least 2.
        * :math:`W`: Weight, a 2-D tensor with shape [M, N].
        * :math:`b`: Bias, a 2-D tensor with shape [M, 1].
        * :math:`Act`: Activation function.
        * :math:`Out`: Output value, with the same shape as :math:`X`.

    All the input variables are passed in as local variables to the LayerHelper
    constructor.

    Args:
        input (Variable|list): The input values; each one is a tensor with
            rank at least 2.
        size (int): The output size, an integer value.
        num_flatten_dims (int): Column number of the input.
        param_attr (ParamAttr|list): The parameters/weights to the FC Layer.
        bias_attr (ParamAttr|list): The bias parameter.
        act (str): Activation type.
        name (str): Name/alias of the function.
        main_program (Program): The main program calling this.
        startup_program (Program): The startup program.

    Returns:
        Variable: The tensor variable storing the transformation and \
                  non-linearity activation result.

    Raises:
        ValueError: If rank of input tensor is less than 2.

    Examples:
        .. code-block:: python

          data = fluid.layers.data(name='data', shape=[32, 32], dtype='float32')
          fc = fluid.layers.fc(input=data, size=1000, act="tanh")
    """

And the final HTML looks like:

[screenshot: fc_example]

How to preview

After refining the documents in layers.py, we need to preview the HTML page. Here are some key tips:

  1. Go to the build directory.
  2. Make sure WITH_DOC=1 and sphinx==1.5.6.
  3. Run make -j `nproc` && python -m SimpleHTTPServer $PORT_NUM.
  4. Assuming PaddlePaddle is compiled on a machine whose IP is $IP, visit $IP:$PORT_NUM/doc/en/html/api/v2/fluid/layers.html to check the preview.
  5. Add a link in doc/api/v2/fluid/layers.rst.
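Put together, these steps might look like the following shell session. This is a sketch only: $PORT_NUM and $IP are placeholders from the steps above, and the exact cmake flag spelling is an assumption based on the WITH_DOC=1 requirement, not a verified build recipe.

```shell
# From the PaddlePaddle source tree: configure with docs enabled,
# build, then serve the generated HTML over HTTP.
cd build
cmake .. -DWITH_DOC=ON            # assumption: docs toggled via this flag
pip install sphinx==1.5.6         # the version this issue requires
make -j `nproc`
python -m SimpleHTTPServer $PORT_NUM   # Python 2; use http.server on Python 3
# Then browse to http://$IP:$PORT_NUM/doc/en/html/api/v2/fluid/layers.html
```

These commands require a full PaddlePaddle build environment, so treat them as a template rather than something to copy verbatim.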

URLs

  • Sphinx docs: http://www.sphinx-doc.org/en/stable/contents.html
  • How to insert code: http://www.sphinx-doc.org/en/stable/markup/code.html
  • How to insert math equations: http://www.sphinx-doc.org/en/stable/ext/math.html
  • Previous discussion: https://github.com/PaddlePaddle/Paddle/issues/6160

Operators that need polishing

Please create an issue before starting to polish.

  • fc @pkuyym @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/6806
  • embedding @qingqing01 @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/6806
  • dynamic_lstm @kuke https://github.com/PaddlePaddle/Paddle/pull/7640
  • gru_unit @sidgoyal78
  • data @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6858
  • concat @nhzlx @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/6855
  • sums @wanghaoshuang @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6857
  • linear_chain_crf @lcy-seso
  • assign @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/6855
  • split_lod_tensor @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6859
  • merge_lod_tensor @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6859
  • cos_sim @lcy-seso
  • cross_entropy @kuke https://github.com/PaddlePaddle/Paddle/pull/7018
  • square_error_cost @sidgoyal78 #6862
  • accuracy @wanghaoshuang #7091
  • sequence_conv
  • conv2d @chengduoZH #6850
  • sequence_pool @luotao1 #6777 (closed)
  • pool2d @nhzlx
  • batch_norm @sidgoyal78
  • beam_search_decode
  • lstm
  • lod_rank_table @pkuyym https://github.com/PaddlePaddle/Paddle/issues/7024
  • max_sequence_len @pkuyym #7023 (closed)
  • topk @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6861
  • lod_tensor_to_array @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6807
  • array_to_lod_tensor @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6807
  • fill_constant @abhinavarora https://github.com/PaddlePaddle/Paddle/commit/ebe4425
  • fill_constant_batch_size_like @abhinavarora https://github.com/PaddlePaddle/Paddle/commit/ebe4425
  • ones @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/7150
  • zeros @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/7150
  • increment @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6807
  • array_write @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6820
  • create_array @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6807
  • less_than @abhinavarora https://github.com/PaddlePaddle/Paddle/pull/6816
  • array_read @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6853
  • shrink_memory
  • array_length @kavyasrinet https://github.com/PaddlePaddle/Paddle/pull/6817
  • conv2d_transpose @chengduoZH https://github.com/PaddlePaddle/Paddle/pull/6920
  • seq_expand @pkuyym https://github.com/PaddlePaddle/Paddle/issues/6590
  • lstm_unit @pkuyym https://github.com/PaddlePaddle/Paddle/issues/6581
  • reduce_sum @guoshengCS
  • reduce_mean @guoshengCS
  • reduce_max @guoshengCS
  • reduce_min @guoshengCS
Milestone: Release 0.11.1 (Past due)
Issue: paddlepaddle/Paddle#6526