Commit 859fc64c authored by Travis CI

Deploy to GitHub Pages: d8b923ab

Parent 0fe217f8
......@@ -309,10 +309,11 @@ layer.</li>
<h2>embedding<a class="headerlink" href="#embedding" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">embedding</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>is_sparse=False</em>, <em>param_attr=None</em>, <em>dtype='float32'</em><span class="sig-paren">)</span></dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">embedding</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>is_sparse=False</em>, <em>padding_idx=None</em>, <em>param_attr=None</em>, <em>dtype='float32'</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Embedding Layer</strong></p>
<p>This layer is used to lookup a vector of IDs, provided by <em>input</em>, in a lookup table.
The result of this lookup is the embedding of each ID in the <em>input</em>.</p>
<p>This layer is used to look up the embeddings of the IDs, provided by <code class="xref py py-attr docutils literal"><span class="pre">input</span></code>, in
a lookup table. The result of this lookup is the embedding of each ID in the
<code class="xref py py-attr docutils literal"><span class="pre">input</span></code>.</p>
<p>All the input variables are passed in as local variables to the LayerHelper
constructor.</p>
<table class="docutils field-list" frame="void" rules="none">
......@@ -320,9 +321,16 @@ constructor.</p>
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>Variable</em>) &#8211; Input to the function</li>
<li><strong>size</strong> (<em>tuple|list|None</em>) &#8211; Shape of the look up table parameter</li>
<li><strong>is_sparse</strong> (<em>bool</em>) &#8211; Boolean flag that specifying whether the input is sparse</li>
<li><strong>input</strong> (<em>Variable</em>) &#8211; The tensor variable containing the IDs.</li>
<li><strong>size</strong> (<em>tuple|list</em>) &#8211; The shape of the lookup table parameter. It should
have two elements which indicate the size of the dictionary of
embeddings and the size of each embedding vector respectively.</li>
<li><strong>is_sparse</strong> (<em>bool</em>) &#8211; The flag indicating whether to use sparse update.</li>
<li><strong>padding_idx</strong> (<em>int|long|None</em>) &#8211; If <code class="xref py py-attr docutils literal"><span class="pre">None</span></code>, it has no effect on the lookup.
Otherwise, the output is padded with zeros whenever the lookup encounters
the given <code class="xref py py-attr docutils literal"><span class="pre">padding_idx</span></code> in <code class="xref py py-attr docutils literal"><span class="pre">input</span></code>. If
<span class="math">\(padding\_idx &lt; 0\)</span>, the index used in the lookup is
<span class="math">\(size[0] + padding\_idx\)</span>.</li>
<li><strong>param_attr</strong> (<em>ParamAttr</em>) &#8211; The parameter attribute for this layer.</li>
<li><strong>dtype</strong> (<em>np.dtype|core.DataType|str</em>) &#8211; The data type of the output: float32, float16, int, etc.</li>
</ul>
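The updated signature above can be exercised with a short program. The following is a minimal sketch, not part of the diff: the `import paddle.v2.fluid as fluid` entry point, the `fluid.layers.data` reader variable, and the names and sizes used are assumptions chosen for illustration.

```python
# Minimal usage sketch of the embedding signature documented above.
# The import path, data layer, and all names/sizes are illustrative assumptions.
import paddle.v2.fluid as fluid

# Integer word IDs fed in by the reader; embedding expects an integer dtype.
word_ids = fluid.layers.data(name='word_ids', shape=[1], dtype='int64')

# A 10000-entry dictionary mapped to 64-dimensional vectors.
# Rows looked up at padding_idx (here 0) come back as all-zero vectors,
# and is_sparse=True selects sparse gradient updates for the table.
word_emb = fluid.layers.embedding(
    input=word_ids,
    size=[10000, 64],
    is_sparse=True,
    padding_idx=0,
    dtype='float32')
```

With a negative index such as padding_idx=-1, the row that is zeroed is size[0] + padding_idx, i.e. 9999 for the 10000-entry table above.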
......@@ -1044,10 +1052,11 @@ that need to be summed up.</td>
<h2>assign<a class="headerlink" href="#assign" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">embedding</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>is_sparse=False</em>, <em>param_attr=None</em>, <em>dtype='float32'</em><span class="sig-paren">)</span></dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">embedding</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>is_sparse=False</em>, <em>padding_idx=None</em>, <em>param_attr=None</em>, <em>dtype='float32'</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Embedding Layer</strong></p>
<p>This layer is used to lookup a vector of IDs, provided by <em>input</em>, in a lookup table.
The result of this lookup is the embedding of each ID in the <em>input</em>.</p>
<p>This layer is used to look up the embeddings of the IDs, provided by <code class="xref py py-attr docutils literal"><span class="pre">input</span></code>, in
a lookup table. The result of this lookup is the embedding of each ID in the
<code class="xref py py-attr docutils literal"><span class="pre">input</span></code>.</p>
<p>All the input variables are passed in as local variables to the LayerHelper
constructor.</p>
<table class="docutils field-list" frame="void" rules="none">
......@@ -1055,9 +1064,16 @@ constructor.</p>
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>Variable</em>) &#8211; Input to the function</li>
<li><strong>size</strong> (<em>tuple|list|None</em>) &#8211; Shape of the look up table parameter</li>
<li><strong>is_sparse</strong> (<em>bool</em>) &#8211; Boolean flag that specifying whether the input is sparse</li>
<li><strong>input</strong> (<em>Variable</em>) &#8211; The tensor variable containing the IDs.</li>
<li><strong>size</strong> (<em>tuple|list</em>) &#8211; The shape of the lookup table parameter. It should
have two elements which indicate the size of the dictionary of
embeddings and the size of each embedding vector respectively.</li>
<li><strong>is_sparse</strong> (<em>bool</em>) &#8211; The flag indicating whether to use sparse update.</li>
<li><strong>padding_idx</strong> (<em>int|long|None</em>) &#8211; If <code class="xref py py-attr docutils literal"><span class="pre">None</span></code>, it has no effect on the lookup.
Otherwise, the output is padded with zeros whenever the lookup encounters
the given <code class="xref py py-attr docutils literal"><span class="pre">padding_idx</span></code> in <code class="xref py py-attr docutils literal"><span class="pre">input</span></code>. If
<span class="math">\(padding\_idx &lt; 0\)</span>, the index used in the lookup is
<span class="math">\(size[0] + padding\_idx\)</span>.</li>
<li><strong>param_attr</strong> (<em>ParamAttr</em>) &#8211; The parameter attribute for this layer.</li>
<li><strong>dtype</strong> (<em>np.dtype|core.DataType|str</em>) &#8211; The data type of the output: float32, float16, int, etc.</li>
</ul>
......
......@@ -230,7 +230,7 @@
<h2>img_conv_group<a class="headerlink" href="#img-conv-group" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.nets.</code><code class="descname">img_conv_group</code><span class="sig-paren">(</span><em>input</em>, <em>conv_num_filter</em>, <em>pool_size</em>, <em>conv_padding=1</em>, <em>conv_filter_size=3</em>, <em>conv_act=None</em>, <em>param_attr=None</em>, <em>conv_with_batchnorm=False</em>, <em>conv_batchnorm_drop_rate=None</em>, <em>pool_stride=1</em>, <em>pool_type=None</em>, <em>use_cudnn=True</em><span class="sig-paren">)</span></dt>
<code class="descclassname">paddle.v2.fluid.nets.</code><code class="descname">img_conv_group</code><span class="sig-paren">(</span><em>input</em>, <em>conv_num_filter</em>, <em>pool_size</em>, <em>conv_padding=1</em>, <em>conv_filter_size=3</em>, <em>conv_act=None</em>, <em>param_attr=None</em>, <em>conv_with_batchnorm=False</em>, <em>conv_batchnorm_drop_rate=0.0</em>, <em>pool_stride=1</em>, <em>pool_type=None</em>, <em>use_cudnn=True</em><span class="sig-paren">)</span></dt>
<dd><p>Image Convolution Group, used for the VGG network.</p>
</dd></dl>
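Since the only description is the one-line summary above, a hedged sketch of a single VGG-style block may help. The input shape, filter counts, and activation string below are assumptions chosen for illustration, not values taken from the diff.

```python
# One VGG-style block: two 3x3 convolutions (64 filters each) with batch
# normalization and ReLU, followed by 2x2 max pooling with stride 2.
import paddle.v2.fluid as fluid

img = fluid.layers.data(name='image', shape=[3, 224, 224], dtype='float32')

block1 = fluid.nets.img_conv_group(
    input=img,
    conv_num_filter=[64, 64],   # one entry per convolution in the group
    pool_size=2,
    conv_padding=1,
    conv_filter_size=3,
    conv_act='relu',
    conv_with_batchnorm=True,
    conv_batchnorm_drop_rate=0.0,
    pool_stride=2,
    pool_type='max')
```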
......
......@@ -3440,6 +3440,11 @@
"type" : "bool",
"comment" : "(boolean, default false) Sparse update",
"generated" : 0
}, {
"name" : "padding_idx",
"type" : "long",
"comment" : "(int64, default -1) If the value is -1, it makes no effect to lookup. Otherwise the given value indicates padding the output with zeros whenever lookup encounters it in Ids.",
"generated" : 0
} ]
},{
"type" : "lod_tensor_to_array",
......
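The Python-level default padding_idx=None and the op-level default of -1 documented above suggest a small normalization step in the layer wrapper. The sketch below is an assumption about that glue code, not the actual PaddlePaddle source.

```python
# Hedged sketch: turn the Python-facing padding_idx argument into the
# int64 attribute expected by the lookup_table operator.
def normalize_padding_idx(padding_idx, vocab_size):
    """Map None to the op's -1 sentinel and wrap negative indices."""
    if padding_idx is None:
        return -1                        # -1 means "no padding" at the op level
    if padding_idx < 0:
        return vocab_size + padding_idx  # e.g. -1 with vocab_size=10000 -> 9999
    return padding_idx
```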
......@@ -328,10 +328,11 @@ layer.</li>
<h2>embedding<a class="headerlink" href="#embedding" title="永久链接至标题"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">embedding</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>is_sparse=False</em>, <em>param_attr=None</em>, <em>dtype='float32'</em><span class="sig-paren">)</span></dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">embedding</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>is_sparse=False</em>, <em>padding_idx=None</em>, <em>param_attr=None</em>, <em>dtype='float32'</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Embedding Layer</strong></p>
<p>This layer is used to lookup a vector of IDs, provided by <em>input</em>, in a lookup table.
The result of this lookup is the embedding of each ID in the <em>input</em>.</p>
<p>This layer is used to look up the embeddings of the IDs, provided by <code class="xref py py-attr docutils literal"><span class="pre">input</span></code>, in
a lookup table. The result of this lookup is the embedding of each ID in the
<code class="xref py py-attr docutils literal"><span class="pre">input</span></code>.</p>
<p>All the input variables are passed in as local variables to the LayerHelper
constructor.</p>
<table class="docutils field-list" frame="void" rules="none">
......@@ -339,9 +340,16 @@ constructor.</p>
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>Variable</em>) &#8211; Input to the function</li>
<li><strong>size</strong> (<em>tuple|list|None</em>) &#8211; Shape of the look up table parameter</li>
<li><strong>is_sparse</strong> (<em>bool</em>) &#8211; Boolean flag that specifying whether the input is sparse</li>
<li><strong>input</strong> (<em>Variable</em>) &#8211; The tensor variable containing the IDs.</li>
<li><strong>size</strong> (<em>tuple|list</em>) &#8211; The shape of the lookup table parameter. It should
have two elements which indicate the size of the dictionary of
embeddings and the size of each embedding vector respectively.</li>
<li><strong>is_sparse</strong> (<em>bool</em>) &#8211; The flag indicating whether to use sparse update.</li>
<li><strong>padding_idx</strong> (<em>int|long|None</em>) &#8211; If <code class="xref py py-attr docutils literal"><span class="pre">None</span></code>, it has no effect on the lookup.
Otherwise, the output is padded with zeros whenever the lookup encounters
the given <code class="xref py py-attr docutils literal"><span class="pre">padding_idx</span></code> in <code class="xref py py-attr docutils literal"><span class="pre">input</span></code>. If
<span class="math">\(padding\_idx &lt; 0\)</span>, the index used in the lookup is
<span class="math">\(size[0] + padding\_idx\)</span>.</li>
<li><strong>param_attr</strong> (<em>ParamAttr</em>) &#8211; The parameter attribute for this layer.</li>
<li><strong>dtype</strong> (<em>np.dtype|core.DataType|str</em>) &#8211; The data type of the output: float32, float16, int, etc.</li>
</ul>
......@@ -1063,10 +1071,11 @@ that need to be summed up.</td>
<h2>assign<a class="headerlink" href="#assign" title="永久链接至标题"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">embedding</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>is_sparse=False</em>, <em>param_attr=None</em>, <em>dtype='float32'</em><span class="sig-paren">)</span></dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">embedding</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>is_sparse=False</em>, <em>padding_idx=None</em>, <em>param_attr=None</em>, <em>dtype='float32'</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Embedding Layer</strong></p>
<p>This layer is used to lookup a vector of IDs, provided by <em>input</em>, in a lookup table.
The result of this lookup is the embedding of each ID in the <em>input</em>.</p>
<p>This layer is used to look up the embeddings of the IDs, provided by <code class="xref py py-attr docutils literal"><span class="pre">input</span></code>, in
a lookup table. The result of this lookup is the embedding of each ID in the
<code class="xref py py-attr docutils literal"><span class="pre">input</span></code>.</p>
<p>All the input variables are passed in as local variables to the LayerHelper
constructor.</p>
<table class="docutils field-list" frame="void" rules="none">
......@@ -1074,9 +1083,16 @@ constructor.</p>
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>Variable</em>) &#8211; Input to the function</li>
<li><strong>size</strong> (<em>tuple|list|None</em>) &#8211; Shape of the look up table parameter</li>
<li><strong>is_sparse</strong> (<em>bool</em>) &#8211; Boolean flag that specifying whether the input is sparse</li>
<li><strong>input</strong> (<em>Variable</em>) &#8211; The tensor variable containing the IDs.</li>
<li><strong>size</strong> (<em>tuple|list</em>) &#8211; The shape of the lookup table parameter. It should
have two elements which indicate the size of the dictionary of
embeddings and the size of each embedding vector respectively.</li>
<li><strong>is_sparse</strong> (<em>bool</em>) &#8211; The flag indicating whether to use sparse update.</li>
<li><strong>padding_idx</strong> (<em>int|long|None</em>) &#8211; If <code class="xref py py-attr docutils literal"><span class="pre">None</span></code>, it has no effect on the lookup.
Otherwise, the output is padded with zeros whenever the lookup encounters
the given <code class="xref py py-attr docutils literal"><span class="pre">padding_idx</span></code> in <code class="xref py py-attr docutils literal"><span class="pre">input</span></code>. If
<span class="math">\(padding\_idx &lt; 0\)</span>, the index used in the lookup is
<span class="math">\(size[0] + padding\_idx\)</span>.</li>
<li><strong>param_attr</strong> (<em>ParamAttr</em>) &#8211; The parameter attribute for this layer.</li>
<li><strong>dtype</strong> (<em>np.dtype|core.DataType|str</em>) &#8211; The data type of the output: float32, float16, int, etc.</li>
</ul>
......
......@@ -249,7 +249,7 @@
<h2>img_conv_group<a class="headerlink" href="#img-conv-group" title="永久链接至标题"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.nets.</code><code class="descname">img_conv_group</code><span class="sig-paren">(</span><em>input</em>, <em>conv_num_filter</em>, <em>pool_size</em>, <em>conv_padding=1</em>, <em>conv_filter_size=3</em>, <em>conv_act=None</em>, <em>param_attr=None</em>, <em>conv_with_batchnorm=False</em>, <em>conv_batchnorm_drop_rate=None</em>, <em>pool_stride=1</em>, <em>pool_type=None</em>, <em>use_cudnn=True</em><span class="sig-paren">)</span></dt>
<code class="descclassname">paddle.v2.fluid.nets.</code><code class="descname">img_conv_group</code><span class="sig-paren">(</span><em>input</em>, <em>conv_num_filter</em>, <em>pool_size</em>, <em>conv_padding=1</em>, <em>conv_filter_size=3</em>, <em>conv_act=None</em>, <em>param_attr=None</em>, <em>conv_with_batchnorm=False</em>, <em>conv_batchnorm_drop_rate=0.0</em>, <em>pool_stride=1</em>, <em>pool_type=None</em>, <em>use_cudnn=True</em><span class="sig-paren">)</span></dt>
<dd><p>Image Convolution Group, used for the VGG network.</p>
</dd></dl>
......