Commit 38223092 authored by T Travis CI

Deploy to GitHub Pages: 6a6e1c74

Parent 9c7aa741
......@@ -1244,9 +1244,6 @@ layer that shares info (the number of sentences and the number
of words in each sentence) with all of the layer group&#8217;s outputs.
targetInlink should be one of the layer group&#8217;s inputs.</p>
</li>
<li><strong>is_generating</strong> (<em>bool</em>) &#8211; Whether the network is generating sequences. When generating,
none of the inputs may be a paddle.v2.config_base.Layer; otherwise, for training
or testing, at least one input must be a paddle.v2.config_base.Layer.</li>
</ul>
</td>
</tr>
......@@ -1392,7 +1389,8 @@ sharing a same set of weights.</p>
demo/seqToseq/seqToseq_net.py for more details.</p>
</li>
<li><strong>input</strong> (<em>list</em>) &#8211; Input data for the recurrent unit, which should include the
previously generated words as a GeneratedInput object.</li>
previously generated words as a GeneratedInput object.
In beam_search, none of the inputs may be of type paddle.v2.config_base.Layer.</li>
<li><strong>bos_id</strong> (<em>int</em>) &#8211; Index of the start symbol in the dictionary. The start symbol
is a special token for NLP task, which indicates the
beginning of a sequence. In the generation task, the start
......
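To make the generation parameters above (bos_id as the start symbol, an end symbol, a beam width) concrete, here is a framework-free toy beam search sketch. The scorer, token ids, and function names are illustrative assumptions, not part of the Paddle API:

```python
import math

def beam_search(step_scores, bos_id, eos_id, beam_size, max_len):
    """Toy beam search: expand each live hypothesis, keep the best
    `beam_size`, and move hypotheses that emit `eos_id` to `finished`."""
    beams = [([bos_id], 0.0)]          # each beam: (token sequence, log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, logp in step_scores(seq).items():
                candidates.append((seq + [tok], score + logp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_size]:
            if seq[-1] == eos_id:
                finished.append((seq, score))   # hypothesis is complete
            else:
                beams.append((seq, score))
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])

def toy_scorer(prefix):
    # Hypothetical next-token log-probs: after <bos> (id 0) prefer
    # token 1; afterwards prefer <eos> (id 9).
    if prefix[-1] == 0:
        return {1: math.log(0.9), 2: math.log(0.1)}
    return {9: math.log(0.8), 2: math.log(0.2)}

best_seq, best_score = beam_search(toy_scorer, bos_id=0, eos_id=9,
                                   beam_size=2, max_len=3)
```

Every hypothesis starts from the bos_id token, which is why the start symbol must be a real entry in the dictionary.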
......@@ -476,20 +476,20 @@ for more details about LSTM. The link goes as follows:
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>LayerOutput</em>) &#8211; the input layer.</li>
<li><strong>memory_boot</strong> (<em>LayerOutput | None</em>) &#8211; the initialization state of the LSTM cell.</li>
<li><strong>out_memory</strong> (<em>LayerOutput | None</em>) &#8211; the output of the previous time step.</li>
<li><strong>name</strong> (<em>basestring</em>) &#8211; lstmemory unit name.</li>
<li><strong>size</strong> (<em>int</em>) &#8211; lstmemory unit size.</li>
<li><strong>param_attr</strong> (<em>ParameterAttribute</em>) &#8211; parameter config; None means use the default.</li>
<li><strong>act</strong> (<em>BaseActivation</em>) &#8211; LSTM final activation type.</li>
<li><strong>gate_act</strong> (<em>BaseActivation</em>) &#8211; LSTM gate activation type.</li>
<li><strong>state_act</strong> (<em>BaseActivation</em>) &#8211; LSTM state activation type.</li>
<li><strong>mixed_bias_attr</strong> (<em>ParameterAttribute|False</em>) &#8211; bias parameter attribute of the mixed layer.</li>
<li><strong>input_proj_bias_attr</strong> (<em>ParameterAttribute|False|None</em>) &#8211; bias attribute of the input-to-hidden projection.
False means no bias; None means the default bias.</li>
<li><strong>input_proj_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; extra layer attribute of the input-to-hidden
projection of the LSTM unit, such as dropout or error clipping.</li>
<li><strong>lstm_bias_attr</strong> (<em>ParameterAttribute|False</em>) &#8211; bias parameter attribute of the LSTM layer.
False means no bias; None means the default bias.</li>
<li><strong>mixed_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the mixed layer&#8217;s extra attribute.</li>
<li><strong>lstm_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the LSTM layer&#8217;s extra attribute.</li>
<li><strong>get_output_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the get-output layer&#8217;s extra attribute.</li>
</ul>
</td>
</tr>
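As a rough intuition for how the act, gate_act, and state_act parameters interact inside the unit, here is a scalar, framework-free sketch of a single LSTM step. The weight names and the helper itself are illustrative assumptions, not Paddle identifiers:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_unit_step(x_proj, h_prev, c_prev, w,
                   gate_act=sigmoid, state_act=math.tanh, act=math.tanh):
    """One scalar LSTM step. `x_proj` plays the role of the
    input-to-hidden projection (the mixed/full_matrix_projection layer)."""
    i = gate_act(x_proj + w["wi"] * h_prev + w["bi"])        # input gate
    f = gate_act(x_proj + w["wf"] * h_prev + w["bf"])        # forget gate
    o = gate_act(x_proj + w["wo"] * h_prev + w["bo"])        # output gate
    c_cand = state_act(x_proj + w["wc"] * h_prev + w["bc"])  # candidate cell state
    c = f * c_prev + i * c_cand                              # new cell state
    h = o * act(c)                                           # final activation
    return h, c

# With zero weights, zero input, and zero initial state the unit is inert.
w0 = {k: 0.0 for k in ("wi", "bi", "wf", "bf", "wo", "bo", "wc", "bc")}
h, c = lstm_unit_step(0.0, 0.0, 0.0, w0)
```

The three activation parameters map directly onto the gates (gate_act), the candidate state (state_act), and the cell-to-output squashing (act); setting the bias attributes to False corresponds to dropping the `b*` terms above.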
......@@ -539,19 +539,19 @@ full_matrix_projection must be included before lstmemory_unit is called.</p>
<li><strong>input</strong> (<em>LayerOutput</em>) &#8211; the input layer.</li>
<li><strong>size</strong> (<em>int</em>) &#8211; lstmemory group size.</li>
<li><strong>name</strong> (<em>basestring</em>) &#8211; name of the lstmemory group.</li>
<li><strong>memory_boot</strong> (<em>LayerOutput | None</em>) &#8211; the initialization state of the LSTM cell.</li>
<li><strong>out_memory</strong> (<em>LayerOutput | None</em>) &#8211; the output of the previous time step.</li>
<li><strong>reverse</strong> (<em>bool</em>) &#8211; whether the LSTM processes its input in reverse order.</li>
<li><strong>param_attr</strong> (<em>ParameterAttribute</em>) &#8211; parameter config; None means use the default.</li>
<li><strong>act</strong> (<em>BaseActivation</em>) &#8211; LSTM final activation type.</li>
<li><strong>gate_act</strong> (<em>BaseActivation</em>) &#8211; LSTM gate activation type.</li>
<li><strong>state_act</strong> (<em>BaseActivation</em>) &#8211; LSTM state activation type.</li>
<li><strong>mixed_bias_attr</strong> (<em>ParameterAttribute|False</em>) &#8211; bias parameter attribute of the mixed layer.
False means no bias; None means the default bias.</li>
<li><strong>lstm_bias_attr</strong> (<em>ParameterAttribute|False</em>) &#8211; bias parameter attribute of the LSTM layer.
False means no bias; None means the default bias.</li>
<li><strong>mixed_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the mixed layer&#8217;s extra attribute.</li>
<li><strong>input_proj_bias_attr</strong> (<em>ParameterAttribute|False|None</em>) &#8211; bias attribute of the input-to-hidden projection.
False means no bias; None means the default bias.</li>
<li><strong>input_proj_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; extra layer attribute of the input-to-hidden
projection of the LSTM unit, such as dropout or error clipping.</li>
<li><strong>lstm_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the LSTM layer&#8217;s extra attribute.</li>
<li><strong>get_output_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the get-output layer&#8217;s extra attribute.</li>
</ul>
</td>
</tr>
......
The source diff could not be displayed because it is too large. You can view the blob instead.
......@@ -1249,9 +1249,6 @@ layer that shares info (the number of sentences and the number
of words in each sentence) with all of the layer group&#8217;s outputs.
targetInlink should be one of the layer group&#8217;s inputs.</p>
</li>
<li><strong>is_generating</strong> (<em>bool</em>) &#8211; Whether the network is generating sequences. When generating,
none of the inputs may be a paddle.v2.config_base.Layer; otherwise, for training
or testing, at least one input must be a paddle.v2.config_base.Layer.</li>
</ul>
</td>
</tr>
......@@ -1397,7 +1394,8 @@ sharing a same set of weights.</p>
demo/seqToseq/seqToseq_net.py for more details.</p>
</li>
<li><strong>input</strong> (<em>list</em>) &#8211; Input data for the recurrent unit, which should include the
previously generated words as a GeneratedInput object.</li>
previously generated words as a GeneratedInput object.
In beam_search, none of the inputs may be of type paddle.v2.config_base.Layer.</li>
<li><strong>bos_id</strong> (<em>int</em>) &#8211; Index of the start symbol in the dictionary. The start symbol
is a special token for NLP task, which indicates the
beginning of a sequence. In the generation task, the start
......
......@@ -481,20 +481,20 @@ for more details about LSTM. The link goes as follows:
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>LayerOutput</em>) &#8211; the input layer.</li>
<li><strong>memory_boot</strong> (<em>LayerOutput | None</em>) &#8211; the initialization state of the LSTM cell.</li>
<li><strong>out_memory</strong> (<em>LayerOutput | None</em>) &#8211; the output of the previous time step.</li>
<li><strong>name</strong> (<em>basestring</em>) &#8211; lstmemory unit name.</li>
<li><strong>size</strong> (<em>int</em>) &#8211; lstmemory unit size.</li>
<li><strong>param_attr</strong> (<em>ParameterAttribute</em>) &#8211; parameter config; None means use the default.</li>
<li><strong>act</strong> (<em>BaseActivation</em>) &#8211; LSTM final activation type.</li>
<li><strong>gate_act</strong> (<em>BaseActivation</em>) &#8211; LSTM gate activation type.</li>
<li><strong>state_act</strong> (<em>BaseActivation</em>) &#8211; LSTM state activation type.</li>
<li><strong>mixed_bias_attr</strong> (<em>ParameterAttribute|False</em>) &#8211; bias parameter attribute of the mixed layer.</li>
<li><strong>input_proj_bias_attr</strong> (<em>ParameterAttribute|False|None</em>) &#8211; bias attribute of the input-to-hidden projection.
False means no bias; None means the default bias.</li>
<li><strong>input_proj_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; extra layer attribute of the input-to-hidden
projection of the LSTM unit, such as dropout or error clipping.</li>
<li><strong>lstm_bias_attr</strong> (<em>ParameterAttribute|False</em>) &#8211; bias parameter attribute of the LSTM layer.
False means no bias; None means the default bias.</li>
<li><strong>mixed_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the mixed layer&#8217;s extra attribute.</li>
<li><strong>lstm_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the LSTM layer&#8217;s extra attribute.</li>
<li><strong>get_output_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the get-output layer&#8217;s extra attribute.</li>
</ul>
</td>
</tr>
......@@ -544,19 +544,19 @@ full_matrix_projection must be included before lstmemory_unit is called.</p>
<li><strong>input</strong> (<em>LayerOutput</em>) &#8211; the input layer.</li>
<li><strong>size</strong> (<em>int</em>) &#8211; lstmemory group size.</li>
<li><strong>name</strong> (<em>basestring</em>) &#8211; name of the lstmemory group.</li>
<li><strong>memory_boot</strong> (<em>LayerOutput | None</em>) &#8211; the initialization state of the LSTM cell.</li>
<li><strong>out_memory</strong> (<em>LayerOutput | None</em>) &#8211; the output of the previous time step.</li>
<li><strong>reverse</strong> (<em>bool</em>) &#8211; whether the LSTM processes its input in reverse order.</li>
<li><strong>param_attr</strong> (<em>ParameterAttribute</em>) &#8211; parameter config; None means use the default.</li>
<li><strong>act</strong> (<em>BaseActivation</em>) &#8211; LSTM final activation type.</li>
<li><strong>gate_act</strong> (<em>BaseActivation</em>) &#8211; LSTM gate activation type.</li>
<li><strong>state_act</strong> (<em>BaseActivation</em>) &#8211; LSTM state activation type.</li>
<li><strong>mixed_bias_attr</strong> (<em>ParameterAttribute|False</em>) &#8211; bias parameter attribute of the mixed layer.
False means no bias; None means the default bias.</li>
<li><strong>lstm_bias_attr</strong> (<em>ParameterAttribute|False</em>) &#8211; bias parameter attribute of the LSTM layer.
False means no bias; None means the default bias.</li>
<li><strong>mixed_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the mixed layer&#8217;s extra attribute.</li>
<li><strong>input_proj_bias_attr</strong> (<em>ParameterAttribute|False|None</em>) &#8211; bias attribute of the input-to-hidden projection.
False means no bias; None means the default bias.</li>
<li><strong>input_proj_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; extra layer attribute of the input-to-hidden
projection of the LSTM unit, such as dropout or error clipping.</li>
<li><strong>lstm_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the LSTM layer&#8217;s extra attribute.</li>
<li><strong>get_output_layer_attr</strong> (<em>ExtraLayerAttribute</em>) &#8211; the get-output layer&#8217;s extra attribute.</li>
</ul>
</td>
</tr>
......
This diff is collapsed.