@@ -142,12 +142,15 @@ We also project the encoder vector to :code:`decoder_size` dimensional space, ge
The decoder uses :code:`recurrent_group` to define the recurrent neural network. The step and output functions are defined in :code:`gru_decoder_with_attention`:
The implementation of the step function is listed below. First, it defines the **memory** of the decoder network. Then it defines the attention, the gated recurrent unit step function, and the output function:
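A minimal sketch of such a step function is given below. It is not the exact configuration from this document: the symbols :code:`decoder_size`, :code:`target_dict_dim`, and the boot layer :code:`decoder_boot` are assumed to be defined earlier in the network configuration, and the layer names are illustrative only. The sketch shows how the decoder memory, attention, GRU step, and output function fit together.

.. code-block:: python

    def gru_decoder_with_attention(enc_vec, enc_proj, current_word):
        # Memory holding the decoder state from the previous time step.
        decoder_mem = memory(name='gru_decoder',
                             size=decoder_size,
                             boot_layer=decoder_boot)

        # Attention: compute a context vector from the encoder outputs,
        # conditioned on the previous decoder state.
        context = simple_attention(encoded_sequence=enc_vec,
                                   encoded_proj=enc_proj,
                                   decoder_state=decoder_mem)

        # Project the context and the current target word into the
        # input space of the GRU step (3 * decoder_size for the gates).
        with mixed_layer(size=decoder_size * 3) as decoder_inputs:
            decoder_inputs += full_matrix_projection(input=context)
            decoder_inputs += full_matrix_projection(input=current_word)

        # One GRU step; output_mem connects the step to the memory above.
        gru_step = gru_step_layer(name='gru_decoder',
                                  input=decoder_inputs,
                                  output_mem=decoder_mem,
                                  size=decoder_size)

        # Output function: a softmax over the target vocabulary.
        with mixed_layer(size=target_dict_dim,
                         bias_attr=True,
                         act=SoftmaxActivation()) as out:
            out += full_matrix_projection(input=gru_step)
        return out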
<p>The decoder uses <code class="code docutils literal"><span class="pre">recurrent_group</span></code> to define the recurrent neural network. The step and output functions are defined in <code class="code docutils literal"><span class="pre">gru_decoder_with_attention</span></code>:</p>
<p>The implementation of the step function is listed below. First, it defines the <strong>memory</strong> of the decoder network. Then it defines the attention, the gated recurrent unit step function, and the output function:</p>
<li><strong>param_attr</strong> (<a class="reference internal" href="attrs.html#paddle.trainer_config_helpers.attrs.ParameterAttribute" title="paddle.trainer_config_helpers.attrs.ParameterAttribute"><em>ParameterAttribute</em></a>) – Parameter config; None to use the default.</li>
<li><strong>scale</strong> (<em>float</em>) – Scalar multiplier applied to the result; the default value is 1.</li>
</ul>
</td>
</tr>
...
...
@@ -1162,6 +1161,42 @@ It performs element-wise multiplication with weight.</p>
</table>
</dd></dl>
</div>
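<p>The section above describes a projection that performs element-wise multiplication with a weight. As a rough illustration of how such a projection is typically wired into a configuration, the sketch below uses <code class="code docutils literal"><span class="pre">dotmul_projection</span></code> inside a <code class="code docutils literal"><span class="pre">mixed_layer</span></code>; the names <code class="code docutils literal"><span class="pre">prev_layer</span></code> and <code class="code docutils literal"><span class="pre">hidden_size</span></code> are assumptions, not taken from this document.</p>

.. code-block:: python

    # Hypothetical fragment: `prev_layer` (a layer of size `hidden_size`)
    # is assumed to be defined elsewhere in the configuration.
    with mixed_layer(size=hidden_size) as elementwise_out:
        # dotmul_projection multiplies its input element-wise by a
        # learnable weight vector of the same size.
        elementwise_out += dotmul_projection(input=prev_layer)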
<divclass="section"id="dotmul-operator">
<h2>dotmul_operator<aclass="headerlink"href="#dotmul-operator"title="Permalink to this headline">¶</a></h2>