<liclass="toctree-l2"><aclass="reference internal"href="../../getstarted/build_and_install/index_en.html">Install and Build</a><ul>
<liclass="toctree-l3"><aclass="reference internal"href="../../getstarted/build_and_install/docker_install_en.html">PaddlePaddle in Docker Containers</a></li>
<liclass="toctree-l3"><aclass="reference internal"href="../../getstarted/build_and_install/build_from_source_en.html">Installing from Sources</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="../../howto/usage/k8s/k8s_en.html">Paddle On Kubernetes</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="../../howto/usage/k8s/k8s_aws_en.html">Distributed PaddlePaddle Training on AWS with Kubernetes</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="../../howto/dev/build_en.html">Build PaddlePaddle from Source Code and Run Unit Test</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="../../howto/dev/new_layer_en.html">Write New Layers</a></li>
<spanid="rnnop-design"></span><h1>RNNOp design<aclass="headerlink"href="#rnnop-design"title="Permalink to this headline">¶</a></h1>
<p>This document is about an RNN operator which requires that instances in a mini-batch have the same length. We will have a more flexible RNN operator.</p>
<spanid="rnn-algorithm-implementation"></span><h2>RNN Algorithm Implementation<aclass="headerlink"href="#rnn-algorithm-implementation"title="Permalink to this headline">¶</a></h2>
<paligh="center">
<imgsrc="./images/rnn.jpg"/>
</p><p>The above diagram shows an RNN unrolled into a full network.</p>
There are several important concepts:

- *step-net*: the sub-graph to run at each step,
- *memory*, $h_t$: the state of the current step,
- *ex-memory*, $h_{t-1}$: the state of the previous step,
- *initial memory value*: the ex-memory of the first step.
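How these concepts interact can be sketched in plain Python (an illustration of the unrolling, not the RNNOp implementation; all names here are hypothetical):

```python
def run_rnn(step_net, step_inputs, initial_memory):
    """Unroll an RNN: apply the same step-net once per step."""
    ex_memory = initial_memory  # the ex-memory of the first step
    step_outputs = []
    for x_t in step_inputs:
        # the step-net computes h_t from x_t and h_{t-1}
        memory = step_net(x_t, ex_memory)
        step_outputs.append(memory)
        # this step's memory becomes the next step's ex-memory
        ex_memory = memory
    return step_outputs
```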
<divclass="section"id="step-scope">
<spanid="step-scope"></span><h3>Step-scope<aclass="headerlink"href="#step-scope"title="Permalink to this headline">¶</a></h3>
<p>There could be local variables defined in step-nets. PaddlePaddle runtime realizes these variables in <em>step-scopes</em>– scopes created for each step.</p>
<paligh="center">
<imgsrc="./images/rnn.png"/><br/>
Figure 2 the RNN's data flow
Please be aware that all steps run the same step-net. Each step:

1. creates the step-scope,
2. realizes local variables, including step-outputs, in the step-scope, and
3. runs the step-net, which could use these variables.

The RNN operator then composes its output from the step-outputs in the step-scopes, as the toy sketch below illustrates.
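A toy sketch of this per-step bookkeeping, with plain dicts standing in for scopes (an illustration, not PaddlePaddle internals; the `"out"` key is a hypothetical step-output name):

```python
def run_steps(step_net, step_inputs):
    step_scopes = []
    for x_t in step_inputs:
        scope = {"x": x_t}  # 1. create the step-scope and
                            # 2. realize local variables in it
        step_net(scope)     # 3. run the step-net, which reads and
                            #    writes variables in the scope
        step_scopes.append(scope)
    # compose the RNN output from the step-output kept in each scope
    return [scope["out"] for scope in step_scopes]
```

Here `step_net` is expected to write its step-output into the scope under `"out"`.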
### Memory and Ex-memory

Let's give more details about memory and ex-memory via a simple example:

$$h_t = U h_{t-1} + W x_t,$$

where $h_t$ and $h_{t-1}$ are the memory and ex-memory of step $t$, respectively.
In the implementation, we can make an ex-memory variable either refer to the memory variable of the previous step, or copy the previous step's memory value into the current ex-memory variable.
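A plain-numpy sketch of this recurrence and of the two ex-memory strategies (the shapes are illustrative; numpy arrays stand in for PaddlePaddle variables):

```python
import numpy as np

# h_t = U h_{t-1} + W x_t, with illustrative sizes
U = np.random.rand(30, 30)
W = np.random.rand(30, 20)
x_t = np.random.rand(20)
h_prev = np.zeros(30)           # initial memory value: the ex-memory of the first step

ex_memory = h_prev              # "refers to" the previous step's memory variable
ex_memory_copy = h_prev.copy()  # or: copy the previous memory's value

h_t = U @ ex_memory + W @ x_t   # the memory of the current step
```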
<divclass="section"id="usage-in-python">
<spanid="usage-in-python"></span><h3>Usage in Python<aclass="headerlink"href="#usage-in-python"title="Permalink to this headline">¶</a></h3>
<p>For more information on Block, please refer to the <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/block.md">design doc</a>.</p>
<p>We can define an RNN’s step-net using Block:</p>
<spanclass="n">X</span><spanclass="o">=</span><spanclass="n">some_op</span><spanclass="p">()</span><spanclass="c1"># x is some operator's output, and is a LoDTensor</span>
<li><codeclass="docutils literal"><spanclass="pre">rnn.add_input</span></code> indicates the parameter is a variable that will be segmented into step-inputs.</li>
<li><codeclass="docutils literal"><spanclass="pre">rnn.add_memory</span></code> creates a variable used as the memory.</li>
<li><codeclass="docutils literal"><spanclass="pre">rnn.add_outputs</span></code> mark the variables that will be concatenated across steps into the RNN output.</li>
</ul>
</div>
<divclass="section"id="nested-rnn-and-lodtensor">
<spanid="nested-rnn-and-lodtensor"></span><h3>Nested RNN and LoDTensor<aclass="headerlink"href="#nested-rnn-and-lodtensor"title="Permalink to this headline">¶</a></h3>
<p>An RNN whose step-net includes other RNN operators is known as an <em>nested RNN</em>.</p>
<p>For example, we could have a 2-level RNN, where the top level corresponds to paragraphs, and the lower level corresponds to sentences.</p>
<p>The following figure illustrates the feeding of text into the lower level, one sentence each step, and the feeding of step outputs to the top level. The final top level output is about the whole text.</p>
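In code, such a two-level construction might look like the following sketch, reusing the illustrative step-net API from above (`lower_level_rnn`, `top_level_rnn`, and all parameter names are hypothetical):

```python
import paddle as pd

# parameters; names and shapes are illustrative
W, U = pd.Variable(shape=[30, 20]), pd.Variable(shape=[30, 30])
W0, U0 = pd.Variable(shape=[30, 30]), pd.Variable(shape=[30, 30])

X = some_op()  # the input text: a LoDTensor with paragraph and sentence levels
a = some_op()  # an initial memory value

def lower_level_rnn(paragraph):
    # an RNN over one paragraph, consuming one sentence per step
    rnn = pd.create_rnn_op()
    with rnn.stepnet():
        sentence = rnn.add_input(paragraph)
        h = rnn.add_memory(init=a)
        h.update(pd.matmul(W, sentence) + pd.matmul(U, h.pre_state()))
        rnn.add_outputs(h)
    return rnn

top_level_rnn = pd.create_rnn_op()
with top_level_rnn.stepnet():
    # the top level consumes one paragraph per step
    paragraph_data = top_level_rnn.add_input(X)
    # the lower-level RNN encodes the paragraph, sentence by sentence
    paragraph_out = lower_level_rnn(paragraph_data)()

    h = top_level_rnn.add_memory(init=a)
    h.update(pd.matmul(W0, paragraph_out) + pd.matmul(U0, h.pre_state()))
    top_level_rnn.add_outputs(h)

out = top_level_rnn()
```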
In the construction sketched above, building the `top_level_rnn` calls `lower_level_rnn`. The input is a LoDTensor: the top-level RNN segments the input text into paragraphs, and the lower-level RNN segments each paragraph into sentences.
By default, `RNNOp` concatenates the outputs from all the time steps. If `output_all_steps` is set to False, it outputs only the final time step.
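For example (a sketch; passing the flag at invocation time is an assumption, since the design only names the flag):

```python
out_all = top_level_rnn()                         # default: all steps' outputs, concatenated
out_last = top_level_rnn(output_all_steps=False)  # only the final step's output
```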