Commit bebdad9c authored by T Travis CI

Deploy to GitHub Pages: 89bbc4f6

Parent baaae3cb
...@@ -1007,7 +1007,7 @@ the given labels as soft labels, default <cite>False</cite>.</li> ...@@ -1007,7 +1007,7 @@ the given labels as soft labels, default <cite>False</cite>.</li>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">A 2-D tensor with shape [N x 1], the cross entropy loss.</p> <tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">A 2-D tensor with shape [N x 1], the cross entropy loss.</p>
</td> </td>
</tr> </tr>
<tr class="field-odd field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><cite>ValueError</cite> &#8211; 1) the 1st dimension of <cite>input</cite> and <cite>label</cite> are not equal; 2) when <cite>soft_label == True</cite>, and the 2nd dimension of <cite>input</cite> and <cite>label</cite> are not equal; 3) when <cite>soft_label == False</cite>, and the 2nd dimension of <cite>label</cite> is not 1.</p> <tr class="field-odd field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><cite>ValueError</cite> &#8211; 1) the 1st dimension of <cite>input</cite> and <cite>label</cite> are not equal; 2) when <cite>soft_label == True</cite>, and the 2nd dimension of <cite>input</cite> and <cite>label</cite> are not equal; 3) when <cite>soft_label == False</cite>, and the 2nd dimension of <cite>label</cite> is not 1.</p>
</td> </td>
</tr> </tr>
</tbody> </tbody>
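As a concrete illustration of the shape contract stated above, here is a minimal, hedged usage sketch for the `cross_entropy` layer with hard labels (the default `soft_label=False`); the data-layer names and sizes are hypothetical:

    import paddle.fluid as fluid

    # Hypothetical input layers, for illustration only.
    image = fluid.layers.data(name='image', shape=[784], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')   # [N x 1] class ids

    predict = fluid.layers.fc(input=image, size=10, act='softmax')      # [N x 10] probabilities
    # With soft_label=False (the default), the 2nd dimension of `label` must be 1;
    # the returned `cost` is a 2-D tensor with shape [N x 1].
    cost = fluid.layers.cross_entropy(input=predict, label=label)
    avg_cost = fluid.layers.mean(x=cost)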
...@@ -2020,16 +2020,17 @@ explain how sequence_expand works:</p> ...@@ -2020,16 +2020,17 @@ explain how sequence_expand works:</p>
<dd><p>Lstm unit layer. The equation of a lstm step is:</p> <dd><p>Lstm unit layer. The equation of a lstm step is:</p>
<blockquote> <blockquote>
<div><div class="math"> <div><div class="math">
\[ \begin{align}\begin{aligned}i_t &amp; = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + W_{c_i}c_{t-1} + b_i)\\f_t &amp; = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + W_{c_f}c_{t-1} + b_f)\\c_t &amp; = f_tc_{t-1} + i_t tanh (W_{x_c}x_t+W_{h_c}h_{t-1} + b_c)\\o_t &amp; = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + W_{c_o}c_t + b_o)\\h_t &amp; = o_t tanh(c_t)\end{aligned}\end{align} \]</div> \[ \begin{align}\begin{aligned}i_t &amp; = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i)\\f_t &amp; = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + b_f)\\c_t &amp; = f_tc_{t-1} + i_t tanh (W_{x_c}x_t + W_{h_c}h_{t-1} + b_c)\\o_t &amp; = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + b_o)\\h_t &amp; = o_t tanh(c_t)\end{aligned}\end{align} \]</div>
</div></blockquote> </div></blockquote>
<p>The inputs of lstm unit includes <span class="math">\(x_t\)</span>, <span class="math">\(h_{t-1}\)</span> and <p>The inputs of lstm unit include <span class="math">\(x_t\)</span>, <span class="math">\(h_{t-1}\)</span> and
<span class="math">\(c_{t-1}\)</span>. The implementation separates the linear transformation <span class="math">\(c_{t-1}\)</span>. The 2nd dimensions of <span class="math">\(h_{t-1}\)</span> and <span class="math">\(c_{t-1}\)</span>
and non-linear transformation apart. Here, we take <span class="math">\(i_t\)</span> as an should be same. The implementation separates the linear transformation and
example. The linear transformation is applied by calling a <cite>fc</cite> layer and non-linear transformation apart. Here, we take <span class="math">\(i_t\)</span> as an example.
the equation is:</p> The linear transformation is applied by calling a <cite>fc</cite> layer and the
equation is:</p>
<blockquote> <blockquote>
<div><div class="math"> <div><div class="math">
\[L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + W_{c_i}c_{t-1} + b_i\]</div> \[L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i\]</div>
</div></blockquote> </div></blockquote>
<p>The non-linear transformation is applied by calling <cite>lstm_unit_op</cite> and the <p>The non-linear transformation is applied by calling <cite>lstm_unit_op</cite> and the
equation is:</p> equation is:</p>
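To make the split between the linear `fc` stage and the non-linear `lstm_unit_op` stage concrete, the following is a minimal NumPy sketch of one lstm step using the equations above (the peephole-free form); the fused four-gate weight layout and the placement of `forget_bias` on the forget-gate pre-activation are assumptions made for illustration:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x_t, h_prev, c_prev, W_x, W_h, b, forget_bias=0.0):
        # Linear transformation (the role of the `fc` layer): one fused matmul
        # yields the pre-activations of the i, f, c and o gates, shape (M, 4*S).
        pre = x_t @ W_x + h_prev @ W_h + b
        i, f, c_hat, o = np.split(pre, 4, axis=1)        # each (M, S)
        # Non-linear transformation (the role of `lstm_unit_op`).
        i_t = sigmoid(i)
        f_t = sigmoid(f + forget_bias)                   # forget_bias placement is an assumption
        c_t = f_t * c_prev + i_t * np.tanh(c_hat)
        h_t = sigmoid(o) * np.tanh(c_t)
        return h_t, c_t                                  # both (M, S), same 2nd dimension as h_prev and c_prev

Note that `h_prev` and `c_prev` share the same 2nd dimension S, which is exactly the constraint spelled out in the updated description.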
...@@ -2043,9 +2044,12 @@ equation is:</p> ...@@ -2043,9 +2044,12 @@ equation is:</p>
<col class="field-body" /> <col class="field-body" />
<tbody valign="top"> <tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple"> <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>x_t</strong> (<em>Variable</em>) &#8211; The input value of current step.</li> <li><strong>x_t</strong> (<em>Variable</em>) &#8211; The input value of current step, a 2-D tensor with shape
<li><strong>hidden_t_prev</strong> (<em>Variable</em>) &#8211; The hidden value of lstm unit.</li> M x N, M for batch size and N for input size.</li>
<li><strong>cell_t_prev</strong> (<em>Variable</em>) &#8211; The cell value of lstm unit.</li> <li><strong>hidden_t_prev</strong> (<em>Variable</em>) &#8211; The hidden value of lstm unit, a 2-D tensor
with shape M x S, M for batch size and S for size of lstm unit.</li>
<li><strong>cell_t_prev</strong> (<em>Variable</em>) &#8211; The cell value of lstm unit, a 2-D tensor with
shape M x S, M for batch size and S for size of lstm unit.</li>
<li><strong>forget_bias</strong> (<em>float</em>) &#8211; The forget bias of lstm unit.</li> <li><strong>forget_bias</strong> (<em>float</em>) &#8211; The forget bias of lstm unit.</li>
<li><strong>param_attr</strong> (<em>ParamAttr</em>) &#8211; The attributes of parameter weights, used to set <li><strong>param_attr</strong> (<em>ParamAttr</em>) &#8211; The attributes of parameter weights, used to set
initializer, name etc.</li> initializer, name etc.</li>
...@@ -2060,14 +2064,14 @@ bias weights will be created and be set to default value.</li> ...@@ -2060,14 +2064,14 @@ bias weights will be created and be set to default value.</li>
<tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first">tuple</p> <tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first">tuple</p>
</td> </td>
</tr> </tr>
<tr class="field-even field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><code class="xref py py-exc docutils literal"><span class="pre">ValueError</span></code> &#8211; The ranks of <strong>x_t</strong>, <strong>hidden_t_prev</strong> and <strong>cell_t_prev</strong> not be 2 or the 1st dimensions of <strong>x_t</strong>, <strong>hidden_t_prev</strong> and <strong>cell_t_prev</strong> not be the same.</p> <tr class="field-even field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><code class="xref py py-exc docutils literal"><span class="pre">ValueError</span></code> &#8211; The ranks of <strong>x_t</strong>, <strong>hidden_t_prev</strong> and <strong>cell_t_prev</strong> not be 2 or the 1st dimensions of <strong>x_t</strong>, <strong>hidden_t_prev</strong> and <strong>cell_t_prev</strong> not be the same or the 2nd dimensions of <strong>hidden_t_prev</strong> and <strong>cell_t_prev</strong> not be the same.</p>
</td> </td>
</tr> </tr>
</tbody> </tbody>
</table> </table>
<p class="rubric">Examples</p> <p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">x_t</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">x_t_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">10</span><span class="p">)</span> <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">x_t</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">x_t_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">10</span><span class="p">)</span>
<span class="n">prev_hidden</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">prev_hidden_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">20</span><span class="p">)</span> <span class="n">prev_hidden</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">prev_hidden_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">30</span><span class="p">)</span>
<span class="n">prev_cell</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">prev_cell_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">30</span><span class="p">)</span> <span class="n">prev_cell</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">prev_cell_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">30</span><span class="p">)</span>
<span class="n">hidden_value</span><span class="p">,</span> <span class="n">cell_value</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">lstm_unit</span><span class="p">(</span><span class="n">x_t</span><span class="o">=</span><span class="n">x_t</span><span class="p">,</span> <span class="n">hidden_value</span><span class="p">,</span> <span class="n">cell_value</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">lstm_unit</span><span class="p">(</span><span class="n">x_t</span><span class="o">=</span><span class="n">x_t</span><span class="p">,</span>
<span class="n">hidden_t_prev</span><span class="o">=</span><span class="n">prev_hidden</span><span class="p">,</span> <span class="n">hidden_t_prev</span><span class="o">=</span><span class="n">prev_hidden</span><span class="p">,</span>
......
...@@ -1020,7 +1020,7 @@ the given labels as soft labels, default <cite>False</cite>.</li> ...@@ -1020,7 +1020,7 @@ the given labels as soft labels, default <cite>False</cite>.</li>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first">A 2-D tensor with shape [N x 1], the cross entropy loss.</p> <tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first">A 2-D tensor with shape [N x 1], the cross entropy loss.</p>
</td> </td>
</tr> </tr>
<tr class="field-odd field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><cite>ValueError</cite> &#8211; 1) the 1st dimension of <cite>input</cite> and <cite>label</cite> are not equal; 2) when <cite>soft_label == True</cite>, and the 2nd dimension of <cite>input</cite> and <cite>label</cite> are not equal; 3) when <cite>soft_label == False</cite>, and the 2nd dimension of <cite>label</cite> is not 1.</p> <tr class="field-odd field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><cite>ValueError</cite> &#8211; 1) the 1st dimension of <cite>input</cite> and <cite>label</cite> are not equal; 2) when <cite>soft_label == True</cite>, and the 2nd dimension of <cite>input</cite> and <cite>label</cite> are not equal; 3) when <cite>soft_label == False</cite>, and the 2nd dimension of <cite>label</cite> is not 1.</p>
</td> </td>
</tr> </tr>
</tbody> </tbody>
...@@ -2033,16 +2033,17 @@ explain how sequence_expand works:</p> ...@@ -2033,16 +2033,17 @@ explain how sequence_expand works:</p>
<dd><p>Lstm unit layer. The equation of a lstm step is:</p> <dd><p>Lstm unit layer. The equation of a lstm step is:</p>
<blockquote> <blockquote>
<div><div class="math"> <div><div class="math">
\[ \begin{align}\begin{aligned}i_t &amp; = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + W_{c_i}c_{t-1} + b_i)\\f_t &amp; = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + W_{c_f}c_{t-1} + b_f)\\c_t &amp; = f_tc_{t-1} + i_t tanh (W_{x_c}x_t+W_{h_c}h_{t-1} + b_c)\\o_t &amp; = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + W_{c_o}c_t + b_o)\\h_t &amp; = o_t tanh(c_t)\end{aligned}\end{align} \]</div> \[ \begin{align}\begin{aligned}i_t &amp; = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i)\\f_t &amp; = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + b_f)\\c_t &amp; = f_tc_{t-1} + i_t tanh (W_{x_c}x_t + W_{h_c}h_{t-1} + b_c)\\o_t &amp; = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + b_o)\\h_t &amp; = o_t tanh(c_t)\end{aligned}\end{align} \]</div>
</div></blockquote> </div></blockquote>
<p>The inputs of lstm unit includes <span class="math">\(x_t\)</span>, <span class="math">\(h_{t-1}\)</span> and <p>The inputs of lstm unit include <span class="math">\(x_t\)</span>, <span class="math">\(h_{t-1}\)</span> and
<span class="math">\(c_{t-1}\)</span>. The implementation separates the linear transformation <span class="math">\(c_{t-1}\)</span>. The 2nd dimensions of <span class="math">\(h_{t-1}\)</span> and <span class="math">\(c_{t-1}\)</span>
and non-linear transformation apart. Here, we take <span class="math">\(i_t\)</span> as an should be same. The implementation separates the linear transformation and
example. The linear transformation is applied by calling a <cite>fc</cite> layer and non-linear transformation apart. Here, we take <span class="math">\(i_t\)</span> as an example.
the equation is:</p> The linear transformation is applied by calling a <cite>fc</cite> layer and the
equation is:</p>
<blockquote> <blockquote>
<div><div class="math"> <div><div class="math">
\[L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + W_{c_i}c_{t-1} + b_i\]</div> \[L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i\]</div>
</div></blockquote> </div></blockquote>
<p>The non-linear transformation is applied by calling <cite>lstm_unit_op</cite> and the <p>The non-linear transformation is applied by calling <cite>lstm_unit_op</cite> and the
equation is:</p> equation is:</p>
...@@ -2056,9 +2057,12 @@ equation is:</p> ...@@ -2056,9 +2057,12 @@ equation is:</p>
<col class="field-body" /> <col class="field-body" />
<tbody valign="top"> <tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple"> <tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>x_t</strong> (<em>Variable</em>) &#8211; The input value of current step.</li> <li><strong>x_t</strong> (<em>Variable</em>) &#8211; The input value of current step, a 2-D tensor with shape
<li><strong>hidden_t_prev</strong> (<em>Variable</em>) &#8211; The hidden value of lstm unit.</li> M x N, M for batch size and N for input size.</li>
<li><strong>cell_t_prev</strong> (<em>Variable</em>) &#8211; The cell value of lstm unit.</li> <li><strong>hidden_t_prev</strong> (<em>Variable</em>) &#8211; The hidden value of lstm unit, a 2-D tensor
with shape M x S, M for batch size and S for size of lstm unit.</li>
<li><strong>cell_t_prev</strong> (<em>Variable</em>) &#8211; The cell value of lstm unit, a 2-D tensor with
shape M x S, M for batch size and S for size of lstm unit.</li>
<li><strong>forget_bias</strong> (<em>float</em>) &#8211; The forget bias of lstm unit.</li> <li><strong>forget_bias</strong> (<em>float</em>) &#8211; The forget bias of lstm unit.</li>
<li><strong>param_attr</strong> (<em>ParamAttr</em>) &#8211; The attributes of parameter weights, used to set <li><strong>param_attr</strong> (<em>ParamAttr</em>) &#8211; The attributes of parameter weights, used to set
initializer, name etc.</li> initializer, name etc.</li>
...@@ -2073,14 +2077,14 @@ bias weights will be created and be set to default value.</li> ...@@ -2073,14 +2077,14 @@ bias weights will be created and be set to default value.</li>
<tr class="field-odd field"><th class="field-name">返回类型:</th><td class="field-body"><p class="first">tuple</p> <tr class="field-odd field"><th class="field-name">返回类型:</th><td class="field-body"><p class="first">tuple</p>
</td> </td>
</tr> </tr>
<tr class="field-even field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><code class="xref py py-exc docutils literal"><span class="pre">ValueError</span></code> &#8211; The ranks of <strong>x_t</strong>, <strong>hidden_t_prev</strong> and <strong>cell_t_prev</strong> not be 2 or the 1st dimensions of <strong>x_t</strong>, <strong>hidden_t_prev</strong> and <strong>cell_t_prev</strong> not be the same.</p> <tr class="field-even field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><code class="xref py py-exc docutils literal"><span class="pre">ValueError</span></code> &#8211; The ranks of <strong>x_t</strong>, <strong>hidden_t_prev</strong> and <strong>cell_t_prev</strong> not be 2 or the 1st dimensions of <strong>x_t</strong>, <strong>hidden_t_prev</strong> and <strong>cell_t_prev</strong> not be the same or the 2nd dimensions of <strong>hidden_t_prev</strong> and <strong>cell_t_prev</strong> not be the same.</p>
</td> </td>
</tr> </tr>
</tbody> </tbody>
</table> </table>
<p class="rubric">Examples</p> <p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">x_t</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">x_t_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">10</span><span class="p">)</span> <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">x_t</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">x_t_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">10</span><span class="p">)</span>
<span class="n">prev_hidden</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">prev_hidden_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">20</span><span class="p">)</span> <span class="n">prev_hidden</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">prev_hidden_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">30</span><span class="p">)</span>
<span class="n">prev_cell</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">prev_cell_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">30</span><span class="p">)</span> <span class="n">prev_cell</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">prev_cell_data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">30</span><span class="p">)</span>
<span class="n">hidden_value</span><span class="p">,</span> <span class="n">cell_value</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">lstm_unit</span><span class="p">(</span><span class="n">x_t</span><span class="o">=</span><span class="n">x_t</span><span class="p">,</span> <span class="n">hidden_value</span><span class="p">,</span> <span class="n">cell_value</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">lstm_unit</span><span class="p">(</span><span class="n">x_t</span><span class="o">=</span><span class="n">x_t</span><span class="p">,</span>
<span class="n">hidden_t_prev</span><span class="o">=</span><span class="n">prev_hidden</span><span class="p">,</span> <span class="n">hidden_t_prev</span><span class="o">=</span><span class="n">prev_hidden</span><span class="p">,</span>
......