Commit 6829d94f authored by Travis CI

Deploy to GitHub Pages: 5eb0ebaf

Parent 9361062d
......@@ -188,35 +188,44 @@
<h1>Optimizer<a class="headerlink" href="#optimizer" title="Permalink to this headline"></a></h1>
<div class="section" id="momentum">
<h2>Momentum<a class="headerlink" href="#momentum" title="Permalink to this headline"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">Momentum</code><span class="sig-paren">(</span><em>momentum=None</em>, <em>sparse=False</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>SGD Optimizer.</p>
<p>SGD is an optimization method that iteratively adjusts a neural network to
minimize its &#8220;cost/error&#8221;. In Paddle&#8217;s implementation the SGD optimizer is
synchronized: all gradients are computed and reduced into a single gradient
before the optimization step is applied.</p>
<p>The neural network treats learning as the problem of minimizing an objective
function that has the form of a sum</p>
<dd><p>Momentum Optimizer.</p>
<p>When sparse=False, the momentum update formula is as follows:</p>
<div class="math">
\[Q(w) = \sum_{i}^{n} Q_i(w)\]</div>
<p>The value of the function Q is typically the cost of the neural network (for
example, the mean squared error between prediction and label). Q is
parametrised by w, the weights/biases of the neural network, which are what is
learned. The index i denotes the i-th observation in the (training) data.</p>
<p>So the SGD method optimizes the weights by</p>
\[\begin{split}v_{t} &amp;= k * v_{t-1} - \gamma_t (g_{t} + \lambda w_{t-1}) \\
w_{t} &amp;= w_{t-1} + v_{t} \\\end{split}\]</div>
<p>where <span class="math">\(k\)</span> is the momentum, <span class="math">\(\lambda\)</span> is the decay rate,
<span class="math">\(\gamma_t\)</span> is the learning rate at the t&#8217;th iteration,
<span class="math">\(w_{t}\)</span> is the weight at the t&#8217;th iteration,
and <span class="math">\(v_{t}\)</span> is the momentum history variable.</p>
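<p>For illustration only, a minimal NumPy sketch of the dense update above; the
variable names mirror the symbols in the formula and are not part of the Paddle API:</p>
<pre class="literal-block">
import numpy as np

def momentum_step(w, v, grad, k=0.9, gamma=0.01, lam=1e-4):
    """One dense momentum update: v = k*v - gamma*(g + lam*w); w = w + v."""
    v = k * v - gamma * (grad + lam * w)   # update the momentum history v_t
    w = w + v                              # apply it to the weights w_t
    return w, v

w = np.zeros(3)                    # weights w_{t-1}
v = np.zeros(3)                    # momentum history v_{t-1}
g = np.array([0.5, -1.0, 0.25])    # gradient g_t of the cost w.r.t. w
w, v = momentum_step(w, v, g)
</pre>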
<p>When sparse=True, the update scheme is as follows:</p>
<div class="math">
\[w = w - \eta \nabla Q(w) = w - \eta \sum_{i}^{n} \nabla Q_i(w)\]</div>
<p>where <span class="math">\(\eta\)</span> is the learning rate and <span class="math">\(n\)</span> is the batch size.</p>
\[\begin{split}\alpha_t &amp;= \alpha_{t-1} / k \\
\beta_t &amp;= \beta_{t-1} / (1 + \lambda \gamma_t) \\
u_t &amp;= u_{t-1} - \alpha_t \gamma_t g_t \\
v_t &amp;= v_{t-1} + \tau_{t-1} \alpha_t \gamma_t g_t \\
\tau_t &amp;= \tau_{t-1} + \beta_t / \alpha_t\end{split}\]</div>
<p>where <span class="math">\(k\)</span> is the momentum, <span class="math">\(\lambda\)</span> is the decay rate,
and <span class="math">\(\gamma_t\)</span> is the learning rate at the t&#8217;th iteration.</p>
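<p>A direct NumPy transcription of the sparse (lazy-update) recurrences above, shown only to
make the bookkeeping concrete; the scalar names are the Greek symbols of the formula, not
Paddle identifiers:</p>
<pre class="literal-block">
import numpy as np

def sparse_momentum_step(u, v, alpha, beta, tau, grad, k=0.9, gamma=0.01, lam=1e-4):
    """One step of the sparse update scheme documented above."""
    alpha = alpha / k                    # alpha_t = alpha_{t-1} / k
    beta = beta / (1.0 + lam * gamma)    # beta_t  = beta_{t-1} / (1 + lambda * gamma_t)
    u = u - alpha * gamma * grad         # u_t = u_{t-1} - alpha_t * gamma_t * g_t
    v = v + tau * alpha * gamma * grad   # v_t = v_{t-1} + tau_{t-1} * alpha_t * gamma_t * g_t
    tau = tau + beta / alpha             # tau_t = tau_{t-1} + beta_t / alpha_t
    return u, v, alpha, beta, tau
</pre>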
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple">
<li><strong>momentum</strong> (<em>float</em>) &#8211; the momentum factor.</li>
<li><strong>sparse</strong> (<em>bool</em>) &#8211; whether to enable sparse update support; False by default.</li>
</ul>
</td>
</tr>
</tbody>
</table>
</dd></dl>
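<p>A usage sketch: the learning_rate keyword is assumed to be forwarded through **kwargs to the
shared optimizer settings, and the commented trainer line assumes the usual paddle.v2 setup in
which cost and parameters are defined by the surrounding model:</p>
<pre class="literal-block">
import paddle.v2 as paddle

# Dense momentum (sparse=False is the default); learning_rate is passed
# through **kwargs to the shared optimizer settings (assumed here).
dense_momentum = paddle.optimizer.Momentum(momentum=0.9, learning_rate=0.01)

# Sparse variant for models that use sparse parameter updates.
sparse_momentum = paddle.optimizer.Momentum(momentum=0.9, sparse=True,
                                            learning_rate=0.01)

# The optimizer is then used as the update equation of a v2 trainer, e.g.:
# trainer = paddle.trainer.SGD(cost=cost, parameters=parameters,
#                              update_equation=dense_momentum)
</pre>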
</div>
<div class="section" id="adam">
<h2>Adam<a class="headerlink" href="#adam" title="Permalink to this headline"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">Adam</code><span class="sig-paren">(</span><em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>epsilon=1e-08</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
......@@ -225,7 +234,7 @@ The details of please refer <a class="reference external" href="https://arxiv.or
<div class="math">
\[\begin{split}m(w, t) &amp; = \beta_1 m(w, t-1) + (1 - \beta_1) \nabla Q_i(w) \\
v(w, t) &amp; = \beta_2 v(w, t-1) + (1 - \beta_2)(\nabla Q_i(w)) ^2 \\
w &amp; = w - \frac{\eta}{\sqrt{v(w,t) + \epsilon}}\end{split}\]</div>
w &amp; = w - \frac{\eta m(w, t)}{\sqrt{v(w,t) + \epsilon}}\end{split}\]</div>
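<p>For clarity, a minimal NumPy sketch of the update above, mirroring the formula as written
(no bias correction); this is an illustration, not the Paddle implementation:</p>
<pre class="literal-block">
import numpy as np

def adam_step(w, m, v, grad, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update following the formula above."""
    m = beta1 * m + (1.0 - beta1) * grad        # first-moment estimate m(w, t)
    v = beta2 * v + (1.0 - beta2) * grad ** 2   # second-moment estimate v(w, t)
    w = w - eta * m / np.sqrt(v + eps)          # parameter update
    return w, m, v

w = np.zeros(3)
m = np.zeros(3)
v = np.zeros(3)
g = np.array([0.5, -1.0, 0.25])                 # gradient of the cost w.r.t. w
w, m, v = adam_step(w, m, v, g)
</pre>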
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
......@@ -245,8 +254,6 @@ divided by zero.</li>
</div>
<div class="section" id="adamax">
<h2>Adamax<a class="headerlink" href="#adamax" title="Permalink to this headline"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">Adamax</code><span class="sig-paren">(</span><em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
......@@ -273,8 +280,6 @@ w_t &amp; = w_{t-1} - (\eta/(1-\beta_1^t))*m_t/u_t\end{split}\]</div>
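<p>Only the final Adamax step is shown above; the sketch below is illustrative and assumes the
standard Adamax moment updates (exponential first moment and infinity-norm second moment) for
<span class="math">\(m_t\)</span> and <span class="math">\(u_t\)</span>, which are not shown here:</p>
<pre class="literal-block">
import numpy as np

def adamax_step(w, m, u, t, grad, eta=1e-3, beta1=0.9, beta2=0.999):
    """Adamax step ending in w_t = w_{t-1} - (eta / (1 - beta1**t)) * m_t / u_t.

    The m_t and u_t rules below are the standard Adamax definitions and are
    an assumption here, since they are not shown above.
    """
    m = beta1 * m + (1.0 - beta1) * grad          # exponential first moment
    u = np.maximum(beta2 * u, np.abs(grad))       # infinity-norm second moment
    w = w - (eta / (1.0 - beta1 ** t)) * m / u    # the step shown in the fragment
    return w, m, u
</pre>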
</div>
<div class="section" id="adagrad">
<h2>AdaGrad<a class="headerlink" href="#adagrad" title="Permalink to this headline"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">AdaGrad</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
......@@ -289,8 +294,6 @@ w &amp; = w - \eta diag(G)^{-\frac{1}{2}} \circ g\end{split}\]</div>
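<p>Only the final AdaGrad step is shown above; a minimal NumPy sketch, assuming the accumulator
G sums the element-wise squared gradients as in the standard AdaGrad rule (that line is not
shown here):</p>
<pre class="literal-block">
import numpy as np

def adagrad_step(w, G, grad, eta=0.01, eps=1e-6):
    """AdaGrad step matching the visible fragment: w = w - eta * diag(G)^(-1/2) * g."""
    G = G + grad ** 2                       # standard accumulator (assumed, not shown above)
    w = w - eta * grad / np.sqrt(G + eps)   # eps guards the square root; not in the fragment
    return w, G
</pre>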
</div>
<div class="section" id="decayedadagrad">
<h2>DecayedAdaGrad<a class="headerlink" href="#decayedadagrad" title="Permalink to this headline"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">DecayedAdaGrad</code><span class="sig-paren">(</span><em>rho=0.95</em>, <em>epsilon=1e-06</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
......@@ -316,8 +319,6 @@ learning\_rate &amp;= 1/sqrt( ( E(g_t^2) + \epsilon )\end{split}\]</div>
</div>
<div class="section" id="adadelta">
<h2>AdaDelta<a class="headerlink" href="#adadelta" title="Permalink to this headline"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">AdaDelta</code><span class="sig-paren">(</span><em>rho=0.95</em>, <em>epsilon=1e-06</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
......@@ -345,8 +346,6 @@ E(dx_t^2) &amp;= \rho * E(dx_{t-1}^2) + (1-\rho) * (-g*learning\_rate)^2\end{spl
</div>
<div class="section" id="rmsprop">
<h2>RMSProp<a class="headerlink" href="#rmsprop" title="Permalink to this headline"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">RMSProp</code><span class="sig-paren">(</span><em>rho=0.95</em>, <em>epsilon=1e-06</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
......
The source diff is too large to display. You can view the blob instead.
......@@ -201,35 +201,44 @@
<h1>Optimizer<a class="headerlink" href="#optimizer" title="永久链接至标题"></a></h1>
<div class="section" id="momentum">
<h2>Momentum<a class="headerlink" href="#momentum" title="永久链接至标题"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">Momentum</code><span class="sig-paren">(</span><em>momentum=None</em>, <em>sparse=False</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>SGD Optimizer.</p>
<p>SGD is an optimization method that iteratively adjusts a neural network to
minimize its &#8220;cost/error&#8221;. In Paddle&#8217;s implementation the SGD optimizer is
synchronized: all gradients are computed and reduced into a single gradient
before the optimization step is applied.</p>
<p>The neural network treats learning as the problem of minimizing an objective
function that has the form of a sum</p>
<dd><p>Momentum Optimizer.</p>
<p>When sparse=False, the momentum update formula is as follows:</p>
<div class="math">
\[Q(w) = \sum_{i}^{n} Q_i(w)\]</div>
<p>The value of the function Q is typically the cost of the neural network (for
example, the mean squared error between prediction and label). Q is
parametrised by w, the weights/biases of the neural network, which are what is
learned. The index i denotes the i-th observation in the (training) data.</p>
<p>So the SGD method optimizes the weights by</p>
\[\begin{split}v_{t} &amp;= k * v_{t-1} - \gamma_t (g_{t} + \lambda w_{t-1}) \\
w_{t} &amp;= w_{t-1} + v_{t} \\\end{split}\]</div>
<p>where <span class="math">\(k\)</span> is the momentum, <span class="math">\(\lambda\)</span> is the decay rate,
<span class="math">\(\gamma_t\)</span> is the learning rate at the t&#8217;th iteration,
<span class="math">\(w_{t}\)</span> is the weight at the t&#8217;th iteration,
and <span class="math">\(v_{t}\)</span> is the momentum history variable.</p>
<p>When sparse=True, the update scheme is as follows:</p>
<div class="math">
\[w = w - \eta \nabla Q(w) = w - \eta \sum_{i}^{n} \nabla Q_i(w)\]</div>
<p>where <span class="math">\(\eta\)</span> is the learning rate and <span class="math">\(n\)</span> is the batch size.</p>
\[\begin{split}\alpha_t &amp;= \alpha_{t-1} / k \\
\beta_t &amp;= \beta_{t-1} / (1 + \lambda \gamma_t) \\
u_t &amp;= u_{t-1} - \alpha_t \gamma_t g_t \\
v_t &amp;= v_{t-1} + \tau_{t-1} \alpha_t \gamma_t g_t \\
\tau_t &amp;= \tau_{t-1} + \beta_t / \alpha_t\end{split}\]</div>
<p>where <span class="math">\(k\)</span> is the momentum, <span class="math">\(\lambda\)</span> is the decay rate,
and <span class="math">\(\gamma_t\)</span> is the learning rate at the t&#8217;th iteration.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple">
<li><strong>momentum</strong> (<em>float</em>) &#8211; the momentum factor.</li>
<li><strong>sparse</strong> (<em>bool</em>) &#8211; whether to enable sparse update support; False by default.</li>
</ul>
</td>
</tr>
</tbody>
</table>
</dd></dl>
</div>
<div class="section" id="adam">
<h2>Adam<a class="headerlink" href="#adam" title="永久链接至标题"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">Adam</code><span class="sig-paren">(</span><em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>epsilon=1e-08</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
......@@ -238,7 +247,7 @@ The details of please refer <a class="reference external" href="https://arxiv.or
<div class="math">
\[\begin{split}m(w, t) &amp; = \beta_1 m(w, t-1) + (1 - \beta_1) \nabla Q_i(w) \\
v(w, t) &amp; = \beta_2 v(w, t-1) + (1 - \beta_2)(\nabla Q_i(w)) ^2 \\
w &amp; = w - \frac{\eta}{\sqrt{v(w,t) + \epsilon}}\end{split}\]</div>
w &amp; = w - \frac{\eta m(w, t)}{\sqrt{v(w,t) + \epsilon}}\end{split}\]</div>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
......@@ -258,8 +267,6 @@ divided by zero.</li>
</div>
<div class="section" id="adamax">
<h2>Adamax<a class="headerlink" href="#adamax" title="永久链接至标题"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">Adamax</code><span class="sig-paren">(</span><em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
......@@ -286,8 +293,6 @@ w_t &amp; = w_{t-1} - (\eta/(1-\beta_1^t))*m_t/u_t\end{split}\]</div>
</div>
<div class="section" id="adagrad">
<h2>AdaGrad<a class="headerlink" href="#adagrad" title="永久链接至标题"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">AdaGrad</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
......@@ -302,8 +307,6 @@ w &amp; = w - \eta diag(G)^{-\frac{1}{2}} \circ g\end{split}\]</div>
</div>
<div class="section" id="decayedadagrad">
<h2>DecayedAdaGrad<a class="headerlink" href="#decayedadagrad" title="永久链接至标题"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">DecayedAdaGrad</code><span class="sig-paren">(</span><em>rho=0.95</em>, <em>epsilon=1e-06</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
......@@ -329,8 +332,6 @@ learning\_rate &amp;= 1/sqrt( ( E(g_t^2) + \epsilon )\end{split}\]</div>
</div>
<div class="section" id="adadelta">
<h2>AdaDelta<a class="headerlink" href="#adadelta" title="永久链接至标题"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">AdaDelta</code><span class="sig-paren">(</span><em>rho=0.95</em>, <em>epsilon=1e-06</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
......@@ -358,8 +359,6 @@ E(dx_t^2) &amp;= \rho * E(dx_{t-1}^2) + (1-\rho) * (-g*learning\_rate)^2\end{spl
</div>
<div class="section" id="rmsprop">
<h2>RMSProp<a class="headerlink" href="#rmsprop" title="永久链接至标题"></a></h2>
<p>Optimizers (update equations) for the SGD method.</p>
<p>TODO(yuyang18): Complete comments.</p>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.optimizer.</code><code class="descname">RMSProp</code><span class="sig-paren">(</span><em>rho=0.95</em>, <em>epsilon=1e-06</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
......
The source diff is too large to display. You can view the blob instead.