Commit a843dc9a authored by Travis CI

Deploy to GitHub Pages: e1611eb4

Parent 7bef0ebd
...@@ -102,6 +102,12 @@ We provide a packaged book image, simply issue the command:
docker run -p 8888:8888 paddlepaddle/book
For users in China, we provide a faster mirror:
.. code-block:: bash
docker run -p 8888:8888 docker.paddlepaddlehub.com/book
Then, copy and paste the address into your local browser:
.. code-block:: text
......
...@@ -222,11 +222,18 @@
<h2>Optimizer<a class="headerlink" href="#id1" title="Permalink to this headline"></a></h2>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">Optimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>global_step=None</em>, <em>regularization=None</em><span class="sig-paren">)</span></dt>
<dd><p>Optimizer Base class.</p>
<p>Define the common interface of an optimizer.
User should not use this class directly,
but should use one of its implementations.</p>
<dl class="attribute">
<dt>
<code class="descname">global_learning_rate</code></dt>
<dd><p>Get the global decayed learning rate.</p>
</dd></dl>
<dl class="method"> <dl class="method">
<dt> <dt>
<code class="descname">create_optimization_pass</code><span class="sig-paren">(</span><em>parameters_and_grads</em>, <em>loss</em>, <em>startup_program=None</em><span class="sig-paren">)</span></dt> <code class="descname">create_optimization_pass</code><span class="sig-paren">(</span><em>parameters_and_grads</em>, <em>loss</em>, <em>startup_program=None</em><span class="sig-paren">)</span></dt>
......
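The change above adds a required learning_rate argument to the Optimizer base class constructor. As the docstring says, user code goes through one of the concrete subclasses rather than this class; the following is a minimal sketch of such usage, under the assumption that the paddle.v2.fluid layer and optimizer APIs of this release (fluid.layers.data, fluid.layers.fc, fluid.optimizer.SGDOptimizer, Optimizer.minimize) behave as named here:

.. code-block:: python

    import paddle.v2.fluid as fluid

    # A tiny linear-regression program, only to give the optimizer a loss to minimize.
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.data(name='y', shape=[1], dtype='float32')
    y_pred = fluid.layers.fc(input=x, size=1)
    cost = fluid.layers.square_error_cost(input=y_pred, label=y)
    avg_cost = fluid.layers.mean(x=cost)

    # learning_rate is now passed to the constructor; global_step and
    # regularization keep the defaults from the base-class signature above.
    sgd = fluid.optimizer.SGDOptimizer(learning_rate=0.01)
    sgd.minimize(avg_cost)  # builds the optimization pass for the default program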
...@@ -301,6 +301,9 @@ dig deeper into deep learning, PaddlePaddle Book definitely is your best choice.
</pre></div>
</div>
</div></blockquote>
<p>For users in China, we provide a faster mirror:</p>
<blockquote>
<div></div></blockquote>
<p>Then, copy and paste the address into your local browser:</p>
<blockquote>
<div><div class="highlight-text"><div class="highlight"><pre><span></span>http://localhost:8888/
......
...@@ -1334,6 +1334,24 @@
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "abs",
"comment" : "\nAbs Activation Operator.\n\n$out = |x|$\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "Input of Abs operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "Output of Abs operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "softmax",
"comment" : "\nSoftmax Operator.\n\nThe input of the softmax operator is a 2-D tensor with shape N x K (N is the\nbatch_size, K is the dimension of input feature). The output tensor has the\nsame shape as the input tensor.\n\nFor each row of the input tensor, the softmax operator squashes the\nK-dimensional vector of arbitrary real values to a K-dimensional vector of real\nvalues in the range [0, 1] that add up to 1.\nIt computes the exponential of the given dimension and the sum of exponential\nvalues of all the other dimensions in the K-dimensional vector input.\nThen the ratio of the exponential of the given dimension and the sum of\nexponential values of all the other dimensions is the output of the softmax\noperator.\n\nFor each row $i$ and each column $j$ in Input(X), we have:\n $$Out[i, j] = \\frac{\\exp(X[i, j])}{\\sum_j(exp(X[i, j])}$$\n\n", "comment" : "\nSoftmax Operator.\n\nThe input of the softmax operator is a 2-D tensor with shape N x K (N is the\nbatch_size, K is the dimension of input feature). The output tensor has the\nsame shape as the input tensor.\n\nFor each row of the input tensor, the softmax operator squashes the\nK-dimensional vector of arbitrary real values to a K-dimensional vector of real\nvalues in the range [0, 1] that add up to 1.\nIt computes the exponential of the given dimension and the sum of exponential\nvalues of all the other dimensions in the K-dimensional vector input.\nThen the ratio of the exponential of the given dimension and the sum of\nexponential values of all the other dimensions is the output of the softmax\noperator.\n\nFor each row $i$ and each column $j$ in Input(X), we have:\n $$Out[i, j] = \\frac{\\exp(X[i, j])}{\\sum_j(exp(X[i, j])}$$\n\n",
...@@ -1575,6 +1593,35 @@
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "elementwise_pow",
"comment" : "\nLimited Elementwise Pow Operator.\n\nThe equation is:\n\n$$Out = X ^ Y$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "(Tensor), The first input tensor of elementwise op.",
"duplicable" : 0,
"intermediate" : 0
}, {
"name" : "Y",
"comment" : "(Tensor), The second input tensor of elementwise op.",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "The output of elementwise op.",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [
{
"name" : "axis",
"type" : "int",
"comment" : "(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated" : 0
} ]
},{
"type" : "proximal_gd",
"comment" : "\nProximalGD Operator.\n\nOptimizer that implements the proximal gradient descent algorithm:\n\n$$\nprox\\_param = param - learning\\_rate * grad \\\\\nparam = sign(prox\\_param) / (1 + learning\\_rate * l2) *\n \\max(|prox\\_param| - learning\\_rate * l1, 0)\n$$ \n\nThe paper that proposed Proximal Gradient Descent:\n(http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting.pdf)\n\n",
...@@ -3192,24 +3239,6 @@
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "abs",
"comment" : "\nAbs Activation Operator.\n\n$out = |x|$\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "Input of Abs operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "Output of Abs operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "softplus",
"comment" : "\nSoftplus Activation Operator.\n\n$out = \\ln(1 + e^{x})$\n\n",
......
This file is too large to display the source diff. You can view the blob instead.
...@@ -95,6 +95,12 @@ PaddlePaddle Book is an interactive Jupyter Notebook made for users and developers
docker run -p 8888:8888 paddlepaddle/book
Users in China can use the following mirror for faster access:
.. code-block:: bash
docker run -p 8888:8888 docker.paddlepaddlehub.com/book
Then enter the following URL in your browser:
.. code-block:: text
......
...@@ -241,11 +241,18 @@
<h2>Optimizer<a class="headerlink" href="#id1" title="Permalink to this headline"></a></h2>
<dl class="class">
<dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">Optimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>global_step=None</em>, <em>regularization=None</em><span class="sig-paren">)</span></dt>
<dd><p>Optimizer Base class.</p>
<p>Define the common interface of an optimizer.
User should not use this class directly,
but should use one of its implementations.</p>
<dl class="attribute">
<dt>
<code class="descname">global_learning_rate</code></dt>
<dd><p>Get the global decayed learning rate.</p>
</dd></dl>
<dl class="method"> <dl class="method">
<dt> <dt>
<code class="descname">create_optimization_pass</code><span class="sig-paren">(</span><em>parameters_and_grads</em>, <em>loss</em>, <em>startup_program=None</em><span class="sig-paren">)</span></dt> <code class="descname">create_optimization_pass</code><span class="sig-paren">(</span><em>parameters_and_grads</em>, <em>loss</em>, <em>startup_program=None</em><span class="sig-paren">)</span></dt>
......
...@@ -314,6 +314,9 @@ PaddlePaddle Book is an interactive Jupyter Notebook made for users and developers
</pre></div>
</div>
</div></blockquote>
<p>Users in China can use the following mirror for faster access:</p>
<blockquote>
<div></div></blockquote>
<p>Then enter the following URL in your browser:</p>
<blockquote>
<div><div class="highlight-text"><div class="highlight"><pre><span></span>http://localhost:8888/
......
This file is too large to display the source diff. You can view the blob instead.