Commit 3f83d0d5 authored by T Travis CI

Deploy to GitHub Pages: 2983939b

Parent 3403c070
......@@ -432,6 +432,12 @@ multi_binary_label_cross_entropy
:members: multi_binary_label_cross_entropy
:noindex:
mse_cost
---------
.. automodule:: paddle.trainer_config_helpers.layers
:members: mse_cost
:noindex:
huber_cost
----------
.. automodule:: paddle.trainer_config_helpers.layers
......@@ -450,6 +456,12 @@ rank_cost
:members: rank_cost
:noindex:
sum_cost
---------
.. automodule:: paddle.trainer_config_helpers.layers
:members: sum_cost
:noindex:
crf_layer
-----------------
.. automodule:: paddle.trainer_config_helpers.layers
......@@ -486,12 +498,6 @@ hsigmoid
:members: hsigmoid
:noindex:
sum_cost
---------
.. automodule:: paddle.trainer_config_helpers.layers
:members: sum_cost
:noindex:
Check Layer
============
......
......@@ -49,7 +49,7 @@ To recover this relationship between ``X`` and ``Y``, we use a neural network wi
x = data_layer(name='x', size=1)
y = data_layer(name='y', size=1)
y_predict = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b'))
cost = regression_cost(input=y_predict, label=y)
cost = mse_cost(input=y_predict, label=y)
outputs(cost)
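As a plain-Python illustration of what this configuration computes (a sketch only — `w` and `b` stand in for the parameters declared via `ParamAttr`, and none of these helper functions are PaddlePaddle APIs):

```python
# Sketch of the forward pass described by the config above:
# y_predict = w * x + b, followed by a mean-squared-error cost.
# w and b are stand-ins for the parameters named via ParamAttr.
def forward(x, w, b):
    return w * x + b

def mse(y_predict, y):
    # squared error for a single (prediction, label) pair
    return (y_predict - y) ** 2

w, b = 2.0, 0.3
x, y = 1.5, 3.3           # one training pair
y_hat = forward(x, w, b)  # 2.0 * 1.5 + 0.3 = 3.3
print(mse(y_hat, y))      # 0.0 -- prediction matches the label
```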
Some of the most fundamental usages of PaddlePaddle are demonstrated:
......
......@@ -276,16 +276,17 @@
<li><a class="reference internal" href="#cross-entropy">cross_entropy</a></li>
<li><a class="reference internal" href="#cross-entropy-with-selfnorm">cross_entropy_with_selfnorm</a></li>
<li><a class="reference internal" href="#multi-binary-label-cross-entropy">multi_binary_label_cross_entropy</a></li>
<li><a class="reference internal" href="#mse-cost">mse_cost</a></li>
<li><a class="reference internal" href="#huber-cost">huber_cost</a></li>
<li><a class="reference internal" href="#lambda-cost">lambda_cost</a></li>
<li><a class="reference internal" href="#rank-cost">rank_cost</a></li>
<li><a class="reference internal" href="#sum-cost">sum_cost</a></li>
<li><a class="reference internal" href="#crf-layer">crf_layer</a></li>
<li><a class="reference internal" href="#crf-decoding-layer">crf_decoding_layer</a></li>
<li><a class="reference internal" href="#ctc-layer">ctc_layer</a></li>
<li><a class="reference internal" href="#warp-ctc-layer">warp_ctc_layer</a></li>
<li><a class="reference internal" href="#nce-layer">nce_layer</a></li>
<li><a class="reference internal" href="#hsigmoid">hsigmoid</a></li>
<li><a class="reference internal" href="#sum-cost">sum_cost</a></li>
</ul>
</li>
<li><a class="reference internal" href="#check-layer">Check Layer</a><ul>
......@@ -3035,6 +3036,55 @@ Input should be a vector of positive numbers, without normalization.</p>
</table>
</dd></dl>
</div>
<div class="section" id="mse-cost">
<h3>mse_cost<a class="headerlink" href="#mse-cost" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.trainer_config_helpers.layers.</code><code class="descname">mse_cost</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><blockquote>
<div><p>mean squared error cost:</p>
<div class="math">
\[\frac{1}{N}\sum_{i=1}^{N}(t_i - y_i)^2\]</div>
</div></blockquote>
<blockquote>
<div><table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">param name:</th><td class="field-body">layer name.</td>
</tr>
<tr class="field-even field"><th class="field-name">type name:</th><td class="field-body">basestring</td>
</tr>
<tr class="field-odd field"><th class="field-name">param input:</th><td class="field-body">Network prediction.</td>
</tr>
<tr class="field-even field"><th class="field-name">type input:</th><td class="field-body">LayerOutput</td>
</tr>
<tr class="field-odd field"><th class="field-name">param label:</th><td class="field-body">Data label.</td>
</tr>
<tr class="field-even field"><th class="field-name">type label:</th><td class="field-body">LayerOutput</td>
</tr>
<tr class="field-odd field"><th class="field-name">param weight:</th><td class="field-body">The weight affects the cost, namely the scale of cost.
It is an optional argument.</td>
</tr>
<tr class="field-even field"><th class="field-name">type weight:</th><td class="field-body">LayerOutput</td>
</tr>
<tr class="field-odd field"><th class="field-name" colspan="2">param layer_attr:</th></tr>
<tr class="field-odd field"><td>&#160;</td><td class="field-body">layer&#8217;s extra attribute.</td>
</tr>
<tr class="field-even field"><th class="field-name" colspan="2">type layer_attr:</th></tr>
<tr class="field-even field"><td>&#160;</td><td class="field-body">ExtraLayerAttribute</td>
</tr>
<tr class="field-odd field"><th class="field-name">return:</th><td class="field-body">LayerOutput object.</td>
</tr>
<tr class="field-even field"><th class="field-name">rtype:</th><td class="field-body">LayerOutput</td>
</tr>
</tbody>
</table>
</div></blockquote>
</dd></dl>
</div>
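As a plain-Python check of the formula above (an illustrative sketch, not a PaddlePaddle API), mse_cost averages the squared differences between labels and predictions over the batch:

```python
# Mean squared error: (1/N) * sum_i (t_i - y_i)^2
# Hypothetical helper for illustration only; not part of paddle.
def mean_squared_error(labels, predictions):
    assert len(labels) == len(predictions)
    n = len(labels)
    return sum((t - y) ** 2 for t, y in zip(labels, predictions)) / n

print(mean_squared_error([1.0, 2.0, 5.0], [1.0, 2.0, 3.0]))  # (0 + 0 + 4) / 3
```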
<div class="section" id="huber-cost">
<h3>huber_cost<a class="headerlink" href="#huber-cost" title="Permalink to this headline"></a></h3>
......@@ -3170,6 +3220,37 @@ It is an optional argument.</li>
</table>
</dd></dl>
</div>
<div class="section" id="sum-cost">
<h3>sum_cost<a class="headerlink" href="#sum-cost" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.trainer_config_helpers.layers.</code><code class="descname">sum_cost</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>A loss layer which calculates the sum of its input as the loss</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">cost</span> <span class="o">=</span> <span class="n">sum_cost</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">input_layer</span><span class="p">)</span>
</pre></div>
</div>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>LayerOutput.</em>) &#8211; The first input layer.</li>
<li><strong>name</strong> (<em>None|basestring.</em>) &#8211; The name of this layer. It is optional.</li>
<li><strong>layer_attr</strong> (<a class="reference internal" href="attrs.html#paddle.trainer_config_helpers.attrs.ExtraLayerAttribute" title="paddle.trainer_config_helpers.attrs.ExtraLayerAttribute"><em>ExtraLayerAttribute</em></a>) &#8211; Extra Layer Attribute.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">LayerOutput object.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first last">LayerOutput.</p>
</td>
</tr>
</tbody>
</table>
</dd></dl>
</div>
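For contrast with mse_cost, the semantics of sum_cost are simply the sum of the input values (again a plain-Python sketch, not the paddle implementation):

```python
# sum_cost: the loss is the plain sum of the input values.
def sum_cost_values(values):
    return sum(values)

print(sum_cost_values([0.5, 1.25, 3.25]))  # 5.0
```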
<div class="section" id="crf-layer">
<h3>crf_layer<a class="headerlink" href="#crf-layer" title="Permalink to this headline"></a></h3>
......@@ -3446,37 +3527,6 @@ False means no bias.</li>
</table>
</dd></dl>
</div>
<div class="section" id="sum-cost">
<h3>sum_cost<a class="headerlink" href="#sum-cost" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.trainer_config_helpers.layers.</code><code class="descname">sum_cost</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>A loss layer which calculates the sum of its input as the loss</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">cost</span> <span class="o">=</span> <span class="n">sum_cost</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">input_layer</span><span class="p">)</span>
</pre></div>
</div>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>LayerOutput.</em>) &#8211; The first input layer.</li>
<li><strong>name</strong> (<em>None|basestring.</em>) &#8211; The name of this layer. It is optional.</li>
<li><strong>layer_attr</strong> (<a class="reference internal" href="attrs.html#paddle.trainer_config_helpers.attrs.ExtraLayerAttribute" title="paddle.trainer_config_helpers.attrs.ExtraLayerAttribute"><em>ExtraLayerAttribute</em></a>) &#8211; Extra Layer Attribute.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">LayerOutput object.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first last">LayerOutput.</p>
</td>
</tr>
</tbody>
</table>
</dd></dl>
</div>
</div>
<div class="section" id="check-layer">
......
......@@ -256,7 +256,7 @@
<span class="n">x</span> <span class="o">=</span> <span class="n">data_layer</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s1">&#39;x&#39;</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">data_layer</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s1">&#39;y&#39;</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="n">y_predict</span> <span class="o">=</span> <span class="n">fc_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">x</span><span class="p">,</span> <span class="n">param_attr</span><span class="o">=</span><span class="n">ParamAttr</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s1">&#39;w&#39;</span><span class="p">),</span> <span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">act</span><span class="o">=</span><span class="n">LinearActivation</span><span class="p">(),</span> <span class="n">bias_attr</span><span class="o">=</span><span class="n">ParamAttr</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s1">&#39;b&#39;</span><span class="p">))</span>
<span class="n">cost</span> <span class="o">=</span> <span class="n">regression_cost</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">y_predict</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">y</span><span class="p">)</span>
<span class="n">cost</span> <span class="o">=</span> <span class="n">mse_cost</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">y_predict</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">y</span><span class="p">)</span>
<span class="n">outputs</span><span class="p">(</span><span class="n">cost</span><span class="p">)</span>
</pre></div>
</div>
......
This diff is collapsed.
......@@ -623,10 +623,7 @@ cp ml-1m/ratings.dat.test .
<span class="n">user_feature</span> <span class="o">=</span> <span class="n">construct_feature</span><span class="p">(</span><span class="s2">&quot;user&quot;</span><span class="p">)</span>
<span class="n">similarity</span> <span class="o">=</span> <span class="n">cos_sim</span><span class="p">(</span><span class="n">a</span><span class="o">=</span><span class="n">movie_feature</span><span class="p">,</span> <span class="n">b</span><span class="o">=</span><span class="n">user_feature</span><span class="p">)</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">is_predict</span><span class="p">:</span>
<span class="n">outputs</span><span class="p">(</span>
<span class="n">regression_cost</span><span class="p">(</span>
<span class="nb">input</span><span class="o">=</span><span class="n">similarity</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">data_layer</span><span class="p">(</span>
<span class="s1">&#39;rating&#39;</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">)))</span>
<span class="n">outputs</span><span class="p">(</span><span class="n">mse_cost</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">similarity</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">data_layer</span><span class="p">(</span><span class="s1">&#39;rating&#39;</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">)))</span>
<span class="n">define_py_data_sources2</span><span class="p">(</span>
<span class="s1">&#39;data/train.list&#39;</span><span class="p">,</span>
......
......@@ -432,6 +432,12 @@ multi_binary_label_cross_entropy
:members: multi_binary_label_cross_entropy
:noindex:
mse_cost
---------
.. automodule:: paddle.trainer_config_helpers.layers
:members: mse_cost
:noindex:
huber_cost
----------
.. automodule:: paddle.trainer_config_helpers.layers
......@@ -450,6 +456,12 @@ rank_cost
:members: rank_cost
:noindex:
sum_cost
---------
.. automodule:: paddle.trainer_config_helpers.layers
:members: sum_cost
:noindex:
crf_layer
-----------------
.. automodule:: paddle.trainer_config_helpers.layers
......@@ -486,12 +498,6 @@ hsigmoid
:members: hsigmoid
:noindex:
sum_cost
---------
.. automodule:: paddle.trainer_config_helpers.layers
:members: sum_cost
:noindex:
Check Layer
============
......
......@@ -55,7 +55,7 @@ PaddlePaddle is a deep learning platform that originated at Baidu. This brief introduction
# Linear computation layer: ȳ = wx + b
ȳ = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b'))
# Compute the cost function, i.e. the distance between ȳ and the true y
cost = regression_cost(input= ȳ, label=y)
cost = mse_cost(input= ȳ, label=y)
outputs(cost)
......@@ -69,7 +69,7 @@ PaddlePaddle is a deep learning platform that originated at Baidu. This brief introduction
- **Data layer**: The data layer `data_layer` is the entry point of the neural network; it reads in data and passes it on to the subsequent layers. There are two data layers here, corresponding to the variables `x` and `y`.
- **Fully connected layer**: The fully connected layer `fc_layer` is the basic computation unit; here it is used to model the linear relationship between the variables. Computation units are the core of a neural network. PaddlePaddle supports a large number of computation units and network connections of arbitrary depth, so it can fit arbitrary functions to learn complex data relationships.
- **Regression cost layer**: The regression cost layer `regression_cost` is one of many cost function layers. During training they serve as the network's output, computing the model error, which is the objective function for optimizing the model parameters.
- **Regression cost layer**: The regression cost layer `mse_cost` is one of many cost function layers. During training they serve as the network's output, computing the model error, which is the objective function for optimizing the model parameters.
After defining the network structure and saving it as `trainer_config.py`, run the following training command:
......
......@@ -213,7 +213,7 @@ I1116 09:10:17.123440 50 Util.cpp:130] Calling runInitFunctions
I1116 09:10:17.123764 50 Util.cpp:143] Call runInitFunctions done.
[WARNING 2016-11-16 09:10:17,227 default_decorators.py:40] please use keyword arguments in paddle config.
[INFO 2016-11-16 09:10:17,239 networks.py:1282] The input order is [movie_id, title, genres, user_id, gender, age, occupation, rating]
[INFO 2016-11-16 09:10:17,239 networks.py:1289] The output order is [__regression_cost_0__]
[INFO 2016-11-16 09:10:17,239 networks.py:1289] The output order is [__mse_cost_0__]
I1116 09:10:17.392917 50 Trainer.cpp:170] trainer mode: Normal
I1116 09:10:17.613910 50 PyDataProvider2.cpp:257] loading dataprovider dataprovider::process
I1116 09:10:17.680917 50 PyDataProvider2.cpp:257] loading dataprovider dataprovider::process
......
......@@ -283,16 +283,17 @@
<li><a class="reference internal" href="#cross-entropy">cross_entropy</a></li>
<li><a class="reference internal" href="#cross-entropy-with-selfnorm">cross_entropy_with_selfnorm</a></li>
<li><a class="reference internal" href="#multi-binary-label-cross-entropy">multi_binary_label_cross_entropy</a></li>
<li><a class="reference internal" href="#mse-cost">mse_cost</a></li>
<li><a class="reference internal" href="#huber-cost">huber_cost</a></li>
<li><a class="reference internal" href="#lambda-cost">lambda_cost</a></li>
<li><a class="reference internal" href="#rank-cost">rank_cost</a></li>
<li><a class="reference internal" href="#sum-cost">sum_cost</a></li>
<li><a class="reference internal" href="#crf-layer">crf_layer</a></li>
<li><a class="reference internal" href="#crf-decoding-layer">crf_decoding_layer</a></li>
<li><a class="reference internal" href="#ctc-layer">ctc_layer</a></li>
<li><a class="reference internal" href="#warp-ctc-layer">warp_ctc_layer</a></li>
<li><a class="reference internal" href="#nce-layer">nce_layer</a></li>
<li><a class="reference internal" href="#hsigmoid">hsigmoid</a></li>
<li><a class="reference internal" href="#sum-cost">sum_cost</a></li>
</ul>
</li>
<li><a class="reference internal" href="#check-layer">Check Layer</a><ul>
......@@ -3042,6 +3043,55 @@ Input should be a vector of positive numbers, without normalization.</p>
</table>
</dd></dl>
</div>
<div class="section" id="mse-cost">
<h3>mse_cost<a class="headerlink" href="#mse-cost" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.trainer_config_helpers.layers.</code><code class="descname">mse_cost</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><blockquote>
<div><p>mean squared error cost:</p>
<div class="math">
\[\frac{1}{N}\sum_{i=1}^{N}(t_i - y_i)^2\]</div>
</div></blockquote>
<blockquote>
<div><table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">param name:</th><td class="field-body">layer name.</td>
</tr>
<tr class="field-even field"><th class="field-name">type name:</th><td class="field-body">basestring</td>
</tr>
<tr class="field-odd field"><th class="field-name">param input:</th><td class="field-body">Network prediction.</td>
</tr>
<tr class="field-even field"><th class="field-name">type input:</th><td class="field-body">LayerOutput</td>
</tr>
<tr class="field-odd field"><th class="field-name">param label:</th><td class="field-body">Data label.</td>
</tr>
<tr class="field-even field"><th class="field-name">type label:</th><td class="field-body">LayerOutput</td>
</tr>
<tr class="field-odd field"><th class="field-name">param weight:</th><td class="field-body">The weight affects the cost, namely the scale of cost.
It is an optional argument.</td>
</tr>
<tr class="field-even field"><th class="field-name">type weight:</th><td class="field-body">LayerOutput</td>
</tr>
<tr class="field-odd field"><th class="field-name" colspan="2">param layer_attr:</th></tr>
<tr class="field-odd field"><td>&#160;</td><td class="field-body">layer&#8217;s extra attribute.</td>
</tr>
<tr class="field-even field"><th class="field-name" colspan="2">type layer_attr:</th></tr>
<tr class="field-even field"><td>&#160;</td><td class="field-body">ExtraLayerAttribute</td>
</tr>
<tr class="field-odd field"><th class="field-name">return:</th><td class="field-body">LayerOutput object.</td>
</tr>
<tr class="field-even field"><th class="field-name">rtype:</th><td class="field-body">LayerOutput</td>
</tr>
</tbody>
</table>
</div></blockquote>
</dd></dl>
</div>
<div class="section" id="huber-cost">
<h3>huber_cost<a class="headerlink" href="#huber-cost" title="Permalink to this headline"></a></h3>
......@@ -3177,6 +3227,37 @@ It is an optional argument.</li>
</table>
</dd></dl>
</div>
<div class="section" id="sum-cost">
<h3>sum_cost<a class="headerlink" href="#sum-cost" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.trainer_config_helpers.layers.</code><code class="descname">sum_cost</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>A loss layer which calculates the sum of its input as the loss</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">cost</span> <span class="o">=</span> <span class="n">sum_cost</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">input_layer</span><span class="p">)</span>
</pre></div>
</div>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>LayerOutput.</em>) &#8211; The first input layer.</li>
<li><strong>name</strong> (<em>None|basestring.</em>) &#8211; The name of this layers. It is not necessary.</li>
<li><strong>layer_attr</strong> (<a class="reference internal" href="attrs.html#paddle.trainer_config_helpers.attrs.ExtraLayerAttribute" title="paddle.trainer_config_helpers.attrs.ExtraLayerAttribute"><em>ExtraLayerAttribute</em></a>) &#8211; Extra Layer Attribute.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first">LayerOutput object.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">返回类型:</th><td class="field-body"><p class="first last">LayerOutput.</p>
</td>
</tr>
</tbody>
</table>
</dd></dl>
</div>
<div class="section" id="crf-layer">
<h3>crf_layer<a class="headerlink" href="#crf-layer" title="Permalink to this headline"></a></h3>
......@@ -3453,37 +3534,6 @@ False means no bias.</li>
</table>
</dd></dl>
</div>
<div class="section" id="sum-cost">
<h3>sum_cost<a class="headerlink" href="#sum-cost" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.trainer_config_helpers.layers.</code><code class="descname">sum_cost</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>A loss layer which calculates the sum of its input as the loss</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">cost</span> <span class="o">=</span> <span class="n">sum_cost</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">input_layer</span><span class="p">)</span>
</pre></div>
</div>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>LayerOutput.</em>) &#8211; The first input layer.</li>
<li><strong>name</strong> (<em>None|basestring.</em>) &#8211; The name of this layers. It is not necessary.</li>
<li><strong>layer_attr</strong> (<a class="reference internal" href="attrs.html#paddle.trainer_config_helpers.attrs.ExtraLayerAttribute" title="paddle.trainer_config_helpers.attrs.ExtraLayerAttribute"><em>ExtraLayerAttribute</em></a>) &#8211; Extra Layer Attribute.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first">LayerOutput object.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">返回类型:</th><td class="field-body"><p class="first last">LayerOutput.</p>
</td>
</tr>
</tbody>
</table>
</dd></dl>
</div>
</div>
<div class="section" id="check-layer">
......
......@@ -264,7 +264,7 @@ y = data_layer(name=&#39;y&#39;, size=1)
# Linear computation layer: ȳ = wx + b
ȳ = fc_layer(input=x, param_attr=ParamAttr(name=&#39;w&#39;), size=1, act=LinearActivation(), bias_attr=ParamAttr(name=&#39;b&#39;))
# Compute the cost function, i.e. the distance between ȳ and the true y
cost = regression_cost(input= ȳ, label=y)
cost = mse_cost(input= ȳ, label=y)
outputs(cost)
</pre></div>
</div>
......@@ -279,7 +279,7 @@ outputs(cost)
<div><ul class="simple">
<li><strong>Data layer</strong>: The data layer <cite>data_layer</cite> is the entry point of the neural network; it reads in data and passes it on to the subsequent layers. There are two data layers here, corresponding to the variables <cite>x</cite> and <cite>y</cite>.</li>
<li><strong>Fully connected layer</strong>: The fully connected layer <cite>fc_layer</cite> is the basic computation unit; here it is used to model the linear relationship between the variables. Computation units are the core of a neural network. PaddlePaddle supports a large number of computation units and network connections of arbitrary depth, so it can fit arbitrary functions to learn complex data relationships.</li>
<li><strong>Regression cost layer</strong>: The regression cost layer <cite>regression_cost</cite> is one of many cost function layers. During training they serve as the network&#8217;s output, computing the model error, which is the objective function for optimizing the model parameters.</li>
<li><strong>Regression cost layer</strong>: The regression cost layer <cite>mse_cost</cite> is one of many cost function layers. During training they serve as the network&#8217;s output, computing the model error, which is the objective function for optimizing the model parameters.</li>
</ul>
</div></blockquote>
</li>
......
......@@ -414,7 +414,7 @@ I1116 <span class="m">09</span>:10:17.123440 <span class="m">50</span> Util.c
I1116 <span class="m">09</span>:10:17.123764 <span class="m">50</span> Util.cpp:143<span class="o">]</span> Call runInitFunctions <span class="k">done</span>.
<span class="o">[</span>WARNING <span class="m">2016</span>-11-16 <span class="m">09</span>:10:17,227 default_decorators.py:40<span class="o">]</span> please use keyword arguments in paddle config.
<span class="o">[</span>INFO <span class="m">2016</span>-11-16 <span class="m">09</span>:10:17,239 networks.py:1282<span class="o">]</span> The input order is <span class="o">[</span>movie_id, title, genres, user_id, gender, age, occupation, rating<span class="o">]</span>
<span class="o">[</span>INFO <span class="m">2016</span>-11-16 <span class="m">09</span>:10:17,239 networks.py:1289<span class="o">]</span> The output order is <span class="o">[</span>__regression_cost_0__<span class="o">]</span>
<span class="o">[</span>INFO <span class="m">2016</span>-11-16 <span class="m">09</span>:10:17,239 networks.py:1289<span class="o">]</span> The output order is <span class="o">[</span>__mse_cost_0__<span class="o">]</span>
I1116 <span class="m">09</span>:10:17.392917 <span class="m">50</span> Trainer.cpp:170<span class="o">]</span> trainer mode: Normal
I1116 <span class="m">09</span>:10:17.613910 <span class="m">50</span> PyDataProvider2.cpp:257<span class="o">]</span> loading dataprovider dataprovider::process
I1116 <span class="m">09</span>:10:17.680917 <span class="m">50</span> PyDataProvider2.cpp:257<span class="o">]</span> loading dataprovider dataprovider::process
......
This diff is collapsed.
......@@ -632,10 +632,7 @@ cp ml-1m/ratings.dat.test .
<span class="n">user_feature</span> <span class="o">=</span> <span class="n">construct_feature</span><span class="p">(</span><span class="s2">&quot;user&quot;</span><span class="p">)</span>
<span class="n">similarity</span> <span class="o">=</span> <span class="n">cos_sim</span><span class="p">(</span><span class="n">a</span><span class="o">=</span><span class="n">movie_feature</span><span class="p">,</span> <span class="n">b</span><span class="o">=</span><span class="n">user_feature</span><span class="p">)</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">is_predict</span><span class="p">:</span>
<span class="n">outputs</span><span class="p">(</span>
<span class="n">regression_cost</span><span class="p">(</span>
<span class="nb">input</span><span class="o">=</span><span class="n">similarity</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">data_layer</span><span class="p">(</span>
<span class="s1">&#39;rating&#39;</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">)))</span>
<span class="n">outputs</span><span class="p">(</span><span class="n">mse_cost</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">similarity</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">data_layer</span><span class="p">(</span><span class="s1">&#39;rating&#39;</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">)))</span>
<span class="n">define_py_data_sources2</span><span class="p">(</span>
<span class="s1">&#39;data/train.list&#39;</span><span class="p">,</span>
......