<p>In this <code class="code docutils literal"><span class="pre">trainer_config.py</span></code>, we simply map each feature type to a feature vector. The following list shows how each feature is mapped to a vector, followed by a small configuration sketch.</p>
<ul class="simple">
<li><code class="code docutils literal"><span class="pre">id</span></code>: a simple embedding, which is then fed into a fully connected layer.</li>
</ul>
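<p>As a rough illustration only (not taken verbatim from the demo configuration), an <code class="code docutils literal"><span class="pre">id</span></code> feature could be wired up with the trainer configuration helpers roughly as follows; the names <code class="code docutils literal"><span class="pre">word</span></code>, <code class="code docutils literal"><span class="pre">dict_dim</span></code> and <code class="code docutils literal"><span class="pre">emb_dim</span></code> are placeholders.</p>
<div class="highlight-python"><div class="highlight"><pre>
# Hypothetical sketch: map an id feature to a vector via an embedding,
# then feed it into a fully connected layer. Names and sizes are placeholders.
from paddle.trainer_config_helpers import *

dict_dim = 10000   # assumed vocabulary size
emb_dim = 128      # assumed embedding size

word = data_layer(name="word", size=dict_dim)      # integer id input
emb = embedding_layer(input=word, size=emb_dim)    # id -> embedding vector
hidden = fc_layer(input=emb, size=512, act=TanhActivation())  # fully connected layer
</pre></div></div>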
<h1>Write Gradient Check Unit Test<a class="headerlink" href="#write-gradient-check-unit-test" title="Permalink to this headline">¶</a></h1>
<p>An easy way to verify the correctness of a new layer’s implementation is to write a gradient check unit test. A gradient check unit test uses the finite difference method to verify the gradient of a layer. It modifies the input with a small perturbation <span class="math">\(\Delta x\)</span> and observes the change of the output <span class="math">\(\Delta y\)</span>; the gradient can then be computed as <span class="math">\(\frac{\Delta y}{\Delta x}\)</span>. This gradient can be compared with the gradient computed by the <code class="code docutils literal"><span class="pre">backward</span></code> function of the layer to ensure the correctness of the gradient computation. Note that the gradient check only tests the correctness of the gradient computation; it does not necessarily guarantee that the <code class="code docutils literal"><span class="pre">forward</span></code> and <code class="code docutils literal"><span class="pre">backward</span></code> functions are implemented correctly. You need to write more sophisticated unit tests to make sure your layer is implemented correctly.</p>
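<p>For a single input component <span class="math">\(x_i\)</span>, one common way to form this finite difference estimate (a sketch of the idea, not necessarily the exact formula used in the test code) is the central difference:</p>
<div class="math">\[\frac{\partial y}{\partial x_i} \approx \frac{f(x + \epsilon e_i) - f(x - \epsilon e_i)}{2\epsilon},\]</div>
<p>where <span class="math">\(e_i\)</span> is the unit vector along <span class="math">\(x_i\)</span> and <span class="math">\(\epsilon\)</span> is a small step. The estimate is then compared against the analytic gradient returned by <code class="code docutils literal"><span class="pre">backward</span></code>, typically using a relative error tolerance.</p>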
<p>All the gradient check unit tests are located in <code class="code docutils literal"><span class="pre">paddle/gserver/tests/test_LayerGrad.cpp</span></code>. If you are planning to write a new layer, it is recommended to put your test into a new test file. The gradient check unit test of the fully connected layer is listed below. It has the following steps.</p>
<ul class="simple">
<li><dl class="first docutils">
<dt>Create layer configuration. A layer configuration can include the following attributes:</dt>
<dd><ul class="first last">
<li>size of the bias parameter. (4096 in our example)</li>
<li>type of the layer. (fc in our example)</li>
<li>size of the layer. (4096 in our example)</li>
...
...
<dd><ul class="first last">
<li><dl class="first docutils">
<dt>type of the input (<code class="code docutils literal"><span class="pre">INPUT_DATA</span></code> in our example). It can be one of the following types</dt>
<li><code class="code docutils literal"><span class="pre">INPUT_DATA_TARGET</span></code>: dense vector, but it is not used to compute the gradient.</li>
...
...
</dd>
</dl>
</li>
<li><pclass="first">name of the input. (<codeclass="code docutils literal"><spanclass="pre">layer_0</span></code> in our example)</p>
</li>
<li><pclass="first">size of the input. (8192 in our example)</p>
</li>
<li><pclass="first">number of non-zeros, only useful for sparse inputs.</p>
</li>
<li><pclass="first">format of sparse data, only useful for sparse inputs.</p>
</li>
<li>name of the input. (<codeclass="code docutils literal"><spanclass="pre">layer_0</span></code> in our example)</li>
<li>size of the input. (8192 in our example)</li>
<li>number of non-zeros, only useful for sparse inputs.</li>
<li>format of sparse data, only useful for sparse inputs.</li>
</ul>
</dd>
</dl>
</li>
<li><pclass="first">each inputs needs to call <codeclass="code docutils literal"><spanclass="pre">config.layerConfig.add_inputs();</span></code> once.</p>
</li>
<li>each inputs needs to call <codeclass="code docutils literal"><spanclass="pre">config.layerConfig.add_inputs();</span></code> once.</li>
<li><dl class="first docutils">
<dt>call <code class="code docutils literal"><span class="pre">testLayerGrad</span></code> to perform gradient checks. It has the following arguments.</dt>
<dd><ul class="first last">
<li>layer and input configurations. (<code class="code docutils literal"><span class="pre">config</span></code> in our example)</li>
<li>type of the layer. (<code class="code docutils literal"><span class="pre">fc</span></code> in our example)</li>
<li>batch size of the gradient check. (100 in our example)</li>
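<p>The snippet below is a sketch of what such a test could look like, following the steps above; the exact attribute names and argument list should be checked against <code class="code docutils literal"><span class="pre">paddle/gserver/tests/test_LayerGrad.cpp</span></code>, as they may differ slightly.</p>
<div class="highlight-cpp"><div class="highlight"><pre>
// Sketch of a gradient check test for the fully connected layer.
// Attribute and helper names follow the pattern of test_LayerGrad.cpp
// and are not guaranteed to match the current code exactly.
TEST(Layer, fcLayer) {
  TestConfig config;
  config.biasSize = 4096;                        // size of the bias parameter
  config.layerConfig.set_type("fc");             // type of the layer
  config.layerConfig.set_size(4096);             // size of the layer
  config.layerConfig.set_active_type("sigmoid");

  // one input: dense data of size 8192 with an 8192 x 4096 parameter
  config.inputDefs.push_back({INPUT_DATA, "layer_0", 8192, 4096 * 4096});
  config.layerConfig.add_inputs();               // call once per input

  for (auto useGpu : {false, true}) {
    // config, layer type, batch size, transposed weight, use GPU
    testLayerGrad(config, "fc", 100, /* trans */ false, useGpu);
  }
}
</pre></div></div>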
<h1>Implement Python Wrapper<a class="headerlink" href="#implement-python-wrapper" title="Permalink to this headline">¶</a></h1>
<p>Implementing a Python wrapper allows us to use the added layer in configuration files. All the Python wrappers are in the file <code class="code docutils literal"><span class="pre">python/paddle/trainer/config_parser.py</span></code>. An example of the Python wrapper for the fully connected layer is listed below. It has the following steps:</p>
<ul class="simple">
<li>Use <code class="code docutils literal"><span class="pre">@config_layer('fc')</span></code> as the decorator of the Python wrapper class. <code class="code docutils literal"><span class="pre">fc</span></code> is the identifier of the layer.</li>
<li>It first calls the base constructor <code class="code docutils literal"><span class="pre">super(FCLayer,</span> <span class="pre">self).__init__(name,</span> <span class="pre">'fc',</span> <span class="pre">size,</span> <span class="pre">inputs=inputs,</span> <span class="pre">**xargs)</span></code>. <code class="code docutils literal"><span class="pre">FCLayer</span></code> is the Python wrapper class name, and <code class="code docutils literal"><span class="pre">fc</span></code> is the layer identifier name. They must be correct in order for the wrapper to work.</li>
<li>Then it computes the size and format (whether sparse) of each transformation matrix.</li>
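<p>A condensed sketch of such a wrapper is shown below. It is not a verbatim copy of <code class="code docutils literal"><span class="pre">config_parser.py</span></code>; helper names such as <code class="code docutils literal"><span class="pre">create_input_parameter</span></code> and <code class="code docutils literal"><span class="pre">create_bias_parameter</span></code> are assumed to be provided by the <code class="code docutils literal"><span class="pre">LayerBase</span></code> base class and should be checked against the actual file.</p>
<div class="highlight-python"><div class="highlight"><pre>
# Sketch of a Python wrapper for the fully connected layer.
# Helper methods (get_input_layer, create_input_parameter,
# create_bias_parameter) are assumed to come from LayerBase.
@config_layer('fc')
class FCLayer(LayerBase):
    def __init__(self, name, size, inputs, bias=True, **xargs):
        # 'fc' must match the identifier used in @config_layer
        super(FCLayer, self).__init__(name, 'fc', size, inputs=inputs, **xargs)
        for input_index in xrange(len(self.inputs)):
            input_layer = self.get_input_layer(input_index)
            # dense case: one (input_size x layer_size) transformation matrix
            psize = self.config.size * input_layer.size
            dims = [input_layer.size, self.config.size]
            format = self.inputs[input_index].format
            sparse = format == 'csr' or format == 'csc'
            if sparse:
                # sparse case: parameter size equals the number of non-zeros
                psize = self.inputs[input_index].nnz
            self.create_input_parameter(input_index, psize, dims, sparse, format)
        self.create_bias_parameter(bias, self.config.size)
</pre></div></div>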
<spanid="_thread_device_resources"></span><spanclass="target"id="paddlestruct__thread__device__resources"></span><emclass="property">struct </em><codeclass="descname">_thread_device_resources</code><aclass="headerlink"href="#_CPPv224_thread_device_resources"title="Permalink to this definition">¶</a></dt>
<spanid="_hl_device_prop"></span><spanclass="target"id="paddlestruct__hl__device__prop"></span><emclass="property">struct </em><codeclass="descname">_hl_device_prop</code><aclass="headerlink"href="#_CPPv215_hl_device_prop"title="Permalink to this definition">¶</a></dt>
<spanid="_cudnn_tensor_descriptor"></span><spanclass="target"id="paddlestruct__cudnn__tensor__descriptor"></span><emclass="property">struct </em><codeclass="descname">_cudnn_tensor_descriptor</code><aclass="headerlink"href="#_CPPv224_cudnn_tensor_descriptor"title="Permalink to this definition">¶</a></dt>
<spanid="_cudnn_pooling_descriptor"></span><spanclass="target"id="paddlestruct__cudnn__pooling__descriptor"></span><emclass="property">struct </em><codeclass="descname">_cudnn_pooling_descriptor</code><aclass="headerlink"href="#_CPPv225_cudnn_pooling_descriptor"title="Permalink to this definition">¶</a></dt>
<spanid="_cudnn_filter_descriptor"></span><spanclass="target"id="paddlestruct__cudnn__filter__descriptor"></span><emclass="property">struct </em><codeclass="descname">_cudnn_filter_descriptor</code><aclass="headerlink"href="#_CPPv224_cudnn_filter_descriptor"title="Permalink to this definition">¶</a></dt>
<spanid="_cudnn_convolution_descriptor"></span><spanclass="target"id="paddlestruct__cudnn__convolution__descriptor"></span><emclass="property">struct </em><codeclass="descname">_cudnn_convolution_descriptor</code><aclass="headerlink"href="#_CPPv229_cudnn_convolution_descriptor"title="Permalink to this definition">¶</a></dt>
<spanid="BaseOp"></span><spanclass="target"id="paddleclassBaseOp"></span><emclass="property">class </em><codeclass="descname">BaseOp</code><aclass="headerlink"href="#_CPPv26BaseOp"title="Permalink to this definition">¶</a></dt>
<spanid="aggregate::sum"></span><spanclass="target"id="paddleclassaggregate_1_1sum"></span><emclass="property">class </em><codeclass="descname">sum</code><aclass="headerlink"href="#_CPPv2N9aggregate3sumE"title="Permalink to this definition">¶</a></dt>
<spanid="aggregate::max"></span><spanclass="target"id="paddleclassaggregate_1_1max"></span><emclass="property">class </em><codeclass="descname">max</code><aclass="headerlink"href="#_CPPv2N9aggregate3maxE"title="Permalink to this definition">¶</a></dt>
<spanid="aggregate::min"></span><spanclass="target"id="paddleclassaggregate_1_1min"></span><emclass="property">class </em><codeclass="descname">min</code><aclass="headerlink"href="#_CPPv2N9aggregate3minE"title="Permalink to this definition">¶</a></dt>
<spanid="base::binary::add"></span><spanclass="target"id="paddleclassbase_1_1binary_1_1add"></span><emclass="property">class </em><codeclass="descname">add</code><aclass="headerlink"href="#_CPPv2N4base6binary3addE"title="Permalink to this definition">¶</a></dt>
<spanid="base::binary::add2"></span><spanclass="target"id="paddleclassbase_1_1binary_1_1add2"></span><emclass="property">class </em><codeclass="descname">add2</code><aclass="headerlink"href="#_CPPv2N4base6binary4add2E"title="Permalink to this definition">¶</a></dt>
<spanid="base::binary::sub"></span><spanclass="target"id="paddleclassbase_1_1binary_1_1sub"></span><emclass="property">class </em><codeclass="descname">sub</code><aclass="headerlink"href="#_CPPv2N4base6binary3subE"title="Permalink to this definition">¶</a></dt>
<spanid="base::binary::mul"></span><spanclass="target"id="paddleclassbase_1_1binary_1_1mul"></span><emclass="property">class </em><codeclass="descname">mul</code><aclass="headerlink"href="#_CPPv2N4base6binary3mulE"title="Permalink to this definition">¶</a></dt>
<spanid="base::binary::div"></span><spanclass="target"id="paddleclassbase_1_1binary_1_1div"></span><emclass="property">class </em><codeclass="descname">div</code><aclass="headerlink"href="#_CPPv2N4base6binary3divE"title="Permalink to this definition">¶</a></dt>
<spanid="base::binary::squaredDiff"></span><spanclass="target"id="paddleclassbase_1_1binary_1_1squaredDiff"></span><emclass="property">class </em><codeclass="descname">squaredDiff</code><aclass="headerlink"href="#_CPPv2N4base6binary11squaredDiffE"title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from base::binary::SSESquaredDiff</p>
<spanid="base::binary::first"></span><spanclass="target"id="paddleclassbase_1_1binary_1_1first"></span><emclass="property">class </em><codeclass="descname">first</code><aclass="headerlink"href="#_CPPv2N4base6binary5firstE"title="Permalink to this definition">¶</a></dt>
<spanid="base::binary::second"></span><spanclass="target"id="paddleclassbase_1_1binary_1_1second"></span><emclass="property">class </em><codeclass="descname">second</code><aclass="headerlink"href="#_CPPv2N4base6binary6secondE"title="Permalink to this definition">¶</a></dt>
<spanid="base::binary::classificationError"></span><spanclass="target"id="paddleclassbase_1_1binary_1_1classificationError"></span><emclass="property">class </em><codeclass="descname">classificationError</code><aclass="headerlink"href="#_CPPv2N4base6binary19classificationErrorE"title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from base::binary::SSEClassificationError</p>
<spanid="base::unary::identity"></span><spanclass="target"id="paddleclassbase_1_1unary_1_1identity"></span><emclass="property">class </em><codeclass="descname">identity</code><aclass="headerlink"href="#_CPPv2N4base5unary8identityE"title="Permalink to this definition">¶</a></dt>
<spanid="hppl"></span><spanclass="target"id="paddlenamespacehppl"></span><emclass="property">namespace </em><codeclass="descname">hppl</code><aclass="headerlink"href="#_CPPv24hppl"title="Permalink to this definition">¶</a></dt>
<spanclass="target"id="paddlenamespacehppl"></span><emclass="property">namespace </em><codeclass="descname">hppl</code><aclass="headerlink"href="#_CPPv24hppl"title="Permalink to this definition">¶</a></dt>
<spanid="paddle"></span><spanclass="target"id="paddlenamespacepaddle"></span><emclass="property">namespace </em><codeclass="descname">paddle</code><aclass="headerlink"href="#_CPPv26paddle"title="Permalink to this definition">¶</a></dt>
<spanid="hppl::Active"></span><emclass="property">class </em><codeclass="descname">Active</code><aclass="headerlink"href="#_CPPv2N4hppl6ActiveE"title="Permalink to this definition">¶</a></dt>
<dd><em>#include <hl_activation_functions.h></em><p>Hppl supports sigmoid, relu, tanh, and linear activation functions for neural networks’ forward and backward activation. </p>
<spanid="hppl::cpu"></span><spanclass="target"id="paddlenamespacehppl_1_1cpu"></span><emclass="property">namespace </em><codeclass="descname">cpu</code><aclass="headerlink"href="#_CPPv2N4hppl3cpuE"title="Permalink to this definition">¶</a></dt>
<spanid="hppl::backward::lstm"></span><spanclass="target"id="paddleclasshppl_1_1backward_1_1lstm"></span><emclass="property">class </em><codeclass="descname">lstm</code><aclass="headerlink"href="#_CPPv2N4hppl8backward4lstmE"title="Permalink to this definition">¶</a></dt>
<spanid="hppl::forward::lstm"></span><spanclass="target"id="paddleclasshppl_1_1forward_1_1lstm"></span><emclass="property">class </em><codeclass="descname">lstm</code><aclass="headerlink"href="#_CPPv2N4hppl7forward4lstmE"title="Permalink to this definition">¶</a></dt>
<spanid="hppl::backward::gru_stateGrad"></span><spanclass="target"id="paddleclasshppl_1_1backward_1_1gru__stateGrad"></span><emclass="property">class </em><codeclass="descname">gru_stateGrad</code><aclass="headerlink"href="#_CPPv2N4hppl8backward13gru_stateGradE"title="Permalink to this definition">¶</a></dt>
<spanid="hppl::backward::gru_resetGrad"></span><spanclass="target"id="paddleclasshppl_1_1backward_1_1gru__resetGrad"></span><emclass="property">class </em><codeclass="descname">gru_resetGrad</code><aclass="headerlink"href="#_CPPv2N4hppl8backward13gru_resetGradE"title="Permalink to this definition">¶</a></dt>
<spanid="hppl::forward::gru_resetOutput"></span><spanclass="target"id="paddleclasshppl_1_1forward_1_1gru__resetOutput"></span><emclass="property">class </em><codeclass="descname">gru_resetOutput</code><aclass="headerlink"href="#_CPPv2N4hppl7forward15gru_resetOutputE"title="Permalink to this definition">¶</a></dt>
<spanid="hppl::forward::gru_finalOutput"></span><spanclass="target"id="paddleclasshppl_1_1forward_1_1gru__finalOutput"></span><emclass="property">class </em><codeclass="descname">gru_finalOutput</code><aclass="headerlink"href="#_CPPv2N4hppl7forward15gru_finalOutputE"title="Permalink to this definition">¶</a></dt>
<spanid="paddle::DataProviderGroup"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">DataProviderGroup</code><aclass="headerlink"href="#_CPPv2N6paddle17DataProviderGroupE"title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from <aclass="reference internal"href="#paddleclasspaddle_1_1DataProvider"><spanclass="std std-ref">paddle::DataProvider</span></a></p>
<spanid="paddle::MultiDataProvider"></span><spanclass="target"id="paddleclasspaddle_1_1MultiDataProvider"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">MultiDataProvider</code><aclass="headerlink"href="#_CPPv2N6paddle17MultiDataProviderE"title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from <aclass="reference internal"href="#paddleclasspaddle_1_1DataProvider"><spanclass="std std-ref">paddle::DataProvider</span></a></p>
<p>It reads a Python object and fills each slot of the argument. There are two steps: prepare and fill. The scanner allocates memory during the prepare step, and fills data into the argument during the fill step. </p>
<spanid="paddle::DenseScanner"></span><spanclass="target"id="paddleclasspaddle_1_1DenseScanner"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">DenseScanner</code><aclass="headerlink"href="#_CPPv2N6paddle12DenseScannerE"title="Permalink to this definition">¶</a></dt>
<dd><p>Scanner for dense slot. </p>
<p>Inherits from <aclass="reference internal"href="#paddleclasspaddle_1_1IFieldScanner"><spanclass="std std-ref">paddle::IFieldScanner</span></a></p>
<spanid="paddle::IndexScanner"></span><spanclass="target"id="paddleclasspaddle_1_1IndexScanner"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">IndexScanner</code><aclass="headerlink"href="#_CPPv2N6paddle12IndexScannerE"title="Permalink to this definition">¶</a></dt>
<dd><p>Scanner for index slot </p>
<p>Inherits from <aclass="reference internal"href="#paddleclasspaddle_1_1IFieldScanner"><spanclass="std std-ref">paddle::IFieldScanner</span></a></p>
<spanid="paddle::SparseNonValueScanner"></span><spanclass="target"id="paddleclasspaddle_1_1SparseNonValueScanner"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">SparseNonValueScanner</code><aclass="headerlink"href="#_CPPv2N6paddle21SparseNonValueScannerE"title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from <aclass="reference internal"href="#paddleclasspaddle_1_1IFieldScanner"><spanclass="std std-ref">paddle::IFieldScanner</span></a></p>
<spanid="paddle::SparseValueScanner"></span><spanclass="target"id="paddleclasspaddle_1_1SparseValueScanner"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">SparseValueScanner</code><aclass="headerlink"href="#_CPPv2N6paddle18SparseValueScannerE"title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from <aclass="reference internal"href="#paddleclasspaddle_1_1SparseNonValueScanner"><spanclass="std std-ref">paddle::SparseNonValueScanner</span></a></p>
<spanclass="target"id="paddleclasspaddle_1_1SparseValueScanner"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">SparseValueScanner</code><aclass="headerlink"href="#_CPPv2N6paddle18SparseValueScannerE"title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from <aclass="reference internal"href="#paddleclasspaddle_1_1SparseNonValueScanner"><spanclass="std std-ref">paddle::SparseNonValueScanner</span></a></p>
<spanid="paddle::IPyDataProviderCache"></span><spanclass="target"id="paddleclasspaddle_1_1IPyDataProviderCache"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">IPyDataProviderCache</code><aclass="headerlink"href="#_CPPv2N6paddle20IPyDataProviderCacheE"title="Permalink to this definition">¶</a></dt>
<dd><p>Py Data Provider Cache Interface. </p>
<p>Subclassed by <aclass="reference internal"href="#paddleclasspaddle_1_1CacheOnePassInMemory"><spanclass="std std-ref">paddle::CacheOnePassInMemory</span></a>, <aclass="reference internal"href="#paddleclasspaddle_1_1NoCacheStrategy"><spanclass="std std-ref">paddle::NoCacheStrategy</span></a></p>
<spanid="paddle::NoCacheStrategy"></span><spanclass="target"id="paddleclasspaddle_1_1NoCacheStrategy"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">NoCacheStrategy</code><aclass="headerlink"href="#_CPPv2N6paddle15NoCacheStrategyE"title="Permalink to this definition">¶</a></dt>
<dd><p>No cache strategy. It destructs old data immediately and loads data from Python on every pass. </p>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1IPyDataProviderCache"><span class="std std-ref">paddle::IPyDataProviderCache</span></a></p>
<p>In the first pass, it loads data from Python and stores it in memory. In the remaining passes, it loads data from memory. </p>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1IPyDataProviderCache"><span class="std std-ref">paddle::IPyDataProviderCache</span></a></p>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1DataProvider"><span class="std std-ref">paddle::DataProvider</span></a></p>
<p>Subclassed by <a class="reference internal" href="#paddleclasspaddle_1_1ProtoSequenceDataProvider"><span class="std std-ref">paddle::ProtoSequenceDataProvider</span></a></p>
<span id="paddle::ProtoDataProvider::ProtoSlot"></span><span class="target" id="paddlestructpaddle_1_1ProtoDataProvider_1_1ProtoSlot"></span><em class="property">struct </em><code class="descname">ProtoSlot</code><a class="headerlink" href="#_CPPv2N6paddle17ProtoDataProvider9ProtoSlotE" title="Permalink to this definition">¶</a></dt>
<span id="paddle::ProtoDataProvider::ProtoVarSlot"></span><span class="target" id="paddlestructpaddle_1_1ProtoDataProvider_1_1ProtoVarSlot"></span><em class="property">struct </em><code class="descname">ProtoVarSlot</code><a class="headerlink" href="#_CPPv2N6paddle17ProtoDataProvider12ProtoVarSlotE" title="Permalink to this definition">¶</a></dt>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1ProtoDataProvider"><span class="std std-ref">paddle::ProtoDataProvider</span></a></p>
<span id="paddle::Evaluator"></span><span class="target" id="paddleclasspaddle_1_1Evaluator"></span><em class="property">class </em><code class="descclassname">paddle::</code><code class="descname">Evaluator</code><a class="headerlink" href="#_CPPv2N6paddle9EvaluatorE" title="Permalink to this definition">¶</a></dt>
<dd><p>Base class for <a class="reference internal" href="#paddleclasspaddle_1_1Evaluator"><span class="std std-ref">Evaluator</span></a>. Evaluating the performance of a model is very important. It indicates how well a trained model scores (predicts) a dataset. </p>
<dd><p>Sum <a class="reference internal" href="#paddleclasspaddle_1_1Evaluator"><span class="std std-ref">Evaluator</span></a>. Calculates the sum of the output or label. </p>
<p>The config file api is sum_evaluator. </p>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1Evaluator"><span class="std std-ref">paddle::Evaluator</span></a></p>
</dl>
</p>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1ClassificationErrorEvaluator"><span class="std std-ref">paddle::ClassificationErrorEvaluator</span></a></p>
<span id="paddle::PrecisionRecallEvaluator::StatsInfo"></span><span class="target" id="paddlestructpaddle_1_1PrecisionRecallEvaluator_1_1StatsInfo"></span><em class="property">struct </em><code class="descname">StatsInfo</code><a class="headerlink" href="#_CPPv2N6paddle24PrecisionRecallEvaluator9StatsInfoE" title="Permalink to this definition">¶</a></dt>
<spanid="paddle::CTCErrorEvaluator"></span><spanclass="target"id="paddleclasspaddle_1_1CTCErrorEvaluator"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">CTCErrorEvaluator</code><aclass="headerlink"href="#_CPPv2N6paddle17CTCErrorEvaluatorE"title="Permalink to this definition">¶</a></dt>
<dtid="_CPPv2N6paddle15PnpairEvaluatorE">
<spanid="paddle::PnpairEvaluator"></span><spanclass="target"id="paddleclasspaddle_1_1PnpairEvaluator"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">PnpairEvaluator</code><aclass="headerlink"href="#_CPPv2N6paddle15PnpairEvaluatorE"title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from <aclass="reference internal"href="#paddleclasspaddle_1_1Evaluator"><spanclass="std std-ref">paddle::Evaluator</span></a></p>
<spanid="paddle::PnpairEvaluator::PredictionResult"></span><spanclass="target"id="paddlestructpaddle_1_1PnpairEvaluator_1_1PredictionResult"></span><emclass="property">struct </em><codeclass="descname">PredictionResult</code><aclass="headerlink"href="#_CPPv2N6paddle15PnpairEvaluator16PredictionResultE"title="Permalink to this definition">¶</a></dt>
<spanid="paddle::RankAucEvaluator"></span><spanclass="target"id="paddleclasspaddle_1_1RankAucEvaluator"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">RankAucEvaluator</code><aclass="headerlink"href="#_CPPv2N6paddle16RankAucEvaluatorE"title="Permalink to this definition">¶</a></dt>
<dd><p><aclass="reference internal"href="#paddleclasspaddle_1_1RankAucEvaluator"><spanclass="std std-ref">RankAucEvaluator</span></a> calculates the AUC of each list (i.e., titles under the same query), and averages them. Each list should be organized as a sequence. The inputs of this evaluator is [output, click, pv]. If pv is not provided, it will be set to 1. The types of click and pv are dense value. </p>
<p>Inherits from <aclass="reference internal"href="#paddleclasspaddle_1_1Evaluator"><spanclass="std std-ref">paddle::Evaluator</span></a></p>
<p>Typically the <a class="reference internal" href="#paddleclasspaddle_1_1SequenceTextPrinter"><span class="std std-ref">SequenceTextPrinter</span></a> layer takes the output of maxid, or of a RecurrentGroup with maxid (when generating), as its input.</p>
<p>The config file api is seqtext_printer_evaluator. </p>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1Evaluator"><span class="std std-ref">paddle::Evaluator</span></a></p>
<dd><p>Prints the classification error. </p>
<p>The config file api is classification_error_printer_evaluator. </p>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1ClassificationErrorEvaluator"><span class="std std-ref">paddle::ClassificationErrorEvaluator</span></a></p>
<span id="paddle::GradientMachine"></span><span class="target" id="paddleclasspaddle_1_1GradientMachine"></span><em class="property">class </em><code class="descclassname">paddle::</code><code class="descname">GradientMachine</code><a class="headerlink" href="#_CPPv2N6paddle15GradientMachineE" title="Permalink to this definition">¶</a></dt>
<dd><p>Subclassed by <a class="reference internal" href="#paddleclasspaddle_1_1MultiGradientMachine"><span class="std std-ref">paddle::MultiGradientMachine</span></a>, <a class="reference internal" href="#paddleclasspaddle_1_1NeuralNetwork"><span class="std std-ref">paddle::NeuralNetwork</span></a></p>
<span id="paddle::IGradientMachineMode"></span><span class="target" id="paddleclasspaddle_1_1IGradientMachineMode"></span><em class="property">class </em><code class="descclassname">paddle::</code><code class="descname">IGradientMachineMode</code><a class="headerlink" href="#_CPPv2N6paddle20IGradientMachineModeE" title="Permalink to this definition">¶</a></dt>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1GradientMachine"><span class="std std-ref">paddle::GradientMachine</span></a></p>
<span id="paddle::TrainerThread"></span><span class="target" id="paddleclasspaddle_1_1TrainerThread"></span><em class="property">class </em><code class="descclassname">paddle::</code><code class="descname">TrainerThread</code><a class="headerlink" href="#_CPPv2N6paddle13TrainerThreadE" title="Permalink to this definition">¶</a></dt>
<span id="paddle::RecurrentGradientMachine"></span><span class="target" id="paddleclasspaddle_1_1RecurrentGradientMachine"></span><em class="property">class </em><code class="descclassname">paddle::</code><code class="descname">RecurrentGradientMachine</code><a class="headerlink" href="#_CPPv2N6paddle24RecurrentGradientMachineE" title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1NeuralNetwork"><span class="std std-ref">paddle::NeuralNetwork</span></a></p>
<span id="paddle::RecurrentGradientMachine::EosFrameLine"></span><span class="target" id="paddlestructpaddle_1_1RecurrentGradientMachine_1_1EosFrameLine"></span><em class="property">struct </em><code class="descname">EosFrameLine</code><a class="headerlink" href="#_CPPv2N6paddle24RecurrentGradientMachine12EosFrameLineE" title="Permalink to this definition">¶</a></dt>
<span id="paddle::RecurrentGradientMachine::Generator"></span><span class="target" id="paddlestructpaddle_1_1RecurrentGradientMachine_1_1Generator"></span><em class="property">struct </em><code class="descname">Generator</code><a class="headerlink" href="#_CPPv2N6paddle24RecurrentGradientMachine9GeneratorE" title="Permalink to this definition">¶</a></dt>
<span id="paddle::RecurrentGradientMachine::Info"></span><span class="target" id="paddlestructpaddle_1_1RecurrentGradientMachine_1_1Info"></span><em class="property">struct </em><code class="descname">Info</code><a class="headerlink" href="#_CPPv2N6paddle24RecurrentGradientMachine4InfoE" title="Permalink to this definition">¶</a></dt>
<span id="paddle::RecurrentGradientMachine::InFrameLine"></span><span class="target" id="paddlestructpaddle_1_1RecurrentGradientMachine_1_1InFrameLine"></span><em class="property">struct </em><code class="descname">InFrameLine</code><a class="headerlink" href="#_CPPv2N6paddle24RecurrentGradientMachine11InFrameLineE" title="Permalink to this definition">¶</a></dt>
<span id="paddle::RecurrentGradientMachine::MemoryFrameLine"></span><span class="target" id="paddlestructpaddle_1_1RecurrentGradientMachine_1_1MemoryFrameLine"></span><em class="property">struct </em><code class="descname">MemoryFrameLine</code><a class="headerlink" href="#_CPPv2N6paddle24RecurrentGradientMachine15MemoryFrameLineE" title="Permalink to this definition">¶</a></dt>
<span id="paddle::RecurrentGradientMachine::OutFrameLine"></span><span class="target" id="paddlestructpaddle_1_1RecurrentGradientMachine_1_1OutFrameLine"></span><em class="property">struct </em><code class="descname">OutFrameLine</code><a class="headerlink" href="#_CPPv2N6paddle24RecurrentGradientMachine12OutFrameLineE" title="Permalink to this definition">¶</a></dt>
<span id="paddle::RecurrentGradientMachine::Path"></span><span class="target" id="paddlestructpaddle_1_1RecurrentGradientMachine_1_1Path"></span><em class="property">struct </em><code class="descname">Path</code><a class="headerlink" href="#_CPPv2N6paddle24RecurrentGradientMachine4PathE" title="Permalink to this definition">¶</a></dt>
<span id="paddle::NeuralNetwork"></span><span class="target" id="paddleclasspaddle_1_1NeuralNetwork"></span><em class="property">class </em><code class="descclassname">paddle::</code><code class="descname">NeuralNetwork</code><a class="headerlink" href="#_CPPv2N6paddle13NeuralNetworkE" title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1GradientMachine"><span class="std std-ref">paddle::GradientMachine</span></a></p>
<p>Subclassed by paddle::MultiNetwork, <a class="reference internal" href="#paddleclasspaddle_1_1ParallelNeuralNetwork"><span class="std std-ref">paddle::ParallelNeuralNetwork</span></a>, <a class="reference internal" href="#paddleclasspaddle_1_1RecurrentGradientMachine"><span class="std std-ref">paddle::RecurrentGradientMachine</span></a></p>
<span id="paddle::ParallelNeuralNetwork"></span><span class="target" id="paddleclasspaddle_1_1ParallelNeuralNetwork"></span><em class="property">class </em><code class="descclassname">paddle::</code><code class="descname">ParallelNeuralNetwork</code><a class="headerlink" href="#_CPPv2N6paddle21ParallelNeuralNetworkE" title="Permalink to this definition">¶</a></dt>
<dd><p>A <a class="reference internal" href="#paddleclasspaddle_1_1ParallelNeuralNetwork"><span class="std std-ref">ParallelNeuralNetwork</span></a> is capable of calculating a neural network through multiple threads in parallel. </p>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1NeuralNetwork"><span class="std std-ref">paddle::NeuralNetwork</span></a></p>
<span id="paddle"></span><span class="target" id="paddlenamespacepaddle"></span><em class="property">namespace </em><code class="descname">paddle</code><a class="headerlink" href="#_CPPv26paddle" title="Permalink to this definition">¶</a></dt>
<span id="paddle::MemoryHandle"></span><span class="target" id="paddleclasspaddle_1_1MemoryHandle"></span><em class="property">class </em><code class="descname">MemoryHandle</code><a class="headerlink" href="#_CPPv2N6paddle12MemoryHandleE" title="Permalink to this definition">¶</a></dt>
<dd><p>Subclassed by <a class="reference internal" href="#paddleclasspaddle_1_1CpuMemoryHandle"><span class="std std-ref">paddle::CpuMemoryHandle</span></a>, <a class="reference internal" href="#paddleclasspaddle_1_1GpuMemoryHandle"><span class="std std-ref">paddle::GpuMemoryHandle</span></a></p>
<dd><em>#include <Allocator.h></em><p><a class="reference internal" href="#paddleclasspaddle_1_1Allocator"><span class="std std-ref">Allocator</span></a> base class. </p>
<p>This is the base class of all <a class="reference internal" href="#paddleclasspaddle_1_1Allocator"><span class="std std-ref">Allocator</span></a> classes. </p>
<spanid="paddle::CpuAllocator"></span><spanclass="target"id="paddleclasspaddle_1_1CpuAllocator"></span><emclass="property">class </em><codeclass="descname">CpuAllocator</code><aclass="headerlink"href="#_CPPv2N6paddle12CpuAllocatorE"title="Permalink to this definition">¶</a></dt>
<spanid="paddle::GpuAllocator"></span><spanclass="target"id="paddleclasspaddle_1_1GpuAllocator"></span><emclass="property">class </em><codeclass="descname">GpuAllocator</code><aclass="headerlink"href="#_CPPv2N6paddle12GpuAllocatorE"title="Permalink to this definition">¶</a></dt>
<spanid="paddle::CudaHostAllocator"></span><spanclass="target"id="paddleclasspaddle_1_1CudaHostAllocator"></span><emclass="property">class </em><codeclass="descname">CudaHostAllocator</code><aclass="headerlink"href="#_CPPv2N6paddle17CudaHostAllocatorE"title="Permalink to this definition">¶</a></dt>
<spanid="paddle::PoolAllocator"></span><spanclass="target"id="paddleclasspaddle_1_1PoolAllocator"></span><emclass="property">class </em><codeclass="descname">PoolAllocator</code><aclass="headerlink"href="#_CPPv2N6paddle13PoolAllocatorE"title="Permalink to this definition">¶</a></dt>
<dd><em>#include <PoolAllocator.h></em><p>Memory pool allocator implementation. </p>
<spanid="paddle::StorageEngine"></span><spanclass="target"id="paddleclasspaddle_1_1StorageEngine"></span><emclass="property">class </em><codeclass="descname">StorageEngine</code><aclass="headerlink"href="#_CPPv2N6paddle13StorageEngineE"title="Permalink to this definition">¶</a></dt>
<dd><em>#include <Storage.h></em><p>Storage manager for multiple devices. </p>
<spanid="paddle::ParameterUpdater"></span><spanclass="target"id="paddleclasspaddle_1_1ParameterUpdater"></span><emclass="property">class </em><codeclass="descname">ParameterUpdater</code><aclass="headerlink"href="#_CPPv2N6paddle16ParameterUpdaterE"title="Permalink to this definition">¶</a></dt>
<spanid="paddle::ParameterUpdaterComposite"></span><spanclass="target"id="paddleclasspaddle_1_1ParameterUpdaterComposite"></span><emclass="property">class </em><codeclass="descname">ParameterUpdaterComposite</code><aclass="headerlink"href="#_CPPv2N6paddle25ParameterUpdaterCompositeE"title="Permalink to this definition">¶</a></dt>
<dd><p>Inherits from <aclass="reference internal"href="#paddleclasspaddle_1_1ParameterUpdater"><spanclass="std std-ref">paddle::ParameterUpdater</span></a></p>
<p>Subclassed by <aclass="reference internal"href="../../trainer/trainer.html#paddleclasspaddle_1_1SparseRemoteParameterUpdaterComposite"><spanclass="std std-ref">paddle::SparseRemoteParameterUpdaterComposite</span></a></p>
<spanclass="target"id="paddlenamespacepaddle"></span><emclass="property">namespace </em><codeclass="descname">paddle</code><aclass="headerlink"href="#_CPPv26paddle"title="Permalink to this definition">¶</a></dt>
<p>The <a class="reference internal" href="../parameter/parameter.html#paddleclasspaddle_1_1Parameter"><span class="std std-ref">Parameter</span></a> updater hooks are a group of methods invoked before ParameterUpdater::updateImpl. They can modify the gradient/momentum/etc. before parameter optimization. </p>
<spanid="paddle::BaseClient"></span><spanclass="target"id="paddleclasspaddle_1_1BaseClient"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">BaseClient</code><aclass="headerlink"href="#_CPPv2N6paddle10BaseClientE"title="Permalink to this definition">¶</a></dt>
<dd><p>It manages all connections to pservers. There are two modes for managing these connections. In the first mode, each connection owns two threads that separately manage sending and receiving data. In the second mode, each thread uses one connection for all of its activity. The first solution relies on sendThreads_/recvThreads_ and sendJobQueue_/recvJobQueue_; the second solution uses a shared thread pool to manage connections. In addition to the pserver, metric learning also uses the network to exchange features across machines, so this class just abstracts some basic thread and queue buffer creation for them. </p>
<p>Subclassed by <a class="reference internal" href="#paddleclasspaddle_1_1ParameterClient2"><span class="std std-ref">paddle::ParameterClient2</span></a></p>
<span id="paddle::BaseClient::SendJob"></span><span class="target" id="paddlestructpaddle_1_1BaseClient_1_1SendJob"></span><em class="property">struct </em><code class="descname">SendJob</code><a class="headerlink" href="#_CPPv2N6paddle10BaseClient7SendJobE" title="Permalink to this definition">¶</a></dt>
<span id="paddle::ParameterClient2"></span><span class="target" id="paddleclasspaddle_1_1ParameterClient2"></span><em class="property">class </em><code class="descclassname">paddle::</code><code class="descname">ParameterClient2</code><a class="headerlink" href="#_CPPv2N6paddle16ParameterClient2E" title="Permalink to this definition">¶</a></dt>
<dd><p>The client interface for the parameter server. <a class="reference internal" href="#paddleclasspaddle_1_1ParameterClient2"><span class="std std-ref">ParameterClient2</span></a> supports two modes for managing connections to parameter servers: in the first mode, one connection is shared by two threads that are separately responsible for sending and receiving activities; in the second mode, one connection is owned by only one thread, and all sending and receiving activities run in that single thread. TODO(yanfei): A further core idea for optimizing pserver performance is to do sync-SGD at the parameter level instead of the pserver level. Full parallelization at the parameter level for sync-SGD can also exploit layer-by-layer forward/backward computation in deeper models. First, the pserver can fully parallelize computation at the parameter level and send back a parameter value as soon as it is ready, instead of waiting until all gradients are finished and all parameter values are ready. Second, the parameter client can write parameters back to the GPU instead of waiting until all parameters have been received on the CPU host end. </p>
<p>Inherits from <a class="reference internal" href="#paddleclasspaddle_1_1BaseClient"><span class="std std-ref">paddle::BaseClient</span></a></p>
<spanid="paddle::enumeration_wrapper"></span><spanclass="target"id="paddlenamespacepaddle_1_1enumeration__wrapper"></span><emclass="property">namespace </em><codeclass="descclassname">paddle::</code><codeclass="descname">enumeration_wrapper</code><aclass="headerlink"href="#_CPPv2N6paddle19enumeration_wrapperE"title="Permalink to this definition">¶</a></dt>
<spanid="paddle::BlockingQueue"></span><emclass="property">class </em><codeclass="descclassname">paddle::</code><codeclass="descname">BlockingQueue</code><aclass="headerlink"href="#_CPPv2N6paddle13BlockingQueueE"title="Permalink to this definition">¶</a></dt>