Commit ba9beb37 authored by Travis CI

Deploy to GitHub Pages: f40bdb15

Parent 3fbf33d4
## Evaluator Design

### Problem Statement

During training or inference, we provide an evaluation function to measure the model performance, for example, accuracy, precision, etc. In the operator-based framework design, the data passes through the network pipeline batch by batch. As a result, inside the operator, we can only calculate the metrics for one minibatch. Thus, we need to provide a mechanism to calculate the metrics over every N passes/batches the user wants.
### Evaluator Design

Currently, every operation is expressed in the graph. We divide the evaluator process into three steps:
1. Initialize the metric state and add it into the block.
2. Calculate the concerned metrics for every mini-batch. A single evaluator operator is only responsible for calculating the necessary statistics for one mini-batch. For example, the accuracy operator only computes the accuracy over one minibatch of data each time it runs.
3. Merge the mini-batch statistics to form the evaluation result for multiple mini-batches. When it comes to distributed training/multi-GPU training, aggregate the values from the different devices.
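The three steps above can be sketched as a minimal Python class. This is an illustration only; the names `update` and `eval` and the `AccuracyEvaluator` class are hypothetical, not the actual framework API:

```python
class AccuracyEvaluator(object):
    """Minimal sketch of the three-step evaluator lifecycle (hypothetical API)."""

    def __init__(self):
        # Step 1: initialize the metric state (here, running counts).
        self.num_correct = 0
        self.num_samples = 0

    def update(self, predictions, labels):
        # Step 2: compute statistics for one mini-batch only.
        self.num_correct += sum(1 for p, l in zip(predictions, labels) if p == l)
        self.num_samples += len(labels)

    def eval(self):
        # Step 3: merge the mini-batch statistics into the final metric.
        return self.num_correct / float(self.num_samples)


evaluator = AccuracyEvaluator()
evaluator.update([1, 0, 1], [1, 1, 1])  # minibatch 1: 2 of 3 correct
evaluator.update([0, 0], [0, 1])        # minibatch 2: 1 of 2 correct
print(evaluator.eval())                  # 3/5 correct overall
```

In a distributed or multi-GPU setting, step 3 would additionally sum `num_correct` and `num_samples` across devices before dividing.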
### Implementation

This design is shown in the Python API. Each metric operator needs to calculate the metric statistics and return the batch-aware states; the Python side is responsible for accumulating the states for each pass.
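For instance, the Python-side accumulation could look like the following sketch. It assumes each run of a metric operator returns a `(correct, total)` state pair for its minibatch; the function name and the state format are illustrative assumptions, not the actual operator output:

```python
def accumulate_pass_metric(batch_states):
    """Merge per-minibatch (correct, total) states into a pass-level accuracy.

    batch_states: list of (correct, total) pairs, one per minibatch,
    as returned by the metric operator (hypothetical format).
    """
    total_correct = 0
    total_samples = 0
    for correct, total in batch_states:
        total_correct += correct
        total_samples += total
    return total_correct / float(total_samples)


# Three minibatches' worth of states returned by the metric operator.
states = [(8, 10), (9, 10), (7, 10)]
print(accumulate_pass_metric(states))  # pass-level accuracy: 24/30
```

Accumulating the raw counts rather than averaging per-batch accuracies keeps the result exact even when the last minibatch is smaller than the others.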
```python
class Evaluator(object):
    """
    Evaluator Base class.
    """
    ...
```