## The Compilation of Blocks

During the generation of the Protobuf message, the Block should store `VarDesc` (the Protobuf message that describes a Variable) and `OpDesc` (the Protobuf message that describes an Operator).
A `VarDesc` in a block should have its own name scope to avoid local variables affecting the parent block's name scope. A child block's name scopes should inherit the parent's so that an `OpDesc` in the child block can reference a `VarDesc` stored in the parent block. For example:
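The scoping rule above can be sketched as a minimal Python model of hierarchical name lookup (the class and field names here are illustrative, not PaddlePaddle's actual API):

```python
class BlockDesc:
    """Illustrative model of a block's name scope (not PaddlePaddle's actual API)."""

    def __init__(self, parent=None):
        self.parent = parent  # enclosing block, or None for the root block
        self.vars = {}        # name -> VarDesc stored in this block
        self.ops = []         # OpDesc list for this block

    def new_var(self, name):
        # A variable is created in the *current* block's scope, so it never
        # pollutes the parent's scope.
        self.vars[name] = {"name": name}
        return self.vars[name]

    def find_var(self, name):
        # Lookup walks outward: local scope first, then parent scopes, which
        # is how an OpDesc in a child block can reference a parent's VarDesc.
        if name in self.vars:
            return self.vars[name]
        if self.parent is not None:
            return self.parent.find_var(name)
        return None

root = BlockDesc()
root.new_var("x")
child = BlockDesc(parent=root)
child.new_var("tmp")                      # local to the child block
assert child.find_var("x") is not None    # inherited from the parent
assert root.find_var("tmp") is None       # parent never sees the child's locals
```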
# Background

PaddlePaddle divides the description of neural network computation into two stages: compile time and runtime. At compile time, the neural network computation is described as a `ProgramDesc`, whereas at runtime an `Executor` interprets the `ProgramDesc` to compute the operations.
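A toy illustration of the two stages, with a plain list standing in for `ProgramDesc` and a minimal interpreter standing in for the real `Executor` (both are sketches, not PaddlePaddle code):

```python
# "Compile time": build a description of the computation. Nothing runs yet.
program_desc = [
    {"type": "assign", "out": "a", "value": 2.0},
    {"type": "assign", "out": "b", "value": 3.0},
    {"type": "mul", "inputs": ["a", "b"], "out": "c"},
]

class Executor:
    """Runtime stage: walk the description and actually compute each op."""

    def run(self, desc):
        scope = {}  # runtime variables live here, not in the description
        for op in desc:
            if op["type"] == "assign":
                scope[op["out"]] = op["value"]
            elif op["type"] == "mul":
                x, y = (scope[n] for n in op["inputs"])
                scope[op["out"]] = x * y
        return scope

scope = Executor().run(program_desc)
assert scope["c"] == 6.0  # the description itself never computed anything
```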
PaddlePaddle uses proto messages to describe the compile-time program because:
1. The computation program description must be serializable so that it can be saved in a file.
1. During distributed training, the serialized program will be sent to multiple workers. It should also be possible to break the program into different components, each of which can be executed on a different worker.
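Both points hinge on the description being plain, serializable data. A minimal sketch of the save-and-ship round trip, using JSON as a stand-in for the actual Protobuf wire format:

```python
import json
import os
import tempfile

# An illustrative program description (not the real ProgramDesc schema).
program_desc = {"blocks": [{"vars": ["a", "b", "c"],
                            "ops": [{"type": "mul"}]}]}

# Point 1: the description can be saved to a file.
path = os.path.join(tempfile.mkdtemp(), "program.json")
with open(path, "w") as f:
    json.dump(program_desc, f)

# Point 2: the same bytes could be sent to a worker, which deserializes
# an identical program and executes its share of it.
with open(path) as f:
    received = json.load(f)

assert received == program_desc  # the round trip preserves the program
```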
The computation `Program` consists of nested `Blocks`. Each `Block` consists of data (i.e. `Variable`s) and `Operations`. The concepts representing them are in the table below.
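The nesting can be pictured as plain data. This layout is illustrative only; the `while`/`sub_block` fields are assumptions for the sketch, not the actual `ProgramDesc` schema:

```python
# A Program as nested data: each Block holds its Variables and Operations,
# and a control-flow op (here, a hypothetical "while") owns a sub-block.
program = {
    "blocks": [
        {   # block 0: the global block
            "vars": ["x", "y"],
            "ops": [{"type": "while", "sub_block": 1}],
        },
        {   # block 1: the loop body, nested under block 0
            "vars": ["i"],
            "ops": [{"type": "increment", "inputs": ["i"]}],
        },
    ]
}

# The loop op in the global block points at the block holding its body.
assert program["blocks"][0]["ops"][0]["sub_block"] == 1
```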