# Backward Building

## Motivation

In neural networks, most models are currently optimized with the backpropagation algorithm (known as **BP**). Technically, BP computes the gradient of the loss function and propagates it back through the network following the chain rule. However, when configuring the model structure, users do not need to define the backward part. So the framework needs a mechanism that completes a model's backward part automatically from the given forward part.
When implementing a specific `op`, the developer is also asked to implement its backward version, called `grad_op`. A `grad_op` takes the gradients of its corresponding `op`'s outputs and calculates the gradients of that `op`'s inputs. When building a model's backward part, the framework creates a `grad_op` for each forward `op` and strings them together in the reverse order of the forward part. In this way, gradients flow from the end of the model back to its beginning, in other words, from the loss to the parameters.
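For example, taking a matrix-multiplication op as an illustration (the op names here are illustrative; the `@GRAD` naming convention is the one used later in this document), a forward `op` and its `grad_op` are related roughly as follows:

```
forward:   mul:       (X, Y)            -> Out
backward:  mul_grad:  (X, Y, Out@GRAD)  -> (X@GRAD, Y@GRAD)
```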
## Challenges
The motivation of backward building is apparent. However, implementing it correctly is not so easy. In the **Fluid** design, a deep learning model is described by `Program`, `Block`, `Op` and `Variable`. A `Block` can be nested, which means that `op`s and `variable`s are scattered across different blocks rather than gathered in a single graph. Our backward building algorithm must therefore visit blocks recursively and insert `grad_op`s and newly created `variable`s into the right places.
## Usage
Although the whole algorithm comprises many functions, only one is exposed as an API:
```python
def append_backward(loss, parameter_list=None, no_grad_set=None):
    """
    Append backward part to main_program

    Args:
        loss(Variable): The variable generated by the cost function.
        parameter_list(list): Parameters that need to be updated by optimizers.
            If None, it means all parameters need to be updated.
        no_grad_set(set): Variables that have no gradients in Block 0.
            If None, the set will be generated inside the function and
            contains all variables with `step_gradient=True` from all blocks.

    Return:
        (list[Variable]): list of (parameters, gradients) pair.
    """
```
Invoking this API appends the backward part to the program that `loss` belongs to. The API takes three arguments. `loss` is the final loss value; it must be a scalar, is usually the output of the loss layer, and is where the gradient is generated and backpropagation starts. `parameter_list` marks the parameters that need updating; if it is `None`, all parameters will be updated by optimizers. `no_grad_set` marks variables without gradients; if all outputs of a `grad_op` are in `no_grad_set`, that `grad_op` will not be run.

This API is invoked automatically before optimizer building, so in most cases users do not need to invoke it themselves to append the backward part.
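As a rough sketch of direct usage (the layer calls that build the forward part are ordinary Fluid layers used only for illustration; `append_backward` is the API described above, assumed to live in `fluid.backward`):

```python
import paddle.fluid as fluid

# Build a tiny forward part that ends in a scalar loss.
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_pred = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.mean(fluid.layers.square_error_cost(input=y_pred, label=y))

# Explicitly append the backward part (optimizers normally do this for you
# before appending their parameter-update ops).
params_grads = fluid.backward.append_backward(loss=loss)

# `params_grads` is a list of (parameter, gradient) pairs.
for param, grad in params_grads:
    print(param.name, '->', grad.name)
```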
## Implementation
The backward building algorithm is implemented in the `backward.py` file. The whole algorithm can be divided into two independent parts: creating `grad_op`s and creating new variables.
### Creating `grad_op`s
The creation of `grad_op`s is implemented by:
```python
def _append_backward_ops_(target,
                          block,
                          target_block,
                          no_grad_dict,
                          grad_to_var):
    """
    Create all grad ops, and insert them into given block

    Args:
        target(Variable): the target variable of forward pass
        block(Block): the block where forward ops are
        target_block(Block): the block which is going to hold new generated grad ops
        no_grad_dict(dict):
            key(int): block index
            val(set): a set of variable names. These variables have no gradient
        grad_to_var(dict)(output argument):
            key(str): grad variable name
            val(str): corresponding forward variable name
    """
```
Given a `block`, the function traverses all `op`s in this block in reverse order, gets the corresponding `grad_op` of each from the C++ core via `core.get_grad_op_desc()`, and then appends it to `target_block`.
However, some specific `op`s (e.g. `while_op`, `if_else_op`) can hold their own sub-blocks. Since these sub-blocks contain `op`s as well, `grad_op` creation has to be recursive.
During the reverse traversal, we check whether each `op` has an attribute named `sub_block`. If so, there is a sub-block that must be dealt with first. After creating a new block whose father is the block referenced by the `op`'s attribute, we invoke `_append_backward_ops_()` recursively, passing the new block as `target_block` and the one referenced by the `op`'s attribute as `block`. The following *pseudo-code* shows this process:
```
******* pseudo-code ********
for op in reversed(block.ops):
    if op has an attribute named 'sub_block':
        Get the sub-block (`s_block`) from op's attribute.
        Create a new block (`grad_s_block`), whose father is `s_block`.
        Invoke _append_backward_ops_(), with `block=s_block` and `target_block=grad_s_block`

    Invoke `core.get_grad_op_desc()` to get op's grad_op.
    Insert the name correspondence between variables and their gradients of the grad_op into grad_to_var.
    Assign grad_s_block to grad_op as its 'sub_block' attribute.
    Append grad_op to current target_block.
```
The first invocation of `_append_backward_ops_()` is made by `append_backward()`, in which both `block` and `target_block` are assigned the root block (the block with index 0).
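Schematically, and using the argument names from the docstring above, this first call looks like the following (`root_block`, `no_grad_dict` and `grad_to_var` stand for the objects `append_backward()` has prepared at that point; the exact call site may differ):

```python
# inside append_backward(); shown only to illustrate the initial arguments
_append_backward_ops_(loss,          # target: where backpropagation starts
                      root_block,    # block: holds the forward ops (index 0)
                      root_block,    # target_block: grad ops are appended here too
                      no_grad_dict,
                      grad_to_var)
```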
### Corner Cases of `grad_op` Creating
The previous section shows the regular process of `grad_op` creation. However, in some corner cases the conventional algorithm is not enough to get the correct result and additional handling is required. These additional passes run after the algorithm described above and make special adjustments to its output `grad_op`s.
#### Shared Variables
If a variable is read by more than one `op` in the forward pass, its gradient is likely to be written by more than one `grad_op` in the backward pass. To make the final gradient the sum of all the `grad_op`s' outputs instead of only the last one written, we assign each output to a temporary variable and then add a `sum_op` to add them up.
For debugging convenience, if the final gradient name is `w@GRAD`, its corresponding temporary variables will be named `w@GRAD@RENAME@0`, `w@GRAD@RENAME@1`, and so on.
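As a schematic example (the op and variable names are made up), if a weight `w` is read by two forward ops, the backward part is rewritten roughly like this:

```
forward:   op1(w, x) -> y        op2(w, z) -> u

backward (after the rewrite):
    op2_grad(...) -> w@GRAD@RENAME@1
    op1_grad(...) -> w@GRAD@RENAME@0
    sum_op(w@GRAD@RENAME@0, w@GRAD@RENAME@1) -> w@GRAD
```

Without the rename-and-sum step, the second `grad_op` to run would simply overwrite the gradient written by the first one.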
See function `_addup_repetitive_outputs_` in `backward.py` for implementation details.
#### No Gradient Variables
In our framework, a variable can be marked as *no_gradient*, which means that its gradient is unnecessary and can be considered zero in model training. Apparently, when all the outputs of a `grad_op` are marked as *no_gradient*, the `grad_op` itself can be skipped in the backward pass.
But these unnecessary gradients still need to be created and initialized by something, otherwise the following `grad_op`s that take these gradients as inputs risk using uninitialized memory. In our code, we employ `fill_zeros_like_op` to initialize them as all zeros.
This feature is implemented in the function `_remove_no_grad_branch_`. It checks newly created `grad_op`s one by one, removes those whose outputs are all in `no_grad_set`, and inserts `fill_zeros_like_op` where necessary. We can get the `no_grad_set` from the `_append_backward_ops_` argument `no_grad_dict`, or generate it on the fly by scanning all variables' `no_gradient` attribute (True or False).
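As a schematic example (the op and variable names are made up), suppose `x` is in `no_grad_set` but some later `grad_op` still lists `x@GRAD` among its inputs:

```
op_a_grad(...) -> x@GRAD            # removed: all of its outputs are in no_grad_set
fill_zeros_like_op(x) -> x@GRAD     # inserted instead, so x@GRAD exists and is all zeros
op_b_grad(x@GRAD, ...) -> ...       # can now safely read x@GRAD
```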
### Creating Backward Variables
Up to now, we have completed all the creation and adjustment of `grad_op`s. However, backward variables have not been created yet; they are only represented by `grad_op`s' input and output argument names. Backward variables are created by:
```python
def _append_backward_vars_(block,
                           start_op_idx,
                           grad_to_var,
                           grad_info_map):
    """
    Create new variables required by backward pass.

    Args:
        block(Block): the block where new variables will be created
        start_op_idx(int): Only variables required by ops in block.ops[start_op_idx : ] will be created
        grad_to_var(dict):
            key(str): grad variable name
            val(str): corresponding forward variable name
            In most cases, this dict is generated by _append_backward_ops_()
        grad_info_map(dict)(output argument):
            key(str): forward variable name
            val(tuple): a tuple of (str, int), str is the corresponding grad name, int is the block index
    """
```
Given a `block`, this function traverses all the `grad_op`s in it (the argument `start_op_idx` indicates where the grad_op sequence starts) and creates all the outputs that do not exist yet. The following *pseudo-code* shows this process:
```
for op in block.ops[start_op_idx : ]:
    if op has an attribute named 'sub_block':
        Get the sub-block (`s_block`) from op's attribute.
        Invoke _append_backward_vars_(), with `block=s_block`

    for var_name in op.all_output_names():
        if block.has_var_recursive(var_name) or var_name is the name of empty variable:
            continue
        create a new variable named 'var_name' in block
        if grad_to_var.has_key(var_name):
            set grad_info_map[grad_to_var[var_name]] as a tuple of (var_name, block)
```
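After this pass, `grad_info_map` records, for every forward variable that received a gradient, the gradient's name and the index of the block holding it (as the docstring above describes). Schematically, with hypothetical variable names, its contents look like:

```python
# forward variable name -> (grad variable name, index of the block holding it)
grad_info_map = {
    "w":      ("w@GRAD", 0),
    "hidden": ("hidden@GRAD", 1),
}
```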
<spanid="backward-building"></span><h1>Backward Building<aclass="headerlink"href="#backward-building"title="Permalink to this headline">¶</a></h1>
<divclass="section"id="motivation">
<spanid="motivation"></span><h2>Motivation<aclass="headerlink"href="#motivation"title="Permalink to this headline">¶</a></h2>
<p>In Neural Network, most models are solved by the backpropagation algorithm(known as <strong>BP</strong>) at present. Technically, BP calculates the gradient of the loss function, then propagates it back through the networks following the chain rule. However, when configuring the model structure, users do not need to define the backward part. So a mechanism is required by the framework which can complete the model’s backward part automatically according to the given forward part.</p>
<p>When implementing a specific <codeclass="docutils literal"><spanclass="pre">op</span></code>, the developer is also asked to implement its backward version, called <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>. A <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> takes gradients of its corresponding <codeclass="docutils literal"><spanclass="pre">op</span></code>‘s outputs, and calculate gradients of the <codeclass="docutils literal"><spanclass="pre">op</span></code>‘s inputs. During the building of a model’s backward part, the framework creates each forward <codeclass="docutils literal"><spanclass="pre">op</span></code>‘s <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>, and then string them together in reverse order of forwarding part. In this way, gradients spread from the end to the beginning of the model, in another word, from the loss to parameters.</p>
</div>
<divclass="section"id="challenges">
<spanid="challenges"></span><h2>Challenges<aclass="headerlink"href="#challenges"title="Permalink to this headline">¶</a></h2>
<p>The motivation of backward building is apparent. However, implementation it correctly is not so easy. In the <strong>Fluid</strong> design, a deep learning model is described by <codeclass="docutils literal"><spanclass="pre">Program</span></code>, <codeclass="docutils literal"><spanclass="pre">Block</span></code>, <codeclass="docutils literal"><spanclass="pre">Op</span></code> and <codeclass="docutils literal"><spanclass="pre">Variable</span></code>. The <codeclass="docutils literal"><spanclass="pre">Block</span></code> itself can be nested. It means that the <codeclass="docutils literal"><spanclass="pre">op</span></code>s and <codeclass="docutils literal"><spanclass="pre">variable</span></code>s are scattered across different blocks rather than all be gathered in a single graph. Our backward building algorithm shall visit blocks in recursive order and be able to insert <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s and new created <codeclass="docutils literal"><spanclass="pre">variable</span></code>s into the right place.</p>
</div>
<divclass="section"id="usage">
<spanid="usage"></span><h2>Usage<aclass="headerlink"href="#usage"title="Permalink to this headline">¶</a></h2>
<p>Although the whole algorithm is comprised of many functions, only one is exposed as API:</p>
<spanclass="sd"> Append backward part to main_program</span>
<spanclass="sd"> Args:</span>
<spanclass="sd"> loss(Variable): The variable generated by the cost function.</span>
<spanclass="sd"> parameter_list(list): Parameters that need to be updated by optimizers.</span>
<spanclass="sd"> If None, it means all parameters need to be updated.</span>
<spanclass="sd"> no_grad_set(set): Variables that have no gradients in Block 0. </span>
<spanclass="sd"> If None, the set will be generated inside the function and </span>
<spanclass="sd"> contains all variables with `step_gradient=True` from all blocks.</span>
<spanclass="sd"></span>
<spanclass="sd"> Return:</span>
<spanclass="sd"> (list[Variable]): list of (parameters, gradients) pair.</span>
<spanclass="sd">"""</span>
</pre></div>
</div>
<p>By invoking this API, the framework appends backward part of the program where the <codeclass="docutils literal"><spanclass="pre">loss</span></code> is. It takes three arguments. <codeclass="docutils literal"><spanclass="pre">loss</span></code> means the final loss value. It must be a scalar and is usually the output of the loss layer. It is also where the gradient generated and backpropagation starts. <codeclass="docutils literal"><spanclass="pre">parameter_list</span></code> marks all parameters needs updating. If it’s <codeclass="docutils literal"><spanclass="pre">None</span></code>, all parameter will be updated by optimizers. <codeclass="docutils literal"><spanclass="pre">no_grad_set</span></code> marks variables without gradient. if all outputs of some <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> are in <codeclass="docutils literal"><spanclass="pre">no_grad_set</span></code>, the <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> will not be run.</p>
<p>This API will be invoked automatically before optimizer building.
As a result, in most cases, users do not need to invoke the API by themselves to append backward part.</p>
</div>
<divclass="section"id="implementation">
<spanid="implementation"></span><h2>Implementation<aclass="headerlink"href="#implementation"title="Permalink to this headline">¶</a></h2>
<p>The implementation of backward building algorithm is in <codeclass="docutils literal"><spanclass="pre">backward.py</span></code> file. The whole algorithm can be divided into two independent parts: creating <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s and creating new variables.</p>
<divclass="section"id="creating-grad-ops">
<spanid="creating-grad-ops"></span><h3>Creating <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s<aclass="headerlink"href="#creating-grad-ops"title="Permalink to this headline">¶</a></h3>
<p>The creating of <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s is implemented by:</p>
<p>Given a <codeclass="docutils literal"><spanclass="pre">block</span></code>, the function will traverses all <codeclass="docutils literal"><spanclass="pre">op</span></code>s in this block in reverse order, gets corresponding <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> from the C++ core via <codeclass="docutils literal"><spanclass="pre">core.get_grad_op_desc()</span></code>, then append it to <codeclass="docutils literal"><spanclass="pre">target_block</span></code>.</p>
<p>However, some specific <codeclass="docutils literal"><spanclass="pre">op</span></code>(e.g. <codeclass="docutils literal"><spanclass="pre">while_op</span></code>, <codeclass="docutils literal"><spanclass="pre">if_else_op</span></code>) can hold its own sub-block. For these sub-blocks contains <codeclass="docutils literal"><spanclass="pre">op</span></code>s as well, the <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> creating should be recursive.</p>
<p>During the reverse traversal, we check each <codeclass="docutils literal"><spanclass="pre">op</span></code> whether it has an attribute named <codeclass="docutils literal"><spanclass="pre">sub_block</span></code>. If so, it means there is a sub-block and we need to deal with it first. After creating a new block whose father is the one in <codeclass="docutils literal"><spanclass="pre">op</span></code>‘s attribute, we invoke <codeclass="docutils literal"><spanclass="pre">_append_backward_ops_()</span></code> recursively, assigning the new block to parameter <codeclass="docutils literal"><spanclass="pre">target_block</span></code> and the one in <codeclass="docutils literal"><spanclass="pre">op</span></code>‘s attribute to <codeclass="docutils literal"><spanclass="pre">block</span></code>. The <em>pseudo-code</em> shows this process:</p>
Get the sub-block(`s_block`) from op's attribute.
Create a new block(`grad_s_block`), whose father is `s_block`.
Invoke _append_backward_ops_(), with `block=s_block` and `target_block=grad_s_block`
Invoke `core.get_grad_op_desc()` to get op's grad_op.
Insert name correspondings between variables and their gradients of the grad_op to grad_to_var
Assign grad_s_block to grad_op as it's 'sub_block' attribute.
Append grad_op to current target_block.
</pre></div>
</div>
<p>The first invoking of <codeclass="docutils literal"><spanclass="pre">_append_backward_ops_()</span></code> is initiated by <codeclass="docutils literal"><spanclass="pre">append_backward()</span></code>, in which parameters <codeclass="docutils literal"><spanclass="pre">block</span></code> and <codeclass="docutils literal"><spanclass="pre">target_block</span></code> are all assigned with root block(the block with index 0).</p>
<spanid="corner-cases-of-grad-op-creating"></span><h3>Corner Cases of <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> Creating<aclass="headerlink"href="#corner-cases-of-grad-op-creating"title="Permalink to this headline">¶</a></h3>
<p>In the previous section, we show the regular process of <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> creating. However, in some corner cases, the conventional algorithm is not enough to get the correct result and appending handling is required. These additional processes run after the algorithm mentioned above and do some special adjusts on its output <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s.</p>
<divclass="section"id="shared-variables">
<spanid="shared-variables"></span><h4>Shared Variables<aclass="headerlink"href="#shared-variables"title="Permalink to this headline">¶</a></h4>
<p>If a variable is read by more than one <codeclass="docutils literal"><spanclass="pre">op</span></code> in the forward pass, its gradient is likely to be written by more than one <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s in the next backward pass. To make the gradient result being the sum of all <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s’ outputs instead of the last running one, we assign each output with a temporary variable and then add a <codeclass="docutils literal"><spanclass="pre">sum_op</span></code> to add them up.</p>
<p>For the debug convenience, if the final gradient name is <codeclass="docutils literal"><spanclass="pre">w@GRAD</span></code>, it’s corresponding temporary variables will be named as <codeclass="docutils literal"><spanclass="pre">w@GRAD@RENAME@0</span></code>, <codeclass="docutils literal"><spanclass="pre">w@GRAD@RENAME@1</span></code>...</p>
<p>See function <codeclass="docutils literal"><spanclass="pre">_addup_repetitive_outputs_</span></code> in <codeclass="docutils literal"><spanclass="pre">backward.py</span></code> for implementation details.</p>
</div>
<divclass="section"id="no-gradient-variables">
<spanid="no-gradient-variables"></span><h4>No Gradient Variables<aclass="headerlink"href="#no-gradient-variables"title="Permalink to this headline">¶</a></h4>
<p>In our framework, variables can be marked as <em>no_gradient</em>, it means that the gradient of this variable is unnecessary and can be considered as zero in model training. Apparently, when all the outputs of some <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> are marked as <em>no_gradient</em>, the <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> itself can be skipped in backward pass.</p>
<p>But these unnecessary gradients still need to be creating and initialized by something, otherwise following <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s who take these gradients as inputs take the risk of using uninitialized memory. In our code, we employ <codeclass="docutils literal"><spanclass="pre">fill_zeros_like_op</span></code> to initialize them as all zeros.</p>
<p>This features are implemented in function <codeclass="docutils literal"><spanclass="pre">_remove_no_grad_branch_</span></code>. It checks new created <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s one-by-one, removes whose outputs are all in <codeclass="docutils literal"><spanclass="pre">no_grad_set</span></code> or inserts <codeclass="docutils literal"><spanclass="pre">fill_zeros_like_op</span></code> when its necessary. We can get the <codeclass="docutils literal"><spanclass="pre">no_grad_set</span></code> from the <codeclass="docutils literal"><spanclass="pre">_append_backward_ops_</span></code> argument <codeclass="docutils literal"><spanclass="pre">no_grad_dict</span></code> or generate it on the fly by scanning all variables’<codeclass="docutils literal"><spanclass="pre">no_gradient</span></code> attribute(True or False).</p>
<spanid="creating-backward-variables"></span><h3>Creating Backward Variables<aclass="headerlink"href="#creating-backward-variables"title="Permalink to this headline">¶</a></h3>
<p>Up to now, we have completed all creating and adjusting jobs of <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s. However, backward variables have not been created. Now they are only represented by <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>‘s input and output arguments. The backward variable creating job will be done by:</p>
<spanclass="sd"> val(tuple): a tuple of (str, int), str is the corresponding grad name, int is the block index</span>
<spanclass="sd">"""</span>
</pre></div>
</div>
<p>Given a <codeclass="docutils literal"><spanclass="pre">block</span></code>, this function traverses all the <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s in it(The argument <codeclass="docutils literal"><spanclass="pre">start_op_idx</span></code> indicates where the grad_op sequence starts.) and creates all the uncreated outputs. The <em>pseudo-code</em> shows this process:</p>
<divclass="highlight-default"><divclass="highlight"><pre><span></span>for op in block.ops[start_op_idx : ]:
if op has an attribute named 'sub_block':
Get the sub-block(`s_block`) from op's attribute.
Invoke _append_backward_vars_(), with `block=s_block`
for var_name in op.all_output_names():
if block.has_var_recursive(var_name) or var_name is the name of empty variable:
continue
create a new variable named 'var_name' in block
if grad_to_var.has_key(var_name):
set grad_info_map[grad_to_var[var_name]] as a tuple of (var_name. block)
Built with <ahref="http://sphinx-doc.org/">Sphinx</a> using a <ahref="https://github.com/snide/sphinx_rtd_theme">theme</a> provided by <ahref="https://readthedocs.org">Read the Docs</a>.
In Neural Network, most models are solved by the backpropagation algorithm(known as **BP**) at present. Technically, BP calculates the gradient of the loss function, then propagates it back through the networks following the chain rule. However, when configuring the model structure, users do not need to define the backward part. So a mechanism is required by the framework which can complete the model's backward part automatically according to the given forward part.
When implementing a specific `op`, the developer is also asked to implement its backward version, called `grad_op`. A `grad_op` takes gradients of its corresponding `op`'s outputs, and calculate gradients of the `op`'s inputs. During the building of a model's backward part, the framework creates each forward `op`'s `grad_op`, and then string them together in reverse order of forwarding part. In this way, gradients spread from the end to the beginning of the model, in another word, from the loss to parameters.
## Challenges
The motivation of backward building is apparent. However, implementation it correctly is not so easy. In the **Fluid** design, a deep learning model is described by `Program`, `Block`, `Op` and `Variable`. The `Block` itself can be nested. It means that the `op`s and `variable`s are scattered across different blocks rather than all be gathered in a single graph. Our backward building algorithm shall visit blocks in recursive order and be able to insert `grad_op`s and new created `variable`s into the right place.
## Usage
Although the whole algorithm is comprised of many functions, only one is exposed as API:
loss(Variable): The variable generated by the cost function.
parameter_list(list): Parameters that need to be updated by optimizers.
If None, it means all parameters need to be updated.
no_grad_set(set): Variables that have no gradients in Block 0.
If None, the set will be generated inside the function and
contains all variables with `step_gradient=True` from all blocks.
Return:
(list[Variable]): list of (parameters, gradients) pair.
"""
```
By invoking this API, the framework appends backward part of the program where the `loss` is. It takes three arguments. `loss` means the final loss value. It must be a scalar and is usually the output of the loss layer. It is also where the gradient generated and backpropagation starts. `parameter_list` marks all parameters needs updating. If it's `None`, all parameter will be updated by optimizers. `no_grad_set` marks variables without gradient. if all outputs of some `grad_op` are in `no_grad_set`, the `grad_op` will not be run.
This API will be invoked automatically before optimizer building.
As a result, in most cases, users do not need to invoke the API by themselves to append backward part.
## Implementation
The implementation of backward building algorithm is in `backward.py` file. The whole algorithm can be divided into two independent parts: creating `grad_op`s and creating new variables.
### Creating `grad_op`s
The creating of `grad_op`s is implemented by:
```python
def _append_backward_ops_(target,
block,
target_block,
no_grad_dict,
grad_to_var):
"""
Create all grad ops, and insert them into given block
Args:
target(Variable): the target variable of forward pass
block(Block): the block where forward ops are
target_block(Block): the block which is going to hold new generated grad ops
no_grad_dict(dict):
key(int) block index
val(set) a set of varibale names. These varibales have no gradient
grad_to_var(dict)(output argument):
key(str): grad variable name
val(str): corresponding forward variable name
"""
```
Given a `block`, the function will traverses all `op`s in this block in reverse order, gets corresponding `grad_op` from the C++ core via `core.get_grad_op_desc()`, then append it to `target_block`.
However, some specific `op`(e.g. `while_op`, `if_else_op`) can hold its own sub-block. For these sub-blocks contains `op`s as well, the `grad_op` creating should be recursive.
During the reverse traversal, we check each `op` whether it has an attribute named `sub_block`. If so, it means there is a sub-block and we need to deal with it first. After creating a new block whose father is the one in `op`'s attribute, we invoke `_append_backward_ops_()` recursively, assigning the new block to parameter `target_block` and the one in `op`'s attribute to `block`. The *pseudo-code* shows this process:
```
******* pseudo-code ********
for op in reversed(block.ops):
if op has an attribute named 'sub_block':
Get the sub-block(`s_block`) from op's attribute.
Create a new block(`grad_s_block`), whose father is `s_block`.
Invoke _append_backward_ops_(), with `block=s_block` and `target_block=grad_s_block`
Invoke `core.get_grad_op_desc()` to get op's grad_op.
Insert name correspondings between variables and their gradients of the grad_op to grad_to_var
Assign grad_s_block to grad_op as it's 'sub_block' attribute.
Append grad_op to current target_block.
```
The first invoking of `_append_backward_ops_()` is initiated by `append_backward()`, in which parameters `block` and `target_block` are all assigned with root block(the block with index 0).
### Corner Cases of `grad_op` Creating
In the previous section, we show the regular process of `grad_op` creating. However, in some corner cases, the conventional algorithm is not enough to get the correct result and appending handling is required. These additional processes run after the algorithm mentioned above and do some special adjusts on its output `grad_op`s.
#### Shared Variables
If a variable is read by more than one `op` in the forward pass, its gradient is likely to be written by more than one `grad_op`s in the next backward pass. To make the gradient result being the sum of all `grad_op`s' outputs instead of the last running one, we assign each output with a temporary variable and then add a `sum_op` to add them up.
For the debug convenience, if the final gradient name is `w@GRAD`, it's corresponding temporary variables will be named as `w@GRAD@RENAME@0`, `w@GRAD@RENAME@1`...
See function `_addup_repetitive_outputs_` in `backward.py` for implementation details.
#### No Gradient Variables
In our framework, variables can be marked as *no_gradient*, it means that the gradient of this variable is unnecessary and can be considered as zero in model training. Apparently, when all the outputs of some `grad_op` are marked as *no_gradient*, the `grad_op` itself can be skipped in backward pass.
But these unnecessary gradients still need to be creating and initialized by something, otherwise following `grad_op`s who take these gradients as inputs take the risk of using uninitialized memory. In our code, we employ `fill_zeros_like_op` to initialize them as all zeros.
This features are implemented in function `_remove_no_grad_branch_`. It checks new created `grad_op`s one-by-one, removes whose outputs are all in `no_grad_set` or inserts `fill_zeros_like_op` when its necessary. We can get the `no_grad_set` from the `_append_backward_ops_` argument `no_grad_dict` or generate it on the fly by scanning all variables' `no_gradient` attribute(True or False).
### Creating Backward Variables
Up to now, we have completed all creating and adjusting jobs of `grad_op`s. However, backward variables have not been created. Now they are only represented by `grad_op`'s input and output arguments. The backward variable creating job will be done by:
```python
def _append_backward_vars_(block,
start_op_idx,
grad_to_var,
grad_info_map):
"""
Create new variables required by backward pass.
Args:
block(Block): the block where new variables will be created
start_op_idx(int): Only variables required by ops in block.ops[start_op_idx : ] will be created
grad_to_var(dict):
key(str): grad variable name
val(str): corresponding forward variable name
In most cases, this dict is generated by _append_backward_ops_()
grad_info_map(dict)(output argument):
key(str): forward variable name
val(tuple): a tuple of (str, int), str is the corresponding grad name, int is the block index
"""
```
Given a `block`, this function traverses all the `grad_op`s in it(The argument `start_op_idx` indicates where the grad_op sequence starts.) and creates all the uncreated outputs. The *pseudo-code* shows this process:
```
for op in block.ops[start_op_idx : ]:
if op has an attribute named 'sub_block':
Get the sub-block(`s_block`) from op's attribute.
Invoke _append_backward_vars_(), with `block=s_block`
for var_name in op.all_output_names():
if block.has_var_recursive(var_name) or var_name is the name of empty variable:
continue
create a new variable named 'var_name' in block
if grad_to_var.has_key(var_name):
set grad_info_map[grad_to_var[var_name]] as a tuple of (var_name. block)
<p>In Neural Network, most models are solved by the backpropagation algorithm(known as <strong>BP</strong>) at present. Technically, BP calculates the gradient of the loss function, then propagates it back through the networks following the chain rule. However, when configuring the model structure, users do not need to define the backward part. So a mechanism is required by the framework which can complete the model’s backward part automatically according to the given forward part.</p>
<p>When implementing a specific <codeclass="docutils literal"><spanclass="pre">op</span></code>, the developer is also asked to implement its backward version, called <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>. A <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> takes gradients of its corresponding <codeclass="docutils literal"><spanclass="pre">op</span></code>‘s outputs, and calculate gradients of the <codeclass="docutils literal"><spanclass="pre">op</span></code>‘s inputs. During the building of a model’s backward part, the framework creates each forward <codeclass="docutils literal"><spanclass="pre">op</span></code>‘s <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>, and then string them together in reverse order of forwarding part. In this way, gradients spread from the end to the beginning of the model, in another word, from the loss to parameters.</p>
<p>The motivation of backward building is apparent. However, implementation it correctly is not so easy. In the <strong>Fluid</strong> design, a deep learning model is described by <codeclass="docutils literal"><spanclass="pre">Program</span></code>, <codeclass="docutils literal"><spanclass="pre">Block</span></code>, <codeclass="docutils literal"><spanclass="pre">Op</span></code> and <codeclass="docutils literal"><spanclass="pre">Variable</span></code>. The <codeclass="docutils literal"><spanclass="pre">Block</span></code> itself can be nested. It means that the <codeclass="docutils literal"><spanclass="pre">op</span></code>s and <codeclass="docutils literal"><spanclass="pre">variable</span></code>s are scattered across different blocks rather than all be gathered in a single graph. Our backward building algorithm shall visit blocks in recursive order and be able to insert <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s and new created <codeclass="docutils literal"><spanclass="pre">variable</span></code>s into the right place.</p>
<spanclass="sd"> Append backward part to main_program</span>
<spanclass="sd"> Args:</span>
<spanclass="sd"> loss(Variable): The variable generated by the cost function.</span>
<spanclass="sd"> parameter_list(list): Parameters that need to be updated by optimizers.</span>
<spanclass="sd"> If None, it means all parameters need to be updated.</span>
<spanclass="sd"> no_grad_set(set): Variables that have no gradients in Block 0. </span>
<spanclass="sd"> If None, the set will be generated inside the function and </span>
<spanclass="sd"> contains all variables with `step_gradient=True` from all blocks.</span>
<spanclass="sd"></span>
<spanclass="sd"> Return:</span>
<spanclass="sd"> (list[Variable]): list of (parameters, gradients) pair.</span>
<spanclass="sd">"""</span>
</pre></div>
</div>
<p>By invoking this API, the framework appends backward part of the program where the <codeclass="docutils literal"><spanclass="pre">loss</span></code> is. It takes three arguments. <codeclass="docutils literal"><spanclass="pre">loss</span></code> means the final loss value. It must be a scalar and is usually the output of the loss layer. It is also where the gradient generated and backpropagation starts. <codeclass="docutils literal"><spanclass="pre">parameter_list</span></code> marks all parameters needs updating. If it’s <codeclass="docutils literal"><spanclass="pre">None</span></code>, all parameter will be updated by optimizers. <codeclass="docutils literal"><spanclass="pre">no_grad_set</span></code> marks variables without gradient. if all outputs of some <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> are in <codeclass="docutils literal"><spanclass="pre">no_grad_set</span></code>, the <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> will not be run.</p>
<p>This API will be invoked automatically before optimizer building.
As a result, in most cases, users do not need to invoke the API by themselves to append backward part.</p>
<p>The implementation of backward building algorithm is in <codeclass="docutils literal"><spanclass="pre">backward.py</span></code> file. The whole algorithm can be divided into two independent parts: creating <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s and creating new variables.</p>
<p>Given a <codeclass="docutils literal"><spanclass="pre">block</span></code>, the function will traverses all <codeclass="docutils literal"><spanclass="pre">op</span></code>s in this block in reverse order, gets corresponding <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> from the C++ core via <codeclass="docutils literal"><spanclass="pre">core.get_grad_op_desc()</span></code>, then append it to <codeclass="docutils literal"><spanclass="pre">target_block</span></code>.</p>
<p>However, some specific <codeclass="docutils literal"><spanclass="pre">op</span></code>(e.g. <codeclass="docutils literal"><spanclass="pre">while_op</span></code>, <codeclass="docutils literal"><spanclass="pre">if_else_op</span></code>) can hold its own sub-block. For these sub-blocks contains <codeclass="docutils literal"><spanclass="pre">op</span></code>s as well, the <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> creating should be recursive.</p>
<p>During the reverse traversal, we check each <codeclass="docutils literal"><spanclass="pre">op</span></code> whether it has an attribute named <codeclass="docutils literal"><spanclass="pre">sub_block</span></code>. If so, it means there is a sub-block and we need to deal with it first. After creating a new block whose father is the one in <codeclass="docutils literal"><spanclass="pre">op</span></code>‘s attribute, we invoke <codeclass="docutils literal"><spanclass="pre">_append_backward_ops_()</span></code> recursively, assigning the new block to parameter <codeclass="docutils literal"><spanclass="pre">target_block</span></code> and the one in <codeclass="docutils literal"><spanclass="pre">op</span></code>‘s attribute to <codeclass="docutils literal"><spanclass="pre">block</span></code>. The <em>pseudo-code</em> shows this process:</p>
Get the sub-block(`s_block`) from op's attribute.
Create a new block(`grad_s_block`), whose father is `s_block`.
Invoke _append_backward_ops_(), with `block=s_block` and `target_block=grad_s_block`
Invoke `core.get_grad_op_desc()` to get op's grad_op.
Insert name correspondings between variables and their gradients of the grad_op to grad_to_var
Assign grad_s_block to grad_op as it's 'sub_block' attribute.
Append grad_op to current target_block.
</pre></div>
</div>
<p>The first invoking of <codeclass="docutils literal"><spanclass="pre">_append_backward_ops_()</span></code> is initiated by <codeclass="docutils literal"><spanclass="pre">append_backward()</span></code>, in which parameters <codeclass="docutils literal"><spanclass="pre">block</span></code> and <codeclass="docutils literal"><spanclass="pre">target_block</span></code> are all assigned with root block(the block with index 0).</p>
<spanid="corner-cases-of-grad-op-creating"></span><h3>Corner Cases of <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> Creating<aclass="headerlink"href="#corner-cases-of-grad-op-creating"title="永久链接至标题">¶</a></h3>
<p>In the previous section, we show the regular process of <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> creating. However, in some corner cases, the conventional algorithm is not enough to get the correct result and appending handling is required. These additional processes run after the algorithm mentioned above and do some special adjusts on its output <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s.</p>
<p>If a variable is read by more than one <codeclass="docutils literal"><spanclass="pre">op</span></code> in the forward pass, its gradient is likely to be written by more than one <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s in the next backward pass. To make the gradient result being the sum of all <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s’ outputs instead of the last running one, we assign each output with a temporary variable and then add a <codeclass="docutils literal"><spanclass="pre">sum_op</span></code> to add them up.</p>
<p>For the debug convenience, if the final gradient name is <codeclass="docutils literal"><spanclass="pre">w@GRAD</span></code>, it’s corresponding temporary variables will be named as <codeclass="docutils literal"><spanclass="pre">w@GRAD@RENAME@0</span></code>, <codeclass="docutils literal"><spanclass="pre">w@GRAD@RENAME@1</span></code>...</p>
<p>See function <codeclass="docutils literal"><spanclass="pre">_addup_repetitive_outputs_</span></code> in <codeclass="docutils literal"><spanclass="pre">backward.py</span></code> for implementation details.</p>
<p>In our framework, variables can be marked as <em>no_gradient</em>, it means that the gradient of this variable is unnecessary and can be considered as zero in model training. Apparently, when all the outputs of some <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> are marked as <em>no_gradient</em>, the <codeclass="docutils literal"><spanclass="pre">grad_op</span></code> itself can be skipped in backward pass.</p>
<p>But these unnecessary gradients still need to be creating and initialized by something, otherwise following <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s who take these gradients as inputs take the risk of using uninitialized memory. In our code, we employ <codeclass="docutils literal"><spanclass="pre">fill_zeros_like_op</span></code> to initialize them as all zeros.</p>
<p>This features are implemented in function <codeclass="docutils literal"><spanclass="pre">_remove_no_grad_branch_</span></code>. It checks new created <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s one-by-one, removes whose outputs are all in <codeclass="docutils literal"><spanclass="pre">no_grad_set</span></code> or inserts <codeclass="docutils literal"><spanclass="pre">fill_zeros_like_op</span></code> when its necessary. We can get the <codeclass="docutils literal"><spanclass="pre">no_grad_set</span></code> from the <codeclass="docutils literal"><spanclass="pre">_append_backward_ops_</span></code> argument <codeclass="docutils literal"><spanclass="pre">no_grad_dict</span></code> or generate it on the fly by scanning all variables’<codeclass="docutils literal"><spanclass="pre">no_gradient</span></code> attribute(True or False).</p>
<p>Up to now, we have completed all creating and adjusting jobs of <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s. However, backward variables have not been created. Now they are only represented by <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>‘s input and output arguments. The backward variable creating job will be done by:</p>
<spanclass="sd"> val(tuple): a tuple of (str, int), str is the corresponding grad name, int is the block index</span>
<spanclass="sd">"""</span>
</pre></div>
</div>
<p>Given a <codeclass="docutils literal"><spanclass="pre">block</span></code>, this function traverses all the <codeclass="docutils literal"><spanclass="pre">grad_op</span></code>s in it(The argument <codeclass="docutils literal"><spanclass="pre">start_op_idx</span></code> indicates where the grad_op sequence starts.) and creates all the uncreated outputs. The <em>pseudo-code</em> shows this process:</p>
<divclass="highlight-default"><divclass="highlight"><pre><span></span>for op in block.ops[start_op_idx : ]:
if op has an attribute named 'sub_block':
Get the sub-block(`s_block`) from op's attribute.
Invoke _append_backward_vars_(), with `block=s_block`
for var_name in op.all_output_names():
if block.has_var_recursive(var_name) or var_name is the name of empty variable:
continue
create a new variable named 'var_name' in block
if grad_to_var.has_key(var_name):
set grad_info_map[grad_to_var[var_name]] as a tuple of (var_name. block)
Built with <ahref="http://sphinx-doc.org/">Sphinx</a> using a <ahref="https://github.com/snide/sphinx_rtd_theme">theme</a> provided by <ahref="https://readthedocs.org">Read the Docs</a>.