Backward Building
Motivation
In neural networks, most models are currently trained by the backpropagation algorithm (known as BP). Technically, BP calculates the gradient of the loss function and then propagates it back through the network following the chain rule. However, when configuring the model structure, users do not need to define the backward part. Therefore the framework requires a mechanism that can complete the model's backward part automatically from the given forward part.
When implementing a specific op, the developer is also asked to implement its backward version, called a grad_op. A grad_op takes the gradients of its corresponding op's outputs and calculates the gradients of the op's inputs. During the building of a model's backward part, the framework creates each forward op's grad_op and strings them together in the reverse order of the forward part. In this way, gradients spread from the end to the beginning of the model, in other words, from the loss to the parameters.
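As a concrete, much-simplified illustration of this op/grad_op pairing, consider an element-wise multiply out = x * w. The hypothetical grad function below takes the gradient of the output and produces the gradients of both inputs by the chain rule; this is a plain-Python sketch for intuition only, not how ops are implemented in the framework:

import numpy as np

def mul_op(x, w):
    # Forward op: out = x * w (element-wise, for simplicity).
    return x * w

def mul_grad_op(x, w, d_out):
    # grad_op: takes the gradient of the forward op's output (d_out)
    # and returns the gradients of the forward op's inputs.
    d_x = d_out * w
    d_w = d_out * x
    return d_x, d_w

x = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])
out = mul_op(x, w)
d_out = np.ones_like(out)            # gradient flowing back from the loss
d_x, d_w = mul_grad_op(x, w, d_out)  # gradients propagated to the inputs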
Challenges
The motivation of backward building is apparent. However, implementing it correctly is not so easy. In the Fluid design, a deep learning model is described by Program, Block, Op and Variable. A Block can itself be nested, which means that ops and variables are scattered across different blocks rather than all being gathered in a single graph. Our backward building algorithm must visit blocks recursively and be able to insert grad_ops and newly created variables into the right places.
Usage
Although the whole algorithm is composed of many functions, only one is exposed as an API:
def append_backward(loss, parameter_list=None, no_grad_set=None):
    """
    Append backward part to main_program

    Args:
        loss(Variable): The variable generated by the cost function.
        parameter_list(list): Parameters that need to be updated by optimizers.
            If None, it means all parameters need to be updated.
        no_grad_set(set): Variables that have no gradients in Block 0.
            If None, the set will be generated inside the function and
            contains all variables with `stop_gradient=True` from all blocks.

    Return:
        (list[Variable]): list of (parameter, gradient) pairs.
    """
By invoking this API, the framework appends the backward part to the program where the loss is. It takes three arguments. loss is the final loss value; it must be a scalar and is usually the output of the loss layer. It is also where the gradient is generated and where backpropagation starts. parameter_list marks all parameters that need updating. If it is None, all parameters will be updated by optimizers. no_grad_set marks variables without gradients; if all outputs of a grad_op are in no_grad_set, the grad_op will not be run.
This API is invoked automatically before optimizer building. As a result, in most cases, users do not need to invoke the API themselves to append the backward part.
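For reference, a minimal sketch of calling the API explicitly is shown below. The layer calls follow the Fluid Python API of the same generation as this document; treat the exact layer names and module paths as assumptions rather than an exact recipe:

import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=1)
cost = fluid.layers.square_error_cost(input=y_predict, label=y)
avg_cost = fluid.layers.mean(cost)

# Append the backward part of the program explicitly. The returned list
# holds (parameter, gradient) pairs that an optimizer can consume.
params_grads = fluid.backward.append_backward(loss=avg_cost)

When an optimizer's minimize() is used instead, this call happens internally, which is why most users never invoke it directly.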
Implementation
The implementation of the backward building algorithm is in the backward.py file. The whole algorithm can be divided into two independent parts: creating grad_ops and creating new variables.
Creating grad_ops
The creation of grad_ops is implemented by:
def _append_backward_ops_(target,
                          block,
                          target_block,
                          no_grad_dict,
                          grad_to_var):
    """
    Create all grad ops, and insert them into given block

    Args:
        target(Variable): the target variable of forward pass
        block(Block): the block where forward ops are
        target_block(Block): the block which is going to hold new generated grad ops
        no_grad_dict(dict):
            key(int): block index
            val(set): a set of variable names. These variables have no gradient
        grad_to_var(dict)(output argument):
            key(str): grad variable name
            val(str): corresponding forward variable name
    """
Given a block, the function traverses all ops in this block in reverse order, gets the corresponding grad_op from the C++ core via core.get_grad_op_desc(), and then appends it to target_block.
However, some specific ops (e.g. while_op, if_else_op) can hold their own sub-blocks. Since these sub-blocks contain ops as well, grad_op creation should be recursive.
During the reverse traversal, we check each op for an attribute named sub_block. If it exists, there is a sub-block and we need to deal with it first. After creating a new block whose father is the one in the op's attribute, we invoke _append_backward_ops_() recursively, assigning the new block to the parameter target_block and the one in the op's attribute to block. The following pseudo-code shows this process:
******* pseudo-code ********
for op in reversed(block.ops):
    if op has an attribute named 'sub_block':
        Get the sub-block (`s_block`) from op's attribute.
        Create a new block (`grad_s_block`), whose father is `s_block`.
        Invoke _append_backward_ops_(), with `block=s_block` and `target_block=grad_s_block`
    Invoke `core.get_grad_op_desc()` to get op's grad_op.
    Insert the name correspondences between the grad_op's variables and their gradients into grad_to_var.
    Assign grad_s_block to grad_op as its 'sub_block' attribute.
    Append grad_op to the current target_block.
The first invocation of _append_backward_ops_() is initiated by append_backward(), in which the parameters block and target_block are both assigned the root block (the block with index 0).
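To make the recursion concrete, the following self-contained toy model reproduces just the traversal order and the sub-block handling. The Op, Block and Program classes here are deliberately minimal stand-ins, not the Fluid classes, and no variables or C++ core calls are involved:

class Op(object):
    def __init__(self, type, sub_block=None):
        self.type = type
        self.sub_block = sub_block        # plays the role of the 'sub_block' attribute

class Block(object):
    def __init__(self, idx, parent_idx=-1):
        self.idx = idx
        self.parent_idx = parent_idx
        self.ops = []

class Program(object):
    def __init__(self):
        self.blocks = [Block(0)]          # the root block has index 0
    def create_block(self, parent_idx):
        block = Block(len(self.blocks), parent_idx)
        self.blocks.append(block)
        return block

def append_backward_ops(program, block, target_block):
    # Traverse forward ops in reverse order; recurse into sub-blocks first.
    for op in reversed(block.ops):
        grad_sub_block = None
        if op.sub_block is not None:
            grad_sub_block = program.create_block(parent_idx=op.sub_block.idx)
            append_backward_ops(program, op.sub_block, grad_sub_block)
        grad_op = Op(op.type + '_grad', sub_block=grad_sub_block)
        target_block.ops.append(grad_op)

prog = Program()
root = prog.blocks[0]
loop_body = prog.create_block(parent_idx=0)
loop_body.ops.append(Op('mul'))
root.ops += [Op('mul'), Op('while', sub_block=loop_body), Op('mean')]

append_backward_ops(prog, root, root)
print([op.type for op in root.ops])
# ['mul', 'while', 'mean', 'mean_grad', 'while_grad', 'mul_grad']
print([op.type for op in prog.blocks[2].ops])   # grad block of the loop body
# ['mul_grad']

In the real implementation the loop also consults core.get_grad_op_desc() and maintains grad_to_var, as shown in the pseudo-code above.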
Corner Cases of grad_op Creation
In the previous section, we showed the regular process of grad_op creation. However, in some corner cases the conventional algorithm is not enough to get the correct result and additional handling is required. These additional processes run after the algorithm mentioned above and make some special adjustments to its output grad_ops.
No Gradient Variables
In our framework, variables can be marked as no_gradient, which means that the gradient of the variable is unnecessary and can be considered zero during model training. Apparently, when all the outputs of a grad_op are marked as no_gradient, the grad_op itself can be skipped in the backward pass.

Another situation is when all the gradient inputs of a grad_op are marked as no_gradient, which means all of them can be considered zeros. Since grad_ops are in essence the propagation of gradients, all the outputs are definitely zeros when all the gradient inputs are zeros. Therefore the grad_op can also be skipped.
It should be noted that all these zero gradients still need to be created and initialized by something, otherwise the following grad_ops that take these gradients as inputs run the risk of using uninitialized memory. In our code, we employ fill_zeros_like_op to initialize them as all zeros.
This feature is implemented in the function _remove_no_grad_branch_. It checks newly created grad_ops one by one, removes those that can be skipped, and inserts fill_zeros_like_op where necessary. We can get the no_grad_set from the _append_backward_ops_ argument no_grad_dict, or generate it on the fly by scanning all variables' no_gradient attribute (True or False).
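The pruning rule itself is simple enough to state directly. The helper below is a hedged sketch of the per-grad_op decision, assuming grad ops are represented just by lists of input and output gradient names; it is a simplification of what _remove_no_grad_branch_ actually does, and the variable names are illustrative:

def can_skip_grad_op(grad_inputs, grad_outputs, no_grad_set):
    """Return True if a grad_op can be removed from the backward pass."""
    # Case 1: nobody wants any of its output gradients.
    all_outputs_unwanted = all(name in no_grad_set for name in grad_outputs)
    # Case 2: all of its input gradients are known to be zeros,
    # so every output gradient it would produce is zero as well.
    all_inputs_zero = all(name in no_grad_set for name in grad_inputs)
    return all_outputs_unwanted or all_inputs_zero

# Example: all the output gradients of this grad_op are marked no_gradient,
# so the grad_op can be pruned; fill_zeros_like_op must then provide any
# zero gradients that downstream grad_ops still consume.
print(can_skip_grad_op(['out@GRAD'], ['x@GRAD', 'w@GRAD'], {'x@GRAD', 'w@GRAD'}))  # True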
Creating Backward Variables
Up to now, we have completed all the creating and adjusting jobs for grad_ops. However, backward variables have not been created yet; they are only represented by grad_op's input and output arguments. The backward variable creation job is done by:
def _append_backward_vars_(block,
                           start_op_idx,
                           grad_to_var,
                           grad_info_map):
    """
    Create new variables required by backward pass.

    Args:
        block(Block): the block where new variables will be created
        start_op_idx(int): Only variables required by ops in block.ops[start_op_idx : ] will be created
        grad_to_var(dict):
            key(str): grad variable name
            val(str): corresponding forward variable name
            In most cases, this dict is generated by _append_backward_ops_()
        grad_info_map(dict)(output argument):
            key(str): forward variable name
            val(tuple): a tuple of (str, int), str is the corresponding grad name, int is the block index
    """
Given a block, this function traverses all the grad_ops in it (the argument start_op_idx indicates where the grad_op sequence starts) and creates all of their uncreated outputs. The following pseudo-code shows this process:
for op in block.ops[start_op_idx : ]:
    if op has an attribute named 'sub_block':
        Get the sub-block (`s_block`) from op's attribute.
        Invoke _append_backward_vars_(), with `block=s_block`
    for var_name in op.all_output_names():
        if block.has_var_recursive(var_name) or var_name is the name of empty variable:
            continue
        create a new variable named 'var_name' in block
        if grad_to_var.has_key(var_name):
            set grad_info_map[grad_to_var[var_name]] as a tuple of (var_name, block)
    do op's var type inference
    do op's shape inference
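To illustrate the two dictionaries that tie these steps together (the variable names below are hypothetical; the @GRAD suffix follows the framework's gradient naming convention):

# grad_to_var is produced while creating grad_ops: it maps each gradient
# variable name to the forward variable it corresponds to.
grad_to_var = {
    'fc_0.w@GRAD': 'fc_0.w',
    'fc_0.b@GRAD': 'fc_0.b',
}

# grad_info_map is filled in by _append_backward_vars_(): it maps each
# forward variable name to (its gradient's name, index of the block where
# that gradient variable was created).
grad_info_map = {
    'fc_0.w': ('fc_0.w@GRAD', 0),
    'fc_0.b': ('fc_0.b@GRAD', 0),
}

With grad_info_map filled in, append_backward() can look up the gradient variable of each parameter and finally return the (parameter, gradient) pairs described in its docstring.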