# Backward Building

## Motivation

In neural networks, most models are currently trained with the backpropagation algorithm (known as **BP**). Technically, BP calculates the gradient of the loss function, then propagates it back through the network following the chain rule. However, when configuring the model structure, users do not need to define the backward part. So the framework requires a mechanism that can complete the model's backward part automatically according to the given forward part.

When implementing a certain `op`, the developer is also asked to implement its backward version, called `grad_op`. A `grad_op` takes the gradients of its corresponding `op`'s outputs and calculates the gradients of the `op`'s inputs. While building a model's backward part, the framework creates each forward `op`'s `grad_op` and strings them together in the reverse order of the forward part. In this way, gradients propagate from the end of the model to its beginning, in other words, from the loss to the parameters.
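
For example, for the (simplified) `mul` operator, the forward `op` and its `grad_op` relate roughly as follows, using the `@GRAD` naming convention that appears later in this document:

```
forward:   mul_op:       X, Y            ->  Out
backward:  mul_grad_op:  X, Y, Out@GRAD  ->  X@GRAD, Y@GRAD
```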

## Challenges

The motivation of backward building is obvious. However, implementing it correctly is not so easy. In the **Fluid** design, a deep learning model is described by `Program`, `Block`, `Op` and `Variable`. A `Block` can itself be nested, which means that the `op`s and `variable`s are scattered across different blocks rather than gathered in a single graph. Our backward building algorithm therefore has to visit blocks recursively and insert `grad_op`s and newly created `variable`s into the right places.
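
As a rough illustration, a program containing a `while_op` (one of the `op`s that holds a sub-block, discussed below) is organized like this:

```
Program
  Block 0 (root):            variables, forward ops ..., while_op (attr `sub_block` -> Block 1), ...
  Block 1 (parent: Block 0): variables and ops of the loop body
```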

## Usage

Although the whole algorithm consists of many functions, only one is exposed as an API:

```python
def append_backward(loss, parameter_list=None, no_grad_set=None):
    """
    Append backward part to main_program

    Args:
        loss(Variable): The variable generated by the cost function.
        parameter_list(list): Parameters that need to be updated by the optimizer.
            If None, it means all parameters need to be updated.

        no_grad_set(set): Variables that have no gradients in Block 0. 
            If None, the set will be generated inside the function and 
            contains all variables with `stop_gradient=True` from all blocks.
        
    Return:
        (list[Variable]): list of (parameter, gradient) pairs.
    """
```

By invoking this API, the framework appends the backward part to the program that the `loss` belongs to. The API takes three arguments. `loss` is the final loss value; it must be a scalar, is usually the output of the loss layer, and is where the gradient is generated and backpropagation starts. `parameter_list` marks all parameters that need updating; if it is `None`, all parameters will be updated by optimizers. `no_grad_set` marks variables without gradients; if all outputs of some `grad_op` are in `no_grad_set`, that `grad_op` will not be run.

This API will be invoked automatically before optimizer building. As a result, in most cases, users do not need to invoke the API by themselves to append the backward part.
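
For illustration only, a direct invocation might look like the following sketch. It assumes the `paddle.fluid` Python API, and the network itself (`image`, `prediction`, etc.) is just a placeholder example:

```python
import paddle.fluid as fluid

# A toy forward network ending in a scalar loss (placeholder example).
image = fluid.layers.data(name='image', shape=[784], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
prediction = fluid.layers.fc(input=image, size=10, act='softmax')
loss = fluid.layers.mean(
    fluid.layers.cross_entropy(input=prediction, label=label))

# Append the backward part explicitly; it returns (parameter, gradient) pairs.
# Normally the optimizer (e.g. `optimizer.minimize(loss)`) triggers this step for you.
param_grads = fluid.backward.append_backward(loss)
```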

## Implementation

The backward building algorithm is implemented in the file `backward.py`. The whole algorithm can be divided into two independent parts: creating `grad_op`s and creating new variables.

### Creating `grad_op`s

The creation of `grad_op`s is implemented by:

```python
def _append_backward_ops_(target,
                          block,
                          target_block,
                          no_grad_dict,
                          grad_to_var):
    """
    Create all grad ops, and insert them into the given block

    Args:
        target(Variable): the target variable of forward pass
        block(Block): the block where forward ops are
        target_block(Block): the block which is going to hold the newly generated grad ops
        no_grad_dict(dict):
            key(int): block index
            val(set): a set of variable names. These variables have no gradient.
        grad_to_var(dict)(output argument):
            key(str): grad variable name
            val(str): corresponding forward variable name
    """
```

Given a `block`, the function traverses all `op`s in this block in reverse order, gets each corresponding `grad_op` from the C++ core via `core.get_grad_op_desc()`, and then appends it to `target_block`.

However, some specific `op`s (e.g. `while_op`, `if_else_op`) can hold their own sub-blocks. Since these sub-blocks contain `op`s as well, `grad_op` creation has to be recursive.

During the reverse traversal, we check whether each `op` has an attribute named `sub_block`. If so, there is a sub-block that we need to deal with first. After creating a new block whose parent is the one referred to by the `op`'s attribute, we invoke `_append_backward_ops_()` recursively, assigning the new block to the parameter `target_block` and the one in the `op`'s attribute to `block`. The *pseudo-code* below shows this process:

```
******* pseudo-code ********
for op in reversed(block.ops):
    if op has an attribute named 'sub_block':
        Get the sub-block(`s_block`) from op's attribute.
        Create a new block(`grad_s_block`), whose parent is `s_block`.
        Invoke _append_backward_ops_(), with `block=s_block` and `target_block=grad_s_block`

    Invoke `core.get_grad_op_desc()` to get op's grad_op.
    Insert the name correspondence between the grad_op's variables and their gradients into grad_to_var.
    Assign grad_s_block to grad_op as its 'sub_block' attribute.
    Append grad_op to current target_block.
```

The first invocation of `_append_backward_ops_()` is initiated by `append_backward()`, in which both the `block` and `target_block` parameters are assigned the root block (the block with index 0).

### Corner Cases of `grad_op` Creation

The previous section shows the regular process of `grad_op` creation. However, in some corner cases the regular algorithm is not enough to get the correct result, and additional handling is required. These additional processes run after the above-mentioned algorithm and make some special adjustments to its output `grad_op`s.

#### Shared Variables

If a variable is read by more than one `op` in the forward pass, its gradient is likely to be written by more than one `grad_op` in the following backward pass. To make the gradient result the sum of all `grad_op`s' outputs instead of only the last one that runs, we assign each output to a temporary variable and then add a `sum_op` to add them up.

For debugging convenience, if the final gradient name is `w@GRAD`, its corresponding temporary variables will be named `w@GRAD@RENAME@0`, `w@GRAD@RENAME@1`, and so on.
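
As a schematic example (the op and variable names are made up for illustration), suppose a parameter `w` is read by two forward `op`s:

```
forward:   h   = op1(x, w)
           out = op2(h, w)                           # w is read twice

backward:  op2_grad  writes  w@GRAD@RENAME@0
           op1_grad  writes  w@GRAD@RENAME@1
           sum_op(w@GRAD@RENAME@0, w@GRAD@RENAME@1)  ->  w@GRAD
```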

<figure class="center">
<img src="./images/duplicate_op.png" width="45%" >
<img src="images/duplicate_op2.png" width="45%" >
</figure>

See function `_addup_repetitive_outputs_` in `backward.py` for implementation details.

#### No Gradient Variables

In our framework, variables can be marked as *no_gradient*, which means that the gradient of the variable is unnecessary and can be considered zero in model training. Obviously, when all the outputs of some `grad_op` are marked as *no_gradient*, the `grad_op` itself can be skipped in the backward pass.

But these unnecessary gradients still need to be created and initialized by something, otherwise the following `grad_op`s that take these gradients as inputs risk using uninitialized memory. In our code, we employ `fill_zeros_like_op` to initialize them with all zeros.

This feature is implemented in the function `_remove_no_grad_branch_`. It checks newly created `grad_op`s one by one, removes those whose outputs are all in `no_grad_set`, and inserts `fill_zeros_like_op` where necessary. We can get the `no_grad_set` from the `_append_backward_ops_` argument `no_grad_dict` or generate it on the fly by scanning all variables' `no_gradient` attribute (True or False).
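
As a schematic example (again with made-up op and variable names), suppose `t` is marked as *no_gradient* in the forward chain `x -> op1 -> t -> op2 -> y`:

```
regular backward:   op2_grad: y@GRAD -> t@GRAD      op1_grad: t@GRAD -> x@GRAD

after adjustment:   op2_grad is removed, since its only output t@GRAD is in no_grad_set
                    fill_zeros_like_op: t -> t@GRAD (all zeros)
                    op1_grad: t@GRAD -> x@GRAD      (still gets an initialized input)
```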

### Creating Backward Variables

Up to now, we have completed all of the creation and adjustment work on `grad_op`s. However, the backward variables have not been created yet; so far they are only represented by the `grad_op`s' input and output arguments. Creating the backward variables is done by:

```python
def _append_backward_vars_(block, 
                           start_op_idx, 
                           grad_to_var, 
                           grad_info_map):
    """
    Create new variables required by backward pass.

    Args:
        block(Block): the block where new variables will be created
        start_op_idx(int): Only variables required by ops in block.ops[start_op_idx : ] will be created
        grad_to_var(dict):
            key(str): grad variable name
            val(str): corresponding forward variable name
            In most cases, this dict is generated by _append_backward_ops_()
        grad_info_map(dict)(output argument):
            key(str): forward variable name
            val(tuple): a tuple of (str, int), str is the corresponding grad name, int is the block index
    """
```

Given a `block`, this function traverses all the `grad_op`s in it (the argument `start_op_idx` indicates where the grad_op sequence starts) and creates all of their outputs that have not been created yet. The *pseudo-code* below shows this process:

```
for op in block.ops[start_op_idx : ]:

    if op has an attribute named 'sub_block':
        Get the sub-block(`s_block`) from op's attribute.
        Invoke _append_backward_vars_(), with `block=s_block`
        
    for var_name in op.all_output_names():
        if block.has_var_recursive(var_name) or var_name is the name of the empty variable:
            continue
        create a new variable named 'var_name' in block
        if grad_to_var.has_key(var_name):
            set grad_info_map[grad_to_var[var_name]] as a tuple of (var_name, block)
            
    do op's var type inference
    do op's shape inference
```