# Design Doc: Block and Scope

## The Representation of Computation

Both deep learning systems and programming languages help users describe computation procedures.  These systems use various representations of computation:

- Caffe, Torch, and Paddle: sequences of layers.
- TensorFlow, Caffe2, MXNet: graphs of operators.
- PaddlePaddle: nested blocks, like C++ and Java programs.

## Block in Programming Languages and Deep Learning

In programming languages, a block is a pair of curly braces that includes local variable definitions and a sequence of instructions or operators.

Blocks work with control flow structures like `if`, `else`, and `for`, which have equivalents in deep learning:

<table>
<thead>
<tr>
<th>programming languages</th>
<th>PaddlePaddle</th>
</tr>
</thead>
<tbody>
<tr>
<td>for, while loop </td>
<td>RNN, WhileOp </td>
</tr>
<tr>
<td>if, if-else, switch </td>
<td>IfElseOp, SwitchOp </td>
</tr>
<tr>
<td>sequential execution </td>
<td>a sequence of layers </td>
</tr>
</tbody>
</table>


A key difference is that a C++ program describes a one-pass computation, whereas a deep learning program describes both the forward and backward passes.

## Stack Frames and the Scope Hierarchy

The existence of the backward pass makes the execution of a PaddlePaddle block different from that of a traditional program:

<table>
<thead>
<tr>
<th>programming languages</th>
<th>PaddlePaddle</th>
</tr>
</thead>
<tbody>
<tr>
<td>stack </td>
<td>scope hierarchy </td>
</tr>
<tr>
<td>stack frame  </td>
<td>scope </td>
</tr>
<tr>
<td>push at entering block </td>
<td>push at entering block </td>
</tr>
<tr>
<td>pop at leaving block </td>
<td>destroy when minibatch completes </td>
</tr>
</tbody>
</table>


1. In traditional programs:

   - When the execution enters the left curly brace of a block, the runtime pushes a frame into the stack, where it realizes local variables.
   - After the execution leaves the right curly brace, the runtime pops the frame.
   - The maximum number of frames in the stack is the maximum depth of nested blocks.

1. In PaddlePaddle:

   - When the execution enters a block, PaddlePaddle adds a new scope, where it realizes variables.
   - PaddlePaddle doesn't pop a scope after the execution of the block because variables therein are used by the backward pass. So it has a stack forest known as a *scope hierarchy* (see the sketch after this list).
   - The height of the highest tree is the maximum depth of nested blocks.
   - After the processing of a minibatch, PaddlePaddle destroys the scope hierarchy.
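To make the scope hierarchy concrete, below is a minimal C++ sketch of a scope that keeps a non-owning pointer to its parent; the class and method names are illustrative and simplified from the real PaddlePaddle `Scope`. Because a child scope refers to its parent and is not destroyed when its block finishes, the scopes created during a minibatch form a tree that is torn down only after the backward pass completes.

```c++
#include <map>
#include <memory>
#include <string>

class Variable {};  // stand-in for a tensor or other variable payload

// A scope realizes the variables of one block. Scopes of nested blocks
// link to their parents, forming the scope hierarchy.
class Scope {
 public:
  explicit Scope(const Scope* parent = nullptr) : parent_(parent) {}

  // Create a variable local to this scope.
  Variable* NewVar(const std::string& name) {
    vars_[name].reset(new Variable);
    return vars_[name].get();
  }

  // Look up a variable, falling back to enclosing scopes, just like name
  // lookup in the nested blocks of a C++ program.
  Variable* FindVar(const std::string& name) const {
    auto it = vars_.find(name);
    if (it != vars_.end()) return it->second.get();
    return parent_ ? parent_->FindVar(name) : nullptr;
  }

 private:
  const Scope* parent_;  // non-owning; links this scope into the hierarchy
  std::map<std::string, std::unique_ptr<Variable>> vars_;
};
```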

## Use Blocks in C++ and PaddlePaddle Programs

Let us consolidate the discussion by presenting some examples.

### Blocks with `if-else` and `IfElseOp`

The following C++ program shows how blocks are used with the `if-else` structure:

```c++
namespace pd = paddle;

int x = 10;
int y = 1;
int z = 10;
bool cond = false;
int o1, o2;
if (cond) {
  int z = x + y;
  o1 = z;
  o2 = pd::layer::softmax(z);
} else {
  int d = pd::layer::fc(z);
  o1 = d;
  o2 = d + 1;
}
```

An equivalent PaddlePaddle program from the design doc of the [IfElseOp operator](../execution/if_else_op.md) is as follows:

```python
import paddle as pd

x = minibatch([10, 20, 30]) # shape=[None, 1]
y = var(1) # shape=[1], value=1
z = minibatch([10, 20, 30]) # shape=[None, 1]
cond = larger_than(x, 15) # [false, true, true]

ie = pd.ifelse()
with ie.true_block():
    d = pd.layer.add_scalar(x, y)
    ie.output(d, pd.layer.softmax(d))
with ie.false_block():
    d = pd.layer.fc(z)
    ie.output(d, d + 1)
o1, o2 = ie(cond)
```

In both examples, the left branch computes `x+y` and `softmax(x+y)`, and the right branch computes `fc(z)` and `fc(z)+1`.

The difference is that variables in the C++ program contain scalar values, whereas those in the PaddlePaddle program are mini-batches of instances.

### Blocks with `for` and `RNNOp`

The following RNN model in PaddlePaddle, from the [RNN design doc](../dynamic_rnn/rnn.md):

```python
x = sequence([10, 20, 30]) # shape=[None, 1]
m = var(0) # shape=[1]
W = var(0.314, param=true) # shape=[1]
U = var(0.375, param=true) # shape=[1]

rnn = pd.rnn()
with rnn.step():
  h = rnn.memory(init = m)
  h_prev = rnn.previous_memory(h)
  a = layer.fc(W, x)
  b = layer.fc(U, h_prev)
  s = pd.add(a, b)
  act = pd.sigmoid(s)
  rnn.update_memory(h, act)
  rnn.output(a, b)
o1, o2 = rnn()
```
has its equivalent C++ program as follows:

```c++
// `sigmoid` is assumed to be defined elsewhere.
float x[] = {10, 20, 30};
float m = 0;
float W = 0.314;
float U = 0.375;

const int n = sizeof(x) / sizeof(x[0]);
float mem[n + 1];
float o1[n + 1];
float o2[n + 1];
mem[0] = m;
for (int i = 1; i <= n; ++i) {
  float fc_out = W * x[i - 1];
  float hidden_out = U * mem[i - 1];
  float sum = fc_out + hidden_out;
  float act = sigmoid(sum);
  mem[i] = act;
  o1[i] = act;
  o2[i] = hidden_out;
}
```

## Compilation and Execution

Like TensorFlow, a PaddlePaddle program is written in Python. The first part describes a neural network as a protobuf message, and the rest executes the message for training or inference.

The generation of this protobuf message is similar to how a compiler generates a binary executable file. The execution of the message is similar to how the OS executes the binary file.

## The "Binary Executable File Format"

The definition of the protobuf message is as follows:

```protobuf
message BlockDesc {
  repeated VarDesc vars = 1;
  repeated OpDesc ops = 2;
}
```

The step net in the above RNN example would look like:

```
BlockDesc {
  vars = {
    VarDesc {...} // x
    VarDesc {...} // h
    VarDesc {...} // fc_out
    VarDesc {...} // hidden_out
    VarDesc {...} // sum
    VarDesc {...} // act
  }
  ops = {
    OpDesc {...} // matmul
    OpDesc {...} // add_two
    OpDesc {...} // sigmoid
  }
};
```

Also, the RNN operator in the above example is serialized into a protobuf message of type `OpDesc` and would look like:

```
OpDesc {
  inputs = {0} // the index of x in vars of BlockDesc above
  outputs = {5, 3} // indices of act and hidden_out in vars of BlockDesc above
  attrs {
    "states" : {1} // the index of h
    "step_net" : <above step net>
  }
};
```

This `OpDesc` value is in the `ops` field of the `BlockDesc` value representing the global block.


## The Compilation of Blocks

During the generation of the Protobuf message, the Block should store VarDesc (the Protobuf message that describes a variable) and OpDesc (the Protobuf message that describes an operator).

A block should keep its VarDesc in its own name scope, so that local variables do not affect the parent block's name scope.
A child block's name scope should inherit the parent's, so that an OpDesc in the child block can reference a VarDesc stored in the parent block. For example:

```python
a = pd.Variable(shape=[20, 20])
b = pd.fc(a, params=["fc.w", "fc.b"])

rnn = pd.create_rnn()
with rnn.stepnet():
    x = a.as_step_input()
    # reuse fc's parameter
    fc_without_b = pd.get_variable("fc.w")
    rnn.output(fc_without_b)

out = rnn()
```
The method `pd.get_variable` retrieves a Variable by name. The Variable may be stored in a parent block but retrieved in a child block, so a block should have a variable scope that supports inheritance.

In compiler design, the symbol table is a data structure created and maintained by compilers to store information about the occurrence of various entities such as variable names, function names, classes, etc.

To store the definition of variables and operators, we define a C++ class `SymbolTable`, like the one used in compilers.

`SymbolTable` can do the following:

- store the definitions (some names and attributes) of variables and operators,
- verify if a variable was declared,
- make it possible to implement type checking (offer Protobuf message pointers to `InferShape` handlers).


```c++
// Information in SymbolTable is enough to trace the dependency graph. So maybe
// an Eval() interface that takes a SymbolTable is enough.
class SymbolTable {
 public:
  SymbolTable(SymbolTable* parent) : parent_(parent) {}

  OpDesc* NewOp(const string& name="");

  // TODO determine whether name is generated by python or C++.
  // Currently assume that a unique name will be generated by C++ if the
  // argument name is left default.
  VarDesc* Var(const string& name="");

  // find a VarDesc by name; if recursive is true, search the parent's
  // SymbolTable recursively.
  // this interface is introduced to support InferShape: find the protobuf
  // messages of variables and operators, and pass pointers into InferShape.
  //
  // NOTE maybe some C++ classes such as VarDescBuilder and OpDescBuilder should
  // be proposed and embedded into pybind to enable python operation on C++ pointers.
  VarDesc* FindVar(const string& name, bool recursive=true);

  OpDesc* FindOp(const string& name);

  BlockDesc Compile() const;

 private:
  SymbolTable* parent_;

  map<string, OpDesc> ops_;
  map<string, VarDesc> vars_;
};
```
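A hypothetical usage of this interface, assuming an implementation of the declarations above, could look like the following sketch; it mirrors the `fc.w` reuse in the earlier Python example:

```c++
SymbolTable global(nullptr);            // symbols of the global block
global.Var("a");
VarDesc* w = global.Var("fc.w");        // the fc parameter lives here

SymbolTable step(&global);              // symbols of the RNN step block
step.Var("x");                          // step-local variable
VarDesc* found = step.FindVar("fc.w");  // recursive lookup reaches the parent
// found == w, so the step net reuses the parameter declared outside.

BlockDesc step_desc = step.Compile();   // serialize the step block
```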

After all the descriptions of variables and operators are added into the SymbolTable,
the block has enough information to run.

The `Block` class takes a `BlockDesc` as input, and provides `Run` and `InferShape` functions.


```c++
class Block : public OperatorBase {
 public:
  explicit Block(const BlockDesc& desc) : desc_(desc) {}

  void InferShape(const framework::Scope& scope) const override {
    if (!symbols_ready_) {
      CreateVariables(scope);
      CreateOperators();
    }
    // should run InferShape first.
    for (auto& op : runtime_table_.ops()) {
      op->InferShape(scope);
    }
  }

  void Run(const framework::Scope& scope,
           const platform::Place& place) const override {
    PADDLE_ENFORCE(symbols_ready_, "operators and variables should be created first.");
    for (auto& op : runtime_table_.ops()) {
      op->Run(scope, place);
    }
  }

  void CreateVariables(const framework::Scope& scope);
  void CreateOperators();

  // some other necessary interfaces of NetOp are listed below
  // ...

 private:
  BlockDesc desc_;
  bool symbols_ready_{false};
  // runtime_table_, which holds the created operators, is omitted from this sketch.
};
```

## The Execution of Blocks

`Block` inherits from `OperatorBase`, which has a `Run` method.
`Block`'s `Run` method runs its operators sequentially.

There is another important interface called `Eval`, which takes some arguments called targets, generates a minimal graph that treats the targets as end points, and creates a new Block from it. After `Run`, `Eval` fetches the latest values of the targets and returns them.

The definition of `Eval` is as follows:

```c++
// Prune a block description by targets, using the corresponding dependency graph;
// return a new BlockDesc with the minimal number of operators.
// NOTE: The return type is not a Block but the block's description, so that it can be distributed
// to a cluster.
BlockDesc Prune(const BlockDesc& desc, vector<string> targets);

void Block::Eval(const vector<string>& targets,
                 const framework::Scope& scope,
                 const platform::Place& place) {
  BlockDesc min_desc = Prune(desc_, targets);
  Block min_block(min_desc);
  min_block.Run(scope, place);
}
```
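The doc leaves `Prune` undeclared beyond its signature. Below is a minimal sketch of one possible implementation, using simplified stand-ins for `OpDesc` and `BlockDesc` (the real protobuf messages carry more information): walk the operators backwards from the targets, keep an operator if it produces a needed variable, and mark its inputs as needed in turn.

```c++
#include <set>
#include <string>
#include <vector>

struct OpDesc {
  std::vector<std::string> inputs;
  std::vector<std::string> outputs;
};

struct BlockDesc {
  std::vector<OpDesc> ops;  // in topological (program) order
};

BlockDesc Prune(const BlockDesc& desc, std::vector<std::string> targets) {
  std::set<std::string> needed(targets.begin(), targets.end());
  std::vector<OpDesc> kept;  // collected in reverse program order
  for (auto it = desc.ops.rbegin(); it != desc.ops.rend(); ++it) {
    bool produces_needed = false;
    for (const auto& out : it->outputs) {
      if (needed.count(out)) produces_needed = true;
    }
    if (produces_needed) {
      kept.push_back(*it);
      // the inputs of a kept op must be produced too
      needed.insert(it->inputs.begin(), it->inputs.end());
    }
  }
  BlockDesc min_desc;
  min_desc.ops.assign(kept.rbegin(), kept.rend());  // restore program order
  return min_desc;
}
```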