# Design Doc: Block and Scope

## The Representation of Computation

Both deep learning systems and programming languages help users describe computation procedures.  These systems use various representations of computation:

- Caffe, Torch, and Paddle: sequences of layers.
- TensorFlow, Caffe2, MXNet: graphs of operators.
- PaddlePaddle: nested blocks, like C++ and Java programs.

## Block in Programming Languages and Deep Learning

In programming languages, a block is a pair of curly braces that encloses local variable definitions and a sequence of instructions or operators.

Blocks work with control flow structures like `if`, `else`, and `for`, which have equivalents in deep learning:

| programming languages | PaddlePaddle          |
|-----------------------|-----------------------|
| for, while loop       | RNN, WhileOp          |
| if, if-else, switch   | IfElseOp, SwitchOp    |
| sequential execution  | a sequence of layers  |

A key difference is that a C++ program describes a one-pass computation, whereas a deep learning program describes both the forward and backward passes.

## Stack Frames and the Scope Hierarchy

The existence of the backward pass makes the execution of a block in PaddlePaddle different from that in traditional programs:

| programming languages | PaddlePaddle                  |
|-----------------------|-------------------------------|
| stack                 | scope hierarchy               |
| stack frame           | scope                         |
| push at entering block| push at entering block        |
| pop at leaving block  | destroy when the minibatch completes |

1. In traditional programs:

   - When the execution enters the left curly brace of a block, the runtime pushes a frame into the stack, where it realizes local variables.
   - After the execution leaves the right curly brace, the runtime pops the frame.
   - The maximum number of frames in the stack is the maximum depth of nested blocks.

1. In PaddlePaddle:

   - When the execution enters a block, PaddlePaddle adds a new scope, where it realizes variables.
   - PaddlePaddle doesn't pop a scope after the execution of the block, because the variables therein are needed by the backward pass.  So instead of a single stack, it keeps a forest of them, known as a *scope hierarchy* (sketched after this list).
   - The height of the highest tree is the maximum depth of nested blocks.
   - After processing a minibatch, PaddlePaddle destroys the scope hierarchy.
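
Below is a minimal C++ sketch of such a scope hierarchy.  The class and method names (`Scope`, `NewChildScope`, `NewVar`, `FindVar`) are illustrative assumptions made for this document, not the actual PaddlePaddle interfaces:

```c++
#include <map>
#include <memory>
#include <string>
#include <vector>

class Variable { /* holds a tensor; definition omitted */ };

// A scope realizes the variables of one block.  It is not popped when the
// block finishes; it stays alive until the minibatch completes, so that the
// backward pass can still read its variables.
class Scope {
 public:
  explicit Scope(Scope* parent = nullptr) : parent_(parent) {}

  // Called when the execution enters a block.
  Scope* NewChildScope() {
    children_.emplace_back(new Scope(this));
    return children_.back().get();
  }

  Variable* NewVar(const std::string& name) {
    auto& var = vars_[name];
    if (var == nullptr) var.reset(new Variable);
    return var.get();
  }

  // Looks up a variable locally first, then in the enclosing scopes, just as
  // a block sees the variables of its enclosing blocks.
  Variable* FindVar(const std::string& name) const {
    auto it = vars_.find(name);
    if (it != vars_.end()) return it->second.get();
    return parent_ == nullptr ? nullptr : parent_->FindVar(name);
  }

 private:
  Scope* parent_;
  std::map<std::string, std::unique_ptr<Variable>> vars_;
  std::vector<std::unique_ptr<Scope>> children_;  // kept until the minibatch ends
};
```

Destroying the root scope at the end of the minibatch releases the whole hierarchy at once.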

## Using Blocks in C++ and PaddlePaddle Programs

Let us consolidate the discussion by presenting some examples.

### Blocks with `if-else` and `IfElseOp`

The following C++ program shows how blocks are used with the `if-else` structure:

```c++
int x = 10;
int y = 20;
int out;
bool cond = false;
if (cond) {
  int z = x + y;
  out = softmax(z);
} else {
  int z = fc(x);
  out = z;
}
```

An equivalent PaddlePaddle program from the design doc of the [IfElseOp operator](./if_else_op.md) is as follows:

```python
import paddle as pd

x = var(10)
y = var(20)
cond = var(False)
ie = pd.create_ifelseop(inputs=[x], output_num=1)
with ie.true_block():
    x = ie.inputs(True, 0)
    z = operator.add(x, y)
    ie.set_output(True, 0, operator.softmax(z))
with ie.false_block():
    x = ie.inputs(False, 0)
    z = layer.fc(x)
    ie.set_output(False, 0, z)
out = ie(cond)
```

In both examples, the true branch computes `softmax(x+y)` and the false branch computes `fc(x)`.

A difference is that variables in the C++ program contain scalar values, whereas those in the PaddlePaddle program are mini-batches of instances.  The `ie.inputs(True, 0)` invocation returns the instances in the 0-th input, `x`, that correspond to true values in `cond`, as the local variable `x`; `ie.inputs(False, 0)` returns the instances corresponding to false values.
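
To make this splitting concrete, the following C++ sketch shows one way to gather the instances selected by a boolean condition.  `GatherByCond` and the flat `std::vector<float>` mini-batch layout are illustrative assumptions, not PaddlePaddle's actual kernels:

```c++
#include <vector>

// Selects the rows of a mini-batch whose entry in `cond` equals `take_true`.
// Each row (instance) has `width` elements.
std::vector<float> GatherByCond(const std::vector<float>& batch,
                                const std::vector<bool>& cond,
                                int width, bool take_true) {
  std::vector<float> out;
  for (size_t i = 0; i < cond.size(); ++i) {
    if (cond[i] == take_true) {
      out.insert(out.end(), batch.begin() + i * width,
                 batch.begin() + (i + 1) * width);
    }
  }
  return out;
}
```

The true block then runs only on the gathered true instances, and a matching scatter step merges the outputs of both branches back into one mini-batch.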

### Blocks with `for` and `RNNOp`

The following RNN model from the [RNN design doc](./rnn.md)

```python
x = sequence([10, 20, 30])
m = var(0)
W = tensor()
U = tensor()

rnn = create_rnn(inputs=[x])
with rnn.stepnet() as net:
  x = net.set_inputs(0)
  h = net.add_memory(init=m)
  fc_out = pd.matmul(W, x)
  hidden_out = pd.matmul(U, h.pre(n=1))
  sum = pd.add_two(fc_out, hidden_out)
  act = pd.sigmoid(sum)
  h.update(act)                       # update memory with act
  net.set_outputs(0, act, hidden_out) # two outputs

o1, o2 = rnn()
print o1, o2
```

has the following equivalent C++ program:

```c++
int x[] = {10, 20, 30};
int m = 0;
int W = some_value();
int U = some_other_value();

int mem[sizeof(x) / sizeof(x[0]) + 1];
int o1[sizeof(x) / sizeof(x[0]) + 1];
int o2[sizeof(x) / sizeof(x[0]) + 1];
for (int i = 1; i <= sizeof(x)/sizeof(x[0]); ++i) {
  int input = x[i-1];
  if (i == 1) mem[0] = m;
  int fc_out = W * input;
  int hidden_out = U * mem[i-1];
  int sum = fc_out + hidden_out;
  int act = sigmoid(sum);
  mem[i] = act;
  o1[i] = act;
  o2[i] = hidden_out;
}

print_array(o1);
print_array(o2);
```


## Compilation and Execution

Like TensorFlow programs, a PaddlePaddle program is written in Python.  The first part describes a neural network as a protobuf message, and the second part executes the message for training or inference.

The generation of this protobuf message is like how a compiler generates a binary executable file.  The execution of the message is like how the OS executes the binary file.

## The "Binary Executable File Format"

The definition of the protobuf message is as follows:

```protobuf
message BlockDesc {
  repeated VarDesc vars = 1;
  repeated OpDesc ops = 2;
}
```
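
For completeness, illustrative stubs of the `VarDesc` and `OpDesc` messages referenced above might look like the following; the field names and numbers are assumptions made for this sketch, not the actual definitions:

```protobuf
message VarDesc {
  required string name = 1;
  repeated int64 shape = 2;   // other attributes elided
}

message OpDesc {
  required string type = 1;   // e.g. "matmul", "add_two"
  repeated int32 inputs = 2;  // indices into the enclosing block's vars
  repeated int32 outputs = 3; // indices into the enclosing block's vars
  // operator attributes, such as an RNN's "step_net", are elided
}
```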

The step net in the above RNN example would look like:

```
BlockDesc {
  vars = {
    VarDesc {...} // x
    VarDesc {...} // h
    VarDesc {...} // fc_out
    VarDesc {...} // hidden_out
    VarDesc {...} // sum
    VarDesc {...} // act
  }
  ops = {
    OpDesc {...} // matmul
    OpDesc {...} // add_two
    OpDesc {...} // sigmoid
  }
};
```

Also, the RNN operator in the above example is serialized into a protobuf message of type `OpDesc` and would look like:

```
OpDesc {
  inputs = {0} // the index of x
  outputs = {5, 3} // indices of act and hidden_out
  attrs {
    "memories" : {1} // the index of h
    "step_net" : <above step net>
  }
};
```

This `OpDesc` value is in the `ops` field of the `BlockDesc` value representing the global block.
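
Putting the two pieces together, the global block might be serialized roughly as follows.  This is an illustrative sketch that elides most fields:

```
BlockDesc {           // the global block
  vars = {
    VarDesc {...}     // x
    VarDesc {...}     // m, W, U, o1, o2, ...
  }
  ops = {
    OpDesc {          // the RNN operator
      attrs {
        "step_net" : BlockDesc {...}  // the step net shown above
      }
    }
  }
};
```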


## The Compilation of Blocks

During the generation of the Protobuf message, the Block should store VarDesc (the Protobuf message that describes a Variable) and OpDesc (the Protobuf message that describes an Operator).

The VarDescs in a block should live in the block's own name scope, so that local variables do not affect the parent block's name scope.
A child block's name scope should inherit from its parent's, so that an OpDesc in the child block can reference a VarDesc stored in the parent block.  For example:

```python
a = pd.Variable(shape=[20, 20])
b = pd.fc(a, params=["fc.w", "fc.b"])

rnn = pd.create_rnn()
with rnn.stepnet() as net:
    x = net.set_inputs(a)
    # reuse fc's parameter
    fc_without_b = pd.get_variable("fc.w")
    net.set_outputs(fc_without_b)

out = rnn()
```
The method `pd.get_variable` retrieves a Variable by name.  A Variable may be stored in a parent block but retrieved from a child block, so a block should have a variable scope that supports inheritance.

In compiler design, the symbol table is a data structure created and maintained by compilers to store information about the occurrence of various entities such as variable names, function names, classes, etc.

To store the definition of variables and operators, we define a C++ class `SymbolTable`, like the one used in compilers.

`SymbolTable` can do the following:

- store the definitions (names and attributes) of variables and operators;
- verify whether a variable was declared;
- make type checking possible by offering Protobuf message pointers to `InferShape` handlers.


```c++
// The information in SymbolTable is enough to trace the dependency graph, so
// maybe an Eval() interface that takes a SymbolTable is enough.
class SymbolTable {
 public:
  SymbolTable(SymbolTable* parent) : parent_(parent) {}

  OpDesc* NewOp(const string& name="");

  // TODO: determine whether the name is generated by Python or C++.  For now,
  // assume that C++ generates a unique name when the argument `name` is left
  // default.
  VarDesc* NewVar(const string& name="");

  // Find a VarDesc by name.  If `recursive` is true, search the parent's
  // SymbolTable recursively.
  // This interface is introduced to support InferShape: it finds the protobuf
  // messages of variables and operators so that pointers can be passed into
  // InferShape.
  //
  // NOTE: maybe some C++ classes such as VarDescBuilder and OpDescBuilder
  // should be proposed and embedded into pybind, so that Python can operate
  // on C++ pointers.
  VarDesc* FindVar(const string& name, bool recursive=true);

  OpDesc* FindOp(const string& name);

  BlockDesc Compile() const;

 private:
  SymbolTable* parent_;

  map<string, OpDesc> ops_;
  map<string, VarDesc> vars_;
};
```
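
As a usage sketch, building the step net of the above RNN example with this interface might look like the following; `BuildRNNStepNet` and the calling code are hypothetical:

```c++
void BuildRNNStepNet() {
  SymbolTable global(nullptr);         // symbol table of the global block
  global.NewVar("x");

  SymbolTable step(&global);           // the step net's table inherits names
  step.NewVar("h");
  step.NewVar("fc_out");
  step.NewOp("matmul");

  // "x" is not defined in the step net, but the recursive lookup
  // finds it in the parent's SymbolTable.
  VarDesc* x = step.FindVar("x");      // found in `global`
  PADDLE_ENFORCE(x != nullptr, "x should be visible in the step net");

  BlockDesc step_net = step.Compile(); // serialize the step net block
}
```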

After all the descriptions of variables and operators have been added to the SymbolTable,
the block has enough information to run.

The `Block` class takes a `BlockDesc` as input and provides the `Run` and `InferShape` functions.


```c++
class Block : public OperatorBase {
 public:
  explicit Block(const BlockDesc& desc) : desc_(desc) {}

  void InferShape(const framework::Scope& scope) const override {
    if (!symbols_ready_) {
      CreateVariables(scope);
      CreateOperators();
      symbols_ready_ = true;
    }
    // InferShape should run before Run.
    for (auto& op : runtime_table_.ops()) {
      op->InferShape(scope);
    }
  }

  void Run(const framework::Scope& scope,
           const platform::DeviceContext& dev_ctx) const override {
    PADDLE_ENFORCE(symbols_ready_, "operators and variables should be created first.");
    for (auto& op : runtime_table_.ops()) {
      op->Run(scope, dev_ctx);
    }
  }

  void CreateVariables(const framework::Scope& scope) const;
  void CreateOperators() const;

  // some other necessary interfaces of NetOp are listed below
  // ...

 private:
  BlockDesc desc_;
  mutable bool symbols_ready_{false};
  // RuntimeTable holds the instantiated operators; its definition is omitted.
  mutable RuntimeTable runtime_table_;
};
```

## The Execution of Blocks

Block inherits from OperatorBase, which has a `Run` method.
Block's `Run` method runs its operators sequentially.

There is another important interface called `Eval`, which takes some arguments called targets.  `Eval` generates a minimal graph that takes the targets as end points, creates a new Block from it, runs the block, and then fetches the latest values of the targets and returns them.

The definition of Eval is as follows:

```c++
// Prune a block description by the targets, using the corresponding dependency
// graph; return a new BlockDesc with a minimal number of operators.
// NOTE: we return not a Block but the block's description, so that it can be
// distributed to a cluster.
BlockDesc Prune(const BlockDesc& desc, vector<string> targets);

void Block::Eval(const vector<string>& targets,
                 const framework::Scope& scope,
                 const platform::DeviceContext& dev_ctx) {
  BlockDesc min_desc = Prune(desc_, targets);
  Block min_block(min_desc);
  min_block.Run(scope, dev_ctx);
}
```
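
A minimal sketch of how `Prune` could work follows, assuming simplified descriptions in which an op lists its input and output variable names as strings; the real `OpDesc` is a protobuf message, so `SimpleOpDesc` is an illustrative stand-in:

```c++
#include <set>
#include <string>
#include <vector>

struct SimpleOpDesc {
  std::vector<std::string> inputs;
  std::vector<std::string> outputs;
};

// Walk the ops backwards from the targets: an op is kept iff it produces a
// variable that a target or an already-kept op consumes.
std::vector<SimpleOpDesc> Prune(const std::vector<SimpleOpDesc>& ops,
                                const std::vector<std::string>& targets) {
  std::set<std::string> needed(targets.begin(), targets.end());
  std::vector<SimpleOpDesc> kept;
  for (auto it = ops.rbegin(); it != ops.rend(); ++it) {
    bool produces_needed = false;
    for (const auto& out : it->outputs) {
      if (needed.count(out)) produces_needed = true;
    }
    if (produces_needed) {
      kept.push_back(*it);
      needed.insert(it->inputs.begin(), it->inputs.end());
    }
  }
  return {kept.rbegin(), kept.rend()};  // restore the original order
}
```

Because `Prune` returns a description rather than an instantiated Block, the pruned program can be shipped to a cluster before being run.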