Both deep learning systems and programming languages help users describe computation procedures. These systems use various representations of computation:
- Caffe, Torch, and Paddle: sequences of layers.
- TensorFlow, Caffe2, MXNet: graphs of operators.
- PaddlePaddle: nested blocks, like C++ and Java programs.
## Block in Programming Languages and Deep Learning
In programming languages, a block is a pair of curly braces that encloses local variable definitions and a sequence of instructions or operators.
Blocks work with control flow structures like `if`, `else`, and `for`, which have equivalents in deep learning:
...
A key difference is that a C++ program describes a one-pass computation, whereas a deep learning program describes both the forward and backward passes.
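A plain-Python sketch of what "two passes" means (illustrative names, not PaddlePaddle API): the backward pass reuses the intermediate value `s` that the forward pass produced, which is why block-local variables must outlive the forward pass:

```python
import math

def forward(x, w):
    s = x * w            # intermediate value produced by the forward pass
    y = math.tanh(s)
    return y, s

def backward(x, s, dy):
    ds = dy * (1.0 - math.tanh(s) ** 2)  # dL/ds via the tanh derivative
    return ds * x                        # dL/dw reuses both x and s

y, s = forward(2.0, 0.5)
dw = backward(2.0, s, dy=1.0)
print(y, dw)
```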
## Stack Frames and the Scope Hierarchy
The existence of the backward pass makes the execution of a block in PaddlePaddle different from that in traditional programs:
| traditional programs | PaddlePaddle |
|----------------------|--------------|
| push at entering block | push at entering block |
| pop at leaving block | destroy when the minibatch completes |
1. In traditional programs:
...
1. In PaddlePaddle:
- When the execution enters a block, PaddlePaddle adds a new scope, where it realizes variables.
- PaddlePaddle doesn't pop a scope after the execution of the block, because variables therein are needed by the backward pass. So it has a stack forest known as a *scope hierarchy*.
- The height of the highest tree is the maximum depth of nested blocks.
- After processing a minibatch, PaddlePaddle destroys the scope hierarchy, as sketched below.
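A minimal Python sketch of this behavior, assuming a hypothetical `Scope` class (not PaddlePaddle's actual API):

```python
class Scope:
    """A scope holds variables and points at its parent, forming a tree."""
    def __init__(self, parent=None):
        self.vars = {}
        self.parent = parent

    def new_var(self, name, value):
        self.vars[name] = value

    def find_var(self, name):
        # look locally first, then climb toward the root of the hierarchy
        if name in self.vars:
            return self.vars[name]
        return self.parent.find_var(name) if self.parent else None

root = Scope()               # scope of the global block
inner = Scope(parent=root)   # entering a nested block pushes a new scope
inner.new_var("h", 0.0)
# the forward pass finishes, but `inner` is NOT popped, so the
# backward pass can still read "h" through the scope hierarchy
assert inner.find_var("h") == 0.0
del root, inner              # destroyed only after the minibatch completes
```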
## Use Blocks in C++ and PaddlePaddle Programs
...
```python
with ie.false_block():
    ...
o1, o2 = ie(cond)
```
In both examples, the left branch computes `x+y` and `softmax(x+y)`, and the right branch computes `fc(x)` and `x+1`.
A difference is that variables in the C++ program contain scalar values, whereas those in the PaddlePaddle programs are mini-batches of instances. The `ie.input(true, 0)` invocation returns the instances of the 0-th input, `x`, that correspond to true values in `cond` as the local variable `x`, whereas `ie.input(false, 0)` returns the instances corresponding to false values.
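A plain-Python sketch of that partitioning (hypothetical code, not the real `IfElseOp` kernel): instances are routed to one branch by `cond`, and the branch outputs are merged back into minibatch order:

```python
def if_else(cond, x, true_fn, false_fn):
    """cond and x are parallel lists; each instance takes exactly one branch."""
    true_in = [xi for c, xi in zip(cond, x) if c]
    false_in = [xi for c, xi in zip(cond, x) if not c]
    true_out, false_out = true_fn(true_in), false_fn(false_in)
    # merge the two branch outputs back into the original minibatch order
    out, ti, fi = [], iter(true_out), iter(false_out)
    for c in cond:
        out.append(next(ti) if c else next(fi))
    return out

cond = [True, False, True]
print(if_else(cond, [1, 2, 3],
              lambda batch: [v + 10 for v in batch],   # "true block"
              lambda batch: [v * -1 for v in batch]))  # "false block"
# prints [11, -2, 13]
```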
### Blocks with `for` and `RNNOp`
The following RNN model is from the [RNN design doc](./rnn.md):
```python
x = sequence([10, 20, 30]) # shape=[None, 1]
...
U = var(0.375, param=true) # shape=[1]
rnn = pd.rnn()
with rnn.step():
    h = rnn.memory(init=m)
    h_prev = rnn.previous_memory(h)
    a = layer.fc(W, x)
    b = layer.fc(U, h_prev)
    s = pd.add(a, b)
    act = pd.sigmoid(s)
    rnn.update_memory(h, act)
```
...
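Conceptually, the step block above unrolls into an ordinary loop over the sequence. A plain-Python sketch with scalars standing in for the mini-batch tensors (`layer.fc` with a one-element weight degenerates to a multiply; the bias is omitted for brevity):

```python
import math

x = [10, 20, 30]                # the input sequence
m, W, U = 0.0, 0.314, 0.375     # initial memory and the two weights

h = m                           # rnn.memory(init=m)
hidden = []
for x_t in x:                   # one iteration per step of the sequence
    a = W * x_t                 # layer.fc(W, x)
    b = U * h                   # layer.fc(U, h_prev)
    h = 1.0 / (1.0 + math.exp(-(a + b)))  # sigmoid(a + b), then update_memory
    hidden.append(h)
print(hidden)
```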
## Compilation and Execution
Like a TensorFlow program, a PaddlePaddle program is written in Python. The first part describes a neural network as a protobuf message, and the rest executes the message for training or inference.
The generation of this protobuf message is similar to how a compiler generates a binary executable file. The execution of the message is similar to how the OS executes the binary file.
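A sketch of this two-phase workflow. JSON stands in for protobuf, and the small interpreter stands in for the C++ runtime; none of these names are PaddlePaddle's real API:

```python
import json

# Phase 1, "compilation": the Python front-end only *describes* the network.
program_desc = {
    "blocks": [{
        "vars": ["x", "w", "y"],
        "ops": [{"type": "mul", "inputs": ["x", "w"], "outputs": ["y"]}],
    }]
}
binary = json.dumps(program_desc)   # stands in for protobuf serialization

# Phase 2, "execution": an interpreter walks the description,
# much like an OS loading and running a binary file.
kernels = {"mul": lambda a, b: a * b}
scope = {"x": 3.0, "w": 2.0}
for op in json.loads(binary)["blocks"][0]["ops"]:
    args = [scope[name] for name in op["inputs"]]
    scope[op["outputs"][0]] = kernels[op["type"]](*args)
print(scope["y"])  # 6.0
```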
## The "Binary Executable File Format"
...
Also, the RNN operator in the above example is serialized into a protobuf message of type `OpDesc`:
```
OpDesc {
  inputs = {0}      // the index of x in vars of BlockDesc above
  outputs = {5, 3}  // indices of act and hidden_out in vars of BlockDesc above
  attrs {
    "memories" : {1}  // the index of h
    "step_net" : <above step net>
  }
};
```
...
This `OpDesc` value is in the `ops` field of the `BlockDesc` value representing the global block.
During the generation of the Protobuf message, a Block should store VarDesc (the Protobuf message that describes a Variable) and OpDesc (the Protobuf message that describes an Operator).
A VarDesc in a block should have its own name scope so that local variables don't affect the parent block's name scope.
A child block's name scope should inherit the parent's, so that an OpDesc in the child block can reference a VarDesc stored in a parent block. For example:
```python
a = pd.Variable(shape=[20, 20])
b = pd.fc(a, params=["fc.w", "fc.b"])
rnn = pd.create_rnn()
with rnn.stepnet():
    x = a.as_step_input()
    # reuse fc's parameter
    fc_without_b = pd.get_variable("fc.w")
    ...
out = rnn()
```
The method `pd.get_variable` retrieves a Variable by name. A Variable may be stored in a parent block but retrieved in a child block, so a block should have a variable scope that supports inheritance.
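A Python sketch of such an inheriting variable scope (the `NameScope` class here is hypothetical; PaddlePaddle implements this in C++, as described next):

```python
class NameScope:
    """Compile-time name scope; a child scope can see the parent's variables."""
    def __init__(self, parent=None):
        self.var_descs = {}
        self.parent = parent

    def new_var(self, name, shape):
        self.var_descs[name] = {"name": name, "shape": shape}

    def get_variable(self, name):
        scope = self
        while scope is not None:           # walk up the scope chain
            if name in scope.var_descs:
                return scope.var_descs[name]
            scope = scope.parent
        raise KeyError("undeclared variable: " + name)

global_scope = NameScope()
global_scope.new_var("fc.w", shape=[20, 20])
step_scope = NameScope(parent=global_scope)  # the RNN step block's scope
print(step_scope.get_variable("fc.w"))       # found through the parent scope
```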
In compiler design, the symbol table is a data structure created and maintained by compilers to store information about the occurrence of various entities such as variable names, function names, classes, etc.
To store the definition of variables and operators, we define a C++ class `SymbolTable`, like the one used in compilers.
`SymbolTable` can do the following:
- store the definitions (some names and attributes) of variables and operators,
- verify whether a variable was declared,
- make it possible to implement type checking (offer Protobuf message pointers to `InferShape` handlers).
```c++
class SymbolTable {
 public:
  ...

  OpDesc* NewOp(const string& name="");

  // TODO: determine whether name is generated by Python or C++.
  // Currently assume that a unique name will be generated by C++ if the
  // argument name is left default.
  VarDesc* NewVar(const string& name="");

  // find a VarDesc by name; if recursive is true, find parent's SymbolTable
  // recursively.
  // this interface is introduced to support InferShape: find the protobuf
  // messages of variables and operators, and pass pointers into InferShape.
  //
  // NOTE: maybe some C++ classes such as VarDescBuilder and OpDescBuilder should
  // be proposed and embedded into pybind to enable Python to operate on C++ pointers.
  VarDesc* FindVar(const string& name, bool recursive=true);

  // some other necessary interfaces of NetOp are listed below
  // ...

 private:
  ...
};
```
Block inherits from OperatorBase, which has a `Run` method, and Block's `Run` method runs its operators sequentially.
There is another important interface called `Eval`, which takes some arguments called targets, generates a minimal graph that treats the targets as end points, and creates a new Block. After `Run`, `Eval` fetches the latest values and returns the targets.
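The pruning idea behind `Eval` can be sketched in a few lines of Python (a hypothetical representation in which each operator is an (inputs, outputs) pair):

```python
def prune(ops, targets):
    """Keep only the operators the targets transitively depend on."""
    needed = set(targets)
    kept = []
    for inputs, outputs in reversed(ops):   # scan the block backward
        if needed & set(outputs):           # this op produces something needed
            kept.append((inputs, outputs))
            needed |= set(inputs)           # so its inputs become needed too
    return list(reversed(kept))

ops = [(["x", "w"], ["a"]),
       (["a"], ["b"]),
       (["x"], ["c"])]                      # dead code if the target is "b"
print(prune(ops, targets=["b"]))
# [(['x', 'w'], ['a']), (['a'], ['b'])], the op producing "c" is dropped
```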
The definition of `Eval` is as follows:
```c++
// clean a block description by targets using the corresponding dependency graph.
// return a new BlockDesc with minimal number of operators.
// NOTE: the return type is not a Block but the block's description, so that it can be distributed
// ...
```