Commit 4b86b49e authored by Q qiaolongfei


Merge branch 'fix-build-activation_op' of ssh://github.com/jacquesqiao/Paddle into add-async-listen-and-serv-op
...@@ -49,9 +49,9 @@ In the new design, we propose to create a new operation for averaging parameter
- the optimizer
- the window_size to keep the updates
The ParameterAverageOptimizer op can be like any other operator with its own CPU/GPU implementation either using Eigen or separate CPU and GPU kernels. As the initial implementation, we can implement the kernel using Eigen following the abstraction pattern implemented for [Operators](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/rmsprop_op.h). We also want to support the case when the Trainer/Optimizer runs on the GPU while ParameterAverageOptimizer runs on a CPU.
The idea of building an op for averaging is in sync with the refactored PaddlePaddle philosophy of using operators to represent any computation unit. The way the op will be added to the computation graph will be decided by the [layer functions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md#layer-function) in the Python API.
### Python API implementation for ParameterAverageOptimizer
...@@ -59,8 +59,8 @@ Based on Polyak and Juditsky (1992), we can generalize the averaging of updates
- Any optimizer (RMSProp, AdaGrad, etc.)
- A window size. The op keeps accumulating updated parameter values over a window of N batches and takes an average. The averaged value is moved to a buffer when the window is full to avoid loss of precision (see the sketch below).
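A minimal sketch of this windowed accumulation, assuming hypothetical names and plain `std::vector`s standing in for tensors:
```cpp
#include <vector>

// Hypothetical sketch of the windowed averaging described above; the real op
// works on tensors, this uses plain vectors for clarity.
class AverageWindow {
 public:
  AverageWindow(size_t num_params, int window_size)
      : sum_(num_params, 0.0f), buffer_(num_params, 0.0f),
        window_size_(window_size) {}

  void Accumulate(const std::vector<float>& params) {
    for (size_t i = 0; i < params.size(); ++i) sum_[i] += params[i];
    if (++count_ == window_size_) {
      // Window full: fold this window's average into the long-run buffer
      // and reset the accumulator, avoiding an ever-growing sum.
      for (size_t i = 0; i < sum_.size(); ++i) {
        float window_avg = sum_[i] / window_size_;
        buffer_[i] += (window_avg - buffer_[i]) / (num_windows_ + 1);
        sum_[i] = 0.0f;
      }
      ++num_windows_;
      count_ = 0;
    }
  }

  const std::vector<float>& Averaged() const { return buffer_; }

 private:
  std::vector<float> sum_, buffer_;
  int window_size_, count_ = 0, num_windows_ = 0;
};
```
Folding each full window into a running mean keeps the accumulator bounded, which is the precision motivation above.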
Using the ParameterAverageOptimizer op, any user can add the operation to their computation graphs. However, this will require a lot of lines of code and we should design Python APIs that support averaging. As per the PaddlePaddle [Python API design](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md), the layer functions are responsible for creating operators, operator parameters and variables. Since ParameterAverageOptimizer will be an operator, it makes sense to create it in the layer functions.
We will have a wrapper written in Python that will support the functionality and implement the actual core computation in the C++ core, as we have done for other [Optimizers](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/rmsprop_op.cc).
#### Creation of the ParameterAverageOptimizer operator
There are two ways for creating the ParameterAverageOptimizer op:
...@@ -71,4 +71,4 @@ The proposal is to add the op immediately while building the computation graph.
#### High-level API
In PaddlePaddle Python API, users will primarily rely on [layer functions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md#layer-function) to create neural network layers. Hence, we also need to provide parameter average functionality in layer functions.
...@@ -113,7 +113,7 @@ if (cond) {
```
An equivalent PaddlePaddle program from the design doc of the [IfElseOp operator](../execution/if_else_op.md) is as follows:
```python
import paddle as pd
...@@ -140,7 +140,7 @@ The difference is that variables in the C++ program contain scalar values, where
### Blocks with `for` and `RNNOp`
The following RNN model in PaddlePaddle from the [RNN design doc](../dynamic_rnn/rnn.md):
```python
x = sequence([10, 20, 30]) # shape=[None, 1]
......
# Executor Design Doc
## Motivation
In [fluid](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/motivation/fluid.md), we encourage the user to use deep learning programming paradigms to describe the training process. When the user-written Python program is executed, it will first create a protobuf message
[`ProgramDesc`](https://github.com/PaddlePaddle/Paddle/blob/a91efdde6910ce92a78e3aa7157412c4c88d9ee8/paddle/framework/framework.proto#L145) that describes the process and is conceptually like an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree).
The executor runs the `ProgramDesc` like an interpreter. `ProgramDesc` contains the intrinsics (operators in this case) and the variables that will be used; the executor explicitly executes the stored precompiled code.
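As a rough, self-contained illustration of this interpreter model (simplified stand-ins, not Paddle's actual classes), an executor just walks the stored operator list in order against a scope of variables:
```cpp
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Simplified stand-ins for Scope/Operator, only to show execution order.
struct Scope { std::map<std::string, float> vars; };

struct Op {
  virtual ~Op() = default;
  virtual void Run(Scope* scope) = 0;
};

struct AddOp : Op {
  std::string x, y, out;
  AddOp(std::string x, std::string y, std::string out)
      : x(std::move(x)), y(std::move(y)), out(std::move(out)) {}
  void Run(Scope* scope) override {
    scope->vars[out] = scope->vars[x] + scope->vars[y];
  }
};

int main() {
  Scope scope;
  scope.vars["a"] = 1.0f;
  scope.vars["b"] = 2.0f;

  // The "precompiled" block: an ordered list of operators.
  std::vector<std::unique_ptr<Op>> block;
  block.push_back(std::make_unique<AddOp>("a", "b", "c"));

  // The executor interprets the block: run each op in order.
  for (auto& op : block) op->Run(&scope);
  std::cout << scope.vars["c"] << "\n";  // prints 3
}
```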
......
...@@ -4,7 +4,7 @@
A PaddlePaddle program consists of two parts -- the first generates a `ProgramDesc` protobuf message that describes the program, and the second runs this message using a C++ class `Executor`.
A simple example PaddlePaddle program can be found in [graph.md](../others/graph.md):
```python
x = layer.data("images")
......
# Design Doc: Concurrent Programming with Fluid
With PaddlePaddle Fluid, users describe a program rather than a model. The program is a [`ProgramDesc`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/framework.proto) protobuf message. TensorFlow/MxNet/Caffe2 applications generate protobuf messages too, but their protobuf messages represent the model, a graph of operators, rather than the program that trains/uses the model.
Many know that when we program TensorFlow, we can specify the device on which each operator runs. This allows us to create a concurrent/parallel AI application. An interesting question is **how does a `ProgramDesc` represent a concurrent program?**
...@@ -28,19 +28,19 @@ The following table compares concepts in Fluid and Go
<tr>
<td>control-flow and built-in functions </td>
<td>
<a href="https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/operators">intrinsics/operators</a></td>
<td></td>
</tr>
<tr>
<td>goroutines, channels </td>
<td>
<a href="https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/framework/thread_pool.h">class ThreadPool</a></td>
<td></td>
</tr>
<tr>
<td>runtime </td>
<td>
<a href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/executor.h">class Executor</a></td>
<td></td>
</tr>
</tbody>
...@@ -78,7 +78,7 @@ message ProgramDesc {
}
```
Then, the default `main` function calls `fluid.run()`, which creates an instance of the [`class Executor`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/executor.h) and calls `Executor.Run(block[0])`, where `block[0]` is the first and only block defined in the above `ProgramDesc` message.
The default `main` function is defined as follows:
...@@ -146,7 +146,7 @@ An explanation of the above program:
- `fluid.k8s` is a package that provides access to the Kubernetes API.
- `fluid.k8s.get_worker_addrs` returns the list of IPs and ports of all pods of the current job except for the current one (the master pod).
- `fluid.tensor_array` creates a [tensor array](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/lod_tensor_array.h). `fluid.parallel_for` creates a `ParallelFor` intrinsic, which, when executed,
1. creates `len(L)` scopes, each for the concurrent running of the sub-block (block 1 in this case), and initializes a variable named "index" in the scope to an integer value in the range `[0, len(L)-1]`, and
2. creates `len(L)` threads by calling into the `ThreadPool` singleton, each thread
...@@ -175,7 +175,7 @@ where
1. listens on the current pod's IP address, as returned by `fluid.k8s.self_addr()`,
2. once a connection is established,
1. creates a scope of two parameters, "input" and "output",
2. reads a [Fluid variable](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/variable.h) and saves it into "input",
3. creates an Executor instance and calls `Executor.Run(block)`, where the block is generated by running the lambda specified as the second parameter of `fluid.listen_and_do`.
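A conceptual sketch of that request loop, with a vector of strings standing in for accepted connections and a callback standing in for the generated block (no real networking; all names are hypothetical):
```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical per-request scope with the two parameters described above.
struct Scope {
  std::string input;
  std::string output;
};

void ListenAndDo(const std::vector<std::string>& requests,
                 const std::function<void(Scope*)>& block) {
  for (const auto& request : requests) {  // stands in for accepted connections
    Scope scope;                          // fresh scope per request
    scope.input = request;                // save the message into "input"
    block(&scope);                        // Executor.Run(block)
    std::cout << "reply: " << scope.output << "\n";
  }
}

int main() {
  ListenAndDo({"x", "y"}, [](Scope* s) { s->output = s->input + "!"; });
}
```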
## Summarization
......
...@@ -177,7 +177,7 @@ The local training architecture will be the same as the distributed training arc
### Training Data
In PaddlePaddle v0.10.0, training data is typically read
with [data reader](./README.md) from Python. This approach is
no longer efficient when training distributedly since the Python
process no longer runs on the same node with the trainer processes,
the Python reader will need to read from the distributed filesystem
......
...@@ -65,7 +65,7 @@ For embedding layers, the gradient may have many rows containing only 0 when tra
if the gradient uses a dense tensor to do parameter optimization,
it could consume unnecessary memory, slow down the calculations and waste
the bandwidth while doing distributed training.
In Fluid, we introduce [SelectedRows](../modules/selected_rows.md) to represent a list of rows containing
non-zero gradient data. So when we do parameter optimization both locally and remotely,
we only need to send those non-zero rows to the optimizer operators:
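A minimal sketch of the idea, assuming a hypothetical layout rather than Fluid's actual class: only the non-zero rows and their indices are stored, and the optimizer scatters them back into the dense parameter:
```cpp
#include <vector>

// Hypothetical SelectedRows-like layout: `rows[i]` is the index in the full
// dense tensor of the dense row stored in `values[i]`.
struct SelectedRows {
  std::vector<long> rows;
  std::vector<std::vector<float>> values;
  long height = 0;  // number of rows of the corresponding dense tensor
};

// Plain SGD update that touches only the non-zero gradient rows.
void SgdUpdate(const SelectedRows& grad, float lr,
               std::vector<std::vector<float>>* param) {
  for (size_t i = 0; i < grad.rows.size(); ++i) {
    std::vector<float>& row = (*param)[grad.rows[i]];
    for (size_t j = 0; j < row.size(); ++j) {
      row[j] -= lr * grad.values[i][j];
    }
  }
}
```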
......
...@@ -22,7 +22,7 @@ There are several important concepts here:
There could be local variables defined in each step-net. PaddlePaddle runtime realizes these variables in *step-scopes*, which are created for each step.
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/Paddle/develop/doc/fluid/images/rnn.png"/><br/>
Figure 2 illustrates the RNN's data flow
</p>
...@@ -93,7 +93,7 @@ For example, we could have a 2-level RNN, where the top level corresponds to par
The following figure illustrates feeding text into the lower level, one sentence per step, and feeding the step outputs into the top level. The final top-level output is about the whole text.
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/Paddle/develop/doc/fluid/images/2_level_rnn.png"/>
</p>
```python
...@@ -149,5 +149,5 @@ If the `output_all_steps` is set to False, it will only output the final time st
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/Paddle/develop/doc/fluid/images/rnn_2level_data.png"/>
</p>
...@@ -9,7 +9,7 @@
concepts/index_cn.rst
data_type/index_cn.rst
memory/index_cn.rst
multi_devices/index_cn.rst
dynamic_rnn/index_cn.rst
concurrent/index_cn.rst
algorithm/index_cn.rst
......
...@@ -9,7 +9,7 @@ Design
concepts/index_en.rst
data_type/index_en.rst
memory/index_en.rst
multi_devices/index_en.rst
dynamic_rnn/index_en.rst
concurrent/index_en.rst
algorithm/index_en.rst
......
...@@ -36,7 +36,7 @@ Please be aware that these Python classes need to maintain some construction-tim
### Program
A `ProgramDesc` describes a [DL program](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/concepts/program.md), which is composed of an array of `BlockDesc`s. The `BlockDesc`s in a `ProgramDesc` can have a tree-like hierarchical structure. However, the `ProgramDesc` only stores a flattened array of `BlockDesc`s. A `BlockDesc` refers to its parent block by its index in the array. For example, operators in the step block of an RNN operator need to be able to access variables in its ancestor blocks.
Whenever we create a block, we need to set its parent block to the current block, hence the Python class `Program` needs to maintain a data member `current_block`.
...@@ -70,7 +70,7 @@ class Program(objects):
### Block
A [Block](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/concepts/block.md) includes
1. a map from variable names to an instance of the Python `Variable` class, and
1. a list of `Operator` instances.
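A minimal structural sketch of this bookkeeping (hypothetical C++ stand-ins for the Python classes; the real classes carry much more state), including the parent-index link into the flattened block array described above:
```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-ins for the Variable/Operator wrappers.
struct Variable { std::string name; };
struct Operator { std::string type; };

struct Block {
  std::map<std::string, std::shared_ptr<Variable>> vars;  // name -> Variable
  std::vector<std::shared_ptr<Operator>> ops;             // ordered op list
  int parent_idx = -1;  // index of the parent block in the flattened array
};

struct Program {
  std::vector<Block> blocks;  // flattened array; blocks[0] is the root
  int current_block = 0;      // the block new ops/vars are appended to
};
```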
......
...@@ -32,9 +32,9 @@ In the new design, we propose to create new operations for regularization. For n
- L2_regularization_op
- L1_regularization_op
These ops can be like any other ops with their own CPU/GPU implementations either using Eigen or separate CPU and GPU kernels. As the initial implementation, we can implement their kernels using Eigen following the abstraction pattern implemented for [Activation Ops](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/accuracy_op.h). This abstraction pattern can make it very easy to implement new regularization schemes other than L1 and L2 norm penalties.
The idea of building ops for regularization is in sync with the refactored Paddle philosophy of using operators to represent any computation unit. The way these ops will be added to the computation graph will be decided by the [layer functions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md#layer-function) in the Python API.
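For concreteness, a hedged sketch of what an `L2_regularization_op` might compute, using plain loops instead of Eigen and hypothetical names: the gradient of the penalty `(lambda/2) * ||param||^2` is folded into the parameter gradient.
```cpp
#include <vector>

// Hypothetical L2 regularization body: out = grad + lambda * param,
// i.e. the gradient of (lambda/2) * ||param||^2 added to the gradient.
std::vector<float> L2RegularizedGrad(const std::vector<float>& grad,
                                     const std::vector<float>& param,
                                     float lambda) {
  std::vector<float> out(grad.size());
  for (size_t i = 0; i < grad.size(); ++i) {
    out[i] = grad[i] + lambda * param[i];
  }
  return out;
}
```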
### Computation Graph
...@@ -48,7 +48,7 @@ The Python API will modify this computation graph to add regularization operator
### Python API implementation for Regularization
Using the low level ops, `L2_regularization_op` and `L1_regularization_op`, any user can add regularization to their computation graphs. However, this will require a lot of lines of code and we should design Python APIs that support regularization. An example of such an API can be seen in [Keras](https://keras.io/regularizers/). As per the PaddlePaddle [Python API design](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md), the layer functions are responsible for creating operators, operator parameters and variables. Since regularization is a property of parameters, it makes sense to create these in the layer functions.
#### Creation of Regularization ops
There are two possibilities for creating the regularization ops:
...@@ -63,4 +63,4 @@ Since we want to create the regularization ops in a lazy manner, the regularizat
#### High-level API
In PaddlePaddle Python API, users will primarily rely on [layer functions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md#layer-function) to create neural network layers. Hence, we also need to provide regularization functionality in layer functions. The design of these APIs can be postponed for now. A good reference for these APIs can be found in [Keras](https://keras.io/regularizers/) and also by looking at Tensorflow in [`tf.contrib.layers`](https://www.tensorflow.org/api_guides/python/contrib.layers).
...@@ -23,7 +23,7 @@ func paddlepaddle() {
}
```
This program consists of a [block](../concepts/block.md) of three operators --
`read`, `assign`, and `mult`. Its `ProgramDesc` message looks like
the following
...@@ -107,4 +107,4 @@ where `cuda_context` could be a global variable of type
## Multi-Block Code Generation
Most Fluid application programs may have more than one block. To
execute them, we need to trace [scopes](../concepts/scope.md).
...@@ -11,7 +11,7 @@ The goals of refactoring include:
1. PaddlePaddle represents the computation, training and inference of Deep Learning models by computation graphs.
1. Please refer to [computation graphs](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/others/graph.md) for a concrete example.
1. Users write Python programs to describe the graphs and run them (locally or remotely).
...@@ -28,7 +28,7 @@ The goals of refactoring include:
1. the C++ library `libpaddle.so` for local execution,
1. the master process of a distributed training job for training, or
1. the server process of a Kubernetes serving job for distributed serving.
1. *Execution* executes the graph by constructing instances of class [`Variable`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/variable.h#L24) and [`OperatorBase`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/operator.h#L70), according to the protobuf message.
## Description and Realization of Computation Graph
...@@ -48,16 +48,16 @@ At runtime, the C++ program realizes the graph and runs it.
<tr>
<td>Data</td>
<td>
<a href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/framework.proto#L107">VarDesc</a></td>
<td>
<a href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/variable.h#L24">Variable</a></td>
</tr>
<tr>
<td>Operation </td>
<td>
<a href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/framework.proto#L35">OpDesc</a></td>
<td>
<a href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/operator.h#L64">Operator</a></td>
</tr>
<tr>
<td>Block </td>
...@@ -85,7 +85,7 @@ The word *graph* is interchangeable with *block* in this document. A graph cons
1. The invocation of `train` or [`infer`](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/inference.py#L108) methods in the Python program does the following:
1. Create a new Scope instance in the [scope hierarchy](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/concepts/scope.md) for each run of a block,
1. realize local variables defined in the BlockDesc message in the new scope,
1. a scope is similar to the stack frame in programming languages,
...@@ -195,7 +195,7 @@ Maintaining a map, whose key is the type name and the value is the corresponding
## Related Concepts
### Op_Maker
Its constructor takes `proto` and `checker`. They are completed during Op_Maker's construction. ([ScaleOpMaker](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/scale_op.cc#L37))
### Register Macros
```cpp
...@@ -236,7 +236,7 @@ REGISTER_OP_WITHOUT_GRADIENT(op_type, op_class, op_maker_class)
* `Tensor` is an n-dimensional array with a type.
* Only dims and data pointers are stored in `Tensor`.
* All operations on `Tensor` are written in `Operator` or global functions.
* Variable length Tensor design: [LoDTensor](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/concepts/lod_tensor.md)
* `Variable` instances are the inputs and the outputs of an operator, not just `Tensor`.
* `step_scopes` in RNN is a variable and not a tensor.
* `Scope` is where variables are stored.
......
# Kernel Hint Design
## Problem
In PaddlePaddle's [Design](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/execution/switch.md), one Operator may have multiple kernels. Users may have some personal preference to choose a certain type of kernel for an operator, such as `force_cpu` to choose a CPU kernel or `use_cudnn` to choose a CUDNN kernel, so we need to provide a way for users to do this.
In the current design, we use KernelType to describe one kernel.
...@@ -14,7 +14,7 @@ struct KernelType {
```
`place_`, `data_type_` and `layout_` can be obtained from the input tensors of the operator; `GetActualKernelType(inputs)` uses the inputs to infer the proper kernel key that fits the incoming data, but users cannot directly configure it.
The [design](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/execution/switch.md) also provides a virtual method `GetExpectedKernelType` that users can overload and use to choose the KernelType they want to use.
So we should send the information the user defined in the proto to `GetExpectedKernelType` for choosing a kernel.
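A self-contained sketch of that flow, with simplified hypothetical types: the hint parsed from the op's attributes overrides the kernel place inferred from the inputs.
```cpp
// Simplified, hypothetical types; the real KernelType also carries
// data type and layout, and the hint arrives via the op's proto attributes.
enum class Place { kCPU, kGPU };
struct KernelType { Place place_; };

struct OpContext {
  bool force_cpu;     // the user hint, parsed from the op's attributes
  Place input_place;  // where the input tensors actually live
};

KernelType GetActualKernelType(const OpContext& ctx) {
  return KernelType{ctx.input_place};  // inferred from the incoming data
}

KernelType GetExpectedKernelType(const OpContext& ctx) {
  KernelType kt = GetActualKernelType(ctx);
  if (ctx.force_cpu) kt.place_ = Place::kCPU;  // the user hint wins
  return kt;
}
```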
......
...@@ -8,7 +8,7 @@ struct OpKernelType {
proto::DataType data_type_;
};
```
For more details, please refer to the [code](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/operator.h#L348-L374) on GitHub.
It contains two keys, `Place` and `DataType`, which will be hashed into a unique key to represent a certain type of kernel. However, these two keys do not provide enough information. We need a more complete representation of `OpKernelType`.
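A sketch of how the two keys might be hashed into one lookup value, using a standard hash-combine step (the real `OpKernelType` carries more fields, such as layout and library type):
```cpp
#include <cstddef>
#include <functional>

enum class Place { kCPU, kGPU };
enum class DataType { kFP32, kFP64 };

// Combine the two kernel keys into a single lookup key.
std::size_t HashKernelKey(Place place, DataType data_type) {
  std::size_t seed = std::hash<int>()(static_cast<int>(place));
  seed ^= std::hash<int>()(static_cast<int>(data_type)) + 0x9e3779b9 +
          (seed << 6) + (seed >> 2);  // boost-style hash_combine step
  return seed;
}
```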
......
...@@ -11,7 +11,7 @@ In the old version of PaddlePaddle, the C++ class `RecurrentGradientMachine` imp
There are a lot of heuristic tricks in sequence generation tasks, so the flexibility of the sequence decoder is very important to users.
During the refactoring of PaddlePaddle, some new concepts were proposed, such as [LoDTensor](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/concepts/lod_tensor.md) and [TensorArray](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/concepts/tensor_array.md), that can better support sequence usage; they can also help make the implementation of a beam-search-based sequence decoder **more transparent and modular**.
For example, the RNN states, candidate IDs and probabilities of beam search can all be represented as `LoDTensor`s;
the selected candidate's IDs in each time step can be stored in a `TensorArray`, and `Packed` into the translated sentences.
......
...@@ -4,7 +4,7 @@ To make the operator document itself more clear, we recommend operator names obe
## OpProtoMaker names
When defining an operator in Paddle, a corresponding [OpProtoMaker](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/operator.h#L170) (TODO: OpProtoMaker Doc) needs to be defined. All the Inputs/Outputs and Attributes will be written into the [OpProto](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/framework.proto#L61), and will be used in the client language to create the operator.
- Input/Output.
- Input/Output names follow **CamelCase**, e.g. `X`, `Y`, `Matrix`, `LastAxisInMatrix`. Inputs/Outputs are much like Variables, so we prefer meaningful English words.
......
...@@ -147,7 +147,7 @@ class MulOp : public framework::OperatorWithKernel {
};
```
[`MulOp`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/mul_op.cc#L22) inherits from `OperatorWithKernel`. Its `public` members are:
```cpp
using framework::OperatorWithKernel::OperatorWithKernel;
...@@ -173,7 +173,7 @@ MulOp(const std::string &type, const framework::VariableNameMap &inputs,
`MulKernel` inherits from `framework::OpKernel` and takes the following two template parameters:
- `typename DeviceContext`: the device type. When different devices (CPU, CUDA) share the same kernel, this template parameter is required; when they do not share a kernel, it is omitted. A non-sharing example is [`OnehotCrossEntropyOpKernel`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/cross_entropy_op.h#L43).
- `typename T`: the data type, such as `float` or `double`.
...@@ -201,9 +201,9 @@ MulOp(const std::string &type, const framework::VariableNameMap &inputs,
Note: **different devices (CPU, CUDA) share one Op definition; whether they share the same `OpKernel` depends on whether the functions called by `Compute` support different devices.**
`MulOp`'s CPU and CUDA implementations share the same `Kernel`. For an example where the `OpKernel` is not shared, see [`OnehotCrossEntropyOpKernel`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/cross_entropy_op.h#L43).
To make writing the `OpKernel` computation simpler, and to allow the CPU and CUDA code to be reused, we usually implement the `Compute` interface with the Eigen unsupported Tensor module. For how to use the Eigen library in PaddlePaddle, please refer to the [usage document](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/dev/use_eigen_cn.md).
This completes the forward Op implementation. Next, the op and its kernel need to be registered in the `.cc` file.
The definitions of the backward Op class and the backward OpKernel are similar to those of the forward Op and are not repeated here. **Note, however, that the backward Op has no `ProtoMaker`.**
......
...@@ -26,13 +26,6 @@ Here are the base types needed. For details, please refer to the design docs.
Operators can be categorized into two groups: operator with kernel(s) and operator without kernel(s). An operator with kernel(s) inherits from `OperatorWithKernel` while the one without kernel(s) inherits from `OperatorBase`. This tutorial focuses on implementing operators with kernels. In short, an operator includes the following information:
<table>
<thead>
<tr>
...@@ -176,7 +169,7 @@ Usually `OpProtoMaker` and `Op`'s type definitions are written in `.cc` files, w
`MulKernel` inherits `framework::OpKernel`, which includes the following templates:
- `typename DeviceContext` denotes the device context type. When different devices, namely the CPUDeviceContext and the CUDADeviceContext, share the same kernel, this template needs to be added. If they don't share kernels, this must not be added. An example of a non-sharing kernel is [`OnehotCrossEntropyOpKernel`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/cross_entropy_op.h#L43).
- `typename T` denotes the data type, such as `float` or `double`.
...@@ -207,7 +200,7 @@ Note that **different devices (CPU, CUDA) share one Op definition; whether or not
`MulOp`'s CPU and CUDA implementations share the same `Kernel`. A non-sharing `OpKernel` example can be seen in [`OnehotCrossEntropyOpKernel`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/cross_entropy_op.cc).
To ease the writing of `OpKernel` compute, and for reusing code across devices, the [`Eigen-unsupported Tensor`](https://bitbucket.org/eigen/eigen/src/default/unsupported/Eigen/CXX11/src/Tensor/README.md?fileviewer=file-view-default) module is used to implement the `Compute` interface. To learn about how the Eigen library is used in PaddlePaddle, please see the [usage document](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/dev/use_eigen_en.md).
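For a standalone taste of that style (not Paddle code), here is an elementwise expression written against the Eigen unsupported Tensor module:
```cpp
#include <unsupported/Eigen/CXX11/Tensor>
#include <iostream>

int main() {
  Eigen::Tensor<float, 1> x(3), y(3);
  x.setValues({1.f, 2.f, 3.f});
  y.setValues({10.f, 20.f, 30.f});

  // Expression templates: the whole elementwise computation is one expression,
  // which is what lets the same Compute body target different devices.
  Eigen::Tensor<float, 1> z = x * x.constant(2.f) + y;
  std::cout << z(0) << " " << z(1) << " " << z(2) << "\n";  // 12 24 36
}
```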
This concludes the forward implementation of an operator. Next, its operation and kernel need to be registered in a `.cc` file.
......
...@@ -4,13 +4,13 @@
PaddlePaddle Fluid has hundreds of operators. Each operator could have one or more kernels. A kernel is an implementation of the operator for a certain device, which could be a hardware device, e.g., the CUDA GPU, or a library that utilizes a device, e.g., Intel MKL that makes full use of the Xeon CPU.
[This document](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/dev/new_op_en.md) explains how to add an operator and its kernels. The kernels of an operator are indexed by a C++ type [`OpKernelType`](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/multi_devices/operator_kernel_type.md). An operator chooses the right kernel at runtime. This choosing mechanism is described [here](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/execution/switch.md).
## Write Kernels for A New Device
### Add A New Device
For some historical reasons, we misuse the word *library* for *device*. For example, we call the device type the *library type*. An example is the header file [`library_type.h`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/library_type.h#L24). We will correct this ASAP.
To register a new device, we need to add an enum value to `LibraryType`:
...@@ -23,9 +23,9 @@ enum class LibraryType {
```
### Add A New [Place](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/place.h#L53)
If you have a new kind of Device, you first need to add a new kind of [`Place`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/place.h#L53). For example `CUDAPlace`:
```cpp
struct CUDAPlace {
...@@ -45,8 +45,8 @@ struct CUDAPlace {
typedef boost::variant<CUDAPlace, CPUPlace> Place;
```
### Add a [device context](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/device_context.h#L37)
After a new kind of Device is added, you should add a corresponding [DeviceContext](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/device_context.h#L37) for it.
```cpp
class DeviceContext {
...@@ -58,9 +58,9 @@ class DeviceContext {
};
```
### Implement a new [OpKernel](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/operator.h#L351) for your Device.
Detailed documentation can be found in [`new_op_and_kernel`](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/dev/new_op_en.md).
```cpp
class OpKernelBase {
...@@ -101,7 +101,7 @@ REGISTER_OP_KERNEL(
`kernel0` and `kernel1` are kernels that have the same `op_type`, `library_type` and `place_type` but different `data_types`.
Take [`conv2d`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/conv_cudnn_op.cu.cc#L318) as an example:
```cpp
REGISTER_OP_KERNEL(conv2d, CPU, paddle::platform::CPUPlace,
......
...@@ -13,7 +13,7 @@ So, how to support a new Device/Library in Fluid becomes a challenge.
## Basic: Integrate A New Device/Library
For a general overview of fluid, please refer to the [overview doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/read_source.md).
There are mainly three parts that we have to consider while integrating a new device/library:
...@@ -28,7 +28,7 @@ There are mainly three parts that we have to consider while integrating a new de
Please note that devices and computing libraries do not correspond one-to-one. A device can have many computing libraries, and a computing library can also support several devices.
#### Place
Fluid uses the class [Place](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/place.h#L55) to represent the device memory where data is located. If we add another device, we have to add the corresponding `DevicePlace`.
```
| CPUPlace
...@@ -44,7 +44,7 @@ typedef boost::variant<CUDAPlace, CPUPlace, FPGAPlace> Place;
#### DeviceContext
Fluid uses the class [DeviceContext](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/device_context.h#L30) to manage the resources in different libraries, such as the CUDA stream in `CUDADeviceContext`. There are also inheritance relationships between different kinds of `DeviceContext`.
```
...@@ -84,7 +84,7 @@ private:
#### memory module
Fluid provides the following [memory interfaces](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/memory/memory.h#L36):
```
template <typename Place>
...@@ -102,7 +102,7 @@ To implement these interfaces, we have to implement MemoryAllocator for differen ...@@ -102,7 +102,7 @@ To implement these interfaces, we have to implement MemoryAllocator for differen
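The truncated fragment above opens a small family of place-parameterized functions. Reconstructed from the linked header as a hedged sketch, not verbatim:

```cpp
// Allocate size bytes on the given place.
template <typename Place>
void* Alloc(Place place, size_t size);

// Return memory previously obtained from Alloc on the same place.
template <typename Place>
void Free(Place place, void* ptr);

// Report how much memory is currently in use on the place.
template <typename Place>
size_t Used(Place place);
```

Supporting a new device then means specializing each of these templates for the new place type on top of that device's allocator.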
#### Tensor
[Tensor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/tensor.h#L36) holds data with some shape in a specific Place.
```cpp
class Tensor {
...
```
@@ -161,7 +161,7 @@ t.mutable_data(place);
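A kernel never allocates raw memory itself; it asks the tensor for storage on a place, as the `t.mutable_data(place)` context line above hints. A minimal usage sketch built from the calls that appear in the tests below:

```cpp
// Sketch: Resize fixes the shape; mutable_data<T> lazily allocates a
// buffer of matching size on the requested place and returns it.
paddle::framework::Tensor t;
t.Resize(paddle::framework::make_ddim({2, 3}));
float* cpu_buf = t.mutable_data<float>(paddle::platform::CPUPlace());
```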
Fluid implements computing units based on different DeviceContexts. Some computing units are shared between operators; this common part is put in the `operators/math` directory as basic functors.
Let's take [MaxOutFunctor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/math/maxouting.h#L27) as an example:
The interface is defined in the header file.
@@ -210,7 +210,7 @@ The implementation of `OpKernel` is similar to math functors, the extra thing we
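The shape of the pattern: one functor template parameterized on the device context, with a specialization per device compiled in the matching .cc/.cu file. A hedged sketch (the real `MaxOutFunctor` signature carries additional arguments):

```cpp
// Sketch of the shared-functor pattern; SomeFunctor is a placeholder name.
template <typename DeviceContext, typename T>
class SomeFunctor {
 public:
  void operator()(const DeviceContext& context,
                  const framework::Tensor& input,
                  framework::Tensor* output);
};
// operators/math/some_functor.cc instantiates SomeFunctor<CPUDeviceContext, T>;
// operators/math/some_functor.cu instantiates SomeFunctor<CUDADeviceContext, T>.
```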
Fluid provides different registration interfaces in op_registry.h.
Let's take the [Crop](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/crop_op.cc#L134) operator as an example:
In the .cc file:
@@ -236,5 +236,5 @@ Generally, we will implement OpKernel for all Device/Library of an Operator. We
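Registration then binds one kernel per device to the same `op_type`, each from its own translation unit. A hedged sketch following the Crop example; the macro names are the ones op_registry.h provides, while the kernel template arguments are illustrative:

```cpp
// crop_op.cc (sketch): register the operator's CPU kernel.
REGISTER_OP_CPU_KERNEL(crop, paddle::operators::CropKernel<float>);

// crop_op.cu (sketch): register the CUDA kernel under the same op_type.
REGISTER_OP_CUDA_KERNEL(crop, paddle::operators::CropKernel<float>);
```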
For more details, please refer to the following docs:
- operator kernel type [doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/multi_devices/operator_kernel_type.md)
- switch kernel [doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/execution/switch.md)
@@ -105,16 +105,14 @@ TEST(TensorCopy, Tensor) {
}

TEST(TensorFromVector, Tensor) {
  {
    std::vector<int> src_vec = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    paddle::framework::Tensor cpu_tensor;
    // Copy to CPU Tensor
    cpu_tensor.Resize(paddle::framework::make_ddim({3, 3}));
    auto cpu_place = new paddle::platform::CPUPlace();
    paddle::framework::TensorFromVector<int>(src_vec, &cpu_tensor);
    // Compare Tensors
    const int* cpu_ptr = cpu_tensor.data<int>();
@@ -125,8 +123,8 @@ TEST(TensorFromVector, Tensor) {
    }

    src_vec.erase(src_vec.begin(), src_vec.begin() + 5);
    cpu_tensor.Resize(paddle::framework::make_ddim({2, 2}));
    paddle::framework::TensorFromVector<int>(src_vec, &cpu_tensor);
    cpu_ptr = cpu_tensor.data<int>();
    src_ptr = src_vec.data();
    ASSERT_NE(src_ptr, cpu_ptr);
@@ -140,23 +138,23 @@ TEST(TensorFromVector, Tensor) {
#ifdef PADDLE_WITH_CUDA
  {
    std::vector<int> src_vec = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    paddle::framework::Tensor cpu_tensor;
    paddle::framework::Tensor gpu_tensor;
    paddle::framework::Tensor dst_tensor;
    // Copy to CPU Tensor
    cpu_tensor.Resize(paddle::framework::make_ddim({3, 3}));
    auto cpu_place = new paddle::platform::CPUPlace();
    paddle::platform::CPUDeviceContext cpu_ctx(*cpu_place);
    paddle::framework::TensorFromVector<int>(src_vec, cpu_ctx, &cpu_tensor);
    // Copy to GPU Tensor
    gpu_tensor.Resize(paddle::framework::make_ddim({3, 3}));
    auto gpu_place = new paddle::platform::CUDAPlace();
    paddle::platform::CUDADeviceContext gpu_ctx(*gpu_place);
    paddle::framework::TensorFromVector<int>(src_vec, gpu_ctx, &gpu_tensor);
    // Copy from GPU to CPU tensor for comparison
    paddle::framework::TensorCopy(gpu_tensor, *cpu_place, gpu_ctx, &dst_tensor);
    // Sync before comparing Tensors
    gpu_ctx.Wait();
@@ -172,11 +170,11 @@ TEST(TensorFromVector, Tensor) {
    src_vec.erase(src_vec.begin(), src_vec.begin() + 5);
    cpu_tensor.Resize(paddle::framework::make_ddim({2, 2}));
    paddle::framework::TensorFromVector<int>(src_vec, cpu_ctx, &cpu_tensor);
    gpu_tensor.Resize(paddle::framework::make_ddim({2, 2}));
    paddle::framework::TensorFromVector<int>(src_vec, gpu_ctx, &gpu_tensor);
    paddle::framework::TensorCopy(gpu_tensor, *cpu_place, gpu_ctx, &dst_tensor);
    // Sync before comparing Tensors
    gpu_ctx.Wait();
@@ -197,18 +195,16 @@ TEST(TensorFromVector, Tensor) {
}

TEST(TensorToVector, Tensor) {
  {
    paddle::framework::Tensor src;
    int* src_ptr = src.mutable_data<int>({3, 3}, paddle::platform::CPUPlace());
    for (int i = 0; i < 3 * 3; ++i) {
      src_ptr[i] = i;
    }

    paddle::platform::CPUPlace place;
    std::vector<int> dst;
    paddle::framework::TensorToVector<int>(src, &dst);

    for (int i = 0; i < 3 * 3; ++i) {
      EXPECT_EQ(src_ptr[i], dst[i]);
@@ -217,13 +213,13 @@ TEST(TensorToVector, Tensor) {
#ifdef PADDLE_WITH_CUDA
  {
    std::vector<int> src_vec = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    paddle::framework::Tensor gpu_tensor;
    paddle::platform::CUDAPlace place;
    paddle::platform::CUDADeviceContext gpu_ctx(place);
    paddle::framework::TensorFromVector<int>(src_vec, gpu_ctx, &gpu_tensor);

    std::vector<int> dst;
    paddle::framework::TensorToVector<int>(gpu_tensor, gpu_ctx, &dst);

    for (int i = 0; i < 3 * 3; ++i) {
      EXPECT_EQ(src_vec[i], dst[i]);
@@ -233,54 +229,54 @@ TEST(TensorToVector, Tensor) {
}

TEST(TensorContainsNAN, CPU) {
  {
    paddle::framework::Tensor src;
    float* buf = src.mutable_data<float>({3}, paddle::platform::CPUPlace());
    buf[0] = 0.0;
    buf[1] = NAN;
    buf[2] = 0.0;
    ASSERT_TRUE(paddle::framework::TensorContainsNAN(src));
    buf[1] = 0.0;
    ASSERT_FALSE(paddle::framework::TensorContainsNAN(src));
  }

  {
    paddle::framework::Tensor src;
    paddle::platform::float16* buf =
        src.mutable_data<paddle::platform::float16>(
            {3}, paddle::platform::CPUPlace());
    buf[0] = 0.0;
    buf[1].x = 0x7fff;
    buf[2] = 0.0;
    ASSERT_TRUE(paddle::framework::TensorContainsNAN(src));
    buf[1] = 0.0;
    ASSERT_FALSE(paddle::framework::TensorContainsNAN(src));
  }
}
TEST(TensorContainsInf, CPU) {
  {
    paddle::framework::Tensor src;
    double* buf = src.mutable_data<double>({3}, paddle::platform::CPUPlace());
    buf[0] = 1.0;
    buf[1] = INFINITY;
    buf[2] = 0.0;
    ASSERT_TRUE(paddle::framework::TensorContainsInf(src));
    buf[1] = 1.0;
    ASSERT_FALSE(paddle::framework::TensorContainsInf(src));
  }

  {
    paddle::framework::Tensor src;
    paddle::platform::float16* buf =
        src.mutable_data<paddle::platform::float16>(
            {3}, paddle::platform::CPUPlace());
    buf[0] = 1.0;
    buf[1].x = 0x7c00;
    buf[2] = 0.0;
    ASSERT_TRUE(paddle::framework::TensorContainsInf(src));
    buf[1] = 1.0;
    ASSERT_FALSE(paddle::framework::TensorContainsInf(src));
  }
}
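The `float16` cases poke raw bits through `.x` because there is no `NAN`/`INFINITY` literal for half precision. A self-contained sketch of the IEEE 754 binary16 encoding those constants rely on:

```cpp
// binary16 layout: 1 sign bit, 5 exponent bits (mask 0x7c00), 10 mantissa
// bits (mask 0x03ff). Exponent all ones + zero mantissa encodes infinity;
// exponent all ones + nonzero mantissa encodes a NaN.
#include <cassert>
#include <cstdint>

bool IsHalfInf(uint16_t bits) { return (bits & 0x7fff) == 0x7c00; }
bool IsHalfNaN(uint16_t bits) {
  return (bits & 0x7c00) == 0x7c00 && (bits & 0x03ff) != 0;
}

int main() {
  assert(IsHalfNaN(0x7fff));  // the payload written in the NaN test above
  assert(IsHalfInf(0x7c00));  // positive infinity, as in the Inf test
  return 0;
}
```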
...
@@ -45,9 +45,8 @@ static __global__ void FillInf(platform::float16* buf) {
}

TEST(TensorContainsNAN, GPU) {
  paddle::platform::CUDAPlace gpu(0);
  auto& pool = paddle::platform::DeviceContextPool::Instance();
  auto* cuda_ctx = pool.GetByPlace(gpu);
  {
    Tensor tensor;
@@ -58,7 +57,8 @@ TEST(TensorContainsNAN, GPU) {
  }
  {
    Tensor tensor;
    paddle::platform::float16* buf =
        tensor.mutable_data<paddle::platform::float16>({3}, gpu);
    FillNAN<<<1, 1, 0, cuda_ctx->stream()>>>(buf);
    cuda_ctx->Wait();
    ASSERT_TRUE(TensorContainsNAN(tensor));
@@ -66,9 +66,8 @@ TEST(TensorContainsNAN, GPU) {
}

TEST(TensorContainsInf, GPU) {
  paddle::platform::CUDAPlace gpu(0);
  auto& pool = paddle::platform::DeviceContextPool::Instance();
  auto* cuda_ctx = pool.GetByPlace(gpu);
  {
    Tensor tensor;
@@ -79,7 +78,8 @@ TEST(TensorContainsInf, GPU) {
  }
  {
    Tensor tensor;
    paddle::platform::float16* buf =
        tensor.mutable_data<paddle::platform::float16>({3}, gpu);
    FillInf<<<1, 1, 0, cuda_ctx->stream()>>>(buf);
    cuda_ctx->Wait();
    ASSERT_TRUE(TensorContainsInf(tensor));
...
@@ -74,105 +74,105 @@ class ActivationOpGrad : public framework::OperatorWithKernel {
  }
};

__attribute__((unused)) constexpr char SigmoidDoc[] = R"DOC(
Sigmoid Activation Operator.

$$out = \frac{1}{1 + e^{-x}}$$

)DOC";
__attribute__((unused)) constexpr char LogSigmoidDoc[] = R"DOC(
Logsigmoid Activation Operator.

$$out = \log \frac{1}{1 + e^{-x}}$$

)DOC";

__attribute__((unused)) constexpr char ExpDoc[] = R"DOC(
Exp Activation Operator.

$out = e^x$

)DOC";

__attribute__((unused)) constexpr char ReluDoc[] = R"DOC(
Relu Activation Operator.

$out = \max(x, 0)$

)DOC";

__attribute__((unused)) constexpr char TanhDoc[] = R"DOC(
Tanh Activation Operator.

$$out = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$

)DOC";

__attribute__((unused)) constexpr char TanhShrinkDoc[] = R"DOC(
TanhShrink Activation Operator.

$$out = x - \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$

)DOC";

__attribute__((unused)) constexpr char SqrtDoc[] = R"DOC(
Sqrt Activation Operator.

$out = \sqrt{x}$

)DOC";

__attribute__((unused)) constexpr char AbsDoc[] = R"DOC(
Abs Activation Operator.

$out = |x|$

)DOC";

__attribute__((unused)) constexpr char CeilDoc[] = R"DOC(
Ceil Activation Operator.

$out = ceil(x)$

)DOC";

__attribute__((unused)) constexpr char FloorDoc[] = R"DOC(
Floor Activation Operator.

$out = floor(x)$

)DOC";

__attribute__((unused)) constexpr char CosDoc[] = R"DOC(
Cosine Activation Operator.

$out = cos(x)$

)DOC";

__attribute__((unused)) constexpr char SinDoc[] = R"DOC(
Sine Activation Operator.

$out = sin(x)$

)DOC";

__attribute__((unused)) constexpr char RoundDoc[] = R"DOC(
Round Activation Operator.

$out = [x]$

)DOC";

__attribute__((unused)) constexpr char ReciprocalDoc[] = R"DOC(
Reciprocal Activation Operator.

$$out = \frac{1}{x}$$

)DOC";

__attribute__((unused)) constexpr char LogDoc[] = R"DOC(
Log Activation Operator.

$out = \ln(x)$

@@ -181,21 +181,21 @@ Natural logarithm of x.

)DOC";

__attribute__((unused)) constexpr char SquareDoc[] = R"DOC(
Square Activation Operator.

$out = x^2$

)DOC";

__attribute__((unused)) constexpr char SoftplusDoc[] = R"DOC(
Softplus Activation Operator.

$out = \ln(1 + e^{x})$

)DOC";

__attribute__((unused)) constexpr char SoftsignDoc[] = R"DOC(
Softsign Activation Operator.

$$out = \frac{x}{1 + |x|}$$

...