diff --git a/doc/fluid/design/algorithm/parameter_average.md b/doc/fluid/design/algorithm/parameter_average.md index 53d601d3a9a37e8adad519833bb6fa2dc48023a0..70c5cdecadbf57af8200d63893a86b41a4282a54 100644 --- a/doc/fluid/design/algorithm/parameter_average.md +++ b/doc/fluid/design/algorithm/parameter_average.md @@ -7,7 +7,9 @@ Polyak and Juditsky (1992) showed that the test performance of simple average of Hence, to accelerate the speed of Stochastic Gradient Descent, Averaged Stochastic Gradient Descent (ASGD) was proposed in Polyak and Juditsky (1992). For ASGD, the running average of parameters obtained by SGD, is used as the estimator for
θ*. The averaging is done as follows:

[equation: running average of the SGD iterates θ_1 … θ_t (asgd.gif); image reference updated in this diff]
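To make the averaging scheme concrete, here is a minimal NumPy sketch of a running parameter average maintained alongside plain SGD; the simple 1/t form and all names here are illustrative, not the fluid implementation.

```python
import numpy as np

def sgd_step(theta, grad, lr=0.1):
    """One plain SGD update."""
    return theta - lr * grad

# theta_avg maintains the running average of all SGD iterates:
# avg_t = avg_{t-1} + (theta_t - avg_{t-1}) / t
theta = np.zeros(4)
theta_avg = np.zeros(4)
for t in range(1, 101):
    grad = np.random.randn(4)          # stand-in for a minibatch gradient
    theta = sgd_step(theta, grad)
    theta_avg += (theta - theta_avg) / t

# train with `theta`, evaluate/infer with `theta_avg`
```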
We propose averaging for any optimizer similar to how ASGD performs it, as mentioned above.

diff --git a/doc/fluid/design/concurrent/channel.md b/doc/fluid/design/concurrent/channel.md index a00a3325e7b49381f0f82ebbf32b74683f02de5f..a5cf17faa8bc42a49ba35c195b91da62ea24e6eb 100644 --- a/doc/fluid/design/concurrent/channel.md +++ b/doc/fluid/design/concurrent/channel.md @@ -2,7 +2,7 @@ ## Introduction -A Channel is a data structure that allows for synchronous interprocess +A Channel is a data structure that allows for synchronous interprocess communication via message passing. It is a fundamental component of CSP (communicating sequential processes), and allows users to pass data between threads without having to worry about synchronization. @@ -18,7 +18,7 @@ Creates a new channel that takes in variables of a specific dtype. - **fluid.make_channel(dtype, capacity=0)** - **dtype**: The data type of variables being sent/received through the channel - - **capacity**: The capacity of the channel. A capacity of 0 represents + - **capacity**: The capacity of the channel. A capacity of 0 represents an unbuffered channel; capacity > 0 represents a buffered channel. ``` @@ -40,8 +40,8 @@ fluid.channel_close(ch) ### Send data to a channel -Sends a variable to a channel. Currently, variables of dtype `LoDTensor`, -`LoDRankTable`, `LoDTensorArray`, `SelectedRows`, `ReaderHolder`, and +Sends a variable to a channel. Currently, variables of dtype `LoDTensor`, +`LoDRankTable`, `LoDTensorArray`, `SelectedRows`, `ReaderHolder`, and `ChannelHolder` are supported. By default, the data of the Variable is moved from the sender to the receiver, @@ -52,7 +52,7 @@ however the user can optionally copy the data before performing the send. - **variable**: The variable to send to the channel - **is_copy**: If set to True, channel_send will perform a variable assign to copy the source variable to a new variable to be sent. ``` ch = fluid.make_channel(dtype=core.VarDesc.VarType.LOD_TENSOR) var = fill_constant(shape=[1],dtype=core.VarDesc.VarType.INT32, value=100) @@ -68,7 +68,7 @@ receiving variable. - **channel**: The channel to receive the variable from - **return_variable**: The destination variable used to store the data of the variable received from the channel ``` ch = fluid.make_channel(dtype=core.VarDesc.VarType.LOD_TENSOR) var = fill_constant(shape=[1],dtype=core.VarDesc.VarType.INT32, value=-1) @@ -84,9 +84,9 @@ internal queues, locks, and condition variables. ### QueueMessage QueueMessage encapsulates the state of the channel send/receive operation to be -put in the **sendq/recvq**. It contains a condition variable used to lock the +put in the **sendq/recvq**. It contains a condition variable used to lock the thread (when there are no available sends/receives). In addition, it contains -a callback function to notify a thread when the QueueMessage is being +a callback function to notify a thread when the QueueMessage is being processed by the channel. ### Queues @@ -108,21 +108,21 @@ channel_recv operation will put a new QueueMessage on the recvq and block the current thread under two conditions: 1. The channel is buffered and there is no data on the buff_ 2. The channel is unbuffered and does not have a sender

### State diagram

#### Channel Send

[figure: Channel Send state diagram; image reference updated in this diff]

#### Channel Receive

[figure: Channel Receive state diagram; image reference updated in this diff]

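To illustrate the queueing mechanics described above, here is a small Python sketch of a buffered channel built on a lock and condition variables. The names `Channel`, `QueueMessage`, `buff`, `sendq`, and `recvq` follow the design text, but the code is an illustrative model under Python threading assumptions, not the C++ implementation.

```python
import threading
from collections import deque

class QueueMessage:
    """A blocked sender/receiver parked on its own condition variable."""
    def __init__(self, lock, var=None):
        self.cond = threading.Condition(lock)
        self.var = var

class Channel:
    def __init__(self, capacity=0):
        self.capacity = capacity
        self.lock = threading.Lock()
        self.buff = deque()          # buffered data (buff_)
        self.sendq = deque()         # blocked senders
        self.recvq = deque()         # blocked receivers

    def send(self, var):
        with self.lock:
            if self.recvq:                      # hand off to a waiting receiver
                msg = self.recvq.popleft()
                msg.var = var
                msg.cond.notify()
            elif len(self.buff) < self.capacity:
                self.buff.append(var)           # room in the buffer
            else:                               # block on the sendq
                msg = QueueMessage(self.lock, var)
                self.sendq.append(msg)
                msg.cond.wait()

    def recv(self):
        with self.lock:
            if self.buff:
                var = self.buff.popleft()
                if self.sendq:                  # wake one blocked sender
                    msg = self.sendq.popleft()
                    self.buff.append(msg.var)
                    msg.cond.notify()
                return var
            if self.sendq:                      # unbuffered hand-off
                msg = self.sendq.popleft()
                msg.cond.notify()
                return msg.var
            msg = QueueMessage(self.lock)       # block on the recvq
            self.recvq.append(msg)
            msg.cond.wait()
            return msg.var
```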
## Limitations and Considerations

### Variable Copy

@@ -135,5 +135,5 @@ be sent before it is sent. Please note that this is achieved by adding an **assign** operator and creating a temporary variable that is sent in place of the original variable. Please -note that **assign** operator has limited support for only certain variables +note that the **assign** operator has limited support for only certain variable datatypes.

diff --git a/doc/fluid/design/concurrent/select_op.md b/doc/fluid/design/concurrent/select_op.md index 52c226bc94a4e8bfc5588705d7f65328840e91cc..98dd94a2be0e6cf92d8832e380bce1e358a49ca3 100644 --- a/doc/fluid/design/concurrent/select_op.md +++ b/doc/fluid/design/concurrent/select_op.md @@ -2,13 +2,13 @@ ## Introduction -In golang, the [**select**](https://golang.org/ref/spec#Select_statements) -statement lets a goroutine wait on multiple communication operations at the -same time. The **select** blocks until one of its cases can run, then -executes the case. If multiple cases are ready to run, then one case is +In golang, the [**select**](https://golang.org/ref/spec#Select_statements) +statement lets a goroutine wait on multiple communication operations at the +same time. The **select** blocks until one of its cases can run, then +executes the case. If multiple cases are ready to run, then one case is chosen at random to be executed. -With the introduction of CSP for Paddle, we mimic this behavior by +With the introduction of CSP for Paddle, we mimic this behavior by creating a ***select_op***. ## How to use it @@ -17,11 +17,11 @@ The **select_op** is available as a c++ operator. However most users will prefer to use the much simpler Python API. - **fluid.Select()**: Creates a select operator and adds it to the current -block within the main program. Also creates a sub block and adds it to the -main program. This sub block is used to hold all variables and operators +block within the main program. Also creates a sub block and adds it to the +main program. This sub block is used to hold all variables and operators used by the case statements. - -Within the select block, users can add cases by + +Within the select block, users can add cases by calling the **select.case** or **select.default** methods. - **fluid.Select.case(channel_action, channel, result_variable)**: Represents @@ -37,13 +37,13 @@ execute. ``` ch1 = fluid.make_channel(dtype=core.VarDesc.VarType.LOD_TENSOR) quit_ch = fluid.make_channel(dtype=core.VarDesc.VarType.LOD_TENSOR) x = fill_constant(shape=[1], dtype=core.VarDesc.VarType.INT32, value=0) y = fill_constant(shape=[1], dtype=core.VarDesc.VarType.INT32, value=1) while_cond = fill_constant(shape=[1], dtype=core.VarDesc.VarType.BOOL, value=True) while_op = While(cond=while_cond) with while_op.block(): with fluid.Select() as select: with select.case(fluid.channel_send, channel, x): @@ -99,17 +99,17 @@ blocks { } } // Create "select" operator. - // inputs: + // inputs: // X: All input variables used by operators within the select block // case_to_execute: Variable filled in by select_op when it determines // which case to execute. // // outputs: - // Out: All output variables referenced by operators within select block. - // + // Out: All output variables referenced by operators within select block. - // + // // attrs: // sub_block: The block id containing the select "cases" - // cases: Serialized list of all cases in the select op. + // cases: Serialized list of all cases in the select op.
// Each case is serialized as: '<index>,<type>,<channel>,<value>' // where type is 0 for default, 1 for send, and 2 for receive. // No channel and values are needed for default cases. @@ -150,7 +150,7 @@ into **X**. It will also create a temp variable called **case_to_execute**. Th filled in by the select_op after it has completed processing the case statements. If there are no available cases to execute (i.e., all cases are blocked on channel operations, and -there is no default statement), then the select_op will block the current thread. The thread will +there is no default statement), then the select_op will block the current thread. The thread will unblock once there is a channel operation affecting one of the case statements, at which point the **select_op** will set the **case_to_execute** variable to the index of the case to execute. @@ -247,17 +247,17 @@ blocks { ``` -Cases are represented by a **conditional_block operator**, whose condition is set as the output of -equal(**case_to_execute**, **case_index**). Since each case index is unique in this sub-block, +Cases are represented by a **conditional_block operator**, whose condition is set as the output of +equal(**case_to_execute**, **case_index**). Since each case index is unique in this sub-block, only one case will be executed.

### select_op flow

[figure: select_op flow diagram; image reference updated in this diff]

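As a conceptual model of that flow, here is a small, self-contained Python sketch of the case-selection loop; `ToyChannel`, `Case`, and `can_proceed` are illustrative stand-ins, not part of the actual operator.

```python
import random
from collections import deque

class ToyChannel:
    """Minimal stand-in: a buffered channel that can be polled for readiness."""
    def __init__(self, capacity=1):
        self.capacity = capacity
        self.buff = deque()

    def can_proceed(self, kind):
        if kind == 'send':
            return len(self.buff) < self.capacity
        return len(self.buff) > 0              # 'recv'

class Case:
    def __init__(self, index, kind, channel=None):
        self.index, self.kind, self.channel = index, kind, channel

def select(cases):
    """Return case_to_execute, mimicking the select_op's first two passes."""
    # Pass 1: poll the channel cases in random order, like golang's select.
    for c in random.sample(cases, len(cases)):
        if c.kind != 'default' and c.channel.can_proceed(c.kind):
            return c.index
    # Pass 2: nothing is ready; fall back to the default case if present.
    for c in cases:
        if c.kind == 'default':
            return c.index
    # Pass 3 (not modeled here): park a QueueMessage on every channel's
    # sendq/recvq and block the thread until a channel op wakes it.
    return None

ch = ToyChannel(capacity=1)
cases = [Case(0, 'send', ch), Case(1, 'recv', ch), Case(2, 'default')]
print(select(cases))   # 0 -- only the buffered send can proceed immediately
```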
-The select algorithm is inspired by golang's select routine. Please refer to +The select algorithm is inspired by golang's select routine. Please refer to http://www.tapirgames.com/blog/golang-concurrent-select-implementation for more information.

## Backward Pass

diff --git a/doc/fluid/design/dist_train/distributed_architecture.md b/doc/fluid/design/dist_train/distributed_architecture.md index a405cb6aaf80b9d2e8a1a9c774ca85cc7e62bbab..3cd4750bcea7d8f0f759e49b9b2df4e98cd9b1f5 100644 --- a/doc/fluid/design/dist_train/distributed_architecture.md +++ b/doc/fluid/design/dist_train/distributed_architecture.md @@ -40,11 +40,11 @@ computation is only specified in Python code which sits outside of PaddlePaddle, Similar to how a compiler uses an intermediate representation (IR) so that the programmer does not need to manually optimize their code for most of the cases, we can have an intermediate representation in PaddlePaddle as well. The compiler optimizes the IR as follows:

[figure: compiler IR optimization; image reference updated in this diff]

PaddlePaddle can support model parallelism by converting the IR so that the user no longer needs to manually perform the computation and operations in the Python component:

[figure: IR conversion for model parallelism; image reference updated in this diff]

The IR for PaddlePaddle after refactoring is called a `Block`; it specifies the computation dependency graph and the variables used in the computation. @@ -60,7 +60,7 @@ For a detailed explanation, refer to this document - The revamped distributed training architecture can address the above discussed limitations. Below is the illustration of how it does so:

[figure: distributed training architecture (distributed_architecture.png); image reference updated in this diff]

The major components are: *Python API*, *Distribute Transpiler* and *Remote Executor*. @@ -152,7 +152,7 @@ for data in train_reader(): The `JobDesc` object describes the distributed job resource specification to run in the cluster environment.

[figure: remote executor (remote_executor.png); image reference updated in this diff]

`RemoteExecutor.run` sends the `ProgramDesc` and [TrainingJob](https://github.com/PaddlePaddle/cloud/blob/unreleased-tpr/doc/autoscale/README.md#training-job-resource) @@ -171,7 +171,7 @@ In the future, a more general placement algorithm should be implemented, which m The local training architecture will be the same as the distributed training architecture; the difference is that everything runs locally, and there is just one PaddlePaddle runtime:

[figure: local training architecture (local_architecture.png); image reference updated in this diff]

### Training Data

diff --git a/doc/fluid/design/dist_train/multi_cpu.md b/doc/fluid/design/dist_train/multi_cpu.md index a8d8ee0422acc84835170a44eb83f9b5f0c6bb40..586612622a654403f07ce55bf7c7f35578af542f 100644 --- a/doc/fluid/design/dist_train/multi_cpu.md +++ b/doc/fluid/design/dist_train/multi_cpu.md @@ -8,11 +8,11 @@ Op graph to a multi-CPU Op graph, and run `ParallelDo` Op to run the graph. ## Transpiler

[figure: program graph before conversion (single-thread@3x.png); image reference updated in this diff]

After conversion:

[figure: program graph after conversion (multi-threads@3x.png); image reference updated in this diff]

## Implement

diff --git a/doc/fluid/design/dist_train/parameter_server.md b/doc/fluid/design/dist_train/parameter_server.md index 6ce48dfbfce8b094684b412ebfda7e505ddc30ae..179b5f8c299189c69167012e5ec8a2df9ab2380e 100644 --- a/doc/fluid/design/dist_train/parameter_server.md +++ b/doc/fluid/design/dist_train/parameter_server.md @@ -41,11 +41,11 @@ We will need these OPs: *Send*, *Recv*, *Enqueue*, *Dequeue*. Below is an example of converting the user-defined graph to the subgraphs for the trainer and the parameter server:

[figure: local graph before conversion (local-graph.png); image reference updated in this diff]

After converting:

[figure: trainer and parameter-server subgraphs (dist-graph.png); image reference updated in this diff]

1. The parameter variable W and its optimizer program are placed on the parameter server. 1. Operators are added to the program. @@ -69,7 +69,7 @@ In Fluid, we introduce [SelectedRows](../selected_rows.md) to represent a list o non-zero gradient data.
So when we do parameter optimization both locally and remotely, we only need to send those non-zero rows to the optimizer operators:

[figure: sparse update with SelectedRows (sparse_update.png); image reference updated in this diff]

### Benefits

diff --git a/doc/fluid/design/dynamic_rnn/rnn.md b/doc/fluid/design/dynamic_rnn/rnn.md index 6f414e5549b149bc88fb252085ff56dbb06730f8..9a61cd788afc3234c8957ad8c740e470382aa5dd 100644 --- a/doc/fluid/design/dynamic_rnn/rnn.md +++ b/doc/fluid/design/dynamic_rnn/rnn.md @@ -5,7 +5,7 @@ This document describes the RNN (Recurrent Neural Network) operator and how it i ## RNN Algorithm Implementation

[figure: an RNN unrolled into a full network; image reference updated in this diff]

The above diagram shows an RNN unrolled into a full network. @@ -22,7 +22,7 @@ There are several important concepts here: There could be local variables defined in each step-net. The PaddlePaddle runtime realizes these variables in *step-scopes*, which are created for each step.

[figure: RNN data flow across step-scopes; image reference updated in this diff]
Figure 2 illustrates the RNN's data flow

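The step-scope mechanism can be modeled in a few lines of Python; the dict-based scope and the toy step net below are illustrative stand-ins for the real framework objects, not fluid APIs.

```python
def run_rnn(step_net, inputs, boot_memory):
    """Unroll an RNN: one fresh scope per step, memory carried between steps."""
    scopes, outputs = [], []
    memory = boot_memory
    for x in inputs:                      # one time step per input element
        scope = {'step_input': x, 'pre_memory': memory}   # a new step-scope
        scope['memory'], scope['step_output'] = step_net(scope)
        scopes.append(scope)              # kept alive for the backward pass
        outputs.append(scope['step_output'])
        memory = scope['memory']          # becomes pre-memory of next step
    return outputs, scopes

# A toy step net: new memory is pre_memory + input; output echoes the memory.
step = lambda s: (s['pre_memory'] + s['step_input'],) * 2
outs, _ = run_rnn(step, inputs=[1, 2, 3], boot_memory=0)
print(outs)   # [1, 3, 6]
```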
@@ -93,7 +93,7 @@ For example, we could have a 2-level RNN, where the top level corresponds to par The following figure illustrates feeding text into the lower level, one sentence per step, and feeding the step outputs into the top level. The final top-level output is about the whole text.

[figure: 2-level RNN over a text, sentences feeding the lower level; image reference updated in this diff]

```python @@ -149,5 +149,5 @@ If the `output_all_steps` is set to False, it will only output the final time st

[figure: RNN outputs when `output_all_steps` is False; image reference updated in this diff]

diff --git a/doc/fluid/design/modules/batch_norm_op.md b/doc/fluid/design/modules/batch_norm_op.md index d1392619c42d9206bf4bddcd33ad11b033e6cbdb..211e060cc15327428900cd8eaf2fb74021701505 100644 --- a/doc/fluid/design/modules/batch_norm_op.md +++ b/doc/fluid/design/modules/batch_norm_op.md @@ -2,7 +2,7 @@ ## What is batch normalization -Batch normalization is a frequently-used method in deep network training. It adjusts the mean and variance of a layer's output, and make the data distribution easier for next layer's training. +Batch normalization is a frequently-used method in deep network training. It adjusts the mean and variance of a layer's output and makes the data distribution easier for the next layer's training. The principle of batch normalization can be summarized into a simple function: @@ -66,7 +66,7 @@ As most C++ operators do, `batch_norm_op` is defined by inputs, outputs, attribu The following graph shows the training computational process of `batch_norm_op`:

[figure: training computation of batch_norm_op (batch_norm_op_kernel.png); image reference updated in this diff]

cudnn provides APIs to finish the whole series of computation; we can use them in our GPU kernel. @@ -74,13 +74,13 @@ cudnn provides APIs to finish the whole series of computation, we can use them i `batch_norm_op` is wrapped as a layer in Python: -```python -def batch_norm_layer(net, +```python +def batch_norm_layer(net, input, - output, - scale, - bias, - use_global_est = False, + output, + scale, + bias, + use_global_est = False, epsilon = 1e-6, momentum = 0.99): mean_cache = scope.new_var(name = 'estimated_mean', trainable = False) @@ -119,15 +119,15 @@ for pass_id in range(PASS_NUM): if pass_id % 100 == 0: net.infer(test_image) # run the inference model # ... -``` +``` `is_infer` is an attribute. Once an operator is created, its attributes cannot be changed. This suggests that we maintain two `batch_norm_op`s in the model: one whose `is_infer` is `True` (we call it `infer_batch_norm_op`) and one whose `is_infer` is `False` (we call it `train_batch_norm_op`). They share all parameters and variables but are placed in two different branches. That is to say, if a network contains a `batch_norm_op`, it will fork into two branches: one goes through `train_batch_norm_op` and the other goes through `infer_batch_norm_op`:
[figure: network forked by batch_norm_op (batch_norm_fork.png); image reference updated in this diff]
-Just like what is shown in the above graph, the net forks before `batch_norm_op` and will never merge again. All the operators after `batch_norm_op` will duplicate. +Just like what is shown in the above graph, the net forks before `batch_norm_op` and never merges again. All the operators after `batch_norm_op` are duplicated. When the net runs in training mode, the end of the left branch will be set as the running target, so the dependency tracking process will ignore the right branch automatically. When the net runs in inference mode, the process is reversed.

diff --git a/doc/fluid/design/modules/regularization.md b/doc/fluid/design/modules/regularization.md index 21280ac898feb4dd5e5a5d9e88d121e856850f0b..ffc3199a84cb156173caca0d16643ea5194922f0 100644 --- a/doc/fluid/design/modules/regularization.md +++ b/doc/fluid/design/modules/regularization.md @@ -6,23 +6,23 @@ A central problem in machine learning is how to design an algorithm that will pe ### Parameter Norm Penalties Most common regularization approaches in deep learning are based on limiting the capacity of the models by adding a parameter norm penalty to the objective function `J`. This is given as follows:

[equation: J̃(θ; X, y) = J(θ; X, y) + α Ω(θ) (loss_equation.png); image reference updated in this diff]
The parameter `alpha` is a hyperparameter that weights the relative contribution of the norm penalty term, `omega`, relative to the standard objective function `J`. The most commonly used norm penalties are the L2 norm penalty and the L1 norm penalty. These are given as follows:

##### L2 Regularization:

[equation: Ω(w) = ½ ‖w‖₂² (l2_regularization.png); image reference updated in this diff]
##### L1 Regularization

[equation: Ω(w) = ‖w‖₁ = Σᵢ |wᵢ| (l1_regularization.png); image reference updated in this diff]
A much more detailed mathematical background of regularization can be found [here](http://www.deeplearningbook.org/contents/regularization.html). ## Regularization Survey -A detailed survey of regularization in various deep learning frameworks can be found [here](https://github.com/PaddlePaddle/Paddle/wiki/Regularization-Survey). +A detailed survey of regularization in various deep learning frameworks can be found [here](https://github.com/PaddlePaddle/Paddle/wiki/Regularization-Survey). ## Proposal for Regularization in PaddlePaddle @@ -32,41 +32,35 @@ In the new design, we propose to create new operations for regularization. For n - L2_regularization_op - L1_regularization_op -These ops can be like any other ops with their own CPU/GPU implementations either using Eigen or separate CPU and GPU kernels. As the initial implementation, we can implement their kernels using Eigen following the abstraction pattern implemented for [Activation Ops](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/accuracy_op.h). This abstraction pattern can make it very easy to implement new regularization schemes other than L1 and L2 norm penalties. +These ops can be like any other ops with their own CPU/GPU implementations either using Eigen or separate CPU and GPU kernels. As the initial implementation, we can implement their kernels using Eigen following the abstraction pattern implemented for [Activation Ops](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/accuracy_op.h). This abstraction pattern can make it very easy to implement new regularization schemes other than L1 and L2 norm penalties. -The idea of building ops for regularization is in sync with the refactored Paddle philosophy of using operators to represent any computation unit. The way these ops will be added to the computation graph, will be decided by the [layer functions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/python_api.md#layer-function) in Python API. +The idea of building ops for regularization is in sync with the refactored Paddle philosophy of using operators to represent any computation unit. The way these ops will be added to the computation graph will be decided by the [layer functions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/python_api.md#layer-function) in the Python API. ### Computation Graph Below is an example of a really simple feed-forward neural network.

[figure: simple feed-forward network (feed_forward.png); image reference updated in this diff]
The Python API will modify this computation graph to add regularization operators. The modified computation graph will look as follows:

[figure: feed-forward network with regularization ops (feed_forward_regularized.png); image reference updated in this diff]
### Python API implementation for Regularization -Using the low level ops, `L2_regularization_op` and `L1_regularization_op`, any user can add regularization to their computation graphs. However, this will require a lot of lines of code and we should design Python APIs that support regularization. An example of such an API can be seen in [Keras](https://keras.io/regularizers/). As per the PaddlePaddle [Python API design](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/python_api.md), the layer functions are responsible for creating operators, operator parameters and variables. Since regularization is a property of parameters, it makes sense to create these in the layer functions. +Using the low level ops, `L2_regularization_op` and `L1_regularization_op`, any user can add regularization to their computation graphs. However, this will require a lot of lines of code and we should design Python APIs that support regularization. An example of such an API can be seen in [Keras](https://keras.io/regularizers/). As per the PaddlePaddle [Python API design](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/python_api.md), the layer functions are responsible for creating operators, operator parameters and variables. Since regularization is a property of parameters, it makes sense to create these in the layer functions. #### Creation of Regularization ops There are two possibilities for creating the regularization ops: -1. We create these ops immediately while building the computation graph. -2. We add these ops in a lazy manner, just before the backward, similar to the way the optimization ops are added. +1. We create these ops immediately while building the computation graph. +2. We add these ops in a lazy manner, just before the backward pass, similar to the way the optimization ops are added. -The proposal is to add these ops in a lazy manner just before the backward pass. +The proposal is to add these ops in a lazy manner just before the backward pass. #### Storage of Regularization attributes -Since we want to create the regularization ops in a lazy manner, the regularization attributes (type of regularization and weight of regularization penalty) can be stored as attributes of the [`Parameter`](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/framework/framework.py#L421) class. This is because regularization is a property of the parameters and storing regularization properties with Parameters also allows for shared parameters. +Since we want to create the regularization ops in a lazy manner, the regularization attributes (type of regularization and weight of regularization penalty) can be stored as attributes of the [`Parameter`](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/framework/framework.py#L421) class. This is because regularization is a property of the parameters and storing regularization properties with Parameters also allows for shared parameters. #### High-level API In the PaddlePaddle Python API, users will primarily rely on [layer functions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/python_api.md#layer-function) to create neural network layers. Hence, we also need to provide regularization functionality in layer functions. The design of these APIs can be postponed for now. A good reference for these APIs can be found in [Keras](https://keras.io/regularizers/) and also by looking at Tensorflow in [`tf.contrib.layers`](https://www.tensorflow.org/api_guides/python/contrib.layers).
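To make the proposal concrete, here is a hedged Python sketch of how a layer function might record a regularizer on a `Parameter` and how the ops could then be appended lazily before the backward pass. All names here (`Program`, `Parameter`, `fc_layer`, `append_regularization_ops`) are illustrative, not the actual Paddle API.

```python
class Program:
    def __init__(self):
        self.parameters, self.ops = [], []

class Parameter:
    def __init__(self, name, regularizer=None, coeff=0.0):
        self.name = name
        # Regularization is stored as a parameter attribute, so shared
        # parameters automatically share their regularization settings.
        self.regularizer = regularizer      # e.g. 'L2' or 'L1'
        self.coeff = coeff                  # the alpha hyperparameter

def fc_layer(program, input_name, size, regularizer=None, coeff=0.0):
    """A layer function records the regularizer but emits no reg op yet."""
    w = Parameter('fc.w', regularizer, coeff)
    program.parameters.append(w)
    program.ops.append(('fc_op', input_name, w.name, size))
    return 'fc.out'

def append_regularization_ops(program):
    """Called once, lazily, just before appending the backward pass."""
    for p in program.parameters:
        if p.regularizer == 'L2':
            program.ops.append(('L2_regularization_op', p.name, p.coeff))
        elif p.regularizer == 'L1':
            program.ops.append(('L1_regularization_op', p.name, p.coeff))

prog = Program()
out = fc_layer(prog, 'image', 128, regularizer='L2', coeff=1e-4)
append_regularization_ops(prog)   # lazy: right before the backward pass
```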
diff --git a/doc/fluid/design/network/deep_speech_2.md b/doc/fluid/design/network/deep_speech_2.md index 7f5dcf55f9f2a0fd27ffde100510dd8fee305381..d3906143d3eab2853507eaf8eae51e7777e7178b 100644 --- a/doc/fluid/design/network/deep_speech_2.md +++ b/doc/fluid/design/network/deep_speech_2.md @@ -116,7 +116,7 @@ The classical DS2 network contains 15 layers (from bottom to top): - **One** CTC-loss layer
[figure: Deep Speech 2 network architecture (ds2_network.png); image reference updated in this diff]
Figure 1. Architecture of the Deep Speech 2 Network.
@@ -142,7 +142,7 @@ Key ingredients about the layers: - **Batch Normalization Layers**: - Added to all above layers (except for data and loss layer). - Sequence-wise normalization for RNNs: BatchNorm only performed on input-state projection and not state-state projection, for efficiency consideration. @@ -208,7 +208,7 @@ TODO by Assignees ### Beam Search with CTC and LM
[figure: CTC beam search decoder (beam_search.png); image reference updated in this diff]
Figure 2. Algorithm for CTC Beam Search Decoder.
diff --git a/doc/fluid/design/network/sequence_decoder.md b/doc/fluid/design/network/sequence_decoder.md index c4a9bbeeefca0e05c335dd60233691e8bac33015..a56c1b5bcacabe0d511e7e081baa9f0de4719ed5 100644 --- a/doc/fluid/design/network/sequence_decoder.md +++ b/doc/fluid/design/network/sequence_decoder.md @@ -199,7 +199,7 @@ Packing the `selected_generation_scores` will get a `LoDTensor`, and each tail i ## LoD and shape changes during decoding

[figure: LoD and shape changes during decoding (LOD-and-shape-changes-during-decoding.jpg); image reference updated in this diff]

According to the image above, the only phase that changes the LoD is beam search.

diff --git a/doc/fluid/design/others/gan_api.md b/doc/fluid/design/others/gan_api.md index fb41df8615f73d9fd4c32995eab265833eac1a55..8cc79304700e61f2ce1b6ff6290d0133bb7fdbb3 100644 --- a/doc/fluid/design/others/gan_api.md +++ b/doc/fluid/design/others/gan_api.md @@ -1,24 +1,24 @@ # Design for GAN -GAN (General Adversarial Net [https://arxiv.org/abs/1406.2661]) is an important model for unsupervised learning and widely used in many areas. +GAN (Generative Adversarial Net [https://arxiv.org/abs/1406.2661]) is an important model for unsupervised learning and is widely used in many areas. It applies several important concepts in machine learning system design, including building and running subgraphs, dependency tracing, different optimizers in one executor and so forth. In our GAN design, we wrap it as a user-friendly, easily customized Python API to design different models. We take the conditional DC-GAN (Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [https://arxiv.org/abs/1511.06434]) as an example due to its good performance on image generation.

[figure: overall running logic of GAN; image reference updated in this diff]
Figure 1. The overall running logic of GAN. The black solid arrows indicate the forward pass; the green dashed arrows indicate the backward pass of generator training; the red dashed arrows indicate the backward pass of the discriminator training. The BP pass of the green (red) arrow should only update the parameters in the green (red) boxes. The diamonds indicate the data providers. d\_loss and g\_loss marked in red and green are the two targets we would like to run.

The operators, layers and functions required/optional to build a GAN demo are summarized in https://github.com/PaddlePaddle/Paddle/issues/4563.

[figure: DC-GAN generator architecture (dcgan.png); image reference updated in this diff]
Figure 2. Photo borrowed from the original DC-GAN paper.

-## The Conditional-GAN might be a class. +## The Conditional-GAN might be a class. In this design we adopt the popular open-source design in https://github.com/carpedm20/DCGAN-tensorflow and https://github.com/rajathkmp/DCGAN. It contains the following data structure: - DCGAN(object): which contains everything required to build a GAN model. It provides the following member functions as its API: @@ -29,7 +29,7 @@ This design we adopt the popular open source design in https://github.com/carped Returns a generated image. - discriminator(image): -Given an image, decide if it is from a real source or a fake one. +Given an image, decide if it is from a real source or a fake one. Returns a 0/1 binary label. - build_model(self): @@ -47,7 +47,7 @@ To be more detailed, we introduce our design of DCGAN as following: ```python class DCGAN(object): def __init__(self, y_dim=None): # hyper parameters self.y_dim = y_dim # conditional gan or not self.batch_size = 100 @@ -82,18 +82,18 @@ class DCGAN(object): # input z: the random noise # input y: input data label (optional) # output G_im: generated fake images if self.y_dim: # conditional GAN: concatenate the label z = pd.layer.concat(1, [z, y]) G_h0 = pd.layer.fc(z, self.G_w0, self.G_b0) G_h0_bn = pd.layer.batch_norm(G_h0) G_h0_relu = pd.layer.relu(G_h0_bn) G_h1 = pd.layer.deconv(G_h0_relu, self.G_w1, self.G_b1) G_h1_bn = pd.layer.batch_norm(G_h1) G_h1_relu = pd.layer.relu(G_h1_bn) G_h2 = pd.layer.deconv(G_h1_relu, self.G_w2, self.G_b2) G_im = pd.layer.tanh(G_h2) return G_im @@ -111,11 +111,11 @@ class DCGAN(object): D_h0 = pd.layer.conv2d(image, w=self.D_w0, b=self.D_b0) D_h0_bn = pd.layer.batchnorm(D_h0) D_h0_relu = pd.layer.lrelu(D_h0_bn) D_h1 = pd.layer.conv2d(D_h0_relu, w=self.D_w1, b=self.D_b1) D_h1_bn = pd.layer.batchnorm(D_h1) D_h1_relu = pd.layer.lrelu(D_h1_bn) D_h2 = pd.layer.fc(D_h1_relu, w=self.D_w2, b=self.D_b2) return D_h2 ``` @@ -123,7 +123,7 @@ class DCGAN(object): ### Class member function: Build the model - Define data readers as placeholders to hold the data; - Build the generator and discriminator; -- Define two training losses for discriminator and generator, respectively. +- Define two training losses for the discriminator and generator, respectively.
If we have an execution dependency engine to back-trace all tensors, the module for building our GAN model will look like this: ```python class DCGAN(object): @@ -133,7 +133,7 @@ class DCGAN(object): self.images = pd.data(pd.float32, [self.batch_size, self.im_size, self.im_size]) self.faked_images = pd.data(pd.float32, [self.batch_size, self.im_size, self.im_size]) self.z = pd.data(pd.float32, [None, self.z_size]) # step 1: generate images by generator, classify real/fake images with discriminator if self.y_dim: # if conditional GAN, includes label self.G = self.generator(self.z, self.y) self.D_t = self.discriminator(self.images, self.y) @@ -147,12 +147,12 @@ class DCGAN(object): # generate fake images self.sampled = self.sampler(self.z) self.D_f = self.discriminator(self.G) # classify the generated fake images # step 2: define the two losses self.d_loss_real = pd.reduce_mean(pd.cross_entropy(self.D_t, np.ones(self.batch_size))) self.d_loss_fake = pd.reduce_mean(pd.cross_entropy(self.D_f, np.zeros(self.batch_size))) self.d_loss = self.d_loss_real + self.d_loss_fake self.g_loss = pd.reduce_mean(pd.cross_entropy(self.D_f, np.ones(self.batch_size))) ``` @@ -176,7 +176,7 @@ class DCGAN(object): self.G = self.generator(self.z) self.D_g = self.discriminator(self.G, self.y) self.g_loss = pd.reduce_mean(pd.cross_entropy(self.D_g, np.ones(self.batch_size))) with pd.default_block().d_block(): if self.y_dim: # if conditional GAN, includes label self.D_t = self.discriminator(self.images, self.y) @@ -217,7 +217,7 @@ if __name__ == "__main__": # load mnist data data_X, data_y = self.load_mnist() # Two subgraphs required!!! with pd.block().d_block(): d_optim = pd.train.Adam(lr = .001, beta= .1) @@ -228,7 +228,7 @@ if __name__ == "__main__": # executor sess = pd.executor() # training for epoch in xrange(10000): for batch_id in range(N / batch_size): @@ -239,7 +239,7 @@ if __name__ == "__main__": batch_z = np.random.uniform(-1., 1., [batch_size, z_dim]) if batch_id % 2 == 0: - sess.run(d_step, + sess.run(d_step, feed_dict = {dcgan.images: batch_im, dcgan.y: batch_label, dcgan.z: batch_z})

diff --git a/doc/fluid/dev/releasing_process.md b/doc/fluid/dev/releasing_process.md index 0810765b85f73d9dba876e66fb43bb1ad476d6d2..d459b54e09a782416ce218eba42e69de8378ef2f 100644 --- a/doc/fluid/dev/releasing_process.md +++ b/doc/fluid/dev/releasing_process.md @@ -37,7 +37,7 @@ Each new PaddlePaddle release follows the process below: The three generated binaries can be found in the "Artifacts" dropdown on this page, corresponding to the C-API, `cp27m`, and `cp27mu` versions. Then upload them with the `twine` tool as described above.

[figure: CI build pipeline for wheel packages (ci_build_whl.png); image reference updated in this diff]

* Note: the CI environment uses the Docker images from https://github.com/PaddlePaddle/buildtools as the build environment to support more Linux distributions; if you need to build manually, you can also use these images. They can also be downloaded from https://hub.docker.com/r/paddlepaddle/paddle_manylinux_devel/tags/ .

diff --git a/doc/fluid/howto/performance/profiler.md b/doc/fluid/howto/performance/profiler.md index b20b5efdc1f1f10ce7cec835adcc6fb374ed4e20..fe05534be7c473fcdbb15f59b05102980b940896 100644 --- a/doc/fluid/howto/performance/profiler.md +++ b/doc/fluid/howto/performance/profiler.md @@ -23,7 +23,7 @@ But how to record the time for the mixed C++ and CUDA program? There many C++ A The overall flow is shown as the following figure.
[figure: profiler overall flow (profiler.png); image reference updated in this diff]
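To illustrate how the paired push/pop markers described next turn into timings, here is a small Python sketch of the record-then-match scheme; it models the design conceptually and is not the actual C++ implementation.

```python
import time

events = []   # one list per thread in the real design

def push_range(name):
    events.append(('push', name, time.perf_counter()))

def pop_range(name):
    events.append(('pop', name, time.perf_counter()))

def parse(events):
    """Match each pop with the nearest unmatched push, yielding nested ranges."""
    stack, spans = [], []
    for kind, name, ts in events:
        if kind == 'push':
            stack.append((name, ts))
        else:
            start_name, start_ts = stack.pop()
            spans.append((start_name, (ts - start_ts) * 1000.0))  # milliseconds
    return spans

push_range('forward')
push_range('fc_op')
time.sleep(0.01)
pop_range('fc_op')
pop_range('forward')
print(parse(events))   # [('fc_op', ≈10 ms), ('forward', ≈10 ms)]
```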
### Event @@ -36,10 +36,10 @@ enum EventKind { kPopRange}; ``` - kMark: only a marker without time range. -- kPushRange: mark the starting event for time range. +- kPushRange: mark the starting event for time range. - kPopRange: mark the ending event for time range. -For the CPU code, the events only need to record the current time. For the CUDA code, the [event management functions of CUDA](http://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__EVENT.html#group__CUDART__EVENT) are used. For many pieces of code, an event lists are used to record each piece. +For the CPU code, the events only need to record the current time. For the CUDA code, the [event management functions of CUDA](http://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__EVENT.html#group__CUDART__EVENT) are used. For many pieces of code, an event lists are used to record each piece. ```c++ class Event { @@ -66,11 +66,11 @@ struct EventList { }; ``` -As mentioned above, there is no need to record the timeline when disabling the profiler. So there is a global state to enable or disable the profiler. +As mentioned above, there is no need to record the timeline when disabling the profiler. So there is a global state to enable or disable the profiler. ```c++ enum ProfilerState { - kDisabled, + kDisabled, kCPU, kCUDA }; diff --git a/doc/fluid/images/2_level_rnn.dot b/doc/fluid/images/2_level_rnn.dot new file mode 100644 index 0000000000000000000000000000000000000000..5d77865061ca7bbbfcf254dd938f09aef5553505 --- /dev/null +++ b/doc/fluid/images/2_level_rnn.dot @@ -0,0 +1,56 @@ +digraph G { + + rnn [label="1st level RNN" shape=box] + + subgraph cluster0 { + label = "time step 0" + + sent0 [label="sentence"] + sent1 [label="sentence"] + + rnn1 [label="2nd level RNN" shape=box] + + sent0 -> rnn1 + sent1 -> rnn1 + } + + subgraph cluster1 { + label = "time step 1" + + sent2 [label="sentence"] + sent3 [label="sentence"] + + rnn2 [label="2nd level RNN" shape=box] + + sent2 -> rnn2 + sent3 -> rnn2 + } + + subgraph cluster2 { + label = "time step 2" + + sent4 [label="sentence"] + sent5 [label="sentence"] + + rnn3 [label="2nd level RNN" shape=box] + + sent4 -> rnn3 + sent5 -> rnn3 + } + + + para0 [label="paragraph info 0"] + para1 [label="paragraph info 1"] + para2 [label="paragraph info 2"] + + rnn1 -> para0 + rnn2 -> para1 + rnn3 -> para2 + + para0 -> rnn + para1 -> rnn + para2 -> rnn + + chapter [label="chapter info"] + rnn -> chapter +} diff --git a/doc/fluid/images/2_level_rnn.png b/doc/fluid/images/2_level_rnn.png new file mode 100644 index 0000000000000000000000000000000000000000..0537a75beb175c0c284717421f7aa908da2a5038 Binary files /dev/null and b/doc/fluid/images/2_level_rnn.png differ diff --git a/doc/fluid/images/LOD-and-shape-changes-during-decoding.jpg b/doc/fluid/images/LOD-and-shape-changes-during-decoding.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8b0d90f7b9d8184b314b0ee4e521f53eb5f1b455 Binary files /dev/null and b/doc/fluid/images/LOD-and-shape-changes-during-decoding.jpg differ diff --git a/doc/fluid/images/asgd.gif b/doc/fluid/images/asgd.gif new file mode 100644 index 0000000000000000000000000000000000000000..4a0da7bf6df9326a2aab1638b77c5455c18b8c4e Binary files /dev/null and b/doc/fluid/images/asgd.gif differ diff --git a/doc/fluid/images/batch_norm_fork.dot b/doc/fluid/images/batch_norm_fork.dot new file mode 100644 index 0000000000000000000000000000000000000000..4bc47713cba2cb23f1b34fffe6426ef10ac3a9df --- /dev/null +++ b/doc/fluid/images/batch_norm_fork.dot @@ -0,0 +1,25 @@ 
+digraph ImageBatchNormForkGragh { + subgraph cluster_before { + Prev [label="...", shape=plaintext]; + Rnn [label="rnn_op", shape=box]; + BatchNorm [label="batch_norm_op", shape=box]; + Fc [label="fc_op", shape=box]; + After [label="...", shape=plaintext]; + Prev -> Rnn -> BatchNorm -> Fc -> After; + label="original"; + } + + subgraph cluster_after { + Prev2 [label="...", shape=plaintext]; + Rnn2 [label="rnn_op", shape=box]; + BatchNorm2_1 [label="train_batch_norm_op", shape=box]; + BatchNorm2_2 [label="infer_batch_norm_op", shape=box]; + Fc2_1 [label="fc_op", shape=box]; + Fc2_2 [label="fc_op", shape=box]; + After2_1 [label="...", shape=plaintext]; + After2_2 [label="...", shape=plaintext]; + Prev2 -> Rnn2 -> BatchNorm2_1 -> Fc2_1 -> After2_1; + Rnn2 -> BatchNorm2_2 ->Fc2_2 ->After2_2 + label="forked"; + } +} diff --git a/doc/fluid/images/batch_norm_fork.png b/doc/fluid/images/batch_norm_fork.png new file mode 100644 index 0000000000000000000000000000000000000000..aded62bce5bc268b7a3ef4dc96c89fe21d6ea955 Binary files /dev/null and b/doc/fluid/images/batch_norm_fork.png differ diff --git a/doc/fluid/images/batch_norm_op_kernel.png b/doc/fluid/images/batch_norm_op_kernel.png new file mode 100644 index 0000000000000000000000000000000000000000..a99ce81ff3bf42880ebbd6a1297de3bf038e09b2 Binary files /dev/null and b/doc/fluid/images/batch_norm_op_kernel.png differ diff --git a/doc/fluid/images/beam_search.png b/doc/fluid/images/beam_search.png new file mode 100644 index 0000000000000000000000000000000000000000..7f7e35f34223162d0f7f0ed97375909c43b830ae Binary files /dev/null and b/doc/fluid/images/beam_search.png differ diff --git a/doc/fluid/images/ci_build_whl.png b/doc/fluid/images/ci_build_whl.png new file mode 100644 index 0000000000000000000000000000000000000000..232762b82a9ae3e979a1f38a7beb715c87438f40 Binary files /dev/null and b/doc/fluid/images/ci_build_whl.png differ diff --git a/doc/fluid/images/compiler.graffle b/doc/fluid/images/compiler.graffle new file mode 100644 index 0000000000000000000000000000000000000000..8cc678fea3c820103e7ce81f7a5d625d6c1d92de Binary files /dev/null and b/doc/fluid/images/compiler.graffle differ diff --git a/doc/fluid/images/compiler.png b/doc/fluid/images/compiler.png new file mode 100644 index 0000000000000000000000000000000000000000..65d34f841afce9756def07dd8ecb9ca44e658bfe Binary files /dev/null and b/doc/fluid/images/compiler.png differ diff --git a/doc/fluid/images/control_flow_graph.png b/doc/fluid/images/control_flow_graph.png new file mode 100644 index 0000000000000000000000000000000000000000..3579998e58d07abc50bd3332128d4733a391cb3b Binary files /dev/null and b/doc/fluid/images/control_flow_graph.png differ diff --git a/doc/fluid/images/dataflow_equations.png b/doc/fluid/images/dataflow_equations.png new file mode 100644 index 0000000000000000000000000000000000000000..c10f7f69f4007952e5b0394edaa04efa1cfbb658 Binary files /dev/null and b/doc/fluid/images/dataflow_equations.png differ diff --git a/doc/fluid/images/dcgan.png b/doc/fluid/images/dcgan.png new file mode 100644 index 0000000000000000000000000000000000000000..15e8e290a111ff43900934341365cb4360d87d28 Binary files /dev/null and b/doc/fluid/images/dcgan.png differ diff --git a/doc/fluid/images/deep_learning.png b/doc/fluid/images/deep_learning.png new file mode 100644 index 0000000000000000000000000000000000000000..026becc4d94e01e407dacb2a5314a0e5723334ff Binary files /dev/null and b/doc/fluid/images/deep_learning.png differ diff --git a/doc/fluid/images/dist-graph.graffle 
b/doc/fluid/images/dist-graph.graffle new file mode 100644 index 0000000000000000000000000000000000000000..941399c6ced8d5f65b6c595522b770c88259df4b Binary files /dev/null and b/doc/fluid/images/dist-graph.graffle differ diff --git a/doc/fluid/images/dist-graph.png b/doc/fluid/images/dist-graph.png new file mode 100644 index 0000000000000000000000000000000000000000..3546b09f1c2ee3e4f60f519d5e47f823f08051a7 Binary files /dev/null and b/doc/fluid/images/dist-graph.png differ diff --git a/doc/fluid/images/distributed_architecture.graffle b/doc/fluid/images/distributed_architecture.graffle new file mode 100644 index 0000000000000000000000000000000000000000..d1b60141342232e06227c2d430ebc60ec349a907 Binary files /dev/null and b/doc/fluid/images/distributed_architecture.graffle differ diff --git a/doc/fluid/images/distributed_architecture.png b/doc/fluid/images/distributed_architecture.png new file mode 100644 index 0000000000000000000000000000000000000000..29c7b0c0783f97c6d33b1db1ed484d6a2b9dd356 Binary files /dev/null and b/doc/fluid/images/distributed_architecture.png differ diff --git a/doc/fluid/images/ds2_network.png b/doc/fluid/images/ds2_network.png new file mode 100644 index 0000000000000000000000000000000000000000..1a5b2184d47928cc2849d5a7c8ea2d8cf5337e11 Binary files /dev/null and b/doc/fluid/images/ds2_network.png differ diff --git a/doc/fluid/images/feed_forward.png b/doc/fluid/images/feed_forward.png new file mode 100644 index 0000000000000000000000000000000000000000..d312371a04c26aa6cd196e0bd1f51becb425180b Binary files /dev/null and b/doc/fluid/images/feed_forward.png differ diff --git a/doc/fluid/images/feed_forward_regularized.png b/doc/fluid/images/feed_forward_regularized.png new file mode 100644 index 0000000000000000000000000000000000000000..677e99bfd9f8e72ed9fe4b27127af2ced202f447 Binary files /dev/null and b/doc/fluid/images/feed_forward_regularized.png differ diff --git a/doc/fluid/images/fluid-compiler.graffle b/doc/fluid/images/fluid-compiler.graffle new file mode 100644 index 0000000000000000000000000000000000000000..c933df2cb855462c52b2d25f7f9a99b95652961d Binary files /dev/null and b/doc/fluid/images/fluid-compiler.graffle differ diff --git a/doc/fluid/images/fluid-compiler.png b/doc/fluid/images/fluid-compiler.png new file mode 100644 index 0000000000000000000000000000000000000000..1b0ffed2039c91a3a00bbb719da08c91c3acf7bb Binary files /dev/null and b/doc/fluid/images/fluid-compiler.png differ diff --git a/doc/fluid/images/graph_construction_example.bash b/doc/fluid/images/graph_construction_example.bash new file mode 100755 index 0000000000000000000000000000000000000000..35e6997abd17588e17a82d448918fc1b3bd7220e --- /dev/null +++ b/doc/fluid/images/graph_construction_example.bash @@ -0,0 +1,11 @@ +cat ./graph_construction_example.dot | \ + sed 's/color=red/color=red, style=invis/g' | \ + sed 's/color=green/color=green, style=invis/g' | \ + dot -Tpng > graph_construction_example_forward_only.png + +cat ./graph_construction_example.dot | \ + sed 's/color=green/color=green, style=invis/g' | \ + dot -Tpng > graph_construction_example_forward_backward.png + +cat ./graph_construction_example.dot | \ + dot -Tpng > graph_construction_example_all.png diff --git a/doc/fluid/images/graph_construction_example.dot b/doc/fluid/images/graph_construction_example.dot new file mode 100644 index 0000000000000000000000000000000000000000..e115f9844bae6ad24f638c8ed4749cea8aff06a9 --- /dev/null +++ b/doc/fluid/images/graph_construction_example.dot @@ -0,0 +1,68 @@ +digraph 
ImageClassificationGraph { + ///////// The forward part ///////// + FeedX [label="Feed", color=blue, shape=box]; + FeedY [label="Feed", color=blue, shape=box]; + InitW [label="Init", color=blue, shape=diamond]; + Initb [label="Init", color=blue, shape=diamond]; + FC [label="FC", color=blue, shape=box]; + MSE [label="MSE", color=blue, shape=box]; + + x [label="x", color=blue, shape=oval]; + l [label="l", color=blue, shape=oval]; + y [label="y", color=blue, shape=oval]; + W [label="W", color=blue, shape=doublecircle]; + b [label="b", color=blue, shape=doublecircle]; + cost [label="cost", color=blue, shape=oval]; + + FeedX -> x -> FC -> y -> MSE -> cost [color=blue]; + FeedY -> l [color=blue]; + InitW -> W [color=blue]; + Initb -> b [color=blue]; + W -> FC [color=blue]; + b -> FC [color=blue]; + l -> MSE [color=blue]; + + ////////// The backward part ///////// + MSE_Grad [label="MSE_grad", color=red, shape=box]; + FC_Grad [label="FC_grad", color=red, shape=box]; + + d_cost [label="d cost", color=red, shape=oval]; + d_y [label="d y", color=red, shape=oval]; + d_b [label="d b", color=red, shape=oval]; + d_W [label="d W", color=red, shape=oval]; + + cost -> MSE_Grad [color=red]; + d_cost -> MSE_Grad [color=red]; + l -> MSE_Grad [color=red]; + y -> MSE_Grad -> d_y [color=red]; + + x -> FC_Grad [color=red]; + y -> FC_Grad [color=red]; + d_y -> FC_Grad [color=red]; + W -> FC_Grad -> d_W [color=red]; + b -> FC_Grad -> d_b [color=red]; + + ////////// The optimizaiton part ////////// + + OPT_W [label="SGD", color=green, shape=box]; + OPT_b [label="SGD", color=green, shape=box]; + + W -> OPT_W [color=green]; + b -> OPT_b [color=green]; + d_W -> OPT_W -> W [color=green]; + d_b -> OPT_b -> b [color=green]; + + ////////// Groupings ////////// + + subgraph clusterMSE { + style=invis; + MSE; + MSE_Grad; + } + + subgraph clusterFC { + style=invis; + FC; + FC_Grad; + } +} diff --git a/doc/fluid/images/graph_construction_example_all.png b/doc/fluid/images/graph_construction_example_all.png new file mode 100644 index 0000000000000000000000000000000000000000..261611a5721f9aa97874f7e6d897fe48cf667db2 Binary files /dev/null and b/doc/fluid/images/graph_construction_example_all.png differ diff --git a/doc/fluid/images/graph_construction_example_forward_backward.png b/doc/fluid/images/graph_construction_example_forward_backward.png new file mode 100644 index 0000000000000000000000000000000000000000..4c69687f4a6a181138f3df72ce5e8aa48487b5be Binary files /dev/null and b/doc/fluid/images/graph_construction_example_forward_backward.png differ diff --git a/doc/fluid/images/graph_construction_example_forward_only.png b/doc/fluid/images/graph_construction_example_forward_only.png new file mode 100644 index 0000000000000000000000000000000000000000..e668c16e0cac73acb4e5dc2b1827557ae77126b4 Binary files /dev/null and b/doc/fluid/images/graph_construction_example_forward_only.png differ diff --git a/doc/fluid/images/l1_regularization.png b/doc/fluid/images/l1_regularization.png new file mode 100644 index 0000000000000000000000000000000000000000..e1b9c7a44f94dc027598a98da93ddb8133190972 Binary files /dev/null and b/doc/fluid/images/l1_regularization.png differ diff --git a/doc/fluid/images/l2_regularization.png b/doc/fluid/images/l2_regularization.png new file mode 100644 index 0000000000000000000000000000000000000000..d5c2fcbc2ccae75ad083162e5a2dceb0210be298 Binary files /dev/null and b/doc/fluid/images/l2_regularization.png differ diff --git a/doc/fluid/images/local-graph.graffle b/doc/fluid/images/local-graph.graffle new 
file mode 100644 index 0000000000000000000000000000000000000000..19e509bd9af3c1e9a3f5e0f16ddd281457a339c5 Binary files /dev/null and b/doc/fluid/images/local-graph.graffle differ diff --git a/doc/fluid/images/local-graph.png b/doc/fluid/images/local-graph.png new file mode 100644 index 0000000000000000000000000000000000000000..ada51200f793a9bb18911e7d63cfdb3244b967d7 Binary files /dev/null and b/doc/fluid/images/local-graph.png differ diff --git a/doc/fluid/images/local_architecture.graffle b/doc/fluid/images/local_architecture.graffle new file mode 100644 index 0000000000000000000000000000000000000000..49fcc663ebe3824aa234e3a67aadf285cb417877 Binary files /dev/null and b/doc/fluid/images/local_architecture.graffle differ diff --git a/doc/fluid/images/local_architecture.png b/doc/fluid/images/local_architecture.png new file mode 100644 index 0000000000000000000000000000000000000000..14adc9fd72b855bb9f74fbf2c84ac9ec0cf2b122 Binary files /dev/null and b/doc/fluid/images/local_architecture.png differ diff --git a/doc/fluid/images/lookup_table.png b/doc/fluid/images/lookup_table.png new file mode 100644 index 0000000000000000000000000000000000000000..72dfe3547f731d0d090338afb206b0549dff472e Binary files /dev/null and b/doc/fluid/images/lookup_table.png differ diff --git a/doc/fluid/images/lookup_table_training.png b/doc/fluid/images/lookup_table_training.png new file mode 100644 index 0000000000000000000000000000000000000000..cc7cc4aeb3b885850fe2f70f19fb84d5873bed1e Binary files /dev/null and b/doc/fluid/images/lookup_table_training.png differ diff --git a/doc/fluid/images/loss_equation.png b/doc/fluid/images/loss_equation.png new file mode 100644 index 0000000000000000000000000000000000000000..14212ec8d36c803de96bde8a9a4b5591bd20434e Binary files /dev/null and b/doc/fluid/images/loss_equation.png differ diff --git a/doc/fluid/images/multi-threads.graffle b/doc/fluid/images/multi-threads.graffle new file mode 100644 index 0000000000000000000000000000000000000000..e71173715fff92a0a933d0c7d83599ba948552c6 Binary files /dev/null and b/doc/fluid/images/multi-threads.graffle differ diff --git a/doc/fluid/images/multi-threads@3x.png b/doc/fluid/images/multi-threads@3x.png new file mode 100644 index 0000000000000000000000000000000000000000..e40a869987dbbf5019d4cb03c1dab55b74d6c9f9 Binary files /dev/null and b/doc/fluid/images/multi-threads@3x.png differ diff --git a/doc/fluid/images/multigpu_allreduce.graffle b/doc/fluid/images/multigpu_allreduce.graffle new file mode 100644 index 0000000000000000000000000000000000000000..cb5bc420ceafe8ba4c87694d44ee4e5e4ad06779 Binary files /dev/null and b/doc/fluid/images/multigpu_allreduce.graffle differ diff --git a/doc/fluid/images/multigpu_allreduce.png b/doc/fluid/images/multigpu_allreduce.png new file mode 100644 index 0000000000000000000000000000000000000000..87a1b3e8f6dd4a713ec9df9f0037d1da04e9178a Binary files /dev/null and b/doc/fluid/images/multigpu_allreduce.png differ diff --git a/doc/fluid/images/multigpu_before_convert.graffle b/doc/fluid/images/multigpu_before_convert.graffle new file mode 100644 index 0000000000000000000000000000000000000000..6c35ab1b21fb76ceae82d3693ed0d085b5bc0855 Binary files /dev/null and b/doc/fluid/images/multigpu_before_convert.graffle differ diff --git a/doc/fluid/images/multigpu_before_convert.png b/doc/fluid/images/multigpu_before_convert.png new file mode 100644 index 0000000000000000000000000000000000000000..9c8f7711165d80a2fa3911280fdee91855a401b1 Binary files /dev/null and 
b/doc/fluid/images/multigpu_before_convert.png differ diff --git a/doc/fluid/images/multiple_reader.png b/doc/fluid/images/multiple_reader.png new file mode 100644 index 0000000000000000000000000000000000000000..b22126b31db4982c13fc3a0827805e6aaf955046 Binary files /dev/null and b/doc/fluid/images/multiple_reader.png differ diff --git a/doc/fluid/images/paddle-compile.graffle b/doc/fluid/images/paddle-compile.graffle new file mode 100644 index 0000000000000000000000000000000000000000..a6348cc3dbcaca923c6e794681b2edb85cb9f8f6 Binary files /dev/null and b/doc/fluid/images/paddle-compile.graffle differ diff --git a/doc/fluid/images/paddle-compile.png b/doc/fluid/images/paddle-compile.png new file mode 100644 index 0000000000000000000000000000000000000000..e0f13d551ac41afaec627a57dea79356464bf0bf Binary files /dev/null and b/doc/fluid/images/paddle-compile.png differ diff --git a/doc/fluid/images/pprof_1.png b/doc/fluid/images/pprof_1.png new file mode 100644 index 0000000000000000000000000000000000000000..8e9edbf377672d0ef40f2fc7bd39e746923550cb Binary files /dev/null and b/doc/fluid/images/pprof_1.png differ diff --git a/doc/fluid/images/pprof_2.png b/doc/fluid/images/pprof_2.png new file mode 100644 index 0000000000000000000000000000000000000000..172ba20399ba974d27f4c072425277b69b02520b Binary files /dev/null and b/doc/fluid/images/pprof_2.png differ diff --git a/doc/fluid/images/profiler.png b/doc/fluid/images/profiler.png new file mode 100644 index 0000000000000000000000000000000000000000..d57b71ca88aaba5d05584a6219d84214e285a1e1 Binary files /dev/null and b/doc/fluid/images/profiler.png differ diff --git a/doc/fluid/images/readers.png b/doc/fluid/images/readers.png new file mode 100644 index 0000000000000000000000000000000000000000..fd59168ce16c9e2a0ef45303c28c997cfd7740be Binary files /dev/null and b/doc/fluid/images/readers.png differ diff --git a/doc/fluid/images/remote_executor.graffle b/doc/fluid/images/remote_executor.graffle new file mode 100644 index 0000000000000000000000000000000000000000..41b2067311694b56d211a4f32d1b76884eeffd2d Binary files /dev/null and b/doc/fluid/images/remote_executor.graffle differ diff --git a/doc/fluid/images/remote_executor.png b/doc/fluid/images/remote_executor.png new file mode 100644 index 0000000000000000000000000000000000000000..744e2fb2e0f1bbe058e991ba7b2a09000965ee79 Binary files /dev/null and b/doc/fluid/images/remote_executor.png differ diff --git a/doc/fluid/images/rnn.dot b/doc/fluid/images/rnn.dot new file mode 100644 index 0000000000000000000000000000000000000000..c1141cd9c981bb3cbf50d8bf7a6ed210280d79a5 --- /dev/null +++ b/doc/fluid/images/rnn.dot @@ -0,0 +1,87 @@ +digraph G { + label = "simple RNN implementation" + + ranksep=2; + + //graph [nodesep=1, ranksep=1]; + + node[nodesep=1] + + subgraph cluster0 { + label = "global scope" + rankdir = TB + W + boot_memory + input + output + } + + subgraph cluster1 { + label = "step-scope 0" + rankdir = TB + memory0[label="memory"] + prememory0[label="pre-memory"] + step_input0[label="step input"] + step_output0[label="step output"] + } + + subgraph cluster2 { + label = "step-scope 1" + rankdir = TB + memory1[label="memory"] + prememory1[label="pre-memory"] + step_input1[label="step input"] + step_output1[label="step output"] + } + + subgraph cluster3 { + label = "step-scope 2" + rankdir = TB + memory2[label="memory"] + prememory2[label="pre-memory"] + step_input2[label="step input"] + step_output2[label="step output"] + } + + stepnet [shape=box] + stepnet0 [shape=box, style=dashed] + stepnet1 
[shape=box, style=dashed] + stepnet2 [shape=box, style=dashed] + + + edge[color=blue] + boot_memory -> prememory0 [label="init" color="blue"] + memory0 -> prememory1 [label="copy/reference" color="blue"] + memory1 -> prememory2 [label="copy/reference" color="blue"] + + edge[color=black] + W -> stepnet0[constraint=false, style=dashed] + W -> stepnet1[constraint=false, style=dashed] + W -> stepnet2[constraint=false, style=dashed] + + memory0 -> stepnet0[style=dashed] + prememory0 -> stepnet0 -> step_output0[style=dashed] + + memory1 -> stepnet1[style=dashed] + prememory1 -> stepnet1 -> step_output1[style=dashed] + + memory2 -> stepnet2[style=dashed] + prememory2 -> stepnet2 -> step_output2[style=dashed] + + input -> step_input0 + input -> step_input1 + input -> step_input2 + + step_input0 -> stepnet0 [style=dashed] + step_input1 -> stepnet1[style=dashed] + step_input2 -> stepnet2[style=dashed] + + step_output0 -> output + step_output1 -> output + step_output2 -> output + + stepnet0 -> stepnet[style=dashed] + stepnet1 -> stepnet[style=dashed] + stepnet2 -> stepnet[style=dashed] + +} diff --git a/doc/fluid/images/rnn.jpg b/doc/fluid/images/rnn.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9867e404cf959df0dce6ded5222b466c788fb840 Binary files /dev/null and b/doc/fluid/images/rnn.jpg differ diff --git a/doc/fluid/images/rnn.png b/doc/fluid/images/rnn.png new file mode 100644 index 0000000000000000000000000000000000000000..e139e373fe8396782044cfd936fdde624f8c66fe Binary files /dev/null and b/doc/fluid/images/rnn.png differ diff --git a/doc/fluid/images/rnn_2level_data.dot b/doc/fluid/images/rnn_2level_data.dot new file mode 100644 index 0000000000000000000000000000000000000000..1d85ae2617a915ad0ad8288d848b607cc37ad297 --- /dev/null +++ b/doc/fluid/images/rnn_2level_data.dot @@ -0,0 +1,75 @@ +digraph G { + chapter [label="chapter"] + + subgraph cluster0 { + label = "paragraph 0" + + top_rnn0[label="top rnn step 0" shape=box] + + p0 [label="paragraph 0"] + p1 [label="paragraph 1"] + } + + subgraph cluster1{ + label = "paragraph 1" + + top_rnn1[label="top rnn step 1" shape=box] + + p2 [label="paragraph 0"] + p3 [label="paragraph 1"] + } + + subgraph cluster_p0 { + label = "sentence 0" + + low_rnn0 [label="low rnn step 0" shape=box] + s00 [label="sentence 0"] + s01 [label="sentence 1"] + + low_rnn0 -> s00 + low_rnn0 -> s01 + } + + subgraph cluster_p1 { + label = "sentence 1" + low_rnn1 [label="low rnn step 1" shape=box] + s10 [label="sentence 0"] + s11 [label="sentence 1"] + low_rnn1 -> s10 + low_rnn1 -> s11 + } + + subgraph cluster_p2 { + label = "sentence 1" + low_rnn2 [label="low rnn step 0" shape=box] + s20 [label="sentence 0"] + s21 [label="sentence 1"] + low_rnn2 -> s20 + low_rnn2 -> s21 + } + + subgraph cluster_p3 { + label = "sentence 1" + low_rnn3 [label="low rnn step 1" shape=box] + s30 [label="sentence 0"] + s31 [label="sentence 1"] + low_rnn3 -> s30 + low_rnn3 -> s31 + } + + + chapter -> top_rnn0 + chapter -> top_rnn1 + + top_rnn0 -> p0 + top_rnn0 -> p1 + top_rnn1 -> p2 + top_rnn1 -> p3 + + + p0 -> low_rnn0 + p1 -> low_rnn1 + p2 -> low_rnn2 + p3 -> low_rnn3 + +} diff --git a/doc/fluid/images/rnn_2level_data.png b/doc/fluid/images/rnn_2level_data.png new file mode 100644 index 0000000000000000000000000000000000000000..4be81b2430717a6a506342a09fc26899568574c6 Binary files /dev/null and b/doc/fluid/images/rnn_2level_data.png differ diff --git a/doc/fluid/images/single-thread@3x.png b/doc/fluid/images/single-thread@3x.png new file mode 100644 index 
0000000000000000000000000000000000000000..4083aebfdd45af5fbac25fa2c4176bc08c3cb44a Binary files /dev/null and b/doc/fluid/images/single-thread@3x.png differ diff --git a/doc/fluid/images/sparse_update.graffle b/doc/fluid/images/sparse_update.graffle new file mode 100644 index 0000000000000000000000000000000000000000..08d689a58f83698d8c1158ee3990ed8abf3a7a9a Binary files /dev/null and b/doc/fluid/images/sparse_update.graffle differ diff --git a/doc/fluid/images/sparse_update.png b/doc/fluid/images/sparse_update.png new file mode 100644 index 0000000000000000000000000000000000000000..8c872e6ac479f7d1b818a4a207956c43155d0ad7 Binary files /dev/null and b/doc/fluid/images/sparse_update.png differ diff --git a/doc/fluid/images/test.dot b/doc/fluid/images/test.dot new file mode 100644 index 0000000000000000000000000000000000000000..62c69b8fc8010a26a54a6ee8ef1488aad94d747a --- /dev/null +++ b/doc/fluid/images/test.dot @@ -0,0 +1,35 @@ + +digraph Test { + z -> generator -> G_img; + G_img -> discriminator -> D_f -> d_loss_f; + label0 -> d_loss_f -> d_loss; + + img -> discriminator -> D_t -> d_loss_t; + label1 -> d_loss_t -> d_loss; + + d_loss -> d_loss_t[color=red, style=dashed]; + d_loss -> d_loss_f[color=red, style=dashed]; + d_loss_t -> D_t[color=red, style=dashed]; + d_loss_f -> D_f[color=red, style=dashed]; + D_t -> discriminator[color=red, style=dashed]; + D_f -> discriminator[color=red, style=dashed]; + + D_f -> g_loss; + label2 -> g_loss; + + g_loss -> D_f[color=green, style=dashed]; + D_f -> discriminator[color=green, style=dashed]; + discriminator -> G_img[color=green, style=dashed]; + G_img -> generator[color=green, style=dashed]; + + discriminator [color=red, shape=box]; + generator [color=green, shape=box]; + z [shape=diamond]; + img [shape=diamond]; + label0 [shape=diamond]; + label1 [shape=diamond]; + label2 [shape=diamond]; + + d_loss [color=red]; + g_loss [color=green]; +} diff --git a/doc/fluid/images/test.dot.png b/doc/fluid/images/test.dot.png new file mode 100644 index 0000000000000000000000000000000000000000..4e121a40b9f7b2232d7cdda315bad15926446f55 Binary files /dev/null and b/doc/fluid/images/test.dot.png differ diff --git a/doc/fluid/images/theta_star.gif b/doc/fluid/images/theta_star.gif new file mode 100644 index 0000000000000000000000000000000000000000..dd24d33e124396be3fc410c9b12f33148f64efe2 Binary files /dev/null and b/doc/fluid/images/theta_star.gif differ diff --git a/doc/fluid/images/timeline.jpeg b/doc/fluid/images/timeline.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..38ec3f80c982857531f30a8bb0fa26ea5bf05385 Binary files /dev/null and b/doc/fluid/images/timeline.jpeg differ diff --git a/doc/fluid/images/tracing.jpeg b/doc/fluid/images/tracing.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..3a49fc4f8a401a9463b0157e2f38c164ca02dcc5 Binary files /dev/null and b/doc/fluid/images/tracing.jpeg differ