diff --git a/doc/fluid/design/algorithm/parameter_average.md b/doc/fluid/design/algorithm/parameter_average.md
index 70c5cdecadbf57af8200d63893a86b41a4282a54..940d37fb31dcd0c50ea6c4c42b052d7cb23a9c47 100644
--- a/doc/fluid/design/algorithm/parameter_average.md
+++ b/doc/fluid/design/algorithm/parameter_average.md
@@ -5,10 +5,10 @@ In a large scale machine learning setup where the size of the training data is h

Polyak and Juditsky (1992) showed that the test performance of the simple average of parameters obtained by Stochastic Gradient Descent (SGD) is as good as that of parameter values that are obtained by training the model over and over again, over the training dataset.

-Hence, to accelerate the speed of Stochastic Gradient Descent, Averaged Stochastic Gradient Descent (ASGD) was proposed in Polyak and Juditsky (1992). For ASGD, the running average of parameters obtained by SGD is used as the estimator. The averaging is done as follows:
+Hence, to accelerate the speed of Stochastic Gradient Descent, Averaged Stochastic Gradient Descent (ASGD) was proposed in Polyak and Juditsky (1992). For ASGD, the running average of parameters obtained by SGD is used as the estimator. The averaging is done as follows:

-
+

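As a sketch of the running average this passage describes (standard Polyak averaging; the exact formula in the original figure may differ, and the notation for the k-th SGD iterate is assumed here):

```latex
% Sketch only: standard ASGD / Polyak averaging, with \theta_k the parameter
% vector after the k-th SGD update and \bar{\theta}_t the averaged estimator.
\bar{\theta}_t = \frac{1}{t}\sum_{k=1}^{t}\theta_k
\quad\Longleftrightarrow\quad
\bar{\theta}_t = \bar{\theta}_{t-1} + \frac{1}{t}\bigl(\theta_t - \bar{\theta}_{t-1}\bigr)
```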
We propose averaging for any optimizer similar to how ASGD performs it, as mentioned above.
diff --git a/doc/fluid/design/concurrent/channel.md b/doc/fluid/design/concurrent/channel.md
index a5cf17faa8bc42a49ba35c195b91da62ea24e6eb..df67438bcc741ac521b00ee962fc13c93db21182 100644
--- a/doc/fluid/design/concurrent/channel.md
+++ b/doc/fluid/design/concurrent/channel.md
@@ -114,13 +114,13 @@ current thread under two conditions:

#### Channel Send

-
+

#### Channel Receive

-
+

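The send and receive flows above are shown only as figures in the original document; the following is a minimal sketch, assuming a buffered channel guarded by a condition variable, of the blocking behaviour the two subsections refer to. `BufferedChannel` is a hypothetical name, not PaddlePaddle's C++ channel class:

```python
import threading
from collections import deque

class BufferedChannel:
    """Toy model of a buffered channel: send blocks when full, receive blocks when empty."""

    def __init__(self, capacity):
        self._buf = deque()
        self._capacity = capacity
        self._cond = threading.Condition()

    def send(self, item):
        with self._cond:
            # Block the current thread until there is room in the buffer.
            while len(self._buf) >= self._capacity:
                self._cond.wait()
            self._buf.append(item)
            self._cond.notify_all()

    def receive(self):
        with self._cond:
            # Block the current thread until there is something to read.
            while not self._buf:
                self._cond.wait()
            item = self._buf.popleft()
            self._cond.notify_all()
            return item
```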
## Limitations and Considerations
diff --git a/doc/fluid/design/concurrent/concurrent_programming.md b/doc/fluid/design/concurrent/concurrent_programming.md
index 64602166065af28309d7a01fdeb7076a9b0a081a..1859f983e9133674e69ecd506d7683ea926b2b8f 100644
--- a/doc/fluid/design/concurrent/concurrent_programming.md
+++ b/doc/fluid/design/concurrent/concurrent_programming.md
@@ -23,21 +23,25 @@ The following table compares concepts in Fluid and Go

user-defined functions
layers
+
control-flow and built-in functions
intrinsics/operators
+
goroutines, channels
class ThreadPool
+
runtime
class Executor
+
diff --git a/doc/fluid/design/concurrent/select_op.md b/doc/fluid/design/concurrent/select_op.md
index 98dd94a2be0e6cf92d8832e380bce1e358a49ca3..4fcae57cc7932cdaebe549486e7f7cebf0bd038a 100644
--- a/doc/fluid/design/concurrent/select_op.md
+++ b/doc/fluid/design/concurrent/select_op.md
@@ -254,7 +254,7 @@ only one case will be executed.

### select_op flow

-
+

The select algorithm is inspired by golang's select routine. Please refer to
diff --git a/doc/fluid/design/dist_train/distributed_architecture.md b/doc/fluid/design/dist_train/distributed_architecture.md
index 3cd4750bcea7d8f0f759e49b9b2df4e98cd9b1f5..229cb47c17d633be6848bb35e58d33ec9b47ec3b 100644
--- a/doc/fluid/design/dist_train/distributed_architecture.md
+++ b/doc/fluid/design/dist_train/distributed_architecture.md
@@ -40,11 +40,11 @@ computation is only specified in Python code which sits outside of PaddlePaddle,

Similar to how a compiler uses an intermediate representation (IR) so that the programmer does not need to manually optimize their code for most of the cases, we can have an intermediate representation in PaddlePaddle as well. The compiler optimizes the IR as follows:

-
+

PaddlePaddle can support model parallelism by converting the IR so that the user no longer needs to manually perform the computation and operations in the Python component:

-
+

The IR for PaddlePaddle after refactoring is called a `Block`, it specifies the computation dependency graph and the variables used in the computation.

@@ -60,7 +60,7 @@ For a detailed explanation, refer to this document -

The revamped distributed training architecture can address the above discussed limitations. Below is the illustration of how it does so:

-
+

The major components are: *Python API*, *Distribute Transpiler* and *Remote Executor*.

@@ -152,7 +152,7 @@ for data in train_reader():

The `JobDesc` object describes the distributed job resource specification to run in the cluster environment.

-
+

`RemoteExecutor.run` sends the `ProgramDesc` and
[TrainingJob](https://github.com/PaddlePaddle/cloud/blob/unreleased-tpr/doc/autoscale/README.md#training-job-resource)

@@ -171,7 +171,7 @@ In the future, a more general placement algorithm should be implemented, which m

The local training architecture will be the same as the distributed training architecture, the difference is that everything runs locally, and there is just one PaddlePaddle runtime:

-
+

### Training Data
diff --git a/doc/fluid/design/dist_train/multi_cpu.md b/doc/fluid/design/dist_train/multi_cpu.md
index 586612622a654403f07ce55bf7c7f35578af542f..38222d083084ebfca3099ce96b47868c42d55101 100644
--- a/doc/fluid/design/dist_train/multi_cpu.md
+++ b/doc/fluid/design/dist_train/multi_cpu.md
@@ -8,11 +8,11 @@ Op graph to a multi-CPU Op graph, and run `ParallelDo` Op to run the graph.

## Transpiler

-
+

After converting:

-
+

## Implement
diff --git a/doc/fluid/design/dist_train/parameter_server.md b/doc/fluid/design/dist_train/parameter_server.md
index 179b5f8c299189c69167012e5ec8a2df9ab2380e..73c85da5e89eee0ac7857a0b808bc64ae673fdad 100644
--- a/doc/fluid/design/dist_train/parameter_server.md
+++ b/doc/fluid/design/dist_train/parameter_server.md
@@ -41,11 +41,11 @@ We will need these OPs: *Send*, *Recv*, *Enqueue*, *Dequeue*.

Below is an example of converting the user defined graph to the subgraphs for the trainer and the parameter server:

-
+

After converting:

-
+

1. The parameter variable W and its optimizer program are placed on the parameter server.
1. Operators are added to the program.

@@ -69,8 +69,7 @@ In Fluid, we introduce [SelectedRows](../selected_rows.md) to represent a list o
non-zero gradient data.
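A rough sketch (not Fluid's actual `SelectedRows` implementation) of how such a row-sparse gradient can be modeled, storing only the indices and values of the non-zero rows:

```python
import numpy as np

class SelectedRowsSketch:
    """Hedged sketch of a SelectedRows-style sparse gradient:
    only the rows with non-zero gradients are stored and shipped."""

    def __init__(self, height, rows, values):
        self.height = height   # number of rows in the dense parameter
        self.rows = rows       # indices of rows that actually received gradients
        self.values = values   # dense block holding just those rows

    def to_dense(self):
        dense = np.zeros((self.height, self.values.shape[1]))
        dense[self.rows] = self.values
        return dense

# Example: a 10000-row embedding table where only 3 rows received gradients.
grad = SelectedRowsSketch(height=10000,
                          rows=[12, 457, 9031],
                          values=np.random.rand(3, 64))
# Only grad.rows and grad.values need to be shipped to the optimizer operators.
```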
So when we do parameter optimization both locally and remotely, we only need to send those non-zero rows to the optimizer operators:
-
-
+

### Benefits

- Model parallelism becomes easier to implement: it is an extension to
diff --git a/doc/fluid/design/dynamic_rnn/rnn.md b/doc/fluid/design/dynamic_rnn/rnn.md
index 9a61cd788afc3234c8957ad8c740e470382aa5dd..7b61b050f640814d6949cf6847b431da53d59581 100644
--- a/doc/fluid/design/dynamic_rnn/rnn.md
+++ b/doc/fluid/design/dynamic_rnn/rnn.md
@@ -5,7 +5,7 @@ This document describes the RNN (Recurrent Neural Network) operator and how it i

## RNN Algorithm Implementation

-
+

The above diagram shows an RNN unrolled into a full network.
diff --git a/doc/fluid/design/modules/batch_norm_op.md b/doc/fluid/design/modules/batch_norm_op.md
index 211e060cc15327428900cd8eaf2fb74021701505..e451ffcc73b5de2b911e1c6de54b42a5d1d54c37 100644
--- a/doc/fluid/design/modules/batch_norm_op.md
+++ b/doc/fluid/design/modules/batch_norm_op.md
@@ -66,7 +66,7 @@ As most C++ operators do, `batch_norm_op` is defined by inputs, outputs, attribu

The following graph shows the training computational process of `batch_norm_op`:

-
+

cudnn provides APIs to finish the whole series of computation, so we can use them in our GPU kernel.

@@ -124,7 +124,7 @@ for pass_id in range(PASS_NUM):

`is_infer` is an attribute. Once an operator is created, its attributes cannot be changed. This suggests that we should maintain two `batch_norm_op`s in the model: one whose `is_infer` is `True` (we call it `infer_batch_norm_op`) and one whose `is_infer` is `False` (we call it `train_batch_norm_op`). They share all parameters and variables, but are placed in two different branches. That is to say, if a network contains a `batch_norm_op`, it will fork into two branches: one goes through `train_batch_norm_op` and the other goes through `infer_batch_norm_op`:
-
+
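A minimal, self-contained sketch of the shared-parameter fork described above, assuming plain NumPy and a toy `batch_norm` helper rather than the real operator:

```python
import numpy as np

# One shared set of parameters; the "train" branch uses mini-batch statistics
# and updates the moving averages, while the "infer" branch only reads them.
params = {
    "scale": np.ones(4), "bias": np.zeros(4),
    "moving_mean": np.zeros(4), "moving_var": np.ones(4),
}

def batch_norm(x, params, is_infer, momentum=0.9, eps=1e-5):
    if is_infer:   # infer_batch_norm_op: read-only statistics
        mean, var = params["moving_mean"], params["moving_var"]
    else:          # train_batch_norm_op: batch statistics + moving-average update
        mean, var = x.mean(axis=0), x.var(axis=0)
        params["moving_mean"] = momentum * params["moving_mean"] + (1 - momentum) * mean
        params["moving_var"] = momentum * params["moving_var"] + (1 - momentum) * var
    return params["scale"] * (x - mean) / np.sqrt(var + eps) + params["bias"]

x = np.random.randn(8, 4)
train_out = batch_norm(x, params, is_infer=False)  # training branch
infer_out = batch_norm(x, params, is_infer=True)   # inference branch, same params
```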
Just like what is shown in the above graph, the net forks before `batch_norm_op` and will never merge again. All the operators after `batch_norm_op` will be duplicated.
diff --git a/doc/fluid/design/modules/regularization.md b/doc/fluid/design/modules/regularization.md
index ffc3199a84cb156173caca0d16643ea5194922f0..8cd5ff71d193f03e1ac923724b52f28c6057d25d 100644
--- a/doc/fluid/design/modules/regularization.md
+++ b/doc/fluid/design/modules/regularization.md
@@ -6,17 +6,17 @@ A central problem in machine learning is how to design an algorithm that will pe

### Parameter Norm Penalties
Most common regularization approaches in deep learning are based on limiting the capacity of the models by adding a parameter norm penalty to the objective function `J`. This is given as follows:

-
+
The parameter `alpha` is a hyperparameter that weights the relative contribution of the norm penalty term, `omega`, relative to the standard objective function `J`. The most commonly used norm penalties are the L2 norm penalty and the L1 norm penalty. These are given as follows:

##### L2 Regularization:
-
+
##### L1 Regularization
-
+
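A reconstruction of the standard penalties the text refers to is sketched below; the exact notation in the original formulas may differ:

```latex
% Sketch of the standard norm-penalty formulas described in the text.
\tilde{J}(\theta; X, y) = J(\theta; X, y) + \alpha\,\Omega(\theta)
\qquad
\Omega_{L2}(\theta) = \tfrac{1}{2}\lVert w \rVert_2^2
\qquad
\Omega_{L1}(\theta) = \lVert w \rVert_1 = \sum_i \lvert w_i \rvert
```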
A much more detailed mathematical background of regularization can be found [here](http://www.deeplearningbook.org/contents/regularization.html).

@@ -40,11 +40,11 @@ The idea of building ops for regularization is in sync with the refactored Paddl

Below is an example of a really simple feed forward neural network.
-
+
The Python API will modify this computation graph to add regularization operators. The modified computation graph will look as follows:
-
+
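As a hedged sketch of the effect of the added regularization operators (the helper below is illustrative and is not the Fluid Python API), L2 regularization amounts to adding `alpha * w` to each regularized parameter's gradient before the optimizer step:

```python
import numpy as np

# Illustrative only: emulate what an added "L2 regularization op" contributes,
# i.e. an extra alpha * w term in each parameter's gradient.
def append_l2_regularization(param_grads, alpha=1e-4):
    """param_grads: list of (param, grad) arrays; returns grads with the penalty added."""
    return [(w, g + alpha * w) for w, g in param_grads]

w1, g1 = np.random.randn(3, 4), np.random.randn(3, 4)
updated = append_l2_regularization([(w1, g1)], alpha=1e-3)
```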
### Python API implementation for Regularization
diff --git a/doc/fluid/design/network/deep_speech_2.md b/doc/fluid/design/network/deep_speech_2.md
index d3906143d3eab2853507eaf8eae51e7777e7178b..f32a5b7e8a4d820319a666dab4c3129360e2c924 100644
--- a/doc/fluid/design/network/deep_speech_2.md
+++ b/doc/fluid/design/network/deep_speech_2.md
@@ -116,7 +116,7 @@ The classical DS2 network contains 15 layers (from bottom to top):

- **One** CTC-loss layer
-
+
Figure 1. Architecture of Deep Speech 2 Network.
@@ -208,7 +208,7 @@ TODO by Assignees

### Beam Search with CTC and LM
-
+
Figure 2. Algorithm for CTC Beam Search Decoder.
diff --git a/doc/fluid/design/network/sequence_decoder.md b/doc/fluid/design/network/sequence_decoder.md
index a56c1b5bcacabe0d511e7e081baa9f0de4719ed5..f13d30ca9fe09c9525c711436f605bb280e11000 100644
--- a/doc/fluid/design/network/sequence_decoder.md
+++ b/doc/fluid/design/network/sequence_decoder.md
@@ -199,7 +199,7 @@ Packing the `selected_generation_scores` will get a `LoDTensor`, and each tail i

## LoD and shape changes during decoding

-
+

According to the image above, the only phase that changes the LoD is beam search.
diff --git a/doc/fluid/design/others/gan_api.md b/doc/fluid/design/others/gan_api.md
index 8cc79304700e61f2ce1b6ff6290d0133bb7fdbb3..7167470088766985fa5ad31657410309330fd725 100644
--- a/doc/fluid/design/others/gan_api.md
+++ b/doc/fluid/design/others/gan_api.md
@@ -7,14 +7,14 @@ It applies several important concepts in machine learning system design, includi

In our GAN design, we wrap it as a user-friendly, easily customized Python API to design different models. We take the conditional DC-GAN (Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [https://arxiv.org/abs/1511.06434]) as an example due to its good performance on image generation.

-
+
Figure 1. The overall running logic of GAN. The black solid arrows indicate the forward pass; the green dashed arrows indicate the backward pass of generator training; the red dashed arrows indicate the backward pass of the discriminator training. The BP pass of the green (red) arrow should only update the parameters in the green (red) boxes. The diamonds indicate the data providers. d\_loss and g\_loss marked in red and green are the two targets we would like to run.

The operators, layers and functions required/optional to build a GAN demo are summarized in https://github.com/PaddlePaddle/Paddle/issues/4563.

-
+
Figure 2. Photo borrowed from the original DC-GAN paper.

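A self-contained toy sketch of the alternating updates described in the Figure 1 caption, using a 1-D NumPy GAN rather than the proposed Paddle API; it only illustrates that `d_loss` touches the discriminator parameters and `g_loss` touches the generator parameters:

```python
import numpy as np

# Toy 1-D GAN: generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
# The discriminator step updates only (w, c); the generator step updates only (a, b).
rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0          # generator parameters (green boxes)
w, c = 0.1, 0.0          # discriminator parameters (red boxes)
lr, batch = 0.05, 64

for step in range(2000):
    x_real = rng.normal(3.0, 1.0, batch)      # diamond: real-data provider
    z = rng.normal(0.0, 1.0, batch)           # diamond: noise provider
    x_fake = a * z + b

    # Discriminator update: d_loss = -[log D(real) + log(1 - D(fake))]
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    grad_w = (-(1 - d_real) * x_real + d_fake * x_fake).mean()
    grad_c = (-(1 - d_real) + d_fake).mean()
    w, c = w - lr * grad_w, c - lr * grad_c   # only discriminator params change

    # Generator update: non-saturating g_loss = -log D(G(z))
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_a = (-(1 - d_fake) * w * z).mean()
    grad_b = (-(1 - d_fake) * w).mean()
    a, b = a - lr * grad_a, b - lr * grad_b   # only generator params change

print("mean of generated samples:", b)
```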
diff --git a/doc/fluid/dev/releasing_process.md b/doc/fluid/dev/releasing_process.md
index d459b54e09a782416ce218eba42e69de8378ef2f..c5943ccd81c2ae2aaacd2676da12509db889f54a 100644
--- a/doc/fluid/dev/releasing_process.md
+++ b/doc/fluid/dev/releasing_process.md
@@ -37,7 +37,7 @@ Each time PaddlePaddle releases a new version, it follows this process:

The three generated binary files, corresponding to the CAPI, `cp27m`, and `cp27mu` builds, can be found in the "Artifacts" drop-down on that page. Then upload them with the `twine` tool as described above.

-
+

* Note: the CI environment uses the Docker images from https://github.com/PaddlePaddle/buildtools as the build environment so that more Linux distributions can be supported. If you need to build manually, you can use these images as well; they can also be downloaded from https://hub.docker.com/r/paddlepaddle/paddle_manylinux_devel/tags/.
diff --git a/doc/fluid/howto/performance/profiler.md b/doc/fluid/howto/performance/profiler.md
index fe05534be7c473fcdbb15f59b05102980b940896..ee96e7c74ce317caddb387cbb1d4998937bd5c81 100644
--- a/doc/fluid/howto/performance/profiler.md
+++ b/doc/fluid/howto/performance/profiler.md
@@ -23,7 +23,7 @@ But how to record the time for the mixed C++ and CUDA program? There are many C++ A

The overall flow is shown as the following figure.

-
+
### Event