From deacfa9eb9c7e8cd55dd16a5b25424c7d9d04b9e Mon Sep 17 00:00:00 2001
From: fengjiayi
Date: Tue, 2 Jan 2018 01:04:32 +0800
Subject: [PATCH] fix typo

---
 doc/{ => design}/backward.md             | 29 ++++++++----------
 .../design}/images/duplicate_op.graffle  | Bin
 .../design}/images/duplicate_op.png      | Bin
 .../design}/images/duplicate_op2.graffle | Bin
 .../design}/images/duplicate_op2.png     | Bin
 5 files changed, 12 insertions(+), 17 deletions(-)
 rename doc/{ => design}/backward.md (68%)
 rename {paddle/framework => doc/design}/images/duplicate_op.graffle (100%)
 rename {paddle/framework => doc/design}/images/duplicate_op.png (100%)
 rename {paddle/framework => doc/design}/images/duplicate_op2.graffle (100%)
 rename {paddle/framework => doc/design}/images/duplicate_op2.png (100%)

diff --git a/doc/backward.md b/doc/design/backward.md
similarity index 68%
rename from doc/backward.md
rename to doc/design/backward.md
index acc95e99c4..85f45b5c74 100644
--- a/doc/backward.md
+++ b/doc/design/backward.md
@@ -2,13 +2,13 @@

 ## Motivation

-In Neural Network, most models are solved by the backpropagation algorithm(known as **BP**) at present. Technically, BP calculates the gradient of the loss function, then propagates it back through the networks following the chain rule. However, when configuring the model structure, users do not need to definate the backward part. So a mechanism is required by the framework which is able to complete the model's backward part automatically acoording to the given forward part.
+In neural networks, most models are currently solved by the backpropagation algorithm (known as **BP**). Technically, BP calculates the gradient of the loss function, then propagates it back through the network following the chain rule. However, when configuring the model structure, users do not need to define the backward part. So a mechanism is required by the framework that can complete the model's backward part automatically according to the given forward part.

-When implementing a certain `op`, the developer is also asked to implement its backward version, called `grad_op`. A `grad_op` takes gradients of its corresponding `op`'s outputs, and calculate gradients of the `op`'s inputs. During the building of a model's backward part, the framework creates each forward `op`'s `grad_op`, and then string them together in reverse order of forward part. In this way, gradients spread from the end to the beginning of the model, in other word, from the loss to parameters.
+When implementing a specific `op`, the developer is also asked to implement its backward version, called `grad_op`. A `grad_op` takes the gradients of its corresponding `op`'s outputs and calculates the gradients of the `op`'s inputs. While building a model's backward part, the framework creates each forward `op`'s `grad_op` and then strings them together in the reverse order of the forward part. In this way, gradients spread from the end to the beginning of the model, in other words, from the loss to the parameters.

 ## Challenges

-The motivation of backward building is obvious. However, to implement it correctly is not so easy. In the **Fluid** design, a deep learning model is described by `Program`, `Block`, `Op` and `Variable`. The `Block` itself can be nested. It means that the `op`s and `variable`s are scattered across different blocks rather than all be gathered in a single graph. Our backward building algorithm shall visit blocks in recursive order and be able to insert `grad_op`s and new created `variable`s into right place.
+The motivation of backward building is apparent. However, implementing it correctly is not so easy. In the **Fluid** design, a deep learning model is described by `Program`, `Block`, `Op` and `Variable`. The `Block` itself can be nested. It means that the `op`s and `variable`s are scattered across different blocks rather than all being gathered in a single graph. Our backward building algorithm shall visit blocks in recursive order and be able to insert `grad_op`s and newly created `variable`s into the right place.

 ## Usage

@@ -20,8 +20,8 @@ def append_backward(loss, parameter_list=None, no_grad_set=None):
     Append backward part to main_program

     Args:
-        loss(Variable): The variable generated by cost function.
-        parameter_list(list): Parameters that need to be updated by optimizer.
+        loss(Variable): The variable generated by the cost function.
+        parameter_list(list): Parameters that need to be updated by optimizers.
             If None, it means all parameters need to be updated.
         no_grad_set(set): Variables that have no gradients in Block 0.
@@ -33,14 +33,14 @@ def append_backward(loss, parameter_list=None, no_grad_set=None):
     """
 ```

-By invoking this API, the framework appends backward part for the program where the `loss` is. It takes three arguments. `loss` means the final loss value. It must be a scalar and is usually the output of the loss layer. It is also where the gradient generated and backpropagation starts. `parameter_list` marks all parameters needs updating. If it's `None`, all parameter will be updated by optimizers. `no_grad_set` marks variables without gradient. if all outputs of some `grad_op` are in `no_grad_set`, the `grad_op` will not be run.
+By invoking this API, the framework appends the backward part to the program where the `loss` is. It takes three arguments. `loss` means the final loss value. It must be a scalar and is usually the output of the loss layer. It is also where the gradient is generated and where backpropagation starts. `parameter_list` marks all parameters that need updating. If it is `None`, all parameters will be updated by optimizers. `no_grad_set` marks variables without gradients. If all outputs of some `grad_op` are in `no_grad_set`, the `grad_op` will not be run.

 This API will be invoked automatically before optimizer building.
-As a result, in most cases users do not need to invoke the API by themselves to append backward part.
+As a result, in most cases, users do not need to invoke the API by themselves to append the backward part.

 ## Implementation

-The implementation of backward building algorithm is in `backward.py` file. The whole algorithm can be divided to two independent parts: creating of `grad_op`s and creating of new variables.
+The implementation of the backward building algorithm is in the file `backward.py`. The whole algorithm can be divided into two independent parts: creating `grad_op`s and creating new variables.

 ### Creating `grad_op`s

@@ -92,24 +92,19 @@ The first invoking of `_append_backward_ops_()` is initiated by `append_backward

 ### Corner Cases of `grad_op` Creating

-In the previous section, we show the regular process of `grad_op` creating. However, in some corner cases, regular algorithm is not enough to get the correct result and appending handling is required. These addtional processes run after the above-mentioned algorithm and do some special adjusts on its output `grad_op`s.
+In the previous section, we showed the regular process of `grad_op` creation. However, in some corner cases, the conventional algorithm is not enough to get the correct result, and additional handling is required. These additional processes run after the algorithm mentioned above and make some special adjustments to its output `grad_op`s.

 #### Shared Variables

-If a variable is readed by more than one `op` in the forward pass, its gradient is likey to be written by more than one `grad_op`s in the following backward pass. To make the gradient result being the sum of all `grad_op`s' outputs instead of the last running one, we assign each output with a temporary variables, and then add a `sum_op` to add them up.
+If a variable is read by more than one `op` in the forward pass, its gradient is likely to be written by more than one `grad_op` in the following backward pass. To make the gradient result the sum of all `grad_op`s' outputs instead of the output of the last one to run, we assign each output a temporary variable and then add a `sum_op` to add them up.

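+A minimal, illustrative sketch of this rename-and-sum idea is given below. It is not the actual `_addup_repetitive_outputs_` implementation; the `grad_op`s are modeled as plain Python dicts, and all names in the snippet are made up for the example:
+
+```python
+from collections import defaultdict
+
+
+def addup_repetitive_outputs(grad_ops):
+    """Rename gradients written by more than one grad_op and sum them up.
+
+    Each grad_op is modeled as a dict: {"type": ..., "inputs": [...], "outputs": [...]}.
+    """
+    # Count how many grad_ops write each gradient variable.
+    write_count = defaultdict(int)
+    for op in grad_ops:
+        for name in op["outputs"]:
+            write_count[name] += 1
+
+    # Redirect every duplicated write to a temporary <name>@RENAME@<i> variable.
+    renamed = defaultdict(list)
+
+    def rename(name):
+        new_name = "%s@RENAME@%d" % (name, len(renamed[name]))
+        renamed[name].append(new_name)
+        return new_name
+
+    for op in grad_ops:
+        op["outputs"] = [
+            rename(name) if write_count[name] > 1 else name
+            for name in op["outputs"]
+        ]
+
+    # Append one sum op per duplicated gradient to accumulate its temporaries.
+    for name, tmp_names in renamed.items():
+        grad_ops.append({"type": "sum", "inputs": tmp_names, "outputs": [name]})
+    return grad_ops
+
+
+# Example: both grad_ops write w@GRAD. After the pass they write
+# w@GRAD@RENAME@0 and w@GRAD@RENAME@1, and a trailing sum op produces w@GRAD.
+ops = [{"type": "mul_grad", "inputs": [], "outputs": ["w@GRAD"]},
+       {"type": "fc_grad", "inputs": [], "outputs": ["w@GRAD"]}]
+addup_repetitive_outputs(ops)
+```
+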
-For the debug convinience, if the final gradient name is `w@GRAD`, it's corresponding temporary variables will be named as `w@GRAD@RENAME@0`, `w@GRAD@RENAME@1`...
+For debugging convenience, if the final gradient name is `w@GRAD`, its corresponding temporary variables will be named `w@GRAD@RENAME@0`, `w@GRAD@RENAME@1`, and so on.

 See function `_addup_repetitive_outputs_` in `backward.py` for implementation details.

 #### No Gradient Variables

-In our framework, variables can be marked as *no_gradient*, it means that the gradient of this variable is unnecessary and can be considered as zero in model training. Obviously, when all the outputs of some `grad_op` is marked as *no_gradient*, the `grad_op` itself can be skipped in backward pass.
+In our framework, variables can be marked as *no_gradient*, which means that the gradient of this variable is unnecessary and can be considered as zero in model training. Clearly, when all the outputs of some `grad_op` are marked as *no_gradient*, the `grad_op` itself can be skipped in the backward pass.

 But these unnecessary gradients still need to be created and initialized by something; otherwise, the following `grad_op`s that take these gradients as inputs risk using uninitialized memory. In our code, we employ `fill_zeros_like_op` to initialize them as all zeros.
diff --git a/paddle/framework/images/duplicate_op.graffle b/doc/design/images/duplicate_op.graffle
similarity index 100%
rename from paddle/framework/images/duplicate_op.graffle
rename to doc/design/images/duplicate_op.graffle
diff --git a/paddle/framework/images/duplicate_op.png b/doc/design/images/duplicate_op.png
similarity index 100%
rename from paddle/framework/images/duplicate_op.png
rename to doc/design/images/duplicate_op.png
diff --git a/paddle/framework/images/duplicate_op2.graffle b/doc/design/images/duplicate_op2.graffle
similarity index 100%
rename from paddle/framework/images/duplicate_op2.graffle
rename to doc/design/images/duplicate_op2.graffle
diff --git a/paddle/framework/images/duplicate_op2.png b/doc/design/images/duplicate_op2.png
similarity index 100%
rename from paddle/framework/images/duplicate_op2.png
rename to doc/design/images/duplicate_op2.png
--
GitLab