From babc05d17d071da5200c7e00808ef411ff37d111 Mon Sep 17 00:00:00 2001
From: "Wang,Jeff"
Date: Wed, 6 Jun 2018 12:04:42 -0700
Subject: [PATCH] Rephrase the sentences to improve the readability

---
 02.recognize_digits/README.cn.md  | 2 +-
 02.recognize_digits/README.md     | 4 ++--
 02.recognize_digits/index.cn.html | 2 +-
 02.recognize_digits/index.html    | 4 ++--
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/02.recognize_digits/README.cn.md b/02.recognize_digits/README.cn.md
index b7d6ab7..110ee12 100644
--- a/02.recognize_digits/README.cn.md
+++ b/02.recognize_digits/README.cn.md
@@ -139,7 +139,7 @@ PaddlePaddle在API中提供了自动加载[MNIST](http://yann.lecun.com/exdb/mni
 1. `train_program`:指定如何从 `inference_program` 和`标签值`中获取 `loss` 的函数。
 这是指定损失计算的地方。

-1. `optimizer_func`: 配置如何最小化损失。PaddlePaddle 支持最主要的优化方法。
+1. `optimizer_func`: 指定优化器配置的函数。优化器负责最小化损失并驱动训练。PaddlePaddle 支持多种不同的优化器。

 1. `Trainer`:PaddlePaddle Trainer 管理由 `train_program` 和 `optimizer` 指定的训练过程。
 通过 `event_handler` 回调函数,用户可以监控培训的进展。

diff --git a/02.recognize_digits/README.md b/02.recognize_digits/README.md
index 32fcfb3..7c6792f 100644
--- a/02.recognize_digits/README.md
+++ b/02.recognize_digits/README.md
@@ -146,7 +146,7 @@ Here are the quick overview on the major fluid API complements.
 This is where you specify the network flow.
 1. `train_program`: A function that specify how to get avg_cost from `inference_program` and labels.
 This is where you specify the loss calculations.
-1. `optimizer_func`: Configure how to minimize the loss. Paddle supports most major optimization methods.
+1. `optimizer_func`: A function that specifies the configuration of the optimizer. The optimizer is responsible for minimizing the loss and driving the training. Paddle supports many different optimizers.
 1. `Trainer`: Fluid trainer manages the training process specified by the `train_program` and `optimizer`.
 Users can monitor the training progress through the `event_handler` callback function.
 1. `Inferencer`: Fluid inferencer loads the `inference_program` and the parameters trained by the Trainer.
@@ -247,7 +247,7 @@ def train_program():

 #### Optimizer Function Configuration

-In the following `Adam` optimizer, `learning_rate` means the speed at which the network training converges.
+In the following `Adam` optimizer, `learning_rate` specifies the learning rate in the optimization procedure.

 ```python
 def optimizer_program():

diff --git a/02.recognize_digits/index.cn.html b/02.recognize_digits/index.cn.html
index f7e9266..6a7e018 100644
--- a/02.recognize_digits/index.cn.html
+++ b/02.recognize_digits/index.cn.html
@@ -181,7 +181,7 @@ PaddlePaddle在API中提供了自动加载[MNIST](http://yann.lecun.com/exdb/mni
 1. `train_program`:指定如何从 `inference_program` 和`标签值`中获取 `loss` 的函数。
 这是指定损失计算的地方。

-1. `optimizer_func`: 配置如何最小化损失。PaddlePaddle 支持最主要的优化方法。
+1. `optimizer_func`: 指定优化器配置的函数。优化器负责最小化损失并驱动训练。PaddlePaddle 支持多种不同的优化器。

 1. `Trainer`:PaddlePaddle Trainer 管理由 `train_program` 和 `optimizer` 指定的训练过程。
 通过 `event_handler` 回调函数,用户可以监控培训的进展。

diff --git a/02.recognize_digits/index.html b/02.recognize_digits/index.html
index 026d7c0..800113c 100644
--- a/02.recognize_digits/index.html
+++ b/02.recognize_digits/index.html
@@ -188,7 +188,7 @@ Here are the quick overview on the major fluid API complements.
 This is where you specify the network flow.
 1. `train_program`: A function that specify how to get avg_cost from `inference_program` and labels.
 This is where you specify the loss calculations.
-1. `optimizer_func`: Configure how to minimize the loss. Paddle supports most major optimization methods.
+1. `optimizer_func`: A function that specifies the configuration of the optimizer. The optimizer is responsible for minimizing the loss and driving the training. Paddle supports many different optimizers.
 1. `Trainer`: Fluid trainer manages the training process specified by the `train_program` and `optimizer`.
 Users can monitor the training progress through the `event_handler` callback function.
 1. `Inferencer`: Fluid inferencer loads the `inference_program` and the parameters trained by the Trainer.
@@ -289,7 +289,7 @@ def train_program():

 #### Optimizer Function Configuration

-In the following `Adam` optimizer, `learning_rate` means the speed at which the network training converges.
+In the following `Adam` optimizer, `learning_rate` specifies the learning rate in the optimization procedure.

 ```python
 def optimizer_program():
--
GitLab
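
For readers following the rewording above: in the Fluid API this book targets, an `optimizer_func` is simply a function that builds and returns an optimizer object, which the trainer then calls. The sketch below is a minimal illustration, assuming the 2018-era `paddle.fluid` package; the `learning_rate` value of 0.001 is illustrative and is not specified by this patch (the full `optimizer_program` body is truncated in the hunk context).

```python
import paddle.fluid as fluid

def optimizer_program():
    # Build and return the optimizer that the trainer will use to
    # minimize the loss. learning_rate controls the step size Adam
    # takes on each parameter update; it does not by itself
    # determine whether training converges.
    return fluid.optimizer.Adam(learning_rate=0.001)

# Hypothetical wiring, following the book's Trainer pattern
# (train_program and the place are defined elsewhere in the chapter):
# trainer = fluid.Trainer(train_func=train_program,
#                         place=fluid.CPUPlace(),
#                         optimizer_func=optimizer_program)
```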