diff --git a/02.recognize_digits/README.cn.md b/02.recognize_digits/README.cn.md
index b7d6ab7e320044f10d5bd81d3c7fafcd10a0a6c4..110ee1219903f867199ee4115583db2c7b7aca7d 100644
--- a/02.recognize_digits/README.cn.md
+++ b/02.recognize_digits/README.cn.md
@@ -139,7 +139,7 @@ PaddlePaddle在API中提供了自动加载[MNIST](http://yann.lecun.com/exdb/mni
 1. `train_program`:指定如何从 `inference_program` 和`标签值`中获取 `loss` 的函数。
 这是指定损失计算的地方。
 
-1. `optimizer_func`: 配置如何最小化损失。PaddlePaddle 支持最主要的优化方法。
+1. `optimizer_func`: 指定优化器配置的函数。优化器负责最小化损失并驱动训练。PaddlePaddle 支持多种不同的优化器。
 
 1. `Trainer`:PaddlePaddle Trainer 管理由 `train_program` 和 `optimizer` 指定的训练过程。
 通过 `event_handler` 回调函数,用户可以监控培训的进展。
diff --git a/02.recognize_digits/README.md b/02.recognize_digits/README.md
index 32fcfb3eeea68196dbcd1e59c9c7e95623743f18..7c6792ff578b3c1626361a0fe80e123024eae82b 100644
--- a/02.recognize_digits/README.md
+++ b/02.recognize_digits/README.md
@@ -146,7 +146,7 @@ Here are the quick overview on the major fluid API complements.
 This is where you specify the network flow.
 1. `train_program`: A function that specify how to get avg_cost from `inference_program` and labels.
 This is where you specify the loss calculations.
-1. `optimizer_func`: Configure how to minimize the loss. Paddle supports most major optimization methods.
+1. `optimizer_func`: A function that specifies the configuration of the optimizer. The optimizer is responsible for minimizing the loss and driving the training. Paddle supports many different optimizers.
 1. `Trainer`: Fluid trainer manages the training process specified by the `train_program` and `optimizer`.
 Users can monitor the training progress through the `event_handler` callback function.
 1. `Inferencer`: Fluid inferencer loads the `inference_program` and the parameters trained by the Trainer.
@@ -247,7 +247,7 @@ def train_program():
 
 #### Optimizer Function Configuration
 
-In the following `Adam` optimizer, `learning_rate` means the speed at which the network training converges.
+In the following `Adam` optimizer, `learning_rate` specifies the learning rate in the optimization procedure.
 
 ```python
 def optimizer_program():
diff --git a/02.recognize_digits/index.cn.html b/02.recognize_digits/index.cn.html
index f7e9266a56cb274db812fc701f31118070b6748c..6a7e018ce26d66cffe5676381cb37e551fe44adc 100644
--- a/02.recognize_digits/index.cn.html
+++ b/02.recognize_digits/index.cn.html
@@ -181,7 +181,7 @@ PaddlePaddle在API中提供了自动加载[MNIST](http://yann.lecun.com/exdb/mni
 1. `train_program`:指定如何从 `inference_program` 和`标签值`中获取 `loss` 的函数。
 这是指定损失计算的地方。
 
-1. `optimizer_func`: 配置如何最小化损失。PaddlePaddle 支持最主要的优化方法。
+1. `optimizer_func`: 指定优化器配置的函数。优化器负责最小化损失并驱动训练。PaddlePaddle 支持多种不同的优化器。
 
 1. `Trainer`:PaddlePaddle Trainer 管理由 `train_program` 和 `optimizer` 指定的训练过程。
 通过 `event_handler` 回调函数,用户可以监控培训的进展。
diff --git a/02.recognize_digits/index.html b/02.recognize_digits/index.html
index 026d7c0e823daff6cdeafa0c7a3cc499b6aea8d8..800113c289b93fec49fe7dd66012381e29627363 100644
--- a/02.recognize_digits/index.html
+++ b/02.recognize_digits/index.html
@@ -188,7 +188,7 @@ Here are the quick overview on the major fluid API complements.
 This is where you specify the network flow.
 1. `train_program`: A function that specify how to get avg_cost from `inference_program` and labels.
 This is where you specify the loss calculations.
-1. `optimizer_func`: Configure how to minimize the loss. Paddle supports most major optimization methods.
+1. `optimizer_func`: A function that specifies the configuration of the optimizer. The optimizer is responsible for minimizing the loss and driving the training. Paddle supports many different optimizers.
 1. `Trainer`: Fluid trainer manages the training process specified by the `train_program` and `optimizer`.
 Users can monitor the training progress through the `event_handler` callback function.
 1. `Inferencer`: Fluid inferencer loads the `inference_program` and the parameters trained by the Trainer.
@@ -289,7 +289,7 @@ def train_program():
 
 #### Optimizer Function Configuration
 
-In the following `Adam` optimizer, `learning_rate` means the speed at which the network training converges.
+In the following `Adam` optimizer, `learning_rate` specifies the learning rate in the optimization procedure.
 
 ```python
 def optimizer_program():
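The `def optimizer_program():` context line in the two `Optimizer Function Configuration` hunks is cut off by the hunk boundary. A minimal sketch of what such a function looks like under the Fluid API, assuming `paddle.fluid` is importable; the `learning_rate` value here is illustrative, not taken from this patch:

```python
# Minimal sketch, not part of the patch: an optimizer_func-style function
# that returns a configured optimizer object for the trainer to use.
import paddle.fluid as fluid

def optimizer_program():
    # learning_rate is the step size Adam uses when updating parameters;
    # 0.001 is a common illustrative choice, not a value from this diff.
    return fluid.optimizer.Adam(learning_rate=0.001)
```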