From 1865e86a4d8204378d9749eb301351548c5d61b0 Mon Sep 17 00:00:00 2001
From: Nicky
Date: Mon, 2 Jul 2018 14:40:22 -0700
Subject: [PATCH] Fix trainer error in readme due to optimizer function

---
 01.fit_a_line/README.cn.md | 11 ++++++++++-
 01.fit_a_line/README.md    | 10 +++++++++-
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/01.fit_a_line/README.cn.md b/01.fit_a_line/README.cn.md
index 87ab885..936142b 100644
--- a/01.fit_a_line/README.cn.md
+++ b/01.fit_a_line/README.cn.md
@@ -142,6 +142,15 @@ def train_program():
     return avg_loss
 ```
 
+### Optimizer Function Configuration
+
+In the `SGD optimizer` below, `learning_rate` is the learning rate, which affects how quickly the network converges during training.
+
+```python
+def optimizer_program():
+    return fluid.optimizer.SGD(learning_rate=0.001)
+```
+
 ### Define the Execution Place
 
 We can define whether the computation runs on a CPU or a GPU.
@@ -157,7 +166,7 @@ place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
 trainer = fluid.Trainer(
     train_func=train_program,
     place=place,
-    optimizer_func=fluid.optimizer.SGD(learning_rate=0.001))
+    optimizer_func=optimizer_program)
 ```
 
 ### Feeding Data
diff --git a/01.fit_a_line/README.md b/01.fit_a_line/README.md
index df624f9..e672caf 100644
--- a/01.fit_a_line/README.md
+++ b/01.fit_a_line/README.md
@@ -149,6 +149,14 @@ def train_program():
     return avg_loss
 ```
 
+### Optimizer Function Configuration
+
+In the following `SGD` optimizer, `learning_rate` specifies the learning rate used in the optimization procedure.
+
+```python
+def optimizer_program():
+    return fluid.optimizer.SGD(learning_rate=0.001)
+```
 ### Specify Place
 
 Specify your training environment; you should state whether the training runs on CPU or GPU.
@@ -165,7 +173,7 @@ The trainer will take the `train_program` as input.
 trainer = fluid.Trainer(
     train_func=train_program,
     place=place,
-    optimizer_func=fluid.optimizer.SGD(learning_rate=0.001))
+    optimizer_func=optimizer_program)
 ```
 
 ### Feeding Data
-- 
GitLab
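
Editor's note (not part of the diff): the commit message indicates that passing `fluid.optimizer.SGD(...)` directly caused a trainer error, so the patch passes a callable instead. The sketch below illustrates how the new `optimizer_program` is wired into the pre-2.0 `fluid.Trainer` high-level API that this chapter uses. The `train_program` body, the UCI housing reader setup, the epoch count, the placeholder event handler, and `feed_order` are assumptions drawn from the surrounding 01.fit_a_line chapter, not from this patch; treat it as a sketch, not a definitive implementation.

```python
# Minimal sketch, assuming the pre-2.0 PaddlePaddle (fluid) high-level API.
import paddle
import paddle.fluid as fluid

def train_program():
    # Simplified stand-in for the chapter's linear-regression program.
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.data(name='y', shape=[1], dtype='float32')
    y_predict = fluid.layers.fc(input=x, size=1, act=None)
    loss = fluid.layers.square_error_cost(input=y_predict, label=y)
    avg_loss = fluid.layers.mean(loss)
    return avg_loss

def optimizer_program():
    # The function added by this patch; Trainer calls it to build the optimizer.
    return fluid.optimizer.SGD(learning_rate=0.001)

use_cuda = False
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()

trainer = fluid.Trainer(
    train_func=train_program,
    place=place,
    optimizer_func=optimizer_program)  # pass the function itself, not SGD(...)

# Assumed reader setup, following the UCI housing data used in this chapter.
train_reader = paddle.batch(
    paddle.reader.shuffle(paddle.dataset.uci_housing.train(), buf_size=500),
    batch_size=20)

trainer.train(
    reader=train_reader,
    num_epochs=100,
    event_handler=lambda event: None,  # placeholder; the chapter defines a real handler
    feed_order=['x', 'y'])
```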