Commit be85a58d authored by: W Wang,Jeff

Change to use optimizer func in markdown

Parent bd993696
......@@ -245,6 +245,15 @@ def train_program():
    return [avg_cost, acc]
```
#### Optimizer Function Configuration
In the following `Adam` optimizer, `learning_rate` sets the step size of each parameter update and thus how quickly the network training converges.
```python
def optimizer_program():
    return fluid.optimizer.Adam(learning_rate=0.001)
```
### Data Feeders Configuration
Then we specify the training data `paddle.dataset.mnist.train()` and testing data `paddle.dataset.mnist.test()`. These two methods are *reader creators*. Once called, a reader creator returns a *reader*. A reader is a Python function that, once called, returns a Python generator, which yields instances of data.
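The concrete reader code is collapsed in this diff; a minimal sketch of batched readers built from these reader creators might look like the following, where `buf_size` and `batch_size` values are illustrative assumptions rather than lines from this change.
```python
import paddle

# Shuffle training instances within a buffer, then group them into batches.
train_reader = paddle.batch(
    paddle.reader.shuffle(paddle.dataset.mnist.train(), buf_size=500),  # assumed buffer size
    batch_size=64)  # assumed batch size

# The test reader is typically not shuffled.
test_reader = paddle.batch(
    paddle.dataset.mnist.test(), batch_size=64)
```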
......@@ -266,15 +275,13 @@ test_reader = paddle.batch(
### Trainer Configuration
Now, we need to set up the trainer. The trainer needs to take in `train_program`, `place`, and `optimizer_func`.
In the following `Adam` optimizer, `learning_rate` sets the step size of each parameter update and thus how quickly the network training converges.
```python
use_cuda = False # set to True if training with GPU
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
optimizer = fluid.optimizer.Adam(learning_rate=0.001)
trainer = fluid.Trainer(
    train_func=train_program, place=place, optimizer=optimizer)
    train_func=train_program, place=place, optimizer_func=optimizer_program)
```
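For context, here is a hedged sketch of how the trainer created above is typically started; `num_epochs`, `event_handler`, and the feed order names (`'img'`, `'label'`) are assumptions from the surrounding tutorial, not part of this change.
```python
# Assumed usage: run a few passes over the batched training reader,
# reporting progress through the event handler described in the next section.
trainer.train(
    num_epochs=5,                   # assumed number of passes over the data
    event_handler=event_handler,    # defined in the Event Handler section
    reader=train_reader,
    feed_order=['img', 'label'])    # assumed feed variable names
```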
#### Event Handler
......
......@@ -287,6 +287,15 @@ def train_program():
    return [avg_cost, acc]
```
#### Optimizer Function Configuration
In the following `Adam` optimizer, `learning_rate` sets the step size of each parameter update and thus how quickly the network training converges.
```python
def optimizer_program():
    return fluid.optimizer.Adam(learning_rate=0.001)
```
### Data Feeders Configuration
Then we specify the training data `paddle.dataset.mnist.train()` and testing data `paddle.dataset.mnist.test()`. These two methods are *reader creators*. Once called, a reader creator returns a *reader*. A reader is a Python function that, once called, returns a Python generator, which yields instances of data.
......@@ -308,15 +317,13 @@ test_reader = paddle.batch(
### Trainer Configuration
Now, we need to set up the trainer. The trainer needs to take in `train_program`, `place`, and `optimizer_func`.
In the following `Adam` optimizer, `learning_rate` sets the step size of each parameter update and thus how quickly the network training converges.
```python
use_cuda = False # set to True if training with GPU
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
optimizer = fluid.optimizer.Adam(learning_rate=0.001)
trainer = fluid.Trainer(
    train_func=train_program, place=place, optimizer=optimizer)
    train_func=train_program, place=place, optimizer_func=optimizer_program)
```
#### Event Handler
......