In the following `Adam` optimizer, `learning_rate` sets the step size of each parameter update, which in turn controls how quickly training converges.
```python
def optimizer_program():
    return fluid.optimizer.Adam(learning_rate=0.001)
```
### Data Feeders Configuration
Then we specify the training data `paddle.dataset.mnist.train()` and the testing data `paddle.dataset.mnist.test()`. These two functions are *reader creators*: once called, a reader creator returns a *reader*. A reader is itself a Python function which, once called, returns a Python generator that yields instances of data.
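To make this terminology concrete, here is a minimal sketch of the creator/reader pattern in plain Python; `toy_reader_creator` and its data are illustrative only and not part of the `paddle.dataset` API.

```python
def toy_reader_creator(data):
    # Reader creator: returns a reader bound to `data`.
    def reader():
        # The reader is a generator function; calling it returns
        # a generator that yields one data instance at a time.
        for item in data:
            yield item
    return reader

reader = toy_reader_creator([1, 2, 3])  # create a reader
for instance in reader():               # reader() returns a generator
    print(instance)                     # prints 1, 2, 3
```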
...
...
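The elided code above typically wraps these two reader creators with shuffling and batching. The following is a sketch of that configuration; the `buf_size` and `batch_size` values are illustrative choices, not required settings.

```python
# Shuffle the training instances within a 500-instance buffer,
# then group them into mini-batches of 64.
train_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.mnist.train(), buf_size=500),
    batch_size=64)

# The test data is only batched, not shuffled.
test_reader = paddle.batch(
    paddle.dataset.mnist.test(), batch_size=64)
```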
### Trainer Configuration
Now we need to set up the trainer. The trainer takes in `train_program`, `place`, and `optimizer`.
```python
use_cuda = False  # set to True if training with GPU
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
```
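With `place` and the optimizer function in hand, the trainer itself can be constructed. The following sketch assumes the legacy high-level `fluid.Trainer` API (with `train_func`, `place`, and `optimizer_func` keyword arguments) that this tutorial series used; `train_program` and `optimizer_program` are the functions defined earlier.

```python
# Sketch: build the trainer from the pieces defined above.
# Assumes the legacy high-level fluid.Trainer API.
trainer = fluid.Trainer(
    train_func=train_program, place=place, optimizer_func=optimizer_program)
```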