@@ -159,6 +159,7 @@ We will go through all of them and dig more on the configurations in this demo.
 A PaddlePaddle program starts from importing the API package:

 ```python
+import paddle
 import paddle.fluid as fluid
 ```
...
@@ -179,7 +180,7 @@ def softmax_regression():
     return predict
 ```

-- Multi-Layer Perceptron: this network has two hidden fully-connected layers, both are using ReLU as activation functino. The output layer is using softmax activation:
+- Multi-Layer Perceptron: this network has two hidden fully-connected layers, both using ReLU as the activation function. The output layer uses softmax activation:

 ```python
 def multilayer_perceptron():
...
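For context, the `multilayer_perceptron()` body truncated above plausibly looks like the sketch below, reconstructed from the bullet's description (two fully-connected ReLU hidden layers, a softmax output layer). The 28x28 single-channel input shape and the 200-unit hidden width are assumptions for illustration, not code taken from this diff.

```python
import paddle.fluid as fluid

def multilayer_perceptron():
    # Input: a single-channel 28x28 image (assumed shape).
    img = fluid.layers.data(name='img', shape=[1, 28, 28], dtype='float32')
    # Two hidden fully-connected layers, both with ReLU activation.
    hidden = fluid.layers.fc(input=img, size=200, act='relu')
    hidden = fluid.layers.fc(input=hidden, size=200, act='relu')
    # Output layer: 10 classes with softmax activation.
    predict = fluid.layers.fc(input=hidden, size=10, act='softmax')
    return predict
```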
@@ -259,15 +260,12 @@ test_reader = paddle.batch(
 ### Trainer Configuration

 Now, we need to set up the trainer. The trainer needs to take in `train_program`, `place`, and `optimizer`.

-In the following `Momentum` optimizer, `momentum=0.9` means that 90% of the current momentum comes from that of the previous iteration. The learning rate relates to the speed at which the network training converges. Regularization is meant to prevent over-fitting; here we use the L2 regularization.
+In the following `Adam` optimizer, `learning_rate` controls the speed at which the network training converges.
...
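As a rough sketch of what the new `Adam` wording corresponds to in code, an optimizer factory might look like the following; the `optimizer_program` name and the 0.001 learning rate are illustrative assumptions, not values from this diff.

```python
import paddle.fluid as fluid

def optimizer_program():
    # Adam adapts per-parameter step sizes; learning_rate sets the base
    # step size and therefore how quickly training converges.
    return fluid.optimizer.Adam(learning_rate=0.001)  # 0.001 is an assumed value
```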
@@ -201,6 +201,7 @@ We will go through all of them and dig more on the configurations in this demo.
 A PaddlePaddle program starts from importing the API package:

 ```python
+import paddle
 import paddle.fluid as fluid
 ```
...
@@ -221,7 +222,7 @@ def softmax_regression():
     return predict
 ```

-- Multi-Layer Perceptron: this network has two hidden fully-connected layers, both are using ReLU as activation functino. The output layer is using softmax activation:
+- Multi-Layer Perceptron: this network has two hidden fully-connected layers, both using ReLU as the activation function. The output layer uses softmax activation:

 ```python
 def multilayer_perceptron():
...
@@ -301,15 +302,12 @@ test_reader = paddle.batch(
 ### Trainer Configuration

 Now, we need to set up the trainer. The trainer needs to take in `train_program`, `place`, and `optimizer`.

-In the following `Momentum` optimizer, `momentum=0.9` means that 90% of the current momentum comes from that of the previous iteration. The learning rate relates to the speed at which the network training converges. Regularization is meant to prevent over-fitting; here we use the L2 regularization.
+In the following `Adam` optimizer, `learning_rate` controls the speed at which the network training converges.

 ```python
-use_cude = False # set to True if training with GPU
+use_cuda = False # set to True if training with GPU
 place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
...
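Putting the pieces together, a minimal end-to-end sketch of the trainer setup under the Fluid high-level API of this era might look as follows. The `train_program` body and the `fluid.Trainer` keyword names (`train_func`, `optimizer_func`) are assumptions based on that API generation, not code taken from this diff.

```python
import paddle.fluid as fluid

def train_program():
    # Hypothetical forward pass plus a cross-entropy loss.
    img = fluid.layers.data(name='img', shape=[1, 28, 28], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    predict = fluid.layers.fc(input=img, size=10, act='softmax')
    cost = fluid.layers.cross_entropy(input=predict, label=label)
    return fluid.layers.mean(cost)

def optimizer_program():
    return fluid.optimizer.Adam(learning_rate=0.001)  # assumed value

use_cuda = False  # set to True if training with GPU
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()

# The trainer takes in the train program, the place, and the optimizer,
# as the prose above says; the keyword names here are assumptions.
trainer = fluid.Trainer(
    train_func=train_program,
    place=place,
    optimizer_func=optimizer_program)
```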