@@ -146,7 +146,7 @@ Here are the quick overview on the major fluid API complements.
This is where you specify the network flow.
1.`train_program`: A function that specify how to get avg_cost from `inference_program` and labels.
This is where you specify the loss calculations.
1.`optimizer_func`: Configure how to minimize the loss. Paddle supports most major optimization methods.
1.`optimizer_func`: A function that specifies the configuration of the optimizer. The optimizer is responsible for minimizing the loss and driving the training. Paddle supports many different optimizers.
1.`Trainer`: Fluid trainer manages the training process specified by the `train_program` and `optimizer`. Users can monitor the training
progress through the `event_handler` callback function.
1.`Inferencer`: Fluid inferencer loads the `inference_program` and the parameters trained by the Trainer.
...
...
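Taken together, these components divide the work cleanly. Below is a minimal sketch of how they are typically wired up with the high-level Fluid API described above, assuming a trivial one-layer regression network; the layer sizes, data names, learning rate, dataset, and use of `fluid.CPUPlace()` are illustrative assumptions, not part of this change.

```python
import paddle
import paddle.fluid as fluid

def inference_program():
    # Network flow: a single fully-connected layer over 13 input
    # features (an illustrative stand-in for a real network).
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    return fluid.layers.fc(input=x, size=1, act=None)

def train_program():
    # Loss calculation: average square error between the prediction
    # produced by inference_program and the label.
    y = fluid.layers.data(name='y', shape=[1], dtype='float32')
    y_predict = inference_program()
    avg_cost = fluid.layers.mean(
        fluid.layers.square_error_cost(input=y_predict, label=y))
    return avg_cost

def optimizer_func():
    # Optimizer configuration: returns the optimizer that minimizes
    # the loss and drives the training.
    return fluid.optimizer.SGD(learning_rate=0.001)

def event_handler(event):
    # Monitor training progress through the callback.
    if isinstance(event, fluid.EndStepEvent):
        print("step %d, cost %f" % (event.step, event.metrics[0]))

trainer = fluid.Trainer(
    train_func=train_program,
    place=fluid.CPUPlace(),
    optimizer_func=optimizer_func)

trainer.train(
    reader=paddle.batch(paddle.dataset.uci_housing.train(), batch_size=20),
    num_epochs=10,
    event_handler=event_handler,
    feed_order=['x', 'y'])
```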
@@ -247,7 +247,7 @@ def train_program():
#### Optimizer Function Configuration
In the following `Adam` optimizer, `learning_rate`means the speed at which the network training converges.
In the following `Adam` optimizer, `learning_rate` specifies the learning rate in the optimization procedure.
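For reference, the corrected description corresponds to an `optimizer_func` along the lines of the sketch below; the specific value `0.001` is an illustrative assumption.

```python
import paddle.fluid as fluid

def optimizer_func():
    # learning_rate is the step size Adam uses when updating
    # parameters during the optimization procedure; smaller values
    # take smaller, more cautious steps.
    return fluid.optimizer.Adam(learning_rate=0.001)
```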
@@ -188,7 +188,7 @@ Here are the quick overview on the major fluid API complements.
This is where you specify the network flow.
1. `train_program`: A function that specify how to get avg_cost from `inference_program` and labels.
This is where you specify the loss calculations.
1. `optimizer_func`: Configure how to minimize the loss. Paddle supports most major optimization methods.
1. `optimizer_func`: A function that specifies the configuration of the optimizer. The optimizer is responsible for minimizing the loss and driving the training. Paddle supports many different optimizers.
1. `Trainer`: Fluid trainer manages the training process specified by the `train_program` and `optimizer`. Users can monitor the training
progress through the `event_handler` callback function.
1. `Inferencer`: Fluid inferencer loads the `inference_program` and the parameters trained by the Trainer.
...
...
@@ -289,7 +289,7 @@ def train_program():
#### Optimizer Function Configuration
In the following `Adam` optimizer, `learning_rate` means the speed at which the network training converges.
In the following `Adam` optimizer, `learning_rate` specifies the learning rate in the optimization procedure.