-The input to the network is defined as `fluid.layers.data`, corresponding to image pixels in the context of image classification. The images in CIFAR10 are 32x32 coloured images with three channels. Therefore, the size of the input data is 3072 (3x32x32).
+The input to the network is defined as `fluid.data`, corresponding to image pixels in the context of image classification. The images in CIFAR10 are 32x32 coloured images with three channels. Therefore, the size of the input data is 3072 (3x32x32).
# predict = vgg_bn_drop(images) # un-comment to use vgg net
...
@@ -322,7 +322,7 @@ def inference_program():
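For orientation, here is a minimal sketch of how the data layer inside `inference_program` might look after this change. It assumes the `resnet_cifar10` / `vgg_bn_drop` networks defined earlier in the tutorial, and that a Fluid release is used in which `fluid.data`, unlike `fluid.layers.data`, takes the batch dimension explicitly in `shape`:

```python
import paddle.fluid as fluid

def inference_program():
    # CIFAR10 images are 3-channel 32x32 images: 3 * 32 * 32 = 3072 values each.
    # With fluid.data the batch dimension appears explicitly in `shape`;
    # None (or -1) leaves it undetermined.
    images = fluid.data(name='pixel', shape=[None, 3, 32, 32], dtype='float32')

    predict = resnet_cifar10(images, 32)  # network defined earlier in the tutorial
    # predict = vgg_bn_drop(images)       # un-comment to use vgg net
    return predict
```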
Then we need to set up the `train_program`. It takes the prediction from the `inference_program` first.
During training, it calculates the `avg_loss` from the prediction.
-In the context of supervised learning, labels of training images are defined in `fluid.layers.data` as well. During training, the multi-class cross-entropy is used as the loss function and becomes the output of the network. During testing, the outputs are the probabilities calculated in the classifier.
+In the context of supervised learning, labels of training images are defined in `fluid.data` as well. During training, the multi-class cross-entropy is used as the loss function and becomes the output of the network. During testing, the outputs are the probabilities calculated in the classifier.
**NOTE:** A training program should return an array and the first returned argument has to be `avg_cost`.
The trainer always uses it to calculate the gradients.
...
@@ -331,7 +331,7 @@ The trainer always uses it to calculate the gradients.
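Likewise, a rough sketch of the `train_program` described here, with the label input switched to `fluid.data`; the shapes and names are illustrative and should be checked against the Fluid release in use:

```python
import paddle.fluid as fluid

def train_program():
    # Reuse the prediction from the inference program sketched above.
    predict = inference_program()

    # Labels are declared with fluid.data as well; each label is one int64 class id.
    label = fluid.data(name='label', shape=[None, 1], dtype='int64')

    # Multi-class cross-entropy is the training loss. Its mean must be the first
    # element of the returned list, because the trainer uses it to compute gradients.
    cost = fluid.layers.cross_entropy(input=predict, label=label)
    avg_cost = fluid.layers.mean(cost)
    accuracy = fluid.layers.accuracy(input=predict, label=label)
    return [avg_cost, accuracy]
```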
-The input to the network is defined as `fluid.layers.data`, corresponding to image pixels in the context of image classification. The images in CIFAR10 are 32x32 coloured images with three channels. Therefore, the size of the input data is 3072 (3x32x32).
+The input to the network is defined as `fluid.data`, corresponding to image pixels in the context of image classification. The images in CIFAR10 are 32x32 coloured images with three channels. Therefore, the size of the input data is 3072 (3x32x32).
# predict = vgg_bn_drop(images) # un-comment to use vgg net
...
@@ -364,7 +364,7 @@ def inference_program():
Then we need to set up the `train_program`. It takes the prediction from the `inference_program` first.
During training, it calculates the `avg_loss` from the prediction.
-In the context of supervised learning, labels of training images are defined in `fluid.layers.data` as well. During training, the multi-class cross-entropy is used as the loss function and becomes the output of the network. During testing, the outputs are the probabilities calculated in the classifier.
+In the context of supervised learning, labels of training images are defined in `fluid.data` as well. During training, the multi-class cross-entropy is used as the loss function and becomes the output of the network. During testing, the outputs are the probabilities calculated in the classifier.
**NOTE:** A training program should return an array and the first returned argument has to be `avg_cost`.
The trainer always uses it to calculate the gradients.
...
@@ -373,7 +373,7 @@ The trainer always uses it to calculate the gradients.