@@ -183,7 +183,7 @@ The above stacked bidirectional LSTM network extracts high-level features and ma
...
To reiterate, we can either invoke `convolution_net` or `stacked_lstm_net`. In the steps below, we will go with `convolution_net`.
-Next we define a `inference_program` that simply uses `convolution_net` to predict output with the input from `fluid.layer.data`.
+Next we define an `inference_program` that simply uses `convolution_net` to predict output with the input from `fluid.layer.data`.
```python
def inference_program(word_dict):
...
@@ -200,6 +200,7 @@ Also define `optimizer_func` to specify the optimizer.
...
In the context of supervised learning, the labels of the training set are defined in `paddle.layer.data` as well. During training, cross-entropy is used as the loss function in `paddle.layer.classification_cost` and serves as the output of the network; during testing, the outputs are the probabilities computed by the classifier.
+The first result returned in the list must be the cost.
@@ -225,7 +225,7 @@ The above stacked bidirectional LSTM network extracts high-level features and ma
...
To reiterate, we can either invoke `convolution_net` or `stacked_lstm_net`. In the steps below, we will go with `convolution_net`.
-Next we define a `inference_program` that simply uses `convolution_net` to predict output with the input from `fluid.layer.data`.
+Next we define an `inference_program` that simply uses `convolution_net` to predict output with the input from `fluid.layer.data`.
```python
def inference_program(word_dict):
...
@@ -242,6 +242,7 @@ Also define `optimizer_func` to specify the optimizer.
...
In the context of supervised learning, the labels of the training set are defined in `paddle.layer.data` as well. During training, cross-entropy is used as the loss function in `paddle.layer.classification_cost` and serves as the output of the network; during testing, the outputs are the probabilities computed by the classifier.
+The first result returned in the list must be the cost.
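The hunks above describe the same setup: cross-entropy is the loss during training, while the classifier's probabilities are the output during testing. A minimal, framework-free sketch of that loss (the probabilities and label here are hypothetical, not taken from the tutorial's dataset):

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class: -log P(label)."""
    return -math.log(probs[label])

# Hypothetical classifier output for one movie review: [P(negative), P(positive)]
probs = [0.2, 0.8]

# Training: the cost is the cross-entropy against the true label (here, positive).
cost = cross_entropy(probs, 1)
print(round(cost, 4))  # -log(0.8) ≈ 0.2231

# Testing: the output is the probabilities themselves; pick the argmax as the label.
predicted_label = max(range(len(probs)), key=probs.__getitem__)
print(predicted_label)  # 1 (positive)
```

In the tutorial's Fluid program, this per-sample loss is computed by the framework's cross-entropy layer and averaged over the batch to give the first (cost) element of the returned list.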