Commit 8862db07, authored by liaogang

fix conflict

# Recognize Digits

The source code for this tutorial lives in [book/recognize_digits](https://github.com/PaddlePaddle/book/tree/develop/02.recognize_digits). For instructions on getting started with PaddlePaddle, please refer to the [installation instructions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/getstarted/build_and_install/docker_install_en.rst).

## Introduction

When one learns to program, the first task is usually to write a program that prints "Hello World!". In Machine Learning or Deep Learning, the equivalent task is to train a model to recognize handwritten digits on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. Handwriting recognition is a classic image classification problem; the problem is relatively easy, and MNIST is a complete, well-curated dataset. As a simple Computer Vision dataset, MNIST contains images of handwritten digits and their corresponding labels (Fig. 1). The input image is a $28\times28$ matrix, and the label is one of the digits from $0$ to $9$. All images are normalized: they are rescaled to a common size and centered.

<p align="center"> <p align="center">
<img src="image/mnist_example_image.png" width="400"><br/> <img src="image/mnist_example_image.png" width="400"><br/>
...@@ -12,37 +12,37 @@ Fig. 1. Examples of MNIST images ...@@ -12,37 +12,37 @@ Fig. 1. Examples of MNIST images
The MNIST dataset was created from the [NIST](https://www.nist.gov/srd/nist-special-database-19) Special Database 3 (SD-3) and Special Database 1 (SD-1). SD-3 was labeled by staff of the U.S. Census Bureau, while SD-1 was labeled by high school students in the U.S.; therefore SD-3 is cleaner and easier to recognize than SD-1. Yann LeCun et al. used half of the samples from each of SD-1 and SD-3 to create the MNIST training set (60,000 samples) and test set (10,000 samples). The training set was labeled by 250 different annotators, and it was guaranteed that the annotators of the training set and those of the test set did not completely overlap.

Yann LeCun, one of the founders of Deep Learning, made tremendous early contributions to handwritten character recognition and proposed the **Convolutional Neural Network** (CNN), which drastically improved recognition of handwritten characters. CNNs are now a central concept in Deep Learning. From LeCun's LeNet to the winning models of the ImageNet competitions, such as VGGNet, GoogLeNet, and ResNet (see the [Image Classification](https://github.com/PaddlePaddle/book/tree/develop/03.image_classification) tutorial), CNNs have achieved a series of impressive results in image classification tasks.

Many algorithms have been tested on MNIST. In 1998, LeCun experimented with a single-layer linear classifier, a Multilayer Perceptron (MLP), and the multilayer convolutional network LeNet. These experiments quickly reduced the test error from 12% to 0.7% \[[1](#references)\]. Since then, researchers have worked on many other algorithms, such as **K-Nearest Neighbors** (k-NN) \[[2](#references)\], **Support Vector Machines** (SVM) \[[3](#references)\], **Neural Networks** \[[4-7](#references)\], and **Boosting** \[[8](#references)\]. Various preprocessing methods, such as distortion removal, noise removal, and blurring, have also been applied to increase recognition accuracy.

In this tutorial, we tackle the task of handwritten character recognition. We start with a simple **softmax** regression model and guide our readers step by step in improving the model's performance on the recognition task.

## Model Overview

Before introducing the classification algorithms and the training procedure, we define the following symbols:

- $X$ is the input: an MNIST image is a $28\times28$ matrix, flattened into a $784$-dimensional vector $X=\left(x_0, x_1, \dots, x_{783}\right)$.
- $Y$ is the output: the classifier outputs one of the 10 classes (digits from 0 to 9) as $Y=\left(y_0, y_1, \dots, y_9\right)$, where each entry $y_i$ is the probability that the input image belongs to class $i$.
- $L$ is the ground-truth label: $L=\left(l_0, l_1, \dots, l_9\right)$. It is also 10-dimensional, but exactly one entry is $1$ and all the others are $0$.

### Softmax Regression

In a simple softmax regression model, the input is fed into a fully connected layer, and a softmax function is then applied to obtain the probabilities of the output classes \[[9](#references)\].

The input $X$ is multiplied by the weights $W$, and the bias $b$ is added to generate the activations:

$$ y_i = \text{softmax}(\sum_j W_{i,j}x_j + b_i) $$

where $ \text{softmax}(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}} $

For an $N$-class classification problem with $N$ output nodes, softmax normalizes the resulting $N$-dimensional vector so that each entry falls in the range $[0,1]$ and all entries sum to $1$, each entry representing the probability that the sample belongs to the corresponding class. Here $y_i$ denotes the predicted probability that an image shows digit $i$.

In such a classification problem, we usually use the cross-entropy loss function:

$$ \text{crossentropy}(label, y) = -\sum_i label_i \log(y_i) $$

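To make these formulas concrete, here is a small NumPy sketch of softmax and the cross-entropy loss for a single three-class example. It is an illustration we add here, not code from the PaddlePaddle program below:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

def cross_entropy(label, y):
    # `label` is a one-hot vector, `y` a vector of predicted probabilities
    return -np.sum(label * np.log(y))

z = np.array([2.0, 1.0, 0.1])      # activations Wx + b for three classes
y = softmax(z)                     # ~[0.659, 0.242, 0.099], sums to 1
label = np.array([1.0, 0.0, 0.0])  # ground truth: class 0
print y, cross_entropy(label, y)   # loss = -log(0.659) ~ 0.417
```
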
Fig. 2 illustrates a softmax regression network, with the weights in blue and the bias in red. `+1` indicates that the bias is $1$.

<p align="center"> <p align="center">
<img src="image/softmax_regression_en.png" width=400><br/> <img src="image/softmax_regression_en.png" width=400><br/>
...@@ -51,13 +51,13 @@ Fig. 2. Softmax regression network architecture<br/> ...@@ -51,13 +51,13 @@ Fig. 2. Softmax regression network architecture<br/>
### Multilayer Perceptron

The softmax regression model described above uses the simplest two-layer neural network: it contains only an input layer and an output layer, so its regression capability is limited. To achieve better recognition results, we can add several hidden layers \[[10](#references)\] between the input layer and the output layer.

1. After the first hidden layer, we get $ H_1 = \phi(W_1X + b_1) $, where $\phi$ denotes the activation function. Some [common choices](#list-of-common-activation-functions) are sigmoid, tanh, and ReLU.
2. After the second hidden layer, we get $ H_2 = \phi(W_2H_1 + b_2) $.
3. Finally, the output layer produces $Y=\text{softmax}(W_3H_2 + b_3)$, the vector of class probabilities; the NumPy sketch after this list traces these three steps.

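The sketch below is our illustration of the forward pass; the layer widths $784$, $128$, $64$, and $10$ are assumptions chosen to match the network configured later in this tutorial:

```python
import numpy as np

relu = lambda x: np.maximum(0.0, x)                                  # phi = ReLU
softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()

rng = np.random.RandomState(0)
X = rng.rand(784)                                   # a flattened 28x28 input
W1, b1 = 0.01 * rng.randn(128, 784), np.zeros(128)
W2, b2 = 0.01 * rng.randn(64, 128), np.zeros(64)
W3, b3 = 0.01 * rng.randn(10, 64), np.zeros(10)

H1 = relu(W1.dot(X) + b1)      # first hidden layer
H2 = relu(W2.dot(H1) + b2)     # second hidden layer
Y = softmax(W3.dot(H2) + b3)   # 10 class probabilities, summing to 1
```
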
Fig. 3 shows a Multilayer Perceptron network, with the weights in blue and the bias in red. `+1` indicates that the bias is $1$.

<p align="center"> <p align="center">
<img src="image/mlp_en.png" width=500><br/> <img src="image/mlp_en.png" width=500><br/>
...@@ -74,18 +74,18 @@ Fig. 3. Multilayer Perceptron network architecture<br/> ...@@ -74,18 +74,18 @@ Fig. 3. Multilayer Perceptron network architecture<br/>
Fig. 4. Convolutional layer<br/> Fig. 4. Convolutional layer<br/>
</p> </p>
The **convolutional layer** is the core of a Convolutional Neural Network. Its parameters consist of a set of filters, also called kernels. We can visualize the forward step as follows: each kernel slides horizontally and vertically until it has covered the whole input. At each position, we compute the dot product of the kernel and the input window, add the bias, and apply an activation function. The result is a two-dimensional activation map. For example, one kernel may respond to corners and another to circles; each kernel responds strongly to its corresponding feature.

Fig. 4 is an animated illustration of a convolutional layer, with the depth dimension drawn flat for simplicity. The input is $W_1=5$, $H_1=5$, $D_1=3$. This is in fact a common representation of a colored image: $W_1$ and $H_1$ are the width and height, and $D_1$ corresponds to the three RGB color channels. The parameters of the convolutional layer are $K=2$, $F=3$, $S=2$, $P=1$. $K$ is the number of kernels; here, $Filter\ W_0$ and $Filter\ W_1$ are the two kernels. $F$ is the kernel size: $W_0$ and $W_1$ are both $F\times F = 3\times 3$ matrices at every depth. $S$ is the stride, the step width of the sliding window; here, the kernels move rightwards or downwards by 2 units at a time. $P$ is the padding width, an extension of the input; the gray area in the figure shows zero padding of size 1.

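From these definitions, the output width follows the standard convolution arithmetic $W_2 = (W_1 - F + 2P)/S + 1$, and likewise for the height. This formula is not stated in the text above, but we can check it against Fig. 4 with a short sketch of our own:

```python
def conv_output_size(w, f, s, p):
    # output width/height for input width/height w, kernel size f,
    # stride s, and zero padding p
    return (w - f + 2 * p) // s + 1

print conv_output_size(w=5, f=3, s=2, p=1)  # -> 3: each kernel yields a 3x3 map
```
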
#### Pooling Layer

<p align="center">
<img src="image/max_pooling_en.png" width="400px"><br/>
Fig. 5. Pooling layer using max pooling<br/>
</p>

A **pooling layer** performs downsampling. Its main purpose is to reduce computation by reducing the number of network parameters; it also mitigates overfitting to some extent. A pooling layer is usually added after a convolutional layer. Pooling can use various techniques, such as max pooling and average pooling. As shown in Fig. 5, max pooling divides the input into rectangular regions and outputs the maximum value of each region.

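A minimal NumPy sketch of non-overlapping $2\times2$ max pooling, added here for illustration:

```python
import numpy as np

def max_pool_2x2(x):
    # split the input into non-overlapping 2x2 blocks and keep each block's maximum
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1., 0., 2., 3.],
              [4., 6., 6., 8.],
              [3., 1., 1., 0.],
              [1., 2., 2., 4.]])
print max_pool_2x2(x)  # [[ 6.  8.]
                       #  [ 3.  4.]]
```
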
#### LeNet-5 Network

<p align="center">
<img src="image/cnn_en.png"><br/>
Fig. 6. LeNet-5 Convolutional Neural Network architecture<br/>
</p>

[**LeNet-5**](http://yann.lecun.com/exdb/lenet/) is one of the simplest Convolutional Neural Networks. Fig. 6 shows its architecture: a two-dimensional input image is fed into two sets of convolutional and pooling layers, whose output is then fed into a fully connected layer and a softmax classifier. LeNet-5 recognizes images better than a multilayer fully connected perceptron does, thanks to three properties of convolution:

- The 3D nature of the neurons: a convolutional layer is organized by width, height, and depth. Neurons in each layer are connected to only a small region of the previous layer, called the receptive field.
- Local connectivity: a CNN exploits local spatial correlation by connecting nearby neurons. This design guarantees that the learned filters respond strongly to local input features. Stacking many such layers produces non-linear filters that become increasingly global, which lets the network first build good representations of small parts of the input and then combine them to represent larger regions.
- Weight sharing: in a CNN, the same parameters (weights and bias) are reused across positions to form a feature map, so all the neurons at the same depth of the output respond to the same feature. This lets the network detect a feature regardless of its position in the input, a property known as translation equivariance.

For more details on Convolutional Neural Networks, please refer to the tutorial on [Image Classification](https://github.com/PaddlePaddle/book/blob/develop/image_classification/README.md) and the [relevant lecture notes](http://cs231n.github.io/convolutional-networks/) from Stanford's CS231n open course.

### List of Common Activation Functions

- Sigmoid activation function: $ f(x) = sigmoid(x) = \frac{1}{1+e^{-x}} $

- Tanh activation function: $ f(x) = tanh(x) = \frac{e^x-e^{-x}}{e^x+e^{-x}} $

- ReLU activation function: $ f(x) = max(0, x) $

For more information, please refer to [Activation functions on Wikipedia](https://en.wikipedia.org/wiki/Activation_function).

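For reference, each of these activations is a one-liner in NumPy (our sketch, independent of the PaddlePaddle code below):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)  # identical to (e^x - e^-x) / (e^x + e^-x)

def relu(x):
    return np.maximum(0.0, x)
```
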
## Data Preparation

PaddlePaddle provides a Python module, `paddle.dataset.mnist`, which downloads and caches the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). The cache lives under `/home/username/.cache/paddle/dataset/mnist`:

| File name               | Description       | Samples |
|-------------------------|-------------------|---------|
| train-images-idx3-ubyte | Training images   | 60,000  |
| train-labels-idx1-ubyte | Training labels   | 60,000  |
| t10k-images-idx3-ubyte  | Evaluation images | 10,000  |
| t10k-labels-idx1-ubyte  | Evaluation labels | 10,000  |

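Once the module has downloaded the data, a quick way to sanity-check it is to pull one instance from the training reader. This is our sketch; it assumes PaddlePaddle v2 is installed and simply reflects the $784$-dimensional flattening described earlier:

```python
import paddle.v2 as paddle

# each instance is a (784-dimensional image vector, integer label) pair
reader = paddle.dataset.mnist.train()
image, label = next(reader())
print len(image), label  # 784 and a digit in 0..9
```
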
## Model Configuration

First, import the necessary packages and initialize PaddlePaddle:

```python
import gzip

import paddle.v2 as paddle

# run on CPU with a single trainer thread
paddle.init(use_gpu=False, trainer_count=1)
```

We want to use this program to demonstrate three different classifiers, each defined as a Python function:

- Softmax regression: the network has a single fully-connected layer with softmax activation:

```python
def softmax_regression(img):
    # a single fully-connected layer with softmax activation maps the
    # 784-pixel input directly to the 10 class probabilities
    predict = paddle.layer.fc(input=img, size=10, act=paddle.activation.Softmax())
    return predict
```

- Multilayer Perceptron: the network has two hidden fully-connected layers with ReLU activation, followed by a fully-connected output layer with softmax activation:

```python
def multilayer_perceptron(img):
    # two hidden fully-connected layers with ReLU activation
    hidden1 = paddle.layer.fc(input=img, size=128, act=paddle.activation.Relu())
    hidden2 = paddle.layer.fc(input=hidden1, size=64, act=paddle.activation.Relu())
    # fully-connected output layer with softmax activation
    predict = paddle.layer.fc(input=hidden2, size=10, act=paddle.activation.Softmax())
    return predict
```

- Convolutional network LeNet-5: the input image is fed through two convolution-pooling layers and then a fully-connected layer with softmax output:

```python
def convolutional_neural_network(img):
    # two convolution-pooling layers; the filter counts and sizes below
    # follow common LeNet-style choices and are illustrative
    conv_pool_1 = paddle.networks.simple_img_conv_pool(
        input=img, filter_size=5, num_filters=20, num_channel=1,
        pool_size=2, pool_stride=2, act=paddle.activation.Relu())
    conv_pool_2 = paddle.networks.simple_img_conv_pool(
        input=conv_pool_1, filter_size=5, num_filters=50, num_channel=20,
        pool_size=2, pool_stride=2, act=paddle.activation.Relu())
    # fully-connected output layer with softmax activation
    predict = paddle.layer.fc(input=conv_pool_2, size=10, act=paddle.activation.Softmax())
    return predict

# input data layers: a flattened 28x28 image and its 0-9 label
images = paddle.layer.data(name='pixel', type=paddle.data_type.dense_vector(784))
label = paddle.layer.data(name='label', type=paddle.data_type.integer_value(10))

# pick one of the three classifiers defined above
#predict = softmax_regression(images)          # uncomment for Softmax regression
#predict = multilayer_perceptron(images)       # uncomment for MLP
predict = convolutional_neural_network(images) # uncomment for LeNet5

cost = paddle.layer.classification_cost(input=predict, label=label)
```

Now, it is time to specify the training parameters. In the following `Momentum` optimizer, `momentum=0.9` means that 90% of the current momentum comes from that of the previous iteration. The learning rate controls the speed at which training converges, and regularization helps prevent overfitting; here we use L2 regularization.

```python
parameters = paddle.parameters.create(cost)

# the learning-rate and regularization values here are illustrative;
# the rate is scaled by the batch size of 128 used below
optimizer = paddle.optimizer.Momentum(
    learning_rate=0.1 / 128.0,
    momentum=0.9,
    regularization=paddle.optimizer.L2Regularization(rate=0.0005 * 128))

trainer = paddle.trainer.SGD(cost=cost,
                             parameters=parameters,
                             update_equation=optimizer)
```

Then we specify the training data `paddle.dataset.mnist.train()` and the testing data `paddle.dataset.mnist.test()`. These two functions are *reader creators*: once called, a reader creator returns a *reader*. A reader is a Python function that, when called, returns a Python generator, which in turn yields instances of data.

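The reader-creator convention is easy to mimic in plain Python. The toy example below is ours, with made-up data, and only illustrates the two levels of calls:

```python
def toy_reader_creator():              # a hypothetical reader creator
    def reader():                      # the reader: yields instances when called
        for i in range(4):
            yield [0.0] * 784, i % 10  # one (image vector, label) instance
    return reader

r = toy_reader_creator()               # level 1: create the reader
for image, label in r():               # level 2: iterate over the generator
    print label                        # 0 1 2 3
```
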
`shuffle` is a reader decorator. It takes a reader A as input and returns a new reader B. Under the hood, B calls A to read `buffer_size` instances at a time into a buffer, shuffles the buffer, and then yields the shuffled instances one at a time. A larger buffer size produces more thoroughly shuffled data.

`batch` is a special decorator that takes a reader and returns a *batch reader*, which yields a minibatch at a time instead of a single instance.

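Putting `shuffle` and `batch` together, a training run can be launched as in the sketch below; the buffer size, batch size, and number of passes are illustrative values we picked, not settings mandated by the tutorial:

```python
def event_handler(event):
    # report the test-set cost at the end of every pass
    if isinstance(event, paddle.event.EndPass):
        result = trainer.test(reader=paddle.batch(
            paddle.dataset.mnist.test(), batch_size=128))
        print 'Pass %d, testing cost %s' % (event.pass_id, result.cost)

trainer.train(
    reader=paddle.batch(
        paddle.reader.shuffle(paddle.dataset.mnist.train(), buf_size=8192),
        batch_size=128),
    event_handler=event_handler,
    num_passes=5)
```
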
`event_handler_plot` can be used to plot the training and testing costs as training progresses.

After the training passes finish, the best evaluation result can be printed:

```python
print 'Best pass is %s, testing Avgcost is %s' % (best[0], best[1])
print 'The classification accuracy is %.2f%%' % (100 - float(best[2]) * 100)
```

Usually, with the MNIST data, the softmax regression model achieves an accuracy of around 92.34%, the MLP about 97.66%, and the convolutional network up to around 99.20%. Convolutional layers are widely considered a great invention for image processing.

## Application

To infer the label of an image such as `image/infer_3.png`, we feed it through the trained network and take the class with the highest predicted probability:

```python
import numpy as np

# `test_data` is a batch containing the image to classify (preparation elided)
probs = paddle.infer(output_layer=predict, parameters=parameters, input=test_data)
lab = np.argsort(-probs)  # probs and lab are the results of one batch data
print "Label of image/infer_3.png is: %d" % lab[0][0]
```

## Conclusion

This tutorial describes a few common deep learning models: **softmax regression**, the **Multilayer Perceptron**, and the **Convolutional Neural Network**. Understanding these models is crucial for future learning, as the subsequent tutorials derive more sophisticated networks by building on top of them.

As our model evolves from a simple softmax regression to a slightly more complex Convolutional Neural Network, recognition accuracy on the MNIST dataset improves substantially. This is due to the convolutional layers' local connectivity and parameter sharing. When learning new models in the future, we encourage readers to understand the key ideas that allow a new model to improve on an old one.

Moreover, this tutorial introduces the basic flow of PaddlePaddle model design: starting with a *data provider*, constructing the model layers, and finally training and prediction. Readers can take the flow used in this MNIST handwritten digit classification example and experiment with different data and network architectures to train models for classification tasks of their choice.

## References
......
...@@ -231,7 +231,7 @@ trainer = paddle.trainer.SGD(cost=cost, ...@@ -231,7 +231,7 @@ trainer = paddle.trainer.SGD(cost=cost,
下面`shuffle`是一个reader decorator,它接受一个reader A,返回另一个reader B —— reader B 每次读入`buffer_size`条训练数据到一个buffer里,然后随机打乱其顺序,并且逐条输出。 下面`shuffle`是一个reader decorator,它接受一个reader A,返回另一个reader B —— reader B 每次读入`buffer_size`条训练数据到一个buffer里,然后随机打乱其顺序,并且逐条输出。
`batch`是一个特殊的decorator,它的输入是一个reader,输出是一个batched reader —— 在PaddlePaddle里,一个reader每次yield一条训练数据,而一个batched reader每次yield一个minbatch。 `batch`是一个特殊的decorator,它的输入是一个reader,输出是一个batched reader —— 在PaddlePaddle里,一个reader每次yield一条训练数据,而一个batched reader每次yield一个minibatch。
`event_handler_plot`可以用来在训练过程中画图如下: `event_handler_plot`可以用来在训练过程中画图如下:
......
...@@ -42,10 +42,10 @@ ...@@ -42,10 +42,10 @@
<div id="markdown" style='display:none'> <div id="markdown" style='display:none'>
# Recognize Digits # Recognize Digits
The source code for this tutorial is under [book/recognize_digits](https://github.com/PaddlePaddle/book/tree/develop/02.recognize_digits). First-time readers, please refer to PaddlePaddle [installation instructions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/getstarted/build_and_install/docker_install_en.rst). The source code for this tutorial is live at [book/recognize_digits](https://github.com/PaddlePaddle/book/tree/develop/02.recognize_digits). For instructions on getting started with Paddle, please refer to [installation instructions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/getstarted/build_and_install/docker_install_en.rst).
## Introduction ## Introduction
When we learn a new programming language, the first task is usually to write a program that prints "Hello World." In Machine Learning or Deep Learning, the equivalent task is to train a model to perform handwritten digit recognition with [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. Handwriting recognition is a typical image classification problem. The problem is relatively easy, and MNIST is a complete dataset. As a simple Computer Vision dataset, MNIST contains images of handwritten digits and their corresponding labels (Fig. 1). The input image is a 28x28 matrix, and the label is one of the digits from 0 to 9. Each image is normalized in size and centered. When one learns to program, the first task is usually to write a program that prints "Hello World!". In Machine Learning or Deep Learning, the equivalent task is to train a model to recognize hand-written digits on the dataset [MNIST](http://yann.lecun.com/exdb/mnist/). Handwriting recognition is a classic image classification problem. The problem is relatively easy and MNIST is a complete dataset. As a simple Computer Vision dataset, MNIST contains images of handwritten digits and their corresponding labels (Fig. 1). The input image is a $28\times28$ matrix, and the label is one of the digits from $0$ to $9$. All images are normalized, meaning that they are both rescaled and centered.
<p align="center"> <p align="center">
<img src="image/mnist_example_image.png" width="400"><br/> <img src="image/mnist_example_image.png" width="400"><br/>
...@@ -54,37 +54,37 @@ Fig. 1. Examples of MNIST images ...@@ -54,37 +54,37 @@ Fig. 1. Examples of MNIST images
The MNIST dataset is created from the [NIST](https://www.nist.gov/srd/nist-special-database-19) Special Database 3 (SD-3) and the Special Database 1 (SD-1). The SD-3 is labeled by the staff of the U.S. Census Bureau, while SD-1 is labeled by high school students the in U.S. Therefore the SD-3 is cleaner and easier to recognize than the SD-1 dataset. Yann LeCun et al. used half of the samples from each of SD-1 and SD-3 to create the MNIST training set (60,000 samples) and test set (10,000 samples), where training set was labeled by 250 different annotators, and it was guaranteed that there wasn't a complete overlap of annotators of training set and test set. The MNIST dataset is created from the [NIST](https://www.nist.gov/srd/nist-special-database-19) Special Database 3 (SD-3) and the Special Database 1 (SD-1). The SD-3 is labeled by the staff of the U.S. Census Bureau, while SD-1 is labeled by high school students the in U.S. Therefore the SD-3 is cleaner and easier to recognize than the SD-1 dataset. Yann LeCun et al. used half of the samples from each of SD-1 and SD-3 to create the MNIST training set (60,000 samples) and test set (10,000 samples), where training set was labeled by 250 different annotators, and it was guaranteed that there wasn't a complete overlap of annotators of training set and test set.
Yann LeCun, one of the founders of Deep Learning, contributed highly towards handwritten character recognition in early days and proposed CNN (Convolutional Neural Network), which drastically improved recognition capability for handwritten characters. CNNs are now a critical concept in Deep Learning. From Yann LeCun's first proposal of LeNet to those winning models in ImageNet, such as VGGNet, GoogLeNet, ResNet, etc. (Please refer to [Image Classification](https://github.com/PaddlePaddle/book/tree/develop/03.image_classification) tutorial), CNN achieved a series of impressive results in Image Classification tasks. Yann LeCun, one of the founders of Deep Learning, have previously made tremendous contributions to handwritten character recognition and proposed the **Convolutional Neural Network** (CNN), which drastically improved recognition capability for handwritten characters. CNNs are now a critical concept in Deep Learning. From the LeNet proposal by Yann LeCun, to those winning models in ImageNet competitions, such as VGGNet, GoogLeNet, and ResNet (See [Image Classification](https://github.com/PaddlePaddle/book/tree/develop/03.image_classification) tutorial), CNNs have achieved a series of impressive results in Image Classification tasks.
Many algorithms are tested on MNIST. In 1998, LeCun experimented with single layer linear classifier, MLP (Multilayer Perceptron) and Multilayer CNN LeNet. These algorithms constantly reduced test error from 12% to 0.7% \[[1](#References)\]. Since then, researchers have worked on many algorithms such as k-NN (K-Nearest Neighbors) \[[2](#References)\], Support Vector Machine (SVM) \[[3](#References)\], Neural Networks \[[4-7](#References)\] and Boosting \[[8](#References)\]. Various preprocessing methods like distortion removal, noise removal, blurring etc. have also been applied to increase recognition accuracy. Many algorithms are tested on MNIST. In 1998, LeCun experimented with single layer linear classifier, Multilayer Perceptron (MLP) and Multilayer CNN LeNet. These algorithms quickly reduced test error from 12% to 0.7% \[[1](#references)\]. Since then, researchers have worked on many algorithms such as **K-Nearest Neighbors** (k-NN) \[[2](#references)\], **Support Vector Machine** (SVM) \[[3](#references)\], **Neural Networks** \[[4-7](#references)\] and **Boosting** \[[8](#references)\]. Various preprocessing methods like distortion removal, noise removal, and blurring, have also been applied to increase recognition accuracy.
In this tutorial, we tackle the task of handwritten character recognition. We start with a simple softmax regression model and guide our readers step-by-step to improve this model's performance on the task of recognition. In this tutorial, we tackle the task of handwritten character recognition. We start with a simple **softmax** regression model and guide our readers step-by-step to improve this model's performance on the task of recognition.
## Model Overview ## Model Overview
Before introducing classification algorithms and training procedure, we provide some definitions: Before introducing classification algorithms and training procedure, we define the following symbols:
- $X$ is the input: Input is a $28\times28$ MNIST image. It is flattened to a $784$ dimensional vector. $X=\left ( x_0, x_1, \dots, x_{783} \right )$. - $X$ is the input: Input is a $28\times 28$ MNIST image. It is flattened to a $784$ dimensional vector. $X=\left (x_0, x_1, \dots, x_{783} \right )$.
- $Y$ is the output: Output of the classifier is 1 of the 10 classes (digits from 0 to 9). $Y=\left ( y_0, y_1, \dots, y_9 \right )$. Each dimension $y_i$ represents the probability that the input image belongs to class $i$. - $Y$ is the output: Output of the classifier is 1 of the 10 classes (digits from 0 to 9). $Y=\left (y_0, y_1, \dots, y_9 \right )$. Each dimension $y_i$ represents the probability that the input image belongs to class $i$.
- $L$ is the ground truth label: $L=\left ( l_0, l_1, \dots, l_9 \right )$. It is also 10 dimensional, but only one dimension is 1 and all others are all 0. - $L$ is the ground truth label: $L=\left ( l_0, l_1, \dots, l_9 \right )$. It is also 10 dimensional, but only one entry is $1$ and all others are $0$s.
### Softmax Regression ### Softmax Regression
In a simple softmax regression model, the input is fed to fully connected layers and a softmax function is applied to get probabilities of multiple output classes\[[9](#References)\]. In a simple softmax regression model, the input is first fed to fully connected layers. Then, a softmax function is applied to output probabilities of multiple output classes\[[9](#references)\].
Input $X$ is multiplied with weights $W$, and bias $b$ is added to generate activations. The input $X$ is multiplied by weights $W$ and then added to the bias $b$ to generate activations.
$$ y_i = \text{softmax}(\sum_j W_{i,j}x_j + b_i) $$ $$ y_i = \text{softmax}(\sum_j W_{i,j}x_j + b_i) $$
where $ \text{softmax}(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}} $ where $ \text{softmax}(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}} $
For an $N$ class classification problem with $N$ output nodes, an $N$ dimensional vector is normalized to $N$ real values in the range [0, 1], each representing the probability of the sample to belong to the class. Here $y_i$ is the prediction probability that an image is digit $i$. For an $N$-class classification problem with $N$ output nodes, Softmax normalizes the resulting $N$ dimensional vector so that each of its entries falls in the range $[0,1]\in\math{R}$, representing the probability that the sample belongs to a certain class. Here $y_i$ denotes the predicted probability that an image is of digit $i$.
In such a classification problem, we usually use the cross entropy loss function: In such a classification problem, we usually use the cross entropy loss function:
$$ \text{crossentropy}(label, y) = -\sum_i label_ilog(y_i) $$ $$ \text{crossentropy}(label, y) = -\sum_i label_ilog(y_i) $$
Fig. 2 shows a softmax regression network, with weights in blue, and bias in red. +1 indicates bias is 1. Fig. 2 illustrates a softmax regression network, with the weights in blue, and the bias in red. `+1` indicates that the bias is $1$.
<p align="center"> <p align="center">
<img src="image/softmax_regression_en.png" width=400><br/> <img src="image/softmax_regression_en.png" width=400><br/>
...@@ -93,13 +93,13 @@ Fig. 2. Softmax regression network architecture<br/> ...@@ -93,13 +93,13 @@ Fig. 2. Softmax regression network architecture<br/>
### Multilayer Perceptron ### Multilayer Perceptron
The Softmax regression model described above uses the simplest two-layer neural network, i.e. it only contains an input layer and an output layer. So its regression ability is limited. To achieve better recognition results, we consider adding several hidden layers \[[10](#References)\] between the input layer and the output layer. The softmax regression model described above uses the simplest two-layer neural network. That is, it only contains an input layer and an output layer, with limited regression capability. To achieve better recognition results, consider adding several hidden layers\[[10](#references)\] between the input layer and the output layer.
1. After the first hidden layer, we get $ H_1 = \phi(W_1X + b_1) $, where $\phi$ is the activation function. Some common ones are sigmoid, tanh and ReLU. 1. After the first hidden layer, we get $ H_1 = \phi(W_1X + b_1) $, where $\phi$ denotes the activation function. Some [common ones](###list-of-common-activation-functions) are sigmoid, tanh and ReLU.
2. After the second hidden layer, we get $ H_2 = \phi(W_2H_1 + b_2) $. 2. After the second hidden layer, we get $ H_2 = \phi(W_2H_1 + b_2) $.
3. Finally, after output layer, we get $Y=\text{softmax}(W_3H_2 + b_3)$, the final classification result vector. 3. Finally, the output layer outputs $Y=\text{softmax}(W_3H_2 + b_3)$, the vector denoting our classification result.
Fig. 3. is Multilayer Perceptron network, with weights in blue, and bias in red. +1 indicates bias is 1. Fig. 3. shows a Multilayer Perceptron network, with the weights in blue, and the bias in red. +1 indicates that the bias is $1$.
<p align="center"> <p align="center">
<img src="image/mlp_en.png" width=500><br/> <img src="image/mlp_en.png" width=500><br/>
...@@ -116,18 +116,18 @@ Fig. 3. Multilayer Perceptron network architecture<br/> ...@@ -116,18 +116,18 @@ Fig. 3. Multilayer Perceptron network architecture<br/>
Fig. 4. Convolutional layer<br/> Fig. 4. Convolutional layer<br/>
</p> </p>
The Convolutional layer is the core of a Convolutional Neural Network. The parameters in this layer are composed of a set of filters or kernels. In the forward step, each kernel moves horizontally and vertically, we compute a dot product of the kernel and the input at the corresponding positions, to this result we add bias and apply an activation function. The result is a two-dimensional activation map. For example, some kernel may recognize corners, and some may recognize circles. These convolution kernels may respond strongly to the corresponding features. The **convolutional layer** is the core of a Convolutional Neural Network. The parameters in this layer are composed of a set of filters, also called kernels. We could visualize the convolution step in the following fashion: Each kernel slides horizontally and vertically till it covers the whole image. At every window, we compute the dot product of the kernel and the input. Then, we add the bias and apply an activation function. The result is a two-dimensional activation map. For example, some kernel may recognize corners, and some may recognize circles. These convolution kernels may respond strongly to the corresponding features.
Fig. 4 is a dynamic graph of a convolutional layer, where depths are not shown for simplicity. Input is $W_1=5, H_1=5, D_1=3$. In fact, this is a common representation for colored images. $W_1$ and $H_1$ of a colored image correspond to the width and height respectively. $D_1$ corresponds to the 3 color channels for RGB. The parameters of the convolutional layer are $K=2, F=3, S=2, P=1$. $K$ is the number of kernels. Here, $Filter W_0$ and $Filter W_1$ are two kernels. $F$ is kernel size. $W0$ and $W1$ are both $3\times3$ matrix in all depths. $S$ is the stride. Kernels move leftwards or downwards by 2 units each time. $P$ is padding, an extension of the input. The gray area in the figure shows zero padding with size 1. Fig. 4 illustrates the dynamic programming of a convolutional layer, where depths are flattened for simplicity. The input is $W_1=5$, $H_1=5$, $D_1=3$. In fact, this is a common representation for colored images. $W_1$ and $H_1$ correspond to the width and height in a colored image. $D_1$ corresponds to the 3 color channels for RGB. The parameters of the convolutional layer are $K=2$, $F=3$, $S=2$, $P=1$. $K$ denotes the number of kernels; specifically, $Filter$ $W_0$ and $Filter$ $W_1$ are the kernels. $F$ is kernel size while $W0$ and $W1$ are both $F\timesF = 3\times3$ matrices in all depths. $S$ is the stride, which is the width of the sliding window; here, kernels move leftwards or downwards by 2 units each time. $P$ is the width of the padding, which denotes an extension of the input; here, the gray area shows zero padding with size 1.
#### Pooling Layer #### Pooling Layer
<p align="center"> <p align="center">
<img src="image/max_pooling_en.png" width="400px"><br/> <img src="image/max_pooling_en.png" width="400px"><br/>
Fig. 5 Pooling layer<br/> Fig. 5 Pooling layer using max-pooling<br/>
</p> </p>
A Pooling layer performs downsampling. The main functionality of this layer is to reduce computation by reducing the network parameters. It also prevents overfitting to some extent. Usually, a pooling layer is added after a convolutional layer. Pooling layer can be of various types like max pooling, average pooling, etc. Max pooling uses rectangles to segment the input layer into several parts and computes the maximum value in each part as the output (Fig. 5.) A **pooling layer** performs downsampling. The main functionality of this layer is to reduce computation by reducing the network parameters. It also prevents over-fitting to some extent. Usually, a pooling layer is added after a convolutional layer. Pooling layer can use various techniques, such as max pooling and average pooling. As shown in Fig.5, max pooling uses rectangles to segment the input layer into several parts and computes the maximum value in each part as the output.
#### LeNet-5 Network #### LeNet-5 Network
...@@ -136,13 +136,13 @@ A Pooling layer performs downsampling. The main functionality of this layer is t ...@@ -136,13 +136,13 @@ A Pooling layer performs downsampling. The main functionality of this layer is t
Fig. 6. LeNet-5 Convolutional Neural Network architecture<br/> Fig. 6. LeNet-5 Convolutional Neural Network architecture<br/>
</p> </p>
[LeNet-5](http://yann.lecun.com/exdb/lenet/) is one of the simplest Convolutional Neural Networks. Fig. 6. shows its architecture: A 2-dimensional input image is fed into two sets of convolutional layers and pooling layers, this output is then fed to a fully connected layer and a softmax classifier. The following three properties of convolution enable LeNet-5 to better recognize images than Multilayer fully connected perceptrons: [**LeNet-5**](http://yann.lecun.com/exdb/lenet/) is one of the simplest Convolutional Neural Networks. Fig. 6. shows its architecture: A 2-dimensional input image is fed into two sets of convolutional layers and pooling layers. This output is then fed to a fully connected layer and a softmax classifier. Compared to multilayer, fully connected perceptrons, the LeNet-5 can recognize images better. This is due to the following three properties of the convolution:
- 3D properties of neurons: a convolutional layer is organized by width, height and depth. Neurons in each layer are connected to only a small region in the previous layer. This region is called the receptive field. - The 3D nature of the neurons: a convolutional layer is organized by width, height and depth. Neurons in each layer are connected to only a small region in the previous layer. This region is called the receptive field.
- Local connection: A CNN utilizes the local space correlation by connecting local neurons. This design guarantees that the learned filter has a strong response to local input features. Stacking many such layers generates a non-linear filter that is more global. This enables the network to first obtain good representation for small parts of input and then combine them to represent a larger region. - Local connectivity: A CNN utilizes the local space correlation by connecting local neurons. This design guarantees that the learned filter has a strong response to local input features. Stacking many such layers generates a non-linear filter that is more global. This enables the network to first obtain good representation for small parts of input and then combine them to represent a larger region.
- Sharing weights: In a CNN, computation is iterated on shared parameters (weights and bias) to form a feature map. This means all neurons in the same depth of the output respond to the same feature. This allows detecting a feature regardless of its position in the input and enables translation equivariance. - Weight sharing: In a CNN, computation is iterated on shared parameters (weights and bias) to form a feature map. This means that all the neurons in the same depth of the output respond to the same feature. This allows the network to detect a feature regardless of its position in the input. In other words, it is shift invariant.
For more details on Convolutional Neural Networks, please refer to [this Stanford open course]( http://cs231n.github.io/convolutional-networks/ ) and [this Image Classification](https://github.com/PaddlePaddle/book/blob/develop/image_classification/README.md) tutorial. For more details on Convolutional Neural Networks, please refer to the tutorial on [Image Classification](https://github.com/PaddlePaddle/book/blob/develop/image_classification/README.md) and the [relevant lecture](http://cs231n.github.io/convolutional-networks/) from a Stanford open course.
### List of Common Activation Functions ### List of Common Activation Functions
- Sigmoid activation function: $ f(x) = sigmoid(x) = \frac{1}{1+e^{-x}} $ - Sigmoid activation function: $ f(x) = sigmoid(x) = \frac{1}{1+e^{-x}} $
...@@ -160,12 +160,12 @@ For more information, please refer to [Activation functions on Wikipedia](https: ...@@ -160,12 +160,12 @@ For more information, please refer to [Activation functions on Wikipedia](https:
PaddlePaddle provides a Python module, `paddle.dataset.mnist`, which downloads and caches the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). The cache is under `/home/username/.cache/paddle/dataset/mnist`: PaddlePaddle provides a Python module, `paddle.dataset.mnist`, which downloads and caches the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). The cache is under `/home/username/.cache/paddle/dataset/mnist`:
| File name | Description | | File name | Description | Size |
|----------------------|-------------------------| |----------------------|--------------|-----------|
|train-images-idx3-ubyte| Training images, 60,000 | |train-images-idx3-ubyte| Training images | 60,000 |
|train-labels-idx1-ubyte| Training labels, 60,000 | |train-labels-idx1-ubyte| Training labels | 60,000 |
|t10k-images-idx3-ubyte | Evaluation images, 10,000 | |t10k-images-idx3-ubyte | Evaluation images | 10,000 |
|t10k-labels-idx1-ubyte | Evaluation labels, 10,000 | |t10k-labels-idx1-ubyte | Evaluation labels | 10,000 |
## Model Configuration ## Model Configuration
...@@ -177,9 +177,9 @@ import gzip ...@@ -177,9 +177,9 @@ import gzip
import paddle.v2 as paddle import paddle.v2 as paddle
``` ```
We want to use this program to demonstrate multiple kinds of models. Let define each of them as a Python function: We want to use this program to demonstrate three different classifiers, each defined as a Python function:
- softmax regression: the network has a fully-connection layer with softmax activation: - Softmax regression: the network has a fully-connection layer with softmax activation:
```python ```python
def softmax_regression(img): def softmax_regression(img):
...@@ -189,7 +189,7 @@ def softmax_regression(img): ...@@ -189,7 +189,7 @@ def softmax_regression(img):
return predict return predict
``` ```
- multi-layer perceptron: this network has two hidden fully-connected layers, one with LeRU and the other with softmax activation: - Multi-Layer Perceptron: this network has two hidden fully-connected layers, one with ReLU and the other with softmax activation:
```python ```python
def multilayer_perceptron(img): def multilayer_perceptron(img):
...@@ -203,7 +203,7 @@ def multilayer_perceptron(img): ...@@ -203,7 +203,7 @@ def multilayer_perceptron(img):
return predict return predict
``` ```
- convolution network LeNet-5: the input image is fed through two convolution-pooling layer, a fully-connected layer, and the softmax output layer: - Convolution network LeNet-5: the input image is fed through two convolution-pooling layers, a fully-connected layer, and the softmax output layer:
```python ```python
def convolutional_neural_network(img): def convolutional_neural_network(img):
...@@ -249,7 +249,7 @@ predict = convolutional_neural_network(images) # uncomment for LeNet5 ...@@ -249,7 +249,7 @@ predict = convolutional_neural_network(images) # uncomment for LeNet5
cost = paddle.layer.classification_cost(input=predict, label=label) cost = paddle.layer.classification_cost(input=predict, label=label)
``` ```
Now, it is time to specify training parameters. The number 0.9 in the following `Momentum` optimizer means that 90% of the current the momentum comes from the momentum of the previous iteration. Now, it is time to specify training parameters. In the following `Momentum` optimizer, `momentum=0.9` means that 90% of the current momentum comes from that of the previous iteration. The learning rate relates to the speed at which the network training converges. Regularization is meant to prevent over-fitting; here we use the L2 regularization.
```python ```python
parameters = paddle.parameters.create(cost) parameters = paddle.parameters.create(cost)
...@@ -264,11 +264,11 @@ trainer = paddle.trainer.SGD(cost=cost, ...@@ -264,11 +264,11 @@ trainer = paddle.trainer.SGD(cost=cost,
update_equation=optimizer) update_equation=optimizer)
``` ```
Then we specify the training data `paddle.dataset.movielens.train()` and testing data `paddle.dataset.movielens.test()`. These two functions are *reader creators*, once called, returns a *reader*. A reader is a Python function, which, once called, returns a Python generator, which yields instances of data. Then we specify the training data `paddle.dataset.movielens.train()` and testing data `paddle.dataset.movielens.test()`. These two methods are *reader creators*. Once called, a reader creator returns a *reader*. A reader is a Python method, which, once called, returns a Python generator, which yields instances of data.
Here `shuffle` is a reader decorator, which takes a reader A as its parameter, and returns a new reader B, where B calls A to read in `buffer_size` data instances everytime into a buffer, then shuffles and yield instances in the buffer. If you want very shuffled data, try use a larger buffer size. `shuffle` is a reader decorator. It takes in a reader A as input and returns a new reader B. Under the hood, B calls A to read data in the following fashion: it copies in `buffer_size` instances at a time into a buffer, shuffles the data, and yields the shuffled instances one at a time. A large buffer size would yield very shuffled data.
`batch` is a special decorator, whose input is a reader and output is a *batch reader*, which doesn't yield an instance at a time, but a minibatch. `batch` is a special decorator, which takes in reader and outputs a *batch reader*, which doesn't yield an instance, but a minibatch at a time.
`event_handler_plot` is used to plot a figure like below: `event_handler_plot` is used to plot a figure like below:
...@@ -354,7 +354,7 @@ print 'Best pass is %s, testing Avgcost is %s' % (best[0], best[1]) ...@@ -354,7 +354,7 @@ print 'Best pass is %s, testing Avgcost is %s' % (best[0], best[1])
print 'The classification accuracy is %.2f%%' % (100 - float(best[2]) * 100) print 'The classification accuracy is %.2f%%' % (100 - float(best[2]) * 100)
``` ```
Usually, with MNIST data, the softmax regression model can get accuracy around 92.34%, MLP can get about 97.66%, and convolution network can get up to around 99.20%. Convolution layers have been widely considered a great invention for image processsing. Usually, with MNIST data, the softmax regression model achieves an accuracy around 92.34%, the MLP 97.66%, and the convolution network around 99.20%. Convolution layers have been widely considered a great invention for image processing.
## Application
@@ -378,9 +378,15 @@ lab = np.argsort(-probs) # probs and lab are the results of one batch data
print "Label of image/infer_3.png is: %d" % lab[0][0]
```
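For reference, the complete inference block of the updated script (shown in full in the `.py` diff below) reads an image with the `load_image` helper and picks the most probable digit:

```python
test_data = []
test_data.append((load_image('image/infer_3.png'), ))

probs = paddle.infer(
    output_layer=predict, parameters=parameters, input=test_data)
lab = np.argsort(-probs)  # probs and lab are the results of one batch data
print "Label of image/infer_3.png is: %d" % lab[0][0]
```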
## Conclusion
This tutorial describes a few common deep learning models: **Softmax regression**, the **Multilayer Perceptron**, and the **Convolutional Neural Network**. Understanding these models is crucial for future learning, since the subsequent tutorials derive more sophisticated networks by building on top of them.

When our model evolves from a simple softmax regression to a slightly more complex Convolutional Neural Network, recognition accuracy on the MNIST dataset improves substantially, thanks to the convolutional layers' local connections and parameter sharing. While learning new models in the future, we encourage readers to understand the key ideas that allow a new model to improve on the results of an old one.

Moreover, this tutorial introduces the basic flow of PaddlePaddle model design, which starts with a *data provider*, proceeds to model layer construction, and ends with training and prediction. Motivated readers can follow the flow used in this MNIST handwritten digit classification example and experiment with different data and network architectures to train models for classification tasks of their choice.
## References
......
@@ -273,7 +273,7 @@ trainer = paddle.trainer.SGD(cost=cost,
`shuffle` below is a reader decorator: it takes a reader A and returns another reader B, where B reads `buffer_size` training instances into a buffer at a time, shuffles their order, and then yields them one by one.

`batch` is a special decorator whose input is a reader and whose output is a batched reader. In PaddlePaddle, a reader yields one training instance at a time, while a batched reader yields a minibatch at a time.

`event_handler_plot` can be used to plot figures during training, as shown below:
......
@@ -48,77 +48,79 @@ def convolutional_neural_network(img):
    return predict


def main():
    paddle.init(use_gpu=False, trainer_count=1)

    # define network topology
    images = paddle.layer.data(
        name='pixel', type=paddle.data_type.dense_vector(784))
    label = paddle.layer.data(
        name='label', type=paddle.data_type.integer_value(10))

    # Here we can build the prediction network in different ways. Please
    # choose one by uncommenting the corresponding line.
    # predict = softmax_regression(images)
    # predict = multilayer_perceptron(images)
    predict = convolutional_neural_network(images)

    cost = paddle.layer.classification_cost(input=predict, label=label)
    parameters = paddle.parameters.create(cost)

    optimizer = paddle.optimizer.Momentum(
        learning_rate=0.1 / 128.0,
        momentum=0.9,
        regularization=paddle.optimizer.L2Regularization(rate=0.0005 * 128))

    trainer = paddle.trainer.SGD(
        cost=cost, parameters=parameters, update_equation=optimizer)

    lists = []

    def event_handler(event):
        if isinstance(event, paddle.event.EndIteration):
            if event.batch_id % 100 == 0:
                print "Pass %d, Batch %d, Cost %f, %s" % (
                    event.pass_id, event.batch_id, event.cost, event.metrics)
        if isinstance(event, paddle.event.EndPass):
            # save parameters
            with gzip.open('params_pass_%d.tar.gz' % event.pass_id, 'w') as f:
                parameters.to_tar(f)

            result = trainer.test(reader=paddle.batch(
                paddle.dataset.mnist.test(), batch_size=128))
            print "Test with Pass %d, Cost %f, %s\n" % (
                event.pass_id, result.cost, result.metrics)
            lists.append((event.pass_id, result.cost,
                          result.metrics['classification_error_evaluator']))

    trainer.train(
        reader=paddle.batch(
            paddle.reader.shuffle(paddle.dataset.mnist.train(), buf_size=8192),
            batch_size=128),
        event_handler=event_handler,
        num_passes=5)

    # find the best pass
    best = sorted(lists, key=lambda list: float(list[1]))[0]
    print 'Best pass is %s, testing Avgcost is %s' % (best[0], best[1])
    print 'The classification accuracy is %.2f%%' % (100 - float(best[2]) * 100)

    def load_image(file):
        im = Image.open(file).convert('L')
        im = im.resize((28, 28), Image.ANTIALIAS)
        im = np.array(im).astype(np.float32).flatten()
        im = im / 255.0
        return im

    test_data = []
    test_data.append((load_image('image/infer_3.png'), ))

    probs = paddle.infer(
        output_layer=predict, parameters=parameters, input=test_data)
    lab = np.argsort(-probs)  # probs and lab are the results of one batch data
    print "Label of image/infer_3.png is: %d" % lab[0][0]


if __name__ == '__main__':
    main()
@@ -211,6 +211,7 @@ Here we fetch the dictionary, and print its size:
```python
import math
import numpy as np
import gzip
import paddle.v2 as paddle
import paddle.v2.dataset.conll05 as conll05
@@ -373,11 +374,11 @@ crf_cost = paddle.layer.crf(
```python
crf_dec = paddle.layer.crf_decoding(
    size=label_dict_len,
    input=feature_out,
    label=target,
    param_attr=paddle.attr.Param(name='crfw'))
evaluator.sum(input=crf_dec)
```
## Train model
@@ -387,7 +388,7 @@ crf_dec = paddle.layer.crf_decoding(
All necessary parameters will be traced and created, given the output layers that we need to use.
```python
parameters = paddle.parameters.create(crf_cost)
```
We can print out the parameter names; a name is generated automatically if it is not specified.
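As a quick check, the names can be listed directly; this is a minimal sketch assuming the v2 `Parameters` object exposes a `keys()` accessor:

```python
print parameters.keys()  # the shared CRF weight 'crfw' should appear here
```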
@@ -420,7 +421,8 @@ optimizer = paddle.optimizer.Momentum(
trainer = paddle.trainer.SGD(cost=crf_cost,
                             parameters=parameters,
                             update_equation=optimizer,
                             extra_layers=crf_dec)
```
### Trainer
@@ -455,8 +457,19 @@ feeding = {
def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print "Pass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
if event.batch_id % 1000 == 0:
result = trainer.test(reader=reader, feeding=feeding)
print "\nTest with Pass %d, Batch %d, %s" % (event.pass_id, event.batch_id, result.metrics)
if isinstance(event, paddle.event.EndPass):
# save parameters
with gzip.open('params_pass_%d.tar.gz' % event.pass_id, 'w') as f:
parameters.to_tar(f)
result = trainer.test(reader=reader, feeding=feeding)
print "\nTest with Pass %d, %s" % (event.pass_id, result.metrics)
```
`trainer.train` will train the model.
@@ -469,6 +482,42 @@ trainer.train(
    feeding=feeding)
```
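The full call elided by the hunk above appears in the `.py` diff later in this commit; after this change it runs a single pass over the (test) data:

```python
trainer.train(
    reader=reader,
    event_handler=event_handler,
    num_passes=1,
    feeding=feeding)
```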
### Application
After training is done, we select an optimal model according to a performance metric of interest and use it for inference. In this task, one can simply pick the model with the fewest labeling errors on the test set. The `paddle.layer.crf_decoding` layer is used for inference; unlike in training, its input does not include the ground-truth label.
```python
predict = paddle.layer.crf_decoding(
size=label_dict_len,
input=feature_out,
param_attr=paddle.attr.Param(name='crfw'))
```
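To decode with a previously saved model instead of the in-memory parameters, one would first reload a tarball written by the event handler. A minimal sketch, assuming pass 0 happened to be the best and using the v2 `Parameters.from_tar` loader:

```python
import gzip

# 'params_pass_0.tar.gz' is a hypothetical best pass; pick the pass with
# the fewest labeling errors on the test set
with gzip.open('params_pass_0.tar.gz') as f:
    parameters = paddle.parameters.Parameters.from_tar(f)
```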
Here we take one sample from the test set as an example; `item[0:8]` keeps the eight input features and drops the ground-truth label.
```python
test_creator = paddle.dataset.conll05.test()
test_data = []
for item in test_creator():
test_data.append(item[0:8])
if len(test_data) == 1:
break
```
The inference interface `paddle.infer` returns the indices of the predicted labels. We then build the reverse dictionary `labels_reverse` and print out the tagging results.
```python
labs = paddle.infer(
output_layer=predict, parameters=parameters, input=test_data, field='id')
assert len(labs) == len(test_data[0][0])
labels_reverse = {}
for (k, v) in label_dict.items():
    labels_reverse[v] = k
pre_lab = [labels_reverse[i] for i in labs]
print pre_lab
```
## Conclusion
Semantic Role Labeling is an important intermediate step in a wide range of natural language processing tasks. In this tutorial, we use SRL as an example to illustrate how to perform sequence tagging with PaddlePaddle. The model presented comes from our published paper\[[10](#Reference)\]. Since the training data of the CoNLL 2005 task is not completely public, we use only its test data for illustration. Our goal is an end-to-end neural network model that depends less on natural language processing tools yet performs comparably to, or even better than, traditional models. Please check out our paper for more information and discussion.
......
@@ -189,6 +189,7 @@ conll05st-release/
```python
import math
import numpy as np
import gzip
import paddle.v2 as paddle
import paddle.v2.dataset.conll05 as conll05
@@ -350,11 +351,11 @@ crf_cost = paddle.layer.crf(
```python
crf_dec = paddle.layer.crf_decoding(
    size=label_dict_len,
    input=feature_out,
    label=target,
    param_attr=paddle.attr.Param(name='crfw'))
evaluator.sum(input=crf_dec)
```
## Train model
@@ -365,7 +366,7 @@ crf_dec = paddle.layer.crf_decoding(
```python
# create parameters
parameters = paddle.parameters.create(crf_cost)
```
We can print out the parameter names; if no name is specified in the network configuration, a default one is generated.
@@ -400,7 +401,8 @@ optimizer = paddle.optimizer.Momentum(
trainer = paddle.trainer.SGD(cost=crf_cost,
                             parameters=parameters,
                             update_equation=optimizer,
                             extra_layers=crf_dec)
```
### Training
@@ -436,8 +438,19 @@ feeding = {
def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print "Pass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
if event.batch_id % 1000 == 0:
result = trainer.test(reader=reader, feeding=feeding)
print "\nTest with Pass %d, Batch %d, %s" % (event.pass_id, event.batch_id, result.metrics)
if isinstance(event, paddle.event.EndPass):
# save parameters
with gzip.open('params_pass_%d.tar.gz' % event.pass_id, 'w') as f:
parameters.to_tar(f)
result = trainer.test(reader=reader, feeding=feeding)
print "\nTest with Pass %d, %s" % (event.pass_id, result.metrics)
```
Train with the `trainer.train` function:
@@ -450,6 +463,41 @@ trainer.train(
    feeding=feeding)
```
### Apply the model
After training completes, we select the best model for prediction according to a performance metric we care about; one simple choice is the model with the fewest labeling errors on the test set. Prediction uses `paddle.layer.crf_decoding`; unlike in training, this layer does not take the ground-truth label as input, as shown below:
```python
predict = paddle.layer.crf_decoding(
size=label_dict_len,
input=feature_out,
param_attr=paddle.attr.Param(name='crfw'))
```
Here we take one sample from the test set as an example.
```python
test_creator = paddle.dataset.conll05.test()
test_data = []
for item in test_creator():
test_data.append(item[0:8])
if len(test_data) == 1:
break
```
The inference interface `paddle.infer` returns the indices of the labels; we then look them up in the reverse dictionary `labels_reverse` and print out the tagging results.
```python
labs = paddle.infer(
output_layer=predict, parameters=parameters, input=test_data, field='id')
assert len(labs) == len(test_data[0][0])
labels_reverse = {}
for (k, v) in label_dict.items():
    labels_reverse[v] = k
pre_lab = [labels_reverse[i] for i in labs]
print pre_lab
```
## Summary
Semantic role labeling is an important intermediate step in many natural language understanding tasks. In this tutorial we take the SRL task as an example to show how to use PaddlePaddle for sequence tagging. The model introduced comes from our published paper\[[10](#参考文献)\]. Since the training data of the CoNLL 2005 SRL task is not fully open, this tutorial uses only its test data as an illustration. We hope to reduce the dependence on other natural language processing tools and, through the data-driven, end-to-end learning ability of neural networks, obtain a model that is comparable to or even better than traditional methods; in the paper we confirmed this possibility. More information and discussion about the model can be found in the paper.
......
@@ -253,6 +253,7 @@ Here we fetch the dictionary, and print its size:
```python
import math
import numpy as np
import gzip
import paddle.v2 as paddle
import paddle.v2.dataset.conll05 as conll05
@@ -415,11 +416,11 @@ crf_cost = paddle.layer.crf(
```python
crf_dec = paddle.layer.crf_decoding(
    size=label_dict_len,
    input=feature_out,
    label=target,
    param_attr=paddle.attr.Param(name='crfw'))
evaluator.sum(input=crf_dec)
```
## Train model
@@ -429,7 +430,7 @@ crf_dec = paddle.layer.crf_decoding(
All necessary parameters will be traced and created, given the output layers that we need to use.
```python
parameters = paddle.parameters.create(crf_cost)
```
We can print out the parameter names; a name is generated automatically if it is not specified.
@@ -462,7 +463,8 @@ optimizer = paddle.optimizer.Momentum(
trainer = paddle.trainer.SGD(cost=crf_cost,
                             parameters=parameters,
                             update_equation=optimizer,
                             extra_layers=crf_dec)
```
### Trainer
@@ -497,8 +499,19 @@ feeding = {
def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print "Pass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
if event.batch_id % 1000 == 0:
result = trainer.test(reader=reader, feeding=feeding)
print "\nTest with Pass %d, Batch %d, %s" % (event.pass_id, event.batch_id, result.metrics)
if isinstance(event, paddle.event.EndPass):
# save parameters
with gzip.open('params_pass_%d.tar.gz' % event.pass_id, 'w') as f:
parameters.to_tar(f)
result = trainer.test(reader=reader, feeding=feeding)
print "\nTest with Pass %d, %s" % (event.pass_id, result.metrics)
```
`trainer.train` will train the model.
@@ -511,6 +524,42 @@ trainer.train(
    feeding=feeding)
```
### Application
After training is done, we select an optimal model according to a performance metric of interest and use it for inference. In this task, one can simply pick the model with the fewest labeling errors on the test set. The `paddle.layer.crf_decoding` layer is used for inference; unlike in training, its input does not include the ground-truth label.
```python
predict = paddle.layer.crf_decoding(
size=label_dict_len,
input=feature_out,
param_attr=paddle.attr.Param(name='crfw'))
```
Here we take one sample from the test set as an example; `item[0:8]` keeps the eight input features and drops the ground-truth label.
```python
test_creator = paddle.dataset.conll05.test()
test_data = []
for item in test_creator():
test_data.append(item[0:8])
if len(test_data) == 1:
break
```
The inference interface `paddle.infer` returns the indices of the predicted labels. We then build the reverse dictionary `labels_reverse` and print out the tagging results.
```python
labs = paddle.infer(
output_layer=predict, parameters=parameters, input=test_data, field='id')
assert len(labs) == len(test_data[0][0])
labels_reverse = {}
for (k, v) in label_dict.items():
    labels_reverse[v] = k
pre_lab = [labels_reverse[i] for i in labs]
print pre_lab
```
## Conclusion
Semantic Role Labeling is an important intermediate step in a wide range of natural language processing tasks. In this tutorial, we use SRL as an example to illustrate how to perform sequence tagging with PaddlePaddle. The model presented comes from our published paper\[[10](#Reference)\]. Since the training data of the CoNLL 2005 task is not completely public, we use only its test data for illustration. Our goal is an end-to-end neural network model that depends less on natural language processing tools yet performs comparably to, or even better than, traditional models. Please check out our paper for more information and discussion.
......
@@ -231,6 +231,7 @@ conll05st-release/
```python
import math
import numpy as np
import gzip
import paddle.v2 as paddle
import paddle.v2.dataset.conll05 as conll05
@@ -392,11 +393,11 @@ crf_cost = paddle.layer.crf(
```python
crf_dec = paddle.layer.crf_decoding(
    size=label_dict_len,
    input=feature_out,
    label=target,
    param_attr=paddle.attr.Param(name='crfw'))
evaluator.sum(input=crf_dec)
```
## Train model
@@ -407,7 +408,7 @@ crf_dec = paddle.layer.crf_decoding(
```python
# create parameters
parameters = paddle.parameters.create(crf_cost)
```
We can print out the parameter names; if no name is specified in the network configuration, a default one is generated.
@@ -442,7 +443,8 @@ optimizer = paddle.optimizer.Momentum(
trainer = paddle.trainer.SGD(cost=crf_cost,
                             parameters=parameters,
                             update_equation=optimizer,
                             extra_layers=crf_dec)
```
### Training
@@ -478,8 +480,19 @@ feeding = {
def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print "Pass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
if event.batch_id % 1000 == 0:
result = trainer.test(reader=reader, feeding=feeding)
print "\nTest with Pass %d, Batch %d, %s" % (event.pass_id, event.batch_id, result.metrics)
if isinstance(event, paddle.event.EndPass):
# save parameters
with gzip.open('params_pass_%d.tar.gz' % event.pass_id, 'w') as f:
parameters.to_tar(f)
result = trainer.test(reader=reader, feeding=feeding)
print "\nTest with Pass %d, %s" % (event.pass_id, result.metrics)
```
Train with the `trainer.train` function:
@@ -492,6 +505,41 @@ trainer.train(
    feeding=feeding)
```
### Apply the model
After training completes, we select the best model for prediction according to a performance metric we care about; one simple choice is the model with the fewest labeling errors on the test set. Prediction uses `paddle.layer.crf_decoding`; unlike in training, this layer does not take the ground-truth label as input, as shown below:
```python
predict = paddle.layer.crf_decoding(
size=label_dict_len,
input=feature_out,
param_attr=paddle.attr.Param(name='crfw'))
```
Here we take one sample from the test set as an example.
```python
test_creator = paddle.dataset.conll05.test()
test_data = []
for item in test_creator():
test_data.append(item[0:8])
if len(test_data) == 1:
break
```
The inference interface `paddle.infer` returns the indices of the labels; we then look them up in the reverse dictionary `labels_reverse` and print out the tagging results.
```python
labs = paddle.infer(
output_layer=predict, parameters=parameters, input=test_data, field='id')
assert len(labs) == len(test_data[0][0])
labels_reverse = {}
for (k, v) in label_dict.items():
    labels_reverse[v] = k
pre_lab = [labels_reverse[i] for i in labs]
print pre_lab
```
## Summary
Semantic role labeling is an important intermediate step in many natural language understanding tasks. In this tutorial we take the SRL task as an example to show how to use PaddlePaddle for sequence tagging. The model introduced comes from our published paper\[[10](#参考文献)\]. Since the training data of the CoNLL 2005 SRL task is not fully open, this tutorial uses only its test data as an illustration. We hope to reduce the dependence on other natural language processing tools and, through the data-driven, end-to-end learning ability of neural networks, obtain a model that is comparable to or even better than traditional methods; in the paper we confirmed this possibility. More information and discussion about the model can be found in the paper.
......
import math
import numpy as np
import gzip
import paddle.v2 as paddle
import paddle.v2.dataset.conll05 as conll05
import paddle.v2.evaluator as evaluator

word_dict, verb_dict, label_dict = conll05.get_dict()
word_dict_len = len(word_dict)
label_dict_len = len(label_dict)
pred_len = len(verb_dict)

mark_dict_len = 2
word_dim = 32
mark_dim = 5
hidden_dim = 512
depth = 8
default_std = 1 / math.sqrt(hidden_dim) / 3.0
mix_hidden_lr = 1e-3


def d_type(size):
    return paddle.data_type.integer_value_sequence(size)


def db_lstm():
    #8 features
    word = paddle.layer.data(name='word_data', type=d_type(word_dict_len))
    predicate = paddle.layer.data(name='verb_data', type=d_type(pred_len))
@@ -30,11 +35,8 @@ def db_lstm():
    ctx_p2 = paddle.layer.data(name='ctx_p2_data', type=d_type(word_dict_len))
    mark = paddle.layer.data(name='mark_data', type=d_type(mark_dict_len))

    emb_para = paddle.attr.Param(name='emb', initial_std=0., is_static=True)
    std_0 = paddle.attr.Param(initial_std=0.)
    std_default = paddle.attr.Param(initial_std=default_std)

    predicate_embedding = paddle.layer.embedding(
@@ -60,7 +62,6 @@ def db_lstm():
        input=emb, param_attr=std_default) for emb in emb_layers
    ])

    lstm_para_attr = paddle.attr.Param(initial_std=0.0, learning_rate=1.0)
    hidden_para_attr = paddle.attr.Param(
        initial_std=default_std, learning_rate=mix_hidden_lr)
@@ -108,21 +109,7 @@ def db_lstm():
            input=input_tmp[1], param_attr=lstm_para_attr)
    ], )

    return feature_out


def load_parameter(file_name, h, w):
@@ -135,10 +122,24 @@ def main():
    paddle.init(use_gpu=False, trainer_count=1)

    # define network topology
    feature_out = db_lstm()
    target = paddle.layer.data(name='target', type=d_type(label_dict_len))
    crf_cost = paddle.layer.crf(
        size=label_dict_len,
        input=feature_out,
        label=target,
        param_attr=paddle.attr.Param(
            name='crfw', initial_std=default_std, learning_rate=mix_hidden_lr))
    crf_dec = paddle.layer.crf_decoding(
        size=label_dict_len,
        input=feature_out,
        label=target,
        param_attr=paddle.attr.Param(name='crfw'))
    evaluator.sum(input=crf_dec)

    # create parameters
    parameters = paddle.parameters.create(crf_cost)
    parameters.set('emb', load_parameter(conll05.get_embedding(), 44068, 32))

    # create optimizer
@@ -150,7 +151,10 @@ def main():
            average_window=0.5, max_average_window=10000), )

    trainer = paddle.trainer.SGD(
        cost=crf_cost,
        parameters=parameters,
        update_equation=optimizer,
        extra_layers=crf_dec)

    reader = paddle.batch(
        paddle.reader.shuffle(conll05.test(), buf_size=8192), batch_size=10)
@@ -170,15 +174,50 @@ def main():
    def event_handler(event):
        if isinstance(event, paddle.event.EndIteration):
            if event.batch_id % 100 == 0:
                print "Pass %d, Batch %d, Cost %f, %s" % (
                    event.pass_id, event.batch_id, event.cost, event.metrics)
            if event.batch_id % 1000 == 0:
                result = trainer.test(reader=reader, feeding=feeding)
                print "\nTest with Pass %d, Batch %d, %s" % (
                    event.pass_id, event.batch_id, result.metrics)
        if isinstance(event, paddle.event.EndPass):
            # save parameters
            with gzip.open('params_pass_%d.tar.gz' % event.pass_id, 'w') as f:
                parameters.to_tar(f)

            result = trainer.test(reader=reader, feeding=feeding)
            print "\nTest with Pass %d, %s" % (event.pass_id, result.metrics)

    trainer.train(
        reader=reader,
        event_handler=event_handler,
        num_passes=1,
        feeding=feeding)

    test_creator = paddle.dataset.conll05.test()
    test_data = []
    for item in test_creator():
        test_data.append(item[0:8])
        if len(test_data) == 1:
            break

    predict = paddle.layer.crf_decoding(
        size=label_dict_len,
        input=feature_out,
        param_attr=paddle.attr.Param(name='crfw'))

    probs = paddle.infer(
        output_layer=predict,
        parameters=parameters,
        input=test_data,
        field='id')
    assert len(probs) == len(test_data[0][0])
    labels_reverse = {}
    for (k, v) in label_dict.items():
        labels_reverse[v] = k
    pre_lab = [labels_reverse[i] for i in probs]
    print pre_lab


if __name__ == '__main__':
    main()
@@ -238,6 +238,7 @@ mov_categories = paddle.layer.data(
        len(paddle.dataset.movielens.movie_categories())))
mov_categories_hidden = paddle.layer.fc(input=mov_categories, size=32)
movie_title_dict = paddle.dataset.movielens.get_movie_title_dict()
mov_title_id = paddle.layer.data(
    name='movie_title',
    type=paddle.data_type.integer_value_sequence(len(movie_title_dict)))
......
@@ -244,6 +244,7 @@ mov_categories = paddle.layer.data(
        len(paddle.dataset.movielens.movie_categories())))
mov_categories_hidden = paddle.layer.fc(input=mov_categories, size=32)
movie_title_dict = paddle.dataset.movielens.get_movie_title_dict()
mov_title_id = paddle.layer.data(
    name='movie_title',
    type=paddle.data_type.integer_value_sequence(len(movie_title_dict)))
......
@@ -280,6 +280,7 @@ mov_categories = paddle.layer.data(
        len(paddle.dataset.movielens.movie_categories())))
mov_categories_hidden = paddle.layer.fc(input=mov_categories, size=32)
movie_title_dict = paddle.dataset.movielens.get_movie_title_dict()
mov_title_id = paddle.layer.data(
    name='movie_title',
    type=paddle.data_type.integer_value_sequence(len(movie_title_dict)))
......
@@ -286,6 +286,7 @@ mov_categories = paddle.layer.data(
        len(paddle.dataset.movielens.movie_categories())))
mov_categories_hidden = paddle.layer.fc(input=mov_categories, size=32)
movie_title_dict = paddle.dataset.movielens.get_movie_title_dict()
mov_title_id = paddle.layer.data(
    name='movie_title',
    type=paddle.data_type.integer_value_sequence(len(movie_title_dict)))
......
@@ -3,9 +3,7 @@ import cPickle
import copy


def get_usr_combined_features():
    uid = paddle.layer.data(
        name='user_id',
        type=paddle.data_type.integer_value(
@@ -36,7 +34,11 @@ def main():
        input=[usr_fc, usr_gender_fc, usr_age_fc, usr_job_fc],
        size=200,
        act=paddle.activation.Tanh())
    return usr_combined_features


def get_mov_combined_features():
    movie_title_dict = paddle.dataset.movielens.get_movie_title_dict()
    mov_id = paddle.layer.data(
        name='movie_id',
        type=paddle.data_type.integer_value(
@@ -61,7 +63,13 @@ def main():
        input=[mov_fc, mov_categories_hidden, mov_title_conv],
        size=200,
        act=paddle.activation.Tanh())
    return mov_combined_features


def main():
    paddle.init(use_gpu=False)

    usr_combined_features = get_usr_combined_features()
    mov_combined_features = get_mov_combined_features()

    inference = paddle.layer.cos_sim(
        a=usr_combined_features, b=mov_combined_features, size=1, scale=5)
    cost = paddle.layer.mse_cost(
......