Commit 5ea3c435 authored by Luo Tao

refine mnist demo

Parent fafb9e80
This diff is collapsed.
@@ -196,34 +196,33 @@ def convolutional_neural_network(img):
PaddlePaddle provides a special layer `layer.data` for reading data. Let us create a data layer for reading images and connect it to a classification network created using one of the above three functions. We also need a cost layer for training the model.
```python
paddle.init(use_gpu=False, trainer_count=1)

images = paddle.layer.data(
    name='pixel', type=paddle.data_type.dense_vector(784))
label = paddle.layer.data(
    name='label', type=paddle.data_type.integer_value(10))

predict = softmax_regression(images)
#predict = multilayer_perceptron(images) # uncomment for MLP
#predict = convolutional_neural_network(images) # uncomment for LeNet5

cost = paddle.layer.classification_cost(input=predict, label=label)
```
Now, it is time to specify training parameters. The number 0.9 in the following `Momentum` optimizer means that 90% of the current momentum comes from the momentum of the previous iteration.
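The 0.9 can be read as the decay factor of the classical momentum update. The following tiny sketch is our own illustration, not PaddlePaddle code; the gradients are made up, and PaddlePaddle's real update also folds in learning-rate scheduling and regularization:

```python
# velocity = momentum * velocity + learning_rate * gradient
# weight   = weight - velocity
momentum = 0.9
learning_rate = 0.1 / 128.0

velocity, weight = 0.0, 1.0
for gradient in [0.5, 0.4, 0.3]:  # made-up gradients, for illustration only
    velocity = momentum * velocity + learning_rate * gradient
    weight -= velocity
    print weight, velocity  # 90% of each step's velocity carries over to the next
```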
```python ```python
parameters = paddle.parameters.create(cost) parameters = paddle.parameters.create(cost)
optimizer = paddle.optimizer.Momentum( optimizer = paddle.optimizer.Momentum(
learning_rate=0.1 / 128.0, learning_rate=0.1 / 128.0,
momentum=0.9, momentum=0.9,
regularization=paddle.optimizer.L2Regularization(rate=0.0005 * 128)) regularization=paddle.optimizer.L2Regularization(rate=0.0005 * 128))
trainer = paddle.trainer.SGD(cost=cost, trainer = paddle.trainer.SGD(cost=cost,
parameters=parameters, parameters=parameters,
update_equation=optimizer) update_equation=optimizer)
``` ```
Then we specify the training data `paddle.dataset.mnist.train()` and testing data `paddle.dataset.mnist.test()`. These two functions are *reader creators*; once called, they return a *reader*. A reader is a Python function which, once called, returns a Python generator that yields instances of data.
@@ -233,48 +232,48 @@

Here `shuffle` is a reader decorator, which takes a reader A as its parameter, and returns a new reader B, where B calls A to read `buffer_size` data instances at a time into a buffer, then shuffles and yields the instances in the buffer. If you want well-shuffled data, try using a larger buffer size.

`batch` is a special decorator, whose input is a reader and whose output is a *batch reader*, which doesn't yield an instance at a time, but a minibatch.
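To make the reader vocabulary concrete, here is a minimal, self-contained sketch in plain Python. It is our own illustration, not the PaddlePaddle implementation; `random_digit_reader_creator` and `toy_batch` are made-up names:

```python
import random

# A *reader creator*: calling it returns a reader.
def random_digit_reader_creator(n=10):
    # The *reader* is a function that, when called, returns a generator
    # yielding one instance at a time (here a fake 784-dim image and a label).
    def reader():
        for _ in range(n):
            yield [random.random() for _ in range(784)], random.randint(0, 9)
    return reader

# A toy decorator in the same spirit as `batch`: it wraps a reader and
# yields minibatches instead of single instances.
def toy_batch(reader, batch_size):
    def batch_reader():
        batch = []
        for instance in reader():
            batch.append(instance)
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:
            yield batch
    return batch_reader

for minibatch in toy_batch(random_digit_reader_creator(10), batch_size=4)():
    print len(minibatch)  # prints 4, 4, 2
```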
```python
lists = []

def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print "Pass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
    if isinstance(event, paddle.event.EndPass):
        result = trainer.test(reader=paddle.reader.batched(
            paddle.dataset.mnist.test(), batch_size=128))
        print "Test with Pass %d, Cost %f, %s\n" % (
            event.pass_id, result.cost, result.metrics)
        lists.append((event.pass_id, result.cost,
                      result.metrics['classification_error_evaluator']))

trainer.train(
    reader=paddle.reader.batched(
        paddle.reader.shuffle(
            paddle.dataset.mnist.train(), buf_size=8192),
        batch_size=128),
    event_handler=event_handler,
    num_passes=100)
```
During training, `trainer.train` invokes `event_handler` for certain events. This gives us a chance to print the training progress.
```
# Pass 0, Batch 0, Cost 2.780790, {'classification_error_evaluator': 0.9453125}
# Pass 0, Batch 100, Cost 0.635356, {'classification_error_evaluator': 0.2109375}
# Pass 0, Batch 200, Cost 0.326094, {'classification_error_evaluator': 0.1328125}
# Pass 0, Batch 300, Cost 0.361920, {'classification_error_evaluator': 0.1015625}
# Pass 0, Batch 400, Cost 0.410101, {'classification_error_evaluator': 0.125}
# Test with Pass 0, Cost 0.326659, {'classification_error_evaluator': 0.09470000118017197}
```
After the training, we can check the model's prediction accuracy.
```python
# find the best pass
best = sorted(lists, key=lambda list: float(list[1]))[0]
print 'Best pass is %s, testing Avgcost is %s' % (best[0], best[1])
print 'The classification accuracy is %.2f%%' % (100 - float(best[2]) * 100)
```
Usually, with MNIST data, the softmax regression model can get an accuracy around 92.34%, the MLP about 97.66%, and the convolutional network up to around 99.20%. Convolution layers have been widely considered a great invention for image processing.
......
@@ -195,20 +195,19 @@ def convolutional_neural_network(img):
Next, we fetch the data via a `layer.data` call and feed it to a classifier (we provide three different classifiers here) to obtain the classification result. During training, we compute a loss function on this result; for classification problems, the cross-entropy loss is the usual choice.
```python
# This model runs on a single CPU
paddle.init(use_gpu=False, trainer_count=1)

images = paddle.layer.data(
    name='pixel', type=paddle.data_type.dense_vector(784))
label = paddle.layer.data(
    name='label', type=paddle.data_type.integer_value(10))

predict = softmax_regression(images) # Softmax regression
#predict = multilayer_perceptron(images) # multilayer perceptron
#predict = convolutional_neural_network(images) # LeNet-5 convolutional neural network

cost = paddle.layer.classification_cost(input=predict, label=label)
```
Then, specify the training-related parameters.
@@ -217,16 +216,16 @@ def main():
- regularization: a technique to prevent the network from overfitting; here we use L2 regularization.
```python
parameters = paddle.parameters.create(cost)

optimizer = paddle.optimizer.Momentum(
    learning_rate=0.1 / 128.0,
    momentum=0.9,
    regularization=paddle.optimizer.L2Regularization(rate=0.0005 * 128))

trainer = paddle.trainer.SGD(cost=cost,
                             parameters=parameters,
                             update_equation=optimizer)
```
Next, we start the training process. `paddle.dataset.mnist.train()` and `paddle.dataset.mnist.test()` serve as the training and test datasets, respectively. Each of these functions returns a reader: in PaddlePaddle, a reader is a Python function that returns a Python yield generator each time it is called.
@@ -236,39 +235,39 @@ def main():
`batch` is a special decorator whose input is a reader and whose output is a batched reader: in PaddlePaddle, a reader yields one training instance at a time, while a batched reader yields a minibatch at a time.
```python
lists = []

def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print "Pass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
    if isinstance(event, paddle.event.EndPass):
        result = trainer.test(reader=paddle.reader.batched(
            paddle.dataset.mnist.test(), batch_size=128))
        print "Test with Pass %d, Cost %f, %s\n" % (
            event.pass_id, result.cost, result.metrics)
        lists.append((event.pass_id, result.cost,
                      result.metrics['classification_error_evaluator']))

trainer.train(
    reader=paddle.reader.batched(
        paddle.reader.shuffle(
            paddle.dataset.mnist.train(), buf_size=8192),
        batch_size=128),
    event_handler=event_handler,
    num_passes=100)
```
Training is fully automatic, and the log printed by event_handler looks like the following:
```
# Pass 0, Batch 0, Cost 2.780790, {'classification_error_evaluator': 0.9453125}
# Pass 0, Batch 100, Cost 0.635356, {'classification_error_evaluator': 0.2109375}
# Pass 0, Batch 200, Cost 0.326094, {'classification_error_evaluator': 0.1328125}
# Pass 0, Batch 300, Cost 0.361920, {'classification_error_evaluator': 0.1015625}
# Pass 0, Batch 400, Cost 0.410101, {'classification_error_evaluator': 0.125}
# Test with Pass 0, Cost 0.326659, {'classification_error_evaluator': 0.09470000118017197}
```
After training, check the model's prediction accuracy. When trained on MNIST, the softmax regression model typically achieves a classification accuracy of about 92.34%, the multilayer perceptron about 97.66%, and the convolutional neural network up to about 99.20%.
......
@@ -156,15 +156,8 @@ For more information, please refer to [Activation functions on Wikipedia](https:

## Data Preparation

### Data Download

Execute the following command to download the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and unzip it. Add the paths of the training set and the test set to train.list and test.list respectively, for PaddlePaddle to read.

```bash
./data/get_mnist_data.sh
```

Unzip the downloaded data with `gzip`. The following files can be found in `data/raw_data`:

PaddlePaddle provides a Python module, `paddle.dataset.mnist`, which downloads and caches the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). The cache is under `/home/username/.cache/paddle/dataset/mnist`:

| File name | Description |
|----------------------|-------------------------|
@@ -173,283 +166,159 @@ Execute the following command to download the [MNIST](http://yann.lecun.com/exdb

|t10k-images-idx3-ubyte | Evaluation images, 10,000 |
|t10k-labels-idx1-ubyte | Evaluation labels, 10,000 |
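As a quick way to see what the `paddle.dataset.mnist` readers yield, the following short sketch (our addition, not part of the tutorial) pulls the first training instance; it assumes the `paddle.v2` package is installed and that each instance unpacks into an image vector and an integer label:

```python
import paddle.v2 as paddle

# mnist.train() is a reader creator: calling it returns a reader, and
# calling the reader returns a generator over (image, label) instances.
train_reader = paddle.dataset.mnist.train()

first_image, first_label = next(iter(train_reader()))
print len(first_image)   # expected: 784 floats, i.e. a flattened 28x28 image
print first_label        # expected: an integer in [0, 9]
```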
Users can randomly generate 10 images with the following script (Refer to Fig. 1.)
```bash
./load_data.py
```
### Provide Data to PaddlePaddle
We use the Python interface to provide data to the system. `mnist_provider.py` shows a complete example for training on MNIST data.
```python
# Define a py data provider
@provider(
    input_types={'pixel': dense_vector(28 * 28),
                 'label': integer_value(10)})
def process(settings, filename):  # settings is not used currently.
    # Open the image file
    with open(filename + "-images-idx3-ubyte", "rb") as f:
        # Read the first 4 parameters. magic is the data format, n is the number
        # of images, rows and cols are the numbers of rows and columns.
        magic, n, rows, cols = struct.unpack(">IIII", f.read(16))
        # Read the image data as unsigned bytes
        images = np.fromfile(
            f, 'ubyte',
            count=n * rows * cols).reshape(n, rows, cols).astype('float32')
        # Normalize data from [0, 255] to [-1, 1]
        images = images / 255.0 * 2.0 - 1.0

    # Open the label file
    with open(filename + "-labels-idx1-ubyte", "rb") as l:
        # Read the first two parameters
        magic, n = struct.unpack(">II", l.read(8))
        # Read the labels as unsigned bytes
        labels = np.fromfile(l, 'ubyte', count=n).astype("int")

    for i in xrange(n):
        yield {"pixel": images[i, :], 'label': labels[i]}
## Model Configurations
### Data Definition
In the model configuration, use `define_py_data_sources2` to define reading of data from `dataprovider`. If this configuration is used for prediction, data definition is not necessary.
```python
if not is_predict:
    data_dir = './data/'
    define_py_data_sources2(
        train_list=data_dir + 'train.list',
        test_list=data_dir + 'test.list',
        module='mnist_provider',
        obj='process')
```
### Algorithm Configuration

Set the training-related parameters.

- batch_size: use 128 samples in each training step.
- learning_rate: the step size taken in each iteration; it determines how fast the model converges.
- learning_method: use the `MomentumOptimizer` for training. The parameter 0.9 indicates that the momentum keeps 90% of the previous speed.
- regularization: a method to prevent overfitting. Here L2 regularization is used.

```python
settings(
    batch_size=128,
    learning_rate=0.1 / 128.0,
    learning_method=MomentumOptimizer(0.9),
    regularization=L2Regularization(0.0005 * 128))
```

## Model Configuration

A PaddlePaddle program starts by importing the API package:

```python
import paddle.v2 as paddle
```
### Model Architecture

#### Overview

First get the reference labels from `data_layer`, and get the classification results (predictions) from the classifier. Here we provide three different classifiers. In training, we compute the loss function, which is usually cross entropy for a classification problem. In prediction, we can directly output the results (predictions).

```python
data_size = 1 * 28 * 28
label_size = 10
img = data_layer(name='pixel', size=data_size)

predict = softmax_regression(img) # Softmax Regression
#predict = multilayer_perceptron(img) # Multilayer Perceptron
#predict = convolutional_neural_network(img) # LeNet5 Convolutional Neural Network

if not is_predict:
    lbl = data_layer(name="label", size=label_size)
    inputs(img, lbl)
    outputs(classification_cost(input=predict, label=lbl))
else:
    outputs(predict)
```

#### Softmax Regression

One simple fully connected layer with a softmax activation function outputs the classification result.

```python
def softmax_regression(img):
    predict = fc_layer(input=img, size=10, act=SoftmaxActivation())
    return predict
```

We want to use this program to demonstrate multiple kinds of models. Let us define each of them as a Python function:

- Softmax regression: the network has a single fully-connected layer with softmax activation:

```python
def softmax_regression(img):
    predict = paddle.layer.fc(input=img,
                              size=10,
                              act=paddle.activation.Softmax())
    return predict
```
#### MultiLayer Perceptron

The following code implements a Multilayer Perceptron with two fully connected hidden layers using ReLU activation functions. The output layer uses a Softmax activation function.

```python
def multilayer_perceptron(img):
    # First fully connected layer with ReLU
    hidden1 = fc_layer(input=img, size=128, act=ReluActivation())
    # Second fully connected layer with ReLU
    hidden2 = fc_layer(input=hidden1, size=64, act=ReluActivation())
    # Output layer as fully connected layer and softmax activation. The size must be 10.
    predict = fc_layer(input=hidden2, size=10, act=SoftmaxActivation())
    return predict
```

- Multilayer perceptron: this network has two hidden fully-connected layers with ReLU activations, followed by a softmax output layer:

```python
def multilayer_perceptron(img):
    hidden1 = paddle.layer.fc(input=img, size=128, act=paddle.activation.Relu())
    hidden2 = paddle.layer.fc(input=hidden1,
                              size=64,
                              act=paddle.activation.Relu())
    predict = paddle.layer.fc(input=hidden2,
                              size=10,
                              act=paddle.activation.Softmax())
    return predict
```
#### Convolutional Neural Network LeNet-5

The following is the LeNet-5 network architecture. A 2D input image is first fed into two sets of convolutional and pooling layers; this result is then fed to a fully connected layer, and then another fully connected layer with a softmax activation.

```python
def convolutional_neural_network(img):
    # First convolutional layer - pooling layer
    conv_pool_1 = simple_img_conv_pool(
        input=img,
        filter_size=5,
        num_filters=20,
        num_channel=1,
        pool_size=2,
        pool_stride=2,
        act=TanhActivation())
    # Second convolutional layer - pooling layer
    conv_pool_2 = simple_img_conv_pool(
        input=conv_pool_1,
        filter_size=5,
        num_filters=50,
        num_channel=20,
        pool_size=2,
        pool_stride=2,
        act=TanhActivation())
    # Fully connected layer
    fc1 = fc_layer(input=conv_pool_2, size=128, act=TanhActivation())
    # Output layer as fully connected layer and softmax activation. The size must be 10.
    predict = fc_layer(input=fc1, size=10, act=SoftmaxActivation())
    return predict
```

- Convolutional network LeNet-5: the input image is fed through two convolution-pooling layers, a fully-connected layer, and the softmax output layer:

```python
def convolutional_neural_network(img):
    conv_pool_1 = paddle.networks.simple_img_conv_pool(
        input=img,
        filter_size=5,
        num_filters=20,
        num_channel=1,
        pool_size=2,
        pool_stride=2,
        act=paddle.activation.Tanh())
    conv_pool_2 = paddle.networks.simple_img_conv_pool(
        input=conv_pool_1,
        filter_size=5,
        num_filters=50,
        num_channel=20,
        pool_size=2,
        pool_stride=2,
        act=paddle.activation.Tanh())
    fc1 = paddle.layer.fc(input=conv_pool_2,
                          size=128,
                          act=paddle.activation.Tanh())
    predict = paddle.layer.fc(input=fc1,
                              size=10,
                              act=paddle.activation.Softmax())
    return predict
```

## Training Model

### Training Commands and Logs

1. Configure `train.sh` to execute training:

```bash
config=mnist_model.py # Select network in mnist_model.py
output=./softmax_mnist_model
log=softmax_train.log

paddle train \
    --config=$config \                # Script for network configuration.
    --dot_period=10 \                 # After `dot_period` steps, print one `.`
    --log_period=100 \                # Print a log every 100 batches.
    --test_all_data_in_one_period=1 \ # Whether to use all data in every test.
    --use_gpu=0 \                     # Whether to use GPU.
    --trainer_count=1 \               # Number of CPUs or GPUs.
    --num_passes=100 \                # Passes for training (one pass uses all data).
    --save_dir=$output \              # Path to the saved model.
    2>&1 | tee $log

python -m paddle.utils.plotcurve -i $log > plot.png
```
After configuring the parameters, execute `./train.sh`. The training log is as follows.

```
I0117 12:52:29.628617  4538 TrainerInternal.cpp:165]  Batch=100 samples=12800 AvgCost=2.63996 CurrentCost=2.63996 Eval: classification_error_evaluator=0.241172  CurrentEval: classification_error_evaluator=0.241172
.........
I0117 12:52:29.768741  4538 TrainerInternal.cpp:165]  Batch=200 samples=25600 AvgCost=1.74027 CurrentCost=0.840582 Eval: classification_error_evaluator=0.185234  CurrentEval: classification_error_evaluator=0.129297
.........
I0117 12:52:29.916970  4538 TrainerInternal.cpp:165]  Batch=300 samples=38400 AvgCost=1.42119 CurrentCost=0.783026 Eval: classification_error_evaluator=0.167786  CurrentEval: classification_error_evaluator=0.132891
.........
I0117 12:52:30.061213  4538 TrainerInternal.cpp:165]  Batch=400 samples=51200 AvgCost=1.23965 CurrentCost=0.695054 Eval: classification_error_evaluator=0.160039  CurrentEval: classification_error_evaluator=0.136797
......I0117 12:52:30.223270  4538 TrainerInternal.cpp:181]  Pass=0 Batch=469 samples=60000 AvgCost=1.1628 Eval: classification_error_evaluator=0.156233
I0117 12:52:30.366894  4538 Tester.cpp:109]  Test samples=10000 cost=0.50777 Eval: classification_error_evaluator=0.0978
```

2. Use `plot_cost.py` to plot the error curve during training.

```bash
python plot_cost.py softmax_train.log
```

3. Use `evaluate.py` to select the best trained model.

```bash
python evaluate.py softmax_train.log
```

### Training Results for Softmax Regression

<p align="center">
<img src="image/softmax_train_log_en.png" width="400px"><br/>
Fig. 7 Softmax regression error curve<br/>
</p>

Evaluation results of the models:

```text
Best pass is 00013, testing Avgcost is 0.484447
The classification accuracy is 90.01%
```

From the evaluation results, the best pass for the softmax regression model is pass-00013, where the classification accuracy is 90.01%, and the last pass-00099 has an accuracy of 89.3%. From Fig. 7, we also see that the best accuracy may not appear in the last pass. This is because during training the model may already arrive at a local optimum and just swing around it in the following passes, or it may reach a lower local optimum.

### Results of Multilayer Perceptron

<p align="center">
<img src="image/mlp_train_log_en.png" width="400px"><br/>
Fig. 8. Multilayer Perceptron error curve<br/>
</p>

Evaluation results of the models:

```text
Best pass is 00085, testing Avgcost is 0.164746
The classification accuracy is 94.95%
```

From the evaluation results, the final training accuracy is 94.95%, significantly better than the softmax regression model. This is because softmax regression is simple and cannot fit complex data, while the Multilayer Perceptron with hidden layers has a better capacity to fit complex data.

### Training Results for Convolutional Neural Network

<p align="center">
<img src="image/cnn_train_log_en.png" width="400px"><br/>
Fig. 9. Convolutional Neural Network error curve<br/>
</p>

Results of model evaluation:

```text
Best pass is 00076, testing Avgcost is 0.0244684
The classification accuracy is 99.20%
```

From the evaluation result, the best accuracy of the Convolutional Neural Network is 99.20%. So for image classification, a Convolutional Neural Network gives better recognition results than a fully connected network. This is related to the local connections and parameter sharing of convolutional layers. In Fig. 9, the Convolutional Neural Network achieves good results in early steps, which indicates that it converges faster.

## Application Model

### Prediction Commands and Results

The script `predict.py` can make predictions with trained models. For example, for softmax regression:

```bash
python predict.py -c mnist_model.py -d data/raw_data/ -m softmax_mnist_model/pass-00047
```

- -c sets the model architecture
- -d sets the data for prediction
- -m sets the model parameters; here the best trained model is used for prediction

Follow the instructions to input an image ID for prediction. The classifier outputs the probabilities for each digit, the prediction with the highest probability, and the ground truth label.

```
Input image_id [0~9999]: 3
Predicted probability of each digit:
[[ 1.00000000e+00 1.60381094e-28 1.60381094e-28 1.60381094e-28
1.60381094e-28 1.60381094e-28 1.60381094e-28 1.60381094e-28
1.60381094e-28 1.60381094e-28]]
Predict Number: 0
Actual Number: 0
```

From the result, this classifier recognizes the digit on the third image as digit 0 with a probability close to 100%. This prediction is consistent with the ground truth label.

PaddlePaddle provides a special layer `layer.data` for reading data. Let us create a data layer for reading images and connect it to a classification network created using one of the above three functions. We also need a cost layer for training the model.

```python
paddle.init(use_gpu=False, trainer_count=1)

images = paddle.layer.data(
    name='pixel', type=paddle.data_type.dense_vector(784))
label = paddle.layer.data(
    name='label', type=paddle.data_type.integer_value(10))

predict = softmax_regression(images)
#predict = multilayer_perceptron(images) # uncomment for MLP
#predict = convolutional_neural_network(images) # uncomment for LeNet5

cost = paddle.layer.classification_cost(input=predict, label=label)
```

Now, it is time to specify training parameters. The number 0.9 in the following `Momentum` optimizer means that 90% of the current momentum comes from the momentum of the previous iteration.

```python
parameters = paddle.parameters.create(cost)

optimizer = paddle.optimizer.Momentum(
    learning_rate=0.1 / 128.0,
    momentum=0.9,
    regularization=paddle.optimizer.L2Regularization(rate=0.0005 * 128))

trainer = paddle.trainer.SGD(cost=cost,
                             parameters=parameters,
                             update_equation=optimizer)
```

Then we specify the training data `paddle.dataset.mnist.train()` and testing data `paddle.dataset.mnist.test()`. These two functions are *reader creators*; once called, they return a *reader*. A reader is a Python function which, once called, returns a Python generator that yields instances of data.

Here `shuffle` is a reader decorator, which takes a reader A as its parameter, and returns a new reader B, where B calls A to read `buffer_size` data instances at a time into a buffer, then shuffles and yields the instances in the buffer. If you want well-shuffled data, try using a larger buffer size.

`batch` is a special decorator, whose input is a reader and whose output is a *batch reader*, which doesn't yield an instance at a time, but a minibatch.

```python
lists = []

def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print "Pass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
    if isinstance(event, paddle.event.EndPass):
        result = trainer.test(reader=paddle.reader.batched(
            paddle.dataset.mnist.test(), batch_size=128))
        print "Test with Pass %d, Cost %f, %s\n" % (
            event.pass_id, result.cost, result.metrics)
        lists.append((event.pass_id, result.cost,
                      result.metrics['classification_error_evaluator']))

trainer.train(
    reader=paddle.reader.batched(
        paddle.reader.shuffle(
            paddle.dataset.mnist.train(), buf_size=8192),
        batch_size=128),
    event_handler=event_handler,
    num_passes=100)
```

During training, `trainer.train` invokes `event_handler` for certain events. This gives us a chance to print the training progress.

```
# Pass 0, Batch 0, Cost 2.780790, {'classification_error_evaluator': 0.9453125}
# Pass 0, Batch 100, Cost 0.635356, {'classification_error_evaluator': 0.2109375}
# Pass 0, Batch 200, Cost 0.326094, {'classification_error_evaluator': 0.1328125}
# Pass 0, Batch 300, Cost 0.361920, {'classification_error_evaluator': 0.1015625}
# Pass 0, Batch 400, Cost 0.410101, {'classification_error_evaluator': 0.125}
# Test with Pass 0, Cost 0.326659, {'classification_error_evaluator': 0.09470000118017197}
```

After the training, we can check the model's prediction accuracy.

```python
# find the best pass
best = sorted(lists, key=lambda list: float(list[1]))[0]
print 'Best pass is %s, testing Avgcost is %s' % (best[0], best[1])
print 'The classification accuracy is %.2f%%' % (100 - float(best[2]) * 100)
```

Usually, with MNIST data, the softmax regression model can get an accuracy around 92.34%, the MLP about 97.66%, and the convolutional network up to around 99.20%. Convolution layers have been widely considered a great invention for image processing.
## Conclusion

This tutorial describes a few basic Deep Learning models, namely Softmax regression, the Multilayer Perceptron Network, and the Convolutional Neural Network. The subsequent tutorials will derive more sophisticated models from these, so it is crucial to understand these models for future learning. When our model evolved from a simple softmax regression to a slightly more complex Convolutional Neural Network, the recognition accuracy on the MNIST dataset improved substantially, thanks to the convolutional layers' local connections and parameter sharing. While learning new models in the future, we encourage the readers to understand the key ideas that lead a new model to improve the results of an old one. Moreover, this tutorial introduced the basic flow of PaddlePaddle model design, from a data provider, through model layer construction, to final training and prediction. Readers can leverage the flow used in this MNIST handwritten digit classification example and experiment with different data and network architectures to train models for classification tasks of their choice.
......
@@ -236,20 +236,19 @@ def convolutional_neural_network(img):
Next, we fetch the data via a `layer.data` call and feed it to a classifier (we provide three different classifiers here) to obtain the classification result. During training, we compute a loss function on this result; for classification problems, the cross-entropy loss is the usual choice.
```python
# This model runs on a single CPU
paddle.init(use_gpu=False, trainer_count=1)

images = paddle.layer.data(
    name='pixel', type=paddle.data_type.dense_vector(784))
label = paddle.layer.data(
    name='label', type=paddle.data_type.integer_value(10))

predict = softmax_regression(images) # Softmax regression
#predict = multilayer_perceptron(images) # multilayer perceptron
#predict = convolutional_neural_network(images) # LeNet-5 convolutional neural network

cost = paddle.layer.classification_cost(input=predict, label=label)
```
Then, specify the training-related parameters.
@@ -258,84 +257,61 @@ def main():
- regularization: a technique to prevent the network from overfitting; here we use L2 regularization.
```python
parameters = paddle.parameters.create(cost)

optimizer = paddle.optimizer.Momentum(
    learning_rate=0.1 / 128.0,
    momentum=0.9,
    regularization=paddle.optimizer.L2Regularization(rate=0.0005 * 128))

trainer = paddle.trainer.SGD(cost=cost,
                             parameters=parameters,
                             update_equation=optimizer)
```
Next, we start the training process. `paddle.dataset.mnist.train()` and `paddle.dataset.mnist.test()` serve as the training and test datasets, respectively. Each of these functions returns a reader: in PaddlePaddle, a reader is a Python function that returns a Python yield generator each time it is called.

Below, `shuffle` is a reader decorator. It takes a reader A and returns another reader B; B reads `buffer_size` training instances into a buffer each time, shuffles their order, and yields them one by one.

`batch` is a special decorator whose input is a reader and whose output is a batched reader: in PaddlePaddle, a reader yields one training instance at a time, while a batched reader yields a minibatch at a time.

```python
lists = []

def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print "Pass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
    if isinstance(event, paddle.event.EndPass):
        result = trainer.test(reader=paddle.reader.batched(
            paddle.dataset.mnist.test(), batch_size=128))
        print "Test with Pass %d, Cost %f, %s\n" % (
            event.pass_id, result.cost, result.metrics)
        lists.append((event.pass_id, result.cost,
                      result.metrics['classification_error_evaluator']))

trainer.train(
    reader=paddle.reader.batched(
        paddle.reader.shuffle(
            paddle.dataset.mnist.train(), buf_size=8192),
        batch_size=128),
    event_handler=event_handler,
    num_passes=100)
```

Training is fully automatic, and the log printed by event_handler looks like the following:

```
# Pass 0, Batch 0, Cost 2.780790, {'classification_error_evaluator': 0.9453125}
# Pass 0, Batch 100, Cost 0.635356, {'classification_error_evaluator': 0.2109375}
# Pass 0, Batch 200, Cost 0.326094, {'classification_error_evaluator': 0.1328125}
# Pass 0, Batch 300, Cost 0.361920, {'classification_error_evaluator': 0.1015625}
# Pass 0, Batch 400, Cost 0.410101, {'classification_error_evaluator': 0.125}
# Test with Pass 0, Cost 0.326659, {'classification_error_evaluator': 0.09470000118017197}
```

Finally, we select the best model and evaluate it.

```python
# find the best pass
best = sorted(lists, key=lambda list: float(list[1]))[0]
print 'Best pass is %s, testing Avgcost is %s' % (best[0], best[1])
print 'The classification accuracy is %.2f%%' % (100 - float(best[2]) * 100)
```

- Softmax regression model: the best classification result appears at pass-34, with a classification accuracy of 92.34%.

```
# Best pass is 34, testing Avgcost is 0.275004139346
# The classification accuracy is 92.34%
```

- Multilayer perceptron: the final training accuracy is 97.66%, a clear improvement over the softmax regression model. The reason is that the softmax regression model is relatively simple and cannot fit more complex data, while the multilayer perceptron with hidden layers has a stronger fitting capability.

```
# Best pass is 85, testing Avgcost is 0.0784368447196
# The classification accuracy is 97.66%
```

- Convolutional neural network: the best classification accuracy reaches an impressive 99.20%. This shows that, for image problems, a convolutional neural network achieves better recognition results than a plain fully connected network, which is closely tied to the local connectivity and weight sharing of convolutional layers. The training log also shows that the convolutional network reaches good results very early on, indicating that it converges very quickly.

```
# Best pass is 76, testing Avgcost is 0.0244684
# The classification accuracy is 99.20%
```

After training, check the model's prediction accuracy. When trained on MNIST, the softmax regression model typically achieves a classification accuracy of about 92.34%, the multilayer perceptron about 97.66%, and the convolutional neural network up to about 99.20%.
## Summary
......
import paddle.v2 as paddle


def softmax_regression(img):
    predict = paddle.layer.fc(input=img,
                              size=10,
                              act=paddle.activation.Softmax())
    return predict


def multilayer_perceptron(img):
    # The first fully-connected layer
    hidden1 = paddle.layer.fc(input=img, size=128, act=paddle.activation.Relu())
    # The second fully-connected layer and the corresponding activation function
    hidden2 = paddle.layer.fc(input=hidden1,
                              size=64,
                              act=paddle.activation.Relu())
    # The third fully-connected layer, note that the hidden size should be 10,
    # which is the number of unique digits
    predict = paddle.layer.fc(input=hidden2,
                              size=10,
                              act=paddle.activation.Softmax())
    return predict


def convolutional_neural_network(img):
    # first conv layer
    conv_pool_1 = paddle.networks.simple_img_conv_pool(
        input=img,
        filter_size=5,
        num_filters=20,
        num_channel=1,
        pool_size=2,
        pool_stride=2,
        act=paddle.activation.Tanh())
    # second conv layer
    conv_pool_2 = paddle.networks.simple_img_conv_pool(
        input=conv_pool_1,
        filter_size=5,
        num_filters=50,
        num_channel=20,
        pool_size=2,
        pool_stride=2,
        act=paddle.activation.Tanh())
    # The first fully-connected layer
    fc1 = paddle.layer.fc(input=conv_pool_2,
                          size=128,
                          act=paddle.activation.Tanh())
    # The softmax layer, note that the hidden size should be 10,
    # which is the number of unique digits
    predict = paddle.layer.fc(input=fc1,
                              size=10,
                              act=paddle.activation.Softmax())
    return predict


paddle.init(use_gpu=False, trainer_count=1)

# define network topology
images = paddle.layer.data(
    name='pixel', type=paddle.data_type.dense_vector(784))
label = paddle.layer.data(name='label', type=paddle.data_type.integer_value(10))

# Here we can build the prediction network in different ways. Please
# choose one by uncommenting the corresponding line.
predict = softmax_regression(images)
#predict = multilayer_perceptron(images)
#predict = convolutional_neural_network(images)

cost = paddle.layer.classification_cost(input=predict, label=label)

parameters = paddle.parameters.create(cost)

optimizer = paddle.optimizer.Momentum(
    learning_rate=0.1 / 128.0,
    momentum=0.9,
    regularization=paddle.optimizer.L2Regularization(rate=0.0005 * 128))

trainer = paddle.trainer.SGD(cost=cost,
                             parameters=parameters,
                             update_equation=optimizer)

lists = []


def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print "Pass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
    if isinstance(event, paddle.event.EndPass):
        result = trainer.test(reader=paddle.reader.batched(
            paddle.dataset.mnist.test(), batch_size=128))
        print "Test with Pass %d, Cost %f, %s\n" % (event.pass_id, result.cost,
                                                    result.metrics)
        lists.append((event.pass_id, result.cost,
                      result.metrics['classification_error_evaluator']))


trainer.train(
    reader=paddle.reader.batched(
        paddle.reader.shuffle(
            paddle.dataset.mnist.train(), buf_size=8192),
        batch_size=128),
    event_handler=event_handler,
    num_passes=100)

# find the best pass
best = sorted(lists, key=lambda list: float(list[1]))[0]
print 'Best pass is %s, testing Avgcost is %s' % (best[0], best[1])
print 'The classification accuracy is %.2f%%' % (100 - float(best[2]) * 100)
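The script above switches between the three topologies by commenting lines in and out. As an optional variation of ours, not part of the original script, the same choice can be expressed as a small dictionary dispatch over the functions defined above:

```python
# Hypothetical helper around the three model functions defined earlier in this script.
topologies = {
    'softmax': softmax_regression,
    'mlp': multilayer_perceptron,
    'cnn': convolutional_neural_network,
}

def build_predict(images, name='softmax'):
    # Look up the chosen topology by name and apply it to the input layer.
    return topologies[name](images)

# e.g. predict = build_predict(images, 'cnn')
```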
@@ -111,7 +111,7 @@ Given the feature vectors of users and movies, we compute the relevance using co

<p align="center">
<img src="image/rec_regression_network_en.png" width="90%" ><br/>
Figure 3. A hybrid recommendation model.
</p>
......
@@ -194,7 +194,7 @@ As illustrated in the figure above, skip-gram model maps the word embedding of t

## Model Configuration

<p align="center">
<img src="image/ngram.en.png" width=400><br/>
Figure 5. N-gram neural network model in model configuration
</p>
......
@@ -182,7 +182,7 @@ The advantage of CBOW is that the distribution of the context words is smoothed over the word vectors,

## Data Preparation

### Dataset Introduction and Download

This tutorial uses the Penn Tree Bank (PTB) dataset. The PTB dataset is small and fast to train on, and it is used in Mikolov's public language-model training tool \[[2](#参考文献)\]. Its statistics are as follows:

@@ -206,109 +206,24 @@ The advantage of CBOW is that the distribution of the context words is smoothed over the word vectors,

</table>
</p>
Execute the following command to download the dataset and write the training data and validation data into the `train.list` and `test.list` files respectively, for PaddlePaddle to use during training.
```bash
./data/getdata.sh
```
### Providing Data to PaddlePaddle

1. Use the initializer function to initialize the dataprovider, including building the dictionary (in the build_dict function) and defining the format of the PaddlePaddle input fields. Note: here N is the `n` of the n-gram model. In the code of this chapter we define $N=5$, which means that during PaddlePaddle training the first 4 words of each data instance are used to predict the 5th word. You can adjust N according to your own data and needs, but you must add or remove the corresponding input fields in the model configuration file at the same time.
```python
from paddle.trainer.PyDataProvider2 import *
import collections
import logging
import pdb
logging.basicConfig(
format='[%(levelname)s %(asctime)s %(filename)s:%(lineno)s] %(message)s', )
logger = logging.getLogger('paddle')
logger.setLevel(logging.INFO)
N = 5 # Ngram
cutoff = 50 # select words with frequency > cutoff to dictionary
def build_dict(ftrain, fdict):
    sentences = []
    with open(ftrain) as fin:
        for line in fin:
            line = ['<s>'] + line.strip().split() + ['<e>']
            sentences += line
    wordfreq = collections.Counter(sentences)
    wordfreq = filter(lambda x: x[1] > cutoff, wordfreq.items())
    dictionary = sorted(wordfreq, key = lambda x: (-x[1], x[0]))
    words, _ = list(zip(*dictionary))
    for word in words:
        print >> fdict, word
    word_idx = dict(zip(words, xrange(len(words))))
    logger.info("Dictionary size=%s" % len(words))
    return word_idx

def initializer(settings, srcText, dictfile, **xargs):
    with open(dictfile, 'w') as fdict:
        settings.dicts = build_dict(srcText, fdict)
    input_types = []
    for i in xrange(N):
        input_types.append(integer_value(len(settings.dicts)))
    settings.input_types = input_types
```
2. Use the process function to feed the data to PaddlePaddle one instance at a time. Concretely, prepend N-1 start symbols `<s>` to each sentence and append one end symbol `<e>`, then slide a window of size N from the beginning to the end of the sentence, generating one data instance per position.
```python
@provider(init_hook=initializer)
def process(settings, filename):
    UNKID = settings.dicts['<unk>']
    with open(filename) as fin:
        for line in fin:
            line = ['<s>'] * (N - 1) + line.strip().split() + ['<e>']
            line = [settings.dicts.get(w, UNKID) for w in line]
            for i in range(N, len(line) + 1):
                yield line[i - N:i]
```
如"I have a dream" 一句提供了5条数据:
> `<s> <s> <s> <s> I` <br>
> `<s> <s> <s> I have` <br>
> `<s> <s> I have a` <br>
> `<s> I have a dream` <br>
> `I have a dream <e>` <br>
## Model Configuration Description

### Data Definition

Read data from the dataprovider through the `define_py_data_sources2` function, where args specifies the training text (srcText) and the vocabulary file (dictfile).
```python
from paddle.trainer_config_helpers import *
import math
args = {'srcText': 'data/simple-examples/data/ptb.train.txt',
        'dictfile': 'data/vocabulary.txt'}

define_py_data_sources2(
    train_list="data/train.list",
    test_list="data/test.list",
    module="dataprovider",
    obj="process",
    args=args)
```

### Algorithm Configuration

Here we specify the model's training parameters: the L2 regularization coefficient, the learning rate, and the batch size.

```python
settings(
    batch_size=100, regularization=L2Regularization(8e-4), learning_rate=3e-3)
```

### Data Preprocessing

This chapter trains a 5-gram model, which means that during PaddlePaddle training the first 4 words of each data instance are used to predict the 5th word. PaddlePaddle provides the Python package `paddle.dataset.imikolov` for the PTB dataset, which downloads and preprocesses the data automatically for convenience.

The preprocessing prepends the start symbol `<s>` and appends the end symbol `<e>` to every sentence in the dataset. Then, according to the window size (5 in this tutorial), it slides the window from the beginning to the end of each sentence and generates one data instance per position.

For example, the sentence "I have a dream that one day" provides 5 data instances:

```text
<s> I have a dream
I have a dream that
have a dream that one
a dream that one day
dream that one day <e>
```
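To illustrate the padding-and-sliding-window preprocessing just described, here is a small plain-Python sketch. It is our own illustration only; `paddle.dataset.imikolov` performs this step internally.

```python
def ngrams(sentence, n=5):
    # Pad with one start marker and one end marker, as described above,
    # then slide a window of size n over the words.
    words = ['<s>'] + sentence.split() + ['<e>']
    return [words[i:i + n] for i in range(len(words) - n + 1)]

for gram in ngrams("I have a dream that one day"):
    print ' '.join(gram)
# <s> I have a dream
# I have a dream that
# ...
# dream that one day <e>
```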
### Model Structure

The model structure of this configuration is shown in the figure below:

@@ -317,94 +232,132 @@ settings(

Figure 5. The N-gram neural network model in the model configuration
</p>
1. Define the parameter dimensions and the data input.

```python
dictsize = 1953  # dictionary size
embsize = 32     # word vector dimension
hiddensize = 256 # hidden layer dimension

firstword = data_layer(name = "firstw", size = dictsize)
secondword = data_layer(name = "secondw", size = dictsize)
thirdword = data_layer(name = "thirdw", size = dictsize)
fourthword = data_layer(name = "fourthw", size = dictsize)
nextword = data_layer(name = "fifthw", size = dictsize)
```

2. Map the $n-1$ words before $w_t$, i.e. $w_{t-n+1},...w_{t-1}$, through a $|V|\times D$ matrix to D-dimensional word vectors (D=32 in this example).

```python
def wordemb(inlayer):
    wordemb = table_projection(
        input = inlayer,
        size = embsize,
        param_attr=ParamAttr(name = "_proj",
                             initial_std=0.001, # standard deviation for parameter initialization
                             l2_rate= 0,))      # word vectors need no sparsity, so l2_rate is set to 0
    return wordemb

Efirst = wordemb(firstword)
Esecond = wordemb(secondword)
Ethird = wordemb(thirdword)
Efourth = wordemb(fourthword)
```

3. Next, connect these n-1 word vectors through concat_layer into one large vector, used as the historical text feature.

```python
contextemb = concat_layer(input = [Efirst, Esecond, Ethird, Efourth])
```

4. Then, pass the historical text feature through a fully connected layer to obtain the hidden text feature.

```python
hidden1 = fc_layer(
    input = contextemb,
    size = hiddensize,
    act = SigmoidActivation(),
    layer_attr = ExtraAttr(drop_rate=0.5),
    bias_attr = ParamAttr(learning_rate = 2),
    param_attr = ParamAttr(
        initial_std = 1./math.sqrt(embsize*8),
        learning_rate = 1))
```

5. Finally, pass the hidden text feature through another fully connected layer that maps it to a $|V|$-dimensional vector, and normalize with softmax to obtain the generation probabilities of these `|V|` words.

```python
# use context embedding to predict nextword
predictword = fc_layer(
    input = hidden1,
    size = dictsize,
    bias_attr = ParamAttr(learning_rate = 2),
    act = SoftmaxActivation())
```

6. The loss function of the network is the multi-class cross entropy, which can be obtained directly with the `classification_cost` function.

```python
cost = classification_cost(
    input = predictword,
    label = nextword)
# network input and output
outputs(cost)
```

## Training the Model

The model is trained with `./train.sh`. The script content is shown below; it specifies a total of 30 passes.

```bash
paddle train \
       --config ngram.py \
       --use_gpu=1 \
       --dot_period=100 \
       --log_period=3000 \
       --test_period=0 \
       --save_dir=model \
       --num_passes=30
```

The training log of one pass looks like the following:

```text
.............................
```

## Implementation

First, load the required packages:

```python
import math
import paddle.v2 as paddle
```

Then, define the parameters:

```python
embsize = 32     # word vector dimension
hiddensize = 256 # hidden layer dimension
N = 5            # train a 5-gram model
```

Next, define the network structure:

- Map the $n-1$ words before $w_t$, i.e. $w_{t-n+1},...w_{t-1}$, through a $|V|\times D$ matrix to D-dimensional word vectors (D=32 in this example).

```python
def wordemb(inlayer):
    wordemb = paddle.layer.table_projection(
        input=inlayer,
        size=embsize,
        param_attr=paddle.attr.Param(
            name="_proj",
            initial_std=0.001,
            learning_rate=1,
            l2_rate=0, ))
    return wordemb
```

- Define the data types and names accepted by the input layers.

```python
def main():
    paddle.init(use_gpu=False, trainer_count=1) # initialize PaddlePaddle
    word_dict = paddle.dataset.imikolov.build_dict()
    dict_size = len(word_dict)
    # Every input layer accepts integer data, whose range is [0, dict_size)
    firstword = paddle.layer.data(
        name="firstw", type=paddle.data_type.integer_value(dict_size))
    secondword = paddle.layer.data(
        name="secondw", type=paddle.data_type.integer_value(dict_size))
    thirdword = paddle.layer.data(
        name="thirdw", type=paddle.data_type.integer_value(dict_size))
    fourthword = paddle.layer.data(
        name="fourthw", type=paddle.data_type.integer_value(dict_size))
    nextword = paddle.layer.data(
        name="fifthw", type=paddle.data_type.integer_value(dict_size))

    Efirst = wordemb(firstword)
    Esecond = wordemb(secondword)
    Ethird = wordemb(thirdword)
    Efourth = wordemb(fourthword)
```

- Connect these n-1 word vectors through concat_layer into one large vector, used as the historical text feature.

```python
    contextemb = paddle.layer.concat(input=[Efirst, Esecond, Ethird, Efourth])
```

- Pass the historical text feature through a fully connected layer to obtain the hidden text feature.

```python
    hidden1 = paddle.layer.fc(input=contextemb,
                              size=hiddensize,
                              act=paddle.activation.Sigmoid(),
                              layer_attr=paddle.attr.Extra(drop_rate=0.5),
                              bias_attr=paddle.attr.Param(learning_rate=2),
                              param_attr=paddle.attr.Param(
                                  initial_std=1. / math.sqrt(embsize * 8),
                                  learning_rate=1))
```

- Pass the hidden text feature through another fully connected layer that maps it to a $|V|$-dimensional vector, and normalize with softmax to obtain the generation probabilities of these `|V|` words.

```python
    predictword = paddle.layer.fc(input=hidden1,
                                  size=dict_size,
                                  bias_attr=paddle.attr.Param(learning_rate=2),
                                  act=paddle.activation.Softmax())
```

- The loss function of the network is the multi-class cross entropy, which can be obtained directly with the `classification_cost` function.

```python
    cost = paddle.layer.classification_cost(input=predictword, label=nextword)
```

Then, specify the training-related parameters:

- optimizer: the method used to update the weights during training; this tutorial uses the Adam optimizer.
- learning_rate: the speed of each iteration, related to how fast the network training converges.
- regularization: a technique to prevent the network from overfitting; here we use L2 regularization.

```python
    parameters = paddle.parameters.create(cost)
    adam_optimizer = paddle.optimizer.Adam(
        learning_rate=3e-3,
        regularization=paddle.optimizer.L2Regularization(8e-4))
    trainer = paddle.trainer.SGD(cost, parameters, adam_optimizer)
```

Next, we start the training process. `paddle.dataset.imikolov.train()` and `paddle.dataset.imikolov.test()` serve as the training and test datasets, respectively. Each of these functions returns a reader: in PaddlePaddle, a reader is a Python function that returns a Python generator each time it is called.

`paddle.batch` takes a reader as input and outputs a batched reader: in PaddlePaddle, a reader yields one training instance at a time, while a batched reader yields a minibatch at a time.

```python
    def event_handler(event):
        if isinstance(event, paddle.event.EndIteration):
            if event.batch_id % 100 == 0:
                result = trainer.test(
                    paddle.batch(
                        paddle.dataset.imikolov.test(word_dict, N), 32))
                print "Pass %d, Batch %d, Cost %f, %s, Testing metrics %s" % (
                    event.pass_id, event.batch_id, event.cost, event.metrics,
                    result.metrics)

    trainer.train(
        paddle.batch(paddle.dataset.imikolov.train(word_dict, N), 32),
        num_passes=30,
        event_handler=event_handler)
```

Training is fully automatic, and the log printed in event_handler looks like the following:

```text
.............................
```
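As a rough back-of-the-envelope check of the layer sizes used above (embsize=32, hiddensize=256, a window of N=5 words), the following sketch (our addition, with a placeholder vocabulary size, since the real size comes from `build_dict()`) counts the parameters of the embedding table and the two fully connected layers:

```python
vocab_size = 2000          # placeholder; the real size comes from build_dict()
embsize, hiddensize, N = 32, 256, 5

embedding = vocab_size * embsize                       # shared |V| x D lookup table
hidden = (N - 1) * embsize * hiddensize + hiddensize   # fc over the concatenated context
output = hiddensize * vocab_size + vocab_size          # softmax layer over the vocabulary

print embedding, hidden, output
```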
......