Unverified commit be5105b3, authored by D daminglu, committed by GitHub

Merge branch 'develop' into develop

@@ -142,6 +142,15 @@ def train_program():
    return avg_loss
```
### Optimizer Function Configuration
In the `SGD` optimizer below, `learning_rate` is the learning rate; it relates to how quickly the network training converges.
```python
def optimizer_program():
    return fluid.optimizer.SGD(learning_rate=0.001)
```
### Specify Place
We can specify whether the computation runs on a CPU or a GPU.
@@ -157,7 +166,7 @@ place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
trainer = fluid.Trainer(
    train_func=train_program,
    place=place,
    optimizer_func=optimizer_program)
```
### Feeding Data
......
@@ -149,6 +149,14 @@ def train_program():
    return avg_loss
```
### Optimizer Function Configuration
In the following `SGD` optimizer, `learning_rate` specifies the learning rate in the optimization procedure.
```python
def optimizer_program():
    return fluid.optimizer.SGD(learning_rate=0.001)
```
### Specify Place
Specify your training environment: you should specify whether the training runs on CPU or GPU.
@@ -165,7 +173,7 @@ The trainer will take the `train_program` as input.
trainer = fluid.Trainer(
    train_func=train_program,
    place=place,
    optimizer_func=optimizer_program)
```
### Feeding Data
......
@@ -89,22 +89,6 @@ $$MSE=\frac{1}{n}\sum_{i=1}^{n}{(\hat{Y_i}-Y_i)}^2$$

## Dataset

### Dataset Introduction
The dataset contains 506 rows. Each row describes a type of house in the Boston suburbs along with the median price of houses of that type. The meaning of each attribute is as follows:
@@ -152,157 +136,167 @@ import paddle.v2.dataset.uci_housing as uci_housing

`fit_a_line/trainer.py` demonstrates the overall training process.

### Configure the Data Provider (Datafeeder)
First, we import the necessary libraries:

```python
import paddle
import paddle.fluid as fluid
import numpy
```

We load the dataset [UCI Housing Data Set](https://archive.ics.uci.edu/ml/datasets/Housing) through the uci_housing module.

The uci_housing module encapsulates:

1. The data download process. The downloaded data is saved to ~/.cache/paddle/dataset/uci_housing/housing.data.
2. The [data preprocessing](#数据预处理) process.

Next, we define data providers for training and testing. Each provider reads in a batch of data of size `BATCH_SIZE`. If some randomness is desired, one can specify both a batch size and a buffer size; the provider then randomly draws batch-size-many samples from the buffer each time.
```python
BATCH_SIZE = 20

train_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.uci_housing.train(), buf_size=500),
    batch_size=BATCH_SIZE)

test_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.uci_housing.test(), buf_size=500),
    batch_size=BATCH_SIZE)
```
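To build intuition for what a reader yields, here is a minimal, purely illustrative sketch (not part of the original tutorial) that pulls one raw sample out of the training reader; it assumes the dataset has already been downloaded by the calls above.

```python
# A reader creator returns a generator of samples; for uci_housing each
# sample is a pair: a 13-dimensional feature vector and the median price.
raw_sample = next(paddle.dataset.uci_housing.train()())
print(len(raw_sample[0]), raw_sample[1])  # 13 features, one label
```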
### Configure the Training Program
The training program defines the network structure of the model to be trained. For linear regression, it is a simple fully connected layer from input to output. More complex structures, such as convolutional neural networks and recurrent neural networks, will be introduced in later chapters. The training program must return the `average loss` as its first return value, because the backpropagation algorithm uses it later.
```python
def train_program():
    y = fluid.layers.data(name='y', shape=[1], dtype='float32')

    # feature vector of length 13
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y_predict = fluid.layers.fc(input=x, size=1, act=None)

    loss = fluid.layers.square_error_cost(input=y_predict, label=y)
    avg_loss = fluid.layers.mean(loss)

    return avg_loss
```
### Optimizer Function Configuration
In the `SGD` optimizer below, `learning_rate` is the learning rate; it relates to how quickly the network training converges.
```python
def optimizer_program():
    return fluid.optimizer.SGD(learning_rate=0.001)
```
### Specify Place
We can specify whether the computation runs on a CPU or a GPU.
```python
use_cuda = False
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
```
### Create the Trainer
The trainer takes in a training program and some other necessary arguments:
```python
trainer = fluid.Trainer(
    train_func=train_program,
    place=place,
    optimizer_func=optimizer_program)
```
### Feeding Data
PaddlePaddle provides a reader mechanism for reading training data. A reader provides multiple columns of data at once, so we need a Python list to define the feeding order.
```python
feed_order=['x', 'y']
```
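The reader contract itself is simple: any argument-less generator of `(x, y)` samples can stand in for the built-in datasets. As a hedged illustration (the `synthetic_reader` name below is hypothetical, not part of the tutorial), a reader compatible with `feed_order=['x', 'y']` could look like this:

```python
def synthetic_reader():
    # Yields (x, y) samples shaped like uci_housing rows: 13 features, 1 label.
    for _ in range(100):
        x = numpy.random.uniform(0, 10, [13]).astype("float32")
        y = numpy.array([x.mean()]).astype("float32")
        yield x, y

# paddle.batch wraps a reader creator into a batched reader, as above.
synthetic_batch_reader = paddle.batch(synthetic_reader, batch_size=BATCH_SIZE)
```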
Besides, we can define an event handler to handle events such as printing training progress:
```python
# Specify the directory path to save the parameters
params_dirname = "fit_a_line.inference.model"

# Plot data
from paddle.v2.plot import Ploter
train_title = "Train cost"
test_title = "Test cost"
plot_cost = Ploter(train_title, test_title)

step = 0

# event_handler to print training and testing info
def event_handler_plot(event):
    global step
    if isinstance(event, fluid.EndStepEvent):
        if event.step % 10 == 0:  # every 10 batches, record a test cost
            test_metrics = trainer.test(
                reader=test_reader, feed_order=feed_order)

            plot_cost.append(test_title, step, test_metrics[0])
            plot_cost.plot()

            if test_metrics[0] < 10.0:
                # If the accuracy is good enough, we can stop the training.
                print('loss is less than 10.0, stop')
                trainer.stop()

        # We can save the trained parameters for the inferences later
        if params_dirname is not None:
            trainer.save_params(params_dirname)

        step += 1
```
### Start Training
We can now start training by calling `trainer.train()`.
```python
%matplotlib inline

# The training could take up to a few minutes.
trainer.train(
    reader=train_reader,
    num_epochs=100,
    event_handler=event_handler_plot,
    feed_order=feed_order)
```
![png](./image/train_and_test.png)
## Inference
Initialize the inferencer with an `inference_program` and the `params_dirname` where our parameters are stored.

### Set Up the Inference Program
Similar to `trainer.train`, the inferencer needs an inference program to make predictions. We can slightly modify our training program to include the predicted value.
```python
def inference_program():
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y_predict = fluid.layers.fc(input=x, size=1, act=None)
    return y_predict
```
### Inference
The inferencer loads the trained model from `params_dirname` and uses it to predict on data it has never seen before.
```python
inferencer = fluid.Inferencer(
    infer_func=inference_program, param_path=params_dirname, place=place)

batch_size = 10
tensor_x = numpy.random.uniform(0, 10, [batch_size, 13]).astype("float32")

results = inferencer.infer({'x': tensor_x})
print("infer results: ", results[0])
```
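Instead of random inputs, one can also feed a real test sample through the same inferencer. A minimal sketch, assuming the `uci_housing` test reader and the `inferencer` defined above:

```python
# Take one real sample from the test set and compare label vs. prediction.
features, label = next(paddle.dataset.uci_housing.test()())
real_x = numpy.array([features]).astype("float32")
print("label:", label, "predicted:", inferencer.infer({'x': real_x})[0])
```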
## Summary
......
@@ -191,6 +191,14 @@ def train_program():
    return avg_loss
```
### Optimizer Function Configuration
In the following `SGD` optimizer, `learning_rate` specifies the learning rate in the optimization procedure.
```python
def optimizer_program():
    return fluid.optimizer.SGD(learning_rate=0.001)
```
### Specify Place
Specify your training environment: you should specify whether the training runs on CPU or GPU.
@@ -207,7 +215,7 @@ The trainer will take the `train_program` as input.
trainer = fluid.Trainer(
    train_func=train_program,
    place=place,
    optimizer_func=optimizer_program)
```
### Feeding Data
......
@@ -49,9 +49,7 @@ use_cuda = False
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()

trainer = fluid.Trainer(
    train_func=train_program, place=place, optimizer_func=optimizer_program)
feed_order = ['x', 'y']
......
@@ -226,6 +226,7 @@ dream that one day <e>
```

Finally, each input is converted into a sequence of integer indices, each being the position of the word in the dictionary, which serves as the input to PaddlePaddle.
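As a small illustration of this conversion (the toy dictionary below is hypothetical; the real one is built later with `paddle.dataset.imikolov.build_dict()`):

```python
# Map each word to its integer id; '<unk>' catches out-of-vocabulary words.
toy_dict = {'i': 0, 'have': 1, 'a': 2, 'dream': 3, '<e>': 4, '<unk>': 5}
sentence = 'i have a dream <e>'.split()
index_sequence = [toy_dict.get(w, toy_dict['<unk>']) for w in sentence]
print(index_sequence)  # [0, 1, 2, 3, 4]
```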
## Implementation
The model structure of this configuration is shown in the following figure:
@@ -238,236 +239,236 @@ dream that one day <e>

First, load the required packages:
```python
import paddle
import paddle.fluid as fluid
import numpy
from functools import partial
import math
import os
import sys
```
Then, define the parameters:
```python
EMBED_SIZE = 32  # word vector dimension
HIDDEN_SIZE = 256  # hidden layer dimension
N = 5  # train 5-gram
BATCH_SIZE = 32  # batch size

# can use CPU or GPU
use_cuda = os.getenv('WITH_GPU', '0') != '0'

word_dict = paddle.dataset.imikolov.build_dict()
dict_size = len(word_dict)
```
Unlike the previous PaddlePaddle v2, in the new Fluid version we no longer need to compute word embeddings ourselves. PaddlePaddle provides a built-in method `fluid.layers.embedding`, and we can use it directly to build our N-gram neural network.

- Let us define our N-gram neural network structure. This structure is used in both training and inference. Since word vectors are sparse, we pass in `is_sparse == True` to speed up sparse updates of the embedding matrix.
```python
def inference_program(is_sparse):
    first_word = fluid.layers.data(name='firstw', shape=[1], dtype='int64')
    second_word = fluid.layers.data(name='secondw', shape=[1], dtype='int64')
    third_word = fluid.layers.data(name='thirdw', shape=[1], dtype='int64')
    fourth_word = fluid.layers.data(name='fourthw', shape=[1], dtype='int64')

    embed_first = fluid.layers.embedding(
        input=first_word,
        size=[dict_size, EMBED_SIZE],
        dtype='float32',
        is_sparse=is_sparse,
        param_attr='shared_w')
    embed_second = fluid.layers.embedding(
        input=second_word,
        size=[dict_size, EMBED_SIZE],
        dtype='float32',
        is_sparse=is_sparse,
        param_attr='shared_w')
    embed_third = fluid.layers.embedding(
        input=third_word,
        size=[dict_size, EMBED_SIZE],
        dtype='float32',
        is_sparse=is_sparse,
        param_attr='shared_w')
    embed_fourth = fluid.layers.embedding(
        input=fourth_word,
        size=[dict_size, EMBED_SIZE],
        dtype='float32',
        is_sparse=is_sparse,
        param_attr='shared_w')

    concat_embed = fluid.layers.concat(
        input=[embed_first, embed_second, embed_third, embed_fourth], axis=1)
    hidden1 = fluid.layers.fc(input=concat_embed,
                              size=HIDDEN_SIZE,
                              act='sigmoid')
    predict_word = fluid.layers.fc(input=hidden1, size=dict_size, act='softmax')
    return predict_word
```
- Based on the neural network structure above, we can define our training method as follows:
```python
def train_program(is_sparse):
    # The declaration of 'next_word' must be after the invoking of inference_program,
    # or the data input order of train program would be [next_word, firstw, secondw,
    # thirdw, fourthw], which is not correct.
    predict_word = inference_program(is_sparse)
    next_word = fluid.layers.data(name='nextw', shape=[1], dtype='int64')
    cost = fluid.layers.cross_entropy(input=predict_word, label=next_word)
    avg_cost = fluid.layers.mean(cost)
    return avg_cost
```
- Now we can start training. This version is much simpler than before. We have ready-made training and test sets: `paddle.dataset.imikolov.train()` and `paddle.dataset.imikolov.test()`. Both return a reader. In PaddlePaddle, a reader is a Python function that reads the next data item each time it is called; it is a Python generator.

`paddle.batch` takes a reader and outputs a batched reader. An `event_handler` can also be passed to `trainer.train` to print, from time to time, the training status of each step and batch.
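For intuition, each item produced by the imikolov reader is a 5-tuple of word ids: the four context words followed by the next word. A minimal peek (illustrative; it assumes `word_dict` and `N` defined above):

```python
# A reader is a generator: each next() call produces one 5-gram sample.
sample = next(paddle.dataset.imikolov.train(word_dict, N)())
print(sample)  # (firstw, secondw, thirdw, fourthw, nextw) as integer ids
```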
```python
def optimizer_func():
    # Note here we need to choose more sophisticated optimizers
    # such as AdaGrad with a decay rate. The normal SGD converges
    # very slowly.
    # optimizer=fluid.optimizer.SGD(learning_rate=0.001),
    return fluid.optimizer.AdagradOptimizer(
        learning_rate=3e-3,
        regularization=fluid.regularizer.L2DecayRegularizer(8e-4))


def train(use_cuda, train_program, params_dirname):
    train_reader = paddle.batch(
        paddle.dataset.imikolov.train(word_dict, N), BATCH_SIZE)
    test_reader = paddle.batch(
        paddle.dataset.imikolov.test(word_dict, N), BATCH_SIZE)

    place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()

    def event_handler(event):
        if isinstance(event, fluid.EndStepEvent):
            # We output cost every 10 steps.
            if event.step % 10 == 0:
                outs = trainer.test(
                    reader=test_reader,
                    feed_order=['firstw', 'secondw', 'thirdw', 'fourthw', 'nextw'])
                avg_cost = outs[0]

                print "Step %d: Average Cost %f" % (event.step, avg_cost)

                # If average cost is lower than 5.8, we consider the model good enough to stop.
                # Note 5.8 is a relatively high value. In order to get a better model, one should
                # aim for avg_cost lower than 3.5. But the training could take longer time.
                if avg_cost < 5.8:
                    trainer.save_params(params_dirname)
                    trainer.stop()

                if math.isnan(avg_cost):
                    sys.exit("got NaN loss, training failed.")

    trainer = fluid.Trainer(
        train_func=train_program,
        optimizer_func=optimizer_func,
        place=place)

    trainer.train(
        reader=train_reader,
        num_epochs=1,
        event_handler=event_handler,
        feed_order=['firstw', 'secondw', 'thirdw', 'fourthw', 'nextw'])
```
- `trainer.train` will start the training. The monitoring output from `event_handler` looks like this:
```text
Step 0: Average Cost 7.337213
Step 10: Average Cost 6.136128
Step 20: Average Cost 5.766995
...
```
## Applying the Model
After the model is trained, we can use it to make predictions.
### Predicting the Next Word
With the trained model, once we know the preceding N-gram we can predict the next word.
```python
def infer(use_cuda, inference_program, params_dirname=None):
    place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()

    inferencer = fluid.Inferencer(
        infer_func=inference_program, param_path=params_dirname, place=place)

    # Setup inputs by creating 4 LoDTensors representing 4 words. Here each word
    # is simply an index to look up for the corresponding word vector and hence
    # the shape of word (base_shape) should be [1]. The length-based level of
    # detail (lod) info of each LoDtensor should be [[1]] meaning there is only
    # one lod_level and there is only one sequence of one word on this level.
    # Note that lod info should be a list of lists.
    data1 = [[211]]  # 'among'
    data2 = [[6]]    # 'a'
    data3 = [[96]]   # 'group'
    data4 = [[4]]    # 'of'
    lod = [[1]]

    first_word = fluid.create_lod_tensor(data1, lod, place)
    second_word = fluid.create_lod_tensor(data2, lod, place)
    third_word = fluid.create_lod_tensor(data3, lod, place)
    fourth_word = fluid.create_lod_tensor(data4, lod, place)

    result = inferencer.infer(
        {
            'firstw': first_word,
            'secondw': second_word,
            'thirdw': third_word,
            'fourthw': fourth_word
        },
        return_numpy=False)

    print(numpy.array(result[0]))
    most_possible_word_index = numpy.argmax(result[0])
    print(most_possible_word_index)
    print([
        key for key, value in word_dict.iteritems()
        if value == most_possible_word_index
    ][0])
```
After a short training of about 3 minutes, we get the following prediction: our model predicts that the next word after `among a group of` is `a`, which is grammatically plausible. If we train for longer, for example a few hours, the predicted next word becomes `workers`.

```text
[[0.00106646 0.0007907 0.00072041 ... 0.00049024 0.00041355 0.00084464]]
6
a
```
The main entry of the program is fairly simple:
```python
def main(use_cuda, is_sparse):
    if use_cuda and not fluid.core.is_compiled_with_cuda():
        return

    params_dirname = "word2vec.inference.model"

    train(
        use_cuda=use_cuda,
        train_program=partial(train_program, is_sparse),
        params_dirname=params_dirname)

    infer(
        use_cuda=use_cuda,
        inference_program=partial(inference_program, is_sparse),
        params_dirname=params_dirname)


main(use_cuda=use_cuda, is_sparse=True)
```
## Summary
In this chapter, we introduced word embeddings, the relationship between language models and word embeddings, and how to obtain word embeddings by training a neural network model. In information retrieval, the relevance between a query and the keywords of a document can be judged from the cosine of the angle between their embedding vectors. In syntactic and semantic analysis, trained word embeddings can be used to initialize models for better performance. In document classification, word embeddings allow clustering of synonyms within documents, and N-grams can also be used to predict the next word. We hope that after reading this chapter you can apply word embeddings to research in related fields.

## References
......
@@ -283,7 +283,8 @@ dict_size = len(word_dict)

Unlike the previous PaddlePaddle v2, in the new API (Fluid) we do not need to calculate word embedding ourselves. PaddlePaddle provides a built-in method `fluid.layers.embedding` and we can use it directly to build our N-gram neural network model.

- We define our N-gram neural network structure as below. This structure will be used both in `train` and in `infer`. We can specify `is_sparse = True` to accelerate sparse matrix updates for word embedding.

```python
def inference_program(is_sparse):
    first_word = fluid.layers.data(name='firstw', shape=[1], dtype='int64')
@@ -457,12 +458,12 @@ def infer(use_cuda, inference_program, params_dirname=None):
    ][0])
```

When we spend 3 minutes in training, the output is as below, which means the next word for `among a group of` is `a`. If we train the model for a longer time, it will give a meaningful prediction as `workers`.
```text
[[0.00106646 0.0007907 0.00072041 ... 0.00049024 0.00041355 0.00084464]]
6
a
```
The main entry of the program is fairly simple:
......
@@ -163,8 +163,9 @@ Paddle在API中提供了自动加载数据的模块。数据模块为 `paddle.da
```python
import paddle
movie_info = paddle.dataset.movielens.movie_info()
print movie_info.values()[0]
```
@@ -252,241 +253,304 @@ print "User %s rates Movie %s with Score %s"%(user_info[uid], movie_info[mov_id]
## Model Configuration
Now we configure the model according to the form of the input data. First, import the required libraries and define the global variables.
```python
import math
import sys
import numpy as np
import paddle
import paddle.fluid as fluid
import paddle.fluid.layers as layers
import paddle.fluid.nets as nets

IS_SPARSE = True
USE_GPU = False
BATCH_SIZE = 256
```
Then define the model configuration for our combined user-feature model:
```python
def get_usr_combined_features():

    USR_DICT_SIZE = paddle.dataset.movielens.max_user_id() + 1

    uid = layers.data(name='user_id', shape=[1], dtype='int64')

    usr_emb = layers.embedding(
        input=uid,
        dtype='float32',
        size=[USR_DICT_SIZE, 32],
        param_attr='user_table',
        is_sparse=IS_SPARSE)

    usr_fc = layers.fc(input=usr_emb, size=32)

    USR_GENDER_DICT_SIZE = 2

    usr_gender_id = layers.data(name='gender_id', shape=[1], dtype='int64')

    usr_gender_emb = layers.embedding(
        input=usr_gender_id,
        size=[USR_GENDER_DICT_SIZE, 16],
        param_attr='gender_table',
        is_sparse=IS_SPARSE)

    usr_gender_fc = layers.fc(input=usr_gender_emb, size=16)

    USR_AGE_DICT_SIZE = len(paddle.dataset.movielens.age_table)

    usr_age_id = layers.data(name='age_id', shape=[1], dtype="int64")

    usr_age_emb = layers.embedding(
        input=usr_age_id,
        size=[USR_AGE_DICT_SIZE, 16],
        is_sparse=IS_SPARSE,
        param_attr='age_table')

    usr_age_fc = layers.fc(input=usr_age_emb, size=16)

    USR_JOB_DICT_SIZE = paddle.dataset.movielens.max_job_id() + 1

    usr_job_id = layers.data(name='job_id', shape=[1], dtype="int64")

    usr_job_emb = layers.embedding(
        input=usr_job_id,
        size=[USR_JOB_DICT_SIZE, 16],
        param_attr='job_table',
        is_sparse=IS_SPARSE)

    usr_job_fc = layers.fc(input=usr_job_emb, size=16)

    concat_embed = layers.concat(
        input=[usr_fc, usr_gender_fc, usr_age_fc, usr_job_fc], axis=1)

    usr_combined_features = layers.fc(input=concat_embed, size=200, act="tanh")

    return usr_combined_features
```
As the code above shows, for each user we take four features as input: user_id, gender_id, age_id, and job_id, all simple integer values. To make these features easier for the neural network to handle, we borrow the language-model idea from NLP and map each discrete integer into an embedding, obtaining usr_emb, usr_gender_emb, usr_age_emb, and usr_job_emb respectively.

Then we feed all the user features into a fully connected layer (fc), fusing them into a single 200-dimensional feature.

Next, we apply a similar transformation to each movie feature; the network configuration is:
```python
def get_mov_combined_features():

    MOV_DICT_SIZE = paddle.dataset.movielens.max_movie_id() + 1

    mov_id = layers.data(name='movie_id', shape=[1], dtype='int64')

    mov_emb = layers.embedding(
        input=mov_id,
        dtype='float32',
        size=[MOV_DICT_SIZE, 32],
        param_attr='movie_table',
        is_sparse=IS_SPARSE)

    mov_fc = layers.fc(input=mov_emb, size=32)

    CATEGORY_DICT_SIZE = len(paddle.dataset.movielens.movie_categories())

    category_id = layers.data(
        name='category_id', shape=[1], dtype='int64', lod_level=1)

    mov_categories_emb = layers.embedding(
        input=category_id, size=[CATEGORY_DICT_SIZE, 32], is_sparse=IS_SPARSE)

    mov_categories_hidden = layers.sequence_pool(
        input=mov_categories_emb, pool_type="sum")

    MOV_TITLE_DICT_SIZE = len(paddle.dataset.movielens.get_movie_title_dict())

    mov_title_id = layers.data(
        name='movie_title', shape=[1], dtype='int64', lod_level=1)

    mov_title_emb = layers.embedding(
        input=mov_title_id, size=[MOV_TITLE_DICT_SIZE, 32], is_sparse=IS_SPARSE)

    mov_title_conv = nets.sequence_conv_pool(
        input=mov_title_emb,
        num_filters=32,
        filter_size=3,
        act="tanh",
        pool_type="sum")

    concat_embed = layers.concat(
        input=[mov_fc, mov_categories_hidden, mov_title_conv], axis=1)

    mov_combined_features = layers.fc(input=concat_embed, size=200, act="tanh")

    return mov_combined_features
```
电影标题名称(title)是一个序列的整数,整数代表的是这个词在索引序列中的下标。这个序列会被送入 `sequence_conv_pool` 层,这个层会在时间维度上使用卷积和池化。因为如此,所以输出会是固定长度,尽管输入的序列长度各不相同。
最后,我们定义一个`inference_program`来使用余弦相似度计算用户特征与电影特征的相似性。
```python
def inference_program():
    usr_combined_features = get_usr_combined_features()
    mov_combined_features = get_mov_combined_features()

    inference = layers.cos_sim(X=usr_combined_features, Y=mov_combined_features)
    scale_infer = layers.scale(x=inference, scale=5.0)

    return scale_infer
```
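Since cosine similarity lies in [-1, 1], scaling it by 5 maps the prediction onto the 5-point rating scale. A quick numpy illustration with hypothetical feature vectors (not part of the tutorial):

```python
import numpy as np

u = np.random.rand(200)  # stands in for a user feature vector
m = np.random.rand(200)  # stands in for a movie feature vector
cos = u.dot(m) / (np.linalg.norm(u) * np.linalg.norm(m))
print(5.0 * cos)  # cosine in [-1, 1] scaled to a score in [-5, 5]
```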
Next, we define a `train_program` that takes the result computed by `inference_program` and computes the error against the labeled data. We also define an `optimizer_func` to specify the optimizer.
```python
def train_program():

    scale_infer = inference_program()

    label = layers.data(name='score', shape=[1], dtype='float32')
    square_cost = layers.square_error_cost(input=scale_infer, label=label)
    avg_cost = layers.mean(square_cost)

    return [avg_cost, scale_infer]


def optimizer_func():
    return fluid.optimizer.SGD(learning_rate=0.2)
```
## Training the Model

### Define the Training Environment
Define your training environment; you can specify whether the training runs on CPU or GPU.
```python
use_cuda = False
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
```
### Define the Data Providers
The next step is to define data providers for training and testing. Each provider reads in data of size `BATCH_SIZE`. `paddle.dataset.movielens.train` provides a shuffled batch of size `BATCH_SIZE` each time, where the shuffle buffer has size `buf_size`.
```python
train_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.movielens.train(), buf_size=8192),
    batch_size=BATCH_SIZE)

test_reader = paddle.batch(
    paddle.dataset.movielens.test(), batch_size=BATCH_SIZE)
```
### Create the Trainer
The trainer needs a training program and an optimizer function.
```python
trainer = fluid.Trainer(
    train_func=train_program, place=place, optimizer_func=optimizer_func)
```
### Feeding Data
`feed_order` defines the mapping between each column of the generated data and the corresponding data layer. For example, the first column produced by `movielens.train` corresponds to the `user_id` feature.
```python
feed_order = [
    'user_id', 'gender_id', 'age_id', 'job_id', 'movie_id', 'category_id',
    'movie_title', 'score'
]
```
### Event Handler
The callback function `event_handler` is invoked after a predefined event occurs. For example, we can inspect the loss after each training step.
```python
# Specify the directory path to save the parameters
params_dirname = "recommender_system.inference.model"

from paddle.v2.plot import Ploter
test_title = "Test cost"
plot_cost = Ploter(test_title)

def event_handler(event):
    if isinstance(event, fluid.EndStepEvent):
        avg_cost_set = trainer.test(
            reader=test_reader, feed_order=feed_order)

        # get avg cost
        avg_cost = np.array(avg_cost_set).mean()

        plot_cost.append(test_title, event.step, avg_cost_set[0])
        plot_cost.plot()

        print("avg_cost: %s" % avg_cost)
        print('BatchID {0}, Test Loss {1:0.2}'.format(event.epoch + 1,
                                                      float(avg_cost)))

        if event.step == 20:  # Adjust this number for accuracy
            trainer.save_params(params_dirname)
            trainer.stop()
```
### Start Training
Finally, we pass in the number of training epochs (`num_epochs`) and some other arguments, and call `trainer.train` to start training.
```python
trainer.train(
    num_epochs=1,
    event_handler=event_handler,
    reader=train_reader,
    feed_order=feed_order)
```
## Applying the Model

### Build the Inferencer
Initialize an inferencer with `inference_program` and `params_dirname`, where `params_dirname` stores the parameters produced during the training process.
```python
inferencer = fluid.Inferencer(
    inference_program, param_path=params_dirname, place=place)
```
### Generate Test Input Data
Use the `create_lod_tensor(data, lod, place)` API to generate a level-of-detail (LoD) tensor. `data` is a list of sequences, each of which is a list of index values; `lod` is the level-of-detail information corresponding to `data`. For example, data = [[10, 2, 3], [2, 3]] contains two sequences of lengths 3 and 2, so correspondingly lod = [[3, 2]]: it contains one level of detail, meaning that `data` consists of two sequences of lengths 3 and 2.
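As a concrete illustration of the example just given, using the same API as the code below:

```python
# Two sequences of lengths 3 and 2 packed into one LoDTensor;
# lod=[[3, 2]] records the per-sequence lengths.
two_seqs = fluid.create_lod_tensor([[10, 2, 3], [2, 3]], [[3, 2]], place)
```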
In this prediction example, we try to predict the rating that the user with ID 1 would give to the movie 'Hunchback of Notre Dame'.
```python
infer_movie_id = 783
infer_movie_name = paddle.dataset.movielens.movie_info()[infer_movie_id].title
user_id = fluid.create_lod_tensor([[1]], [[1]], place)
gender_id = fluid.create_lod_tensor([[1]], [[1]], place)
age_id = fluid.create_lod_tensor([[0]], [[1]], place)
job_id = fluid.create_lod_tensor([[10]], [[1]], place)
movie_id = fluid.create_lod_tensor([[783]], [[1]], place)  # Hunchback of Notre Dame
category_id = fluid.create_lod_tensor([[10, 8, 9]], [[3]], place)  # Animation, Children's, Musical
movie_title = fluid.create_lod_tensor([[1069, 4140, 2923, 710, 988]], [[5]],
                                      place)  # 'hunchback','of','notre','dame','the'
```
### Run the Inference
Now we can run the inference. The `feed_order` we provide should be consistent with the one used during training.
```python
results = inferencer.infer(
    {
        'user_id': user_id,
        'gender_id': gender_id,
        'age_id': age_id,
        'job_id': job_id,
        'movie_id': movie_id,
        'category_id': category_id,
        'movie_title': movie_title
    },
    return_numpy=False)
```
## Summary
......
@@ -447,6 +447,11 @@ For example, we can check the cost by `trainer.test` when `EndStepEvent` occurs
# Specify the directory path to save the parameters
params_dirname = "recommender_system.inference.model"

from paddle.v2.plot import Ploter
test_title = "Test cost"
plot_cost = Ploter(test_title)

def event_handler(event):
    if isinstance(event, fluid.EndStepEvent):
        avg_cost_set = trainer.test(
@@ -455,11 +460,14 @@ def event_handler(event):
        # get avg cost
        avg_cost = np.array(avg_cost_set).mean()

        plot_cost.append(test_title, event.step, avg_cost_set[0])
        plot_cost.plot()

        print("avg_cost: %s" % avg_cost)
        print('BatchID {0}, Test Loss {1:0.2}'.format(event.epoch + 1,
                                                      float(avg_cost)))

        if event.step == 20:  # Adjust this number for accuracy
            trainer.save_params(params_dirname)
            trainer.stop()
......
@@ -129,9 +129,8 @@ $$ h_t=Recurrent(x_t,h_{t-1})$$
Figure 3. Stacked bidirectional LSTM for text classification
</p>
## Dataset
We use the [IMDB sentiment analysis dataset](http://ai.stanford.edu/%7Eamaas/data/sentiment/) as an example. The training and test sets of the IMDB dataset each contain 25,000 labeled movie reviews: a negative review scores no more than 4 and a positive review scores no less than 7, on a 10-point scale.
```text
@@ -145,95 +144,70 @@ aclImdb
```
Paddle implements automatic downloading and reading of the IMDB dataset in `dataset/imdb.py`, and provides APIs for reading the dictionary, training data, test data, and so on.

## Model Configuration
In this example we implement two text classification algorithms: one based on the text convolutional neural network introduced in the [Recommender System](https://github.com/PaddlePaddle/book/tree/develop/05.recommender_system) chapter, and one based on the [stacked bidirectional LSTM](#栈式双向LSTM(Stacked Bidirectional LSTM)). First we import the required libraries and define the global variables:
```python
import paddle
import paddle.fluid as fluid
from functools import partial
import numpy as np

CLASS_DIM = 2
EMB_DIM = 128
HID_DIM = 512
BATCH_SIZE = 128
USE_GPU = False
```
### Text Convolutional Neural Network
We build the neural network `convolution_net`; the example code is shown below.

Note that `fluid.nets.sequence_conv_pool` comprises both the convolution and the pooling operation.
```python
def convolution_net(data, input_dim, class_dim, emb_dim, hid_dim):
    emb = fluid.layers.embedding(
        input=data, size=[input_dim, emb_dim], is_sparse=True)
    conv_3 = fluid.nets.sequence_conv_pool(
        input=emb,
        num_filters=hid_dim,
        filter_size=3,
        act="tanh",
        pool_type="sqrt")
    conv_4 = fluid.nets.sequence_conv_pool(
        input=emb,
        num_filters=hid_dim,
        filter_size=4,
        act="tanh",
        pool_type="sqrt")
    prediction = fluid.layers.fc(
        input=[conv_3, conv_4], size=class_dim, act="softmax")
    return prediction
```
The network input `input_dim` denotes the size of the dictionary, and `class_dim` denotes the number of categories. Here, we implement the convolution and pooling operations with the [`sequence_conv_pool`](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/trainer_config_helpers/networks.py) API.
### Stacked Bidirectional LSTM
The code snippet of the stacked bidirectional network `stacked_lstm_net` is as follows:
```python
def stacked_lstm_net(data, input_dim, class_dim, emb_dim, hid_dim, stacked_num):

    emb = fluid.layers.embedding(
        input=data, size=[input_dim, emb_dim], is_sparse=True)

    fc1 = fluid.layers.fc(input=emb, size=hid_dim)
    lstm1, cell1 = fluid.layers.dynamic_lstm(input=fc1, size=hid_dim)

    inputs = [fc1, lstm1]

    for i in range(2, stacked_num + 1):
        fc = fluid.layers.fc(input=inputs, size=hid_dim)
        lstm, cell = fluid.layers.dynamic_lstm(
            input=fc, size=hid_dim, is_reverse=(i % 2) == 0)
        inputs = [fc, lstm]

    fc_last = fluid.layers.sequence_pool(input=inputs[0], pool_type='max')
    lstm_last = fluid.layers.sequence_pool(input=inputs[1], pool_type='max')

    prediction = fluid.layers.fc(
        input=[fc_last, lstm_last], size=class_dim, act='softmax')
    return prediction
```
The stacked bidirectional LSTM above extracts high-level features and maps them to a vector whose size equals the number of classification categories; the `softmax` activation computes the probability that a sample belongs to each category.

To restate: either `convolution_net` or `stacked_lstm_net` can be called here; below we use `convolution_net` as the example.

Next we define the inference program (`inference_program`), which uses `convolution_net` to make predictions on the `fluid.layers.data` input.
```python
def inference_program(word_dict):
    data = fluid.layers.data(
        name="words", shape=[1], dtype="int64", lod_level=1)

    dict_dim = len(word_dict)
    net = convolution_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM)
    return net
```
Here we define `train_program`, which uses the result returned by `inference_program` to compute the loss. We also define the optimizer function `optimizer_func`.

Since this is supervised learning, the labels of the training set are also defined in `fluid.layers.data`. During training, cross-entropy is used as the loss function in `fluid.layers.cross_entropy`.

During testing, the classifier computes the probability of each output; by convention, the first returned value is the cost.
```python
def train_program(word_dict):
    prediction = inference_program(word_dict)
    label = fluid.layers.data(name="label", shape=[1], dtype="int64")
    cost = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_cost = fluid.layers.mean(cost)
    accuracy = fluid.layers.accuracy(input=prediction, label=label)
    return [avg_cost, accuracy]


def optimizer_func():
    return fluid.optimizer.Adagrad(learning_rate=0.002)
```
## Training the Model

### Define the Training Environment
Define whether your training runs on CPU or GPU:
```python
use_cuda = False
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
```
### Define the Data Provider
The next step is to define data providers for training and testing. The provider reads in data of size `BATCH_SIZE`. `paddle.dataset.imdb.train` provides a shuffled batch of size `BATCH_SIZE` each time, where the shuffle buffer has size `buf_size`.

Note: reading the IMDB data may take a few minutes; please be patient.
```python
print("Loading IMDB word dict....")
word_dict = paddle.dataset.imdb.word_dict()

print("Reading training data....")
train_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.imdb.train(word_dict), buf_size=25000),
    batch_size=BATCH_SIZE)
```
### Create the Trainer
The trainer needs a training program and an optimizer function.
```python
trainer = fluid.Trainer(
    train_func=partial(train_program, word_dict),
    place=place,
    optimizer_func=optimizer_func)
```
### Feeding Data
`feed_order` defines the mapping between each column of the generated data and the corresponding data layer. For example, the first column produced by `imdb.train` corresponds to the `words` feature.
```python
feed_order = ['words', 'label']
```
### Event Handler
The callback function `event_handler` is invoked after a predefined event occurs. For example, we can inspect the loss after each training step.
```python
# Specify the directory path to save the parameters
params_dirname = "understand_sentiment_conv.inference.model"

def event_handler(event):
    if isinstance(event, fluid.EndStepEvent):
        print("Step {0}, Epoch {1} Metrics {2}".format(
            event.step, event.epoch, map(np.array, event.metrics)))

        if event.step == 10:
            trainer.save_params(params_dirname)
            trainer.stop()
```
### Start Training
Finally, we pass in the number of training epochs (`num_epochs`) and some other arguments, and call `trainer.train` to start training.
```python
trainer.train(
    num_epochs=1,
    event_handler=event_handler,
    reader=train_reader,
    feed_order=feed_order)
```
## Applying the Model

### Build the Inferencer
Initialize an inferencer with `inference_program` and `params_dirname`, where `params_dirname` stores the parameters produced during the training process.
```python
inferencer = fluid.Inferencer(
    infer_func=partial(inference_program, word_dict),
    param_path=params_dirname,
    place=place)
```
### Generate Test Input Data
To perform inference, we arbitrarily pick three reviews (feel free to pick three of your own). We map each word in a review to its id in `word_dict`; if a word is not in the dictionary, it is set to `unknown`.

Then we use `create_lod_tensor` to create the level-of-detail tensor.
```python
reviews_str = [
    'read the book forget the movie', 'this is a great movie', 'this is very bad'
]
reviews = [c.split() for c in reviews_str]

UNK = word_dict['<unk>']
lod = []
for c in reviews:
    lod.append([word_dict.get(words, UNK) for words in c])

base_shape = [[len(c) for c in lod]]

tensor_words = fluid.create_lod_tensor(lod, base_shape, place)
```
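Note the slightly confusing naming above: `lod` holds the data itself (one id list per review) while `base_shape` holds the per-review lengths. For the three reviews above:

```python
print(base_shape)  # [[6, 5, 4]]: token counts of the three reviews
```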
### Run the Inference
Now we can predict, for each review, whether it is positive or negative.
```python
results = inferencer.infer({'words': tensor_words})

for i, r in enumerate(results[0]):
    print("Predict probability of ", r[0], " to be positive and ", r[1],
          " to be negative for review \'", reviews_str[i], "\'")
```
......
@@ -25,6 +25,7 @@ BATCH_SIZE = 10
embedding_name = 'emb'


def load_parameter(file_name, h, w):
    with open(file_name, 'rb') as f:
        f.read(16)  # skip header.
@@ -52,8 +53,8 @@ def db_lstm(word, predicate, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2, mark,
        fluid.layers.embedding(
            size=[word_dict_len, word_dim],
            input=x,
            param_attr=fluid.ParamAttr(name=embedding_name, trainable=False))
        for x in word_input
    ]
    emb_layers.append(predicate_embedding)
    emb_layers.append(mark_embedding)
@@ -125,8 +126,7 @@ def train(use_cuda, save_dirname=None, is_local=True):
    crf_cost = fluid.layers.linear_chain_crf(
        input=feature_out,
        label=target,
        param_attr=fluid.ParamAttr(name='crfw', learning_rate=mix_hidden_lr))

    avg_cost = fluid.layers.mean(crf_cost)
@@ -143,13 +143,11 @@ def train(use_cuda, save_dirname=None, is_local=True):
        input=feature_out, param_attr=fluid.ParamAttr(name='crfw'))

    train_data = paddle.batch(
        paddle.reader.shuffle(paddle.dataset.conll05.test(), buf_size=8192),
        batch_size=BATCH_SIZE)

    place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()

    feeder = fluid.DataFeeder(
        feed_list=[
            word, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2, predicate, mark, target
@@ -169,16 +167,15 @@ def train(use_cuda, save_dirname=None, is_local=True):
        batch_id = 0
        for pass_id in xrange(PASS_NUM):
            for data in train_data():
                cost = exe.run(
                    main_program, feed=feeder.feed(data), fetch_list=[avg_cost])
                cost = cost[0]

                if batch_id % 10 == 0:
                    print("avg_cost:" + str(cost))
                    if batch_id != 0:
                        print("second per batch: " + str((
                            time.time() - start_time) / batch_id))
                    # Set the threshold low to speed up the CI test
                    if float(cost) < 60.0:
                        if save_dirname is not None:
@@ -252,7 +249,8 @@ def infer(use_cuda, save_dirname=None):
        assert feed_target_names[6] == 'ctx_p2_data'
        assert feed_target_names[7] == 'mark_data'

        results = exe.run(
            inference_program,
            feed={
                feed_target_names[0]: word,
                feed_target_names[1]: pred,
@@ -282,4 +280,3 @@ def main(use_cuda, is_local=True):
    main(use_cuda=False)