Commit 3a5ab5d9, authored by helinwang, committed by GitHub

Merge pull request #160 from wangkuiyi/auto_convert_html_and_ipynb_in_pre_commit

Auto convert html and ipynb in pre commit
+deprecated
+*~
 pandoc.template
 .DS_Store
\ No newline at end of file
@@ -10,11 +10,25 @@
   - id: check-symlinks
   - id: detect-private-key
   - id: end-of-file-fixer
+    files: \.md$
   - id: trailing-whitespace
+    files: \.md$
 - repo: git://github.com/Lucas-C/pre-commit-hooks
   sha: v1.0.1
   hooks:
   - id: forbid-crlf
+    files: \.md$
   - id: remove-crlf
+    files: \.md$
   - id: forbid-tabs
+    files: \.md$
   - id: remove-tabs
+    files: \.md$
+- repo: local
+  hooks:
+  - id: convert-markdown-into-html
+    name: convert-markdown-into-html
+    description: "Convert README.md into index.html and README.en.md into index.en.html"
+    entry: python pre-commit-hooks/convert_markdown_into_html.py
+    language: system
+    files: \.md$
#!/bin/bash
for i in $(du -a | grep '\.\/.\+\/README.md' | cut -f 2); do
    .tmpl/convert-markdown-into-html.sh $i > $(dirname $i)/index.html
done

for i in $(du -a | grep '\.\/.\+\/README.en.md' | cut -f 2); do
    .tmpl/convert-markdown-into-html.sh $i > $(dirname $i)/index.en.html
done
@@ -202,4 +202,4 @@ This chapter introduces *Linear Regression* and how to train and test this model
 4. Bishop C M. Pattern recognition[J]. Machine Learning, 2006, 128.
 <br/>
-<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> is created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
+<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Common Creative License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a> This tutorial was created and published with [Creative Common License 4.0](http://creativecommons.org/licenses/by-nc-sa/4.0/).
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -56,8 +57,8 @@ $$y_i = \omega_1x_{i1} + \omega_2x_{i2} + \ldots + \omega_dx_{id} + b, i=1,\ldo
 ## Demonstration
 We train and test the model on the Boston house price dataset from the [UCI Housing Data Set](https://archive.ics.uci.edu/ml/datasets/Housing). The scatter plot below shows the model's predictions for some of the houses: each point's x-coordinate is the median of the real prices of one class of houses, and its y-coordinate is the price the linear regression model predicts from the features. A point falls on the dotted line exactly when the two values are equal, so the more accurate the model, the closer its points lie to the dotted line.
 <p align="center">
 <img src = "image/predictions.png" width=400><br/>
 Figure 1. Predicted value vs. actual value
 </p>

 ## Model overview
@@ -86,14 +87,25 @@ $$MSE=\frac{1}{n}\sum_{i=1}^{n}{(\hat{Y_i}-Y_i)}^2$$
 3. Backpropagate the error according to the loss function ([backpropagation](https://en.wikipedia.org/wiki/Backpropagation)), passing it back from the output layer through the network and updating the network parameters.
 4. Repeat steps 2-3 until the training error reaches the required level or the number of training passes reaches the preset value.
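As a quick numeric check of the MSE formula in the hunk header above, here is a tiny worked example (illustrative only, not part of the commit):

```python
# MSE of predictions [2.0, 0.5] against labels [1.0, 1.0]:
y_hat = [2.0, 0.5]
y = [1.0, 1.0]
mse = sum((a - b) ** 2 for a, b in zip(y_hat, y)) / len(y)
print mse  # ((2.0-1.0)**2 + (0.5-1.0)**2) / 2 = 0.625
```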
-## Data preparation
-Run the following command to prepare the data:
-```bash
-cd data && python prepare_data.py
-```
-This downloads the data from the [UCI Housing Data Set](https://archive.ics.uci.edu/ml/datasets/Housing), [preprocesses](#数据预处理) it, and splits it into a training set and a test set.
+## Dataset
+
+### Dataset wrapper
+
+First, load the packages we need:
+
+```python
+import paddle.v2 as paddle
+import paddle.v2.dataset.uci_housing as uci_housing
+```
+
+The uci_housing module gives us access to the [UCI Housing Data Set](https://archive.ics.uci.edu/ml/datasets/Housing). It encapsulates:
+
+1. downloading the data, which is saved to ~/.cache/paddle/dataset/uci_housing/housing.data;
+2. the [preprocessing](#数据预处理) step.
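To see what the wrapped readers yield, one can iterate over a few samples directly. This sketch is illustrative and not part of the commit; it assumes each sample is a (features, price) pair, as the training call further below suggests:

```python
import paddle.v2.dataset.uci_housing as uci_housing

# uci_housing.train() returns a reader; calling the reader returns a
# generator over individual samples.
for i, (features, price) in enumerate(uci_housing.train()()):
    print len(features), price   # 13 normalized attributes, median price
    if i == 2:                   # peek at the first three samples only
        break
```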
+### The dataset
 The dataset consists of 506 rows. Each row describes a class of houses in suburban Boston, together with the median price of houses of that class. The meaning of each attribute is:

 | Attribute name | Meaning | Type |
@@ -126,94 +138,94 @@ cd data && python prepare_data.py
 - Many machine learning techniques and models (for example L1/L2 regularization and vector space models) rest on the assumption that all attributes are roughly zero-mean and share value ranges of similar magnitude.

 <p align="center">
 <img src = "image/ranges.png" width=550><br/>
 Figure 2. Value ranges of the attributes
 </p>

 #### Splitting the training and test sets
-We split the dataset into two parts: one for fitting the model's parameters, i.e. for training; the model's error on it is called the **training error**. The other is held out for testing; the model's error on it is called the **test error**. Since we train a model to find regularities in the training data that predict new, unseen data, the test error better reflects how well the model generalizes. Two factors govern the split ratio: more training data lowers the variance of the parameter estimates, giving a more trustworthy model, while more test data lowers the variance of the test error, giving a more trustworthy test error. A common split ratio is $8:2$; interested readers can also try other settings and watch how the two errors change.
+We split the dataset into two parts: one for fitting the model's parameters, i.e. for training; the model's error on it is called the **training error**. The other is held out for testing; the model's error on it is called the **test error**. Since we train a model to find regularities in the training data that predict new, unseen data, the test error better reflects how well the model generalizes. Two factors govern the split ratio: more training data lowers the variance of the parameter estimates, giving a more trustworthy model, while more test data lowers the variance of the test error, giving a more trustworthy test error. In this example we use a split ratio of $8:2$.
+
+When training more complex models, we often use one more dataset: the validation set. Complex models usually come with hyperparameters ([Hyperparameter](https://en.wikipedia.org/wiki/Hyperparameter_optimization)) that need tuning, so we train several models under different hyperparameter combinations, pick the combination that performs best on the validation set, and only then evaluate the chosen model's test error on the test set. Since the model in this chapter is fairly simple, we skip this step for now.
+
+## Training
+
+`fit_a_line/trainer.py` demonstrates the overall training workflow.
+
+### Initialize PaddlePaddle
-Run the following command to split the dataset and write the locations of the training and test sets into train.list and test.list for PaddlePaddle to read:
 ```python
-python prepare_data.py -r 0.8  # split with the default ratio of 8:2
+paddle.init(use_gpu=False, trainer_count=1)
 ```
-When training more complex models, we often use one more dataset: the validation set. Complex models usually come with hyperparameters ([Hyperparameter](https://en.wikipedia.org/wiki/Hyperparameter_optimization)) that need tuning, so we train several models under different hyperparameter combinations, pick the combination that performs best on the validation set, and only then evaluate the chosen model's test error on the test set. Since the model in this chapter is fairly simple, we skip this step for now.
-
-### Feeding data to PaddlePaddle
-Once the data is ready, we use a Python data provider to feed it to PaddlePaddle's training process. A data provider is a Python function that the PaddlePaddle training process calls; in this example it only needs to read the saved data and return it to the training process line by line.
-```python
-from paddle.trainer.PyDataProvider2 import *
-import numpy as np
-# define the types and dimensionality of the data
-@provider(input_types=[dense_vector(13), dense_vector(1)])
-def process(settings, input_file):
-    data = np.load(input_file.strip())
-    for row in data:
-        yield row[:-1].tolist(), row[-1:].tolist()
-```
-
-## Model configuration
-### Data definition
-First, `define_py_data_sources2` configures PaddlePaddle to read training and test data from the `dataprovider.py` above. PaddlePaddle accepts configuration from the command line; here, for example, we pass in a variable named `is_predict` to switch the model between its training and inference structures.
-```python
-from paddle.trainer_config_helpers import *
-is_predict = get_config_arg('is_predict', bool, False)
-
-define_py_data_sources2(
-    train_list='data/train.list',
-    test_list='data/test.list',
-    module='dataprovider',
-    obj='process')
-```
-
-### Algorithm configuration
-Next, we specify the details of the optimization algorithm. Since linear regression is a simple model, we only need the basic `batch_size`, which sets how many samples are used to compute the gradient for each parameter update.
-```python
-settings(batch_size=2)
-```
+### Model configuration
+
+Linear regression is simply a fully-connected layer (`fc_layer`) with linear activation (`LinearActivation`):
+
+```python
+x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(13))
+y_predict = paddle.layer.fc(input=x,
+                            size=1,
+                            act=paddle.activation.Linear())
+y = paddle.layer.data(name='y', type=paddle.data_type.dense_vector(1))
+cost = paddle.layer.regression_cost(input=y_predict, label=y)
+```
+
+### Create the parameters
+
+```python
+parameters = paddle.parameters.create(cost)
+```
+
+### Create the trainer
+
+```python
+optimizer = paddle.optimizer.Momentum(momentum=0)
+trainer = paddle.trainer.SGD(cost=cost,
+                             parameters=parameters,
+                             update_equation=optimizer)
+```
+
+### Read data and print intermediate training information
+
+PaddlePaddle provides a
+[reader mechanism](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/design/reader)
+for reading data. A reader can return multiple columns per sample, so we need a
+Python dict that maps column indices to the data layers of the network.
+
+```python
+feeding={'x': 0, 'y': 1}
+```
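To make the reader-and-feeding contract concrete, here is an illustrative toy reader shaped like the uci_housing ones (an assumption for illustration, not part of the commit): each sample carries two columns, and the `feeding` dict above maps layer `'x'` to column 0 and layer `'y'` to column 1.

```python
# Illustrative only: a reader is a callable returning an iterable of
# samples; each sample here is (features, label) -- columns 0 and 1,
# matching feeding={'x': 0, 'y': 1}.
def toy_reader():
    for i in range(4):
        yield [float(i)] * 13, [13.0 * i]

# paddle.batch wraps a reader so it yields mini-batches instead of
# single samples, as done with uci_housing.train() further below.
batched_reader = paddle.batch(toy_reader, batch_size=2)
```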
-### Network structure
-Finally, `fc_layer` and `LinearActivation` express the linear regression model itself.
-```python
-# input data: the 13-dimensional housing information
-x = data_layer(name='x', size=13)
-
-y_predict = fc_layer(
-    input=x,
-    param_attr=ParamAttr(name='w'),
-    size=1,
-    act=LinearActivation(),
-    bias_attr=ParamAttr(name='b'))
-
-if not is_predict:  # during training, use MSE, i.e. regression_cost, as the loss
-    y = data_layer(name='y', size=1)
-    cost = regression_cost(input=y_predict, label=y)
-    outputs(cost)  # output the MSE during training to monitor the loss
-else:  # at inference time, output the predicted values
-    outputs(y_predict)
-```
+In addition, we can provide an event handler to print the training progress:
+
+```python
+# event_handler to print training and testing info
+def event_handler(event):
+    if isinstance(event, paddle.event.EndIteration):
+        if event.batch_id % 100 == 0:
+            print "Pass %d, Batch %d, Cost %f" % (
+                event.pass_id, event.batch_id, event.cost)
+
+    if isinstance(event, paddle.event.EndPass):
+        result = trainer.test(
+            reader=paddle.batch(
+                uci_housing.test(), batch_size=2),
+            feeding=feeding)
+        print "Test %d, Cost %f" % (event.pass_id, result.cost)
+```
-## Training the model
-In the demo's root directory, run PaddlePaddle's command-line trainer, with `trainer_config.py` as the model configuration, training for 30 passes and saving the results under `output`:
-```bash
-./train.sh
-```
-
-## Applying the model
-Now let's see how to use the trained model for prediction:
-```bash
-python predict.py
-```
-By default this predicts with the model saved under `output/pass-00029`, compares the predicted prices with the real ones, and saves the plot to `predictions.png`.
-To predict with another model or other data, just pass in the new paths:
-```bash
-python predict.py -m output/pass-00020 -t data/housing.test.npy
-```
+### Start training
+
+```python
+trainer.train(
+    reader=paddle.batch(
+        paddle.reader.shuffle(
+            uci_housing.train(), buf_size=500),
+        batch_size=2),
+    feeding=feeding,
+    event_handler=event_handler,
+    num_passes=30)
+```
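The old README's prediction section disappears in this commit without a v2 replacement. As a sketch of what inference could look like, assuming your PaddlePaddle v2 build ships an inference helper with the signature below (hypothetical here, not confirmed by this diff):

```python
# Hedged sketch, not part of this commit: score one test sample with the
# trained parameters; paddle.infer with these arguments is an assumption.
features, actual_price = next(uci_housing.test()())
predicted = paddle.infer(output_layer=y_predict,
                         parameters=parameters,
                         input=[(features,)])
print "predicted:", predicted[0][0], "actual:", actual_price[0]
```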
 ## Summary
@@ -228,6 +240,7 @@ python predict.py -m output/pass-00020 -t data/housing.test.npy
 <br/>
 <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> is created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
 </div>
 <!-- You can change the lines below now. -->
......
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -40,6 +41,7 @@
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
TODO: Write about https://github.com/PaddlePaddle/Paddle/tree/develop/demo/gan
</div>
<!-- You can change the lines below now. -->
......
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -39,6 +40,7 @@
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
</div>
<!-- You can change the lines below now. -->
......
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -589,6 +590,7 @@ Traditional image classification methods involve multiple stages of processing a
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> is created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->
......
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -214,24 +215,24 @@ paddle.init(use_gpu=False, trainer_count=1)
1. Define the data input and its dimensionality

   The network input is defined with `data_layer`; for image classification it carries the pixel information of the image. CIFAR-10 images are 32x32 color images with 3 RGB channels, so the input size is 3072 (3x32x32) and there are 10 classes.

    ```python
    datadim = 3 * 32 * 32
    classdim = 10

    image = paddle.layer.data(
        name="image", type=paddle.data_type.dense_vector(datadim))
    ```

2. Define the core VGG module

    ```python
    net = vgg_bn_drop(image)
    ```

   The input of the VGG core module is the data layer. `vgg_bn_drop` defines a 16-layer VGG network, with a BN layer and a dropout layer after each convolutional layer. Its full definition is:

    ```python
    def vgg_bn_drop(input):
        def conv_block(ipt, num_filter, groups, dropouts, num_channels=None):
            return paddle.networks.img_conv_group(
@@ -260,33 +261,33 @@ paddle.init(use_gpu=False, trainer_count=1)
            layer_attr=paddle.attr.Extra(drop_rate=0.5))
        fc2 = paddle.layer.fc(input=bn, size=512, act=paddle.activation.Linear())
        return fc2
    ```

   2.1. First, a group of convolutions is defined as conv_block: 3x3 kernels and 2x2 pooling windows with stride 2. `groups` sets how many consecutive convolutions each VGG block performs, and `dropouts` gives the dropout probability after each of them. The `img_conv_group` used here is a predefined module in `paddle.networks`, made of several `Conv->BN->ReLU->Dropout` groups followed by one `Pooling`.

   2.2. There are five groups of convolutions, i.e. five conv_blocks: the first and second groups perform two consecutive convolutions, the third to fifth perform three, and the last convolution of each group uses dropout probability 0, i.e. no dropout (see the sketch after this list).

   2.3. Finally, two 512-dimensional fully-connected layers are attached.
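For concreteness, the five conv_block calls inside `vgg_bn_drop` plausibly look like the sketch below; the exact lines sit in a part of the diff that is elided here, so treat the filter counts as an assumption:

```python
# Sketch (elided from this diff view): two convolutions in groups 1-2,
# three in groups 3-5, dropout 0 after the last convolution of each group.
conv1 = conv_block(input, 64, 2, [0.3, 0], 3)
conv2 = conv_block(conv1, 128, 2, [0.4, 0])
conv3 = conv_block(conv2, 256, 3, [0.4, 0.4, 0])
conv4 = conv_block(conv3, 512, 3, [0.4, 0.4, 0])
conv5 = conv_block(conv4, 512, 3, [0.4, 0.4, 0])
```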
3. Define the classifier

   The fully-connected layer maps the high-level features extracted by the VGG network above to a vector whose size equals the number of classes; softmax normalization then turns it into per-class probabilities. This part is also called the classifier.

    ```python
    out = paddle.layer.fc(input=net,
                          size=classdim,
                          act=paddle.activation.Softmax())
    ```

4. Define the loss function and the network output

   Supervised training requires the class label of each input image, declared again via `paddle.layer.data`. Training uses multi-class cross-entropy as the loss function, which becomes the output of the network; at inference time the network output is instead the classifier's probabilities.

    ```python
    lbl = paddle.layer.data(
        name="label", type=paddle.data_type.integer_value(classdim))
    cost = paddle.layer.classification_cost(input=out, label=lbl)
    ```

### ResNet
@@ -535,6 +536,7 @@ Test with Pass 0, {'classification_error_evaluator': 0.885200023651123}
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> is created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->
......
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -39,6 +40,7 @@
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
</div>
<!-- You can change the lines below now. -->
......
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -39,6 +40,7 @@
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
</div>
<!-- You can change the lines below now. -->
......
 <html>
 <head>
-<meta http-equiv="refresh" content="0; url=https://github.com/paddlepaddle/book" />
+<script type="text/x-mathjax-config">
+MathJax.Hub.Config({
+  extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
+  jax: ["input/TeX", "output/HTML-CSS"],
+  tex2jax: {
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
+    processEscapes: true
+  },
+  "HTML-CSS": { availableFonts: ["TeX"] }
+});
+</script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js" async></script>
+<script type="text/javascript" src="../.tmpl/marked.js">
+</script>
+<link href="http://cdn.bootcss.com/highlight.js/9.9.0/styles/darcula.min.css" rel="stylesheet">
+<script src="http://cdn.bootcss.com/highlight.js/9.9.0/highlight.min.js"></script>
+<link href="http://cdn.bootcss.com/bootstrap/4.0.0-alpha.6/css/bootstrap.min.css" rel="stylesheet">
+<link href="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/css/perfect-scrollbar.min.css" rel="stylesheet">
+<link href="../.tmpl/github-markdown.css" rel='stylesheet'>
 </head>
+<style type="text/css" >
+.markdown-body {
+    box-sizing: border-box;
+    min-width: 200px;
+    max-width: 980px;
+    margin: 0 auto;
+    padding: 45px;
+}
+</style>
 <body>
-<a href="https://github.com/paddlepaddle/book">Please access github home page</a>
+<div id="context" class="container markdown-body">
+</div>
+<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
+<div id="markdown" style='display:none'>
+# Introduction to Deep Learning
+1. Getting started [[fit_a_line](fit_a_line/)] [[html](http://book.paddlepaddle.org/fit_a_line)]
+1. Recognize digits [[recognize_digits](recognize_digits/)] [[html](http://book.paddlepaddle.org/recognize_digits)]
+1. Image classification [[image_classification](image_classification/)] [[html](http://book.paddlepaddle.org/image_classification)]
+1. Word embedding [[word2vec](word2vec/)] [[html](http://book.paddlepaddle.org/word2vec)]
+1. Sentiment analysis [[understand_sentiment](understand_sentiment/)] [[html](http://book.paddlepaddle.org/understand_sentiment)]
+1. Semantic role labeling [[label_semantic_roles](label_semantic_roles/)] [[html](http://book.paddlepaddle.org/label_semantic_roles)]
+1. Machine translation [[machine_translation](machine_translation/)] [[html](http://book.paddlepaddle.org/machine_translation)]
+1. Personalized recommendation [[recommender_system](recommender_system/)] [[html](http://book.paddlepaddle.org/recommender_system)]
+
+<br/>
+<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> is created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
+</div>
+<!-- You can change the lines below now. -->
+<script type="text/javascript">
+marked.setOptions({
+  renderer: new marked.Renderer(),
+  gfm: true,
+  breaks: false,
+  smartypants: true,
+  highlight: function(code, lang) {
+    code = code.replace(/&amp;/g, "&")
+    code = code.replace(/&gt;/g, ">")
+    code = code.replace(/&lt;/g, "<")
+    code = code.replace(/&nbsp;/g, " ")
+    return hljs.highlightAuto(code, [lang]).value;
+  }
+});
+document.getElementById("context").innerHTML = marked(
+  document.getElementById("markdown").innerHTML)
+</script>
 </body>
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -63,34 +64,20 @@ Standard SRL system mostly builds on top of Syntactic Analysis and contains five
<div align="center">
-<img src="image/dependency_parsing.png" width = "80%" align=center /><br>
+<img src="image/dependency_parsing_en.png" width = "80%" align=center /><br>
Fig 1. Syntactic parse tree
</div>

-核心关系 (core relation) -> HED
-定中关系 (attribute) -> ATT
-主谓关系 (subject-verb) -> SBV
-状中结构 (adverbial) -> ADV
-介宾关系 (preposition-object) -> POB
-右附加关系 (right adjunct) -> RAD
-动宾关系 (verb-object) -> VOB
-标点 (punctuation) -> WP

However, complete syntactic analysis requires identifying the relations among all constituents, and the performance of SRL is sensitive to the precision of the syntactic analysis, which makes SRL very challenging. To reduce the complexity and still obtain some syntactic structure information, shallow syntactic analysis is often used instead. Shallow syntactic analysis, also called partial parsing or chunking, does not construct the complete parse tree; it only identifies some independent constituents with relatively simple structure, such as verb phrases (chunks). To avoid the difficulty of building a high-accuracy syntactic tree, some work \[[1](#Reference)\] proposed semantic-chunking-based SRL methods, which cast SRL as a sequence tagging problem. Sequence tagging classifies syntactic chunks using the BIO representation: for the tokens forming a chunk of type A, the first token receives the tag B-A (Begin), the remaining ones receive I-A (Inside), and all tokens outside any chunk receive O (Outside).

The BIO representation of the above example is shown in Fig. 2.
<div align="center">
-<img src="image/bio_example.png" width = "90%" align=center /><br>
+<img src="image/bio_example_en.png" width = "90%" align=center /><br>
Fig 2. BIO representation
</div>
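To make the BIO scheme concrete, here is an illustrative tagging of a short sentence in the style of the CoNLL data (an assumed example, not taken from this diff):

```python
# Each B-*/I-* run marks one argument chunk; O marks tokens outside any chunk.
words = ["A", "record", "date", "has", "n't", "been", "set", "."]
tags  = ["B-A1", "I-A1", "I-A1", "O", "B-AM-NEG", "O", "B-V", "O"]
```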
-输入序列 (input sequence) -> input sequence
-语块 (chunk) -> chunk
-标注序列 (label sequence) -> label sequence
-角色 (role) -> role

This example illustrates the simplicity of sequence tagging: (1) shallow syntactic analysis lowers the precision requirement on syntactic analysis; (2) pruning of candidate arguments is removed; (3) argument identification and tagging are finished at the same time. Such unified methods simplify the procedure, reduce the risk of accumulating errors, and further boost performance.

In this tutorial, our SRL system is built as an end-to-end system via a neural network. It takes only text sequences, without using any syntactic parsing results or complex hand-designed features. The public dataset of the [CoNLL-2004 and CoNLL-2005 Shared Tasks](http://www.cs.upc.edu/~srlconll/) serves as the running example: given a sentence with its predicates marked, identify the corresponding arguments and their semantic roles by sequence tagging.
@@ -111,14 +98,11 @@ The operation of a single LSTM cell contains 3 parts: (1) input-to-hidden: map in
Fig. 3 illustrates the final stacked recurrent neural networks.
<p align="center">
-<img src="./image/stacked_lstm.png" width = "40%" align=center><br>
+<img src="./image/stacked_lstm_en.png" width = "40%" align=center><br>
Fig 3. Stacked Recurrent Neural Networks
</p>

-线性变换 (linear transformation) -> linear transformation
-输入层到隐层 (input-to-hidden) -> input-to-hidden

### Bidirectional Recurrent Neural Network

LSTMs can summarize the history of the inputs seen so far, but they cannot see the future. In most NLP tasks, however, the entire sentence is available, so sequence learning can be more effective if the future is encoded as well as the history.
@@ -126,16 +110,11 @@ LSTMs can summarize the history of previous inputs seen up to now, but can not s
To address the drawback above, we can design a bidirectional recurrent neural network with a minor modification: higher LSTM layers process the sequence in the direction opposite to the layer below, i.e. the deep LSTM layers operate left-to-right, right-to-left, left-to-right, ... in depth. From the second layer on, an LSTM layer at time step $t$ can therefore see both the history and the future. Fig. 4 illustrates the bidirectional recurrent neural network.
<p align="center">
-<img src="./image/bidirectional_stacked_lstm.png" width = "60%" align=center><br>
+<img src="./image/bidirectional_stacked_lstm_en.png" width = "60%" align=center><br>
Fig 4. Bidirectional LSTMs
</p>

-线性变换 (linear transformation) -> linear transformation
-输入层到隐层 (input-to-hidden) -> input-to-hidden
-正向处理输出序列 -> process sequence in the forward direction
-反向处理上一层序列 -> process sequence from the previous layer in backward direction

Note that this bidirectional RNN differs from the one proposed by Bengio et al. for machine translation \[[3](#Reference), [4](#Reference)\]. We will introduce another bidirectional RNN in the later chapter on [machine translation](https://github.com/PaddlePaddle/book/blob/develop/machine_translation/README.md).

### Conditional Random Field
@@ -147,7 +126,7 @@ CRF is a probabilistic graph model (undirected) with nodes denoting random varia
Sequence tagging considers only input and output as linear sequences, with no extra dependence assumptions in the graphical model. The graphical model of sequence tagging is therefore a simple chain, which yields the linear-chain conditional random field shown in Fig. 5.
<p align="center">
<img src="./image/linear_chain_crf.png" width = "35%" align=center><br>
Fig 5. Linear Chain Conditional Random Field used in SRL tasks
</p>
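For reference, the textbook form of the distribution that a linear-chain CRF defines over a tag sequence $Y$ given an input sequence $X$ is (a standard formula, not taken from this diff):

$$p(Y|X) = \frac{1}{Z(X)}\exp\left(\sum_{i=1}^{L}\sum_{k}\lambda_k f_k(y_{i-1}, y_i, X, i)\right)$$

where the $f_k$ are feature functions, the $\lambda_k$ their weights, and $Z(X)$ the input-dependent normalizer that makes the probabilities sum to one.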
@@ -196,19 +175,11 @@ After modification, the model is as follows:
4. Take the representation from step 3 as the input of the CRF, with the label sequence as the supervision signal, and perform sequence tagging.
<div align="center">
-<img src="image/db_lstm_network.png" width = "60%" align=center /><br>
+<img src="image/db_lstm_en.png" width = "60%" align=center /><br>
Fig 6. DB-LSTM for SRL tasks
</div>

-论元 (argument) -> argu
-谓词 (predicate) -> pred
-谓词上下文 (predicate context) -> ctx-p
-谓词上下文区域标记 (predicate context region mark) -> $m_r$
-输入 (input) -> input
-原句 (original sentence) -> sentence
-反向LSTM (backward LSTM) -> LSTM Reverse

## Data Preparation
In this tutorial we use the open dataset of the [CoNLL 2005](http://www.cs.upc.edu/~srlconll/) SRL task as an example. Note that the training and development sets of the CoNLL 2005 SRL task are no longer free to download after the competition; currently only the test set is available, comprising 23 sections of the Wall Street Journal and 3 sections of the Brown corpus. In this tutorial we use the WSJ corpus as the training set to explain the model. Since this training set is small, if you want to train a usable neural-network SRL system, consider paying for the full corpus.
@@ -320,7 +291,7 @@ Special note: hidden_dim = 512 means LSTM hidden vector of 128 dimension (512/4)
- 2. The word sequence, predicate, predicate context, and region mark sequence are transformed into embedding vector sequences.

```python
# Since the word vector lookup table is pre-trained, we won't update it this time.
# is_static being True prevents updating the lookup table during training.
@@ -349,7 +320,7 @@ emb_layers.append(mark_embedding)
- 3. 8 LSTM units will be trained in "forward / backward" order.

```python
hidden_0 = paddle.layer.mixed(
    size=hidden_dim,
    bias_attr=std_default,
@@ -542,6 +513,7 @@ Semantic Role Labeling is an important intermediate step in a wide range of natu
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> is created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->
......
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -93,7 +94,7 @@ $$\mbox{[小明]}_{\mbox{Agent}}\mbox{[昨天]}_{\mbox{Time}}\mbox{[晚上]}_\mb
Figure 3 shows the resulting stacked recurrent neural network.
<p align="center">
<img src="./image/stacked_lstm.png" width = "40%" align=center><br>
Figure 3. LSTM-based stacked recurrent neural network
</p>
@@ -104,7 +105,7 @@ $$\mbox{[小明]}_{\mbox{Agent}}\mbox{[昨天]}_{\mbox{Time}}\mbox{[晚上]}_\mb
To overcome this limitation we can design a bidirectional recurrent unit. The idea is simple and direct: make a small modification to the stacked recurrent network of the previous section, stacking multiple LSTM units and letting each layer learn the output sequence of the layer below in alternating order -- forward, backward, forward, ... From the second layer on, the LSTM unit at time $t$ can then always see both the history and the future. Figure 4 shows the structure of the LSTM-based bidirectional recurrent network.
<p align="center">
<img src="./image/bidirectional_stacked_lstm.png" width = "60%" align=center><br>
Figure 4. LSTM-based bidirectional recurrent neural network
</p>
@@ -119,7 +120,7 @@ A CRF is a probabilistic structured model that can be viewed as an undirected probabilistic graphical model
Sequence tagging only needs to treat the input and output as linear sequences, and since the input sequence is merely conditioned on, without any conditional-independence assumption, its elements carry no graph structure. The CRF used in sequence tagging is therefore the chain-structured one shown in Figure 5, called a linear-chain conditional random field.
<p align="center">
<img src="./image/linear_chain_crf.png" width = "35%" align=center><br>
Figure 5. Linear-chain CRF used in sequence tagging
</p>
@@ -163,7 +164,7 @@ $$L(\lambda, D) = - \text{log}\left(\prod_{m=1}^{N}p(Y_m|X_m, W)\right) + C \fra
3. The four embedding sequences from step 2 serve as the input to a bidirectional LSTM; the LSTM learns a feature representation of the input sequence, producing a new representation sequence;
4. The CRF takes the features learned by the LSTM in step 3 as input, with the tag sequence as the supervision signal, and performs sequence tagging;
<div align="center">
<img src="image/db_lstm_network.png" width = "60%" align=center /><br>
Figure 6. Deep bidirectional LSTM model for the SRL task
</div>
@@ -283,7 +284,7 @@ target = paddle.layer.data(name='target', type=d_type(label_dict_len))
- 2. The word sequence, predicate, predicate context, and predicate-context region mark are converted via the vocabulary into sequences of real-valued embedding vectors.

```python
# In this tutorial we load pre-trained word vectors, so is_static=True is set
# is_static=True ensures the lookup table is not updated while training the SRL model
@@ -312,7 +313,7 @@ emb_layers.append(mark_embedding)
- 3. Eight LSTM units learn all input sequences in "forward/backward" order.

```python
hidden_0 = paddle.layer.mixed(
    size=hidden_dim,
    bias_attr=std_default,
@@ -509,6 +510,7 @@ trainer.train(
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> is created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->
......
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -764,6 +765,7 @@ End-to-end neural machine translation is a recently developed way to perform mac
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> is created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->
......
-markdown_file=$1
-
-# Notice: the single-quotes around EOF below make outputs
-# verbatim. c.f. http://stackoverflow.com/a/9870274/724872
-cat <<'EOF'
+import argparse
+import re
+import sys
+
+HEAD = """
 <html>
 <head>
 <script type="text/x-mathjax-config">
@@ -10,8 +10,8 @@ cat <<'EOF'
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -44,11 +44,9 @@ cat <<'EOF'
 <!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
 <div id="markdown" style='display:none'>
-EOF
-
-cat $markdown_file
-
-cat <<'EOF'
+"""
+
+TAIL = """
 </div>
 <!-- You can change the lines below now. -->
@@ -70,4 +68,28 @@ document.getElementById("context").innerHTML = marked(
 document.getElementById("markdown").innerHTML)
 </script>
 </body>
-EOF
+"""
+
+
+def convert_markdown_into_html(argv=None):
+    parser = argparse.ArgumentParser()
+    parser.add_argument('filenames', nargs='*', help='Filenames to fix')
+    args = parser.parse_args(argv)
+
+    retv = 0
+
+    for filename in args.filenames:
+        with open(
+                re.sub(r"README", "index", re.sub(r"\.md$", ".html", filename)),
+                "w") as output:
+            output.write(HEAD)
+            with open(filename) as input:
+                for line in input:
+                    output.write(line)
+            output.write(TAIL)
+
+    return retv
+
+
+if __name__ == '__main__':
+    sys.exit(convert_markdown_into_html())
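As a quick way to exercise the hook outside of pre-commit (an illustrative call; it assumes a README.md exists in the current directory):

```python
# Converts README.md into index.html next to it, wrapped in HEAD/TAIL above.
convert_markdown_into_html(['README.md'])
```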
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -39,6 +40,7 @@
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
</div>
<!-- You can change the lines below now. -->
......
<html>
<head>
<script type="text/x-mathjax-config">
@@ -5,8 +6,8 @@
   extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
-    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
-    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
+    inlineMath: [ ['$','$'] ],
+    displayMath: [ ['$$','$$'] ],
     processEscapes: true
   },
   "HTML-CSS": { availableFonts: ["TeX"] }
@@ -143,7 +144,7 @@ Fig. 6. LeNet-5 Convolutional Neural Network architecture<br/>
For more details on convolutional neural networks, please refer to [this Stanford open course](http://cs231n.github.io/convolutional-networks/) and [this image classification](https://github.com/PaddlePaddle/book/blob/develop/image_classification/README.md) tutorial.

### List of Common Activation Functions
- Sigmoid activation function: $ f(x) = sigmoid(x) = \frac{1}{1+e^{-x}} $

- Tanh activation function: $ f(x) = tanh(x) = \frac{e^x-e^{-x}}{e^x+e^{-x}} $
@@ -266,7 +267,7 @@ trainer = paddle.trainer.SGD(cost=cost,
                             update_equation=optimizer)
```

Then we specify the training data `paddle.dataset.movielens.train()` and the testing data `paddle.dataset.movielens.test()`. These two functions are *reader creators*: once called, they return a *reader*. A reader is a Python function which, once called, returns a Python generator that yields data instances.

Here `shuffle` is a reader decorator. It takes a reader A as its parameter and returns a new reader B; B calls A to read `buffer_size` data instances at a time into a buffer, then shuffles the buffer and yields instances from it. For more thoroughly shuffled data, use a larger buffer size (see the sketch below).
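A minimal sketch of composing these reader decorators (illustrative; the buffer and batch sizes are assumptions, not values from this diff):

```python
import paddle.v2 as paddle

# shuffle buffers 8192 instances at a time and yields them in random
# order; batch then groups the shuffled stream into mini-batches of 256.
train_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.movielens.train(), buf_size=8192),
    batch_size=256)
```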
@@ -338,6 +339,7 @@ This tutorial describes a few basic Deep Learning models viz. Softmax regression
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This book</span> is created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->
<html>
<head>
<script type="text/x-mathjax-config">
...@@ -5,8 +6,8 @@
  extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true
  },
  "HTML-CSS": { availableFonts: ["TeX"] }
...@@ -151,7 +152,7 @@ $$ b_0 = 1 $$
For more details on convolutional neural networks, please refer to the [Stanford open course]( http://cs231n.github.io/convolutional-networks/ ) and the [Image Classification](https://github.com/PaddlePaddle/book/blob/develop/image_classification/README.md) tutorial.
### List of Common Activation Functions
- Sigmoid activation function: $ f(x) = sigmoid(x) = \frac{1}{1+e^{-x}} $
- Tanh activation function: $ f(x) = tanh(x) = \frac{e^x-e^{-x}}{e^x+e^{-x}} $
...@@ -340,6 +341,7 @@ trainer.train(
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> was created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->
<html>
<head>
<script type="text/x-mathjax-config">
...@@ -5,8 +6,8 @@
  extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true
  },
  "HTML-CSS": { availableFonts: ["TeX"] }
...@@ -117,7 +118,7 @@ Figure 3. A hybrid recommendation model.
## Dataset
We use the [MovieLens ml-1m](http://files.grouplens.org/datasets/movielens/ml-1m.zip) dataset to train our model. It contains about one million ratings of 4,000 movies from 6,000 users; each rating is an integer from 1 to 5. Thanks to GroupLens Research for collecting, processing and publishing the dataset.
We don't have to download and preprocess the data manually. Instead, we can use PaddlePaddle's dataset module `paddle.v2.dataset.movielens`.
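For example, a few raw training instances can be inspected directly (a small sketch; the exact field layout of each instance is defined by the dataset module):
```python
import paddle.v2 as paddle

# train() is a reader creator: calling it returns a reader, and
# calling the reader returns a generator over data instances.
reader = paddle.dataset.movielens.train()
for i, instance in enumerate(reader()):
    print instance
    if i >= 2:
        break
```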
...@@ -150,6 +151,7 @@ This tutorial goes over traditional approaches in recommender system and a deep
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> was created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">the PaddlePaddle community</a> and published under the <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->
This diff is collapsed.
<html>
<head>
<script type="text/x-mathjax-config">
...@@ -5,8 +6,8 @@
  extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true
  },
  "HTML-CSS": { availableFonts: ["TeX"] }
...@@ -39,6 +40,7 @@
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
</div>
<!-- You can change the lines below now. -->
<html>
<head>
<script type="text/x-mathjax-config">
...@@ -5,8 +6,8 @@
  extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true
  },
  "HTML-CSS": { availableFonts: ["TeX"] }
...@@ -39,6 +40,7 @@
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
</div>
<!-- You can change the lines below now. -->
<html>
<head>
<script type="text/x-mathjax-config">
...@@ -5,8 +6,8 @@
  extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true
  },
  "HTML-CSS": { availableFonts: ["TeX"] }
...@@ -112,7 +113,7 @@ where $W_{xh}$ is the weight matrix from input to latent; $W_{hh}$ is the latent
In NLP, words are first represented as one-hot vectors and then mapped to embeddings. The embedded features go through an RNN as the input $x_t$ at every time step. Moreover, we can add other layers on top of the RNN, e.g., to build a deep or stacked RNN. Also, the last hidden state can be used as a feature for sentence classification.
### Long Short-Term Memory
For long sequences, training an RNN often suffers from vanishing or exploding gradients \[[6](#)\]. To solve this problem, Hochreiter and Schmidhuber (1997) proposed LSTM (long short-term memory \[[5](#Refernce)\]).
Compared with a simple RNN, the LSTM structure includes a memory cell $c$, an input gate $i$, a forget gate $f$ and an output gate $o$. These gates and the memory cell greatly improve the ability to handle long sequences. We can formulate the LSTM-RNN as a function $F$:
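The gate and cell updates behind $F$ are collapsed in this diff; one common formulation of the standard LSTM (without peephole connections) is:

$$ i_t = \sigma(W_{xi}x_t + W_{hi}h_{t-1} + b_i) $$
$$ f_t = \sigma(W_{xf}x_t + W_{hf}h_{t-1} + b_f) $$
$$ c_t = f_t \odot c_{t-1} + i_t \odot tanh(W_{xc}x_t + W_{hc}h_{t-1} + b_c) $$
$$ o_t = \sigma(W_{xo}x_t + W_{ho}h_{t-1} + b_o) $$
$$ h_t = o_t \odot tanh(c_t) $$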
...@@ -532,6 +533,7 @@ In this chapter, we use sentiment analysis as an example to introduce applying d
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> was created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->
<html>
<head>
<script type="text/x-mathjax-config">
...@@ -5,8 +6,8 @@
  extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true
  },
  "HTML-CSS": { availableFonts: ["TeX"] }
...@@ -55,24 +56,24 @@
<p align="center">表格 1 电影评论情感分析</p> <p align="center">表格 1 电影评论情感分析</p>
在自然语言处理中,情感分析属于典型的**文本分类**问题,即把需要进行情感分析的文本划分为其所属类别。文本分类涉及文本表示和分类方法两个问题。在深度学习的方法出现之前,主流的文本表示方法为词袋模型BOW(bag of words),话题模型等等;分类方法有SVM(support vector machine), LR(logistic regression)等等。 在自然语言处理中,情感分析属于典型的**文本分类**问题,即把需要进行情感分析的文本划分为其所属类别。文本分类涉及文本表示和分类方法两个问题。在深度学习的方法出现之前,主流的文本表示方法为词袋模型BOW(bag of words),话题模型等等;分类方法有SVM(support vector machine), LR(logistic regression)等等。
对于一段文本,BOW表示会忽略其词顺序、语法和句法,将这段文本仅仅看做是一个词集合,因此BOW方法并不能充分表示文本的语义信息。例如,句子“这部电影糟糕透了”和“一个乏味,空洞,没有内涵的作品”在情感分析中具有很高的语义相似度,但是它们的BOW表示的相似度为0。又如,句子“一个空洞,没有内涵的作品”和“一个不空洞而且有内涵的作品”的BOW相似度很高,但实际上它们的意思很不一样。 对于一段文本,BOW表示会忽略其词顺序、语法和句法,将这段文本仅仅看做是一个词集合,因此BOW方法并不能充分表示文本的语义信息。例如,句子“这部电影糟糕透了”和“一个乏味,空洞,没有内涵的作品”在情感分析中具有很高的语义相似度,但是它们的BOW表示的相似度为0。又如,句子“一个空洞,没有内涵的作品”和“一个不空洞而且有内涵的作品”的BOW相似度很高,但实际上它们的意思很不一样。
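The effect is easy to demonstrate with a toy sketch (using hypothetical English stand-ins for the example sentences above): the two sentences disagree in meaning, yet their BOW vectors are nearly identical.
```python
import numpy as np
from collections import Counter

# Hypothetical stand-ins for the two example sentences above.
s1 = "a hollow work without substance".split()
s2 = "a work with substance not hollow".split()

vocab = sorted(set(s1) | set(s2))
bow1, bow2 = Counter(s1), Counter(s2)
v1 = np.array([bow1[w] for w in vocab], dtype=float)
v2 = np.array([bow2[w] for w in vocab], dtype=float)

cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
print "BOW cosine similarity: %.2f" % cos  # ~0.73, high despite opposite meanings
```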
The deep learning models introduced in this chapter overcome these shortcomings of BOW: they map text into a low-dimensional semantic space while taking word order into account, and perform text representation and classification in an end-to-end fashion, with significant performance gains over traditional methods \[[1](#参考文献)\].
## Model Overview
The text representation models used in this chapter are convolutional neural networks and recurrent neural networks together with their extensions; we introduce them in turn below.
### Text Convolutional Neural Network (CNN)
Convolutional neural networks are often used for data with a grid-like topology: an image can be viewed as a two-dimensional grid of pixels, and natural language as a one-dimensional word sequence. CNNs extract a variety of local features and combine and abstract them into higher-level representations, and have proven effective at modeling both images and text.
A CNN is mainly composed of convolution and pooling operations, which can be applied and combined in many flexible ways. In this section we use a simple CNN for text classification as an example \[[1](#参考文献)\], shown in Figure 1:
<p align="center">
<img src="image/text_cnn.png" width = "80%" align="center"/><br/>
Figure 1. CNN model for text classification
</p>
Suppose the sentence to be processed has length $n$, and the word embedding of the $i$-th word is $x_i\in\mathbb{R}^k$, where $k$ is the embedding dimension.
First comes the concatenation of word embeddings: every $h$ consecutive words are concatenated into a window of size $h$, denoted $x_{i:i+h-1}$, the concatenation of the word sequence $x_{i},x_{i+1},\ldots,x_{i+h-1}$, where $i$ is the position of the window's first word in the sentence, ranging from $1$ to $n-h+1$, and $x_{i:i+h-1}\in\mathbb{R}^{hk}$.
Next comes the convolution: a kernel $w\in\mathbb{R}^{hk}$ is applied to the window $x_{i:i+h-1}$ of $h$ words, producing the feature $c_i=f(w\cdot x_{i:i+h-1}+b)$, where $b\in\mathbb{R}$ is a bias term and $f$ a nonlinear activation function such as $sigmoid$. Applying the kernel to all word windows ${x_{1:h},x_{2:h+1},\ldots,x_{n-h+1:n}}$ of the sentence yields a feature map:
...@@ -82,7 +83,7 @@ $$c=[c_1,c_2,\ldots,c_{n-h+1}], c \in \mathbb{R}^{n-h+1}$$
$$\hat c=max(c)$$
In practice, multiple convolution kernels are used; kernels with the same window size are stacked into a matrix (a single kernel's parameter $w$ above corresponds to one row of that matrix) for more efficient computation. We can also use kernels with different window sizes (Figure 1 sketches four kernels, with different colors indicating different window sizes).
Finally, the features obtained from all kernels are concatenated into a fixed-length vector representation of the text; for text classification, connecting this vector to a softmax layer completes the model.
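The convolution and max-over-time pooling just described can be written directly in numpy (a minimal sketch with a single kernel; the sentence matrix and kernel are random placeholders):
```python
import numpy as np

n, k, h = 7, 4, 3           # sentence length, embedding size, window size
x = np.random.randn(n, k)   # word embeddings x_1 .. x_n
w = np.random.randn(h * k)  # one convolution kernel
b = 0.1                     # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Feature map: c_i = f(w . x_{i:i+h-1} + b) for every window of h words
c = np.array([sigmoid(np.dot(w, x[i:i + h].reshape(-1)) + b)
              for i in range(n - h + 1)])

c_hat = c.max()  # max-over-time pooling
print c, c_hat
```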
...@@ -97,12 +98,12 @@ $$\hat c=max(c)$$
$$h_t=f(x_t,h_{t-1})=\sigma(W_{xh}x_t+W_{hh}h_{t-1}+b_h)$$
where $W_{xh}$ is the input-to-hidden weight matrix, $W_{hh}$ the hidden-to-hidden weight matrix, $b_h$ the bias vector of the hidden layer, and $\sigma$ the $sigmoid$ function.
When processing natural language, words (in one-hot representation) are usually first mapped to their word embeddings and then fed to the RNN as the input $x_t$ at each time step. Other layers can be attached to the RNN's hidden layer as needed: for example, the hidden output of one RNN can feed the input of another to build a deep (stacked) RNN, or the hidden state at the last time step can serve as a sentence representation for a downstream classifier.
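A single step of this recurrence is equally short in numpy (a sketch with random weights, for illustration only):
```python
import numpy as np

k, d = 4, 5                   # embedding size, hidden size
W_xh = np.random.randn(d, k)  # input-to-hidden weights
W_hh = np.random.randn(d, d)  # hidden-to-hidden weights
b_h = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_step(x_t, h_prev):
    # h_t = sigma(W_xh x_t + W_hh h_{t-1} + b_h)
    return sigmoid(W_xh.dot(x_t) + W_hh.dot(h_prev) + b_h)

h = np.zeros(d)
for x_t in np.random.randn(6, k):  # a "sentence" of 6 word embeddings
    h = rnn_step(x_t, h)
print h  # last hidden state, usable as a sentence feature
```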
### Long Short-Term Memory (LSTM)
For long sequences, RNN training is prone to vanishing or exploding gradients \[[6](#参考文献)\]. To solve this problem, Hochreiter and Schmidhuber (1997) proposed LSTM (long short-term memory \[[5](#参考文献)\]).
Compared with a simple RNN, the LSTM adds a memory cell $c$, an input gate $i$, a forget gate $f$ and an output gate $o$. Together, these gates and the memory cell greatly improve an RNN's ability to process long sequences. Writing the LSTM-based RNN as a function $F$, its formula is:
...@@ -127,7 +128,7 @@ $$ h_t=Recrurent(x_t,h_{t-1})$$
where $Recurrent$ can denote a simple RNN, a GRU or an LSTM.
### Stacked Bidirectional LSTM
In a forward-running RNN, $h_t$ contains the information of the inputs before time $t$, i.e., the left context. To capture the right context as well, we can use an RNN running in the reverse direction (processing the input backwards). Combining this with the deep-RNN construction (deeper networks tend to yield more abstract, higher-level features), we can build a more powerful LSTM-based stacked bidirectional RNN \[[9](#参考文献)\] to model sequential data.
As shown in Figure 4 (with three layers as an example), odd-numbered LSTM layers run forward and even-numbered layers run backward; each higher LSTM layer takes the LSTM below it and all earlier layers as input. Max pooling over the time dimension of the topmost LSTM sequence yields a fixed-length vector representation of the text (which fully fuses contextual information and deeply abstracts the text); finally, this representation is connected to a softmax layer to build the classification model.
<p align="center"> <p align="center">
...@@ -353,6 +354,7 @@ Test with Pass 0, {'classification_error_evaluator': 0.11432000249624252} ...@@ -353,6 +354,7 @@ Test with Pass 0, {'classification_error_evaluator': 0.11432000249624252}
<br/> <br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="知识共享许可协议" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">本教程</span><a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> 创作,采用 <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">知识共享 署名-非商业性使用-相同方式共享 4.0 国际 许可协议</a>进行许可。 <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="知识共享许可协议" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">本教程</span><a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> 创作,采用 <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">知识共享 署名-非商业性使用-相同方式共享 4.0 国际 许可协议</a>进行许可。
</div> </div>
<!-- You can change the lines below now. --> <!-- You can change the lines below now. -->
<html>
<head>
<script type="text/x-mathjax-config">
...@@ -5,8 +6,8 @@
  extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true
  },
  "HTML-CSS": { availableFonts: ["TeX"] }
...@@ -219,6 +220,7 @@ In information retrieval, the relevance between the query and document keyword c
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> was created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->
<html>
<head>
<script type="text/x-mathjax-config">
...@@ -5,8 +6,8 @@
  extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true
  },
  "HTML-CSS": { availableFonts: ["TeX"] }
...@@ -39,6 +40,7 @@
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
# Word Embedding
The source code of this tutorial is in [book/word2vec](https://github.com/PaddlePaddle/book/tree/develop/word2vec). For first-time users, please refer to the PaddlePaddle [installation guide](http://www.paddlepaddle.org/doc_cn/build_and_install/index.html).
...@@ -72,8 +74,8 @@ $$X = USV^T$$
In this chapter, once the word embeddings are trained, we can use the t-SNE visualization algorithm \[[4](#参考文献)\] to plot a two-dimensional projection of the word features (see the figure below). As the figure shows, semantically related words (such as a, the, these; big, huge) are projected close to each other, while semantically unrelated words (such as say, business; decision, japan) are far apart.
<p align="center">
<img src = "image/2d_similarity.png" width=400><br/>
Figure 1. Two-dimensional projection of word embeddings
</p>
On the other hand, we know that the cosine of two vectors lies in the interval $[-1,1]$: two identical vectors have cosine 1, two mutually perpendicular vectors have cosine 0, and two vectors with opposite directions have cosine -1, i.e., relatedness is proportional to the cosine. So we can also compute the cosine similarity of two word embeddings:
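The similarity formula itself is collapsed in this diff; it is the standard cosine of the angle between two embeddings $a$ and $b$:

$$similarity(a,b) = cos\theta = \frac{a\cdot b}{\left\| a \right\| \left\| b \right\|}$$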
...@@ -126,8 +128,8 @@ $$\frac{1}{T}\sum_t f(w_t, w_{t-1}, ..., w_{t-n+1};\theta) + R(\theta)$$
where $f(w_t, w_{t-1}, ..., w_{t-n+1})$ denotes the conditional probability of the current word $w_t$ given its n-1 preceding words, and $R(\theta)$ is a regularization term.
<p align="center">
<img src="image/nnlm.png" width=500><br/>
Figure 2. N-gram neural network model
</p>
Figure 2 shows the N-gram neural network model; from bottom to top, it consists of the following parts:
...@@ -137,7 +139,7 @@ $$\frac{1}{T}\sum_t f(w_t, w_{t-1}, ..., w_{t-n+1};\theta) + R(\theta)$$
- Then the embeddings of all the words are concatenated into one large vector, and a nonlinear mapping produces a hidden representation of the history words:
$$g=Utanh(\theta^Tx + b_1) + Wx + b_2$$
where $x$ is the large vector concatenating all the word embeddings and representing the textual history; $\theta$, $U$, $b_1$, $b_2$ and $W$ are the parameters connecting the embedding layer to the hidden layer; $g$ denotes the unnormalized probabilities of all output words, and $g_i$ the unnormalized output probability of the $i$-th word in the dictionary.
...@@ -158,8 +160,8 @@
The CBOW model predicts the current word from its context ($N$ words on each side). When N=2, the model is as shown below:
<p align="center">
<img src="image/cbow.png" width=250><br/>
Figure 3. CBOW model
</p>
Specifically, disregarding the order of the context words, CBOW uses the mean of the context words' embeddings to predict the current word, i.e.:
...@@ -173,8 +175,8 @@ $$context = \frac{x_{t-1} + x_{t-2} + x_{t+1} + x_{t+2}}{4}$$
The advantage of CBOW is that it smooths the distribution of the context words in embedding space and removes noise, which makes it effective on small datasets. Skip-gram instead uses one word to predict its context, producing many samples of the current word's context, and so suits larger datasets.
<p align="center">
<img src="image/skipgram.png" width=250><br/>
Figure 4. Skip-gram model
</p>
As shown above, Skip-gram maps the embedding of one word to the embeddings of its $2n$ context words ($n$ words before and after the input word), and sums the classification losses of these $2n$ words obtained via softmax.
...@@ -188,21 +190,21 @@ CBOW的好处是对上下文词语的分布在词向量上进行了平滑,去
<p align="center"> <p align="center">
<table> <table>
<tr> <tr>
<td>训练数据</td> <td>训练数据</td>
<td>验证数据</td> <td>验证数据</td>
<td>测试数据</td> <td>测试数据</td>
</tr> </tr>
<tr> <tr>
<td>ptb.train.txt</td> <td>ptb.train.txt</td>
<td>ptb.valid.txt</td> <td>ptb.valid.txt</td>
<td>ptb.test.txt</td> <td>ptb.test.txt</td>
</tr> </tr>
<tr> <tr>
<td>42068句</td> <td>42068句</td>
<td>3370句</td> <td>3370句</td>
<td>3761句</td> <td>3761句</td>
</tr> </tr>
</table> </table>
</p> </p>
...@@ -228,8 +230,8 @@ dream that one day <e>
The model structure in this configuration is shown below:
<p align="center">
<img src="image/ngram.png" width=400><br/>
Figure 5. N-gram neural network model in the configuration
</p>
First, load the required packages:
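The import block itself is collapsed in this diff; judging from the code below, it would look roughly as follows (the constant values are assumptions, not taken from this page):
```python
import math
import paddle.v2 as paddle

embsize = 32      # word embedding dimension (assumed value)
hiddensize = 256  # hidden layer size (assumed value)
N = 5             # train a 5-gram model (assumed value)
```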
...@@ -266,54 +268,54 @@ def wordemb(inlayer):
- Define the data types and names accepted by the input layers.
```python
paddle.init(use_gpu=False, trainer_count=3)  # initialize PaddlePaddle
word_dict = paddle.dataset.imikolov.build_dict()
dict_size = len(word_dict)
# Each input layer accepts integer data in the range [0, dict_size)
firstword = paddle.layer.data(
    name="firstw", type=paddle.data_type.integer_value(dict_size))
secondword = paddle.layer.data(
    name="secondw", type=paddle.data_type.integer_value(dict_size))
thirdword = paddle.layer.data(
    name="thirdw", type=paddle.data_type.integer_value(dict_size))
fourthword = paddle.layer.data(
    name="fourthw", type=paddle.data_type.integer_value(dict_size))
nextword = paddle.layer.data(
    name="fifthw", type=paddle.data_type.integer_value(dict_size))

Efirst = wordemb(firstword)
Esecond = wordemb(secondword)
Ethird = wordemb(thirdword)
Efourth = wordemb(fourthword)
```
- Concatenate these n-1 word embeddings via concat_layer into one large vector, the textual history feature.
```python
contextemb = paddle.layer.concat(input=[Efirst, Esecond, Ethird, Efourth])
```
- Pass the history feature through a fully connected layer to obtain the hidden-layer representation of the text.
```python
hidden1 = paddle.layer.fc(input=contextemb,
                          size=hiddensize,
                          act=paddle.activation.Sigmoid(),
                          layer_attr=paddle.attr.Extra(drop_rate=0.5),
                          bias_attr=paddle.attr.Param(learning_rate=2),
                          param_attr=paddle.attr.Param(
                              initial_std=1. / math.sqrt(embsize * 8),
                              learning_rate=1))
```
- Map the hidden representation through another fully connected layer to a $|V|$-dimensional vector, and normalize it with softmax to obtain the generation probabilities of the `|V|` words.
```python
predictword = paddle.layer.fc(input=hidden1,
                              size=dict_size,
                              bias_attr=paddle.attr.Param(learning_rate=2),
                              act=paddle.activation.Softmax())
```
- The loss function of the network is multi-class cross entropy, available directly through the `classification_cost` function.
...@@ -329,11 +331,11 @@ cost = paddle.layer.classification_cost(input=predictword, label=nextword)
- Regularization: a technique to prevent overfitting; here we use L2 regularization.
```python
parameters = paddle.parameters.create(cost)
adam_optimizer = paddle.optimizer.Adam(
    learning_rate=3e-3,
    regularization=paddle.optimizer.L2Regularization(8e-4))
trainer = paddle.trainer.SGD(cost, parameters, adam_optimizer)
```
Next, we start the training process. `paddle.dataset.imikolov.train()` and `paddle.dataset.imikolov.test()` provide the training and test datasets, respectively. Each returns a reader — in PaddlePaddle, a reader is a Python function that returns a Python generator each time it is called.
...@@ -341,112 +343,94 @@ cost = paddle.layer.classification_cost(input=predictword, label=nextword)
`paddle.batch` takes a reader as input and returns a batched reader — in PaddlePaddle, a reader yields one training instance at a time, while a batched reader yields a minibatch at a time.
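The distinction can be sketched in plain Python (illustrative only, not PaddlePaddle's actual implementation):
```python
def batch(reader, batch_size):
    # Wrap a reader (yields one instance at a time) into a batched
    # reader (yields a list of batch_size instances at a time).
    def batched_reader():
        b = []
        for instance in reader():
            b.append(instance)
            if len(b) == batch_size:
                yield b
                b = []
        if b:
            yield b  # final, possibly smaller batch
    return batched_reader
```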
```python
import gzip

def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print "Pass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
    if isinstance(event, paddle.event.EndPass):
        result = trainer.test(
            paddle.batch(
                paddle.dataset.imikolov.test(word_dict, N), 32))
        print "Pass %d, Testing metrics %s" % (event.pass_id, result.metrics)
        with gzip.open("model_%d.tar.gz" % event.pass_id, 'w') as f:
            parameters.to_tar(f)

trainer.train(
    paddle.batch(paddle.dataset.imikolov.train(word_dict, N), 32),
    num_passes=100,
    event_handler=event_handler)
```
The training process is fully automatic; the logs printed in event_handler look like the following sample:

    ...
    Pass 0, Batch 25000, Cost 4.251861, {'classification_error_evaluator': 0.84375}
    Pass 0, Batch 25100, Cost 4.847692, {'classification_error_evaluator': 0.8125}
    Pass 0, Testing metrics {'classification_error_evaluator': 0.7417652606964111}

After 30 passes, we get an average error rate of classification_error_evaluator=0.735611.
## Applying the Model
After training, we can load the model parameters and use the trained embeddings to initialize other models, or inspect the parameters for downstream applications.

### Inspecting the Embeddings

Parameters trained by PaddlePaddle can be fetched directly with `parameters.get()`. For example, the embedding of the word `word` is:

```python
embeddings = parameters.get("_proj").reshape(len(word_dict), embsize)

print embeddings[word_dict['word']]
```

    [-0.38961065 -0.02392169 -0.00093231 0.36301503 0.13538605 0.16076435
    -0.0678709 0.1090285 0.42014077 -0.24119169 -0.31847557 0.20410083
    0.04910378 0.19021918 -0.0122014 -0.04099389 -0.16924137 0.1911236
    -0.10917275 0.13068172 -0.23079982 0.42699069 -0.27679482 -0.01472992
    0.2069038 0.09005053 -0.3282454 0.12717034 -0.24218646 0.25304323
    0.19072419 -0.24286366]
### Modifying the Embeddings

The embeddings we obtained form a standard numpy matrix. We can modify this numpy matrix and assign it back to the parameters:

```python
def modify_embedding(emb):
    # Add your modification here.
    pass

modify_embedding(embeddings)
parameters.set("_proj", embeddings)
```
### Computing Cosine Distance Between Words

The distance between two vectors can be expressed by their cosine, which lies in $[-1,1]$: the larger the cosine between two vectors, the closer they are. In `calculate_dis.py` we implement this distance measure between different words.
Usage:

```python
from scipy import spatial

emb_1 = embeddings[word_dict['world']]
emb_2 = embeddings[word_dict['would']]

print spatial.distance.cosine(emb_1, emb_2)
```

    0.99375076448
## Summary
In this chapter we introduced word embeddings, their relationship to language models, and how to obtain word embeddings by training a neural network. In information retrieval, the cosine angle between vectors can measure the relevance between a query and a document's keywords. In syntactic and semantic analysis, trained word embeddings can initialize models for better results. In document classification, word embeddings enable clustering-based grouping of synonyms in documents. We hope readers can apply word embeddings to their own research after reading this chapter.
...@@ -461,6 +445,7 @@ python calculate_dis.py data/vocabulary.txt model/pass-00029/_proj.txt
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">This tutorial</span> was created by <a xmlns:cc="http://creativecommons.org/ns#" href="http://book.paddlepaddle.org" property="cc:attributionName" rel="cc:attributionURL">PaddlePaddle</a> and is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
</div>
<!-- You can change the lines below now. -->