Commit cf9cf4ca authored by: W wwhu

resolve conflict

......@@ -25,3 +25,11 @@
files: \.md$
- id: remove-tabs
files: \.md$
- repo: local
hooks:
- id: convert-markdown-into-html
name: convert-markdown-into-html
description: Convert README.md into index.html
entry: python .pre-commit-hooks/convert_markdown_into_html.py
language: system
files: .+README\.md$
import argparse
import re
import sys
HEAD = """
<html>
<head>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
jax: ["input/TeX", "output/HTML-CSS"],
tex2jax: {
inlineMath: [ ['$','$'] ],
displayMath: [ ['$$','$$'] ],
processEscapes: true
},
"HTML-CSS": { availableFonts: ["TeX"] }
});
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js" async></script>
<script type="text/javascript" src="../.tools/theme/marked.js">
</script>
<link href="http://cdn.bootcss.com/highlight.js/9.9.0/styles/darcula.min.css" rel="stylesheet">
<script src="http://cdn.bootcss.com/highlight.js/9.9.0/highlight.min.js"></script>
<link href="http://cdn.bootcss.com/bootstrap/4.0.0-alpha.6/css/bootstrap.min.css" rel="stylesheet">
<link href="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/css/perfect-scrollbar.min.css" rel="stylesheet">
<link href="../.tools/theme/github-markdown.css" rel='stylesheet'>
</head>
<style type="text/css" >
.markdown-body {
box-sizing: border-box;
min-width: 200px;
max-width: 980px;
margin: 0 auto;
padding: 45px;
}
</style>
<body>
<div id="context" class="container-fluid markdown-body">
</div>
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
"""
TAIL = """
</div>
<!-- You can change the lines below now. -->
<script type="text/javascript">
marked.setOptions({
renderer: new marked.Renderer(),
gfm: true,
breaks: false,
smartypants: true,
highlight: function(code, lang) {
code = code.replace(/&amp;/g, "&")
code = code.replace(/&gt;/g, ">")
code = code.replace(/&lt;/g, "<")
code = code.replace(/&nbsp;/g, " ")
return hljs.highlightAuto(code, [lang]).value;
}
});
document.getElementById("context").innerHTML = marked(
document.getElementById("markdown").innerHTML)
</script>
</body>
"""
def convert_markdown_into_html(argv=None):
parser = argparse.ArgumentParser()
parser.add_argument('filenames', nargs='*', help='Filenames to fix')
args = parser.parse_args(argv)
retv = 0
for filename in args.filenames:
with open(
re.sub(r"README", "index", re.sub(r"\.md$", ".html", filename)),
"w") as output:
output.write(HEAD)
with open(filename) as input:
for line in input:
output.write(line)
output.write(TAIL)
return retv
if __name__ == '__main__':
sys.exit(convert_markdown_into_html())
......@@ -2,6 +2,8 @@ language: cpp
cache: ccache
sudo: required
dist: trusty
services:
- docker
os:
- linux
env:
......@@ -16,8 +18,13 @@ addons:
- python2.7-dev
before_install:
- pip install -U virtualenv pre-commit pip
- docker pull paddlepaddle/paddle:latest
script:
- .travis/precommit.sh
- docker run -i --rm -v "$PWD:/py_unittest" paddlepaddle/paddle:latest /bin/bash -c
"cd /py_unittest && find . -name 'tests' -type d -print0 | xargs -0 -I{} -n1 bash -c 'cd {};
python -m unittest discover -v'"
notifications:
email:
on_success: change
......
# Introduction to models
[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](https://github.com/PaddlePaddle/models)
[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](https://github.com/PaddlePaddle/models)
......@@ -6,81 +6,65 @@
PaddlePaddle provides a rich set of operators that let you build a wide variety of deep learning models in a modular way to solve different application problems. Here, we provide neural network models for common machine learning tasks for you to study and use.

## 1. Word embedding

[Word embedding](https://github.com/PaddlePaddle/book/blob/develop/04.word2vec/README.cn.md) is one of the most successful concepts and achievements of applying deep learning to natural language processing; it is a form of distributed representation. A distributed representation encodes a word as a low-dimensional real-valued vector in which every dimension captures some latent syntactic or semantic property of the text. More generally, word embeddings can also be applied to ordinary discrete features. Learning word embeddings is usually an unsupervised process, so it can exploit massive amounts of unlabeled data to capture relationships between features, and it helps with feature sparsity, missing labels and noisy data. However, in common word-embedding methods the last layer of the model often faces an extremely large classification problem, which becomes the computational bottleneck.

In the word embedding examples, we show how to use Hierarchical Sigmoid and Noise Contrastive Estimation (NCE) to speed up word-embedding training.

- 1.1 [Accelerating word embedding training with Hsigmoid](https://github.com/PaddlePaddle/models/tree/develop/word_embedding)
- 1.2 [Accelerating word embedding training with Noise Contrastive Estimation](https://github.com/PaddlePaddle/models/tree/develop/nce_cost)

## 2. Language model

The language model is an important and fundamental model in natural language processing. It is a probability distribution model that can tell which word sequence is more likely, or predict the most probable next word given several preceding words. Language models are used in many areas, such as automatic writing, QA, machine translation, spell checking, speech recognition and part-of-speech tagging.

In the language model examples, we take text generation as the task and provide RNN LMs (including LSTM and GRU) and an N-Gram LM for you to study and use. Following the "usage" section of the documentation, you can quickly adapt the training corpus and train fun models such as "automatic poetry writing" or "automatic prose writing".

- 2.1 [Text generation models based on LSTM, GRU and N-Gram](https://github.com/PaddlePaddle/models/tree/develop/language_model)

## 3. Click-through rate prediction

A click-through rate (CTR) model predicts the probability that a user clicks on an ad, producing a prediction for every ad impression; it is one of the core algorithms of advertising technology. Logistic regression learns well from large-scale sparse features and dominated the early days of CTR prediction. In recent years, DNN models, thanks to their strong learning capacity, have gradually taken over the task.

In the CTR example, we present the Wide & Deep model proposed by Google. It combines the strengths of a DNN, which is good at learning abstract features, and logistic regression, which handles large-scale sparse features well. It can be used as a relatively mature model framework and has seen some industrial adoption.

- 3.1 [Wide & Deep CTR prediction model](https://github.com/PaddlePaddle/models/tree/develop/ctr)

## 4. Text classification

Text classification is one of the most fundamental tasks in natural language processing. Deep learning methods remove the need for complex feature engineering: they take raw text directly as input and optimize classification accuracy in a data-driven way.

In the text classification example, we take sentiment classification as the task and provide a DNN-based non-sequence model and a CNN-based sequence model (for an LSTM-based model, see the [sentiment analysis](https://github.com/PaddlePaddle/book/blob/develop/06.understand_sentiment/README.cn.md) chapter of PaddleBook).

- 4.1 [Sentiment classification based on DNN / CNN](https://github.com/PaddlePaddle/models/tree/develop/text_classification)

## 5. Learning to rank

Learning to Rank (LTR) is one of the core problems in information retrieval and search engine research. A machine-learned scoring function scores the candidates to be ranked, and the ordering is determined by the scores. Deep neural networks can be used to model the scoring function, yielding various deep-learning-based LTR models.

In the LTR example, we introduce a Pairwise ranking model based on the RankLoss loss function and a Listwise ranking model based on the LambdaRank loss function (for the Pointwise strategy, see the [recommender system](https://github.com/PaddlePaddle/book/blob/develop/05.recommender_system/README.cn.md) chapter of PaddleBook).

- 5.1 [Pairwise and Listwise learning to rank](https://github.com/PaddlePaddle/models/tree/develop/ltr)

## 6. Sequence tagging

Given an input sequence, a sequence tagging model assigns a class label to every element of the sequence; it is one of the most fundamental tasks in natural language processing. With the continuing progress of deep learning, using a recurrent neural network to learn feature representations of the input sequence and a Conditional Random Field (CRF) to perform the tagging on top of those features has gradually become the standard solution to sequence tagging problems.

In the sequence tagging example, we take Named Entity Recognition (NER) as the task and show how to train an end-to-end sequence tagging model.

- 6.1 [Named entity recognition](https://github.com/PaddlePaddle/models/tree/develop/sequence_tagging_for_ner)

## 7. Sequence-to-sequence learning

Sequence-to-sequence learning maps between two (or more) sequences of variable length and has a wide range of applications, including machine translation, dialogue and question answering, ad copy generation, auto-encoding (e.g. encoding financial profiles), and measuring the semantic relevance between multiple text strings.

In the sequence-to-sequence examples, we take machine translation as the task and provide several improved models for you to study and use: a sequence-to-sequence model without attention, which is the basis of all sequence-to-sequence learning models; scheduled sampling, which mitigates the error accumulation of RNN models during generation; and neural machine translation with external memory, which strengthens the network's memory capacity to handle complex sequence-to-sequence learning tasks.

- 7.1 [Encoder-decoder model without attention](https://github.com/PaddlePaddle/models/tree/develop/nmt_without_attention)
## Copyright and License
PaddlePaddle is provided under the [Apache-2.0 license](LICENSE).
# Click-Through Rate Prediction

## Background

CTR (Click-Through Rate)\[[1](https://en.wikipedia.org/wiki/Click-through_rate)\] is the probability that a user clicks on a particular link and is commonly used to measure the effectiveness of an online advertising system.
When there are multiple ad slots, the predicted CTR is usually used as the basis for ranking.
For example, in a search engine's advertising system, when a user enters a commercially valuable query, the system roughly performs the following steps to display ads:

1. Retrieve the set of ads matching the query
2. Filter by business rules and relevance
3. Rank by the auction mechanism and CTR
4. Display the ads

As you can see, CTR plays a very important role in the final ranking; a small sketch of this idea follows.
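To make the role of CTR in ranking concrete, here is a minimal, self-contained sketch (not code from this repository) that ranks candidate ads by bid × predicted CTR; the field names and the bid term are illustrative assumptions, since the steps above only say that the auction mechanism and CTR are combined.

```python
# Hypothetical example: rank candidate ads by bid * predicted CTR, one common
# way an auction combines the advertiser's bid with the CTR estimate.
candidate_ads = [
    {"ad_id": "a", "bid": 0.50, "predicted_ctr": 0.020},
    {"ad_id": "b", "bid": 0.30, "predicted_ctr": 0.050},
    {"ad_id": "c", "bid": 0.80, "predicted_ctr": 0.010},
]
ranked = sorted(
    candidate_ads, key=lambda ad: ad["bid"] * ad["predicted_ctr"], reverse=True)
for ad in ranked:
    print("%s %.4f" % (ad["ad_id"], ad["bid"] * ad["predicted_ctr"]))
# b (0.0150) > a (0.0100) > c (0.0080): a higher CTR can outweigh a higher bid.
```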
### Stages of development

In industry, CTR models have gone through the following stages:

- Logistic Regression (LR) / GBDT + feature engineering
- LR + DNN features
- DNN + feature engineering

In the early days LR dominated, but recently DNN models, thanks to their strong learning capacity and increasingly mature performance optimizations, have gradually taken over the CTR prediction task.

### LR vs DNN

The figure below shows the structure of LR and of a 3x2 DNN model:

<p align="center">
<img src="images/lr_vs_dnn.jpg" width="620" hspace='10'/> <br/>
Figure 1. Comparison of the LR and DNN model structures
</p>

The blue arrows in the LR part map directly to the corresponding structure in the DNN, so LR and DNN share some common elements (such as weighted sums),
but at the same input dimensionality the model complexity of LR can be much lower than that of the DNN (roughly speaking, the more complex the model, the more potential it has to learn complex patterns).
For LR to match the learning capacity of a DNN, the input dimensionality, i.e. the number of features, must be increased,
which is why LR always goes hand in hand with large-scale feature engineering.

LR's advantage over DNN models is its capacity for large-scale sparse features, in terms of both memory and computation, for which industry has very mature optimizations.
DNN models, on the other hand, can learn new features on their own, which to some extent makes feature usage more efficient,
so with the same set of features a DNN is more likely to achieve better results.

The following sections show how to use PaddlePaddle to build a model that combines the advantages of both.

## Data and task abstraction

Taking `click` as the learning target, the task can be formulated in several ways:

1. Learn `click` directly as binary classification on the 0/1 label
2. Learning to rank, concretely pairwise rank (label 1 > 0) or listwise rank
3. Compute the empirical CTR of each ad, pair up ads under the same query as "higher CTR > lower CTR", and treat it as ranking or classification

We directly use the first formulation as a classification task (a small sketch of reading the labels follows this section).

We use the dataset of the Kaggle `Click-through rate prediction` competition\[[2](https://www.kaggle.com/c/avazu-ctr-prediction/data)\] to demonstrate the model.
See [data process](./dataset.md) for the details of feature processing.
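As a small illustration of formulation 1, the sketch below (assuming the Avazu-style `train.txt` CSV described in [data process](./dataset.md), with a `click` column holding the 0/1 labels) reads a few rows and extracts the binary label; it is not part of the example code.

```python
import csv

# Hypothetical quick look at the data: every row becomes a (features, label)
# pair, where the label is the 0/1 value of the `click` column.
with open("train.txt") as csvfile:  # path and format follow the Kaggle Avazu data
    reader = csv.DictReader(csvfile)
    for row_id, row in enumerate(reader):
        label = int(row["click"])  # 0 = not clicked, 1 = clicked
        features = dict((k, v) for k, v in row.items() if k != "click")
        print("row %d: label=%d, %d raw fields" % (row_id, label, len(features)))
        if row_id >= 4:
            break
```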
## Wide & Deep Learning Model

In 2016 Google proposed the Wide & Deep Learning framework, which combines the strengths of a DNN, suited to learning abstract features, and LR, suited to large-scale sparse features.

### Model overview

The Wide & Deep Learning Model\[[3](#references)\] can be used as a relatively mature model framework and has seen some industrial adoption for CTR prediction, so this document demonstrates how to use it for the CTR task.

The model structure is as follows:

<p align="center">
<img src="images/wide_deep.png" width="820" hspace='10'/> <br/>
Figure 2. Wide & Deep Model
</p>

The Wide part on the left can take in large-scale sparse features and has a certain ability to memorize specific information (such as IDs);
the Deep part on the right can learn implicit interactions between features and, given the same number of features, has better learning and generalization ability.

### Defining the model inputs

The model accepts only three inputs:

- `dnn_input`, the input of the Deep part
- `lr_input`, the input of the Wide part
- `click`, whether the ad was clicked, used as the label for the binary classification model
```python
dnn_merged_input = layer.data(
name='dnn_input',
type=paddle.data_type.sparse_binary_vector(data_meta_info['dnn_input']))
lr_merged_input = layer.data(
name='lr_input',
type=paddle.data_type.sparse_binary_vector(data_meta_info['lr_input']))
click = paddle.layer.data(name='click', type=dtype.dense_vector(1))
```
### Building the Wide part

The Wide part uses the LR model directly, except that the activation function is changed to `RELU` to speed things up:
```python
def build_lr_submodel():
fc = layer.fc(
input=lr_merged_input, size=1, name='lr', act=paddle.activation.Relu())
return fc
```
### Building the Deep part

The Deep part is a standard multi-layer feed-forward DNN:
```python
def build_dnn_submodel(dnn_layer_dims):
dnn_embedding = layer.fc(input=dnn_merged_input, size=dnn_layer_dims[0])
_input_layer = dnn_embedding
for i, dim in enumerate(dnn_layer_dims[1:]):
fc = layer.fc(
input=_input_layer,
size=dim,
act=paddle.activation.Relu(),
name='dnn-fc-%d' % i)
_input_layer = fc
return _input_layer
```
### Combining the two parts

The top-layer outputs of the two submodels are combined with a weighted sum; the output uses `sigmoid` as the activation function, producing a prediction in (0, 1)
that approximates the distribution of the binary labels in the training data and is ultimately used as the CTR estimate.
```python
# combine DNN and LR submodels
def combine_submodels(dnn, lr):
merge_layer = layer.concat(input=[dnn, lr])
fc = layer.fc(
input=merge_layer,
size=1,
name='output',
        # use the sigmoid function to approximate the CTR, which is a float value between 0 and 1.
act=paddle.activation.Sigmoid())
return fc
```
### Defining the training task
```python
dnn = build_dnn_submodel(dnn_layer_dims)
lr = build_lr_submodel()
output = combine_submodels(dnn, lr)
# ==============================================================================
# cost and train period
# ==============================================================================
classification_cost = paddle.layer.multi_binary_label_cross_entropy_cost(
input=output, label=click)
paddle.init(use_gpu=False, trainer_count=11)
params = paddle.parameters.create(classification_cost)
optimizer = paddle.optimizer.Momentum(momentum=0)
trainer = paddle.trainer.SGD(
cost=classification_cost, parameters=params, update_equation=optimizer)
dataset = AvazuDataset(train_data_path, n_records_as_test=test_set_size)
def event_handler(event):
if isinstance(event, paddle.event.EndIteration):
if event.batch_id % 100 == 0:
logging.warning("Pass %d, Samples %d, Cost %f" % (
event.pass_id, event.batch_id * batch_size, event.cost))
if event.batch_id % 1000 == 0:
result = trainer.test(
reader=paddle.batch(dataset.test, batch_size=1000),
feeding=field_index)
logging.warning("Test %d-%d, Cost %f" % (event.pass_id, event.batch_id,
result.cost))
trainer.train(
reader=paddle.batch(
paddle.reader.shuffle(dataset.train, buf_size=500),
batch_size=batch_size),
feeding=field_index,
event_handler=event_handler,
num_passes=100)
```
## Running training and testing

Training the model takes the following steps:

1. Download the training data; the data of the Kaggle CTR competition\[[2](#references)\] can be used
    1. Download train.gz from [Kaggle CTR](https://www.kaggle.com/c/avazu-ctr-prediction/data)
    2. Unpack train.gz to get train.txt
2. Run `python train.py --train_data_path train.txt` to start training

In step 2, command-line arguments can be passed to `train.py` to customize the training process; the available arguments and their usage are:
```
usage: train.py [-h] --train_data_path TRAIN_DATA_PATH
[--batch_size BATCH_SIZE] [--test_set_size TEST_SET_SIZE]
[--num_passes NUM_PASSES]
[--num_lines_to_detact NUM_LINES_TO_DETACT]
PaddlePaddle CTR example
optional arguments:
-h, --help show this help message and exit
--train_data_path TRAIN_DATA_PATH
path of training dataset
--batch_size BATCH_SIZE
size of mini-batch (default:10000)
--test_set_size TEST_SET_SIZE
size of the validation dataset(default: 10000)
--num_passes NUM_PASSES
number of passes to train
--num_lines_to_detact NUM_LINES_TO_DETACT
number of records to detect dataset's meta info
```
## References
1. <https://en.wikipedia.org/wiki/Click-through_rate>
2. <https://www.kaggle.com/c/avazu-ctr-prediction/data>
3. Cheng H T, Koc L, Harmsen J, et al. [Wide & deep learning for recommender systems](https://arxiv.org/pdf/1606.07792.pdf)[C]//Proceedings of the 1st Workshop on Deep Learning for Recommender Systems. ACM, 2016: 7-10.
import sys
import csv
import numpy as np
'''
The fields of the dataset are:
0. id: ad identifier
1. click: 0/1 for non-click/click
2. hour: format is YYMMDDHH, so 14091123 means 23:00 on Sept. 11, 2014 UTC.
3. C1 -- anonymized categorical variable
4. banner_pos
5. site_id
6. site_domain
7. site_category
8. app_id
9. app_domain
10. app_category
11. device_id
12. device_ip
13. device_model
14. device_type
15. device_conn_type
16. C14-C21 -- anonymized categorical variables
We will treat following fields as categorical features:
- C1
- banner_pos
- site_category
- app_category
- device_type
- device_conn_type
and some other features as id features:
- id
- site_id
- app_id
- device_id
The `hour` field will be treated as a continuous feature and will be transformed
to one-hot representation which has 24 bits.
'''
feature_dims = {}
categorial_features = ('C1 banner_pos site_category app_category ' +
'device_type device_conn_type').split()
id_features = 'id site_id app_id device_id _device_id_cross_site_id'.split()
def get_all_field_names(mode=0):
'''
@mode: int
0 for train, 1 for test
@return: list of str
'''
return categorial_features + ['hour'] + id_features + ['click'] \
if mode == 0 else []
class CategoryFeatureGenerator(object):
'''
    Generator for category features.
Register all records by calling `register` first, then call `gen` to generate
one-hot representation for a record.
'''
def __init__(self):
self.dic = {'unk': 0}
self.counter = 1
def register(self, key):
'''
Register record.
'''
if key not in self.dic:
self.dic[key] = self.counter
self.counter += 1
def size(self):
return len(self.dic)
def gen(self, key):
'''
Generate one-hot representation for a record.
'''
if key not in self.dic:
res = self.dic['unk']
else:
res = self.dic[key]
return [res]
def __repr__(self):
return '<CategoryFeatureGenerator %d>' % len(self.dic)
class IDfeatureGenerator(object):
def __init__(self, max_dim, cross_fea0=None, cross_fea1=None):
'''
@max_dim: int
Size of the id elements' space
'''
self.max_dim = max_dim
self.cross_fea0 = cross_fea0
self.cross_fea1 = cross_fea1
def gen(self, key):
'''
Generate one-hot representation for records
'''
return [hash(key) % self.max_dim]
def gen_cross_fea(self, fea1, fea2):
key = str(fea1) + str(fea2)
return self.gen(key)
def size(self):
return self.max_dim
class ContinuousFeatureGenerator(object):
    '''
    Discretize a continuous value into one of `n_intervals` equal-width buckets.
    Call `register` on all values first to learn the value range, then call
    `gen` to map a value to its bucket id.
    '''
    def __init__(self, n_intervals):
        self.min = sys.maxint
        self.max = -sys.maxint
        self.n_intervals = n_intervals
    def register(self, val):
        self.min = min(self.min, val)
        self.max = max(self.max, val)
    def gen(self, val):
        self.len_part = (self.max - self.min) / self.n_intervals
        return (val - self.min) / self.len_part
# init all feature generators
fields = {}
for key in categorial_features:
fields[key] = CategoryFeatureGenerator()
for key in id_features:
# for cross features
if 'cross' in key:
feas = key[1:].split('_cross_')
fields[key] = IDfeatureGenerator(10000000, *feas)
# for normal ID features
else:
fields[key] = IDfeatureGenerator(10000)
# used as feed_dict in PaddlePaddle
field_index = dict((key, id)
for id, key in enumerate(['dnn_input', 'lr_input', 'click']))
def detect_dataset(path, topn, id_fea_space=10000):
'''
Parse the first `topn` records to collect meta information of this dataset.
NOTE the records should be randomly shuffled first.
'''
# create categorical statis objects.
with open(path, 'rb') as csvfile:
reader = csv.DictReader(csvfile)
for row_id, row in enumerate(reader):
if row_id > topn:
break
for key in categorial_features:
fields[key].register(row[key])
for key, item in fields.items():
feature_dims[key] = item.size()
#for key in id_features:
#feature_dims[key] = id_fea_space
feature_dims['hour'] = 24
feature_dims['click'] = 1
feature_dims['dnn_input'] = np.sum(
feature_dims[key] for key in categorial_features + ['hour']) + 1
feature_dims['lr_input'] = np.sum(feature_dims[key]
for key in id_features) + 1
return feature_dims
def concat_sparse_vectors(inputs, dims):
'''
    Concatenate multiple sparse vectors into one.
    @inputs: list
        list of sparse vectors
    @dims: list of int
        dimension of each sparse vector
'''
res = []
assert len(inputs) == len(dims)
start = 0
for no, vec in enumerate(inputs):
for v in vec:
res.append(v + start)
start += dims[no]
return res
class AvazuDataset(object):
'''
Load AVAZU dataset as train set.
'''
TRAIN_MODE = 0
TEST_MODE = 1
def __init__(self, train_path, n_records_as_test=-1):
self.train_path = train_path
self.n_records_as_test = n_records_as_test
        # task mode: 0 for train, 1 for test
self.mode = 0
def train(self):
self.mode = self.TRAIN_MODE
return self._parse(self.train_path, skip_n_lines=self.n_records_as_test)
def test(self):
self.mode = self.TEST_MODE
return self._parse(self.train_path, top_n_lines=self.n_records_as_test)
def _parse(self, path, skip_n_lines=-1, top_n_lines=-1):
with open(path, 'rb') as csvfile:
reader = csv.DictReader(csvfile)
categorial_dims = [
feature_dims[key] for key in categorial_features + ['hour']
]
id_dims = [feature_dims[key] for key in id_features]
for row_id, row in enumerate(reader):
if skip_n_lines > 0 and row_id < skip_n_lines:
continue
if top_n_lines > 0 and row_id > top_n_lines:
break
record = []
for key in categorial_features:
record.append(fields[key].gen(row[key]))
record.append([int(row['hour'][-2:])])
dense_input = concat_sparse_vectors(record, categorial_dims)
record = []
for key in id_features:
if 'cross' not in key:
record.append(fields[key].gen(row[key]))
else:
fea0 = fields[key].cross_fea0
fea1 = fields[key].cross_fea1
record.append(
fields[key].gen_cross_fea(row[fea0], row[fea1]))
sparse_input = concat_sparse_vectors(record, id_dims)
record = [dense_input, sparse_input]
record.append(list((int(row['click']), )))
yield record
if __name__ == '__main__':
path = 'train.txt'
print detect_dataset(path, 400000)
filereader = AvazuDataset(path)
for no, rcd in enumerate(filereader.train()):
print no, rcd
if no > 1000: break
# Data and preprocessing

## Dataset

The dataset is stored in `csv` format; its fields are:
- `id` : ad identifier
- `click` : 0/1 for non-click/click
- `hour` : format is YYMMDDHH, so 14091123 means 23:00 on Sept. 11, 2014 UTC.
- `C1` : anonymized categorical variable
- `banner_pos`
- `site_id`
- `site_domain`
- `site_category`
- `app_id`
- `app_domain`
- `app_category`
- `device_id`
- `device_ip`
- `device_model`
- `device_type`
- `device_conn_type`
- `C14-C21` : anonymized categorical variables
## Feature extraction

Below we briefly demonstrate several ways of extracting features.

The features in the raw data fall into the following categories:

1. ID features (sparse, with very many distinct values)
    - `id`
    - `site_id`
    - `app_id`
    - `device_id`
2. Categorical features (sparse, but with a limited number of values)
    - `C1`
    - `site_category`
    - `device_type`
    - `C14-C21`
3. Numerical features converted into categorical features
    - hour (can be used as a number, or bucketed by hour into a categorical feature; see the sketch below)
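As a small illustration of the third category, the snippet below turns the `hour` field (format YYMMDDHH) into a 24-way categorical index, matching how the example code later uses `int(row['hour'][-2:])` and reserves 24 dimensions for `hour`:

```python
def hour_to_category(hour_field):
    """Map an Avazu `hour` value such as 14091123 (YYMMDDHH) to an index in [0, 24)."""
    return int(str(hour_field)[-2:])

print(hour_to_category(14091123))  # 23, i.e. the last of the 24 one-hot positions for `hour`
```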
### Categorical features

Categorical features can be used in two ways:

1. As a one-hot representation
2. Like word vectors, with an Embedding that maps each category to a vector
### ID features

ID features are sparse and have very many distinct values, so a direct one-hot representation would have far too many dimensions.
They are usually handled as follows:

1. Fix a maximum dimensionality N for the representation
2. newid = id % N
3. Use newid as a categorical feature

Although this introduces some probability of collisions, it can handle an arbitrary number of ID features while preserving much of their usefulness\[[2](#references)\]; a small sketch of this hashing trick follows.
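The following is a minimal, standalone sketch of the modulo/hashing trick described above (plain Python, independent of the example code, which applies the same idea in `IDfeatureGenerator`):

```python
MAX_DIM = 10000  # the fixed dimensionality N

def id_to_index(raw_id, max_dim=MAX_DIM):
    """Map an arbitrary ID string to an index in [0, max_dim) via hashing."""
    # Note: Python's built-in string hash is not stable across interpreter runs;
    # a stable hash (e.g. from hashlib) would be preferable in practice.
    return hash(raw_id) % max_dim

print(id_to_index("a99f214a"))  # some index in [0, 10000); collisions are possible
```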
### Numerical features

Numerical features are usually handled in one of two ways:

- Normalize them and feed them to the model directly
- Discretize them into intervals and use them as categorical features, which yields a sparse representation and smooths out small differences (see the sketch below)
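A minimal equal-width binning sketch for the second option, assuming the value range is known in advance (the example code's `ContinuousFeatureGenerator` instead learns the range from the data):

```python
def bucketize(val, min_val, max_val, n_intervals):
    """Map `val` to an equal-width bucket id in [0, n_intervals)."""
    width = (max_val - min_val) / float(n_intervals)
    idx = int((val - min_val) / width)
    return min(max(idx, 0), n_intervals - 1)  # clamp values outside the range

print(bucketize(13.0, 0.0, 24.0, 6))  # an hour-like value of 13 falls into bucket 3
```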
## Feature processing

### Categorical features

A categorical feature takes a limited number of distinct values; in the model, each value is usually mapped to a continuous vector with an Embedding.
When fed into the model, such features are usually one-hot encoded; the relevant code is:
```python
class CategoryFeatureGenerator(object):
'''
    Generator for category features.
    Register all records by calling `register` first, then call `gen` to generate
one-hot representation for a record.
'''
def __init__(self):
self.dic = {'unk': 0}
self.counter = 1
def register(self, key):
'''
Register record.
'''
if key not in self.dic:
self.dic[key] = self.counter
self.counter += 1
def size(self):
return len(self.dic)
def gen(self, key):
'''
Generate one-hot representation for a record.
'''
if key not in self.dic:
res = self.dic['unk']
else:
res = self.dic[key]
return [res]
def __repr__(self):
return '<CategoryFeatureGenerator %d>' % len(self.dic)
```
`CategoryFeatureGenerator` must first scan the dataset to collect the set of values of each category before it can generate features.
Our experimental dataset\[[3](https://www.kaggle.com/c/avazu-ctr-prediction/data)\] has already been shuffled, so scanning a fixed number of leading records approximates the full set of category values (it is equivalent to random sampling);
low-frequency values that are not sampled can be represented by a special UNK value.
```python
fields = {}
for key in categorial_features:
fields[key] = CategoryFeatureGenerator()
def detect_dataset(path, topn, id_fea_space=10000):
'''
Parse the first `topn` records to collect meta information of this dataset.
NOTE the records should be randomly shuffled first.
'''
# create categorical statis objects.
with open(path, 'rb') as csvfile:
reader = csv.DictReader(csvfile)
for row_id, row in enumerate(reader):
if row_id > topn:
break
for key in categorial_features:
fields[key].register(row[key])
```
Once `CategoryFeatureGenerator` has registered the category values present in the dataset, it can generate the feature representation of a record:
```python
record = []
for key in categorial_features:
record.append(fields[key].gen(row[key]))
```
In this task, the categorical features are fed into the DNN.

### ID features

ID features are sparse and their value space is very large, so they are usually reduced to a bounded space with a modulo operation
and then used like categorical features. Here, the ID features are fed into the LR model.
```python
class IDfeatureGenerator(object):
def __init__(self, max_dim):
'''
@max_dim: int
Size of the id elements' space
'''
self.max_dim = max_dim
def gen(self, key):
'''
Generate one-hot representation for records
'''
return [hash(key) % self.max_dim]
def size(self):
return self.max_dim
```
`IDfeatureGenerator` needs no prior initialization and can generate features directly, for example:
```python
record = []
for key in id_features:
if 'cross' not in key:
record.append(fields[key].gen(row[key]))
```
### Cross features

As the `wide` part of the Wide & Deep model, the LR model can take very wide input (a feature space of very large dimensionality).
To take full advantage of this, we demonstrate crossing features into combinations of even higher dimensionality and feeding them to the model.
Here, we again use a modulo operation to bound the size of the resulting feature space; concretely, a `gen_cross_fea` method is added to `IDfeatureGenerator`:
```python
def gen_cross_fea(self, fea1, fea2):
key = str(fea1) + str(fea2)
return self.gen(key)
```
For example, if we believe that `device_id` and `site_id` are correlated in the raw data (say, a device tends to visit particular sites),
we can capture this kind of information by crossing the two features:
```python
fea0 = fields[key].cross_fea0
fea1 = fields[key].cross_fea1
record.append(
fields[key].gen_cross_fea(row[fea0], row[fea1]))
```
### Feature dimensions

#### Deep submodel (DNN) features

| feature | dimension |
|------------------|-----------|
| app_category | 21 |
| site_category | 22 |
| device_conn_type | 5 |
| hour | 24 |
| banner_pos | 7 |
| **Total** | 79 |
#### Wide submodel (LR) features

| Feature | Dimension |
|---------------------|-----------|
| id | 10000 |
| site_id | 10000 |
| app_id | 10000 |
| device_id | 10000 |
| device_id X site_id | 1000000 |
| **Total** | 1,040,000 |
## Feeding the data into PaddlePaddle

Both the Deep and the Wide parts take input in `sparse_binary_vector` format \[[1](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/api/v1/data_provider/pydataprovider2_en.rst)\], so the individual features have to be concatenated before being fed in. The model ultimately accepts only three inputs:

1. `dnn input`, the input of the DNN part
2. `lr input`, the input of the LR part
3. `click`, the label

The features are concatenated as follows (a usage example follows the code):
```python
def concat_sparse_vectors(inputs, dims):
'''
    Concatenate sparse vectors into one.
    @inputs: list
        list of sparse vectors
    @dims: list of int
        dimension of each sparse vector
'''
res = []
assert len(inputs) == len(dims)
start = 0
for no, vec in enumerate(inputs):
for v in vec:
res.append(v + start)
start += dims[no]
return res
```
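A quick usage example of `concat_sparse_vectors` with made-up indices: two sparse vectors storing the positions of their active entries are merged into a single index space by shifting the second vector by the dimensionality of the first.

```python
# first vector: active positions [1, 3] in a space of size 5
# second vector: active positions [0, 2] in a space of size 4
merged = concat_sparse_vectors([[1, 3], [0, 2]], [5, 4])
print(merged)  # [1, 3, 5, 7]: indices 0-4 come from the first vector, 5-8 from the second
```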
The code that produces the final features is:
```python
# dimensions of the features
categorial_dims = [
feature_dims[key] for key in categorial_features + ['hour']
]
id_dims = [feature_dims[key] for key in id_features]
dense_input = concat_sparse_vectors(record, categorial_dims)
sparse_input = concat_sparse_vectors(record, id_dims)
record = [dense_input, sparse_input]
record.append(list((int(row['click']), )))
yield record
```
## References
1. <https://github.com/PaddlePaddle/Paddle/blob/develop/doc/api/v1/data_provider/pydataprovider2_en.rst>
2. Mikolov T, Deoras A, Povey D, et al. [Strategies for training large scale neural network language models](https://www.researchgate.net/profile/Lukas_Burget/publication/241637478_Strategies_for_training_large_scale_neural_network_language_models/links/542c14960cf27e39fa922ed3.pdf)[C]//Automatic Speech Recognition and Understanding (ASRU), 2011 IEEE Workshop on. IEEE, 2011: 196-201.
3. <https://www.kaggle.com/c/avazu-ctr-prediction/data>
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import argparse
import logging
import paddle.v2 as paddle
from paddle.v2 import layer
from paddle.v2 import data_type as dtype
from data_provider import field_index, detect_dataset, AvazuDataset
parser = argparse.ArgumentParser(description="PaddlePaddle CTR example")
parser.add_argument(
'--train_data_path',
type=str,
required=True,
help="path of training dataset")
parser.add_argument(
'--batch_size',
type=int,
default=10000,
help="size of mini-batch (default:10000)")
parser.add_argument(
'--test_set_size',
type=int,
default=10000,
help="size of the validation dataset(default: 10000)")
parser.add_argument(
'--num_passes', type=int, default=10, help="number of passes to train")
parser.add_argument(
'--num_lines_to_detact',
type=int,
default=500000,
help="number of records to detect dataset's meta info")
args = parser.parse_args()
dnn_layer_dims = [128, 64, 32, 1]
data_meta_info = detect_dataset(args.train_data_path, args.num_lines_to_detact)
logging.warning('detect categorical fields in dataset %s' %
args.train_data_path)
for key, item in data_meta_info.items():
logging.warning(' - {}\t{}'.format(key, item))
paddle.init(use_gpu=False, trainer_count=1)
# ==============================================================================
# input layers
# ==============================================================================
dnn_merged_input = layer.data(
name='dnn_input',
type=paddle.data_type.sparse_binary_vector(data_meta_info['dnn_input']))
lr_merged_input = layer.data(
name='lr_input',
type=paddle.data_type.sparse_binary_vector(data_meta_info['lr_input']))
click = paddle.layer.data(name='click', type=dtype.dense_vector(1))
# ==============================================================================
# network structure
# ==============================================================================
def build_dnn_submodel(dnn_layer_dims):
dnn_embedding = layer.fc(input=dnn_merged_input, size=dnn_layer_dims[0])
_input_layer = dnn_embedding
for i, dim in enumerate(dnn_layer_dims[1:]):
fc = layer.fc(
input=_input_layer,
size=dim,
act=paddle.activation.Relu(),
name='dnn-fc-%d' % i)
_input_layer = fc
return _input_layer
# config LR submodel
def build_lr_submodel():
fc = layer.fc(
input=lr_merged_input, size=1, name='lr', act=paddle.activation.Relu())
return fc
# combine DNN and LR submodels
def combine_submodels(dnn, lr):
merge_layer = layer.concat(input=[dnn, lr])
fc = layer.fc(
input=merge_layer,
size=1,
name='output',
        # use the sigmoid function to approximate the CTR, a float value between 0 and 1.
act=paddle.activation.Sigmoid())
return fc
dnn = build_dnn_submodel(dnn_layer_dims)
lr = build_lr_submodel()
output = combine_submodels(dnn, lr)
# ==============================================================================
# cost and train period
# ==============================================================================
classification_cost = paddle.layer.multi_binary_label_cross_entropy_cost(
input=output, label=click)
params = paddle.parameters.create(classification_cost)
optimizer = paddle.optimizer.Momentum(momentum=0.01)
trainer = paddle.trainer.SGD(
cost=classification_cost, parameters=params, update_equation=optimizer)
dataset = AvazuDataset(
args.train_data_path, n_records_as_test=args.test_set_size)
def event_handler(event):
if isinstance(event, paddle.event.EndIteration):
num_samples = event.batch_id * args.batch_size
if event.batch_id % 100 == 0:
logging.warning("Pass %d, Samples %d, Cost %f" %
(event.pass_id, num_samples, event.cost))
if event.batch_id % 1000 == 0:
result = trainer.test(
reader=paddle.batch(dataset.test, batch_size=args.batch_size),
feeding=field_index)
logging.warning("Test %d-%d, Cost %f" %
(event.pass_id, event.batch_id, result.cost))
trainer.train(
reader=paddle.batch(
paddle.reader.shuffle(dataset.train, buf_size=500),
batch_size=args.batch_size),
feeding=field_index,
event_handler=event_handler,
num_passes=args.num_passes)
# Deep Speech 2 on PaddlePaddle
## Installation
Please replace `$PADDLE_INSTALL_DIR` with your own paddle installation directory.
```
pip install -r requirements.txt
export LD_LIBRARY_PATH=$PADDLE_INSTALL_DIR/Paddle/third_party/install/warpctc/lib:$LD_LIBRARY_PATH
```
For some machines, we also need to install libsndfile1. Details to be added.
## Usage
### Preparing Data
```
cd datasets
sh run_all.sh
cd ..
```
`sh run_all.sh` prepares all ASR datasets (currently, only LibriSpeech is available). After it finishes, several summarizing manifest files in JSON format are created.
A manifest file summarizes a speech dataset: each line is a JSON object containing the metadata (audio filepath, transcript text, audio duration) of one audio file in the dataset. The manifest file serves as the interface that tells the system where to read the speech samples and what they contain.
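For illustration only, here is a simplified stand-in for the project's manifest reading (the real code lives in `data_utils`); it assumes each line is a JSON object with `audio_filepath`, `text` and `duration` keys, which matches the fields used by the data generator:

```python
import json

def load_manifest(manifest_path, max_duration=float('inf')):
    """Read a JSON-lines manifest, keeping entries no longer than max_duration seconds."""
    entries = []
    with open(manifest_path) as f:
        for line in f:
            entry = json.loads(line)  # {"audio_filepath": ..., "text": ..., "duration": ...}
            if entry["duration"] <= max_duration:
                entries.append(entry)
    return entries

# e.g. entries = load_manifest("datasets/manifest.train", max_duration=20.0)
```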
More help for arguments:
```
python datasets/librispeech/librispeech.py --help
```
### Preparing for Training
```
python compute_mean_std.py
```
`python compute_mean_std.py` computes the mean and standard deviation of the audio features and saves them to a file with the default name `./mean_std.npz`. This file will be used in both training and inference.
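To sanity-check the output, the snippet below simply lists the arrays stored in the `.npz` archive without assuming their key names (those are internal to `FeatureNormalizer`):

```python
import numpy as np

stats = np.load("mean_std.npz")
print(stats.files)  # names of the arrays written by FeatureNormalizer
for name in stats.files:
    print("%s %s" % (name, stats[name].shape))
```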
More help for arguments:
```
python compute_mean_std.py --help
```
### Training
For GPU Training:
```
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --trainer_count 4
```
For CPU Training:
```
python train.py --trainer_count 8 --use_gpu False
```
More help for arguments:
```
python train.py --help
```
### Inferencing
```
CUDA_VISIBLE_DEVICES=0 python infer.py
```
More help for arguments:
```
python infer.py --help
```
"""Compute mean and std for feature normalizer, and save to file."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
from data_utils.normalizer import FeatureNormalizer
from data_utils.augmentor.augmentation import AugmentationPipeline
from data_utils.featurizer.audio_featurizer import AudioFeaturizer
parser = argparse.ArgumentParser(
description='Computing mean and stddev for feature normalizer.')
parser.add_argument(
"--manifest_path",
default='datasets/manifest.train',
type=str,
help="Manifest path for computing normalizer's mean and stddev."
"(default: %(default)s)")
parser.add_argument(
"--num_samples",
default=2000,
type=int,
help="Number of samples for computing mean and stddev. "
"(default: %(default)s)")
parser.add_argument(
"--augmentation_config",
default='{}',
type=str,
help="Augmentation configuration in json-format. "
"(default: %(default)s)")
parser.add_argument(
"--output_file",
default='mean_std.npz',
type=str,
help="Filepath to write mean and std to (.npz)."
"(default: %(default)s)")
args = parser.parse_args()
def main():
augmentation_pipeline = AugmentationPipeline(args.augmentation_config)
audio_featurizer = AudioFeaturizer()
def augment_and_featurize(audio_segment):
augmentation_pipeline.transform_audio(audio_segment)
return audio_featurizer.featurize(audio_segment)
normalizer = FeatureNormalizer(
mean_std_filepath=None,
manifest_path=args.manifest_path,
featurize_func=augment_and_featurize,
num_samples=args.num_samples)
normalizer.write_to_file(args.output_file)
if __name__ == '__main__':
main()
"""Contains the audio segment class."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import io
import soundfile
class AudioSegment(object):
"""Monaural audio segment abstraction.
:param samples: Audio samples [num_samples x num_channels].
:type samples: ndarray.float32
:param sample_rate: Audio sample rate.
:type sample_rate: int
:raises TypeError: If the sample data type is not float or int.
"""
def __init__(self, samples, sample_rate):
"""Create audio segment from samples.
        Samples are converted to float32 internally, with integer types scaled to [-1, 1].
"""
self._samples = self._convert_samples_to_float32(samples)
self._sample_rate = sample_rate
if self._samples.ndim >= 2:
self._samples = np.mean(self._samples, 1)
def __eq__(self, other):
"""Return whether two objects are equal."""
if type(other) is not type(self):
return False
if self._sample_rate != other._sample_rate:
return False
if self._samples.shape != other._samples.shape:
return False
if np.any(self.samples != other._samples):
return False
return True
def __ne__(self, other):
"""Return whether two objects are unequal."""
return not self.__eq__(other)
def __str__(self):
"""Return human-readable representation of segment."""
return ("%s: num_samples=%d, sample_rate=%d, duration=%.2fsec, "
"rms=%.2fdB" % (type(self), self.num_samples, self.sample_rate,
self.duration, self.rms_db))
@classmethod
def from_file(cls, file):
"""Create audio segment from audio file.
:param filepath: Filepath or file object to audio file.
:type filepath: basestring|file
:return: Audio segment instance.
:rtype: AudioSegment
"""
samples, sample_rate = soundfile.read(file, dtype='float32')
return cls(samples, sample_rate)
@classmethod
def from_bytes(cls, bytes):
"""Create audio segment from a byte string containing audio samples.
:param bytes: Byte string containing audio samples.
:type bytes: str
:return: Audio segment instance.
:rtype: AudioSegment
"""
samples, sample_rate = soundfile.read(
io.BytesIO(bytes), dtype='float32')
return cls(samples, sample_rate)
def to_wav_file(self, filepath, dtype='float32'):
"""Save audio segment to disk as wav file.
:param filepath: WAV filepath or file object to save the
audio segment.
:type filepath: basestring|file
:param dtype: Subtype for audio file. Options: 'int16', 'int32',
'float32', 'float64'. Default is 'float32'.
:type dtype: str
:raises TypeError: If dtype is not supported.
"""
samples = self._convert_samples_from_float32(self._samples, dtype)
subtype_map = {
'int16': 'PCM_16',
'int32': 'PCM_32',
'float32': 'FLOAT',
'float64': 'DOUBLE'
}
soundfile.write(
filepath,
samples,
self._sample_rate,
format='WAV',
subtype=subtype_map[dtype])
def to_bytes(self, dtype='float32'):
"""Create a byte string containing the audio content.
:param dtype: Data type for export samples. Options: 'int16', 'int32',
'float32', 'float64'. Default is 'float32'.
:type dtype: str
:return: Byte string containing audio content.
:rtype: str
"""
samples = self._convert_samples_from_float32(self._samples, dtype)
return samples.tostring()
def apply_gain(self, gain):
"""Apply gain in decibels to samples.
Note that this is an in-place transformation.
:param gain: Gain in decibels to apply to samples.
:type gain: float
"""
self._samples *= 10.**(gain / 20.)
def change_speed(self, speed_rate):
"""Change the audio speed by linear interpolation.
Note that this is an in-place transformation.
:param speed_rate: Rate of speed change:
speed_rate > 1.0, speed up the audio;
speed_rate = 1.0, unchanged;
speed_rate < 1.0, slow down the audio;
speed_rate <= 0.0, not allowed, raise ValueError.
:type speed_rate: float
:raises ValueError: If speed_rate <= 0.0.
"""
if speed_rate <= 0:
raise ValueError("speed_rate should be greater than zero.")
old_length = self._samples.shape[0]
new_length = int(old_length / speed_rate)
old_indices = np.arange(old_length)
new_indices = np.linspace(start=0, stop=old_length, num=new_length)
self._samples = np.interp(new_indices, old_indices, self._samples)
def normalize(self, target_sample_rate):
raise NotImplementedError()
def resample(self, target_sample_rate):
raise NotImplementedError()
def pad_silence(self, duration, sides='both'):
raise NotImplementedError()
def subsegment(self, start_sec=None, end_sec=None):
raise NotImplementedError()
def convolve(self, filter, allow_resample=False):
raise NotImplementedError()
def convolve_and_normalize(self, filter, allow_resample=False):
raise NotImplementedError()
@property
def samples(self):
"""Return audio samples.
:return: Audio samples.
:rtype: ndarray
"""
return self._samples.copy()
@property
def sample_rate(self):
"""Return audio sample rate.
:return: Audio sample rate.
:rtype: int
"""
return self._sample_rate
@property
def num_samples(self):
"""Return number of samples.
:return: Number of samples.
:rtype: int
"""
        return self._samples.shape[0]
@property
def duration(self):
"""Return audio duration.
:return: Audio duration in seconds.
:rtype: float
"""
return self._samples.shape[0] / float(self._sample_rate)
@property
def rms_db(self):
"""Return root mean square energy of the audio in decibels.
:return: Root mean square energy in decibels.
:rtype: float
"""
# square root => multiply by 10 instead of 20 for dBs
mean_square = np.mean(self._samples**2)
return 10 * np.log10(mean_square)
def _convert_samples_to_float32(self, samples):
"""Convert sample type to float32.
        Audio sample types are usually integer or floating-point.
        Integers will be scaled to [-1, 1] in float32.
"""
float32_samples = samples.astype('float32')
if samples.dtype in np.sctypes['int']:
bits = np.iinfo(samples.dtype).bits
float32_samples *= (1. / 2**(bits - 1))
elif samples.dtype in np.sctypes['float']:
pass
else:
raise TypeError("Unsupported sample type: %s." % samples.dtype)
return float32_samples
def _convert_samples_from_float32(self, samples, dtype):
"""Convert sample type from float32 to dtype.
        Audio sample types are usually integer or floating-point. For integer
        types, float32 samples will be rescaled from [-1, 1] to the maximum range
        supported by the integer type.
        This is for writing an audio file.
"""
dtype = np.dtype(dtype)
output_samples = samples.copy()
if dtype in np.sctypes['int']:
bits = np.iinfo(dtype).bits
output_samples *= (2**(bits - 1) / 1.)
min_val = np.iinfo(dtype).min
max_val = np.iinfo(dtype).max
output_samples[output_samples > max_val] = max_val
output_samples[output_samples < min_val] = min_val
elif samples.dtype in np.sctypes['float']:
min_val = np.finfo(dtype).min
max_val = np.finfo(dtype).max
output_samples[output_samples > max_val] = max_val
output_samples[output_samples < min_val] = min_val
else:
raise TypeError("Unsupported sample type: %s." % samples.dtype)
return output_samples.astype(dtype)
"""Contains the data augmentation pipeline."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import random
from data_utils.augmentor.volume_perturb import VolumePerturbAugmentor
class AugmentationPipeline(object):
"""Build a pre-processing pipeline with various augmentation models.Such a
data augmentation pipeline is oftern leveraged to augment the training
samples to make the model invariant to certain types of perturbations in the
real world, improving model's generalization ability.
The pipeline is built according the the augmentation configuration in json
string, e.g.
.. code-block::
'[{"type": "volume",
"params": {"min_gain_dBFS": -15,
"max_gain_dBFS": 15},
"prob": 0.5},
{"type": "speed",
"params": {"min_speed_rate": 0.8,
"max_speed_rate": 1.2},
"prob": 0.5}
]'
    This augmentation configuration inserts two augmentation models
    into the pipeline, one being a VolumePerturbAugmentor and the other a
    SpeedPerturbAugmentor. "prob" indicates the probability that the
    corresponding augmentor takes effect.
:param augmentation_config: Augmentation configuration in json string.
:type augmentation_config: str
:param random_seed: Random seed.
:type random_seed: int
    :raises ValueError: If the augmentation JSON config is in an incorrect format.
"""
def __init__(self, augmentation_config, random_seed=0):
self._rng = random.Random(random_seed)
self._augmentors, self._rates = self._parse_pipeline_from(
augmentation_config)
def transform_audio(self, audio_segment):
"""Run the pre-processing pipeline for data augmentation.
Note that this is an in-place transformation.
:param audio_segment: Audio segment to process.
:type audio_segment: AudioSegmenet|SpeechSegment
"""
for augmentor, rate in zip(self._augmentors, self._rates):
if self._rng.uniform(0., 1.) <= rate:
augmentor.transform_audio(audio_segment)
def _parse_pipeline_from(self, config_json):
"""Parse the config json to build a augmentation pipelien."""
try:
configs = json.loads(config_json)
augmentors = [
self._get_augmentor(config["type"], config["params"])
for config in configs
]
rates = [config["prob"] for config in configs]
except Exception as e:
raise ValueError("Failed to parse the augmentation config json: "
"%s" % str(e))
return augmentors, rates
def _get_augmentor(self, augmentor_type, params):
"""Return an augmentation model by the type name, and pass in params."""
if augmentor_type == "volume":
return VolumePerturbAugmentor(self._rng, **params)
else:
raise ValueError("Unknown augmentor type [%s]." % augmentor_type)
"""Contains the abstract base class for augmentation models."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from abc import ABCMeta, abstractmethod
class AugmentorBase(object):
"""Abstract base class for augmentation model (augmentor) class.
All augmentor classes should inherit from this class, and implement the
following abstract methods.
"""
__metaclass__ = ABCMeta
@abstractmethod
def __init__(self):
pass
@abstractmethod
def transform_audio(self, audio_segment):
"""Adds various effects to the input audio segment. Such effects
will augment the training data to make the model invariant to certain
types of perturbations in the real world, improving model's
generalization ability.
Note that this is an in-place transformation.
:param audio_segment: Audio segment to add effects to.
:type audio_segment: AudioSegmenet|SpeechSegment
"""
pass
"""Contains the volume perturb augmentation model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from data_utils.augmentor.base import AugmentorBase
class VolumePerturbAugmentor(AugmentorBase):
"""Augmentation model for adding random volume perturbation.
This is used for multi-loudness training of PCEN. See
https://arxiv.org/pdf/1607.05666v1.pdf
for more details.
:param rng: Random generator object.
:type rng: random.Random
:param min_gain_dBFS: Minimal gain in dBFS.
:type min_gain_dBFS: float
:param max_gain_dBFS: Maximal gain in dBFS.
:type max_gain_dBFS: float
"""
def __init__(self, rng, min_gain_dBFS, max_gain_dBFS):
self._min_gain_dBFS = min_gain_dBFS
self._max_gain_dBFS = max_gain_dBFS
self._rng = rng
def transform_audio(self, audio_segment):
"""Change audio loadness.
Note that this is an in-place transformation.
:param audio_segment: Audio segment to add effects to.
:type audio_segment: AudioSegmenet|SpeechSegment
"""
        gain = self._rng.uniform(self._min_gain_dBFS, self._max_gain_dBFS)
audio_segment.apply_gain(gain)
"""Contains data generator for orgnaizing various audio data preprocessing
pipeline and offering data reader interface of PaddlePaddle requirements.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import random
import numpy as np
import paddle.v2 as paddle
from data_utils import utils
from data_utils.augmentor.augmentation import AugmentationPipeline
from data_utils.featurizer.speech_featurizer import SpeechFeaturizer
from data_utils.speech import SpeechSegment
from data_utils.normalizer import FeatureNormalizer
class DataGenerator(object):
"""
    DataGenerator provides a basic audio data preprocessing pipeline and offers
    data reader interfaces that meet PaddlePaddle's requirements.
:param vocab_filepath: Vocabulary filepath for indexing tokenized
transcripts.
:type vocab_filepath: basestring
:param mean_std_filepath: File containing the pre-computed mean and stddev.
:type mean_std_filepath: None|basestring
:param augmentation_config: Augmentation configuration in json string.
Details see AugmentationPipeline.__doc__.
:type augmentation_config: str
:param max_duration: Audio with duration (in seconds) greater than
this will be discarded.
:type max_duration: float
:param min_duration: Audio with duration (in seconds) smaller than
this will be discarded.
:type min_duration: float
:param stride_ms: Striding size (in milliseconds) for generating frames.
:type stride_ms: float
:param window_ms: Window size (in milliseconds) for generating frames.
:type window_ms: float
:param max_freq: Used when specgram_type is 'linear', only FFT bins
corresponding to frequencies between [0, max_freq] are
returned.
    :type max_freq: None|float
:param specgram_type: Specgram feature type. Options: 'linear'.
:type specgram_type: str
:param random_seed: Random seed.
:type random_seed: int
"""
def __init__(self,
vocab_filepath,
mean_std_filepath,
augmentation_config='{}',
max_duration=float('inf'),
min_duration=0.0,
stride_ms=10.0,
window_ms=20.0,
max_freq=None,
specgram_type='linear',
random_seed=0):
self._max_duration = max_duration
self._min_duration = min_duration
self._normalizer = FeatureNormalizer(mean_std_filepath)
self._augmentation_pipeline = AugmentationPipeline(
augmentation_config=augmentation_config, random_seed=random_seed)
self._speech_featurizer = SpeechFeaturizer(
vocab_filepath=vocab_filepath,
specgram_type=specgram_type,
stride_ms=stride_ms,
window_ms=window_ms,
max_freq=max_freq)
self._rng = random.Random(random_seed)
self._epoch = 0
def batch_reader_creator(self,
manifest_path,
batch_size,
min_batch_size=1,
padding_to=-1,
flatten=False,
sortagrad=False,
batch_shuffle=False):
"""
Batch data reader creator for audio data. Return a callable generator
function to produce batches of data.
Audio features within one batch will be padded with zeros to have the
same shape, or a user-defined shape.
:param manifest_path: Filepath of manifest for audio files.
:type manifest_path: basestring
:param batch_size: Number of instances in a batch.
:type batch_size: int
:param min_batch_size: Any batch with batch size smaller than this will
be discarded. (To be deprecated in the future.)
:type min_batch_size: int
:param padding_to: If set -1, the maximun shape in the batch
will be used as the target shape for padding.
Otherwise, `padding_to` will be the target shape.
:type padding_to: int
:param flatten: If set True, audio features will be flatten to 1darray.
:type flatten: bool
:param sortagrad: If set True, sort the instances by audio duration
                          in the first epoch to speed up training.
:type sortagrad: bool
:param batch_shuffle: If set True, instances are batch-wise shuffled.
For more details, please see
``_batch_shuffle.__doc__``.
If sortagrad is True, batch_shuffle is disabled
for the first epoch.
:type batch_shuffle: bool
:return: Batch reader function, producing batches of data when called.
:rtype: callable
"""
def batch_reader():
# read manifest
manifest = utils.read_manifest(
manifest_path=manifest_path,
max_duration=self._max_duration,
min_duration=self._min_duration)
# sort (by duration) or batch-wise shuffle the manifest
if self._epoch == 0 and sortagrad:
manifest.sort(key=lambda x: x["duration"])
elif batch_shuffle:
manifest = self._batch_shuffle(manifest, batch_size)
# prepare batches
instance_reader = self._instance_reader_creator(manifest)
batch = []
for instance in instance_reader():
batch.append(instance)
if len(batch) == batch_size:
yield self._padding_batch(batch, padding_to, flatten)
batch = []
if len(batch) >= min_batch_size:
yield self._padding_batch(batch, padding_to, flatten)
self._epoch += 1
return batch_reader
@property
def feeding(self):
"""Returns data reader's feeding dict.
:return: Data feeding dict.
:rtype: dict
"""
return {"audio_spectrogram": 0, "transcript_text": 1}
@property
def vocab_size(self):
"""Return the vocabulary size.
:return: Vocabulary size.
:rtype: int
"""
return self._speech_featurizer.vocab_size
@property
def vocab_list(self):
"""Return the vocabulary in list.
:return: Vocabulary in list.
:rtype: list
"""
return self._speech_featurizer.vocab_list
def _process_utterance(self, filename, transcript):
"""Load, augment, featurize and normalize for speech data."""
speech_segment = SpeechSegment.from_file(filename, transcript)
self._augmentation_pipeline.transform_audio(speech_segment)
specgram, text_ids = self._speech_featurizer.featurize(speech_segment)
specgram = self._normalizer.apply(specgram)
return specgram, text_ids
def _instance_reader_creator(self, manifest):
"""
Instance reader creator. Create a callable function to produce
instances of data.
Instance: a tuple of ndarray of audio spectrogram and a list of
token indices for transcript.
"""
def reader():
for instance in manifest:
yield self._process_utterance(instance["audio_filepath"],
instance["text"])
return reader
def _padding_batch(self, batch, padding_to=-1, flatten=False):
"""
Padding audio features with zeros to make them have the same shape (or
        a user-defined shape) within one batch.
        If ``padding_to`` is -1, the maximum shape in the batch will be used
as the target shape for padding. Otherwise, `padding_to` will be the
target shape (only refers to the second axis).
        If `flatten` is True, features will be flattened to a 1-D array.
"""
new_batch = []
# get target shape
max_length = max([audio.shape[1] for audio, text in batch])
if padding_to != -1:
if padding_to < max_length:
raise ValueError("If padding_to is not -1, it should be larger "
"than any instance's shape in the batch")
max_length = padding_to
# padding
for audio, text in batch:
padded_audio = np.zeros([audio.shape[0], max_length])
padded_audio[:, :audio.shape[1]] = audio
if flatten:
padded_audio = padded_audio.flatten()
new_batch.append((padded_audio, text))
return new_batch
def _batch_shuffle(self, manifest, batch_size):
"""Put similarly-sized instances into minibatches for better efficiency
and make a batch-wise shuffle.
1. Sort the audio clips by duration.
2. Generate a random number `k`, k in [0, batch_size).
3. Randomly shift `k` instances in order to create different batches
for different epochs. Create minibatches.
4. Shuffle the minibatches.
:param manifest: Manifest contents. List of dict.
:type manifest: list
:param batch_size: Batch size. This size is also used for generate
a random number for batch shuffle.
:type batch_size: int
        :return: Batch-shuffled manifest.
:rtype: list
"""
manifest.sort(key=lambda x: x["duration"])
shift_len = self._rng.randint(0, batch_size - 1)
batch_manifest = zip(*[iter(manifest[shift_len:])] * batch_size)
self._rng.shuffle(batch_manifest)
batch_manifest = list(sum(batch_manifest, ()))
res_len = len(manifest) - shift_len - len(batch_manifest)
batch_manifest.extend(manifest[-res_len:])
batch_manifest.extend(manifest[0:shift_len])
return batch_manifest
"""Contains the audio featurizer class."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from data_utils import utils
from data_utils.audio import AudioSegment
class AudioFeaturizer(object):
"""Audio featurizer, for extracting features from audio contents of
AudioSegment or SpeechSegment.
Currently, it only supports feature type of linear spectrogram.
:param specgram_type: Specgram feature type. Options: 'linear'.
:type specgram_type: str
:param stride_ms: Striding size (in milliseconds) for generating frames.
:type stride_ms: float
:param window_ms: Window size (in milliseconds) for generating frames.
:type window_ms: float
:param max_freq: Used when specgram_type is 'linear', only FFT bins
corresponding to frequencies between [0, max_freq] are
returned.
    :type max_freq: None|float
"""
def __init__(self,
specgram_type='linear',
stride_ms=10.0,
window_ms=20.0,
max_freq=None):
self._specgram_type = specgram_type
self._stride_ms = stride_ms
self._window_ms = window_ms
self._max_freq = max_freq
def featurize(self, audio_segment):
"""Extract audio features from AudioSegment or SpeechSegment.
:param audio_segment: Audio/speech segment to extract features from.
:type audio_segment: AudioSegment|SpeechSegment
:return: Spectrogram audio feature in 2darray.
:rtype: ndarray
"""
return self._compute_specgram(audio_segment.samples,
audio_segment.sample_rate)
def _compute_specgram(self, samples, sample_rate):
"""Extract various audio features."""
if self._specgram_type == 'linear':
return self._compute_linear_specgram(
samples, sample_rate, self._stride_ms, self._window_ms,
self._max_freq)
else:
raise ValueError("Unknown specgram_type %s. "
"Supported values: linear." % self._specgram_type)
def _compute_linear_specgram(self,
samples,
sample_rate,
stride_ms=10.0,
window_ms=20.0,
max_freq=None,
eps=1e-14):
"""Compute the linear spectrogram from FFT energy."""
if max_freq is None:
max_freq = sample_rate / 2
if max_freq > sample_rate / 2:
raise ValueError("max_freq must be greater than half of "
"sample rate.")
if stride_ms > window_ms:
raise ValueError("Stride size must not be greater than "
"window size.")
stride_size = int(0.001 * sample_rate * stride_ms)
window_size = int(0.001 * sample_rate * window_ms)
specgram, freqs = self._specgram_real(
samples,
window_size=window_size,
stride_size=stride_size,
sample_rate=sample_rate)
ind = np.where(freqs <= max_freq)[0][-1] + 1
return np.log(specgram[:ind, :] + eps)
def _specgram_real(self, samples, window_size, stride_size, sample_rate):
"""Compute the spectrogram for samples from a real signal."""
# extract strided windows
truncate_size = (len(samples) - window_size) % stride_size
samples = samples[:len(samples) - truncate_size]
nshape = (window_size, (len(samples) - window_size) // stride_size + 1)
nstrides = (samples.strides[0], samples.strides[0] * stride_size)
windows = np.lib.stride_tricks.as_strided(
samples, shape=nshape, strides=nstrides)
assert np.all(
windows[:, 1] == samples[stride_size:(stride_size + window_size)])
# window weighting, squared Fast Fourier Transform (fft), scaling
weighting = np.hanning(window_size)[:, None]
fft = np.fft.rfft(windows * weighting, axis=0)
fft = np.absolute(fft)**2
scale = np.sum(weighting**2) * sample_rate
fft[1:-1, :] *= (2.0 / scale)
fft[(0, -1), :] /= scale
# prepare fft frequency list
freqs = float(sample_rate) / window_size * np.arange(fft.shape[0])
return fft, freqs
"""Contains the speech featurizer class."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from data_utils.featurizer.audio_featurizer import AudioFeaturizer
from data_utils.featurizer.text_featurizer import TextFeaturizer
class SpeechFeaturizer(object):
"""Speech featurizer, for extracting features from both audio and transcript
contents of SpeechSegment.
Currently, for audio parts, it only supports feature type of linear
spectrogram; for transcript parts, it only supports char-level tokenizing
and conversion into a list of token indices. Note that the token indexing
order follows the given vocabulary file.
:param vocab_filepath: Filepath to load vocabulary for token indices
conversion.
    :type vocab_filepath: basestring
:param specgram_type: Specgram feature type. Options: 'linear'.
:type specgram_type: str
:param stride_ms: Striding size (in milliseconds) for generating frames.
:type stride_ms: float
:param window_ms: Window size (in milliseconds) for generating frames.
:type window_ms: float
:param max_freq: Used when specgram_type is 'linear', only FFT bins
corresponding to frequencies between [0, max_freq] are
returned.
    :type max_freq: None|float
"""
def __init__(self,
vocab_filepath,
specgram_type='linear',
stride_ms=10.0,
window_ms=20.0,
max_freq=None):
self._audio_featurizer = AudioFeaturizer(specgram_type, stride_ms,
window_ms, max_freq)
self._text_featurizer = TextFeaturizer(vocab_filepath)
def featurize(self, speech_segment):
"""Extract features for speech segment.
1. For audio parts, extract the audio features.
2. For transcript parts, convert text string to a list of token indices
in char-level.
:param audio_segment: Speech segment to extract features from.
:type audio_segment: SpeechSegment
:return: A tuple of 1) spectrogram audio feature in 2darray, 2) list of
char-level token indices.
:rtype: tuple
"""
audio_feature = self._audio_featurizer.featurize(speech_segment)
text_ids = self._text_featurizer.featurize(speech_segment.transcript)
return audio_feature, text_ids
@property
def vocab_size(self):
"""Return the vocabulary size.
:return: Vocabulary size.
:rtype: int
"""
return self._text_featurizer.vocab_size
@property
def vocab_list(self):
"""Return the vocabulary in list.
:return: Vocabulary in list.
:rtype: list
"""
return self._text_featurizer.vocab_list
"""Contains the text featurizer class."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
class TextFeaturizer(object):
"""Text featurizer, for processing or extracting features from text.
Currently, it only supports char-level tokenizing and conversion into
a list of token indices. Note that the token indexing order follows the
given vocabulary file.
:param vocab_filepath: Filepath to load vocabulary for token indices
conversion.
    :type vocab_filepath: basestring
"""
def __init__(self, vocab_filepath):
self._vocab_dict, self._vocab_list = self._load_vocabulary_from_file(
vocab_filepath)
def featurize(self, text):
"""Convert text string to a list of token indices in char-level.Note
that the token indexing order follows the given vocabulary file.
:param text: Text to process.
:type text: basestring
:return: List of char-level token indices.
:rtype: list
"""
tokens = self._char_tokenize(text)
return [self._vocab_dict[token] for token in tokens]
@property
def vocab_size(self):
"""Return the vocabulary size.
:return: Vocabulary size.
:rtype: int
"""
return len(self._vocab_list)
@property
def vocab_list(self):
"""Return the vocabulary in list.
:return: Vocabulary in list.
:rtype: list
"""
return self._vocab_list
def _char_tokenize(self, text):
"""Character tokenizer."""
return list(text.strip())
def _load_vocabulary_from_file(self, vocab_filepath):
"""Load vocabulary from file."""
vocab_lines = []
with open(vocab_filepath, 'r') as file:
vocab_lines.extend(file.readlines())
vocab_list = [line[:-1] for line in vocab_lines]
vocab_dict = dict(
[(token, id) for (id, token) in enumerate(vocab_list)])
return vocab_dict, vocab_list
"""Contains feature normalizers."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import random
import data_utils.utils as utils
from data_utils.audio import AudioSegment
class FeatureNormalizer(object):
"""Feature normalizer. Normalize features to be of zero mean and unit
stddev.
    If mean_std_filepath is provided (not None), the normalizer will directly
    initialize from the file. Otherwise, both manifest_path and featurize_func
    should be given for on-the-fly mean and stddev computation.
:param mean_std_filepath: File containing the pre-computed mean and stddev.
:type mean_std_filepath: None|basestring
:param manifest_path: Manifest of instances for computing mean and stddev.
    :type manifest_path: None|basestring
:param featurize_func: Function to extract features. It should be callable
with ``featurize_func(audio_segment)``.
:type featurize_func: None|callable
:param num_samples: Number of random samples for computing mean and stddev.
:type num_samples: int
:param random_seed: Random seed for sampling instances.
:type random_seed: int
:raises ValueError: If both mean_std_filepath and manifest_path
(or both mean_std_filepath and featurize_func) are None.
"""
def __init__(self,
mean_std_filepath,
manifest_path=None,
featurize_func=None,
num_samples=500,
random_seed=0):
if not mean_std_filepath:
if not (manifest_path and featurize_func):
raise ValueError("If mean_std_filepath is None, meanifest_path "
"and featurize_func should not be None.")
self._rng = random.Random(random_seed)
self._compute_mean_std(manifest_path, featurize_func, num_samples)
else:
self._read_mean_std_from_file(mean_std_filepath)
def apply(self, features, eps=1e-14):
"""Normalize features to be of zero mean and unit stddev.
:param features: Input features to be normalized.
:type features: ndarray
        :param eps: added to stddev to provide numerical stability.
:type eps: float
:return: Normalized features.
:rtype: ndarray
"""
return (features - self._mean) / (self._std + eps)
def write_to_file(self, filepath):
"""Write the mean and stddev to the file.
:param filepath: File to write mean and stddev.
:type filepath: basestring
"""
np.savez(filepath, mean=self._mean, std=self._std)
def _read_mean_std_from_file(self, filepath):
"""Load mean and std from file."""
npzfile = np.load(filepath)
self._mean = npzfile["mean"]
self._std = npzfile["std"]
def _compute_mean_std(self, manifest_path, featurize_func, num_samples):
"""Compute mean and std from randomly sampled instances."""
manifest = utils.read_manifest(manifest_path)
sampled_manifest = self._rng.sample(manifest, num_samples)
features = []
for instance in sampled_manifest:
features.append(
featurize_func(
AudioSegment.from_file(instance["audio_filepath"])))
features = np.hstack(features)
self._mean = np.mean(features, axis=1).reshape([-1, 1])
self._std = np.std(features, axis=1).reshape([-1, 1])
"""Contains the speech segment class."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from data_utils.audio import AudioSegment
class SpeechSegment(AudioSegment):
"""Speech segment abstraction, a subclass of AudioSegment,
with an additional transcript.
:param samples: Audio samples [num_samples x num_channels].
:type samples: ndarray.float32
:param sample_rate: Audio sample rate.
:type sample_rate: int
:param transcript: Transcript text for the speech.
    :type transcript: basestring
:raises TypeError: If the sample data type is not float or int.
"""
def __init__(self, samples, sample_rate, transcript):
AudioSegment.__init__(self, samples, sample_rate)
self._transcript = transcript
def __eq__(self, other):
"""Return whether two objects are equal.
"""
if not AudioSegment.__eq__(self, other):
return False
if self._transcript != other._transcript:
return False
return True
def __ne__(self, other):
"""Return whether two objects are unequal."""
return not self.__eq__(other)
@classmethod
def from_file(cls, filepath, transcript):
"""Create speech segment from audio file and corresponding transcript.
:param filepath: Filepath or file object to audio file.
:type filepath: basestring|file
:param transcript: Transcript text for the speech.
        :type transcript: basestring
        :return: Speech segment instance.
        :rtype: SpeechSegment
"""
audio = AudioSegment.from_file(filepath)
return cls(audio.samples, audio.sample_rate, transcript)
@classmethod
def from_bytes(cls, bytes, transcript):
"""Create speech segment from a byte string and corresponding
transcript.
:param bytes: Byte string containing audio samples.
:type bytes: str
:param transcript: Transcript text for the speech.
        :type transcript: basestring
        :return: Speech segment instance.
        :rtype: SpeechSegment
"""
audio = AudioSegment.from_bytes(bytes)
return cls(audio.samples, audio.sample_rate, transcript)
@property
def transcript(self):
"""Return the transcript text.
:return: Transcript text for the speech.
:rtype: basestring
"""
return self._transcript
"""Contains data helper functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
def read_manifest(manifest_path, max_duration=float('inf'), min_duration=0.0):
"""Load and parse manifest file.
Instances with durations outside [min_duration, max_duration] will be
filtered out.
:param manifest_path: Manifest file to load and parse.
:type manifest_path: basestring
:param max_duration: Maximal duration in seconds for instance filter.
:type max_duration: float
:param min_duration: Minimal duration in seconds for instance filter.
:type min_duration: float
:return: Manifest parsing results. List of dict.
:rtype: list
:raises IOError: If failed to parse the manifest.
"""
manifest = []
for json_line in open(manifest_path):
try:
json_data = json.loads(json_line)
except Exception as e:
raise IOError("Error reading manifest: %s" % str(e))
if (json_data["duration"] <= max_duration and
json_data["duration"] >= min_duration):
manifest.append(json_data)
return manifest
"""Prepare Librispeech ASR datasets.
Download, unpack and create manifest files.
Manifest file is a json-format file with each line containing the
meta data (i.e. audio filepath, transcript and audio duration)
of each audio file in the data set.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import distutils.util
import os
import wget
import tarfile
import argparse
import soundfile
import json
from paddle.v2.dataset.common import md5file
DATA_HOME = os.path.expanduser('~/.cache/paddle/dataset/speech')
URL_ROOT = "http://www.openslr.org/resources/12"
URL_TEST_CLEAN = URL_ROOT + "/test-clean.tar.gz"
URL_TEST_OTHER = URL_ROOT + "/test-other.tar.gz"
URL_DEV_CLEAN = URL_ROOT + "/dev-clean.tar.gz"
URL_DEV_OTHER = URL_ROOT + "/dev-other.tar.gz"
URL_TRAIN_CLEAN_100 = URL_ROOT + "/train-clean-100.tar.gz"
URL_TRAIN_CLEAN_360 = URL_ROOT + "/train-clean-360.tar.gz"
URL_TRAIN_OTHER_500 = URL_ROOT + "/train-other-500.tar.gz"
MD5_TEST_CLEAN = "32fa31d27d2e1cad72775fee3f4849a9"
MD5_TEST_OTHER = "fb5a50374b501bb3bac4815ee91d3135"
MD5_DEV_CLEAN = "42e2234ba48799c1f50f24a7926300a1"
MD5_DEV_OTHER = "c8d0bcc9cca99d4f8b62fcc847357931"
MD5_TRAIN_CLEAN_100 = "2a93770f6d5c6c964bc36631d331a522"
MD5_TRAIN_CLEAN_360 = "c0e676e450a7ff2f54aeade5171606fa"
MD5_TRAIN_OTHER_500 = "d1a0fd59409feb2c614ce4d30c387708"
parser = argparse.ArgumentParser(
    description='Download and prepare the LibriSpeech dataset.')
parser.add_argument(
"--target_dir",
default=DATA_HOME + "/Libri",
type=str,
help="Directory to save the dataset. (default: %(default)s)")
parser.add_argument(
"--manifest_prefix",
default="manifest",
type=str,
help="Filepath prefix for output manifests. (default: %(default)s)")
parser.add_argument(
"--full_download",
default="True",
type=distutils.util.strtobool,
help="Download all datasets for Librispeech."
" If False, only download a minimal requirement (test-clean, dev-clean"
" train-clean-100). (default: %(default)s)")
args = parser.parse_args()
def download(url, md5sum, target_dir):
"""
Download file from url to target_dir, and check md5sum.
"""
if not os.path.exists(target_dir): os.makedirs(target_dir)
filepath = os.path.join(target_dir, url.split("/")[-1])
if not (os.path.exists(filepath) and md5file(filepath) == md5sum):
print("Downloading %s ..." % url)
wget.download(url, target_dir)
print("\nMD5 Chesksum %s ..." % filepath)
if not md5file(filepath) == md5sum:
raise RuntimeError("MD5 checksum failed.")
else:
print("File exists, skip downloading. (%s)" % filepath)
return filepath
def unpack(filepath, target_dir):
"""
Unpack the file to the target_dir.
"""
print("Unpacking %s ..." % filepath)
tar = tarfile.open(filepath)
tar.extractall(target_dir)
tar.close()
def create_manifest(data_dir, manifest_path):
"""
Create a manifest json file summarizing the data set, with each line
containing the meta data (i.e. audio filepath, transcription text, audio
duration) of each audio file within the data set.
"""
print("Creating manifest %s ..." % manifest_path)
json_lines = []
for subfolder, _, filelist in sorted(os.walk(data_dir)):
text_filelist = [
filename for filename in filelist if filename.endswith('trans.txt')
]
if len(text_filelist) > 0:
text_filepath = os.path.join(data_dir, subfolder, text_filelist[0])
for line in open(text_filepath):
segments = line.strip().split()
text = ' '.join(segments[1:]).lower()
audio_filepath = os.path.join(data_dir, subfolder,
segments[0] + '.flac')
audio_data, samplerate = soundfile.read(audio_filepath)
duration = float(len(audio_data)) / samplerate
json_lines.append(
json.dumps({
'audio_filepath': audio_filepath,
'duration': duration,
'text': text
}))
with open(manifest_path, 'w') as out_file:
for line in json_lines:
out_file.write(line + '\n')
def prepare_dataset(url, md5sum, target_dir, manifest_path):
"""
    Download, unpack and create summary manifest file.
"""
if not os.path.exists(os.path.join(target_dir, "LibriSpeech")):
# download
filepath = download(url, md5sum, target_dir)
# unpack
unpack(filepath, target_dir)
else:
print("Skip downloading and unpacking. Data already exists in %s." %
target_dir)
# create manifest json file
create_manifest(target_dir, manifest_path)
def main():
prepare_dataset(
url=URL_TEST_CLEAN,
md5sum=MD5_TEST_CLEAN,
target_dir=os.path.join(args.target_dir, "test-clean"),
manifest_path=args.manifest_prefix + ".test-clean")
prepare_dataset(
url=URL_DEV_CLEAN,
md5sum=MD5_DEV_CLEAN,
target_dir=os.path.join(args.target_dir, "dev-clean"),
manifest_path=args.manifest_prefix + ".dev-clean")
prepare_dataset(
url=URL_TRAIN_CLEAN_100,
md5sum=MD5_TRAIN_CLEAN_100,
target_dir=os.path.join(args.target_dir, "train-clean-100"),
manifest_path=args.manifest_prefix + ".train-clean-100")
if args.full_download:
prepare_dataset(
url=URL_TEST_OTHER,
md5sum=MD5_TEST_OTHER,
target_dir=os.path.join(args.target_dir, "test-other"),
manifest_path=args.manifest_prefix + ".test-other")
prepare_dataset(
url=URL_DEV_OTHER,
md5sum=MD5_DEV_OTHER,
target_dir=os.path.join(args.target_dir, "dev-other"),
manifest_path=args.manifest_prefix + ".dev-other")
prepare_dataset(
url=URL_TRAIN_CLEAN_360,
md5sum=MD5_TRAIN_CLEAN_360,
target_dir=os.path.join(args.target_dir, "train-clean-360"),
manifest_path=args.manifest_prefix + ".train-clean-360")
prepare_dataset(
url=URL_TRAIN_OTHER_500,
md5sum=MD5_TRAIN_OTHER_500,
target_dir=os.path.join(args.target_dir, "train-other-500"),
manifest_path=args.manifest_prefix + ".train-other-500")
if __name__ == '__main__':
main()
cd librispeech
python librispeech.py
if [ $? -ne 0 ]; then
echo "Prepare LibriSpeech failed. Terminated."
exit 1
fi
cd -
cat librispeech/manifest.train* | shuf > manifest.train
cat librispeech/manifest.dev-clean > manifest.dev
cat librispeech/manifest.test-clean > manifest.test
echo "All done."
'
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
"""Contains various CTC decoder."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from itertools import groupby
def ctc_best_path_decode(probs_seq, vocabulary):
"""
Best path decoding, also called argmax decoding or greedy decoding.
    The path consisting of the most probable tokens is further post-processed to
    remove consecutive repetitions and all blanks.
:param probs_seq: 2-D list of probabilities over the vocabulary for each
character. Each element is a list of float probabilities
for one character.
:type probs_seq: list
:param vocabulary: Vocabulary list.
:type vocabulary: list
:return: Decoding result string.
    :rtype: basestring
"""
# dimension verification
for probs in probs_seq:
if not len(probs) == len(vocabulary) + 1:
raise ValueError("probs_seq dimension mismatchedd with vocabulary")
# argmax to get the best index for each time step
max_index_list = list(np.array(probs_seq).argmax(axis=1))
# remove consecutive duplicate indexes
index_list = [index_group[0] for index_group in groupby(max_index_list)]
# remove blank indexes
blank_index = len(vocabulary)
index_list = [index for index in index_list if index != blank_index]
# convert index list to string
return ''.join([vocabulary[index] for index in index_list])
def ctc_decode(probs_seq, vocabulary, method):
"""
    CTC-like sequence decoding from a sequence of likelihood probabilities.
:param probs_seq: 2-D list of probabilities over the vocabulary for each
character. Each element is a list of float probabilities
for one character.
:type probs_seq: list
:param vocabulary: Vocabulary list.
:type vocabulary: list
:param method: Decoding method name, with options: "best_path".
:type method: basestring
:return: Decoding result string.
    :rtype: basestring
"""
for prob_list in probs_seq:
if not len(prob_list) == len(vocabulary) + 1:
raise ValueError("probs dimension mismatchedd with vocabulary")
if method == "best_path":
return ctc_best_path_decode(probs_seq, vocabulary)
else:
raise ValueError("Decoding method [%s] is not supported.")
"""Inferer for DeepSpeech2 model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import gzip
import distutils.util
import paddle.v2 as paddle
from data_utils.data import DataGenerator
from model import deep_speech2
from decoder import ctc_decode
parser = argparse.ArgumentParser(
description='Simplified version of DeepSpeech2 inference.')
parser.add_argument(
"--num_samples",
default=10,
type=int,
help="Number of samples for inference. (default: %(default)s)")
parser.add_argument(
"--num_conv_layers",
default=2,
type=int,
help="Convolution layer number. (default: %(default)s)")
parser.add_argument(
"--num_rnn_layers",
default=3,
type=int,
help="RNN layer number. (default: %(default)s)")
parser.add_argument(
"--rnn_layer_size",
default=512,
type=int,
help="RNN layer cell number. (default: %(default)s)")
parser.add_argument(
"--use_gpu",
default=True,
type=distutils.util.strtobool,
help="Use gpu or not. (default: %(default)s)")
parser.add_argument(
"--mean_std_filepath",
default='mean_std.npz',
type=str,
help="Manifest path for normalizer. (default: %(default)s)")
parser.add_argument(
"--decode_manifest_path",
default='datasets/manifest.test',
type=str,
help="Manifest path for decoding. (default: %(default)s)")
parser.add_argument(
"--model_filepath",
default='./params.tar.gz',
type=str,
help="Model filepath. (default: %(default)s)")
parser.add_argument(
"--vocab_filepath",
default='datasets/vocab/eng_vocab.txt',
type=str,
help="Vocabulary filepath. (default: %(default)s)")
args = parser.parse_args()
def infer():
"""
    Max (best path) CTC decoding for DeepSpeech2.
"""
# initialize data generator
data_generator = DataGenerator(
vocab_filepath=args.vocab_filepath,
mean_std_filepath=args.mean_std_filepath,
augmentation_config='{}')
# create network config
# paddle.data_type.dense_array is used for variable batch input.
    # The size 161 * 161 is only a placeholder value and the real shape
# of input batch data will be induced during training.
audio_data = paddle.layer.data(
name="audio_spectrogram", type=paddle.data_type.dense_array(161 * 161))
text_data = paddle.layer.data(
name="transcript_text",
type=paddle.data_type.integer_value_sequence(data_generator.vocab_size))
output_probs = deep_speech2(
audio_data=audio_data,
text_data=text_data,
dict_size=data_generator.vocab_size,
num_conv_layers=args.num_conv_layers,
num_rnn_layers=args.num_rnn_layers,
rnn_size=args.rnn_layer_size,
is_inference=True)
# load parameters
parameters = paddle.parameters.Parameters.from_tar(
gzip.open(args.model_filepath))
# prepare infer data
batch_reader = data_generator.batch_reader_creator(
manifest_path=args.decode_manifest_path,
batch_size=args.num_samples,
sortagrad=False,
batch_shuffle=False)
infer_data = batch_reader().next()
# run inference
infer_results = paddle.infer(
output_layer=output_probs, parameters=parameters, input=infer_data)
num_steps = len(infer_results) // len(infer_data)
probs_split = [
infer_results[i * num_steps:(i + 1) * num_steps]
for i in xrange(len(infer_data))
]
# decode and print
for i, probs in enumerate(probs_split):
output_transcription = ctc_decode(
probs_seq=probs,
vocabulary=data_generator.vocab_list,
method="best_path")
target_transcription = ''.join(
[data_generator.vocab_list[index] for index in infer_data[i][1]])
print("Target Transcription: %s \nOutput Transcription: %s \n" %
(target_transcription, output_transcription))
def main():
paddle.init(use_gpu=args.use_gpu, trainer_count=1)
infer()
if __name__ == '__main__':
main()
"""Contains DeepSpeech2 model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle.v2 as paddle
def conv_bn_layer(input, filter_size, num_channels_in, num_channels_out, stride,
padding, act):
"""
Convolution layer with batch normalization.
"""
conv_layer = paddle.layer.img_conv(
input=input,
filter_size=filter_size,
num_channels=num_channels_in,
num_filters=num_channels_out,
stride=stride,
padding=padding,
act=paddle.activation.Linear(),
bias_attr=False)
return paddle.layer.batch_norm(input=conv_layer, act=act)
def bidirectional_simple_rnn_bn_layer(name, input, size, act):
"""
    Bidirectional simple RNN layer with sequence-wise batch normalization.
The batch normalization is only performed on input-state weights.
"""
    # input-hidden weights shared across the bi-directional rnn.
input_proj = paddle.layer.fc(
input=input, size=size, act=paddle.activation.Linear(), bias_attr=False)
# batch norm is only performed on input-state projection
input_proj_bn = paddle.layer.batch_norm(
input=input_proj, act=paddle.activation.Linear())
# forward and backward in time
forward_simple_rnn = paddle.layer.recurrent(
input=input_proj_bn, act=act, reverse=False)
backward_simple_rnn = paddle.layer.recurrent(
input=input_proj_bn, act=act, reverse=True)
return paddle.layer.concat(input=[forward_simple_rnn, backward_simple_rnn])
def conv_group(input, num_stacks):
"""
Convolution group with several stacking convolution layers.
"""
conv = conv_bn_layer(
input=input,
filter_size=(11, 41),
num_channels_in=1,
num_channels_out=32,
stride=(3, 2),
padding=(5, 20),
act=paddle.activation.BRelu())
for i in xrange(num_stacks - 1):
conv = conv_bn_layer(
input=conv,
filter_size=(11, 21),
num_channels_in=32,
num_channels_out=32,
stride=(1, 2),
padding=(5, 10),
act=paddle.activation.BRelu())
output_num_channels = 32
output_height = 160 // pow(2, num_stacks) + 1
return conv, output_num_channels, output_height
def rnn_group(input, size, num_stacks):
"""
RNN group with several stacking RNN layers.
"""
output = input
for i in xrange(num_stacks):
output = bidirectional_simple_rnn_bn_layer(
name=str(i), input=output, size=size, act=paddle.activation.BRelu())
return output
def deep_speech2(audio_data,
text_data,
dict_size,
num_conv_layers=2,
num_rnn_layers=3,
rnn_size=256,
is_inference=False):
"""
The whole DeepSpeech2 model structure (a simplified version).
:param audio_data: Audio spectrogram data layer.
:type audio_data: LayerOutput
:param text_data: Transcription text data layer.
:type text_data: LayerOutput
:param dict_size: Dictionary size for tokenized transcription.
:type dict_size: int
:param num_conv_layers: Number of stacking convolution layers.
:type num_conv_layers: int
:param num_rnn_layers: Number of stacking RNN layers.
:type num_rnn_layers: int
:param rnn_size: RNN layer size (number of RNN cells).
:type rnn_size: int
:param is_inference: False in the training mode, and True in the
                         inference mode.
:type is_inference: bool
:return: If is_inference set False, return a ctc cost layer;
if is_inference set True, return a sequence layer of output
probability distribution.
    :rtype: LayerOutput
"""
# convolution group
conv_group_output, conv_group_num_channels, conv_group_height = conv_group(
input=audio_data, num_stacks=num_conv_layers)
    # convert data from convolution feature maps to a sequence of vectors
conv2seq = paddle.layer.block_expand(
input=conv_group_output,
num_channels=conv_group_num_channels,
stride_x=1,
stride_y=1,
block_x=1,
block_y=conv_group_height)
# rnn group
rnn_group_output = rnn_group(
input=conv2seq, size=rnn_size, num_stacks=num_rnn_layers)
fc = paddle.layer.fc(
input=rnn_group_output,
size=dict_size + 1,
act=paddle.activation.Linear(),
bias_attr=True)
if is_inference:
# probability distribution with softmax
return paddle.layer.mixed(
input=paddle.layer.identity_projection(input=fc),
act=paddle.activation.Softmax())
else:
# ctc cost
return paddle.layer.warp_ctc(
input=fc,
label=text_data,
size=dict_size + 1,
blank=dict_size,
norm_by_times=True)
SoundFile==0.9.0.post1
wget==3.2
"""Trainer for DeepSpeech2 model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import os
import argparse
import gzip
import time
import distutils.util
import paddle.v2 as paddle
from model import deep_speech2
from data_utils.data import DataGenerator
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--batch_size", default=32, type=int, help="Minibatch size.")
parser.add_argument(
"--num_passes",
default=20,
type=int,
help="Training pass number. (default: %(default)s)")
parser.add_argument(
"--num_conv_layers",
default=2,
type=int,
help="Convolution layer number. (default: %(default)s)")
parser.add_argument(
"--num_rnn_layers",
default=3,
type=int,
help="RNN layer number. (default: %(default)s)")
parser.add_argument(
"--rnn_layer_size",
default=512,
type=int,
help="RNN layer cell number. (default: %(default)s)")
parser.add_argument(
"--adam_learning_rate",
default=5e-4,
type=float,
help="Learning rate for ADAM Optimizer. (default: %(default)s)")
parser.add_argument(
"--use_gpu",
default=True,
type=distutils.util.strtobool,
help="Use gpu or not. (default: %(default)s)")
parser.add_argument(
"--use_sortagrad",
default=True,
type=distutils.util.strtobool,
help="Use sortagrad or not. (default: %(default)s)")
parser.add_argument(
"--trainer_count",
default=4,
type=int,
help="Trainer number. (default: %(default)s)")
parser.add_argument(
"--mean_std_filepath",
default='mean_std.npz',
type=str,
help="Manifest path for normalizer. (default: %(default)s)")
parser.add_argument(
"--train_manifest_path",
default='datasets/manifest.train',
type=str,
help="Manifest path for training. (default: %(default)s)")
parser.add_argument(
"--dev_manifest_path",
default='datasets/manifest.dev',
type=str,
help="Manifest path for validation. (default: %(default)s)")
parser.add_argument(
"--vocab_filepath",
default='datasets/vocab/eng_vocab.txt',
type=str,
help="Vocabulary filepath. (default: %(default)s)")
parser.add_argument(
"--init_model_path",
default=None,
type=str,
help="If set None, the training will start from scratch. "
"Otherwise, the training will resume from "
"the existing model of this path. (default: %(default)s)")
parser.add_argument(
"--augmentation_config",
default='{}',
type=str,
help="Augmentation configuration in json-format. "
"(default: %(default)s)")
args = parser.parse_args()
def train():
"""
DeepSpeech2 training.
"""
# initialize data generator
def data_generator():
return DataGenerator(
vocab_filepath=args.vocab_filepath,
mean_std_filepath=args.mean_std_filepath,
augmentation_config=args.augmentation_config)
train_generator = data_generator()
test_generator = data_generator()
# create network config
# paddle.data_type.dense_array is used for variable batch input.
    # The size 161 * 161 is only a placeholder value and the real shape
# of input batch data will be induced during training.
audio_data = paddle.layer.data(
name="audio_spectrogram", type=paddle.data_type.dense_array(161 * 161))
text_data = paddle.layer.data(
name="transcript_text",
type=paddle.data_type.integer_value_sequence(
train_generator.vocab_size))
cost = deep_speech2(
audio_data=audio_data,
text_data=text_data,
dict_size=train_generator.vocab_size,
num_conv_layers=args.num_conv_layers,
num_rnn_layers=args.num_rnn_layers,
rnn_size=args.rnn_layer_size,
is_inference=False)
# create/load parameters and optimizer
if args.init_model_path is None:
parameters = paddle.parameters.create(cost)
else:
if not os.path.isfile(args.init_model_path):
raise IOError("Invalid model!")
parameters = paddle.parameters.Parameters.from_tar(
gzip.open(args.init_model_path))
optimizer = paddle.optimizer.Adam(
learning_rate=args.adam_learning_rate, gradient_clipping_threshold=400)
trainer = paddle.trainer.SGD(
cost=cost, parameters=parameters, update_equation=optimizer)
# prepare data reader
train_batch_reader = train_generator.batch_reader_creator(
manifest_path=args.train_manifest_path,
batch_size=args.batch_size,
sortagrad=args.use_sortagrad if args.init_model_path is None else False,
batch_shuffle=True)
test_batch_reader = test_generator.batch_reader_creator(
manifest_path=args.dev_manifest_path,
batch_size=args.batch_size,
sortagrad=False,
batch_shuffle=False)
# create event handler
def event_handler(event):
global start_time, cost_sum, cost_counter
if isinstance(event, paddle.event.EndIteration):
cost_sum += event.cost
cost_counter += 1
if event.batch_id % 50 == 0:
print("\nPass: %d, Batch: %d, TrainCost: %f" %
(event.pass_id, event.batch_id, cost_sum / cost_counter))
cost_sum, cost_counter = 0.0, 0
with gzip.open("params_tmp.tar.gz", 'w') as f:
parameters.to_tar(f)
else:
sys.stdout.write('.')
sys.stdout.flush()
if isinstance(event, paddle.event.BeginPass):
start_time = time.time()
cost_sum, cost_counter = 0.0, 0
if isinstance(event, paddle.event.EndPass):
result = trainer.test(
reader=test_batch_reader, feeding=test_generator.feeding)
print("\n------- Time: %d sec, Pass: %d, ValidationCost: %s" %
(time.time() - start_time, event.pass_id, result.cost))
# run train
trainer.train(
reader=train_batch_reader,
event_handler=event_handler,
num_passes=args.num_passes,
feeding=train_generator.feeding)
def main():
paddle.init(use_gpu=args.use_gpu, trainer_count=args.trainer_count)
train()
if __name__ == '__main__':
main()
## Usage
`caffe2paddle.py` provides the `ModelConverter` interface for converting models trained with Caffe into models usable by PaddlePaddle. It wraps conversion functions for layers commonly used in the image domain, such as Convolution and BatchNorm, and can convert popular models such as VGG and ResNet. The basic conversion process is: load the model through Caffe's Python API, walk through the layers one by one to obtain their information, adapt the parameters to PaddlePaddle according to the layer type and serialize them (layers that need no training, such as Pooling, are skipped), and output a model file that can be loaded directly by PaddlePaddle's Python API.
The `ModelConverter` interface can be used as follows:
```python
# Set the following variables to the corresponding file paths and file names
caffe_model_file = "./ResNet-50-deploy.prototxt"        # path of the Caffe network config file
caffe_pretrained_file = "./ResNet-50-model.caffemodel"  # path of the Caffe model file
paddle_tar_name = "Paddle_ResNet50.tar.gz"              # file name of the output Paddle model
# Initialize the converter and load the model from the given files
converter = ModelConverter(caffe_model_file=caffe_model_file,
                           caffe_pretrained_file=caffe_pretrained_file,
                           paddle_tar_name=paddle_tar_name)
# Convert the model
converter.convert()
```
The above steps are already provided in `caffe2paddle.py`; after changing the file-related variables, run `python caffe2paddle.py` to complete the conversion. In addition, to help verify the conversion result, `ModelConverter` wraps a `caffe_predict` interface that runs prediction with the Caffe API. Used as shown below, it prints a list of (class id, probability) pairs sorted by class probability:
```python
# img is the path of the input image; mean_file is the path of the image mean file
converter.caffe_predict(img="./cat.jpg", mean_file="./imagenet/ilsvrc_2012_mean.npy")
```
Note that layer parameters are named during model conversion. By default, PaddlePaddle's default layer and parameter naming rules are used: the layer name is built from the value given to `wrap_name_default` and the call count of that layer type, and parameter names are built with this layer name as a prefix. For example, the bias parameter of the first InnerProduct layer (the corresponding conversion function is shown below) will be named `___fc_layer_0__.wbias`:
```python
# Convert the parameters of an InnerProduct layer; the name value is used to build the parameter names of the corresponding layer
# wrap_name_default sets the default name value to fc_layer
@wrap_name_default("fc_layer")
def convert_InnerProduct_layer(self, params, name=None)
```
Therefore, when verifying and using the converted model, do not specify layer names in the PaddlePaddle network config, and keep the same topological order as the Caffe-side model. This matters especially for branched architectures such as ResNet: the branches must appear in the same order in PaddlePaddle and Caffe so that the model parameters can be loaded correctly.
If you prefer not to use the default naming and have specified layer names in the PaddlePaddle network config, you can build a `dict` that maps layer names between the Caffe and PaddlePaddle configs and pass it as the `name_map` argument when calling `ModelConverter.convert`. The mapped layer names are then used when naming the saved parameters, independent of the topological order. The `name_map` only needs to cover Convolution, InnerProduct and BatchNorm layers of the Caffe config (on the one hand, layers such as Pooling need no training and are not saved, so no conversion interface is provided for them; on the other hand, because of implementation differences between Caffe and PaddlePaddle, PaddlePaddle's batch_norm layer is a composite of Caffe's BatchNorm and Scale layers, so Scale receives special handling).
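As a hedged illustration (the Caffe layer names and the mapped PaddlePaddle layer names below are placeholders, not names taken from this repository), passing a `name_map` could look like this:
```python
# Hypothetical example: keys are Caffe layer names, values are the layer names
# used in your own PaddlePaddle network config. Only Convolution, InnerProduct
# and BatchNorm layers need entries.
name_map = {
    "conv1": "conv1_paddle",        # placeholder names, adjust to your configs
    "bn_conv1": "bn_conv1_paddle",
    "fc1000": "fc1000_paddle",
}
converter = ModelConverter(caffe_model_file=caffe_model_file,
                           caffe_pretrained_file=caffe_pretrained_file,
                           paddle_tar_name=paddle_tar_name)
# Saved parameters will be named after the mapped layer names instead of the
# default ___<layer_type>_<index>__ naming.
converter.convert(name_map=name_map)
```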
import os
import struct
import gzip
import tarfile
import cStringIO
import numpy as np
import cv2
import caffe
from paddle.proto.ParameterConfig_pb2 import ParameterConfig
from paddle.trainer_config_helpers.default_decorators import wrap_name_default
class ModelConverter(object):
def __init__(self, caffe_model_file, caffe_pretrained_file,
paddle_tar_name):
self.net = caffe.Net(caffe_model_file, caffe_pretrained_file,
caffe.TEST)
self.tar_name = paddle_tar_name
self.params = dict()
self.pre_layer_name = ""
self.pre_layer_type = ""
def convert(self, name_map=None):
layer_dict = self.net.layer_dict
for layer_name in layer_dict.keys():
layer = layer_dict[layer_name]
layer_params = layer.blobs
layer_type = layer.type
if len(layer_params) > 0:
self.pre_layer_name = getattr(
self, "convert_" + layer_type + "_layer")(
layer_params,
name=None
if name_map == None else name_map.get(layer_name))
self.pre_layer_type = layer_type
with gzip.open(self.tar_name, 'w') as f:
self.to_tar(f)
return
def to_tar(self, f):
tar = tarfile.TarFile(fileobj=f, mode='w')
for param_name in self.params.keys():
param_conf, param_data = self.params[param_name]
confStr = param_conf.SerializeToString()
tarinfo = tarfile.TarInfo(name="%s.protobuf" % param_name)
tarinfo.size = len(confStr)
buf = cStringIO.StringIO(confStr)
buf.seek(0)
tar.addfile(tarinfo, fileobj=buf)
buf = cStringIO.StringIO()
self.serialize(param_data, buf)
tarinfo = tarfile.TarInfo(name=param_name)
buf.seek(0)
tarinfo.size = len(buf.getvalue())
tar.addfile(tarinfo, buf)
@staticmethod
def serialize(data, f):
f.write(struct.pack("IIQ", 0, 4, data.size))
f.write(data.tobytes())
@wrap_name_default("conv")
def convert_Convolution_layer(self, params, name=None):
for i in range(len(params)):
data = np.array(params[i].data)
if len(params) == 2:
suffix = "0" if i == 0 else "bias"
file_name = "_%s.w%s" % (name, suffix)
else:
file_name = "_%s.w%s" % (name, str(i))
param_conf = ParameterConfig()
param_conf.name = file_name
param_conf.size = reduce(lambda a, b: a * b, data.shape)
self.params[file_name] = (param_conf, data.flatten())
return name
@wrap_name_default("fc_layer")
def convert_InnerProduct_layer(self, params, name=None):
for i in range(len(params)):
data = np.array(params[i].data)
if len(params) == 2:
suffix = "0" if i == 0 else "bias"
file_name = "_%s.w%s" % (name, suffix)
else:
file_name = "_%s.w%s" % (name, str(i))
data = np.transpose(data)
param_conf = ParameterConfig()
param_conf.name = file_name
dims = list(data.shape)
if len(dims) < 2:
dims.insert(0, 1)
param_conf.size = reduce(lambda a, b: a * b, dims)
param_conf.dims.extend(dims)
self.params[file_name] = (param_conf, data.flatten())
return name
@wrap_name_default("batch_norm")
def convert_BatchNorm_layer(self, params, name=None):
scale = 1 / np.array(params[-1].data)[0] if np.array(
params[-1].data)[0] != 0 else 0
for i in range(2):
data = np.array(params[i].data) * scale
file_name = "_%s.w%s" % (name, str(i + 1))
param_conf = ParameterConfig()
param_conf.name = file_name
dims = list(data.shape)
assert len(dims) == 1
dims.insert(0, 1)
param_conf.size = reduce(lambda a, b: a * b, dims)
param_conf.dims.extend(dims)
self.params[file_name] = (param_conf, data.flatten())
return name
def convert_Scale_layer(self, params, name=None):
assert self.pre_layer_type == "BatchNorm"
name = self.pre_layer_name
for i in range(len(params)):
data = np.array(params[i].data)
suffix = "0" if i == 0 else "bias"
file_name = "_%s.w%s" % (name, suffix)
param_conf = ParameterConfig()
param_conf.name = file_name
dims = list(data.shape)
assert len(dims) == 1
dims.insert(0, 1)
param_conf.size = reduce(lambda a, b: a * b, dims)
if i == 1:
param_conf.dims.extend(dims)
self.params[file_name] = (param_conf, data.flatten())
return name
def caffe_predict(self,
img,
mean_file='./caffe/imagenet/ilsvrc_2012_mean.npy'):
net = self.net
net.blobs['data'].data[...] = load_image(img, mean_file=mean_file)
out = net.forward()
output_prob = net.blobs['prob'].data[0].flatten()
print zip(np.argsort(output_prob)[::-1], np.sort(output_prob)[::-1])
def load_image(file, resize_size=256, crop_size=224, mean_file=None):
# load image
im = cv2.imread(file)
# resize
h, w = im.shape[:2]
h_new, w_new = resize_size, resize_size
if h > w:
h_new = resize_size * h / w
else:
w_new = resize_size * w / h
im = cv2.resize(im, (h_new, w_new), interpolation=cv2.INTER_CUBIC)
# crop
h, w = im.shape[:2]
h_start = (h - crop_size) / 2
w_start = (w - crop_size) / 2
h_end, w_end = h_start + crop_size, w_start + crop_size
im = im[h_start:h_end, w_start:w_end, :]
# transpose to CHW order
im = im.transpose((2, 0, 1))
if mean_file:
mu = np.load(mean_file)
mu = mu.mean(1).mean(1)
im = im - mu[:, None, None]
im = im / 255.0
return im
if __name__ == "__main__":
caffe_model_file = "./ResNet-50-deploy.prototxt"
caffe_pretrained_file = "./ResNet-50-model.caffemodel"
paddle_tar_name = "Paddle_ResNet50.tar.gz"
converter = ModelConverter(
caffe_model_file=caffe_model_file,
caffe_pretrained_file=caffe_pretrained_file,
paddle_tar_name=paddle_tar_name)
converter.convert()
converter.caffe_predict("./cat.jpg",
"./caffe/imagenet/ilsvrc_2012_mean.npy")
......@@ -106,4 +106,4 @@ def main():
if __name__ == '__main__':
main()
main()
\ No newline at end of file
# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.v2 as paddle
__all__ = ['vgg13', 'vgg16', 'vgg19']
......
TBD
# Language Model
## Introduction
A language model (LM for short) is a probability distribution model; simply put, it computes the probability of a sentence. With it, one can determine which word sequence is more likely, or, given several words, predict the most likely next word. The language model is an important foundational model in natural language processing.
## Application Scenarios
**Language models are used in many areas**, for example:
* **Automatic writing**: a language model can generate the next word from the preceding text; applied recursively, it can generate whole sentences, paragraphs, and articles.
* **QA**: a language model can generate an Answer from a Question.
* **Machine translation**: most mainstream machine translation models follow the Encoder-Decoder paradigm, where the Decoder is a language model that generates the target language.
* **Spell checking**: a language model can compute the probability of a word sequence; the probability usually drops sharply at a spelling error, which can be used to detect errors and offer correction candidates.
* **Part-of-speech tagging, syntactic parsing, speech recognition, and more.**
## About This Example
Common implementations of language models include N-Gram, RNN, and seq2seq. This example implements language models based on N-Gram and RNN. **The file structure of this example is as follows** (the `images` folder is unrelated to usage and can be ignored):
```text
.
├── data                 # toy/demo data; users can format their own data the same way
│   ├── chinese.test.txt     # demo data for testing
|   ├── chinese.train.txt    # demo data for training
│   └── input.txt            # demo input data for inference
├── config.py            # configuration file, covering data, train and infer settings
├── infer.py             # prediction script, i.e. text generation
├── network_conf.py      # all network structures used in this example are defined here; modify this file to change the model structure
├── reader.py            # data reading interface
├── README.md            # this document
├── train.py             # training script
└── utils.py             # common utility functions, e.g. building and loading the vocabulary
```
**Note: an N-Gram based language model generally performs worse than an RNN based one, so RNN based language models are recommended in practice. This example therefore focuses on the RNN based model and only briefly introduces the N-Gram based model.**
## RNN Language Model
### Introduction
An RNN is a sequence model. The basic idea is: at time t, the hidden-layer output of the previous time step t-1 and the word vector of time t are fed into the hidden layer together to obtain the feature representation of time t; this representation is then used to produce the prediction at time t, and the recursion continues along the time dimension. RNNs are thus good at exploiting context and historical knowledge, i.e. they have a "memory". In theory an RNN can capture long-range dependencies (i.e. use knowledge from long ago), but in practice the results are often unsatisfactory, which led to many RNN variants; the commonly used LSTM and GRU improve the vanilla RNN cell and make up for its shortcomings, and both are used in this example. The figure below illustrates the "recurrent" idea of RNN language models (in the broad sense, including LSTM, GRU, etc.):
<p align=center><img src='images/rnn.png' width='500px'/></p>
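To make the recurrence above concrete, here is a minimal NumPy sketch of a vanilla RNN step (for illustration only; it is not the `rnn_lm` network in `network_conf.py`, and all shapes and weights are made-up toy values):
```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One vanilla RNN step: combine the current input and the previous
    hidden state to produce the new hidden state."""
    return np.tanh(x_t.dot(W_x) + h_prev.dot(W_h) + b)

# Toy dimensions, purely illustrative.
emb_dim, hidden_size, seq_len = 4, 8, 5
rng = np.random.RandomState(0)
W_x = rng.randn(emb_dim, hidden_size) * 0.1
W_h = rng.randn(hidden_size, hidden_size) * 0.1
b = np.zeros(hidden_size)

h = np.zeros(hidden_size)
for t in range(seq_len):
    x_t = rng.randn(emb_dim)           # stands in for the word embedding at time t
    h = rnn_step(x_t, h, W_x, W_h, b)  # h carries the "memory" forward
```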
### Model Implementation
The RNN language model in this example is implemented as follows:
* **Define model parameters**: the `Config_rnn` **class** in `config.py` defines the model's parameter variables.
* **Define the model structure**: the `rnn_lm` **function** in `network_conf.py` defines the model's **structure**, as follows:
    * Input layer: map the input word (or character) sequence into vectors, i.e. embeddings.
    * Middle layer: build the RNN layers according to the configuration, taking the embedding vector sequence from the previous step as input.
    * Output layer: compute word probabilities with softmax normalization and return the output.
    * Loss: define the model's cost as the multi-class cross-entropy loss.
* **Train the model**: the `main` method in `train.py` implements model training, with the following flow:
    * Prepare input data: build and save the vocabulary, and construct readers for the train and test data.
    * Initialize the model: including the model structure and parameters.
    * Build the trainer: the demo uses the Adam optimizer.
    * Define the callback: build an `event_handler` to track the loss during training and to save the model parameters at the end of each pass.
    * Train: train the model with the trainer.
* **Generate text**: the `main` method in `infer.py` implements text generation, with the following flow:
    * Choose the generation method according to the configuration: RNN model or N-Gram model.
    * Load the trained model and the vocabulary file.
    * Read the `input_file` (each line is a sentence prefix) and generate text for each prefix with the heuristic beam search algorithm `beam_search`.
    * Save the generated text together with its prefix to `output_file`.
## N-Gram Language Model
### Introduction
The N-Gram model is also known as the (N-1)-th order Markov model. It makes a finite-history assumption: the probability of the current word depends only on the preceding N-1 words. The parameters are usually estimated with Maximum Likelihood Estimation (MLE). For N = 1, 2, 3 the N-Gram model is called the unigram, bigram and trigram language model respectively. In general, the larger N is and the larger the training corpus is, the more reliable the parameter estimates are; however, because the model is relatively simple, has limited expressive power and suffers from data sparsity, an N-Gram language model is usually not as good as RNN or seq2seq models. The figure below shows the structure of a neural-network-based N-Gram language model:
<p align=center><img src='images/ngram.png' width='500px'/></p>
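To make the MLE estimation above concrete, here is a toy, purely illustrative sketch of counting bigram (N=2) statistics and computing the MLE conditional probabilities; the tiny corpus, the `<s>`/`</s>` tokens and the absence of smoothing are assumptions made only for this demonstration:
```python
from collections import Counter, defaultdict

# Toy corpus, purely illustrative; real data comes from data/chinese.train.txt.
corpus = [["<s>", "I", "love", "China", "</s>"],
          ["<s>", "I", "am", "Chinese", "</s>"]]

unigram_counts = Counter()
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    for prev, cur in zip(sentence[:-1], sentence[1:]):
        unigram_counts[prev] += 1
        bigram_counts[prev][cur] += 1

def bigram_prob(prev, cur):
    """MLE estimate: P(cur | prev) = count(prev, cur) / count(prev)."""
    if unigram_counts[prev] == 0:
        return 0.0
    return bigram_counts[prev][cur] / float(unigram_counts[prev])

print(bigram_prob("<s>", "I"))   # 1.0 in this toy corpus
print(bigram_prob("I", "love"))  # 0.5
```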
### Model Implementation
The N-Gram language model in this example is implemented as follows:
* **Define model parameters**: the `Config_ngram` **class** in `config.py` defines the model's parameter variables.
* **Define the model structure**: the `ngram_lm` **function** in `network_conf.py` defines the model's **structure**, as follows:
    * Input layer: in this example N is 5; the previous four words are embedded separately and the embeddings are concatenated as input.
    * Middle layer: build the DNN layers according to the configuration, taking the concatenated embedding vector from the previous step as input.
    * Output layer: compute word probabilities with softmax normalization and return the output.
    * Loss: define the model's cost as the multi-class cross-entropy loss.
* **Train the model**: the `main` method in `train.py` implements model training; the flow is essentially the same as for the RNN language model above.
* **Generate text**: the `main` method in `infer.py` implements text generation; the flow is essentially the same as for the RNN language model above, except that when building the input, the last 4 (N-1) words of each prefix are used as input.
## Usage
Run this example as follows:
* 1. Run `python train.py` to train the model (RNN by default) and wait for training to finish.
* 2. Run `python infer.py` to do prediction. (The input text defaults to `data/input.txt`, and the generated text is saved to `data/output.txt` by default.)
**If you want to use your own corpus or customize the model, the main things to modify are the corpus itself and the settings in `config.py`; the details and adaptation work are as follows:**
### Corpus Adaptation
* Clean the corpus: remove spaces, tabs and garbled characters from the raw text, and remove digits, punctuation and special symbols as needed.
* Encoding: UTF-8; this example has already been adapted for Chinese.
* Content format: one sentence per line, with the words in each line separated by a single space.
* Configure the data section of `config.py` as needed:
```python
# -- config : data --
train_file = 'data/chinese.train.txt'
test_file = 'data/chinese.test.txt'
vocab_file = 'data/vocab_cn.txt' # the file to save vocab
build_vocab_method = 'fixed_size' # 'frequency' or 'fixed_size'
vocab_max_size = 3000 # when build_vocab_method = 'fixed_size'
unk_threshold = 1 # # when build_vocab_method = 'frequency'
min_sentence_length = 3
max_sentence_length = 60
```
Here, `build_vocab_method` selects how the vocabulary is built: **1. by frequency**, i.e. words that occur fewer than `unk_threshold` times are treated as `<UNK>`; **2. by vocabulary size**, where `vocab_max_size` sets the maximum vocabulary size: if the corpus contains more distinct words than this value, the words are sorted by frequency in descending order and the top `vocab_max_size` words are put into the vocabulary. (A small illustrative sketch of both methods is given after the note below.)
`min_sentence_length` and `max_sentence_length` set the minimum and maximum sentence lengths; sentences shorter than the minimum or longer than the maximum are filtered out and do not take part in training.
*Note: the larger the vocabulary, the richer the generated text but the longer training takes. After Chinese word segmentation a corpus can easily contain tens of thousands to hundreds of thousands of distinct words; if vocab\_max\_size is too small the proportion of \<UNK\> becomes too high, while a large value severely slows down training (and also affects accuracy). For this reason there is also a "character-level" training mode, i.e. treating each Chinese character as a word: there are only a few thousand common characters, so the vocabulary stays small and little information is lost, but the same character can have very different meanings in different words, which sometimes hurts model quality. Users are encouraged to experiment and choose between "word-level" and "character-level" training based on their own data.*
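For illustration only, the two vocabulary-building strategies above could be sketched roughly as follows; the helper name `build_vocab` and the `<UNK>`-at-index-0 handling are assumptions for this sketch, not the actual code in `utils.py`:
```python
# coding=utf-8
from collections import Counter

def build_vocab(sentences, method='fixed_size', vocab_max_size=3000,
                unk_threshold=1):
    """Toy sketch; `sentences` is a list of word lists."""
    counter = Counter(w for s in sentences for w in s)
    if method == 'fixed_size':
        # keep the vocab_max_size most frequent words
        words = [w for w, _ in counter.most_common(vocab_max_size)]
    else:  # 'frequency'
        # words occurring fewer than unk_threshold times are mapped to <UNK>
        words = [w for w, c in counter.items() if c >= unk_threshold]
    return {w: i for i, w in enumerate(['<UNK>'] + words)}

vocab = build_vocab([['我', '是', '中国', '人'], ['我', '爱', '中国']],
                    method='fixed_size', vocab_max_size=3)
```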
### Model Adaptation and Training
* Adjust the model settings in `config.py` as needed; the details are as follows:
```python
# -- config : train --
use_which_model = 'rnn' # must be: 'rnn' or 'ngram'
use_gpu = False # whether to use gpu
trainer_count = 1 # number of trainer
class Config_rnn(object):
"""
config for RNN language model
"""
rnn_type = 'gru' # or 'lstm'
emb_dim = 200
hidden_size = 200
num_layer = 2
num_passs = 2
batch_size = 32
model_file_name_prefix = 'lm_' + rnn_type + '_params_pass_'
class Config_ngram(object):
"""
config for N-Gram language model
"""
emb_dim = 200
hidden_size = 200
num_layer = 2
N = 5
num_passs = 2
batch_size = 32
model_file_name_prefix = 'lm_ngram_pass_'
```
Here, `use_which_model` selects the model to train: set it to 'rnn' for the RNN language model or 'ngram' for the N-Gram language model; `use_gpu` controls whether to train on the GPU; `trainer_count` sets the degree of parallelism, i.e. how many trainers are used; `rnn_type` selects the RNN cell type, either 'lstm' or 'gru'; `hidden_size` sets the number of hidden units; `num_layer` sets the number of RNN layers; `num_passs` sets the number of training passes; `emb_dim` sets the embedding dimension; `batch_size` sets the batch size used during training; `model_file_name_prefix` sets the file-name prefix of the saved model.
* Run `python train.py` to train the model; the model will be saved to the current directory.
### Generating Text on Demand
* Adjust the infer settings in `config.py` as needed; the details are as follows:
```python
# -- config : infer --
input_file = 'data/input.txt' # input file contains sentence prefix each line
output_file = 'data/output.txt' # the file to save results
num_words = 10 # the max number of words need to generate
beam_size = 5 # beam_width, the number of the prediction sentence for each prefix
```
Here, `input_file` stores the text prefixes to be completed, UTF-8 encoded, one prefix per line, for example:
```text
我 是
```
Save the prefixes you want to complete in this format;
`num_words` sets the maximum number of words to generate (generation stops early when the end token is produced, so the actual number may be smaller); `beam_size` sets the width of the beam search, i.e. how many candidate word sequences are generated per prefix; `output_file` sets where the results are saved.
* Run `python infer.py` to generate text; the results look like this:
```text
我 <EOS> 0.107702672482
我 爱 。我 中国 中国 <EOS> 0.000177299271939
我 爱 中国 。我 是 中国 <EOS> 4.51695544709e-05
我 爱 中国 中国 <EOS> 0.000910127729821
我 爱 中国 。我 是 <EOS> 0.00015957862922
```
Here, '我' is the prefix, the five sentences below it are the completions, and the floating-point number at the end of each sentence is its generation probability.
# coding=utf-8
# -- config : data --
train_file = 'data/chinese.train.txt'
test_file = 'data/chinese.test.txt'
vocab_file = 'data/vocab_cn.txt' # the file to save vocab
build_vocab_method = 'fixed_size' # 'frequency' or 'fixed_size'
vocab_max_size = 3000 # when build_vocab_method = 'fixed_size'
unk_threshold = 1 # # when build_vocab_method = 'frequency'
min_sentence_length = 3
max_sentence_length = 60
# -- config : train --
use_which_model = 'ngram' # must be: 'rnn' or 'ngram'
use_gpu = False # whether to use gpu
trainer_count = 1 # number of trainer
class Config_rnn(object):
"""
config for RNN language model
"""
rnn_type = 'gru' # or 'lstm'
emb_dim = 200
hidden_size = 200
num_layer = 2
num_passs = 2
batch_size = 32
model_file_name_prefix = 'lm_' + rnn_type + '_params_pass_'
class Config_ngram(object):
"""
config for N-Gram language model
"""
emb_dim = 200
hidden_size = 200
num_layer = 2
N = 5
num_passs = 2
batch_size = 32
model_file_name_prefix = 'lm_ngram_pass_'
# -- config : infer --
input_file = 'data/input.txt' # input file contains sentence prefix each line
output_file = 'data/output.txt' # the file to save results
num_words = 10 # the max number of words need to generate
beam_size = 5 # beam_width, the number of the prediction sentence for each prefix
我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。
\ No newline at end of file
我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。我 是 中国 人 。
我 爱 中国 。
\ No newline at end of file
我 是
我 是 中国
我 爱
我 是 中国 人。
我 爱 中国
我 爱 中国 。我
我 爱 中国 。我 爱
我 爱 中国 。我 是
我 爱 中国 。我 是 中国
\ No newline at end of file
# coding=utf-8
import paddle.v2 as paddle
import gzip
import numpy as np
from utils import *
import network_conf
from config import *
def generate_using_rnn(word_id_dict, num_words, beam_size):
"""
Demo: use RNN model to do prediction.
:param word_id_dict: vocab.
    :type word_id_dict: dict of {word: id}; 'word' is a string and 'id' is an int.
:param num_words: the number of the words to generate.
:type num_words: int
:param beam_size: beam width.
:type beam_size: int
:return: save prediction results to output_file
"""
# prepare and cache model
config = Config_rnn()
_, output_layer = network_conf.rnn_lm(
vocab_size=len(word_id_dict),
emb_dim=config.emb_dim,
rnn_type=config.rnn_type,
hidden_size=config.hidden_size,
num_layer=config.num_layer) # network config
model_file_name = config.model_file_name_prefix + str(config.num_passs -
1) + '.tar.gz'
parameters = paddle.parameters.Parameters.from_tar(
gzip.open(model_file_name)) # load parameters
inferer = paddle.inference.Inference(
output_layer=output_layer, parameters=parameters)
# tools, different from generate_using_ngram's tools
id_word_dict = dict(
[(v, k) for k, v in word_id_dict.items()]) # {id : word}
    def str2ids(text):
        return [[[
            word_id_dict.get(w, word_id_dict['<UNK>']) for w in text.split()
        ]]]
def ids2str(ids):
return [[[id_word_dict.get(id, ' ') for id in ids]]]
# generate text
with open(input_file) as file:
output_f = open(output_file, 'w')
for line in file:
line = line.decode('utf-8').strip()
# generate
texts = {} # type: {text : probability}
texts[line] = 1
for _ in range(num_words):
texts_new = {}
for (text, prob) in texts.items():
                    if '<EOS>' in text:  # stop expanding a candidate once <EOS> appears
texts_new[text] = prob
continue
# next word's probability distribution
predictions = inferer.infer(input=str2ids(text))
predictions[-1][word_id_dict['<UNK>']] = -1 # filter <UNK>
# find next beam_size words
for _ in range(beam_size):
cur_maxProb_index = np.argmax(
predictions[-1]) # next word's id
text_new = text + ' ' + id_word_dict[
cur_maxProb_index] # text append next word
texts_new[text_new] = texts[text] * predictions[-1][
cur_maxProb_index]
predictions[-1][cur_maxProb_index] = -1
texts.clear()
if len(texts_new) <= beam_size:
texts = texts_new
                else:  # prune to the beam_size most probable candidates
texts = dict(
sorted(
texts_new.items(), key=lambda d: d[1], reverse=True)
[:beam_size])
# save results to output file
output_f.write(line.encode('utf-8') + '\n')
for (sentence, prob) in texts.items():
output_f.write('\t' + sentence.encode('utf-8', 'replace') + '\t'
+ str(prob) + '\n')
output_f.write('\n')
output_f.close()
print('already saved results to ' + output_file)
def generate_using_ngram(word_id_dict, num_words, beam_size):
"""
    Demo: generate text with the N-Gram language model.
    :param word_id_dict: vocabulary mapping each word (str) to its id (int).
    :type word_id_dict: dict
    :param num_words: the maximum number of words to generate.
    :type num_words: int
    :param beam_size: beam width.
    :type beam_size: int
    :return: None; prediction results are saved to output_file.
"""
# prepare and cache model
config = Config_ngram()
_, output_layer = network_conf.ngram_lm(
vocab_size=len(word_id_dict),
emb_dim=config.emb_dim,
hidden_size=config.hidden_size,
num_layer=config.num_layer) # network config
model_file_name = config.model_file_name_prefix + str(config.num_passs -
1) + '.tar.gz'
parameters = paddle.parameters.Parameters.from_tar(
gzip.open(model_file_name)) # load parameters
inferer = paddle.inference.Inference(
output_layer=output_layer, parameters=parameters)
# tools, different from generate_using_rnn's tools
id_word_dict = dict(
[(v, k) for k, v in word_id_dict.items()]) # {id : word}
    def str2ids(text):
        return [[
            word_id_dict.get(w, word_id_dict['<UNK>']) for w in text.split()
        ]]
def ids2str(ids):
return [[id_word_dict.get(id, ' ') for id in ids]]
# generate text
with open(input_file) as file:
output_f = open(output_file, 'w')
for line in file:
line = line.decode('utf-8').strip()
words = line.split()
if len(words) < config.N:
output_f.write(line.encode('utf-8') + "\n\tnone\n")
continue
# generate
texts = {} # type: {text : probability}
texts[line] = 1
for _ in range(num_words):
texts_new = {}
for (text, prob) in texts.items():
                    if '<EOS>' in text:  # stop expanding a candidate once <EOS> appears
texts_new[text] = prob
continue
# next word's probability distribution
predictions = inferer.infer(
input=str2ids(' '.join(text.split()[-config.N:])))
predictions[-1][word_id_dict['<UNK>']] = -1 # filter <UNK>
# find next beam_size words
for _ in range(beam_size):
cur_maxProb_index = np.argmax(
predictions[-1]) # next word's id
text_new = text + ' ' + id_word_dict[
cur_maxProb_index] # text append nextWord
texts_new[text_new] = texts[text] * predictions[-1][
cur_maxProb_index]
predictions[-1][cur_maxProb_index] = -1
texts.clear()
if len(texts_new) <= beam_size:
texts = texts_new
                else:  # prune to the beam_size most probable candidates
texts = dict(
sorted(
texts_new.items(), key=lambda d: d[1], reverse=True)
[:beam_size])
# save results to output file
output_f.write(line.encode('utf-8') + '\n')
for (sentence, prob) in texts.items():
output_f.write('\t' + sentence.encode('utf-8', 'replace') + '\t'
+ str(prob) + '\n')
output_f.write('\n')
output_f.close()
print('already saved results to ' + output_file)
def main():
# init paddle
paddle.init(use_gpu=use_gpu, trainer_count=trainer_count)
# prepare and cache vocab
if os.path.isfile(vocab_file):
word_id_dict = load_vocab(vocab_file) # load word dictionary
else:
if build_vocab_method == 'fixed_size':
word_id_dict = build_vocab_with_fixed_size(
train_file, vocab_max_size) # build vocab
else:
word_id_dict = build_vocab_using_threshhold(
train_file, unk_threshold) # build vocab
save_vocab(word_id_dict, vocab_file) # save vocab
# generate
if use_which_model == 'rnn':
generate_using_rnn(
word_id_dict=word_id_dict, num_words=num_words, beam_size=beam_size)
elif use_which_model == 'ngram':
generate_using_ngram(
word_id_dict=word_id_dict, num_words=num_words, beam_size=beam_size)
else:
raise Exception('use_which_model must be rnn or ngram!')
if __name__ == "__main__":
main()
# coding=utf-8
import paddle.v2 as paddle
def rnn_lm(vocab_size, emb_dim, rnn_type, hidden_size, num_layer):
"""
RNN language model definition.
:param vocab_size: size of vocab.
:param emb_dim: embedding vector's dimension.
:param rnn_type: the type of RNN cell.
    :param hidden_size: number of hidden units per RNN layer.
    :param num_layer: number of stacked RNN layers.
:return: cost and output layer of model.
"""
assert emb_dim > 0 and hidden_size > 0 and vocab_size > 0 and num_layer > 0
# input layers
input = paddle.layer.data(
name="input", type=paddle.data_type.integer_value_sequence(vocab_size))
target = paddle.layer.data(
name="target", type=paddle.data_type.integer_value_sequence(vocab_size))
# embedding layer
input_emb = paddle.layer.embedding(input=input, size=emb_dim)
# rnn layer
if rnn_type == 'lstm':
rnn_cell = paddle.networks.simple_lstm(
input=input_emb, size=hidden_size)
for _ in range(num_layer - 1):
rnn_cell = paddle.networks.simple_lstm(
input=rnn_cell, size=hidden_size)
elif rnn_type == 'gru':
rnn_cell = paddle.networks.simple_gru(input=input_emb, size=hidden_size)
for _ in range(num_layer - 1):
rnn_cell = paddle.networks.simple_gru(
input=rnn_cell, size=hidden_size)
else:
raise Exception('rnn_type error!')
    # fully connected output layer
output = paddle.layer.fc(
input=[rnn_cell], size=vocab_size, act=paddle.activation.Softmax())
# loss
cost = paddle.layer.classification_cost(input=output, label=target)
return cost, output
def ngram_lm(vocab_size, emb_dim, hidden_size, num_layer):
"""
N-Gram language model definition.
:param vocab_size: size of vocab.
:param emb_dim: embedding vector's dimension.
    :param hidden_size: number of hidden units per fully connected layer.
    :param num_layer: number of hidden layers.
:return: cost and output layer of model.
"""
assert emb_dim > 0 and hidden_size > 0 and vocab_size > 0 and num_layer > 0
def wordemb(inlayer):
wordemb = paddle.layer.table_projection(
input=inlayer,
size=emb_dim,
param_attr=paddle.attr.Param(
name="_proj", initial_std=0.001, learning_rate=1, l2_rate=0))
return wordemb
# input layers
first_word = paddle.layer.data(
name="first_word", type=paddle.data_type.integer_value(vocab_size))
second_word = paddle.layer.data(
name="second_word", type=paddle.data_type.integer_value(vocab_size))
third_word = paddle.layer.data(
name="third_word", type=paddle.data_type.integer_value(vocab_size))
fourth_word = paddle.layer.data(
name="fourth_word", type=paddle.data_type.integer_value(vocab_size))
next_word = paddle.layer.data(
name="next_word", type=paddle.data_type.integer_value(vocab_size))
# embedding layer
first_emb = wordemb(first_word)
second_emb = wordemb(second_word)
third_emb = wordemb(third_word)
fourth_emb = wordemb(fourth_word)
context_emb = paddle.layer.concat(
input=[first_emb, second_emb, third_emb, fourth_emb])
# hidden layer
hidden = paddle.layer.fc(
input=context_emb, size=hidden_size, act=paddle.activation.Relu())
for _ in range(num_layer - 1):
hidden = paddle.layer.fc(
input=hidden, size=hidden_size, act=paddle.activation.Relu())
    # fully connected output layer
predict_word = paddle.layer.fc(
input=[hidden], size=vocab_size, act=paddle.activation.Softmax())
# loss
cost = paddle.layer.classification_cost(input=predict_word, label=next_word)
return cost, predict_word
# coding=utf-8
import collections
import os
def rnn_reader(file_name, min_sentence_length, max_sentence_length,
word_id_dict):
"""
    create a data reader for the RNN model; each line of the file is one sample.
    :param file_name: data file name.
    :param min_sentence_length: minimum sentence length.
    :param max_sentence_length: maximum sentence length.
    :param word_id_dict: vocabulary mapping each word (str) to its id (int).
:return: data reader.
"""
def reader():
UNK = word_id_dict['<UNK>']
with open(file_name) as file:
for line in file:
words = line.decode('utf-8', 'ignore').strip().split()
if len(words) < min_sentence_length or len(
words) > max_sentence_length:
continue
ids = [word_id_dict.get(w, UNK) for w in words]
ids.append(word_id_dict['<EOS>'])
target = ids[1:]
target.append(word_id_dict['<EOS>'])
yield ids[:], target[:]
return reader
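# A minimal illustration (comments only, not executed) of what rnn_reader yields,
# assuming a hypothetical vocab {'<UNK>': 0, '<EOS>': 1, '我': 2, '爱': 3, '中国': 4}
# and the input line "我 爱 中国":
#   input ids : [2, 3, 4, 1]   # sentence ids with <EOS> appended
#   target ids: [3, 4, 1, 1]   # ids shifted left by one position, <EOS> appended
# so at every time step the model is trained to predict the following word.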
def ngram_reader(file_name, N, word_id_dict):
"""
    create a data reader for the N-Gram model.
    :param file_name: data file name.
    :param N: the N in N-Gram.
    :param word_id_dict: vocabulary mapping each word (str) to its id (int).
:return: data reader.
"""
assert N >= 2
def reader():
ids = []
UNK_ID = word_id_dict['<UNK>']
cache_size = 10000000
with open(file_name) as file:
for line in file:
words = line.decode('utf-8', 'ignore').strip().split()
ids += [word_id_dict.get(w, UNK_ID) for w in words]
ids_len = len(ids)
if ids_len > cache_size: # output
for i in range(ids_len - N - 1):
yield tuple(ids[i:i + N])
ids = []
ids_len = len(ids)
for i in range(ids_len - N - 1):
yield tuple(ids[i:i + N])
return reader
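# A minimal illustration (comments only, not executed) of what ngram_reader yields
# for N = 5: consecutive 5-grams over the ids of the concatenated corpus,
#   (id_1, id_2, id_3, id_4, id_5)
#   (id_2, id_3, id_4, id_5, id_6)
#   ...
# The first four ids of each tuple feed the first_word .. fourth_word layers of
# ngram_lm and the last id is the next_word label.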
# coding=utf-8
import sys
import paddle.v2 as paddle
import reader
from utils import *
import network_conf
import gzip
from config import *
def train(model_cost, train_reader, test_reader, model_file_name_prefix,
num_passes):
"""
train model.
:param model_cost: cost layer of the model to train.
:param train_reader: train data reader.
:param test_reader: test data reader.
    :param model_file_name_prefix: prefix of the saved model file names.
    :param num_passes: number of training passes (epochs).
    :return: None
"""
# init paddle
paddle.init(use_gpu=use_gpu, trainer_count=trainer_count)
# create parameters
parameters = paddle.parameters.create(model_cost)
# create optimizer
adam_optimizer = paddle.optimizer.Adam(
learning_rate=1e-3,
regularization=paddle.optimizer.L2Regularization(rate=1e-3),
model_average=paddle.optimizer.ModelAverage(
average_window=0.5, max_average_window=10000))
# create trainer
trainer = paddle.trainer.SGD(
cost=model_cost, parameters=parameters, update_equation=adam_optimizer)
# define event_handler callback
def event_handler(event):
if isinstance(event, paddle.event.EndIteration):
if event.batch_id % 100 == 0:
print("\nPass %d, Batch %d, Cost %f, %s" % (
event.pass_id, event.batch_id, event.cost, event.metrics))
else:
sys.stdout.write('.')
sys.stdout.flush()
# save model each pass
if isinstance(event, paddle.event.EndPass):
result = trainer.test(reader=test_reader)
print("\nTest with Pass %d, %s" % (event.pass_id, result.metrics))
with gzip.open(
model_file_name_prefix + str(event.pass_id) + '.tar.gz',
'w') as f:
parameters.to_tar(f)
# start to train
print('start training...')
trainer.train(
reader=train_reader, event_handler=event_handler, num_passes=num_passes)
print("Training finished.")
def main():
# prepare vocab
print('prepare vocab...')
if build_vocab_method == 'fixed_size':
word_id_dict = build_vocab_with_fixed_size(
train_file, vocab_max_size) # build vocab
else:
word_id_dict = build_vocab_using_threshhold(
train_file, unk_threshold) # build vocab
save_vocab(word_id_dict, vocab_file) # save vocab
# init model and data reader
if use_which_model == 'rnn':
# init RNN model
print('prepare rnn model...')
config = Config_rnn()
cost, _ = network_conf.rnn_lm(
len(word_id_dict), config.emb_dim, config.rnn_type,
config.hidden_size, config.num_layer)
# init RNN data reader
train_reader = paddle.batch(
paddle.reader.shuffle(
reader.rnn_reader(train_file, min_sentence_length,
max_sentence_length, word_id_dict),
buf_size=65536),
batch_size=config.batch_size)
test_reader = paddle.batch(
paddle.reader.shuffle(
reader.rnn_reader(test_file, min_sentence_length,
max_sentence_length, word_id_dict),
buf_size=65536),
batch_size=config.batch_size)
elif use_which_model == 'ngram':
# init N-Gram model
print('prepare ngram model...')
config = Config_ngram()
assert config.N == 5
cost, _ = network_conf.ngram_lm(
vocab_size=len(word_id_dict),
emb_dim=config.emb_dim,
hidden_size=config.hidden_size,
num_layer=config.num_layer)
# init N-Gram data reader
train_reader = paddle.batch(
paddle.reader.shuffle(
reader.ngram_reader(train_file, config.N, word_id_dict),
buf_size=65536),
batch_size=config.batch_size)
test_reader = paddle.batch(
paddle.reader.shuffle(
reader.ngram_reader(test_file, config.N, word_id_dict),
buf_size=65536),
batch_size=config.batch_size)
else:
raise Exception('use_which_model must be rnn or ngram!')
# train model
train(
model_cost=cost,
train_reader=train_reader,
test_reader=test_reader,
model_file_name_prefix=config.model_file_name_prefix,
num_passes=config.num_passs)
if __name__ == "__main__":
main()
# coding=utf-8
import os
import collections
def save_vocab(word_id_dict, vocab_file_name):
"""
    save the vocab to a file.
    :param word_id_dict: vocabulary mapping each word (str) to its id (int).
:param vocab_file_name: vocab file name.
"""
f = open(vocab_file_name, 'w')
for (k, v) in word_id_dict.items():
f.write(k.encode('utf-8') + '\t' + str(v) + '\n')
print('save vocab to ' + vocab_file_name)
f.close()
def load_vocab(vocab_file_name):
"""
load vocab from file.
:param vocab_file_name: vocab file name.
    :return: vocabulary mapping each word (str) to its id (int).
"""
assert os.path.isfile(vocab_file_name)
    word_id_dict = {}
    with open(vocab_file_name) as file:
        for line in file:
            if len(line) < 2:
                continue
            kv = line.decode('utf-8').strip().split('\t')
            word_id_dict[kv[0]] = int(kv[1])
    return word_id_dict
def build_vocab_using_threshhold(file_name, unk_threshold):
"""
    build the vocab, keeping only words whose frequency is at least unk_threshold.
    :param file_name: data file name.
    :param unk_threshold: frequency threshold; rarer words are mapped to <UNK>.
    :type unk_threshold: int
    :return: vocabulary mapping each word (str) to its id (int).
"""
counter = {}
with open(file_name) as file:
for line in file:
words = line.decode('utf-8', 'ignore').strip().split()
for word in words:
if word in counter:
counter[word] += 1
else:
counter[word] = 1
counter_new = {}
for (word, frequency) in counter.items():
if frequency >= unk_threshold:
counter_new[word] = frequency
counter.clear()
counter_new = sorted(counter_new.items(), key=lambda d: -d[1])
words = [word_frequency[0] for word_frequency in counter_new]
word_id_dict = dict(zip(words, range(2, len(words) + 2)))
word_id_dict['<UNK>'] = 0
word_id_dict['<EOS>'] = 1
return word_id_dict
def build_vocab_with_fixed_size(file_name, vocab_max_size):
"""
    build the vocab with a fixed maximum size.
    :param file_name: data file name.
    :param vocab_max_size: maximum size of the vocab.
    :return: vocabulary mapping each word (str) to its id (int).
"""
words = []
for line in open(file_name):
words += line.decode('utf-8', 'ignore').strip().split()
counter = collections.Counter(words)
counter = sorted(counter.items(), key=lambda x: -x[1])
if len(counter) > vocab_max_size:
counter = counter[:vocab_max_size]
words, counts = zip(*counter)
word_id_dict = dict(zip(words, range(2, len(words) + 2)))
word_id_dict['<UNK>'] = 0
word_id_dict['<EOS>'] = 1
return word_id_dict
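# A minimal illustration (comments only, not executed) of the id layout produced
# by both vocab builders above: ids 0 and 1 are reserved for the special tokens,
# and ordinary words are numbered from 2 in order of decreasing frequency. With a
# hypothetical corpus whose most frequent word is '我':
#   {'<UNK>': 0, '<EOS>': 1, '我': 2, '中国': 3, ...}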
import os, sys
import gzip
import paddle.v2 as paddle
import numpy as np
import functools
def lambda_rank(input_dim):
"""
    LambdaRank is a listwise learning-to-rank model; both the input data and the label must be sequences.
    https://papers.nips.cc/paper/2971-learning-to-rank-with-nonsmooth-cost-functions.pdf
    parameters:
        input_dim: dimension of one document's dense feature vector.
    format of the dense_vector_sequence:
        [[f, ...], [f, ...], ...], where f is a float or an int.
"""
label = paddle.layer.data("label",
paddle.data_type.dense_vector_sequence(1))
data = paddle.layer.data("data",
paddle.data_type.dense_vector_sequence(input_dim))
# hidden layer
hd1 = paddle.layer.fc(
input=data,
size=128,
act=paddle.activation.Tanh(),
param_attr=paddle.attr.Param(initial_std=0.01))
hd2 = paddle.layer.fc(
input=hd1,
size=10,
act=paddle.activation.Tanh(),
param_attr=paddle.attr.Param(initial_std=0.01))
output = paddle.layer.fc(
input=hd2,
size=1,
act=paddle.activation.Linear(),
param_attr=paddle.attr.Param(initial_std=0.01))
# evaluator
evaluator = paddle.evaluator.auc(input=output, label=label)
# cost layer
cost = paddle.layer.lambda_cost(
input=output, score=label, NDCG_num=6, max_sort_size=-1)
return cost, output
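# A minimal sketch (comments only) of one listwise sample consumed by lambda_rank,
# assuming the mq2007 "listwise" reader used in train_lambda_rank below yields one
# (label_sequence, feature_sequence) pair per query; the values here are hypothetical:
#   label: [[2.0], [0.0], [1.0]]                                   # one relevance score per document
#   data : [[f_1, ..., f_46], [f_1, ..., f_46], [f_1, ..., f_46]]  # one 46-dim vector per document
# lambda_cost then scores the ordering of the whole document list at once.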
def train_lambda_rank(num_passes):
# listwise input sequence
fill_default_train = functools.partial(
paddle.dataset.mq2007.train, format="listwise")
fill_default_test = functools.partial(
paddle.dataset.mq2007.test, format="listwise")
train_reader = paddle.batch(
paddle.reader.shuffle(fill_default_train, buf_size=100), batch_size=32)
test_reader = paddle.batch(fill_default_test, batch_size=32)
# mq2007 input_dim = 46, dense format
input_dim = 46
cost, output = lambda_rank(input_dim)
parameters = paddle.parameters.create(cost)
trainer = paddle.trainer.SGD(
cost=cost,
parameters=parameters,
update_equation=paddle.optimizer.Adam(learning_rate=1e-4))
# Define end batch and end pass event handler
def event_handler(event):
if isinstance(event, paddle.event.EndIteration):
print "Pass %d Batch %d Cost %.9f" % (event.pass_id, event.batch_id,
event.cost)
if isinstance(event, paddle.event.EndPass):
result = trainer.test(reader=test_reader, feeding=feeding)
print "\nTest with Pass %d, %s" % (event.pass_id, result.metrics)
with gzip.open("lambda_rank_params_%d.tar.gz" % (event.pass_id),
"w") as f:
parameters.to_tar(f)
feeding = {"label": 0, "data": 1}
trainer.train(
reader=train_reader,
event_handler=event_handler,
feeding=feeding,
num_passes=num_passes)
def lambda_rank_infer(pass_id):
"""
    LambdaRank model inference interface.
    parameters:
        pass_id: the parameters saved after pass `pass_id - 1` are loaded for inference.
"""
print "Begin to Infer..."
input_dim = 46
    _, output = lambda_rank(input_dim)
parameters = paddle.parameters.Parameters.from_tar(
gzip.open("lambda_rank_params_%d.tar.gz" % (pass_id - 1)))
infer_query_id = None
infer_data = []
infer_data_num = 1
fill_default_test = functools.partial(
paddle.dataset.mq2007.test, format="listwise")
for label, querylist in fill_default_test():
infer_data.append(querylist)
if len(infer_data) == infer_data_num:
break
    # predict a score for each document in infer_data; the documents can then be
    # re-sorted by score in descending order to build the final ranking
    predictions = paddle.infer(
        output_layer=output, parameters=parameters, input=infer_data)
    for i, score in enumerate(predictions):
        print i, score
if __name__ == '__main__':
paddle.init(use_gpu=False, trainer_count=1)
train_lambda_rank(2)
lambda_rank_infer(pass_id=1)
import numpy as np
import unittest
def ndcg(score_list):
"""
    measure the NDCG score of an ordered score list
    https://en.wikipedia.org/wiki/Discounted_cumulative_gain
    parameter:
        score_list: list or np.array of relevance scores, shape=(sample_num,)
    e.g. predicted rank score list:
>>> scores = [3, 2, 3, 0, 1, 2]
>>> ndcg_score = ndcg(scores)
"""
def dcg(score_list):
n = len(score_list)
cost = .0
for i in range(n):
cost += float(score_list[i]) / np.log((i + 1) + 1)
return cost
dcg_cost = dcg(score_list)
score_ranking = sorted(score_list, reverse=True)
ideal_cost = dcg(score_ranking)
return dcg_cost / ideal_cost
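# Worked example (comments only) for the score list used in the test below,
# [3, 2, 3, 0, 1, 2]:
#   DCG  = 3/ln(2) + 2/ln(3) + 3/ln(4) + 0/ln(5) + 1/ln(6) + 2/ln(7) ~= 9.90
#   IDCG = DCG of the ideally ordered list [3, 3, 2, 2, 1, 0]        ~= 10.30
#   NDCG = DCG / IDCG                                                ~= 0.961
# The natural logarithm is used in both terms, so the choice of log base cancels
# out in the ratio.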
class NdcgTest(unittest.TestCase):
    def test_ndcg(self):
        a = [3, 2, 3, 0, 1, 2]
        value = ndcg(a)
        self.assertAlmostEqual(0.961, value, places=3)
if __name__ == '__main__':
unittest.main()
import os
import sys
import gzip
import functools
import paddle.v2 as paddle
import numpy as np
from metrics import ndcg
# ranknet is the classic pairwise learning to rank algorithm
# http://icml.cc/2015/wp-content/uploads/2015/06/icml_ranking.pdf
def half_ranknet(name_prefix, input_dim):
"""
    Parameters with the same name are shared by the PaddlePaddle framework, so the
    left and right half networks of RankNet share their weights. See the design doc
    for details on parameter sharing:
    https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/api.md
"""
# data layer
data = paddle.layer.data(name_prefix + "/data",
paddle.data_type.dense_vector(input_dim))
# hidden layer
hd1 = paddle.layer.fc(
input=data,
size=10,
act=paddle.activation.Tanh(),
param_attr=paddle.attr.Param(initial_std=0.01, name="hidden_w1"))
    # fully connected output layer
output = paddle.layer.fc(
input=hd1,
size=1,
act=paddle.activation.Linear(),
param_attr=paddle.attr.Param(initial_std=0.01, name="output"))
return output
def ranknet(input_dim):
# label layer
label = paddle.layer.data("label", paddle.data_type.dense_vector(1))
# reuse the parameter in half_ranknet
output_left = half_ranknet("left", input_dim)
output_right = half_ranknet("right", input_dim)
evaluator = paddle.evaluator.auc(input=output_left, label=label)
# rankcost layer
cost = paddle.layer.rank_cost(
name="cost", left=output_left, right=output_right, label=label)
return cost
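# A minimal sketch (comments only) of one pairwise sample consumed by ranknet,
# assuming the default mq2007 reader used in train_ranknet below yields
# (label, left_features, right_features) tuples, matching the feeding order
# {"label": 0, "left/data": 1, "right/data": 2}; the values here are hypothetical:
#   label     : [1.0]               # tells rank_cost which document of the pair ranks higher
#   left/data : [f_1, ..., f_46]    # dense features of the left document
#   right/data: [f_1, ..., f_46]    # dense features of the right document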
def train_ranknet(num_passes):
train_reader = paddle.batch(
paddle.reader.shuffle(paddle.dataset.mq2007.train, buf_size=100),
batch_size=100)
test_reader = paddle.batch(paddle.dataset.mq2007.test, batch_size=100)
# mq2007 feature_dim = 46, dense format
# fc hidden_dim = 128
feature_dim = 46
cost = ranknet(feature_dim)
parameters = paddle.parameters.create(cost)
trainer = paddle.trainer.SGD(
cost=cost,
parameters=parameters,
update_equation=paddle.optimizer.Adam(learning_rate=2e-4))
# Define the input data order
feeding = {"label": 0, "left/data": 1, "right/data": 2}
# Define end batch and end pass event handler
def event_handler(event):
if isinstance(event, paddle.event.EndIteration):
if event.batch_id % 100 == 0:
print "Pass %d Batch %d Cost %.9f" % (
event.pass_id, event.batch_id, event.cost)
else:
sys.stdout.write(".")
sys.stdout.flush()
if isinstance(event, paddle.event.EndPass):
result = trainer.test(reader=test_reader, feeding=feeding)
print "\nTest with Pass %d, %s" % (event.pass_id, result.metrics)
with gzip.open("ranknet_params_%d.tar.gz" % (event.pass_id),
"w") as f:
parameters.to_tar(f)
trainer.train(
reader=train_reader,
event_handler=event_handler,
feeding=feeding,
num_passes=num_passes)
def ranknet_infer(pass_id):
"""
    load the trained model and predict with plain-text input.
"""
print "Begin to Infer..."
feature_dim = 46
    # we only need half_ranknet to predict a rank score, which can be used to sort the documents
output = half_ranknet("left", feature_dim)
parameters = paddle.parameters.Parameters.from_tar(
gzip.open("ranknet_params_%d.tar.gz" % (pass_id - 1)))
# load data of same query and relevance documents, need ranknet to rank these candidates
infer_query_id = []
infer_data = []
infer_doc_index = []
# convert to mq2007 built-in data format
# <query_id> <relevance_score> <feature_vector>
plain_txt_test = functools.partial(
paddle.dataset.mq2007.test, format="plain_txt")
for query_id, relevance_score, feature_vector in plain_txt_test():
infer_query_id.append(query_id)
infer_data.append(feature_vector)
    # predict a score for each document in infer_data; the documents can then be
    # re-sorted by score in descending order to build the final ranking
scores = paddle.infer(
output_layer=output, parameters=parameters, input=infer_data)
for query_id, score in zip(infer_query_id, scores):
print "query_id : ", query_id, " ranknet rank document order : ", score
if __name__ == '__main__':
paddle.init(use_gpu=False, trainer_count=4)
pass_num = 2
train_ranknet(pass_num)
ranknet_infer(pass_id=pass_num - 1)
wget http://cs224d.stanford.edu/assignment2/assignment2.zip
unzip assignment2.zip
cp assignment2_release/data/ner/wordVectors.txt data/
cp assignment2_release/data/ner/vocab.txt data/
rm -rf assignment2.zip assignment2_release
B-LOC
I-LOC
B-MISC
I-MISC
B-ORG
I-ORG
B-PER
I-PER
O
data
*.tar.gz
*.log
*.pyc