Commit 793d61ad authored by Yibing Liu

rm tune.py in root dir of DS2

# This file is used by clang-format to autoformat paddle source code
#
# clang-format is part of the llvm toolchain.
# llvm and clang need to be installed to format source code style.
#
# The basic usage is,
# clang-format -i -style=file PATH/TO/SOURCE/CODE
#
# The -style=file option implicitly uses the ".clang-format" file
# located in one of the parent directories.
# The -i option means in-place change.
#
# The documentation for clang-format is available at
# http://clang.llvm.org/docs/ClangFormat.html
# http://clang.llvm.org/docs/ClangFormatStyleOptions.html
---
Language: Cpp
BasedOnStyle: Google
IndentWidth: 2
TabWidth: 2
ContinuationIndentWidth: 4
MaxEmptyLinesToKeep: 2
AccessModifierOffset: -2 # private/protected/public have no indent in class
Standard: Cpp11
AllowAllParametersOfDeclarationOnNextLine: true
BinPackParameters: false
BinPackArguments: false
...
#!/usr/bin/env bash
set -e
readonly VERSION="3.9"
version=$(clang-format -version)
if ! [[ $version == *"$VERSION"* ]]; then
    echo "clang-format version check failed."
    echo "a version containing '$VERSION' is needed, but got '$version'"
    echo "you can install the right version and make a soft link to it in your '\$PATH'"
    exit 1
fi
clang-format "$@"
.DS_Store
*.pyc
@@ -25,6 +25,14 @@
    files: \.md$
  - id: remove-tabs
    files: \.md$
- repo: local
  hooks:
  - id: clang-format
    name: clang-format
    description: Format files with ClangFormat
    entry: bash .clang_format.hook -i
    language: system
    files: \.(c|cc|cxx|cpp|cu|h|hpp|hxx|cuh|proto)$
- repo: local
  hooks:
  - id: convert-markdown-into-html
......
@@ -17,7 +17,7 @@ addons:
      - python-pip
      - python2.7-dev
before_install:
  - sudo pip install -U virtualenv pre-commit pip
  - docker pull paddlepaddle/paddle:latest
script:
  - .travis/precommit.sh
......
# Introduction to models
[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](https://github.com/PaddlePaddle/models)
[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](https://github.com/PaddlePaddle/models)
[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)

PaddlePaddle provides a rich set of computational units that let users build a wide variety of deep learning models in a modular way. In this repo, we provide neural network models for common machine learning tasks, for anyone to learn from and use.

## 1. Word Embedding
A word embedding represents a word with a real-valued vector, each dimension of which captures some latent syntactic or semantic feature of the text; it is one of the most successful concepts in applying deep learning to natural language processing. More generally, word embeddings can also be applied to ordinary discrete features. Learning word embeddings is usually an unsupervised process, so it can take full advantage of massive unlabeled data to capture relationships between features, and it helps with sparse features, missing labels, and data noise. However, in common word-embedding methods, the last layer of the model faces an extremely large-scale classification problem, which is the bottleneck of computing performance.

In the word embedding examples, we show how to use Hierarchical Sigmoid and Noise Contrastive Estimation (NCE) to accelerate word-embedding training.
- 1.1 [Hsigmoid-accelerated word embedding training](https://github.com/PaddlePaddle/models/tree/develop/hsigmoid)
- 1.2 [NCE-accelerated word embedding training](https://github.com/PaddlePaddle/models/tree/develop/nce_cost)

## 2. Generate text with a recurrent neural network language model
The language model is an important foundational model in natural language processing. Besides yielding word embeddings (a by-product of training), it can also help us generate text: given a few words, a language model predicts the most likely next word. In the text-generation example, we focus on the recurrent neural network language model; following the usage notes in the documentation, you can quickly adapt it to your own training corpus and build fun models that automatically write poetry or prose.
- 2.1 [Generate text with a recurrent neural network language model](https://github.com/PaddlePaddle/models/tree/develop/generate_sequence_by_rnn_lm)

## 3. Click-Through Rate prediction
A click-through rate (CTR) model predicts the probability that a user clicks on an ad, and is one of the core algorithms in advertising technology. Logistic regression learns well from large-scale sparse features and dominated the early days of CTR prediction; in recent years, DNN models have gradually taken over the task thanks to their strong learning ability.

In the CTR example, we present the Wide & Deep model proposed by Google. It combines the advantages of a DNN, which is good at learning abstract features, and logistic regression, which suits large-scale sparse features. It can serve as a relatively mature model framework and has seen some industrial use.
- 3.1 [Wide & Deep CTR prediction model](https://github.com/PaddlePaddle/models/tree/develop/ctr)

## 4. Text classification
Text classification is one of the most fundamental tasks in natural language processing. Deep learning removes the need for complex feature engineering: raw text is used directly as input, and classification accuracy is optimized in a data-driven way.

In the text classification example, we take sentiment classification as a case study and provide a DNN-based non-sequence model and a CNN-based sequence model (for an LSTM-based model, see the [Sentiment Analysis](https://github.com/PaddlePaddle/book/blob/develop/06.understand_sentiment/README.cn.md) chapter of PaddleBook).
- 4.1 [Sentiment classification based on DNN / CNN](https://github.com/PaddlePaddle/models/tree/develop/text_classification)

## 5. Learning to rank
Learning to rank (LTR) is one of the core problems in information retrieval and search engine research: a scoring function learned from data assigns a score to each ranking candidate, and the order is determined by the scores. Deep neural networks can model the scoring function, forming various deep-learning-based LTR models. LTR algorithms are usually categorized into three groups by their input representation and loss function: pointwise, pairwise, and listwise approaches.

In the LTR example, we present a pairwise ranking model based on the RankLoss loss function and a listwise ranking model based on the LambdaRank loss function (for the pointwise approach, see the [Recommender System](https://github.com/PaddlePaddle/book/blob/develop/05.recommender_system/README.cn.md) chapter of PaddleBook).
- 5.1 [Learning to rank with pairwise and listwise approaches](https://github.com/PaddlePaddle/models/tree/develop/ltr)

## 6. Deep structured semantic model
The deep structured semantic model (DSSM) uses a DNN to learn low-dimensional vector representations of text in a continuous semantic space, and ultimately models the semantic similarity between two sentences.

In this example, we demonstrate how to use PaddlePaddle to implement a generic DSSM that models the semantic similarity between two strings. The model supports different network structures such as CNN (convolutional network), FC (fully connected network) and RNN (recurrent neural network), as well as loss functions for classification, regression and ranking. It uses a fairly generic data format, so users can apply it to real scenarios simply by swapping in their own data.
- 6.1 [Deep structured semantic model](https://github.com/PaddlePaddle/models/tree/develop/dssm)

## 7. Sequence tagging
Given an input sequence, a sequence tagging model assigns a category label to each element of the sequence; this is one of the most fundamental tasks in natural language processing. As deep learning has advanced, using a recurrent neural network to learn feature representations of the input sequence and a Conditional Random Field (CRF) to perform tagging on top of those features has gradually become the standard solution for sequence tagging.

In the sequence tagging example, we take Named Entity Recognition (NER) as a case study and show how to train an end-to-end sequence tagging model.
- 7.1 [Named Entity Recognition](https://github.com/PaddlePaddle/models/tree/develop/sequence_tagging_for_ner)

## 8. Sequence-to-sequence learning
Sequence-to-sequence learning maps between two (or even more) variable-length sequences. It has a wide range of applications, including machine translation, dialogue systems and question answering, ad copy generation, auto-encoding (e.g., encoding financial profiles), and judging the semantic relevance of multiple text strings.

In the sequence-to-sequence example, we take machine translation as a case study and provide several improved models: a sequence-to-sequence mapping model without attention, which underlies all sequence-to-sequence learning models; scheduled sampling, which mitigates error accumulation of RNN models during generation; and neural machine translation with an external memory mechanism, which strengthens the network's memory to handle complex sequence-to-sequence tasks.
- 8.1 [Encoder-decoder model without attention](https://github.com/PaddlePaddle/models/tree/develop/nmt_without_attention)

## 9. Image classification
Compared with text, images convey information more vividly, intelligibly and artistically, and are an important medium for exchanging information. In the image classification example, we show how to train AlexNet, VGG, GoogLeNet and ResNet models in PaddlePaddle. A model conversion tool is also provided that converts Caffe-trained model files into PaddlePaddle model files.
- 9.1 [Convert Caffe model files to PaddlePaddle model files](https://github.com/PaddlePaddle/models/tree/develop/image_classification/caffe2paddle)
- 9.2 [AlexNet](https://github.com/PaddlePaddle/models/tree/develop/image_classification)
- 9.3 [VGG](https://github.com/PaddlePaddle/models/tree/develop/image_classification)
- 9.4 [Residual Network](https://github.com/PaddlePaddle/models/tree/develop/image_classification)

## Copyright and License
PaddlePaddle is provided under the [Apache-2.0 license](LICENSE).
# Click-Through Rate Prediction

## Introduction

CTR (Click-Through Rate) \[[1](https://en.wikipedia.org/wiki/Click-through_rate)\] is the predicted probability that a user clicks on an advertisement. This model is widely used in the advertising industry. Accurate CTR estimates are important for maximizing online advertising revenue.
When there are multiple ad slots, CTR estimates are generally used as a baseline for ranking. For example, in a search engine's ad system, when a user enters a query, the system typically performs the following steps to show relevant ads:

1. Retrieve the set of ads associated with the user's query.
2. Filter by business rules and relevance.
3. Rank by auction mechanism and CTR.
4. Show the ads.

As we can see, CTR plays a crucial role in the final ranking.
### Brief history

Historically, the CTR prediction model has evolved as follows:

- Logistic Regression (LR) / Gradient Boosting Decision Trees (GBDT) + feature engineering
- LR + Deep Neural Network (DNN)
- DNN + feature engineering

LR dominated the early stages of development, but in recent years DNN-based models have gradually taken over, thanks to their strong learning ability and maturing performance optimizations.
### LR vs DNN

The following figure shows the structures of the LR model and a (3x2) DNN model:

<p align="center">
<img src="images/lr_vs_dnn.jpg" width="620" hspace='10'/> <br/>
Figure 1. LR and DNN model structure comparison
</p>

As we can see, LR and DNN share some common structure (e.g., weighted sums). However, by adding activation units and further layers, a DNN can model non-linear relations between input and output values, which enables it to achieve better learning results in CTR estimation.

In the following, we demonstrate how to use PaddlePaddle to learn to predict CTR.
## Data and Model formation

Here `click` is the learning objective. There are several ways to learn from this objective:

1. Learn clicks directly, with 0/1 labels for binary classification.
2. Learning to rank, with pairwise or listwise ranking.
3. Estimate the click rate of each ad, then rank by the click rate.

In this example, we use the first method.

We use the dataset of the Kaggle `Click-through rate prediction` task \[[2](https://www.kaggle.com/c/avazu-ctr-prediction/data)\].

Please see [data process](./dataset.md) for data pre-processing details.

The input data format for the demo model in this tutorial is as follows:
```
# <dnn input ids> \t <lr input sparse values> \t click
23 231 \t 1230:0.12 13421:0.9 \t 1
```
Detailed format description:

- `dnn input ids`: one-hot encoded ids; only the ids whose value is 1 are listed (note that this is not a variable-length input).
- `lr input sparse values`: `ID:VALUE` pairs; values are preferably scaled to the range `[-1, 1]`.
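To make the format concrete, below is a minimal parsing sketch. It is ours, not part of this repo; the function name and the example line are illustrative only.

```python
# Minimal parsing sketch for the line format above (illustrative
# only, not part of this repo).
def parse_line(line):
    dnn_part, lr_part, click = line.strip().split('\t')
    # one-hot ids for the DNN sub-model: only ids whose value is 1
    dnn_ids = [int(tok) for tok in dnn_part.split()]
    # ID:VALUE pairs for the LR sub-model
    lr_values = []
    for kv in lr_part.split():
        key, value = kv.split(':')
        lr_values.append((int(key), float(value)))
    return dnn_ids, lr_values, int(click)

print(parse_line("23 231\t1230:0.12 13421:0.9\t1"))
# ([23, 231], [(1230, 0.12), (13421, 0.9)], 1)
```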
In addition, model training requires a file describing the input dimensions of the dnn and lr submodels, in the following format:

```
dnn_input_dim: <int>
lr_input_dim: <int>
```

where `<int>` represents an integer value.

`avazu_data_processer.py` in this directory can process the downloaded demo dataset \[[2](#references)\]; its usage is as follows:
```
usage: avazu_data_processer.py [-h] --data_path DATA_PATH --output_dir
...
optional arguments:
...
                        size of the trainset (default: 100000)
```
- `data_path`: path of the data to be processed.
- `output_dir`: output path for the generated data.
- `num_lines_to_detect`: number of lines scanned ahead of time to generate feature IDs.
- `test_set_size`: number of rows in the generated test set.
- `train_size`: number of rows in the generated training set.
## Wide & Deep Learning Model

Google proposed the Wide & Deep Learning framework in 2016 to combine the advantages of a DNN, which is suitable for learning abstract features, and an LR model, which is suitable for large-scale sparse features.

### Introduction to the model

The Wide & Deep Learning Model \[[3](#references)\] is a relatively mature model framework that has seen industrial use in CTR prediction tasks. Here we demonstrate how to use it for CTR prediction.

The model structure is as follows:

<p align="center">
<img src="images/wide_deep.png" width="820" hspace='10'/> <br/>
Figure 2. Wide & Deep Model
</p>

The Wide part on the left side of the model can accommodate large-scale sparse features and has some ability to memorize specific information (such as IDs); the Deep part on the right side can learn implicit relationships between features, giving better learning and generalization for the same number of features.
### Model Input

The model takes exactly three inputs:

- `dnn_input`: the input of the Deep part.
- `lr_input`: the input of the Wide part.
- `click`: whether the ad was clicked or not, used as the label for binary classification.
```python
dnn_merged_input = layer.data(
    ...
lr_merged_input = layer.data(
    ...
click = paddle.layer.data(name='click', type=dtype.dense_vector(1))
```
### Wide part

The Wide part directly uses the LR model, but the activation function is changed to `RELU` to speed up training:
```python
def build_lr_submodel():
    ...
    return fc
```
### Deep part

The Deep part uses a standard multi-layer feed-forward DNN:
```python
def build_dnn_submodel(dnn_layer_dims):
    ...
    return _input_layer
```
### Combining the two

The top-layer outputs of the two submodels are combined by a weighted sum. The output layer uses the `sigmoid` activation function to produce a prediction in the range (0, 1), which approximates the binary label distribution of the training data and serves as the final CTR estimate:
```python
# combine DNN and LR submodels
def combine_submodels(dnn, lr):
    ...
    return fc
```
### Training

```python
dnn = build_dnn_submodel(dnn_layer_dims)
lr = build_lr_submodel()
...
trainer.train(
    ...
    event_handler=event_handler,
    num_passes=100)
```
## Run training and testing

Training the model requires the following steps:

1. Prepare the training data
    1. Download train.gz from [Kaggle CTR](https://www.kaggle.com/c/avazu-ctr-prediction/data).
    2. Unzip train.gz to get train.txt.
    3. Run `mkdir -p output; python avazu_data_processer.py --data_path train.txt --output_dir output --num_lines_to_detect 1000 --test_set_size 100` to generate the demo data.
2. Execute `python train.py --train_data_path ./output/train.txt --test_data_path ./output/test.txt --data_meta_file ./output/data.meta.txt --model_type=0` to start training.

The command-line arguments of `train.py` can be used to customize the training process; the arguments and usage are as follows:
```
usage: train.py [-h] --train_data_path TRAIN_DATA_PATH
...
optional arguments:
...
                        classification)
```
- `train_data_path`: path of the training set.
- `test_data_path`: path of the test set.
- `num_passes`: number of passes (epochs) to train the model.
- `data_meta_file`: see the description in [Data and Model formation](#data-and-model-formation).
- `model_type`: classification or regression.
## Use the trained model for prediction

The trained model can be used to predict new data. The prediction data format is as follows:
```
# <dnn input ids> \t <lr input sparse values>
23 231 \t 1230:0.12 13421:0.9
```
The only difference from the training data format is that there is no label, i.e. the `click` value in the third column of the training data.

`infer.py` can then be used to perform inference:
```
usage: infer.py [-h] --model_gz_path MODEL_GZ_PATH --data_path DATA_PATH
...
optional arguments:
...
                        classification)
```
- `model_gz_path`: path of the model compressed with `gz`.
- `data_path`: path of the data to predict.
- `prediction_output_path`: path for the prediction output.
- `data_meta_file`: see the description in [Data and Model formation](#data-and-model-formation).
- `model_type`: classification or regression.
The sample data can be predicted with the following command:

```
python infer.py --model_gz_path <model_path> --data_path output/infer.txt --prediction_output_path predictions.txt --data_meta_path data.meta.txt
```

The final predictions are written to `predictions.txt`.
## References

1. <https://en.wikipedia.org/wiki/Click-through_rate>
2. <https://www.kaggle.com/c/avazu-ctr-prediction/data>
3. Cheng H T, Koc L, Harmsen J, et al. [Wide & deep learning for recommender systems](https://arxiv.org/pdf/1606.07792.pdf)//Proceedings of the 1st Workshop on Deep Learning for Recommender Systems. ACM, 2016: 7-10.
@@ -100,6 +100,6 @@ class CTRmodel(object):
        self.output = layer.fc(
            input=merge_layer, size=1, act=paddle.activation.Sigmoid())
        if not self.is_infer:
            self.train_cost = paddle.layer.square_error_cost(
                input=self.output, label=self.click)
        return self.output
# DeepSpeech2 on PaddlePaddle

*DeepSpeech2 on PaddlePaddle* is an open-source implementation of an end-to-end Automatic Speech Recognition (ASR) engine, based on [Baidu's Deep Speech 2 paper](http://proceedings.mlr.press/v48/amodei16.pdf) and built on the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) platform. Our vision is to empower both industrial application and academic research on speech recognition, via an easy-to-use, efficient and scalable implementation, including training, inference & testing modules, distributed [PaddleCloud](https://github.com/PaddlePaddle/cloud) training, and demo deployment. Several pre-trained models for both English and Mandarin are also released.
## Table of Contents
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Getting Started](#getting-started)
- [Data Preparation](#data-preparation)
- [Training a Model](#training-a-model)
- [Data Augmentation Pipeline](#data-augmentation-pipeline)
- [Inference and Evaluation](#inference-and-evaluation)
- [Distributed Cloud Training](#distributed-cloud-training)
- [Hyper-parameters Tuning](#hyper-parameters-tuning)
- [Training for Mandarin Language](#training-for-mandarin-language)
- [Trying Live Demo with Your Own Voice](#trying-live-demo-with-your-own-voice)
- [Released Models](#released-models)
- [Experiments and Benchmarks](#experiments-and-benchmarks)
- [Questions and Help](#questions-and-help)
## Prerequisites
- Only Python 2.7 is supported
- The latest version of PaddlePaddle (please refer to the [Installation Guide](https://github.com/PaddlePaddle/Paddle#installation))
## Installation

Please make sure the above [prerequisites](#prerequisites) have been satisfied before moving on.

```bash
git clone https://github.com/PaddlePaddle/models.git
cd models/deep_speech_2
sh setup.sh
```
## Getting Started

Several shell scripts provided in `./examples` will help us to quickly give it a try, covering most major modules, including data preparation, model training, case inference and model evaluation, with a few public datasets (e.g. [LibriSpeech](http://www.openslr.org/12/), [Aishell](http://www.openslr.org/33)). Reading these examples will also help you to understand how to make things work with your own data.

Some of the scripts in `./examples` are configured with 8 GPUs. If you don't have 8 GPUs available, please modify `CUDA_VISIBLE_DEVICES` and `--trainer_count`. If you don't have any GPU available, please set `--use_gpu` to False to use CPUs instead. Besides, if an out-of-memory problem occurs, just reduce `--batch_size` to fit.

Let's take a tiny sampled subset of the [LibriSpeech dataset](http://www.openslr.org/12/) for instance.

- Go to the directory

    ```bash
    cd examples/tiny
    ```

    Notice that this is only a toy example with a tiny sampled subset of LibriSpeech. If you would like to try the complete dataset (which would take several days of training), please go to `examples/librispeech` instead.

- Prepare the data

    ```bash
    sh run_data.sh
    ```

    `run_data.sh` will download the dataset, generate manifests, collect the normalizer's statistics and build the vocabulary. Once the data preparation is done, you will find the data (only part of LibriSpeech) downloaded in `~/.cache/paddle/dataset/speech/libri` and the corresponding manifest files generated in `./data/tiny`, as well as a mean-stddev file and a vocabulary file. This step has to be run the very first time you use this dataset, and the results are reusable for all further experiments.

- Train your own ASR model

    ```bash
    sh run_train.sh
    ```

    `run_train.sh` will start a training job, with training logs printed to stdout and a model checkpoint of every pass/epoch saved to `./checkpoints/tiny`. These checkpoints can be used for resuming training, inference, evaluation and deployment.

- Case inference with an existing model

    ```bash
    sh run_infer.sh
    ```

    `run_infer.sh` will show us some speech-to-text decoding results for several (default: 10) samples with the trained model. The performance might not be good for now, as the current model is only trained with a toy subset of LibriSpeech. To see the results with a better model, you can download a well-trained model (trained for several days with the complete LibriSpeech) and do the inference:

    ```bash
    sh run_infer_golden.sh
    ```

- Evaluate an existing model

    ```bash
    sh run_test.sh
    ```

    `run_test.sh` will evaluate the model with a Word Error Rate (or Character Error Rate) measurement. Similarly, you can also download a well-trained model and test its performance:

    ```bash
    sh run_test_golden.sh
    ```

More detailed information is provided in the following sections. Wish you a happy journey with the *DeepSpeech2 on PaddlePaddle* ASR engine!
## Data Preparation

### Generate Manifest

*DeepSpeech2 on PaddlePaddle* accepts a textual **manifest** file as its data set interface. A manifest file summarizes a set of speech data, with each line containing some meta data (e.g. filepath, transcription, duration) of one audio clip, in [JSON](http://www.json.org/) format, such as:

```
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0001.flac", "duration": 3.275, "text": "stuff it into you his belly counselled him"}
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0007.flac", "duration": 4.275, "text": "a cold lucid indifference reigned in his soul"}
```

To use your custom data, you only need to generate such manifest files to summarize the dataset. Given such summarized manifests, training, inference and all other modules can locate the audio files, as well as their meta data, including the transcription labels.

For how to generate such manifest files, please refer to `data/librispeech/librispeech.py`, which downloads the data and generates manifest files for the LibriSpeech dataset.
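As a rough sketch of what such a generator does, the following illustrative snippet (ours, not the repo's code) walks a directory of `.wav` files with sibling `.txt` transcriptions, an assumed layout, and writes one JSON line per clip:

```python
import json
import os
import wave

# Illustrative manifest generator (not the repo's code). Assumed
# layout: each clip is "<name>.wav" with its transcription in a
# sibling "<name>.txt" file.
def create_manifest(audio_dir, manifest_path):
    with open(manifest_path, 'w') as out:
        for name in sorted(os.listdir(audio_dir)):
            if not name.endswith('.wav'):
                continue
            path = os.path.join(audio_dir, name)
            reader = wave.open(path, 'rb')
            duration = reader.getnframes() / float(reader.getframerate())
            reader.close()
            with open(path[:-4] + '.txt') as f:
                text = f.read().strip()
            out.write(json.dumps({"audio_filepath": path,
                                  "duration": duration,
                                  "text": text}) + '\n')
```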
### Compute Mean & Stddev for Normalizer
To perform z-score normalization (zero-mean, unit stddev) upon audio features, we have to estimate in advance the mean and standard deviation of the features, with some training samples:
```bash
python tools/compute_mean_std.py \
--num_samples 2000 \
--specgram_type linear \
--manifest_paths data/librispeech/manifest.train \
--output_path data/librispeech/mean_std.npz
``` ```
It will compute the mean and standard deviation of the power spectrum features over 2000 randomly sampled audio clips listed in `data/librispeech/manifest.train` and save the results to `data/librispeech/mean_std.npz` for further usage.
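Conceptually, the saved statistics are later applied as a plain z-score. A small sketch of that step (ours; we assume the `.npz` stores arrays under the keys `mean` and `std`; see `tools/compute_mean_std.py` for the actual keys):

```python
import numpy as np

# Sketch of z-score normalization with precomputed statistics.
# The key names "mean" and "std" are our assumption; check
# tools/compute_mean_std.py for the actual keys.
stats = np.load('data/librispeech/mean_std.npz')
mean, std = stats['mean'], stats['std']

def normalize(features, eps=1e-20):
    # zero mean, unit stddev per feature dimension
    return (features - mean) / (std + eps)
```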
### Build Vocabulary
A vocabulary of possible characters is required to convert the transcription into a list of token indices for training, and in decoding, to convert from a list of indices back to text again. Such a character-based vocabulary can be built with `tools/build_vocab.py`.
```bash
python tools/build_vocab.py \
--count_threshold 0 \
--vocab_path data/librispeech/eng_vocab.txt \
--manifest_paths data/librispeech/manifest.train
``` ```
It will write a vocabulary file `data/librispeech/eng_vocab.txt` covering all transcription text in `data/librispeech/manifest.train`, without vocabulary truncation (`--count_threshold 0`).
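The idea behind the vocabulary builder can be sketched in a few lines of plain Python (a simplified sketch of ours, not the actual `tools/build_vocab.py`):

```python
import json
from collections import Counter

# Simplified sketch of character-vocabulary building (the real logic
# is in tools/build_vocab.py): count every character appearing in the
# manifest transcriptions, drop rare ones, write one char per line.
def build_vocab(manifest_path, vocab_path, count_threshold=0):
    counter = Counter()
    with open(manifest_path) as f:
        for line in f:
            counter.update(json.loads(line)['text'])
    with open(vocab_path, 'w') as f:
        for char, count in sorted(counter.items()):
            if count > count_threshold:
                f.write(char + '\n')
```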
### More Help
For more help on arguments:
```bash
python data/librispeech/librispeech.py --help
python tools/compute_mean_std.py --help
python tools/build_vocab.py --help
``` ```
## Training a model
`train.py` is the main caller of the training module. Examples of usage are shown below.
- Start training from scratch with 8 GPUs:
```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py --trainer_count 8
```
- Start training from scratch with 16 CPUs:
```
python train.py --use_gpu False --trainer_count 16
```
- Resume training from a checkpoint:
```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python train.py \
--init_model_path CHECKPOINT_PATH_TO_RESUME_FROM
```
For more help on arguments:
```bash
python train.py --help
```
or refer to `example/librispeech/run_train.sh`.
## Data Augmentation Pipeline

Data augmentation has often been a highly effective technique for boosting deep learning performance. We augment our speech data by synthesizing new audios with small random perturbations (label-invariant transformations) applied to the raw audios. You don't have to do the synthesis yourself, as it is already embedded into the data provider and is done on the fly, randomly for each epoch during training.

Six optional augmentation components are provided to be selected, configured and inserted into the processing pipeline:

- Volume Perturbation
- Speed Perturbation
- Shifting Perturbation
- Online Bayesian normalization
- Noise Perturbation (requires background noise audio files)
- Impulse Response (requires impulse audio files)

In order to inform the trainer which augmentation components are needed and in what order they should be applied, it is required to prepare an *augmentation configuration file* in [JSON](http://www.json.org/) format in advance. For example:
```
[{
    "type": "speed",
    "params": {"min_speed_rate": 0.95,
               "max_speed_rate": 1.05},
    "prob": 0.6
},
{
    "type": "shift",
    "params": {"min_shift_ms": -5,
               "max_shift_ms": 5},
    "prob": 0.8
}]
```
When the `--augment_conf_file` argument of `trainer.py` is set to the path of the above example configuration file, every audio clip in every epoch will be processed as follows: with 60% probability, it will first be speed-perturbed with a speed rate sampled uniformly between 0.95 and 1.05, and then with 80% probability it will be shifted in time by a random offset between -5 ms and 5 ms. Finally, the newly synthesized audio clip will be fed into the feature extractor for further training.
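The pipeline semantics described above amount to a chain of independent coin flips, one per component, applied in the listed order. A toy sketch (ours; the stand-in augmentor functions are no-op placeholders, not the repo's classes):

```python
import json
import random

# Toy interpretation of the augmentation config: each component fires
# independently with its own probability, in the listed order. The
# augmentors dict maps "type" to a stand-in (no-op) transform.
def apply_pipeline(audio, config_json, augmentors, rng=random):
    for entry in json.loads(config_json):
        if rng.random() < entry['prob']:
            audio = augmentors[entry['type']](audio, **entry['params'])
    return audio

# Hypothetical no-op stand-ins, for demonstration only.
augmentors = {
    'speed': lambda audio, min_speed_rate, max_speed_rate: audio,
    'shift': lambda audio, min_shift_ms, max_shift_ms: audio,
}
```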
For other configuration examples, please refer to `conf/augmentation.config.example`.
Be careful when utilizing the data augmentation technique, as improper augmentation can harm training by enlarging the train-test gap.
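To make the control flow concrete, here is a stripped-down sketch of how such a configuration can drive on-the-fly augmentation. The two toy components below only mimic the real ones in `data_utils/augmentor/` (the actual resampling and shifting logic differs):

```python
import json
import random

import numpy as np

class SpeedPerturb(object):
    """Toy speed perturbation via naive linear resampling of the waveform."""
    def __init__(self, rng, min_speed_rate, max_speed_rate):
        self._rng, self._lo, self._hi = rng, min_speed_rate, max_speed_rate

    def transform_audio(self, samples):
        rate = self._rng.uniform(self._lo, self._hi)
        old_idx = np.arange(len(samples))
        new_idx = np.arange(0.0, len(samples) - 1, rate)
        return np.interp(new_idx, old_idx, samples)

class ShiftPerturb(object):
    """Toy time shift: roll the waveform and zero the wrapped-around part."""
    def __init__(self, rng, min_shift_ms, max_shift_ms, sample_rate=16000):
        self._rng, self._lo, self._hi = rng, min_shift_ms, max_shift_ms
        self._sample_rate = sample_rate

    def transform_audio(self, samples):
        shift = int(self._rng.uniform(self._lo, self._hi) *
                    self._sample_rate / 1000.0)
        shifted = np.roll(samples, shift)
        if shift > 0:
            shifted[:shift] = 0.0
        elif shift < 0:
            shifted[shift:] = 0.0
        return shifted

_COMPONENTS = {"speed": SpeedPerturb, "shift": ShiftPerturb}

def build_pipeline(config_json, rng):
    return [(item["prob"], _COMPONENTS[item["type"]](rng, **item["params"]))
            for item in json.loads(config_json)]

def augment(samples, pipeline, rng):
    # each component fires independently with its configured probability
    for prob, component in pipeline:
        if rng.uniform(0.0, 1.0) < prob:
            samples = component.transform_audio(samples)
    return samples
```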
## Inference and Evaluation
### Prepare Language Model
A language model is required to improve the decoder's performance. We have prepared two language models (with lossy compression) for users to download and try. One is for English and the other is for Mandarin. Users can simply run this to download the prepared language models:
```bash
cd models/lm
sh download_lm_en.sh
sh download_lm_ch.sh
```
If you wish to train your own better language model, please refer to [KenLM](https://github.com/kpu/kenlm) for tutorials.
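Once trained (or downloaded), a KenLM binary model can be sanity-checked from Python with the `kenlm` package before plugging it into the decoder (the model path below is a placeholder):

```python
import kenlm

model = kenlm.Model('your_lm.klm')  # placeholder path to a KenLM binary
# log10 probability of the sentence, with sentence begin/end markers
print(model.score('the quick brown fox', bos=True, eos=True))
```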
TODO: any other requirements or tips to add?
### Speech-to-text Inference
The inference module caller `infer.py` is provided to infer, decode and visualize speech-to-text results for several given audio clips. It helps to provide an intuitive and qualitative evaluation of the ASR model's performance.
- Inference with GPU:
```bash
CUDA_VISIBLE_DEVICES=0 python infer.py --trainer_count 1
```
- Inference with CPUs:
```bash
python infer.py --use_gpu False --trainer_count 12
```
We provide two types of CTC decoders: a *CTC greedy decoder* and a *CTC beam search decoder*. The *CTC greedy decoder* is an implementation of the simple best-path decoding algorithm, selecting at each timestep the most likely token, and is thus greedy and locally optimal. The [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873), by contrast, utilizes a heuristic breadth-first graph search to reach near-global optimality; it also requires a pre-trained KenLM language model for better scoring and ranking. The decoder type can be set with the argument `--decoding_method`.
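The greedy rule is simple enough to state in a few lines. A sketch matching the description above, assuming (as in this project's decoders) that the blank is the extra, last vocabulary index:

```python
import numpy as np

def ctc_greedy_decode(probs_seq, vocabulary):
    blank_id = len(vocabulary)  # blank is the extra, last index
    best_path = np.argmax(probs_seq, axis=1)  # most likely token per timestep
    # collapse consecutive repetitions, then drop all blanks
    collapsed = [t for i, t in enumerate(best_path)
                 if i == 0 or t != best_path[i - 1]]
    return ''.join(vocabulary[t] for t in collapsed if t != blank_id)
```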
For more help on arguments:
```
python infer.py --help
```
or refer to `example/librispeech/run_infer.sh`.
### Evaluate a Model
To evaluate a model's performance quantitatively, please run:
- Evaluation with GPUs:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python test.py --trainer_count 8
```
- Evaluation with CPUs:
```bash
python test.py --use_gpu False --trainer_count 12
```
The error rate (default: word error rate; can be set with `--error_rate_type`) will be printed.
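For reference, the word error rate is the word-level edit distance between the reference and the hypothesis, normalized by the reference length (the character error rate is the same computation over characters):

```python
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return float(dp[len(ref)][len(hyp)]) / len(ref)
```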
For more help on arguments:
```bash
python test.py --help
```
or refer to `example/librispeech/run_test.sh`.
## Hyper-parameters Tuning
The hyper-parameters $\alpha$ (language model weight) and $\beta$ (word insertion weight) for the [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873) often have a significant impact on the decoder's performance. It would be better to re-tune them on the validation set when the acoustic model is renewed.
`tools/tune.py` performs a 2-D grid search over the hyper-parameters $\alpha$ and $\beta$. You must provide the ranges of $\alpha$ and $\beta$, as well as the number of attempts for each.
- Tuning with GPU:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python tools/tune.py \
--trainer_count 8 \
--alpha_from 1.0 \
--alpha_to 3.2 \
--num_alphas 45 \
--beta_from 0.1 \
--beta_to 0.45 \
--num_betas 8
```
- Tuning with CPU:
```bash
python tools/tune.py --use_gpu False
```
The grid search will print the WER (word error rate) or CER (character error rate) at each point in the hyper-parameter space, and can optionally draw the error surface. A proper hyper-parameter range should include the global minimum of the error surface for WER/CER, as illustrated in the following figure.
<p align="center">
<img src="docs/images/tuning_error_surface.png" width=550>
<br/>An example error surface for tuning on the dev-clean set of LibriSpeech
</p>
Usually, as the figure shows, the variation of the language model weight ($\alpha$) significantly affects the performance of the CTC beam search decoder. A better procedure is to first tune on several data batches (the number can be specified) to find out the proper range of hyper-parameters, then switch to the whole validation set to carry out an accurate tuning; a sketch of the search loop is given below.
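In pseudo-form, the search is nothing more than the nested loop below; `decode_and_score()` stands in for running the beam search decoder over the validation data with a given $(\alpha, \beta)$ and returning the WER or CER:

```python
import numpy as np

def grid_search(decode_and_score, alpha_from=1.0, alpha_to=3.2, num_alphas=45,
                beta_from=0.1, beta_to=0.45, num_betas=8):
    best = (float('inf'), None, None)
    for alpha in np.linspace(alpha_from, alpha_to, num_alphas):
        for beta in np.linspace(beta_from, beta_to, num_betas):
            error_rate = decode_and_score(alpha, beta)
            print("alpha=%.4f beta=%.4f error_rate=%.4f" %
                  (alpha, beta, error_rate))
            if error_rate < best[0]:
                best = (error_rate, alpha, beta)
    return best  # (lowest error rate, best alpha, best beta)
```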
After tuning, you can reset $\alpha$ and $\beta$ in the inference and evaluation modules to see if they really help improve ASR performance. For more help:
```bash
python tools/tune.py --help
```
or refer to `example/librispeech/run_tune.sh`.
## Distributed Cloud Training
We also provide a cloud training module for users to do distributed cluster training on [PaddleCloud](https://github.com/PaddlePaddle/cloud), to achieve a much faster training speed with multiple machines. To start with this, please first install the PaddleCloud client and register a PaddleCloud account, as described in [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#%E4%B8%8B%E8%BD%BD%E5%B9%B6%E9%85%8D%E7%BD%AEpaddlecloud).
Please take the following steps to submit a training job:
- Go to directory:
```bash
cd cloud
```
- Upload data:
Data must be uploaded to PaddleCloud filesystem to be accessed within a cloud job. `pcloud_upload_data.sh` helps do the data packing and uploading:
```bash
sh pcloud_upload_data.sh
```
Given input manifests, `pcloud_upload_data.sh` will:
- Extract the audio files listed in the input manifests.
- Pack them into a specified number of tar files.
- Upload these tar files to PaddleCloud filesystem.
- Create cloud manifests by replacing local filesystem paths with PaddleCloud filesystem paths. New manifests will be used to inform the cloud jobs of audio files' location and their meta information.
This has to be done only once, before the very first cloud training. Afterwards, the data is kept persistent on the cloud filesystem and is reusable for further job submissions.
For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).
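For illustration, an entry of such a cloud manifest might look like this (all paths and names here are hypothetical), with the `tar:<tar_path>#<member_name>` scheme pointing at a member file inside an uploaded tar:
```
{"audio_filepath": "tar:/pfs/dlnel/home/USERNAME/deepspeech2/data/librispeech/part-00000-of-00050.tar#some_utterance.flac", "duration": 6.32, "text": "some transcription text"}
```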
- Configure training arguments:
Configure the cloud job parameters in `pcloud_submit.sh` (e.g. `NUM_NODE`, `NUM_GPU`, `CLOUD_MODEL_DIR`, `JOB_NAME`, etc.) and then configure other hyper-parameters for training in `pcloud_train.sh` (just as you would for local training).
For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).
- Submit the job:
By running:
```bash
sh pcloud_submit.sh
```
a training job has been submitted to PaddleCloud, with the job name printed to the console.
- Get training logs
Run this to list all the jobs you have submitted, as well as their running status:
```bash
paddlecloud get jobs
```
Run this to print the corresponding job's logs:
```bash
paddlecloud logs -n 10000 $REPLACED_WITH_YOUR_ACTUAL_JOB_NAME
```
For more information about the usage of PaddleCloud, please refer to [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#提交任务).
For more information about the DeepSpeech2 training on PaddleCloud, please refer to
[Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).
## Training for Mandarin Language
TODO: to be added
## Trying Live Demo with Your Own Voice
So far, an ASR model has been trained and tested qualitatively (`infer.py`) and quantitatively (`test.py`) with existing audio files, but not yet with your own speech. `deploy/demo_server.py` and `deploy/demo_client.py` help quickly build up a real-time demo ASR engine with the trained model, enabling you to test and play around with the demo using your own voice.
To start the demo's server, please run this in one console:
```bash
CUDA_VISIBLE_DEVICES=0 \
python deploy/demo_server.py \
--trainer_count 1 \
--host_ip localhost \
--host_port 8086
```
For the machine (which might not be the same machine) running the demo's client, please do the following installation before moving on.
For example, on Mac OS X:
```bash
brew install portaudio
pip install pyaudio
pip install pynput
```
Then to start the client, please run this in another console:
```bash
CUDA_VISIBLE_DEVICES=0 \
python -u deploy/demo_client.py \
--host_ip 'localhost' \
--host_port 8086
```
Now, in the client console, press and hold the `whitespace` key and start speaking. When you finish your utterance, release the key and the speech-to-text results will be shown in the console. To quit the client, just press the `ESC` key.
Notice that `deploy/demo_client.py` must be run on a machine with a microphone device, while `deploy/demo_server.py` could be run on one without any audio recording hardware, e.g. any remote server machine. If the server and client run on two separate machines, just be careful to set the `host_ip` and `host_port` arguments to an IP address and port that are actually reachable; nothing needs to be changed if they run on a single machine.
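As a rough sketch of what the client side does (the actual framing and handshake of `deploy/demo_client.py` may differ; the host, port and raw-bytes protocol here are illustrative only):

```python
import socket

import pyaudio
from pynput import keyboard

HOST, PORT = 'localhost', 8086
CHUNK, RATE = 1024, 16000

def record_until_release():
    """Record 16 kHz mono 16-bit audio until the space key is released."""
    recording = {'on': True}

    def on_release(key):
        if key == keyboard.Key.space:
            recording['on'] = False
            return False  # stop the listener

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    frames = []
    with keyboard.Listener(on_release=on_release):
        while recording['on']:
            frames.append(stream.read(CHUNK))
    stream.stop_stream()
    stream.close()
    pa.terminate()
    return b''.join(frames)

audio_bytes = record_until_release()
sock = socket.create_connection((HOST, PORT))
sock.sendall(audio_bytes)  # ship the raw samples to the ASR server
print(sock.recv(4096))     # print the server's transcription reply
sock.close()
```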
Please also refer to `examples/mandarin/run_demo_server.sh`, which will first download a pre-trained Mandarin model (trained with 3000 hours of internal speech data) and then start the demo server with the model. By running `examples/mandarin/run_demo_client.sh`, you can speak Mandarin to test it. If you would like to try some other models, just update the `--model_path` argument in the script.
For more help on arguments:
```bash
python deploy/demo_server.py --help
python deploy/demo_client.py --help
```
## Released Models
#### Speech Model Released
Language | Model Name | Training Data | Training Hours
:-----------: | :------------: | :----------: | -------:
English | [LibriSpeech Model](http://cloud.dlnel.org/filepub/?uuid=17404caf-cf19-492f-9707-1fad07c19aae) | [LibriSpeech Dataset](http://www.openslr.org/12/) | 960 h
English | [Internal English Model](to-be-added) | Baidu English Dataset | 8628 h
Mandarin | [Aishell Model](http://cloud.dlnel.org/filepub/?uuid=6c83b9d8-3255-4adf-9726-0fe0be3d0274) | [Aishell Dataset](http://www.openslr.org/33/) | 151 h
Mandarin | [Internal Mandarin Model](to-be-added) | Baidu Mandarin Dataset | 2917 h
#### Language Model Released
Language Model | Training Data | Token-based | Size | Filter Configuration
:-------------:| :------------:| :-----: | -----: | -----------------:
[English LM](http://paddlepaddle.bj.bcebos.com/model_zoo/speech/common_crawl_00.prune01111.trie.klm) | To Be Added | Word-based | 8.3 GB | To Be Added
[Mandarin LM](http://cloud.dlnel.org/filepub/?uuid=d21861e4-4ed6-45bb-ad8e-ae417a43195e) | To Be Added | Character-based | 2.8 GB | To Be Added
## Experiments and Benchmarks
#### English Model Evaluation (Word Error Rate)
Test Set | LibriSpeech Model | Internal English Model
:---------------------: | ---------------: | -------------------:
LibriSpeech-Test-Clean | 7.96 | X.X
LibriSpeech-Test-Other | 23.87 | X.X
VoxForge-Test | X.X | X.X
Baidu-English-Test | X.X | X.X
(Beam size=2000)
#### Mandarin Model Evaluation (Character Error Rate)
Test Set | Aishell Model | Internal Mandarin Model
:---------------------: | :---------------: | :-------------------:
Aishell-Test | X.X | X.X
Baidu-Mandarin-Test | X.X | X.X
#### Acceleration with Multi-GPUs
We compare the training time with 1, 2, 4, 8 and 16 Tesla K40m GPUs (on a subset of LibriSpeech samples whose audio durations are between 6.0 and 7.0 seconds). It shows that a **near-linear** acceleration with multiple GPUs has been achieved. In the following figure, the time (in seconds) spent on training is printed on the blue bars.
<img src="docs/images/multi_gpu_speedup.png" width=450><br/>
| # of GPU | Acceleration Rate |
| -------- | --------------: |
| 1 | 1.00 X |
| 2 | 1.97 X |
| 4 | 3.74 X |
| 8 | 6.21 X |
|16 | 10.70 X |
`tools/profile.sh` provides such a profiling tool.
## Questions and Help
You are welcome to submit questions and bug reports in [Github Issues](https://github.com/PaddlePaddle/models/issues). You are also welcome to contribute to this project.
# Train DeepSpeech2 on PaddleCloud
>Note:
>Please make sure [PaddleCloud Client](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#%E4%B8%8B%E8%BD%BD%E5%B9%B6%E9%85%8D%E7%BD%AEpaddlecloud) has been installed and the current directory is `deep_speech_2/cloud/`.
## Step 1: Upload Data
Given several input manifests, `pcloud_upload_data.sh` will pack and upload all the audio files they contain to the PaddleCloud filesystem, and also generate corresponding manifest files with updated cloud paths.
Please modify the following arguments in `pcloud_upload_data.sh`:
- `IN_MANIFESTS`: Paths (in local filesystem) of manifest files containing the audio files to be uploaded. Multiple paths can be concatenated with a whitespace delimiter.
- `OUT_MANIFESTS`: Paths (in local filesystem) to write the updated output manifest files to. Multiple paths can be concatenated with a whitespace delimiter. The values of `audio_filepath` in the output manifests are updated with cloud filesystem paths.
- `CLOUD_DATA_DIR`: Directory (in PaddleCloud filesystem) to upload the data to. Don't forget to replace `USERNAME` in the default directory and make sure that you have the permission to write to it.
- `NUM_SHARDS`: Number of data shards/parts (in tar files) to be generated when packing and uploading data. A smaller `NUM_SHARDS` requires more temporary local disk space for packing data.
By running:
```
sh pcloud_upload_data.sh
```
all the audio files will be uploaded to the PaddleCloud filesystem, and you will get the modified manifest files in `OUT_MANIFESTS`.
You have to take this step only once, the very first time you do the cloud training. Later on, the data is persistent on the cloud filesystem and reusable for further job submissions.
## Step 2: Configure Training
Configure cloud training arguments in `pcloud_submit.sh`, with the following arguments:
- `TRAIN_MANIFEST`: Manifest filepath (in local filesystem) for training. Notice that the `audio_filepath` entries should point to the cloud filesystem, like those generated by `pcloud_upload_data.sh`.
- `DEV_MANIFEST`: Manifest filepath (in local filesystem) for validation.
- `CLOUD_MODEL_DIR`: Directory (in PaddleCloud filesystem) to save the model parameters (checkpoints). Don't forget to replace `USERNAME` in the default directory and make sure that you have the permission to write it.
- `BATCH_SIZE`: Training batch size for a single node.
- `NUM_GPU`: Number of GPUs allocated for a single node.
- `NUM_NODE`: Number of nodes (machines) allocated for this job.
- `IS_LOCAL`: Set to False to enable the parameter server when using multiple nodes.
Configure other training hyper-parameters in `pcloud_train.sh` as you wish, just as what you can do in local training.
By running:
```
sh pcloud_submit.sh
```
you submit a training job to PaddleCloud, and you will see the job name when the submission is done.
## Step 3: Get Job Logs
Run this to list all the jobs you have submitted, as well as their running status:
```
paddlecloud get jobs
```
Run this to print the corresponding job's logs:
```
paddlecloud logs -n 10000 $REPLACED_WITH_YOUR_ACTUAL_JOB_NAME
```
## More Help
For more information about the usage of PaddleCloud, please refer to [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#提交任务).
"""Set up paths for DS2"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os.path
import sys
def add_path(path):
if path not in sys.path:
sys.path.insert(0, path)
this_dir = os.path.dirname(__file__)
proj_path = os.path.join(this_dir, '..')
add_path(proj_path)
#! /usr/bin/env bash
TRAIN_MANIFEST="cloud/cloud_manifests/cloud.manifest.train"
DEV_MANIFEST="cloud/cloud_manifests/cloud.manifest.dev"
CLOUD_MODEL_DIR="./checkpoints"
BATCH_SIZE=512
NUM_GPU=8
NUM_NODE=1
IS_LOCAL="True"
JOB_NAME=deepspeech-`date +%Y%m%d%H%M%S`
DS2_PATH=${PWD%/*}
cp -f pcloud_train.sh ${DS2_PATH}
paddlecloud submit \
-image bootstrapper:5000/paddlepaddle/pcloud_ds2:latest \
-jobname ${JOB_NAME} \
-cpu ${NUM_GPU} \
-gpu ${NUM_GPU} \
-memory 64Gi \
-parallelism ${NUM_NODE} \
-pscpu 1 \
-pservers 1 \
-psmemory 64Gi \
-passes 1 \
-entry "sh pcloud_train.sh ${TRAIN_MANIFEST} ${DEV_MANIFEST} ${CLOUD_MODEL_DIR} ${NUM_GPU} ${BATCH_SIZE} ${IS_LOCAL}" \
${DS2_PATH}
rm ${DS2_PATH}/pcloud_train.sh
#! /usr/bin/env bash
TRAIN_MANIFEST=$1
DEV_MANIFEST=$2
MODEL_PATH=$3
NUM_GPU=$4
BATCH_SIZE=$5
IS_LOCAL=$6
python ./cloud/split_data.py \
--in_manifest_path=${TRAIN_MANIFEST} \
--out_manifest_path='/local.manifest.train'
python ./cloud/split_data.py \
--in_manifest_path=${DEV_MANIFEST} \
--out_manifest_path='/local.manifest.dev'
mkdir ./logs
python -u train.py \
--batch_size=${BATCH_SIZE} \
--trainer_count=${NUM_GPU} \
--num_passes=200 \
--num_proc_data=${NUM_GPU} \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--num_iter_print=100 \
--learning_rate=5e-4 \
--max_duration=27.0 \
--min_duration=0.0 \
--use_sortagrad=True \
--use_gru=False \
--use_gpu=True \
--is_local=${IS_LOCAL} \
--share_rnn_weights=True \
--train_manifest='/local.manifest.train' \
--dev_manifest='/local.manifest.dev' \
--mean_std_path='data/librispeech/mean_std.npz' \
--vocab_path='data/librispeech/vocab.txt' \
--output_model_dir=${MODEL_PATH} \
--augment_conf_path='conf/augmentation.config' \
--specgram_type='linear' \
--shuffle_method='batch_shuffle_clipped' \
2>&1 | tee ./logs/train.log
#! /usr/bin/env bash
mkdir -p cloud_manifests
IN_MANIFESTS="../data/librispeech/manifest.train ../data/librispeech/manifest.dev-clean ../data/librispeech/manifest.test-clean"
OUT_MANIFESTS="cloud_manifests/cloud.manifest.train cloud_manifests/cloud.manifest.dev cloud_manifests/cloud.manifest.test"
CLOUD_DATA_DIR="/pfs/dlnel/home/USERNAME/deepspeech2/data/librispeech"
NUM_SHARDS=50
python upload_data.py \
--in_manifest_paths ${IN_MANIFESTS} \
--out_manifest_paths ${OUT_MANIFESTS} \
--cloud_data_dir ${CLOUD_DATA_DIR} \
--num_shards ${NUM_SHARDS}
if [ $? -ne 0 ]
then
echo "Upload Data Failed!"
exit 1
fi
echo "All Done."
"""This tool is used for splitting data into each node of
paddlecloud. This script should be called in paddlecloud.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import json
import argparse
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--in_manifest_path",
type=str,
required=True,
help="Input manifest path for all nodes.")
parser.add_argument(
"--out_manifest_path",
type=str,
required=True,
help="Output manifest file path for current node.")
args = parser.parse_args()
def split_data(in_manifest_path, out_manifest_path):
with open("/trainer_id", "r") as f:
trainer_id = int(f.readline()[:-1])
with open("/trainer_count", "r") as f:
trainer_count = int(f.readline()[:-1])
out_manifest = []
for index, json_line in enumerate(open(in_manifest_path, 'r')):
if (index % trainer_count) == trainer_id:
out_manifest.append("%s\n" % json_line.strip())
with open(out_manifest_path, 'w') as f:
f.writelines(out_manifest)
if __name__ == '__main__':
split_data(args.in_manifest_path, args.out_manifest_path)
"""This script is for uploading data for DeepSpeech2 training on paddlecloud.
Steps:
1. Read original manifests and extract local sound files.
2. Tar all local sound files into multiple tar files and upload them.
3. Modify original manifests with updated paths in cloud filesystem.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import os
import tarfile
import sys
import argparse
import shutil
from subprocess import call
import _init_paths
from data_utils.utils import read_manifest
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--in_manifest_paths",
default=[
"../datasets/manifest.train", "../datasets/manifest.dev",
"../datasets/manifest.test"
],
type=str,
nargs='+',
help="Local filepaths of input manifests to load, pack and upload."
"(default: %(default)s)")
parser.add_argument(
"--out_manifest_paths",
default=[
"./cloud.manifest.train", "./cloud.manifest.dev",
"./cloud.manifest.test"
],
type=str,
nargs='+',
help="Local filepaths of modified manifests to write to. "
"(default: %(default)s)")
parser.add_argument(
"--cloud_data_dir",
required=True,
type=str,
help="Destination directory on paddlecloud to upload data to.")
parser.add_argument(
"--num_shards",
default=10,
type=int,
help="Number of parts to split data to. (default: %(default)s)")
parser.add_argument(
"--local_tmp_dir",
default="./tmp/",
type=str,
help="Local directory for storing temporary data. (default: %(default)s)")
args = parser.parse_args()
def upload_data(in_manifest_path_list, out_manifest_path_list, local_tmp_dir,
upload_tar_dir, num_shards):
"""Extract and pack sound files listed in the manifest files into multple
tar files and upload them to padldecloud. Besides, generate new manifest
files with updated paths in paddlecloud.
"""
# compute total audio number
total_line = 0
for manifest_path in in_manifest_path_list:
with open(manifest_path, 'r') as f:
total_line += len(f.readlines())
line_per_tar = (total_line // num_shards) + 1
# pack and upload shard by shard
line_count, tar_file = 0, None
for manifest_path, out_manifest_path in zip(in_manifest_path_list,
out_manifest_path_list):
manifest = read_manifest(manifest_path)
out_manifest = []
for json_data in manifest:
sound_filepath = json_data['audio_filepath']
sound_filename = os.path.basename(sound_filepath)
if line_count % line_per_tar == 0:
                if tar_file is not None:
tar_file.close()
pcloud_cp(tar_path, upload_tar_dir)
os.remove(tar_path)
tar_name = 'part-%s-of-%s.tar' % (
str(line_count // line_per_tar).zfill(5),
str(num_shards).zfill(5))
tar_path = os.path.join(local_tmp_dir, tar_name)
tar_file = tarfile.open(tar_path, 'w')
tar_file.add(sound_filepath, arcname=sound_filename)
line_count += 1
json_data['audio_filepath'] = "tar:%s#%s" % (
os.path.join(upload_tar_dir, tar_name), sound_filename)
out_manifest.append("%s\n" % json.dumps(json_data))
with open(out_manifest_path, 'w') as f:
f.writelines(out_manifest)
pcloud_cp(out_manifest_path, upload_tar_dir)
tar_file.close()
pcloud_cp(tar_path, upload_tar_dir)
os.remove(tar_path)
def pcloud_mkdir(dir):
"""Make directory in PaddleCloud filesystem.
"""
if call(['paddlecloud', 'mkdir', dir]) != 0:
raise IOError("PaddleCloud mkdir failed: %s." % dir)
def pcloud_cp(src, dst):
"""Copy src from local filesytem to dst in PaddleCloud filesystem,
or downlowd src from PaddleCloud filesystem to dst in local filesystem.
"""
if call(['paddlecloud', 'cp', src, dst]) != 0:
raise IOError("PaddleCloud cp failed: from [%s] to [%s]." % (src, dst))
if __name__ == '__main__':
if not os.path.exists(args.local_tmp_dir):
os.makedirs(args.local_tmp_dir)
pcloud_mkdir(args.cloud_data_dir)
upload_data(args.in_manifest_paths, args.out_manifest_paths,
args.local_tmp_dir, args.cloud_data_dir, args.num_shards)
shutil.rmtree(args.local_tmp_dir)
"""Prepare Aishell mandarin dataset
Download, unpack and create manifest files.
Manifest file is a json-format file with each line containing the
meta data (i.e. audio filepath, transcript and audio duration)
of each audio file in the data set.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import codecs
import soundfile
import json
import argparse
from data_utils.utility import download, unpack
DATA_HOME = os.path.expanduser('~/.cache/paddle/dataset/speech')
URL_ROOT = 'http://www.openslr.org/resources/33'
DATA_URL = URL_ROOT + '/data_aishell.tgz'
MD5_DATA = '2f494334227864a8a8fec932999db9d8'
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--target_dir",
default=DATA_HOME + "/Aishell",
type=str,
help="Directory to save the dataset. (default: %(default)s)")
parser.add_argument(
"--manifest_prefix",
default="manifest",
type=str,
help="Filepath prefix for output manifests. (default: %(default)s)")
args = parser.parse_args()
def create_manifest(data_dir, manifest_path_prefix):
print("Creating manifest %s ..." % manifest_path_prefix)
json_lines = []
transcript_path = os.path.join(data_dir, 'transcript',
'aishell_transcript_v0.8.txt')
transcript_dict = {}
for line in codecs.open(transcript_path, 'r', 'utf-8'):
line = line.strip()
if line == '': continue
audio_id, text = line.split(' ', 1)
        # remove whitespace
text = ''.join(text.split())
transcript_dict[audio_id] = text
data_types = ['train', 'dev', 'test']
    for dtype in data_types:
        audio_dir = os.path.join(data_dir, 'wav', dtype)
for subfolder, _, filelist in sorted(os.walk(audio_dir)):
for fname in filelist:
audio_path = os.path.join(subfolder, fname)
audio_id = fname[:-4]
                # skip audio files that have no transcription
if audio_id not in transcript_dict:
continue
audio_data, samplerate = soundfile.read(audio_path)
duration = float(len(audio_data) / samplerate)
text = transcript_dict[audio_id]
json_lines.append(
json.dumps(
{
'audio_filepath': audio_path,
'duration': duration,
'text': text
},
ensure_ascii=False))
        manifest_path = manifest_path_prefix + '.' + dtype
with codecs.open(manifest_path, 'w', 'utf-8') as fout:
for line in json_lines:
fout.write(line + '\n')
def prepare_dataset(url, md5sum, target_dir, manifest_path):
"""Download, unpack and create manifest file."""
data_dir = os.path.join(target_dir, 'data_aishell')
if not os.path.exists(data_dir):
filepath = download(url, md5sum, target_dir)
unpack(filepath, target_dir)
# unpack all audio tar files
audio_dir = os.path.join(data_dir, 'wav')
for subfolder, _, filelist in sorted(os.walk(audio_dir)):
for ftar in filelist:
unpack(os.path.join(subfolder, ftar), subfolder, True)
else:
print("Skip downloading and unpacking. Data already exists in %s." %
target_dir)
create_manifest(data_dir, manifest_path)
def main():
if args.target_dir.startswith('~'):
args.target_dir = os.path.expanduser(args.target_dir)
prepare_dataset(
url=DATA_URL,
md5sum=MD5_DATA,
target_dir=args.target_dir,
manifest_path=args.manifest_prefix)
if __name__ == '__main__':
main()
@@ -12,13 +12,11 @@ from __future__ import print_function
 import distutils.util
 import os
 import sys
-import tarfile
 import argparse
 import soundfile
 import json
-from paddle.v2.dataset.common import md5file
+import codecs
+from data_utils.utility import download, unpack
-DATA_HOME = os.path.expanduser('~/.cache/paddle/dataset/speech')
 URL_ROOT = "http://www.openslr.org/resources/12"
 URL_TEST_CLEAN = URL_ROOT + "/test-clean.tar.gz"
@@ -40,7 +38,7 @@ MD5_TRAIN_OTHER_500 = "d1a0fd59409feb2c614ce4d30c387708"
 parser = argparse.ArgumentParser(description=__doc__)
 parser.add_argument(
     "--target_dir",
-    default=DATA_HOME + "/Libri",
+    default='~/.cache/paddle/dataset/speech/libri',
     type=str,
     help="Directory to save the dataset. (default: %(default)s)")
 parser.add_argument(
@@ -58,36 +56,8 @@ parser.add_argument(
 args = parser.parse_args()
-def download(url, md5sum, target_dir):
-    """
-    Download file from url to target_dir, and check md5sum.
-    """
-    if not os.path.exists(target_dir): os.makedirs(target_dir)
-    filepath = os.path.join(target_dir, url.split("/")[-1])
-    if not (os.path.exists(filepath) and md5file(filepath) == md5sum):
-        print("Downloading %s ..." % url)
-        os.system("wget -c " + url + " -P " + target_dir)
-        print("\nMD5 Chesksum %s ..." % filepath)
-        if not md5file(filepath) == md5sum:
-            raise RuntimeError("MD5 checksum failed.")
-    else:
-        print("File exists, skip downloading. (%s)" % filepath)
-    return filepath
-def unpack(filepath, target_dir):
-    """
-    Unpack the file to the target_dir.
-    """
-    print("Unpacking %s ..." % filepath)
-    tar = tarfile.open(filepath)
-    tar.extractall(target_dir)
-    tar.close()
 def create_manifest(data_dir, manifest_path):
-    """
-    Create a manifest json file summarizing the data set, with each line
+    """Create a manifest json file summarizing the data set, with each line
     containing the meta data (i.e. audio filepath, transcription text, audio
     duration) of each audio file within the data set.
     """
@@ -112,14 +82,13 @@ def create_manifest(data_dir, manifest_path):
             'duration': duration,
             'text': text
         }))
-    with open(manifest_path, 'w') as out_file:
+    with codecs.open(manifest_path, 'w', 'utf-8') as out_file:
         for line in json_lines:
             out_file.write(line + '\n')
 def prepare_dataset(url, md5sum, target_dir, manifest_path):
-    """
-    Download, unpack and create summmary manifest file.
+    """Download, unpack and create summary manifest file.
     """
     if not os.path.exists(os.path.join(target_dir, "LibriSpeech")):
         # download
@@ -134,6 +103,9 @@ def prepare_dataset(url, md5sum, target_dir, manifest_path):
 def main():
+    if args.target_dir.startswith('~'):
+        args.target_dir = os.path.expanduser(args.target_dir)
     prepare_dataset(
         url=URL_TEST_CLEAN,
         md5sum=MD5_TEST_CLEAN,
@@ -144,12 +116,12 @@ def main():
         md5sum=MD5_DEV_CLEAN,
         target_dir=os.path.join(args.target_dir, "dev-clean"),
         manifest_path=args.manifest_prefix + ".dev-clean")
-    prepare_dataset(
-        url=URL_TRAIN_CLEAN_100,
-        md5sum=MD5_TRAIN_CLEAN_100,
-        target_dir=os.path.join(args.target_dir, "train-clean-100"),
-        manifest_path=args.manifest_prefix + ".train-clean-100")
     if args.full_download:
+        prepare_dataset(
+            url=URL_TRAIN_CLEAN_100,
+            md5sum=MD5_TRAIN_CLEAN_100,
+            target_dir=os.path.join(args.target_dir, "train-clean-100"),
+            manifest_path=args.manifest_prefix + ".train-clean-100")
         prepare_dataset(
             url=URL_TEST_OTHER,
             md5sum=MD5_TEST_OTHER,
......
@@ -4,23 +4,22 @@ from __future__ import division
 from __future__ import print_function
 from data_utils.augmentor.base import AugmentorBase
-from data_utils import utils
+from data_utils.utility import read_manifest
 from data_utils.audio import AudioSegment
 class ImpulseResponseAugmentor(AugmentorBase):
     """Augmentation model for adding impulse response effect.
     :param rng: Random generator object.
     :type rng: random.Random
     :param impulse_manifest_path: Manifest path for impulse audio data.
     :type impulse_manifest_path: basestring
     """
     def __init__(self, rng, impulse_manifest_path):
         self._rng = rng
-        self._impulse_manifest = utils.read_manifest(
-            manifest_path=impulse_manifest_path)
+        self._impulse_manifest = read_manifest(impulse_manifest_path)
     def transform_audio(self, audio_segment):
         """Add impulse response effect.
......
@@ -4,13 +4,13 @@ from __future__ import division
 from __future__ import print_function
 from data_utils.augmentor.base import AugmentorBase
-from data_utils import utils
+from data_utils.utility import read_manifest
 from data_utils.audio import AudioSegment
 class NoisePerturbAugmentor(AugmentorBase):
     """Augmentation model for adding background noise.
     :param rng: Random generator object.
     :type rng: random.Random
     :param min_snr_dB: Minimal signal noise ratio, in decibels.
@@ -18,15 +18,14 @@ class NoisePerturbAugmentor(AugmentorBase):
     :param max_snr_dB: Maximal signal noise ratio, in decibels.
     :type max_snr_dB: float
     :param noise_manifest_path: Manifest path for noise audio data.
     :type noise_manifest_path: basestring
     """
     def __init__(self, rng, min_snr_dB, max_snr_dB, noise_manifest_path):
         self._min_snr_dB = min_snr_dB
         self._max_snr_dB = max_snr_dB
         self._rng = rng
-        self._noise_manifest = utils.read_manifest(
-            manifest_path=noise_manifest_path)
+        self._noise_manifest = read_manifest(manifest_path=noise_manifest_path)
     def transform_audio(self, audio_segment):
         """Add background noise audio.
......
@@ -6,10 +6,12 @@ from __future__ import division
 from __future__ import print_function
 import random
-import numpy as np
+import tarfile
 import multiprocessing
+import numpy as np
 import paddle.v2 as paddle
-from data_utils import utils
+from threading import local
+from data_utils.utility import read_manifest
 from data_utils.augmentor.augmentation import AugmentationPipeline
 from data_utils.featurizer.speech_featurizer import SpeechFeaturizer
 from data_utils.speech import SpeechSegment
@@ -46,7 +48,7 @@ class DataGenerator(object):
     :param specgram_type: Specgram feature type. Options: 'linear'.
     :type specgram_type: str
     :param use_dB_normalization: Whether to normalize the audio to -20 dB
                                  before extracting the features.
     :type use_dB_normalization: bool
     :param num_threads: Number of CPU threads for processing data.
     :type num_threads: int
@@ -82,16 +84,20 @@ class DataGenerator(object):
         self._num_threads = num_threads
         self._rng = random.Random(random_seed)
         self._epoch = 0
+        # for caching tar files info
+        self._local_data = local()
+        self._local_data.tar2info = {}
+        self._local_data.tar2object = {}
     def process_utterance(self, filename, transcript):
         """Load, augment, featurize and normalize for speech data.
         :param filename: Audio filepath
-        :type filename: basestring
+        :type filename: basestring | file
         :param transcript: Transcription text.
         :type transcript: basestring
         :return: Tuple of audio feature tensor and list of token ids for
                  transcription.
         :rtype: tuple of (2darray, list)
         """
         speech_segment = SpeechSegment.from_file(filename, transcript)
@@ -111,7 +117,7 @@ class DataGenerator(object):
         """
         Batch data reader creator for audio data. Return a callable generator
         function to produce batches of data.
         Audio features within one batch will be padded with zeros to have the
         same shape, or a user-defined shape.
@@ -153,7 +159,7 @@ class DataGenerator(object):
         def batch_reader():
             # read manifest
-            manifest = utils.read_manifest(
+            manifest = read_manifest(
                 manifest_path=manifest_path,
                 max_duration=self._max_duration,
                 min_duration=self._min_duration)
@@ -191,9 +197,9 @@ class DataGenerator(object):
     @property
     def feeding(self):
         """Returns data reader's feeding dict.
         :return: Data feeding dict.
         :rtype: dict
         """
         return {"audio_spectrogram": 0, "transcript_text": 1}
@@ -215,6 +221,38 @@ class DataGenerator(object):
         """
         return self._speech_featurizer.vocab_list
+    def _parse_tar(self, file):
+        """Parse a tar file to get a tarfile object and a map from member
+        names to tarinfo entries.
+        """
+        result = {}
+        f = tarfile.open(file)
+        for tarinfo in f.getmembers():
+            result[tarinfo.name] = tarinfo
+        return f, result
+    def _get_file_object(self, file):
+        """Get a file object by file path.
+        If the path starts with 'tar:', return a member file object from the
+        tar archive and cache the tar file info for the next reading request.
+        Otherwise, open and return the file directly.
+        """
+        if file.startswith('tar:'):
+            tarpath, filename = file.split(':', 1)[1].split('#', 1)
+            if 'tar2info' not in self._local_data.__dict__:
+                self._local_data.tar2info = {}
+            if 'tar2object' not in self._local_data.__dict__:
+                self._local_data.tar2object = {}
+            if tarpath not in self._local_data.tar2info:
+                object, infoes = self._parse_tar(tarpath)
+                self._local_data.tar2info[tarpath] = infoes
+                self._local_data.tar2object[tarpath] = object
+            return self._local_data.tar2object[tarpath].extractfile(
+                self._local_data.tar2info[tarpath][filename])
+        else:
+            return open(file, 'r')
     def _instance_reader_creator(self, manifest):
         """
         Instance reader creator. Create a callable function to produce
@@ -229,8 +267,9 @@ class DataGenerator(object):
                 yield instance
         def mapper(instance):
-            return self.process_utterance(instance["audio_filepath"],
-                                          instance["text"])
+            return self.process_utterance(
+                self._get_file_object(instance["audio_filepath"]),
+                instance["text"])
         return paddle.reader.xmap_readers(
             mapper, reader, self._num_threads, 1024, order=True)
......
@@ -4,7 +4,7 @@ from __future__ import division
 from __future__ import print_function
 import numpy as np
-from data_utils import utils
+from data_utils.utility import read_manifest
 from data_utils.audio import AudioSegment
 from python_speech_features import mfcc
 from python_speech_features import delta
@@ -57,7 +57,7 @@ class AudioFeaturizer(object):
     def featurize(self,
                   audio_segment,
                   allow_downsampling=True,
-                  allow_upsamplling=True):
+                  allow_upsampling=True):
         """Extract audio features from AudioSegment or SpeechSegment.
         :param audio_segment: Audio/speech segment to extract features from.
@@ -159,24 +159,27 @@ class AudioFeaturizer(object):
         if max_freq is None:
             max_freq = sample_rate / 2
         if max_freq > sample_rate / 2:
-            raise ValueError("max_freq must be greater than half of "
+            raise ValueError("max_freq must not be greater than half of "
                              "sample rate.")
         if stride_ms > window_ms:
             raise ValueError("Stride size must not be greater than "
                              "window size.")
-        # compute 13 cepstral coefficients, and the first one is replaced
+        # compute the 13 cepstral coefficients, and the first one is replaced
         # by log(frame energy)
-        mfcc_feat = np.transpose(
-            mfcc(
-                signal=samples,
-                samplerate=sample_rate,
-                winlen=0.001 * window_ms,
-                winstep=0.001 * stride_ms,
-                highfreq=max_freq))
+        mfcc_feat = mfcc(
+            signal=samples,
+            samplerate=sample_rate,
+            winlen=0.001 * window_ms,
+            winstep=0.001 * stride_ms,
+            highfreq=max_freq)
         # Deltas
         d_mfcc_feat = delta(mfcc_feat, 2)
         # Deltas-Deltas
         dd_mfcc_feat = delta(d_mfcc_feat, 2)
+        # transpose
+        mfcc_feat = np.transpose(mfcc_feat)
+        d_mfcc_feat = np.transpose(d_mfcc_feat)
+        dd_mfcc_feat = np.transpose(dd_mfcc_feat)
         # concat above three features
         concat_mfcc_feat = np.concatenate(
             (mfcc_feat, d_mfcc_feat, dd_mfcc_feat))
......
@@ -4,6 +4,7 @@ from __future__ import division
 from __future__ import print_function
 import os
+import codecs
 class TextFeaturizer(object):
@@ -59,7 +60,7 @@ class TextFeaturizer(object):
     def _load_vocabulary_from_file(self, vocab_filepath):
         """Load vocabulary from file."""
         vocab_lines = []
-        with open(vocab_filepath, 'r') as file:
+        with codecs.open(vocab_filepath, 'r', 'utf-8') as file:
             vocab_lines.extend(file.readlines())
         vocab_list = [line[:-1] for line in vocab_lines]
         vocab_dict = dict(
......
@@ -5,7 +5,7 @@ from __future__ import print_function
 import numpy as np
 import random
-import data_utils.utils as utils
+from data_utils.utility import read_manifest
 from data_utils.audio import AudioSegment
@@ -75,7 +75,7 @@ class FeatureNormalizer(object):
     def _compute_mean_std(self, manifest_path, featurize_func, num_samples):
         """Compute mean and std from randomly sampled instances."""
-        manifest = utils.read_manifest(manifest_path)
+        manifest = read_manifest(manifest_path)
         sampled_manifest = self._rng.sample(manifest, num_samples)
         features = []
         for instance in sampled_manifest:
......
@@ -4,15 +4,19 @@ from __future__ import division
 from __future__ import print_function
 import json
+import codecs
+import os
+import tarfile
+from paddle.v2.dataset.common import md5file
 def read_manifest(manifest_path, max_duration=float('inf'), min_duration=0.0):
     """Load and parse manifest file.
     Instances with durations outside [min_duration, max_duration] will be
     filtered out.
     :param manifest_path: Manifest file to load and parse.
     :type manifest_path: basestring
     :param max_duration: Maximal duration in seconds for instance filter.
     :type max_duration: float
@@ -23,7 +27,7 @@ def read_manifest(manifest_path, max_duration=float('inf'), min_duration=0.0):
     :raises IOError: If failed to parse the manifest.
     """
     manifest = []
-    for json_line in open(manifest_path):
+    for json_line in codecs.open(manifest_path, 'r', 'utf-8'):
         try:
             json_data = json.loads(json_line)
         except Exception as e:
@@ -32,3 +36,28 @@ def read_manifest(manifest_path, max_duration=float('inf'), min_duration=0.0):
                 json_data["duration"] >= min_duration):
             manifest.append(json_data)
     return manifest
+def download(url, md5sum, target_dir):
+    """Download file from url to target_dir, and check md5sum."""
+    if not os.path.exists(target_dir): os.makedirs(target_dir)
+    filepath = os.path.join(target_dir, url.split("/")[-1])
+    if not (os.path.exists(filepath) and md5file(filepath) == md5sum):
+        print("Downloading %s ..." % url)
+        os.system("wget -c " + url + " -P " + target_dir)
+        print("\nMD5 Checksum %s ..." % filepath)
+        if not md5file(filepath) == md5sum:
+            raise RuntimeError("MD5 checksum failed.")
+    else:
+        print("File exists, skip downloading. (%s)" % filepath)
+    return filepath
+def unpack(filepath, target_dir, rm_tar=False):
+    """Unpack the file to the target_dir."""
+    print("Unpacking %s ..." % filepath)
+    tar = tarfile.open(filepath)
+    tar.extractall(target_dir)
+    tar.close()
+    if rm_tar:
+        os.remove(filepath)
cd librispeech
python librispeech.py
if [ $? -ne 0 ]; then
echo "Prepare LibriSpeech failed. Terminated."
exit 1
fi
cd -
cat librispeech/manifest.train* | shuf > manifest.train
cat librispeech/manifest.dev-clean > manifest.dev
cat librispeech/manifest.test-clean > manifest.test
echo "All done."
cd noise
python chime3_background.py
if [ $? -ne 0 ]; then
echo "Prepare CHiME3 background noise failed. Terminated."
exit 1
fi
cd -
cat noise/manifest.* > manifest.noise
echo "All done."
'
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
@@ -9,8 +9,9 @@ from math import log
 import multiprocessing
-def ctc_best_path_decoder(probs_seq, vocabulary):
-    """Best path decoder, also called argmax decoder or greedy decoder.
+def ctc_greedy_decoder(probs_seq, vocabulary):
+    """CTC greedy (best path) decoder.
     Path consisting of the most probable tokens are further post-processed to
     remove consecutive repetitions and all blanks.
@@ -41,14 +42,16 @@ def ctc_greedy_decoder(probs_seq, vocabulary):
 def ctc_beam_search_decoder(probs_seq,
                             beam_size,
                             vocabulary,
-                            blank_id,
                             cutoff_prob=1.0,
+                            cutoff_top_n=40,
                             ext_scoring_func=None,
                             nproc=False):
-    """Beam search decoder for CTC-trained network. It utilizes beam search
-    to approximately select top best decoding labels and returning results
-    in the descending order. The implementation is based on Prefix
-    Beam Search (https://arxiv.org/abs/1408.2873), and the unclear part is
+    """CTC Beam search decoder.
+    It utilizes beam search to approximately select top best decoding
+    labels and returning results in the descending order.
+    The implementation is based on Prefix Beam Search
+    (https://arxiv.org/abs/1408.2873), and the unclear part is
     redesigned. Two important modifications: 1) in the iterative computation
     of probabilities, the assignment operation is changed to accumulation for
     one prefix may comes from different paths; 2) the if condition "if l^+ not
@@ -63,8 +66,6 @@ def ctc_beam_search_decoder(probs_seq,
     :type beam_size: int
     :param vocabulary: Vocabulary list.
     :type vocabulary: list
-    :param blank_id: ID of blank.
-    :type blank_id: int
     :param cutoff_prob: Cutoff probability in pruning,
                         default 1.0, no pruning.
     :type cutoff_prob: float
@@ -84,9 +85,8 @@ def ctc_beam_search_decoder(probs_seq,
         raise ValueError("The shape of prob_seq does not match with the "
                          "shape of the vocabulary.")
-    # blank_id check
-    if not blank_id < len(probs_seq[0]):
-        raise ValueError("blank_id shouldn't be greater than probs dimension")
+    # blank_id assign
+    blank_id = len(vocabulary)
     # If the decoder called in the multiprocesses, then use the global scorer
     # instantiated in ctc_beam_search_decoder_batch().
@@ -111,7 +111,7 @@ def ctc_beam_search_decoder(probs_seq,
         prob_idx = list(enumerate(probs_seq[time_step]))
         cutoff_len = len(prob_idx)
         #If pruning is enabled
-        if cutoff_prob < 1.0:
+        if cutoff_prob < 1.0 or cutoff_top_n < cutoff_len:
             prob_idx = sorted(prob_idx, key=lambda asd: asd[1], reverse=True)
             cutoff_len, cum_prob = 0, 0.0
             for i in xrange(len(prob_idx)):
@@ -119,6 +119,7 @@ def ctc_beam_search_decoder(probs_seq,
                 cutoff_len += 1
                 if cum_prob >= cutoff_prob:
                     break
+            cutoff_len = min(cutoff_len, cutoff_top_n)
             prob_idx = prob_idx[0:cutoff_len]
         for l in prefix_set_prev:
@@ -177,6 +178,8 @@ def ctc_beam_search_decoder(probs_seq,
             prob = prob * ext_scoring_func(result)
             log_prob = log(prob)
             beam_result.append((log_prob, result))
+        else:
+            beam_result.append((float('-inf'), ''))
     ## output top beam_size decoding results
     beam_result = sorted(beam_result, key=lambda asd: asd[0], reverse=True)
@@ -186,9 +189,9 @@ def ctc_beam_search_decoder(probs_seq,
 def ctc_beam_search_decoder_batch(probs_split,
                                   beam_size,
                                   vocabulary,
-                                  blank_id,
                                   num_processes,
                                   cutoff_prob=1.0,
+                                  cutoff_top_n=40,
                                   ext_scoring_func=None):
     """CTC beam search decoder using multiple processes.
@@ -199,8 +202,6 @@ def ctc_beam_search_decoder_batch(probs_split,
     :type beam_size: int
     :param vocabulary: Vocabulary list.
     :type vocabulary: list
-    :param blank_id: ID of blank.
-    :type blank_id: int
     :param num_processes: Number of parallel processes.
     :type num_processes: int
     :param cutoff_prob: Cutoff probability in pruning,
@@ -227,8 +228,8 @@ def ctc_beam_search_decoder_batch(probs_split,
     pool = multiprocessing.Pool(processes=num_processes)
     results = []
     for i, probs_list in enumerate(probs_split):
-        args = (probs_list, beam_size, vocabulary, blank_id, cutoff_prob, None,
-                nproc)
+        args = (probs_list, beam_size, vocabulary, cutoff_prob, cutoff_top_n,
+                None, nproc)
         results.append(pool.apply_async(ctc_beam_search_decoder, args))
     pool.close()
......
@@ -8,7 +8,7 @@ import kenlm
 import numpy as np
-class LmScorer(object):
+class Scorer(object):
     """External scorer to evaluate a prefix or whole sentence in
     beam search decoding, including the score from n-gram language
     model and word count.
......
"""Set up paths for DS2"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os.path
import sys
def add_path(path):
if path not in sys.path:
sys.path.insert(0, path)
this_dir = os.path.dirname(__file__)
# Add project path to PYTHONPATH
proj_path = os.path.join(this_dir, '..')
add_path(proj_path)
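A minimal usage sketch for this helper, assuming a script that lives one directory below the project root (demo_server.py in this commit does exactly this):

    # hypothetical deploy/some_script.py
    import _init_paths                         # prepends the project root to sys.path
    from data_utils.data import DataGenerator  # project-level imports now resolve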
#include "ctc_beam_search_decoder.h"
#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>
#include <map>
#include <utility>
#include "ThreadPool.h"
#include "fst/fstlib.h"
#include "decoder_utils.h"
#include "path_trie.h"
using FSTMATCH = fst::SortedMatcher<fst::StdVectorFst>;
std::vector<std::pair<double, std::string>> ctc_beam_search_decoder(
const std::vector<std::vector<double>> &probs_seq,
const std::vector<std::string> &vocabulary,
size_t beam_size,
double cutoff_prob,
size_t cutoff_top_n,
Scorer *ext_scorer) {
// dimension check
size_t num_time_steps = probs_seq.size();
for (size_t i = 0; i < num_time_steps; ++i) {
VALID_CHECK_EQ(probs_seq[i].size(),
vocabulary.size() + 1,
"The shape of probs_seq does not match with "
"the shape of the vocabulary");
}
// assign blank id
size_t blank_id = vocabulary.size();
// assign space id
auto it = std::find(vocabulary.begin(), vocabulary.end(), " ");
int space_id = it - vocabulary.begin();
// if no space in vocabulary
if ((size_t)space_id >= vocabulary.size()) {
space_id = -2;
}
// init prefixes' root
PathTrie root;
root.score = root.log_prob_b_prev = 0.0;
std::vector<PathTrie *> prefixes;
prefixes.push_back(&root);
if (ext_scorer != nullptr && !ext_scorer->is_character_based()) {
auto fst_dict = static_cast<fst::StdVectorFst *>(ext_scorer->dictionary);
fst::StdVectorFst *dict_ptr = fst_dict->Copy(true);
root.set_dictionary(dict_ptr);
auto matcher = std::make_shared<FSTMATCH>(*dict_ptr, fst::MATCH_INPUT);
root.set_matcher(matcher);
}
// prefix search over time
for (size_t time_step = 0; time_step < num_time_steps; ++time_step) {
auto &prob = probs_seq[time_step];
float min_cutoff = -NUM_FLT_INF;
bool full_beam = false;
if (ext_scorer != nullptr) {
size_t num_prefixes = std::min(prefixes.size(), beam_size);
std::sort(
prefixes.begin(), prefixes.begin() + num_prefixes, prefix_compare);
min_cutoff = prefixes[num_prefixes - 1]->score +
std::log(prob[blank_id]) - std::max(0.0, ext_scorer->beta);
full_beam = (num_prefixes == beam_size);
}
std::vector<std::pair<size_t, float>> log_prob_idx =
get_pruned_log_probs(prob, cutoff_prob, cutoff_top_n);
// loop over chars
for (size_t index = 0; index < log_prob_idx.size(); index++) {
auto c = log_prob_idx[index].first;
auto log_prob_c = log_prob_idx[index].second;
for (size_t i = 0; i < prefixes.size() && i < beam_size; ++i) {
auto prefix = prefixes[i];
if (full_beam && log_prob_c + prefix->score < min_cutoff) {
break;
}
// blank
if (c == blank_id) {
prefix->log_prob_b_cur =
log_sum_exp(prefix->log_prob_b_cur, log_prob_c + prefix->score);
continue;
}
// repeated character
if (c == prefix->character) {
prefix->log_prob_nb_cur = log_sum_exp(
prefix->log_prob_nb_cur, log_prob_c + prefix->log_prob_nb_prev);
}
// get new prefix
auto prefix_new = prefix->get_path_trie(c);
if (prefix_new != nullptr) {
float log_p = -NUM_FLT_INF;
if (c == prefix->character &&
prefix->log_prob_b_prev > -NUM_FLT_INF) {
log_p = log_prob_c + prefix->log_prob_b_prev;
} else if (c != prefix->character) {
log_p = log_prob_c + prefix->score;
}
// language model scoring
if (ext_scorer != nullptr &&
(c == space_id || ext_scorer->is_character_based())) {
PathTrie *prefix_toscore = nullptr;
// skip scoring the space
if (ext_scorer->is_character_based()) {
prefix_toscore = prefix_new;
} else {
prefix_toscore = prefix;
}
double score = 0.0;
std::vector<std::string> ngram;
ngram = ext_scorer->make_ngram(prefix_toscore);
score = ext_scorer->get_log_cond_prob(ngram) * ext_scorer->alpha;
log_p += score;
log_p += ext_scorer->beta;
}
prefix_new->log_prob_nb_cur =
log_sum_exp(prefix_new->log_prob_nb_cur, log_p);
}
} // end of loop over prefix
} // end of loop over vocabulary
prefixes.clear();
// update log probs
root.iterate_to_vec(prefixes);
// only preserve top beam_size prefixes
if (prefixes.size() >= beam_size) {
std::nth_element(prefixes.begin(),
prefixes.begin() + beam_size,
prefixes.end(),
prefix_compare);
for (size_t i = beam_size; i < prefixes.size(); ++i) {
prefixes[i]->remove();
}
}
} // end of loop over time
  // compute the approximate ctc score as the returned score, without affecting
  // the order of decoding results. To be removed once the decoder is stable.
for (size_t i = 0; i < beam_size && i < prefixes.size(); ++i) {
double approx_ctc = prefixes[i]->score;
if (ext_scorer != nullptr) {
std::vector<int> output;
prefixes[i]->get_path_vec(output);
auto prefix_length = output.size();
auto words = ext_scorer->split_labels(output);
// remove word insert
approx_ctc = approx_ctc - prefix_length * ext_scorer->beta;
// remove language model weight:
approx_ctc -= (ext_scorer->get_sent_log_prob(words)) * ext_scorer->alpha;
}
prefixes[i]->approx_ctc = approx_ctc;
}
return get_beam_search_result(prefixes, vocabulary, beam_size);
}
std::vector<std::vector<std::pair<double, std::string>>>
ctc_beam_search_decoder_batch(
const std::vector<std::vector<std::vector<double>>> &probs_split,
const std::vector<std::string> &vocabulary,
size_t beam_size,
size_t num_processes,
double cutoff_prob,
size_t cutoff_top_n,
Scorer *ext_scorer) {
  VALID_CHECK_GT(num_processes, 0, "num_processes must be positive!");
// thread pool
ThreadPool pool(num_processes);
// number of samples
size_t batch_size = probs_split.size();
// enqueue the tasks of decoding
std::vector<std::future<std::vector<std::pair<double, std::string>>>> res;
for (size_t i = 0; i < batch_size; ++i) {
res.emplace_back(pool.enqueue(ctc_beam_search_decoder,
probs_split[i],
vocabulary,
beam_size,
cutoff_prob,
cutoff_top_n,
ext_scorer));
}
// get decoding results
std::vector<std::vector<std::pair<double, std::string>>> batch_results;
for (size_t i = 0; i < batch_size; ++i) {
batch_results.emplace_back(res[i].get());
}
return batch_results;
}
#ifndef CTC_BEAM_SEARCH_DECODER_H_
#define CTC_BEAM_SEARCH_DECODER_H_
#include <string>
#include <utility>
#include <vector>
#include "scorer.h"
/* CTC Beam Search Decoder
 * Parameters:
 *     probs_seq: 2-D vector in which each element is a vector of probabilities
 *                over the vocabulary for one time step.
 *     vocabulary: A vector of vocabulary.
 *     beam_size: The width of beam search.
 *     cutoff_prob: Cutoff probability for pruning.
 *     cutoff_top_n: Cutoff number for pruning.
 *     ext_scorer: External scorer to evaluate a prefix, which consists of
 *                 n-gram language model scoring and word insertion term.
 *                 Default null, decoding the input sample without scorer.
 * Return:
 *     A vector in which each element is a pair of score and decoding result,
 *     in descending order.
 */
std::vector<std::pair<double, std::string>> ctc_beam_search_decoder(
const std::vector<std::vector<double>> &probs_seq,
const std::vector<std::string> &vocabulary,
size_t beam_size,
double cutoff_prob = 1.0,
size_t cutoff_top_n = 40,
Scorer *ext_scorer = nullptr);
/* CTC Beam Search Decoder for batch data
 * Parameters:
 *     probs_split: 3-D vector in which each element is a 2-D vector that can
 *                  be used by ctc_beam_search_decoder().
 *     vocabulary: A vector of vocabulary.
 *     beam_size: The width of beam search.
 *     num_processes: Number of threads for beam search.
 *     cutoff_prob: Cutoff probability for pruning.
 *     cutoff_top_n: Cutoff number for pruning.
 *     ext_scorer: External scorer to evaluate a prefix, which consists of
 *                 n-gram language model scoring and word insertion term.
 *                 Default null, decoding the input sample without scorer.
 * Return:
 *     A 2-D vector in which each element is a vector of beam search decoding
 *     results for one audio sample.
 */
std::vector<std::vector<std::pair<double, std::string>>>
ctc_beam_search_decoder_batch(
const std::vector<std::vector<std::vector<double>>> &probs_split,
const std::vector<std::string> &vocabulary,
size_t beam_size,
size_t num_processes,
double cutoff_prob = 1.0,
size_t cutoff_top_n = 40,
Scorer *ext_scorer = nullptr);
#endif // CTC_BEAM_SEARCH_DECODER_H_
#include "ctc_greedy_decoder.h"
#include "decoder_utils.h"
std::string ctc_greedy_decoder(
const std::vector<std::vector<double>> &probs_seq,
const std::vector<std::string> &vocabulary) {
// dimension check
size_t num_time_steps = probs_seq.size();
for (size_t i = 0; i < num_time_steps; ++i) {
VALID_CHECK_EQ(probs_seq[i].size(),
vocabulary.size() + 1,
"The shape of probs_seq does not match with "
"the shape of the vocabulary");
}
size_t blank_id = vocabulary.size();
std::vector<size_t> max_idx_vec(num_time_steps, 0);
std::vector<size_t> idx_vec;
for (size_t i = 0; i < num_time_steps; ++i) {
double max_prob = 0.0;
size_t max_idx = 0;
const std::vector<double> &probs_step = probs_seq[i];
for (size_t j = 0; j < probs_step.size(); ++j) {
if (max_prob < probs_step[j]) {
max_idx = j;
max_prob = probs_step[j];
}
}
// id with maximum probability in current time step
max_idx_vec[i] = max_idx;
// deduplicate
if ((i == 0) || ((i > 0) && max_idx_vec[i] != max_idx_vec[i - 1])) {
idx_vec.push_back(max_idx_vec[i]);
}
}
std::string best_path_result;
for (size_t i = 0; i < idx_vec.size(); ++i) {
if (idx_vec[i] != blank_id) {
best_path_result += vocabulary[idx_vec[i]];
}
}
return best_path_result;
}
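For reference, a minimal Python rendering of the same greedy rule (argmax per step, collapse consecutive repeats, then drop blanks); names and toy values are illustrative:

    def greedy_decode(probs_seq, vocabulary):
        # blank is the extra last column appended after the vocabulary
        blank_id = len(vocabulary)
        best = [max(range(len(step)), key=step.__getitem__) for step in probs_seq]
        out, prev = [], None
        for idx in best:
            if idx != prev and idx != blank_id:  # collapse repeats, drop blanks
                out.append(vocabulary[idx])
            prev = idx
        return ''.join(out)

    probs = [[0.1, 0.2, 0.7],   # blank wins
             [0.6, 0.3, 0.1],   # 'a'
             [0.6, 0.3, 0.1]]   # 'a' again -> collapsed
    print(greedy_decode(probs, ['a', 'b']))  # -> 'a'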
#ifndef CTC_GREEDY_DECODER_H
#define CTC_GREEDY_DECODER_H
#include <string>
#include <vector>
/* CTC Greedy (Best Path) Decoder
 *
 * Parameters:
 *     probs_seq: 2-D vector in which each element is a vector of probabilities
 *                over the vocabulary for one time step.
 *     vocabulary: A vector of vocabulary.
 * Return:
 *     The decoding result as a string.
 */
std::string ctc_greedy_decoder(
const std::vector<std::vector<double>>& probs_seq,
const std::vector<std::string>& vocabulary);
#endif // CTC_GREEDY_DECODER_H
#include "decoder_utils.h"
#include <algorithm>
#include <cmath>
#include <limits>
std::vector<std::pair<size_t, float>> get_pruned_log_probs(
const std::vector<double> &prob_step,
double cutoff_prob,
size_t cutoff_top_n) {
std::vector<std::pair<int, double>> prob_idx;
for (size_t i = 0; i < prob_step.size(); ++i) {
prob_idx.push_back(std::pair<int, double>(i, prob_step[i]));
}
  // prune the vocabulary
size_t cutoff_len = prob_step.size();
  if (cutoff_prob < 1.0 || cutoff_top_n < cutoff_len) {
    std::sort(
        prob_idx.begin(), prob_idx.end(), pair_comp_second_rev<int, double>);
    if (cutoff_prob < 1.0) {
      double cum_prob = 0.0;
      cutoff_len = 0;
      for (size_t i = 0; i < prob_idx.size(); ++i) {
        cum_prob += prob_idx[i].second;
        cutoff_len += 1;
        if (cum_prob >= cutoff_prob || cutoff_len >= cutoff_top_n) break;
      }
    }
    // apply cutoff_top_n even when cutoff_prob == 1.0, matching the Python
    // reference implementation of the decoder
    cutoff_len = std::min(cutoff_len, cutoff_top_n);
    prob_idx = std::vector<std::pair<int, double>>(
        prob_idx.begin(), prob_idx.begin() + cutoff_len);
  }
std::vector<std::pair<size_t, float>> log_prob_idx;
for (size_t i = 0; i < cutoff_len; ++i) {
log_prob_idx.push_back(std::pair<int, float>(
prob_idx[i].first, log(prob_idx[i].second + NUM_FLT_MIN)));
}
return log_prob_idx;
}
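The same pruning rule as a self-contained Python sketch, mirroring the Python reference implementation earlier in this commit (names are illustrative):

    from math import log

    def pruned_log_probs(prob_step, cutoff_prob=1.0, cutoff_top_n=40):
        # pair each vocabulary index with its probability, sorted descending
        prob_idx = sorted(enumerate(prob_step), key=lambda p: p[1], reverse=True)
        cutoff_len = len(prob_idx)
        if cutoff_prob < 1.0 or cutoff_top_n < cutoff_len:
            cum_prob, cutoff_len = 0.0, 0
            for _, p in prob_idx:
                cum_prob += p
                cutoff_len += 1
                if cum_prob >= cutoff_prob or cutoff_len >= cutoff_top_n:
                    break
        return [(i, log(p)) for i, p in prob_idx[:cutoff_len]]

    print(pruned_log_probs([0.5, 0.3, 0.15, 0.05], cutoff_prob=0.9))
    # keeps indices 0, 1, 2 (cumulative probability 0.95 >= 0.9)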
std::vector<std::pair<double, std::string>> get_beam_search_result(
const std::vector<PathTrie *> &prefixes,
const std::vector<std::string> &vocabulary,
size_t beam_size) {
  // keep only the top beam_size prefixes for post processing
  std::vector<PathTrie *> space_prefixes;
  for (size_t i = 0; i < beam_size && i < prefixes.size(); ++i) {
    space_prefixes.push_back(prefixes[i]);
  }
std::sort(space_prefixes.begin(), space_prefixes.end(), prefix_compare);
std::vector<std::pair<double, std::string>> output_vecs;
for (size_t i = 0; i < beam_size && i < space_prefixes.size(); ++i) {
std::vector<int> output;
space_prefixes[i]->get_path_vec(output);
// convert index to string
std::string output_str;
for (size_t j = 0; j < output.size(); j++) {
output_str += vocabulary[output[j]];
}
std::pair<double, std::string> output_pair(-space_prefixes[i]->approx_ctc,
output_str);
output_vecs.emplace_back(output_pair);
}
return output_vecs;
}
size_t get_utf8_str_len(const std::string &str) {
size_t str_len = 0;
for (char c : str) {
str_len += ((c & 0xc0) != 0x80);
}
return str_len;
}
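The byte test above relies on UTF-8 continuation bytes having the form 0b10xxxxxx; an equivalent throwaway check in Python 3 (where iterating over bytes yields ints):

    def utf8_len(s):
        # count the bytes that are not continuation bytes
        return sum((b & 0xC0) != 0x80 for b in s.encode('utf-8'))

    assert utf8_len('abc') == 3
    assert utf8_len('你好') == 2  # two 3-byte characters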
std::vector<std::string> split_utf8_str(const std::string &str) {
std::vector<std::string> result;
std::string out_str;
for (char c : str) {
if ((c & 0xc0) != 0x80) // new UTF-8 character
{
if (!out_str.empty()) {
result.push_back(out_str);
out_str.clear();
}
}
out_str.append(1, c);
}
result.push_back(out_str);
return result;
}
std::vector<std::string> split_str(const std::string &s,
const std::string &delim) {
std::vector<std::string> result;
std::size_t start = 0, delim_len = delim.size();
while (true) {
std::size_t end = s.find(delim, start);
if (end == std::string::npos) {
if (start < s.size()) {
result.push_back(s.substr(start));
}
break;
}
if (end > start) {
result.push_back(s.substr(start, end - start));
}
start = end + delim_len;
}
return result;
}
bool prefix_compare(const PathTrie *x, const PathTrie *y) {
if (x->score == y->score) {
if (x->character == y->character) {
return false;
} else {
return (x->character < y->character);
}
} else {
return x->score > y->score;
}
}
void add_word_to_fst(const std::vector<int> &word,
fst::StdVectorFst *dictionary) {
if (dictionary->NumStates() == 0) {
fst::StdVectorFst::StateId start = dictionary->AddState();
assert(start == 0);
dictionary->SetStart(start);
}
fst::StdVectorFst::StateId src = dictionary->Start();
fst::StdVectorFst::StateId dst;
for (auto c : word) {
dst = dictionary->AddState();
dictionary->AddArc(src, fst::StdArc(c, c, 0, dst));
src = dst;
}
dictionary->SetFinal(dst, fst::StdArc::Weight::One());
}
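The FST built here plays the role of a character trie; a minimal dict-based Python sketch of the same idea (illustrative only — the real code uses OpenFst states and arcs):

    def add_word_to_trie(word_ids, trie):
        # each character index labels an edge; '<final>' marks a word end
        node = trie
        for c in word_ids:
            node = node.setdefault(c, {})
        node['<final>'] = True

    trie = {}
    add_word_to_trie([7, 4, 11, 11, 14], trie)  # e.g. 'hello' as indices
    print(7 in trie)  # -> True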
bool add_word_to_dictionary(
const std::string &word,
const std::unordered_map<std::string, int> &char_map,
bool add_space,
int SPACE_ID,
fst::StdVectorFst *dictionary) {
auto characters = split_utf8_str(word);
std::vector<int> int_word;
for (auto &c : characters) {
if (c == " ") {
int_word.push_back(SPACE_ID);
} else {
auto int_c = char_map.find(c);
if (int_c != char_map.end()) {
int_word.push_back(int_c->second);
} else {
return false; // return without adding
}
}
}
if (add_space) {
int_word.push_back(SPACE_ID);
}
add_word_to_fst(int_word, dictionary);
return true; // return with successful adding
}
#ifndef DECODER_UTILS_H_
#define DECODER_UTILS_H_
#include <utility>
#include "fst/log.h"
#include "path_trie.h"
const float NUM_FLT_INF = std::numeric_limits<float>::max();
const float NUM_FLT_MIN = std::numeric_limits<float>::min();
// inline function for validation check
inline void check(
bool x, const char *expr, const char *file, int line, const char *err) {
if (!x) {
std::cout << "[" << file << ":" << line << "] ";
LOG(FATAL) << "\"" << expr << "\" check failed. " << err;
}
}
#define VALID_CHECK(x, info) \
check(static_cast<bool>(x), #x, __FILE__, __LINE__, info)
#define VALID_CHECK_EQ(x, y, info) VALID_CHECK((x) == (y), info)
#define VALID_CHECK_GT(x, y, info) VALID_CHECK((x) > (y), info)
#define VALID_CHECK_LT(x, y, info) VALID_CHECK((x) < (y), info)
// Function template for comparing two pairs
template <typename T1, typename T2>
bool pair_comp_first_rev(const std::pair<T1, T2> &a,
const std::pair<T1, T2> &b) {
return a.first > b.first;
}
// Function template for comparing two pairs
template <typename T1, typename T2>
bool pair_comp_second_rev(const std::pair<T1, T2> &a,
const std::pair<T1, T2> &b) {
return a.second > b.second;
}
// Return the sum of two probabilities in log scale
template <typename T>
T log_sum_exp(const T &x, const T &y) {
static T num_min = -std::numeric_limits<T>::max();
if (x <= num_min) return y;
if (y <= num_min) return x;
T xmax = std::max(x, y);
return std::log(std::exp(x - xmax) + std::exp(y - xmax)) + xmax;
}
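A quick numeric sanity check of this identity, as a throwaway Python sketch (not part of the build):

    from math import exp, log

    def log_sum_exp(x, y):
        # log(exp(x) + exp(y)) computed stably, as in the template above
        if x == float('-inf'): return y
        if y == float('-inf'): return x
        m = max(x, y)
        return log(exp(x - m) + exp(y - m)) + m

    assert abs(log_sum_exp(log(0.2), log(0.3)) - log(0.5)) < 1e-12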
// Get pruned probability vector for each time step's beam search
std::vector<std::pair<size_t, float>> get_pruned_log_probs(
const std::vector<double> &prob_step,
double cutoff_prob,
size_t cutoff_top_n);
// Get beam search result from prefixes in trie tree
std::vector<std::pair<double, std::string>> get_beam_search_result(
const std::vector<PathTrie *> &prefixes,
const std::vector<std::string> &vocabulary,
size_t beam_size);
// Functor for prefix comparison
bool prefix_compare(const PathTrie *x, const PathTrie *y);
/* Get length of utf8 encoding string
* See: http://stackoverflow.com/a/4063229
*/
size_t get_utf8_str_len(const std::string &str);
/* Split a string into a list of strings on a given string
* delimiter. NB: delimiters on beginning / end of string are
* trimmed. Eg, "FooBarFoo" split on "Foo" returns ["Bar"].
*/
std::vector<std::string> split_str(const std::string &s,
const std::string &delim);
/* Splits string into vector of strings representing
* UTF-8 characters (not same as chars)
*/
std::vector<std::string> split_utf8_str(const std::string &str);
// Add a word (as a vector of character indices) to the dictionary FST
void add_word_to_fst(const std::vector<int> &word,
fst::StdVectorFst *dictionary);
// Add a word (as a string) to the dictionary FST
bool add_word_to_dictionary(
const std::string &word,
const std::unordered_map<std::string, int> &char_map,
bool add_space,
int SPACE_ID,
fst::StdVectorFst *dictionary);
#endif  // DECODER_UTILS_H_
%module swig_decoders
%{
#include "scorer.h"
#include "ctc_greedy_decoder.h"
#include "ctc_beam_search_decoder.h"
#include "decoder_utils.h"
%}
%include "std_vector.i"
%include "std_pair.i"
%include "std_string.i"
%import "decoder_utils.h"
namespace std {
%template(DoubleVector) std::vector<double>;
%template(IntVector) std::vector<int>;
%template(StringVector) std::vector<std::string>;
%template(VectorOfStructVector) std::vector<std::vector<double> >;
%template(FloatVector) std::vector<float>;
%template(Pair) std::pair<float, std::string>;
%template(PairFloatStringVector) std::vector<std::pair<float, std::string> >;
%template(PairDoubleStringVector) std::vector<std::pair<double, std::string> >;
%template(PairDoubleStringVector2) std::vector<std::vector<std::pair<double, std::string> > >;
%template(DoubleVector3) std::vector<std::vector<std::vector<double> > >;
}
%template(IntDoublePairCompSecondRev) pair_comp_second_rev<int, double>;
%template(StringDoublePairCompSecondRev) pair_comp_second_rev<std::string, double>;
%template(DoubleStringPairCompFirstRev) pair_comp_first_rev<double, std::string>;
%include "scorer.h"
%include "ctc_greedy_decoder.h"
%include "ctc_beam_search_decoder.h"
#include "path_trie.h"
#include <algorithm>
#include <limits>
#include <memory>
#include <utility>
#include <vector>
#include "decoder_utils.h"
PathTrie::PathTrie() {
log_prob_b_prev = -NUM_FLT_INF;
log_prob_nb_prev = -NUM_FLT_INF;
log_prob_b_cur = -NUM_FLT_INF;
log_prob_nb_cur = -NUM_FLT_INF;
score = -NUM_FLT_INF;
ROOT_ = -1;
character = ROOT_;
exists_ = true;
parent = nullptr;
dictionary_ = nullptr;
dictionary_state_ = 0;
has_dictionary_ = false;
matcher_ = nullptr;
}
PathTrie::~PathTrie() {
for (auto child : children_) {
delete child.second;
}
}
PathTrie* PathTrie::get_path_trie(int new_char, bool reset) {
auto child = children_.begin();
for (child = children_.begin(); child != children_.end(); ++child) {
if (child->first == new_char) {
break;
}
}
if (child != children_.end()) {
if (!child->second->exists_) {
child->second->exists_ = true;
child->second->log_prob_b_prev = -NUM_FLT_INF;
child->second->log_prob_nb_prev = -NUM_FLT_INF;
child->second->log_prob_b_cur = -NUM_FLT_INF;
child->second->log_prob_nb_cur = -NUM_FLT_INF;
}
return (child->second);
} else {
if (has_dictionary_) {
matcher_->SetState(dictionary_state_);
bool found = matcher_->Find(new_char);
if (!found) {
        // adding this character would lead to a word outside the dictionary
auto FSTZERO = fst::TropicalWeight::Zero();
auto final_weight = dictionary_->Final(dictionary_state_);
bool is_final = (final_weight != FSTZERO);
if (is_final && reset) {
dictionary_state_ = dictionary_->Start();
}
return nullptr;
} else {
PathTrie* new_path = new PathTrie;
new_path->character = new_char;
new_path->parent = this;
new_path->dictionary_ = dictionary_;
new_path->dictionary_state_ = matcher_->Value().nextstate;
new_path->has_dictionary_ = true;
new_path->matcher_ = matcher_;
children_.push_back(std::make_pair(new_char, new_path));
return new_path;
}
} else {
PathTrie* new_path = new PathTrie;
new_path->character = new_char;
new_path->parent = this;
children_.push_back(std::make_pair(new_char, new_path));
return new_path;
}
}
}
PathTrie* PathTrie::get_path_vec(std::vector<int>& output) {
return get_path_vec(output, ROOT_);
}
PathTrie* PathTrie::get_path_vec(std::vector<int>& output,
int stop,
size_t max_steps) {
if (character == stop || character == ROOT_ || output.size() == max_steps) {
std::reverse(output.begin(), output.end());
return this;
} else {
output.push_back(character);
return parent->get_path_vec(output, stop, max_steps);
}
}
void PathTrie::iterate_to_vec(std::vector<PathTrie*>& output) {
if (exists_) {
log_prob_b_prev = log_prob_b_cur;
log_prob_nb_prev = log_prob_nb_cur;
log_prob_b_cur = -NUM_FLT_INF;
log_prob_nb_cur = -NUM_FLT_INF;
score = log_sum_exp(log_prob_b_prev, log_prob_nb_prev);
output.push_back(this);
}
for (auto child : children_) {
child.second->iterate_to_vec(output);
}
}
void PathTrie::remove() {
exists_ = false;
if (children_.size() == 0) {
auto child = parent->children_.begin();
for (child = parent->children_.begin(); child != parent->children_.end();
++child) {
if (child->first == character) {
parent->children_.erase(child);
break;
}
}
if (parent->children_.size() == 0 && !parent->exists_) {
parent->remove();
}
delete this;
}
}
void PathTrie::set_dictionary(fst::StdVectorFst* dictionary) {
dictionary_ = dictionary;
dictionary_state_ = dictionary->Start();
has_dictionary_ = true;
}
using FSTMATCH = fst::SortedMatcher<fst::StdVectorFst>;
void PathTrie::set_matcher(std::shared_ptr<FSTMATCH> matcher) {
matcher_ = matcher;
}
#ifndef PATH_TRIE_H
#define PATH_TRIE_H
#include <algorithm>
#include <limits>
#include <memory>
#include <utility>
#include <vector>
#include "fst/fstlib.h"
/* Trie tree for storing and manipulating prefixes, with a dictionary
 * (a finite-state transducer) for spelling correction.
 */
class PathTrie {
public:
PathTrie();
~PathTrie();
// get new prefix after appending new char
PathTrie* get_path_trie(int new_char, bool reset = true);
  // get the prefix (as character indices) from the root to the current node
PathTrie* get_path_vec(std::vector<int>& output);
  // get the prefix (as character indices) from a given stop node to the current node
PathTrie* get_path_vec(std::vector<int>& output,
int stop,
size_t max_steps = std::numeric_limits<size_t>::max());
// update log probs
void iterate_to_vec(std::vector<PathTrie*>& output);
// set dictionary for FST
void set_dictionary(fst::StdVectorFst* dictionary);
void set_matcher(std::shared_ptr<fst::SortedMatcher<fst::StdVectorFst>>);
bool is_empty() { return ROOT_ == character; }
// remove current path from root
void remove();
float log_prob_b_prev;
float log_prob_nb_prev;
float log_prob_b_cur;
float log_prob_nb_cur;
float score;
float approx_ctc;
int character;
PathTrie* parent;
private:
int ROOT_;
bool exists_;
bool has_dictionary_;
std::vector<std::pair<int, PathTrie*>> children_;
// pointer to dictionary of FST
fst::StdVectorFst* dictionary_;
fst::StdVectorFst::StateId dictionary_state_;
  // matcher for finding arcs in the FST
std::shared_ptr<fst::SortedMatcher<fst::StdVectorFst>> matcher_;
};
#endif // PATH_TRIE_H
#include "scorer.h"
#include <unistd.h>
#include <iostream>
#include "lm/config.hh"
#include "lm/model.hh"
#include "lm/state.hh"
#include "util/string_piece.hh"
#include "util/tokenize_piece.hh"
#include "decoder_utils.h"
using namespace lm::ngram;
Scorer::Scorer(double alpha,
double beta,
const std::string& lm_path,
const std::vector<std::string>& vocab_list) {
this->alpha = alpha;
this->beta = beta;
dictionary = nullptr;
is_character_based_ = true;
language_model_ = nullptr;
max_order_ = 0;
dict_size_ = 0;
SPACE_ID_ = -1;
setup(lm_path, vocab_list);
}
Scorer::~Scorer() {
if (language_model_ != nullptr) {
delete static_cast<lm::base::Model*>(language_model_);
}
if (dictionary != nullptr) {
delete static_cast<fst::StdVectorFst*>(dictionary);
}
}
void Scorer::setup(const std::string& lm_path,
const std::vector<std::string>& vocab_list) {
// load language model
load_lm(lm_path);
// set char map for scorer
set_char_map(vocab_list);
// fill the dictionary for FST
if (!is_character_based()) {
fill_dictionary(true);
}
}
void Scorer::load_lm(const std::string& lm_path) {
const char* filename = lm_path.c_str();
VALID_CHECK_EQ(access(filename, F_OK), 0, "Invalid language model path");
RetriveStrEnumerateVocab enumerate;
lm::ngram::Config config;
config.enumerate_vocab = &enumerate;
language_model_ = lm::ngram::LoadVirtual(filename, config);
max_order_ = static_cast<lm::base::Model*>(language_model_)->Order();
vocabulary_ = enumerate.vocabulary;
for (size_t i = 0; i < vocabulary_.size(); ++i) {
if (is_character_based_ && vocabulary_[i] != UNK_TOKEN &&
vocabulary_[i] != START_TOKEN && vocabulary_[i] != END_TOKEN &&
get_utf8_str_len(enumerate.vocabulary[i]) > 1) {
is_character_based_ = false;
}
}
}
double Scorer::get_log_cond_prob(const std::vector<std::string>& words) {
lm::base::Model* model = static_cast<lm::base::Model*>(language_model_);
  double cond_prob = OOV_SCORE;  // fallback if words is empty
lm::ngram::State state, tmp_state, out_state;
  // avoid inserting <s> at the beginning
model->NullContextWrite(&state);
for (size_t i = 0; i < words.size(); ++i) {
lm::WordIndex word_index = model->BaseVocabulary().Index(words[i]);
// encounter OOV
if (word_index == 0) {
return OOV_SCORE;
}
cond_prob = model->BaseScore(&state, word_index, &out_state);
tmp_state = state;
state = out_state;
out_state = tmp_state;
}
// return log10 prob
return cond_prob;
}
double Scorer::get_sent_log_prob(const std::vector<std::string>& words) {
std::vector<std::string> sentence;
if (words.size() == 0) {
for (size_t i = 0; i < max_order_; ++i) {
sentence.push_back(START_TOKEN);
}
} else {
for (size_t i = 0; i < max_order_ - 1; ++i) {
sentence.push_back(START_TOKEN);
}
sentence.insert(sentence.end(), words.begin(), words.end());
}
sentence.push_back(END_TOKEN);
return get_log_prob(sentence);
}
double Scorer::get_log_prob(const std::vector<std::string>& words) {
assert(words.size() > max_order_);
double score = 0.0;
for (size_t i = 0; i < words.size() - max_order_ + 1; ++i) {
std::vector<std::string> ngram(words.begin() + i,
words.begin() + i + max_order_);
score += get_log_cond_prob(ngram);
}
return score;
}
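Concretely, get_sent_log_prob pads with <s>, appends </s>, and sums conditional scores over sliding windows of length max_order; a toy Python sketch, where cond_log_prob stands in for the KenLM query:

    def sent_log_prob(words, cond_log_prob, max_order=3):
        # pad so the first real word sees max_order - 1 context tokens
        padded = ['<s>'] * (max_order - 1) + list(words) + ['</s>']
        return sum(cond_log_prob(padded[i:i + max_order])
                   for i in range(len(padded) - max_order + 1))

    # toy scorer: every conditional log10-prob is -1.0
    print(sent_log_prob(['hello', 'world'], lambda ngram: -1.0))  # -> -2.0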
void Scorer::reset_params(float alpha, float beta) {
this->alpha = alpha;
this->beta = beta;
}
std::string Scorer::vec2str(const std::vector<int>& input) {
std::string word;
for (auto ind : input) {
word += char_list_[ind];
}
return word;
}
std::vector<std::string> Scorer::split_labels(const std::vector<int>& labels) {
if (labels.empty()) return {};
std::string s = vec2str(labels);
std::vector<std::string> words;
if (is_character_based_) {
words = split_utf8_str(s);
} else {
words = split_str(s, " ");
}
return words;
}
void Scorer::set_char_map(const std::vector<std::string>& char_list) {
char_list_ = char_list;
char_map_.clear();
for (size_t i = 0; i < char_list_.size(); i++) {
if (char_list_[i] == " ") {
SPACE_ID_ = i;
char_map_[' '] = i;
} else if (char_list_[i].size() == 1) {
char_map_[char_list_[i][0]] = i;
}
}
}
std::vector<std::string> Scorer::make_ngram(PathTrie* prefix) {
std::vector<std::string> ngram;
PathTrie* current_node = prefix;
PathTrie* new_node = nullptr;
for (int order = 0; order < max_order_; order++) {
std::vector<int> prefix_vec;
if (is_character_based_) {
new_node = current_node->get_path_vec(prefix_vec, SPACE_ID_, 1);
current_node = new_node;
} else {
new_node = current_node->get_path_vec(prefix_vec, SPACE_ID_);
current_node = new_node->parent; // Skipping spaces
}
// reconstruct word
std::string word = vec2str(prefix_vec);
ngram.push_back(word);
if (new_node->character == -1) {
      // ran out of spaces, but the n-gram still needs padding to max_order
for (int i = 0; i < max_order_ - order - 1; i++) {
ngram.push_back(START_TOKEN);
}
break;
}
}
std::reverse(ngram.begin(), ngram.end());
return ngram;
}
void Scorer::fill_dictionary(bool add_space) {
fst::StdVectorFst dictionary;
  // invert char_list so that indices can be looked up by character
std::unordered_map<std::string, int> char_map;
for (size_t i = 0; i < char_list_.size(); i++) {
char_map[char_list_[i]] = i;
}
// For each unigram convert to ints and put in trie
int dict_size = 0;
for (const auto& word : vocabulary_) {
bool added = add_word_to_dictionary(
word, char_map, add_space, SPACE_ID_, &dictionary);
dict_size += added ? 1 : 0;
}
dict_size_ = dict_size;
  /* Simplify FST
   * This gets rid of "epsilon" transitions in the FST.
   * These are transitions that don't require a string input to be taken.
   * Getting rid of them is necessary to make the FST deterministic, but
   * can greatly increase the size of the FST.
   */
fst::RmEpsilon(&dictionary);
fst::StdVectorFst* new_dict = new fst::StdVectorFst;
/* This makes the FST deterministic, meaning for any string input there's
* only one possible state the FST could be in. It is assumed our
* dictionary is deterministic when using it.
* (lest we'd have to check for multiple transitions at each state)
*/
fst::Determinize(dictionary, new_dict);
  /* Find the simplest equivalent FST. This is not strictly necessary,
   * but decreases memory usage of the dictionary.
   */
fst::Minimize(new_dict);
this->dictionary = new_dict;
}
#ifndef SCORER_H_
#define SCORER_H_
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>
#include "lm/enumerate_vocab.hh"
#include "lm/virtual_interface.hh"
#include "lm/word_index.hh"
#include "util/string_piece.hh"
#include "path_trie.h"
const double OOV_SCORE = -1000.0;
const std::string START_TOKEN = "<s>";
const std::string UNK_TOKEN = "<unk>";
const std::string END_TOKEN = "</s>";
// Implement a callback to retrieve the vocabulary of the language model.
class RetriveStrEnumerateVocab : public lm::EnumerateVocab {
public:
RetriveStrEnumerateVocab() {}
void Add(lm::WordIndex index, const StringPiece &str) {
vocabulary.push_back(std::string(str.data(), str.length()));
}
std::vector<std::string> vocabulary;
};
/* External scorer to query the score of an n-gram or a sentence, including
 * language model scoring and word insertion.
*
* Example:
* Scorer scorer(alpha, beta, "path_of_language_model");
* scorer.get_log_cond_prob({ "WORD1", "WORD2", "WORD3" });
* scorer.get_sent_log_prob({ "WORD1", "WORD2", "WORD3" });
*/
class Scorer {
public:
Scorer(double alpha,
double beta,
const std::string &lm_path,
const std::vector<std::string> &vocabulary);
~Scorer();
double get_log_cond_prob(const std::vector<std::string> &words);
double get_sent_log_prob(const std::vector<std::string> &words);
// return the max order
size_t get_max_order() const { return max_order_; }
// return the dictionary size of language model
size_t get_dict_size() const { return dict_size_; }
  // return true if the language model is character based
bool is_character_based() const { return is_character_based_; }
// reset params alpha & beta
void reset_params(float alpha, float beta);
// make ngram for a given prefix
std::vector<std::string> make_ngram(PathTrie *prefix);
  // transform the labels (as indices) into a vector of words (word-based lm)
  // or a vector of characters (character-based lm)
std::vector<std::string> split_labels(const std::vector<int> &labels);
// language model weight
double alpha;
// word insertion weight
double beta;
// pointer to the dictionary of FST
void *dictionary;
protected:
// necessary setup: load language model, set char map, fill FST's dictionary
void setup(const std::string &lm_path,
const std::vector<std::string> &vocab_list);
// load language model from given path
void load_lm(const std::string &lm_path);
// fill dictionary for FST
void fill_dictionary(bool add_space);
// set char map
void set_char_map(const std::vector<std::string> &char_list);
double get_log_prob(const std::vector<std::string> &words);
// translate the vector in index to string
std::string vec2str(const std::vector<int> &input);
private:
void *language_model_;
bool is_character_based_;
size_t max_order_;
size_t dict_size_;
int SPACE_ID_;
std::vector<std::string> char_list_;
std::unordered_map<char, int> char_map_;
std::vector<std::string> vocabulary_;
};
#endif // SCORER_H_
"""Script to build and install decoder package."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from setuptools import setup, Extension, distutils
import glob
import platform
import os, sys
import multiprocessing.pool
import argparse
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--num_processes",
default=1,
type=int,
help="Number of cpu processes to build package. (default: %(default)d)")
args = parser.parse_known_args()
# reconstruct sys.argv to pass to setup below
sys.argv = [sys.argv[0]] + args[1]
# monkey-patch for parallel compilation
# See: https://stackoverflow.com/a/13176803
def parallelCCompile(self,
sources,
output_dir=None,
macros=None,
include_dirs=None,
debug=0,
extra_preargs=None,
extra_postargs=None,
depends=None):
# those lines are copied from distutils.ccompiler.CCompiler directly
macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
output_dir, macros, include_dirs, sources, depends, extra_postargs)
cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)
# parallel code
def _single_compile(obj):
try:
src, ext = build[obj]
except KeyError:
return
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
# convert to list, imap is evaluated on-demand
thread_pool = multiprocessing.pool.ThreadPool(args[0].num_processes)
list(thread_pool.imap(_single_compile, objects))
return objects
def compile_test(header, library):
dummy_path = os.path.join(os.path.dirname(__file__), "dummy")
command = "bash -c \"g++ -include " + header \
+ " -l" + library + " -x c++ - <<<'int main() {}' -o " \
+ dummy_path + " >/dev/null 2>/dev/null && rm " \
+ dummy_path + " 2>/dev/null\""
return os.system(command) == 0
# hack compile to support parallel compiling
distutils.ccompiler.CCompiler.compile = parallelCCompile
FILES = glob.glob('kenlm/util/*.cc') \
+ glob.glob('kenlm/lm/*.cc') \
+ glob.glob('kenlm/util/double-conversion/*.cc')
FILES += glob.glob('openfst-1.6.3/src/lib/*.cc')
# FILES + glob.glob('glog/src/*.cc')
FILES = [
fn for fn in FILES
if not (fn.endswith('main.cc') or fn.endswith('test.cc') or fn.endswith(
'unittest.cc'))
]
LIBS = ['stdc++']
if platform.system() != 'Darwin':
LIBS.append('rt')
ARGS = ['-O3', '-DNDEBUG', '-DKENLM_MAX_ORDER=6', '-std=c++11']
if compile_test('zlib.h', 'z'):
ARGS.append('-DHAVE_ZLIB')
LIBS.append('z')
if compile_test('bzlib.h', 'bz2'):
ARGS.append('-DHAVE_BZLIB')
LIBS.append('bz2')
if compile_test('lzma.h', 'lzma'):
ARGS.append('-DHAVE_XZLIB')
LIBS.append('lzma')
os.system('swig -python -c++ ./decoders.i')
decoders_module = [
Extension(
name='_swig_decoders',
sources=FILES + glob.glob('*.cxx') + glob.glob('*.cpp'),
language='c++',
include_dirs=[
'.',
'kenlm',
'openfst-1.6.3/src/include',
'ThreadPool',
#'glog/src'
],
libraries=LIBS,
extra_compile_args=ARGS)
]
setup(
name='swig_decoders',
version='0.1',
description="""CTC decoders""",
ext_modules=decoders_module,
py_modules=['swig_decoders'], )
#!/usr/bin/env bash
if [ ! -d kenlm ]; then
git clone https://github.com/luotao1/kenlm.git
echo -e "\n"
fi
if [ ! -d openfst-1.6.3 ]; then
echo "Download and extract openfst ..."
wget http://www.openfst.org/twiki/pub/FST/FstDownload/openfst-1.6.3.tar.gz
tar -xzvf openfst-1.6.3.tar.gz
echo -e "\n"
fi
if [ ! -d ThreadPool ]; then
git clone https://github.com/progschj/ThreadPool.git
echo -e "\n"
fi
echo "Install decoders ..."
python setup.py install --num_processes 4
"""Wrapper for various CTC decoders in SWIG."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import swig_decoders
class Scorer(swig_decoders.Scorer):
"""Wrapper for Scorer.
:param alpha: Parameter associated with language model. Don't use
language model when alpha = 0.
:type alpha: float
:param beta: Parameter associated with word count. Don't use word
count when beta = 0.
:type beta: float
    :param model_path: Path to load language model.
:type model_path: basestring
"""
def __init__(self, alpha, beta, model_path, vocabulary):
swig_decoders.Scorer.__init__(self, alpha, beta, model_path, vocabulary)
def ctc_greedy_decoder(probs_seq, vocabulary):
"""Wrapper for ctc best path decoder in swig.
:param probs_seq: 2-D list of probability distributions over each time
step, with each element being a list of normalized
probabilities over vocabulary and blank.
:type probs_seq: 2-D list
:param vocabulary: Vocabulary list.
:type vocabulary: list
:return: Decoding result string.
:rtype: basestring
"""
return swig_decoders.ctc_greedy_decoder(probs_seq.tolist(), vocabulary)
def ctc_beam_search_decoder(probs_seq,
vocabulary,
beam_size,
cutoff_prob=1.0,
cutoff_top_n=40,
ext_scoring_func=None):
"""Wrapper for the CTC Beam Search Decoder.
:param probs_seq: 2-D list of probability distributions over each time
step, with each element being a list of normalized
probabilities over vocabulary and blank.
:type probs_seq: 2-D list
:param vocabulary: Vocabulary list.
:type vocabulary: list
:param beam_size: Width for beam search.
:type beam_size: int
:param cutoff_prob: Cutoff probability in pruning,
default 1.0, no pruning.
:type cutoff_prob: float
:param cutoff_top_n: Cutoff number in pruning, only top cutoff_top_n
characters with highest probs in vocabulary will be
used in beam search, default 40.
:type cutoff_top_n: int
:param ext_scoring_func: External scoring function for
partially decoded sentence, e.g. word count
or language model.
    :type ext_scoring_func: callable
:return: List of tuples of log probability and sentence as decoding
results, in descending order of the probability.
:rtype: list
"""
return swig_decoders.ctc_beam_search_decoder(probs_seq.tolist(), vocabulary,
beam_size, cutoff_prob,
cutoff_top_n, ext_scoring_func)
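A minimal usage sketch with toy values (assumes the swig_decoders extension has been built and installed):

    import numpy as np

    # 2 time steps over vocabulary ['a', 'b'] plus the trailing blank column
    probs = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.1, 0.7]])
    results = ctc_beam_search_decoder(probs, ['a', 'b'], beam_size=4)
    print(results[0])  # (log probability, best transcript)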
def ctc_beam_search_decoder_batch(probs_split,
vocabulary,
beam_size,
num_processes,
cutoff_prob=1.0,
cutoff_top_n=40,
ext_scoring_func=None):
"""Wrapper for the batched CTC beam search decoder.
    :param probs_split: 3-D list with each element as an instance of 2-D list
                        of probabilities used by ctc_beam_search_decoder().
    :type probs_split: 3-D list
:param vocabulary: Vocabulary list.
:type vocabulary: list
:param beam_size: Width for beam search.
:type beam_size: int
:param num_processes: Number of parallel processes.
:type num_processes: int
:param cutoff_prob: Cutoff probability in vocabulary pruning,
default 1.0, no pruning.
:type cutoff_prob: float
:param cutoff_top_n: Cutoff number in pruning, only top cutoff_top_n
characters with highest probs in vocabulary will be
used in beam search, default 40.
:type cutoff_top_n: int
:param ext_scoring_func: External scoring function for
partially decoded sentence, e.g. word count
or language model.
    :type ext_scoring_func: callable
:return: List of tuples of log probability and sentence as decoding
results, in descending order of the probability.
:rtype: list
"""
probs_split = [probs_seq.tolist() for probs_seq in probs_split]
return swig_decoders.ctc_beam_search_decoder_batch(
probs_split, vocabulary, beam_size, num_processes, cutoff_prob,
cutoff_top_n, ext_scoring_func)
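And a matching sketch for the batched variant (again with toy posteriors; rows are normalized so each time step sums to one):

    import numpy as np

    np.random.seed(0)
    batch = [np.random.rand(20, 29) for _ in range(2)]           # 28 chars + blank
    batch = [p / p.sum(axis=1, keepdims=True) for p in batch]
    vocab = [chr(c) for c in range(ord('a'), ord('z') + 1)] + [' ', "'"]
    results = ctc_beam_search_decoder_batch(
        batch, vocab, beam_size=20, num_processes=2)
    print([r[0][1] for r in results])  # best transcript per utterance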
...@@ -4,7 +4,7 @@ from __future__ import division ...@@ -4,7 +4,7 @@ from __future__ import division
from __future__ import print_function from __future__ import print_function
import unittest import unittest
from decoder import * from decoders import decoders_deprecated as decoder
class TestDecoders(unittest.TestCase): class TestDecoders(unittest.TestCase):
...@@ -49,39 +49,38 @@ class TestDecoders(unittest.TestCase): ...@@ -49,39 +49,38 @@ class TestDecoders(unittest.TestCase):
0.15882358, 0.1235788, 0.23376776, 0.20510435, 0.00279306, 0.15882358, 0.1235788, 0.23376776, 0.20510435, 0.00279306,
0.05294827, 0.22298418 0.05294827, 0.22298418
]] ]]
self.best_path_result = ["ac'bdc", "b'da"] self.greedy_result = ["ac'bdc", "b'da"]
self.beam_search_result = ['acdc', "b'a"] self.beam_search_result = ['acdc', "b'a"]
def test_best_path_decoder_1(self): def test_greedy_decoder_1(self):
bst_result = ctc_best_path_decoder(self.probs_seq1, self.vocab_list) bst_result = decoder.ctc_greedy_decoder(self.probs_seq1,
self.assertEqual(bst_result, self.best_path_result[0]) self.vocab_list)
self.assertEqual(bst_result, self.greedy_result[0])
def test_best_path_decoder_2(self): def test_greedy_decoder_2(self):
bst_result = ctc_best_path_decoder(self.probs_seq2, self.vocab_list) bst_result = decoder.ctc_greedy_decoder(self.probs_seq2,
self.assertEqual(bst_result, self.best_path_result[1]) self.vocab_list)
self.assertEqual(bst_result, self.greedy_result[1])
def test_beam_search_decoder_1(self): def test_beam_search_decoder_1(self):
beam_result = ctc_beam_search_decoder( beam_result = decoder.ctc_beam_search_decoder(
probs_seq=self.probs_seq1, probs_seq=self.probs_seq1,
beam_size=self.beam_size, beam_size=self.beam_size,
vocabulary=self.vocab_list, vocabulary=self.vocab_list)
blank_id=len(self.vocab_list))
self.assertEqual(beam_result[0][1], self.beam_search_result[0]) self.assertEqual(beam_result[0][1], self.beam_search_result[0])
def test_beam_search_decoder_2(self): def test_beam_search_decoder_2(self):
beam_result = ctc_beam_search_decoder( beam_result = decoder.ctc_beam_search_decoder(
probs_seq=self.probs_seq2, probs_seq=self.probs_seq2,
beam_size=self.beam_size, beam_size=self.beam_size,
vocabulary=self.vocab_list, vocabulary=self.vocab_list)
blank_id=len(self.vocab_list))
self.assertEqual(beam_result[0][1], self.beam_search_result[1]) self.assertEqual(beam_result[0][1], self.beam_search_result[1])
def test_beam_search_decoder_batch(self): def test_beam_search_decoder_batch(self):
beam_results = ctc_beam_search_decoder_batch( beam_results = decoder.ctc_beam_search_decoder_batch(
probs_split=[self.probs_seq1, self.probs_seq2], probs_split=[self.probs_seq1, self.probs_seq2],
beam_size=self.beam_size, beam_size=self.beam_size,
vocabulary=self.vocab_list, vocabulary=self.vocab_list,
blank_id=len(self.vocab_list),
num_processes=24) num_processes=24)
self.assertEqual(beam_results[0][0][1], self.beam_search_result[0]) self.assertEqual(beam_results[0][0][1], self.beam_search_result[0])
self.assertEqual(beam_results[1][0][1], self.beam_search_result[1]) self.assertEqual(beam_results[1][0][1], self.beam_search_result[1])
......
...@@ -3,111 +3,64 @@ import os ...@@ -3,111 +3,64 @@ import os
import time import time
import random import random
import argparse import argparse
import distutils.util import functools
from time import gmtime, strftime from time import gmtime, strftime
import SocketServer import SocketServer
import struct import struct
import wave import wave
import paddle.v2 as paddle import paddle.v2 as paddle
from utils import print_arguments import _init_paths
from data_utils.data import DataGenerator from data_utils.data import DataGenerator
from model import DeepSpeech2Model from model_utils.model import DeepSpeech2Model
from data_utils.utils import read_manifest from data_utils.utils import read_manifest
from utils.utility import add_arguments, print_arguments
parser = argparse.ArgumentParser(description=__doc__) parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument( add_arg = functools.partial(add_arguments, argparser=parser)
"--host_ip", # yapf: disable
default="localhost", add_arg('host_port', int, 8086, "Server's IP port.")
type=str, add_arg('beam_size', int, 500, "Beam search width.")
help="Server IP address. (default: %(default)s)") add_arg('num_conv_layers', int, 2, "# of convolution layers.")
parser.add_argument( add_arg('num_rnn_layers', int, 3, "# of recurrent layers.")
"--host_port", add_arg('rnn_layer_size', int, 2048, "# of recurrent cells per layer.")
default=8086, add_arg('alpha', float, 0.36, "Coef of LM for beam search.")
type=int, add_arg('beta', float, 0.25, "Coef of WC for beam search.")
help="Server Port. (default: %(default)s)") add_arg('cutoff_prob', float, 0.99, "Cutoff probability for pruning.")
parser.add_argument( add_arg('use_gru', bool, False, "Use GRUs instead of simple RNNs.")
"--speech_save_dir", add_arg('use_gpu', bool, True, "Use GPU or not.")
default="demo_cache", add_arg('share_rnn_weights',bool, True, "Share input-hidden weights across "
type=str, "bi-directional RNNs. Not for GRU.")
help="Directory for saving demo speech. (default: %(default)s)") add_arg('host_ip', str,
parser.add_argument( 'localhost',
"--vocab_filepath", "Server's IP address.")
default='datasets/vocab/eng_vocab.txt', add_arg('speech_save_dir', str,
type=str, 'demo_cache',
help="Vocabulary filepath. (default: %(default)s)") "Directory to save demo audios.")
parser.add_argument( add_arg('warmup_manifest', str,
"--mean_std_filepath", 'data/librispeech/manifest.test-clean',
default='mean_std.npz', "Filepath of manifest to warm up.")
type=str, add_arg('mean_std_path', str,
help="Manifest path for normalizer. (default: %(default)s)") 'data/librispeech/mean_std.npz',
parser.add_argument( "Filepath of normalizer's mean & std.")
"--warmup_manifest_path", add_arg('vocab_path', str,
default='datasets/manifest.test', 'data/librispeech/eng_vocab.txt',
type=str, "Filepath of vocabulary.")
help="Manifest path for warmup test. (default: %(default)s)") add_arg('model_path', str,
parser.add_argument( './checkpoints/libri/params.latest.tar.gz',
"--specgram_type", "If None, the training starts from scratch, "
default='linear', "otherwise, it resumes from the pre-trained model.")
type=str, add_arg('lang_model_path', str,
help="Feature type of audio data: 'linear' (power spectrum)" 'lm/data/common_crawl_00.prune01111.trie.klm',
" or 'mfcc'. (default: %(default)s)") "Filepath for language model.")
parser.add_argument( add_arg('decoding_method', str,
"--num_conv_layers", 'ctc_beam_search',
default=2, "Decoding method. Options: ctc_beam_search, ctc_greedy",
type=int, choices = ['ctc_beam_search', 'ctc_greedy'])
help="Convolution layer number. (default: %(default)s)") add_arg('specgram_type', str,
parser.add_argument( 'linear',
"--num_rnn_layers", "Audio feature type. Options: linear, mfcc.",
default=3, choices=['linear', 'mfcc'])
type=int, # yapf: disable
help="RNN layer number. (default: %(default)s)")
parser.add_argument(
"--rnn_layer_size",
default=512,
type=int,
help="RNN layer cell number. (default: %(default)s)")
parser.add_argument(
"--use_gpu",
default=True,
type=distutils.util.strtobool,
help="Use gpu or not. (default: %(default)s)")
parser.add_argument(
"--model_filepath",
default='checkpoints/params.latest.tar.gz',
type=str,
help="Model filepath. (default: %(default)s)")
parser.add_argument(
"--decode_method",
default='beam_search',
type=str,
help="Method for ctc decoding: best_path or beam_search. "
"(default: %(default)s)")
parser.add_argument(
"--beam_size",
default=100,
type=int,
help="Width for beam search decoding. (default: %(default)d)")
parser.add_argument(
"--language_model_path",
default="lm/data/common_crawl_00.prune01111.trie.klm",
type=str,
help="Path for language model. (default: %(default)s)")
parser.add_argument(
"--alpha",
default=0.36,
type=float,
help="Parameter associated with language model. (default: %(default)f)")
parser.add_argument(
"--beta",
default=0.25,
type=float,
help="Parameter associated with word count. (default: %(default)f)")
parser.add_argument(
"--cutoff_prob",
default=0.99,
type=float,
help="The cutoff probability of pruning"
"in beam search. (default: %(default)f)")
args = parser.parse_args() args = parser.parse_args()
...@@ -147,7 +100,7 @@ class AsrRequestHandler(SocketServer.BaseRequestHandler): ...@@ -147,7 +100,7 @@ class AsrRequestHandler(SocketServer.BaseRequestHandler):
finish_time = time.time() finish_time = time.time()
print("Response Time: %f, Transcript: %s" % print("Response Time: %f, Transcript: %s" %
(finish_time - start_time, transcript)) (finish_time - start_time, transcript))
self.request.sendall(transcript) self.request.sendall(transcript.encode('utf-8'))
def _write_to_file(self, data): def _write_to_file(self, data):
# prepare save dir and filename # prepare save dir and filename
...@@ -188,8 +141,8 @@ def start_server(): ...@@ -188,8 +141,8 @@ def start_server():
"""Start the ASR server""" """Start the ASR server"""
# prepare data generator # prepare data generator
data_generator = DataGenerator( data_generator = DataGenerator(
vocab_filepath=args.vocab_filepath, vocab_filepath=args.vocab_path,
mean_std_filepath=args.mean_std_filepath, mean_std_filepath=args.mean_std_path,
augmentation_config='{}', augmentation_config='{}',
specgram_type=args.specgram_type, specgram_type=args.specgram_type,
num_threads=1) num_threads=1)
...@@ -199,20 +152,22 @@ def start_server(): ...@@ -199,20 +152,22 @@ def start_server():
num_conv_layers=args.num_conv_layers, num_conv_layers=args.num_conv_layers,
num_rnn_layers=args.num_rnn_layers, num_rnn_layers=args.num_rnn_layers,
rnn_layer_size=args.rnn_layer_size, rnn_layer_size=args.rnn_layer_size,
pretrained_model_path=args.model_filepath) use_gru=args.use_gru,
pretrained_model_path=args.model_path,
share_rnn_weights=args.share_rnn_weights)
# prepare ASR inference handler # prepare ASR inference handler
def file_to_transcript(filename): def file_to_transcript(filename):
feature = data_generator.process_utterance(filename, "") feature = data_generator.process_utterance(filename, "")
result_transcript = ds2_model.infer_batch( result_transcript = ds2_model.infer_batch(
infer_data=[feature], infer_data=[feature],
decode_method=args.decode_method, decoding_method=args.decoding_method,
beam_alpha=args.alpha, beam_alpha=args.alpha,
beam_beta=args.beta, beam_beta=args.beta,
beam_size=args.beam_size, beam_size=args.beam_size,
cutoff_prob=args.cutoff_prob, cutoff_prob=args.cutoff_prob,
vocab_list=data_generator.vocab_list, vocab_list=data_generator.vocab_list,
language_model_path=args.language_model_path, language_model_path=args.lang_model_path,
num_processes=1) num_processes=1)
return result_transcript[0] return result_transcript[0]
...@@ -221,7 +176,7 @@ def start_server(): ...@@ -221,7 +176,7 @@ def start_server():
print('Warming up ...') print('Warming up ...')
warm_up_test( warm_up_test(
audio_process_handler=file_to_transcript, audio_process_handler=file_to_transcript,
manifest_path=args.warmup_manifest_path, manifest_path=args.warmup_manifest,
num_test_cases=3) num_test_cases=3)
print('-----------------------------------------------------------') print('-----------------------------------------------------------')
......
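For reference, the add_arg pattern above relies on an add_arguments helper from utils/utility.py, which is not part of this diff; a plausible minimal sketch of it (an assumption, not the verbatim implementation):

    import distutils.util

    def add_arguments(argname, type, default, help, argparser, **kwargs):
        # booleans are parsed with distutils.util.strtobool ('True'/'False' etc.)
        type = distutils.util.strtobool if type == bool else type
        argparser.add_argument(
            '--' + argname, default=default, type=type,
            help=help + ' Default: %(default)s.', **kwargs)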
#! /usr/bin/env bash
pushd ../.. > /dev/null
# download data, generate manifests
PYTHONPATH=.:$PYTHONPATH python data/aishell/aishell.py \
--manifest_prefix='data/aishell/manifest' \
--target_dir='~/.cache/paddle/dataset/speech/Aishell'
if [ $? -ne 0 ]; then
echo "Prepare Aishell failed. Terminated."
exit 1
fi
# build vocabulary
python tools/build_vocab.py \
--count_threshold=0 \
--vocab_path='data/aishell/vocab.txt' \
--manifest_paths='data/aishell/manifest.train'
if [ $? -ne 0 ]; then
echo "Build vocabulary failed. Terminated."
exit 1
fi
# compute mean and stddev for normalizer
python tools/compute_mean_std.py \
--manifest_path='data/aishell/manifest.train' \
--num_samples=2000 \
--specgram_type='linear' \
--output_path='data/aishell/mean_std.npz'
if [ $? -ne 0 ]; then
echo "Compute mean and stddev failed. Terminated."
exit 1
fi
echo "Aishell data preparation done."
exit 0
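Each manifest written above is a plain text file of JSON lines; a sketch of reading one line (field names as used by the data utilities — treat them as an assumption; the values are made up):

    import json

    line = '{"audio_filepath": "/path/to/utt.wav", "duration": 3.52, "text": "..."}'
    sample = json.loads(line)
    print(sample['duration'])  # -> 3.52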
#! /usr/bin/env bash
pushd ../.. > /dev/null
# train model
# if you wish to resume from an existing model, uncomment --init_model_path
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -u train.py \
--batch_size=64 \
--trainer_count=8 \
--num_passes=50 \
--num_proc_data=12 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=1024 \
--num_iter_print=100 \
--learning_rate=5e-4 \
--max_duration=27.0 \
--min_duration=0.0 \
--test_off=False \
--use_sortagrad=True \
--use_gru=False \
--use_gpu=True \
--is_local=True \
--share_rnn_weights=False \
--train_manifest='data/aishell/manifest.train' \
--dev_manifest='data/aishell/manifest.dev' \
--mean_std_path='data/aishell/mean_std.npz' \
--vocab_path='data/aishell/vocab.txt' \
--output_model_dir='./checkpoints/aishell' \
--augment_conf_path='conf/augmentation.config' \
--specgram_type='linear' \
--shuffle_method='batch_shuffle_clipped'
if [ $? -ne 0 ]; then
echo "Failed in training!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# download data, generate manifests
PYTHONPATH=.:$PYTHONPATH python data/librispeech/librispeech.py \
--manifest_prefix='data/librispeech/manifest' \
--target_dir='~/.cache/paddle/dataset/speech/Libri' \
--full_download='True'
if [ $? -ne 0 ]; then
echo "Prepare LibriSpeech failed. Terminated."
exit 1
fi
cat data/librispeech/manifest.train-* | shuf > data/librispeech/manifest.train
# build vocabulary
python tools/build_vocab.py \
--count_threshold=0 \
--vocab_path='data/librispeech/vocab.txt' \
--manifest_paths='data/librispeech/manifest.train'
if [ $? -ne 0 ]; then
echo "Build vocabulary failed. Terminated."
exit 1
fi
# compute mean and stddev for normalizer
python tools/compute_mean_std.py \
--manifest_path='data/librispeech/manifest.train' \
--num_samples=2000 \
--specgram_type='linear' \
--output_path='data/librispeech/mean_std.npz'
if [ $? -ne 0 ]; then
echo "Compute mean and stddev failed. Terminated."
exit 1
fi
echo "LibriSpeech Data preparation done."
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# download language model
pushd models/lm > /dev/null
sh download_lm_en.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# infer
CUDA_VISIBLE_DEVICES=0 \
python -u infer.py \
--num_samples=10 \
--trainer_count=1 \
--beam_size=500 \
--num_proc_bsearch=8 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--alpha=2.15 \
--beta=0.35 \
--cutoff_prob=1.0 \
--cutoff_top_n=40 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--infer_manifest='data/librispeech/manifest.test-clean' \
--mean_std_path='data/librispeech/mean_std.npz' \
--vocab_path='data/librispeech/vocab.txt' \
--model_path='checkpoints/libri/params.latest.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--decoding_method='ctc_beam_search' \
--error_rate_type='wer' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in inference!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# download language model
pushd models/lm > /dev/null
sh download_lm_en.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# download well-trained model
pushd models/librispeech > /dev/null
sh download_model.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# infer
CUDA_VISIBLE_DEVICES=0 \
python -u infer.py \
--num_samples=10 \
--trainer_count=1 \
--beam_size=500 \
--num_proc_bsearch=8 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--alpha=2.15 \
--beta=0.35 \
--cutoff_prob=1.0 \
--cutoff_top_n=40 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--infer_manifest='data/librispeech/manifest.test-clean' \
--mean_std_path='models/librispeech/mean_std.npz' \
--vocab_path='models/librispeech/vocab.txt' \
--model_path='models/librispeech/params.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--decoding_method='ctc_beam_search' \
--error_rate_type='wer' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in inference!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# download language model
pushd models/lm > /dev/null
sh download_lm_en.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# evaluate model
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -u test.py \
--batch_size=128 \
--trainer_count=8 \
--beam_size=500 \
--num_proc_bsearch=8 \
--num_proc_data=4 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--alpha=2.15 \
--beta=0.35 \
--cutoff_prob=1.0 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--test_manifest='data/librispeech/manifest.test-clean' \
--mean_std_path='data/librispeech/mean_std.npz' \
--vocab_path='data/librispeech/vocab.txt' \
--model_path='checkpoints/libri/params.latest.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--decoding_method='ctc_beam_search' \
--error_rate_type='wer' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in evaluation!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# download language model
pushd models/lm > /dev/null
sh download_lm_en.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# download well-trained model
pushd models/librispeech > /dev/null
sh download_model.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# evaluate model
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -u test.py \
--batch_size=128 \
--trainer_count=8 \
--beam_size=500 \
--num_proc_bsearch=8 \
--num_proc_data=4 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--alpha=2.15 \
--beta=0.35 \
--cutoff_prob=1.0 \
--cutoff_top_n=40 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--test_manifest='data/librispeech/manifest.test-clean' \
--mean_std_path='models/librispeech/mean_std.npz' \
--vocab_path='models/librispeech/vocab.txt' \
--model_path='models/librispeech/params.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--decoding_method='ctc_beam_search' \
--error_rate_type='wer' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in evaluation!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# train model
# to resume from an existing model, uncomment --init_model_path
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -u train.py \
--batch_size=512 \
--trainer_count=8 \
--num_passes=50 \
--num_proc_data=12 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--num_iter_print=100 \
--learning_rate=5e-4 \
--max_duration=27.0 \
--min_duration=0.0 \
--test_off=False \
--use_sortagrad=True \
--use_gru=False \
--use_gpu=True \
--is_local=True \
--share_rnn_weights=True \
--train_manifest='data/librispeech/manifest.train' \
--dev_manifest='data/librispeech/manifest.dev' \
--mean_std_path='data/librispeech/mean_std.npz' \
--vocab_path='data/librispeech/vocab.txt' \
--output_model_dir='./checkpoints/libri' \
--augment_conf_path='conf/augmentation.config' \
--specgram_type='linear' \
--shuffle_method='batch_shuffle_clipped'
if [ $? -ne 0 ]; then
echo "Failed in training!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# grid-search for hyper-parameters in language model
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -u tools/tune.py \
--num_samples=100 \
--trainer_count=8 \
--beam_size=500 \
--num_proc_bsearch=12 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--num_alphas=14 \
--num_betas=20 \
--alpha_from=0.1 \
--alpha_to=0.36 \
--beta_from=0.05 \
--beta_to=1.0 \
--cutoff_prob=0.99 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--tune_manifest='data/librispeech/manifest.dev-clean' \
--mean_std_path='data/librispeech/mean_std.npz' \
--vocab_path='data/librispeech/vocab.txt' \
--model_path='checkpoints/libri/params.latest.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--error_rate_type='wer' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in tuning!"
exit 1
fi
exit 0
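The grid above evaluates num_alphas * num_betas = 14 * 20 = 280 (alpha, beta) pairs against the same decoded candidates. A minimal sketch of how such a grid can be enumerated (assuming tools/tune.py builds it with numpy.linspace; the variable names are illustrative):
import numpy as np

cand_alphas = np.linspace(0.1, 0.36, 14)  # --alpha_from, --alpha_to, --num_alphas
cand_betas = np.linspace(0.05, 1.0, 20)   # --beta_from, --beta_to, --num_betas
params_grid = [(alpha, beta) for alpha in cand_alphas for beta in cand_betas]
assert len(params_grid) == 280            # one WER evaluation per pair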
#! /usr/bin/env bash
pushd ../.. > /dev/null
# start demo client
CUDA_VISIBLE_DEVICES=0 \
python -u deploy/demo_client.py \
--host_ip='localhost' \
--host_port=8086
if [ $? -ne 0 ]; then
echo "Failed in starting demo client!"
exit 1
fi
exit 0
#! /usr/bin/env bash
# TODO: replace the model with a mandarin model
pushd ../.. > /dev/null
# download language model
pushd models/lm > /dev/null
sh download_lm_en.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# download well-trained model
pushd models/librispeech > /dev/null
sh download_model.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# start demo server
CUDA_VISIBLE_DEVICES=0 \
python -u deploy/demo_server.py \
--host_ip='localhost' \
--host_port=8086 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--alpha=0.36 \
--beta=0.25 \
--cutoff_prob=0.99 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--speech_save_dir='demo_cache' \
--warmup_manifest='data/tiny/manifest.test-clean' \
--mean_std_path='models/librispeech/mean_std.npz' \
--vocab_path='models/librispeech/vocab.txt' \
--model_path='models/librispeech/params.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--decoding_method='ctc_beam_search' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in starting demo server!"
exit 1
fi
exit 0
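The server and client above are meant to run as a pair with matching host_ip and host_port (localhost:8086 here); a sketch of the workflow on one machine:
# shell 1: run the demo-server script above and wait until the warm-up
#          pass over --warmup_manifest completes
# shell 2: attach the client to the same address the server bound
python -u deploy/demo_client.py --host_ip='localhost' --host_port=8086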
#! /usr/bin/env bash
pushd ../.. > /dev/null
# prepare folder
if [ ! -e data/tiny ]; then
mkdir data/tiny
fi
# download data, generate manifests
python data/librispeech/librispeech.py \
--manifest_prefix='data/tiny/manifest' \
--target_dir='~/.cache/paddle/dataset/speech/libri' \
--full_download='False'
if [ $? -ne 0 ]; then
echo "Prepare LibriSpeech failed. Terminated."
exit 1
fi
head -n 64 data/tiny/manifest.dev-clean > data/tiny/manifest.tiny
# build vocabulary
python tools/build_vocab.py \
--count_threshold=0 \
--vocab_path='data/tiny/vocab.txt' \
--manifest_paths='data/tiny/manifest.dev-clean'
if [ $? -ne 0 ]; then
echo "Build vocabulary failed. Terminated."
exit 1
fi
# compute mean and stddev for normalizer
python tools/compute_mean_std.py \
--manifest_path='data/tiny/manifest.tiny' \
--num_samples=64 \
--specgram_type='linear' \
--output_path='data/tiny/mean_std.npz'
if [ $? -ne 0 ]; then
echo "Compute mean and stddev failed. Terminated."
exit 1
fi
echo "Tiny data preparation done."
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# download language model
pushd models/lm > /dev/null
sh download_lm_en.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# infer
CUDA_VISIBLE_DEVICES=0 \
python -u infer.py \
--num_samples=10 \
--trainer_count=1 \
--beam_size=500 \
--num_proc_bsearch=8 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--alpha=2.15 \
--beta=0.35 \
--cutoff_prob=1.0 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--infer_manifest='data/tiny/manifest.tiny' \
--mean_std_path='data/tiny/mean_std.npz' \
--vocab_path='data/tiny/vocab.txt' \
--model_path='checkpoints/tiny/params.pass-19.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--decoding_method='ctc_beam_search' \
--error_rate_type='wer' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in inference!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# download language model
pushd models/lm > /dev/null
sh download_lm_en.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# download well-trained model
pushd models/librispeech > /dev/null
sh download_model.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# infer
CUDA_VISIBLE_DEVICES=0 \
python -u infer.py \
--num_samples=10 \
--trainer_count=1 \
--beam_size=500 \
--num_proc_bsearch=8 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--alpha=2.15 \
--beta=0.35 \
--cutoff_prob=1.0 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--infer_manifest='data/tiny/manifest.test-clean' \
--mean_std_path='models/librispeech/mean_std.npz' \
--vocab_path='models/librispeech/vocab.txt' \
--model_path='models/librispeech/params.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--decoding_method='ctc_beam_search' \
--error_rate_type='wer' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in inference!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# download language model
pushd models/lm > /dev/null
sh download_lm_en.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# evaluate model
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -u test.py \
--batch_size=16 \
--trainer_count=8 \
--beam_size=500 \
--num_proc_bsearch=8 \
--num_proc_data=4 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--alpha=2.15 \
--beta=0.35 \
--cutoff_prob=1.0 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--test_manifest='data/tiny/manifest.tiny' \
--mean_std_path='data/tiny/mean_std.npz' \
--vocab_path='data/tiny/vocab.txt' \
--model_path='checkpoints/tiny/params.pass-19.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--decoding_method='ctc_beam_search' \
--error_rate_type='wer' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in evaluation!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# download language model
pushd models/lm > /dev/null
sh download_lm_en.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# download well-trained model
pushd models/librispeech > /dev/null
sh download_model.sh
if [ $? -ne 0 ]; then
exit 1
fi
popd > /dev/null
# evaluate model
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -u test.py \
--batch_size=128 \
--trainer_count=8 \
--beam_size=500 \
--num_proc_bsearch=8 \
--num_proc_data=4 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--alpha=2.15 \
--beta=0.35 \
--cutoff_prob=1.0 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--test_manifest='data/tiny/manifest.test-clean' \
--mean_std_path='models/librispeech/mean_std.npz' \
--vocab_path='models/librispeech/vocab.txt' \
--model_path='models/librispeech/params.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--decoding_method='ctc_beam_search' \
--error_rate_type='wer' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in evaluation!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# train model
# to resume from an existing model, uncomment --init_model_path
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -u train.py \
--batch_size=16 \
--trainer_count=4 \
--num_passes=20 \
--num_proc_data=1 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--num_iter_print=100 \
--learning_rate=1e-5 \
--max_duration=27.0 \
--min_duration=0.0 \
--test_off=False \
--use_sortagrad=True \
--use_gru=False \
--use_gpu=True \
--is_local=True \
--share_rnn_weights=True \
--train_manifest='data/tiny/manifest.tiny' \
--dev_manifest='data/tiny/manifest.tiny' \
--mean_std_path='data/tiny/mean_std.npz' \
--vocab_path='data/tiny/vocab.txt' \
--output_model_dir='./checkpoints/tiny' \
--augment_conf_path='conf/augmentation.config' \
--specgram_type='linear' \
--shuffle_method='batch_shuffle_clipped'
if [ $? -ne 0 ]; then
echo "Fail to do inference!"
exit 1
fi
exit 0
#! /usr/bin/env bash
pushd ../.. > /dev/null
# grid-search for hyper-parameters in language model
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -u tools/tune.py \
--num_samples=100 \
--trainer_count=8 \
--beam_size=500 \
--num_proc_bsearch=12 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=2048 \
--num_alphas=14 \
--num_betas=20 \
--alpha_from=0.1 \
--alpha_to=0.36 \
--beta_from=0.05 \
--beta_to=1.0 \
--cutoff_prob=0.99 \
--use_gru=False \
--use_gpu=True \
--share_rnn_weights=True \
--tune_manifest='data/tiny/manifest.tiny' \
--mean_std_path='data/tiny/mean_std.npz' \
--vocab_path='data/tiny/vocab.txt' \
--model_path='checkpoints/tiny/params.pass-9.tar.gz' \
--lang_model_path='models/lm/common_crawl_00.prune01111.trie.klm' \
--error_rate_type='wer' \
--specgram_type='linear'
if [ $? -ne 0 ]; then
echo "Failed in tuning!"
exit 1
fi
exit 0
...@@ -4,126 +4,73 @@ from __future__ import division ...@@ -4,126 +4,73 @@ from __future__ import division
from __future__ import print_function from __future__ import print_function
import argparse import argparse
import distutils.util import functools
import multiprocessing
import paddle.v2 as paddle import paddle.v2 as paddle
from data_utils.data import DataGenerator from data_utils.data import DataGenerator
from model import DeepSpeech2Model from model_utils.model import DeepSpeech2Model
from error_rate import wer from utils.error_rate import wer, cer
import utils from utils.utility import add_arguments, print_arguments
parser = argparse.ArgumentParser(description=__doc__) parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument( add_arg = functools.partial(add_arguments, argparser=parser)
"--num_samples", # yapf: disable
default=10, add_arg('num_samples', int, 10, "# of samples to infer.")
type=int, add_arg('trainer_count', int, 8, "# of Trainers (CPUs or GPUs).")
help="Number of samples for inference. (default: %(default)s)") add_arg('beam_size', int, 500, "Beam search width.")
parser.add_argument( add_arg('num_proc_bsearch', int, 12, "# of CPUs for beam search.")
"--num_conv_layers", add_arg('num_conv_layers', int, 2, "# of convolution layers.")
default=2, add_arg('num_rnn_layers', int, 3, "# of recurrent layers.")
type=int, add_arg('rnn_layer_size', int, 2048, "# of recurrent cells per layer.")
help="Convolution layer number. (default: %(default)s)") add_arg('alpha', float, 2.15, "Coef of LM for beam search.")
parser.add_argument( add_arg('beta', float, 0.35, "Coef of WC for beam search.")
"--num_rnn_layers", add_arg('cutoff_prob', float, 1.0, "Cutoff probability for pruning.")
default=3, add_arg('cutoff_top_n', int, 40, "Cutoff number for pruning.")
type=int, add_arg('use_gru', bool, False, "Use GRUs instead of simple RNNs.")
help="RNN layer number. (default: %(default)s)") add_arg('use_gpu', bool, True, "Use GPU or not.")
parser.add_argument( add_arg('share_rnn_weights',bool, True, "Share input-hidden weights across "
"--rnn_layer_size", "bi-directional RNNs. Not for GRU.")
default=512, add_arg('infer_manifest', str,
type=int, 'data/librispeech/manifest.dev-clean',
help="RNN layer cell number. (default: %(default)s)") "Filepath of manifest to infer.")
parser.add_argument( add_arg('mean_std_path', str,
"--use_gpu", 'data/librispeech/mean_std.npz',
default=True, "Filepath of normalizer's mean & std.")
type=distutils.util.strtobool, add_arg('vocab_path', str,
help="Use gpu or not. (default: %(default)s)") 'data/librispeech/vocab.txt',
parser.add_argument( "Filepath of vocabulary.")
"--num_threads_data", add_arg('lang_model_path', str,
default=1, 'models/lm/common_crawl_00.prune01111.trie.klm',
type=int, "Filepath for language model.")
help="Number of cpu threads for preprocessing data. (default: %(default)s)") add_arg('model_path', str,
parser.add_argument( './checkpoints/libri/params.latest.tar.gz',
"--num_processes_beam_search", "If None, the training starts from scratch, "
default=multiprocessing.cpu_count() // 2, "otherwise, it resumes from the pre-trained model.")
type=int, add_arg('decoding_method', str,
help="Number of cpu processes for beam search. (default: %(default)s)") 'ctc_beam_search',
parser.add_argument( "Decoding method. Options: ctc_beam_search, ctc_greedy",
"--specgram_type", choices = ['ctc_beam_search', 'ctc_greedy'])
default='linear', add_arg('error_rate_type', str,
type=str, 'wer',
help="Feature type of audio data: 'linear' (power spectrum)" "Error rate type for evaluation.",
" or 'mfcc'. (default: %(default)s)") choices=['wer', 'cer'])
parser.add_argument( add_arg('specgram_type', str,
"--trainer_count", 'linear',
default=8, "Audio feature type. Options: linear, mfcc.",
type=int, choices=['linear', 'mfcc'])
help="Trainer number. (default: %(default)s)") # yapf: disable
parser.add_argument(
"--mean_std_filepath",
default='mean_std.npz',
type=str,
help="Manifest path for normalizer. (default: %(default)s)")
parser.add_argument(
"--decode_manifest_path",
default='datasets/manifest.test',
type=str,
help="Manifest path for decoding. (default: %(default)s)")
parser.add_argument(
"--model_filepath",
default='checkpoints/params.latest.tar.gz',
type=str,
help="Model filepath. (default: %(default)s)")
parser.add_argument(
"--vocab_filepath",
default='datasets/vocab/eng_vocab.txt',
type=str,
help="Vocabulary filepath. (default: %(default)s)")
parser.add_argument(
"--decode_method",
default='beam_search',
type=str,
help="Method for ctc decoding: best_path or beam_search. "
"(default: %(default)s)")
parser.add_argument(
"--beam_size",
default=500,
type=int,
help="Width for beam search decoding. (default: %(default)d)")
parser.add_argument(
"--language_model_path",
default="lm/data/common_crawl_00.prune01111.trie.klm",
type=str,
help="Path for language model. (default: %(default)s)")
parser.add_argument(
"--alpha",
default=0.36,
type=float,
help="Parameter associated with language model. (default: %(default)f)")
parser.add_argument(
"--beta",
default=0.25,
type=float,
help="Parameter associated with word count. (default: %(default)f)")
parser.add_argument(
"--cutoff_prob",
default=0.99,
type=float,
help="The cutoff probability of pruning"
"in beam search. (default: %(default)f)")
args = parser.parse_args() args = parser.parse_args()
def infer(): def infer():
"""Inference for DeepSpeech2.""" """Inference for DeepSpeech2."""
data_generator = DataGenerator( data_generator = DataGenerator(
vocab_filepath=args.vocab_filepath, vocab_filepath=args.vocab_path,
mean_std_filepath=args.mean_std_filepath, mean_std_filepath=args.mean_std_path,
augmentation_config='{}', augmentation_config='{}',
specgram_type=args.specgram_type, specgram_type=args.specgram_type,
num_threads=args.num_threads_data) num_threads=1)
batch_reader = data_generator.batch_reader_creator( batch_reader = data_generator.batch_reader_creator(
manifest_path=args.decode_manifest_path, manifest_path=args.infer_manifest,
batch_size=args.num_samples, batch_size=args.num_samples,
min_batch_size=1, min_batch_size=1,
sortagrad=False, sortagrad=False,
...@@ -135,18 +82,26 @@ def infer(): ...@@ -135,18 +82,26 @@ def infer():
num_conv_layers=args.num_conv_layers, num_conv_layers=args.num_conv_layers,
num_rnn_layers=args.num_rnn_layers, num_rnn_layers=args.num_rnn_layers,
rnn_layer_size=args.rnn_layer_size, rnn_layer_size=args.rnn_layer_size,
pretrained_model_path=args.model_filepath) use_gru=args.use_gru,
pretrained_model_path=args.model_path,
share_rnn_weights=args.share_rnn_weights)
# decoders only accept string encoded in utf-8
vocab_list = [chars.encode("utf-8") for chars in data_generator.vocab_list]
result_transcripts = ds2_model.infer_batch( result_transcripts = ds2_model.infer_batch(
infer_data=infer_data, infer_data=infer_data,
decode_method=args.decode_method, decoding_method=args.decoding_method,
beam_alpha=args.alpha, beam_alpha=args.alpha,
beam_beta=args.beta, beam_beta=args.beta,
beam_size=args.beam_size, beam_size=args.beam_size,
cutoff_prob=args.cutoff_prob, cutoff_prob=args.cutoff_prob,
vocab_list=data_generator.vocab_list, cutoff_top_n=args.cutoff_top_n,
language_model_path=args.language_model_path, vocab_list=vocab_list,
num_processes=args.num_processes_beam_search) language_model_path=args.lang_model_path,
num_processes=args.num_proc_bsearch)
error_rate_func = cer if args.error_rate_type == 'cer' else wer
target_transcripts = [ target_transcripts = [
''.join([data_generator.vocab_list[token] for token in transcript]) ''.join([data_generator.vocab_list[token] for token in transcript])
for _, transcript in infer_data for _, transcript in infer_data
...@@ -154,11 +109,13 @@ def infer(): ...@@ -154,11 +109,13 @@ def infer():
for target, result in zip(target_transcripts, result_transcripts): for target, result in zip(target_transcripts, result_transcripts):
print("\nTarget Transcription: %s\nOutput Transcription: %s" % print("\nTarget Transcription: %s\nOutput Transcription: %s" %
(target, result)) (target, result))
print("Current wer = %f" % wer(target, result)) print("Current error rate [%s] = %f" %
(args.error_rate_type, error_rate_func(target, result)))
ds2_model.logger.info("finish inference")
def main(): def main():
utils.print_arguments(args) print_arguments(args)
paddle.init(use_gpu=args.use_gpu, trainer_count=args.trainer_count) paddle.init(use_gpu=args.use_gpu, trainer_count=args.trainer_count)
infer() infer()
......
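add_arguments itself is not part of this commit's listing; a minimal sketch consistent with its call sites above (positional name/type/default/help plus pass-through argparse keywords such as choices, nargs and required; the strtobool conversion is an assumption inferred from boolean flags like --use_gru=False in the run scripts):
import argparse
import distutils.util

def add_arguments(argname, type, default, help, argparser, **kwargs):
    """Register --<argname>; booleans are parsed with strtobool so that
    shell-style values like --use_gru=False work as expected."""
    type = distutils.util.strtobool if type == bool else type
    argparser.add_argument(
        "--" + argname,
        default=default,
        type=type,
        help=help + ' Default: %(default)s.',
        **kwargs)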
echo "Downloading language model ..."
mkdir -p data
LM=common_crawl_00.prune01111.trie.klm
MD5="099a601759d467cd0a8523ff939819c5"
wget -c http://paddlepaddle.bj.bcebos.com/model_zoo/speech/$LM -P ./data
echo "Checking md5sum ..."
md5_tmp=`md5sum ./data/$LM | awk -F[' '] '{print $1}'`
if [ $MD5 != $md5_tmp ]; then
echo "Fail to download the language model!"
exit 1
fi
...@@ -6,11 +6,17 @@ from __future__ import print_function ...@@ -6,11 +6,17 @@ from __future__ import print_function
import sys import sys
import os import os
import time import time
import logging
import gzip import gzip
from decoder import * from distutils.dir_util import mkpath
from lm.lm_scorer import LmScorer
import paddle.v2 as paddle import paddle.v2 as paddle
from layer import * from decoders.swig_wrapper import Scorer
from decoders.swig_wrapper import ctc_greedy_decoder
from decoders.swig_wrapper import ctc_beam_search_decoder_batch
from model_utils.network import deep_speech_v2_network
logging.basicConfig(
format='[%(levelname)s %(asctime)s %(filename)s:%(lineno)d] %(message)s')
class DeepSpeech2Model(object): class DeepSpeech2Model(object):
...@@ -27,16 +33,23 @@ class DeepSpeech2Model(object): ...@@ -27,16 +33,23 @@ class DeepSpeech2Model(object):
:param pretrained_model_path: Pretrained model path. If None, will train :param pretrained_model_path: Pretrained model path. If None, will train
from scratch. from scratch.
:type pretrained_model_path: basestring|None :type pretrained_model_path: basestring|None
:param share_rnn_weights: Whether to share input-hidden weights between
forward and backward directional RNNs. Notice that
for GRU, weight sharing is not supported.
:type share_rnn_weights: bool
""" """
def __init__(self, vocab_size, num_conv_layers, num_rnn_layers, def __init__(self, vocab_size, num_conv_layers, num_rnn_layers,
rnn_layer_size, pretrained_model_path): rnn_layer_size, use_gru, pretrained_model_path,
share_rnn_weights):
self._create_network(vocab_size, num_conv_layers, num_rnn_layers, self._create_network(vocab_size, num_conv_layers, num_rnn_layers,
rnn_layer_size) rnn_layer_size, use_gru, share_rnn_weights)
self._create_parameters(pretrained_model_path) self._create_parameters(pretrained_model_path)
self._inferer = None self._inferer = None
self._loss_inferer = None self._loss_inferer = None
self._ext_scorer = None self._ext_scorer = None
self.logger = logging.getLogger("")
self.logger.setLevel(level=logging.INFO)
def train(self, def train(self,
train_batch_reader, train_batch_reader,
...@@ -46,7 +59,9 @@ class DeepSpeech2Model(object): ...@@ -46,7 +59,9 @@ class DeepSpeech2Model(object):
gradient_clipping, gradient_clipping,
num_passes, num_passes,
output_model_dir, output_model_dir,
num_iterations_print=100): is_local=True,
num_iterations_print=100,
test_off=False):
"""Train the model. """Train the model.
:param train_batch_reader: Train data reader. :param train_batch_reader: Train data reader.
...@@ -65,12 +80,16 @@ class DeepSpeech2Model(object): ...@@ -65,12 +80,16 @@ class DeepSpeech2Model(object):
:param num_iterations_print: Number of training iterations for printing :param num_iterations_print: Number of training iterations for printing
a training loss. a training loss.
:type num_iterations_print: int :type num_iterations_print: int
:param is_local: Set to False if running with pserver with multi-nodes.
:type is_local: bool
:param output_model_dir: Directory for saving the model (every pass). :param output_model_dir: Directory for saving the model (every pass).
:type output_model_dir: basestring :type output_model_dir: basestring
:param test_off: Turn off testing.
:type test_off: bool
""" """
# prepare model output directory # prepare model output directory
if not os.path.exists(output_model_dir): if not os.path.exists(output_model_dir):
os.mkdir(output_model_dir) mkpath(output_model_dir)
# prepare optimizer and trainer # prepare optimizer and trainer
optimizer = paddle.optimizer.Adam( optimizer = paddle.optimizer.Adam(
...@@ -79,7 +98,8 @@ class DeepSpeech2Model(object): ...@@ -79,7 +98,8 @@ class DeepSpeech2Model(object):
trainer = paddle.trainer.SGD( trainer = paddle.trainer.SGD(
cost=self._loss, cost=self._loss,
parameters=self._parameters, parameters=self._parameters,
update_equation=optimizer) update_equation=optimizer,
is_local=is_local)
# create event handler # create event handler
def event_handler(event): def event_handler(event):
...@@ -103,14 +123,19 @@ class DeepSpeech2Model(object): ...@@ -103,14 +123,19 @@ class DeepSpeech2Model(object):
start_time = time.time() start_time = time.time()
cost_sum, cost_counter = 0.0, 0 cost_sum, cost_counter = 0.0, 0
if isinstance(event, paddle.event.EndPass): if isinstance(event, paddle.event.EndPass):
result = trainer.test( if test_off:
reader=dev_batch_reader, feeding=feeding_dict) print("\n------- Time: %d sec, Pass: %d" %
(time.time() - start_time, event.pass_id))
else:
result = trainer.test(
reader=dev_batch_reader, feeding=feeding_dict)
print("\n------- Time: %d sec, Pass: %d, "
"ValidationCost: %s" %
(time.time() - start_time, event.pass_id, result.cost))
output_model_path = os.path.join( output_model_path = os.path.join(
output_model_dir, "params.pass-%d.tar.gz" % event.pass_id) output_model_dir, "params.pass-%d.tar.gz" % event.pass_id)
with gzip.open(output_model_path, 'w') as f: with gzip.open(output_model_path, 'w') as f:
self._parameters.to_tar(f) self._parameters.to_tar(f)
print("\n------- Time: %d sec, Pass: %d, ValidationCost: %s" %
(time.time() - start_time, event.pass_id, result.cost))
# run train # run train
trainer.train( trainer.train(
...@@ -137,9 +162,9 @@ class DeepSpeech2Model(object): ...@@ -137,9 +162,9 @@ class DeepSpeech2Model(object):
# run inference # run inference
return self._loss_inferer.infer(input=infer_data) return self._loss_inferer.infer(input=infer_data)
def infer_batch(self, infer_data, decode_method, beam_alpha, beam_beta, def infer_batch(self, infer_data, decoding_method, beam_alpha, beam_beta,
beam_size, cutoff_prob, vocab_list, language_model_path, beam_size, cutoff_prob, cutoff_top_n, vocab_list,
num_processes): language_model_path, num_processes):
"""Model inference. Infer the transcription for a batch of speech """Model inference. Infer the transcription for a batch of speech
utterances. utterances.
...@@ -147,9 +172,9 @@ class DeepSpeech2Model(object): ...@@ -147,9 +172,9 @@ class DeepSpeech2Model(object):
consisting of a tuple of audio features and consisting of a tuple of audio features and
transcription text (empty string). transcription text (empty string).
:type infer_data: list :type infer_data: list
:param decode_method: Decoding method name, 'best_path' or :param decoding_method: Decoding method name, 'ctc_greedy' or
'beam search'. 'ctc_beam_search'.
:type decode_method: string :type decoding_method: string
:param beam_alpha: Parameter associated with language model. :param beam_alpha: Parameter associated with language model.
:type beam_alpha: float :type beam_alpha: float
:param beam_beta: Parameter associated with word count. :param beam_beta: Parameter associated with word count.
...@@ -159,6 +184,10 @@ class DeepSpeech2Model(object): ...@@ -159,6 +184,10 @@ class DeepSpeech2Model(object):
:param cutoff_prob: Cutoff probability in pruning, :param cutoff_prob: Cutoff probability in pruning,
default 1.0, no pruning. default 1.0, no pruning.
:type cutoff_prob: float :type cutoff_prob: float
:param cutoff_top_n: Cutoff number in pruning, only top cutoff_top_n
characters with highest probs in vocabulary will be
used in beam search, default 40.
:type cutoff_top_n: int
:param vocab_list: List of tokens in the vocabulary, for decoding. :param vocab_list: List of tokens in the vocabulary, for decoding.
:type vocab_list: list :type vocab_list: list
:param language_model_path: Filepath for language model. :param language_model_path: Filepath for language model.
...@@ -181,36 +210,47 @@ class DeepSpeech2Model(object): ...@@ -181,36 +210,47 @@ class DeepSpeech2Model(object):
] ]
# run decoder # run decoder
results = [] results = []
if decode_method == "best_path": if decoding_method == "ctc_greedy":
# best path decode # best path decode
for i, probs in enumerate(probs_split): for i, probs in enumerate(probs_split):
output_transcription = ctc_best_path_decoder( output_transcription = ctc_greedy_decoder(
probs_seq=probs, vocabulary=data_generator.vocab_list) probs_seq=probs, vocabulary=vocab_list)
results.append(output_transcription) results.append(output_transcription)
elif decode_method == "beam_search": elif decoding_method == "ctc_beam_search":
# initialize external scorer # initialize external scorer
if self._ext_scorer is None: if self._ext_scorer is None:
self._ext_scorer = LmScorer(beam_alpha, beam_beta,
language_model_path)
self._loaded_lm_path = language_model_path self._loaded_lm_path = language_model_path
self.logger.info("begin to initialize the external scorer "
"for decoding")
self._ext_scorer = Scorer(beam_alpha, beam_beta,
language_model_path, vocab_list)
lm_char_based = self._ext_scorer.is_character_based()
lm_max_order = self._ext_scorer.get_max_order()
lm_dict_size = self._ext_scorer.get_dict_size()
self.logger.info("language model: "
"is_character_based = %d," % lm_char_based +
" max_order = %d," % lm_max_order +
" dict_size = %d" % lm_dict_size)
self.logger.info("end initializing scorer. Start decoding ...")
else: else:
self._ext_scorer.reset_params(beam_alpha, beam_beta) self._ext_scorer.reset_params(beam_alpha, beam_beta)
assert self._loaded_lm_path == language_model_path assert self._loaded_lm_path == language_model_path
# beam search decode # beam search decode
num_processes = min(num_processes, len(probs_split))
beam_search_results = ctc_beam_search_decoder_batch( beam_search_results = ctc_beam_search_decoder_batch(
probs_split=probs_split, probs_split=probs_split,
vocabulary=vocab_list, vocabulary=vocab_list,
beam_size=beam_size, beam_size=beam_size,
blank_id=len(vocab_list),
num_processes=num_processes, num_processes=num_processes,
ext_scoring_func=self._ext_scorer, ext_scoring_func=self._ext_scorer,
cutoff_prob=cutoff_prob) cutoff_prob=cutoff_prob,
cutoff_top_n=cutoff_top_n)
results = [result[0][1] for result in beam_search_results] results = [result[0][1] for result in beam_search_results]
else: else:
raise ValueError("Decoding method [%s] is not supported." % raise ValueError("Decoding method [%s] is not supported." %
decode_method) decoding_method)
return results return results
def _create_parameters(self, model_path=None): def _create_parameters(self, model_path=None):
...@@ -222,7 +262,7 @@ class DeepSpeech2Model(object): ...@@ -222,7 +262,7 @@ class DeepSpeech2Model(object):
gzip.open(model_path)) gzip.open(model_path))
def _create_network(self, vocab_size, num_conv_layers, num_rnn_layers, def _create_network(self, vocab_size, num_conv_layers, num_rnn_layers,
rnn_layer_size): rnn_layer_size, use_gru, share_rnn_weights):
"""Create data layers and model network.""" """Create data layers and model network."""
# paddle.data_type.dense_array is used for variable batch input. # paddle.data_type.dense_array is used for variable batch input.
# The size 161 * 161 is only a placeholder value and the real shape # The size 161 * 161 is only a placeholder value and the real shape
...@@ -233,10 +273,12 @@ class DeepSpeech2Model(object): ...@@ -233,10 +273,12 @@ class DeepSpeech2Model(object):
text_data = paddle.layer.data( text_data = paddle.layer.data(
name="transcript_text", name="transcript_text",
type=paddle.data_type.integer_value_sequence(vocab_size)) type=paddle.data_type.integer_value_sequence(vocab_size))
self._log_probs, self._loss = deep_speech2( self._log_probs, self._loss = deep_speech_v2_network(
audio_data=audio_data, audio_data=audio_data,
text_data=text_data, text_data=text_data,
dict_size=vocab_size, dict_size=vocab_size,
num_conv_layers=num_conv_layers, num_conv_layers=num_conv_layers,
num_rnn_layers=num_rnn_layers, num_rnn_layers=num_rnn_layers,
rnn_size=rnn_layer_size) rnn_size=rnn_layer_size,
use_gru=use_gru,
share_rnn_weights=share_rnn_weights)
"""Contains DeepSpeech2 layers.""" """Contains DeepSpeech2 layers and networks."""
from __future__ import absolute_import from __future__ import absolute_import
from __future__ import division from __future__ import division
from __future__ import print_function from __future__ import print_function
...@@ -39,7 +39,7 @@ def conv_bn_layer(input, filter_size, num_channels_in, num_channels_out, stride, ...@@ -39,7 +39,7 @@ def conv_bn_layer(input, filter_size, num_channels_in, num_channels_out, stride,
return paddle.layer.batch_norm(input=conv_layer, act=act) return paddle.layer.batch_norm(input=conv_layer, act=act)
def bidirectional_simple_rnn_bn_layer(name, input, size, act): def bidirectional_simple_rnn_bn_layer(name, input, size, act, share_weights):
"""Bidirectonal simple rnn layer with sequence-wise batch normalization. """Bidirectonal simple rnn layer with sequence-wise batch normalization.
The batch normalization is only performed on input-state weights. The batch normalization is only performed on input-state weights.
...@@ -51,23 +51,91 @@ def bidirectional_simple_rnn_bn_layer(name, input, size, act): ...@@ -51,23 +51,91 @@ def bidirectional_simple_rnn_bn_layer(name, input, size, act):
:type size: int :type size: int
:param act: Activation type. :param act: Activation type.
:type act: BaseActivation :type act: BaseActivation
:param share_weights: Whether to share input-hidden weights between
forward and backward directional RNNs.
:type share_weights: bool
:return: Bidirectional simple rnn layer. :return: Bidirectional simple rnn layer.
:rtype: LayerOutput :rtype: LayerOutput
""" """
# input-hidden weights shared across bi-directional rnn. if share_weights:
input_proj = paddle.layer.fc( # input-hidden weights shared between bi-directional rnn.
input=input, size=size, act=paddle.activation.Linear(), bias_attr=False) input_proj = paddle.layer.fc(
# batch norm is only performed on input-state projection input=input,
input_proj_bn = paddle.layer.batch_norm( size=size,
input=input_proj, act=paddle.activation.Linear()) act=paddle.activation.Linear(),
# forward and backward in time bias_attr=False)
forward_simple_rnn = paddle.layer.recurrent( # batch norm is only performed on input-state projection
input=input_proj_bn, act=act, reverse=False) input_proj_bn = paddle.layer.batch_norm(
backward_simple_rnn = paddle.layer.recurrent( input=input_proj, act=paddle.activation.Linear())
input=input_proj_bn, act=act, reverse=True) # forward and backward in time
forward_simple_rnn = paddle.layer.recurrent(
input=input_proj_bn, act=act, reverse=False)
backward_simple_rnn = paddle.layer.recurrent(
input=input_proj_bn, act=act, reverse=True)
else:
input_proj_forward = paddle.layer.fc(
input=input,
size=size,
act=paddle.activation.Linear(),
bias_attr=False)
input_proj_backward = paddle.layer.fc(
input=input,
size=size,
act=paddle.activation.Linear(),
bias_attr=False)
# batch norm is only performed on input-state projection
input_proj_bn_forward = paddle.layer.batch_norm(
input=input_proj_forward, act=paddle.activation.Linear())
input_proj_bn_backward = paddle.layer.batch_norm(
input=input_proj_backward, act=paddle.activation.Linear())
# forward and backward in time
forward_simple_rnn = paddle.layer.recurrent(
input=input_proj_bn_forward, act=act, reverse=False)
backward_simple_rnn = paddle.layer.recurrent(
input=input_proj_bn_backward, act=act, reverse=True)
return paddle.layer.concat(input=[forward_simple_rnn, backward_simple_rnn]) return paddle.layer.concat(input=[forward_simple_rnn, backward_simple_rnn])
def bidirectional_gru_bn_layer(name, input, size, act):
"""Bidirectonal gru layer with sequence-wise batch normalization.
The batch normalization is only performed on input-state weights.
:param name: Name of the layer.
:type name: string
:param input: Input layer.
:type input: LayerOutput
:param size: Number of RNN cells.
:type size: int
:param act: Activation type.
:type act: BaseActivation
:return: Bidirectional gru layer.
:rtype: LayerOutput
"""
input_proj_forward = paddle.layer.fc(
input=input,
size=size * 3,
act=paddle.activation.Linear(),
bias_attr=False)
input_proj_backward = paddle.layer.fc(
input=input,
size=size * 3,
act=paddle.activation.Linear(),
bias_attr=False)
# batch norm is only performed on input-related projections
input_proj_bn_forward = paddle.layer.batch_norm(
input=input_proj_forward, act=paddle.activation.Linear())
input_proj_bn_backward = paddle.layer.batch_norm(
input=input_proj_backward, act=paddle.activation.Linear())
# forward and backward in time
forward_gru = paddle.layer.grumemory(
input=input_proj_bn_forward, act=act, reverse=False)
backward_gru = paddle.layer.grumemory(
input=input_proj_bn_backward, act=act, reverse=True)
return paddle.layer.concat(input=[forward_gru, backward_gru])
def conv_group(input, num_stacks): def conv_group(input, num_stacks):
"""Convolution group with stacked convolution layers. """Convolution group with stacked convolution layers.
...@@ -100,7 +168,7 @@ def conv_group(input, num_stacks): ...@@ -100,7 +168,7 @@ def conv_group(input, num_stacks):
return conv, output_num_channels, output_height return conv, output_num_channels, output_height
def rnn_group(input, size, num_stacks): def rnn_group(input, size, num_stacks, use_gru, share_rnn_weights):
"""RNN group with stacked bidirectional simple RNN layers. """RNN group with stacked bidirectional simple RNN layers.
:param input: Input layer. :param input: Input layer.
...@@ -109,24 +177,43 @@ def rnn_group(input, size, num_stacks): ...@@ -109,24 +177,43 @@ def rnn_group(input, size, num_stacks):
:type size: int :type size: int
:param num_stacks: Number of stacked rnn layers. :param num_stacks: Number of stacked rnn layers.
:type num_stacks: int :type num_stacks: int
:param use_gru: Use gru if set True. Use simple rnn if set False.
:type use_gru: bool
:param share_rnn_weights: Whether to share input-hidden weights between
forward and backward directional RNNs.
It is only available when use_gru=False.
:type share_rnn_weights: bool
:return: Output layer of the RNN group. :return: Output layer of the RNN group.
:rtype: LayerOutput :rtype: LayerOutput
""" """
output = input output = input
for i in xrange(num_stacks): for i in xrange(num_stacks):
output = bidirectional_simple_rnn_bn_layer( if use_gru:
name=str(i), input=output, size=size, act=paddle.activation.BRelu()) output = bidirectional_gru_bn_layer(
name=str(i),
input=output,
size=size,
act=paddle.activation.Relu())
# BRelu does not support hppl yet; use Relu instead for now.
else:
output = bidirectional_simple_rnn_bn_layer(
name=str(i),
input=output,
size=size,
act=paddle.activation.BRelu(),
share_weights=share_rnn_weights)
return output return output
def deep_speech2(audio_data, def deep_speech_v2_network(audio_data,
text_data, text_data,
dict_size, dict_size,
num_conv_layers=2, num_conv_layers=2,
num_rnn_layers=3, num_rnn_layers=3,
rnn_size=256): rnn_size=256,
""" use_gru=False,
The whole DeepSpeech2 model structure (a simplified version). share_rnn_weights=True):
"""The DeepSpeech2 network structure.
:param audio_data: Audio spectrogram data layer. :param audio_data: Audio spectrogram data layer.
:type audio_data: LayerOutput :type audio_data: LayerOutput
...@@ -140,6 +227,12 @@ def deep_speech2(audio_data, ...@@ -140,6 +227,12 @@ def deep_speech2(audio_data,
:type num_rnn_layers: int :type num_rnn_layers: int
:param rnn_size: RNN layer size (number of RNN cells). :param rnn_size: RNN layer size (number of RNN cells).
:type rnn_size: int :type rnn_size: int
:param use_gru: Use gru if set True. Use simple rnn if set False.
:type use_gru: bool
:param share_rnn_weights: Whether to share input-hidden weights between
forward and backward direction RNNs.
It is only available when use_gru=False.
:type share_rnn_weights: bool
:return: A tuple of an output unnormalized log probability layer ( :return: A tuple of an output unnormalized log probability layer (
before softmax) and a ctc cost layer. before softmax) and a ctc cost layer.
:rtype: tuple of LayerOutput :rtype: tuple of LayerOutput
...@@ -157,7 +250,11 @@ def deep_speech2(audio_data, ...@@ -157,7 +250,11 @@ def deep_speech2(audio_data,
block_y=conv_group_height) block_y=conv_group_height)
# rnn group # rnn group
rnn_group_output = rnn_group( rnn_group_output = rnn_group(
input=conv2seq, size=rnn_size, num_stacks=num_rnn_layers) input=conv2seq,
size=rnn_size,
num_stacks=num_rnn_layers,
use_gru=use_gru,
share_rnn_weights=share_rnn_weights)
fc = paddle.layer.fc( fc = paddle.layer.fc(
input=rnn_group_output, input=rnn_group_output,
size=dict_size + 1, size=dict_size + 1,
......
#! /usr/bin/env bash
source ../../utils/utility.sh
URL='http://cloud.dlnel.org/filepub/?uuid=6c83b9d8-3255-4adf-9726-0fe0be3d0274'
MD5=28521a58552885a81cf92a1e9b133a71
TARGET=./aishell_model.tar.gz
echo "Download Aishell model ..."
download $URL $MD5 $TARGET
if [ $? -ne 0 ]; then
echo "Fail to download Aishell model!"
exit 1
fi
tar -zxvf $TARGET
exit 0
#! /usr/bin/env bash
source ../../utils/utility.sh
URL='http://cloud.dlnel.org/filepub/?uuid=8e3cf742-2ff3-41ce-a49d-f6158cc06a23'
MD5=2ef08f8b608a7c555592161fc14d81a6
TARGET=./librispeech_model.tar.gz
echo "Download LibriSpeech model ..."
download $URL $MD5 $TARGET
if [ $? -ne 0 ]; then
echo "Fail to download LibriSpeech model!"
exit 1
fi
tar -zxvf $TARGET
exit 0
#! /usr/bin/env bash
source ../../utils/utility.sh
URL=http://cloud.dlnel.org/filepub/?uuid=d21861e4-4ed6-45bb-ad8e-ae417a43195e
MD5="29e02312deb2e59b3c8686c7966d4fe3"
TARGET=./zh_giga.no_cna_cmn.prune01244.klm
echo "Download language model ..."
download $URL $MD5 $TARGET
if [ $? -ne 0 ]; then
echo "Fail to download the language model!"
exit 1
fi
exit 0
#! /usr/bin/env bash
source ../../utils/utility.sh
URL=http://paddlepaddle.bj.bcebos.com/model_zoo/speech/common_crawl_00.prune01111.trie.klm
MD5="099a601759d467cd0a8523ff939819c5"
TARGET=./common_crawl_00.prune01111.trie.klm
echo "Download language model ..."
download $URL $MD5 $TARGET
if [ $? -ne 0 ]; then
echo "Fail to download the language model!"
exit 1
fi
exit 0
...@@ -2,4 +2,3 @@ scipy==0.13.1 ...@@ -2,4 +2,3 @@ scipy==0.13.1
resampy==0.1.5 resampy==0.1.5
SoundFile==0.9.0.post1 SoundFile==0.9.0.post1
python_speech_features python_speech_features
https://github.com/luotao1/kenlm/archive/master.zip
#!/bin/bash #! /usr/bin/env bash
# install python dependencies # install python dependencies
if [ -f "requirements.txt" ]; then if [ -f "requirements.txt" ]; then
...@@ -13,17 +13,26 @@ fi ...@@ -13,17 +13,26 @@ fi
python -c "import soundfile" python -c "import soundfile"
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Install package libsndfile into default system path." echo "Install package libsndfile into default system path."
curl -O "http://www.mega-nerd.com/libsndfile/files/libsndfile-1.0.28.tar.gz" wget "http://www.mega-nerd.com/libsndfile/files/libsndfile-1.0.28.tar.gz"
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Download libsndfile-1.0.28.tar.gz failed !!!" echo "Download libsndfile-1.0.28.tar.gz failed !!!"
exit 1 exit 1
fi fi
tar -zxvf libsndfile-1.0.28.tar.gz tar -zxvf libsndfile-1.0.28.tar.gz
cd libsndfile-1.0.28 cd libsndfile-1.0.28
./configure && make && make install ./configure > /dev/null && make > /dev/null && make install > /dev/null
cd .. cd ..
rm -rf libsndfile-1.0.28 rm -rf libsndfile-1.0.28
rm libsndfile-1.0.28.tar.gz rm libsndfile-1.0.28.tar.gz
fi fi
# install decoders
python -c "import swig_decoders"
if [ $? != 0 ]; then
cd decoders/swig > /dev/null
sh setup.sh
cd - > /dev/null
fi
echo "Install all dependencies successfully." echo "Install all dependencies successfully."
...@@ -3,127 +3,75 @@ from __future__ import absolute_import ...@@ -3,127 +3,75 @@ from __future__ import absolute_import
from __future__ import division from __future__ import division
from __future__ import print_function from __future__ import print_function
import distutils.util
import argparse import argparse
import multiprocessing import functools
import paddle.v2 as paddle import paddle.v2 as paddle
from data_utils.data import DataGenerator from data_utils.data import DataGenerator
from model import DeepSpeech2Model from model_utils.model import DeepSpeech2Model
from error_rate import wer from utils.error_rate import wer, cer
import utils from utils.utility import add_arguments, print_arguments
parser = argparse.ArgumentParser(description=__doc__) parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument( add_arg = functools.partial(add_arguments, argparser=parser)
"--batch_size", # yapf: disable
default=128, add_arg('batch_size', int, 128, "Minibatch size.")
type=int, add_arg('trainer_count', int, 8, "# of Trainers (CPUs or GPUs).")
help="Minibatch size for evaluation. (default: %(default)s)") add_arg('beam_size', int, 500, "Beam search width.")
parser.add_argument( add_arg('num_proc_bsearch', int, 12, "# of CPUs for beam search.")
"--trainer_count", add_arg('num_proc_data', int, 12, "# of CPUs for data preprocessing.")
default=8, add_arg('num_conv_layers', int, 2, "# of convolution layers.")
type=int, add_arg('num_rnn_layers', int, 3, "# of recurrent layers.")
help="Trainer number. (default: %(default)s)") add_arg('rnn_layer_size', int, 2048, "# of recurrent cells per layer.")
parser.add_argument( add_arg('alpha', float, 2.15, "Coef of LM for beam search.")
"--num_conv_layers", add_arg('beta', float, 0.35, "Coef of WC for beam search.")
default=2, add_arg('cutoff_prob', float, 1.0, "Cutoff probability for pruning.")
type=int, add_arg('cutoff_top_n', int, 40, "Cutoff number for pruning.")
help="Convolution layer number. (default: %(default)s)") add_arg('use_gru', bool, False, "Use GRUs instead of simple RNNs.")
parser.add_argument( add_arg('use_gpu', bool, True, "Use GPU or not.")
"--num_rnn_layers", add_arg('share_rnn_weights',bool, True, "Share input-hidden weights across "
default=3, "bi-directional RNNs. Not for GRU.")
type=int, add_arg('test_manifest', str,
help="RNN layer number. (default: %(default)s)") 'data/librispeech/manifest.test-clean',
parser.add_argument( "Filepath of manifest to evaluate.")
"--rnn_layer_size", add_arg('mean_std_path', str,
default=512, 'data/librispeech/mean_std.npz',
type=int, "Filepath of normalizer's mean & std.")
help="RNN layer cell number. (default: %(default)s)") add_arg('vocab_path', str,
parser.add_argument( 'data/librispeech/vocab.txt',
"--use_gpu", "Filepath of vocabulary.")
default=True, add_arg('model_path', str,
type=distutils.util.strtobool, './checkpoints/libri/params.latest.tar.gz',
help="Use gpu or not. (default: %(default)s)") "If None, the training starts from scratch, "
parser.add_argument( "otherwise, it resumes from the pre-trained model.")
"--num_threads_data", add_arg('lang_model_path', str,
default=multiprocessing.cpu_count() // 2, 'models/lm/common_crawl_00.prune01111.trie.klm',
type=int, "Filepath for language model.")
help="Number of cpu threads for preprocessing data. (default: %(default)s)") add_arg('decoding_method', str,
parser.add_argument( 'ctc_beam_search',
"--num_processes_beam_search", "Decoding method. Options: ctc_beam_search, ctc_greedy",
default=multiprocessing.cpu_count() // 2, choices = ['ctc_beam_search', 'ctc_greedy'])
type=int, add_arg('error_rate_type', str,
help="Number of cpu processes for beam search. (default: %(default)s)") 'wer',
parser.add_argument( "Error rate type for evaluation.",
"--mean_std_filepath", choices=['wer', 'cer'])
default='mean_std.npz', add_arg('specgram_type', str,
type=str, 'linear',
help="Manifest path for normalizer. (default: %(default)s)") "Audio feature type. Options: linear, mfcc.",
parser.add_argument( choices=['linear', 'mfcc'])
"--decode_method", # yapf: disable
default='beam_search',
type=str,
help="Method for ctc decoding, best_path or beam_search. "
"(default: %(default)s)")
parser.add_argument(
"--language_model_path",
default="lm/data/common_crawl_00.prune01111.trie.klm",
type=str,
help="Path for language model. (default: %(default)s)")
parser.add_argument(
"--alpha",
default=0.36,
type=float,
help="Parameter associated with language model. (default: %(default)f)")
parser.add_argument(
"--beta",
default=0.25,
type=float,
help="Parameter associated with word count. (default: %(default)f)")
parser.add_argument(
"--cutoff_prob",
default=0.99,
type=float,
help="The cutoff probability of pruning"
"in beam search. (default: %(default)f)")
parser.add_argument(
"--beam_size",
default=500,
type=int,
help="Width for beam search decoding. (default: %(default)d)")
parser.add_argument(
"--specgram_type",
default='linear',
type=str,
help="Feature type of audio data: 'linear' (power spectrum)"
" or 'mfcc'. (default: %(default)s)")
parser.add_argument(
"--decode_manifest_path",
default='datasets/manifest.test',
type=str,
help="Manifest path for decoding. (default: %(default)s)")
parser.add_argument(
"--model_filepath",
default='checkpoints/params.latest.tar.gz',
type=str,
help="Model filepath. (default: %(default)s)")
parser.add_argument(
"--vocab_filepath",
default='datasets/vocab/eng_vocab.txt',
type=str,
help="Vocabulary filepath. (default: %(default)s)")
args = parser.parse_args() args = parser.parse_args()
def evaluate(): def evaluate():
"""Evaluate on whole test data for DeepSpeech2.""" """Evaluate on whole test data for DeepSpeech2."""
data_generator = DataGenerator( data_generator = DataGenerator(
vocab_filepath=args.vocab_filepath, vocab_filepath=args.vocab_path,
mean_std_filepath=args.mean_std_filepath, mean_std_filepath=args.mean_std_path,
augmentation_config='{}', augmentation_config='{}',
specgram_type=args.specgram_type, specgram_type=args.specgram_type,
num_threads=args.num_threads_data) num_threads=args.num_proc_data)
batch_reader = data_generator.batch_reader_creator( batch_reader = data_generator.batch_reader_creator(
manifest_path=args.decode_manifest_path, manifest_path=args.test_manifest,
batch_size=args.batch_size, batch_size=args.batch_size,
min_batch_size=1, min_batch_size=1,
sortagrad=False, sortagrad=False,
...@@ -134,33 +82,43 @@ def evaluate(): ...@@ -134,33 +82,43 @@ def evaluate():
num_conv_layers=args.num_conv_layers, num_conv_layers=args.num_conv_layers,
num_rnn_layers=args.num_rnn_layers, num_rnn_layers=args.num_rnn_layers,
rnn_layer_size=args.rnn_layer_size, rnn_layer_size=args.rnn_layer_size,
pretrained_model_path=args.model_filepath) use_gru=args.use_gru,
pretrained_model_path=args.model_path,
share_rnn_weights=args.share_rnn_weights)
wer_sum, num_ins = 0.0, 0 # decoders only accept string encoded in utf-8
vocab_list = [chars.encode("utf-8") for chars in data_generator.vocab_list]
error_rate_func = cer if args.error_rate_type == 'cer' else wer
error_sum, num_ins = 0.0, 0
for infer_data in batch_reader(): for infer_data in batch_reader():
result_transcripts = ds2_model.infer_batch( result_transcripts = ds2_model.infer_batch(
infer_data=infer_data, infer_data=infer_data,
decode_method=args.decode_method, decoding_method=args.decoding_method,
beam_alpha=args.alpha, beam_alpha=args.alpha,
beam_beta=args.beta, beam_beta=args.beta,
beam_size=args.beam_size, beam_size=args.beam_size,
cutoff_prob=args.cutoff_prob, cutoff_prob=args.cutoff_prob,
vocab_list=data_generator.vocab_list, cutoff_top_n=args.cutoff_top_n,
language_model_path=args.language_model_path, vocab_list=vocab_list,
num_processes=args.num_processes_beam_search) language_model_path=args.lang_model_path,
num_processes=args.num_proc_bsearch)
target_transcripts = [ target_transcripts = [
''.join([data_generator.vocab_list[token] for token in transcript]) ''.join([data_generator.vocab_list[token] for token in transcript])
for _, transcript in infer_data for _, transcript in infer_data
] ]
for target, result in zip(target_transcripts, result_transcripts): for target, result in zip(target_transcripts, result_transcripts):
wer_sum += wer(target, result) error_sum += error_rate_func(target, result)
num_ins += 1 num_ins += 1
print("WER (%d/?) = %f" % (num_ins, wer_sum / num_ins)) print("Error rate [%s] (%d/?) = %f" %
print("Final WER (%d/%d) = %f" % (num_ins, num_ins, wer_sum / num_ins)) (args.error_rate_type, num_ins, error_sum / num_ins))
print("Final error rate [%s] (%d/%d) = %f" %
(args.error_rate_type, num_ins, num_ins, error_sum / num_ins))
ds2_model.logger.info("finish evaluation")
def main(): def main():
utils.print_arguments(args) print_arguments(args)
paddle.init(use_gpu=args.use_gpu, trainer_count=args.trainer_count) paddle.init(use_gpu=args.use_gpu, trainer_count=args.trainer_count)
evaluate() evaluate()
......
"""Test Setup."""
import unittest
import numpy as np
import os
class TestSetup(unittest.TestCase):
def test_soundfile(self):
import soundfile as sf
# floating point data is typically limited to the interval [-1.0, 1.0],
# but smaller/larger values are supported as well
data = np.array([[1.75, -1.75], [1.0, -1.0], [0.5, -0.5],
[0.25, -0.25]])
file = 'test.wav'
sf.write(file, data, 44100, format='WAV', subtype='FLOAT')
read, fs = sf.read(file)
self.assertTrue(np.all(read == data))
self.assertEqual(fs, 44100)
os.remove(file)
if __name__ == '__main__':
unittest.main()
"""Set up paths for DS2"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os.path
import sys
def add_path(path):
if path not in sys.path:
sys.path.insert(0, path)
this_dir = os.path.dirname(__file__)
# Add project path to PYTHONPATH
proj_path = os.path.join(this_dir, '..')
add_path(proj_path)
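Scripts under tools/ (such as build_vocab.py, next) import this module before any project import so that packages like data_utils resolve from the repository root; a minimal usage sketch:
import _init_paths  # must precede project imports
from data_utils.utility import read_manifest  # now importable when run from tools/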
"""Build vocabulary from manifest files.
Each item in vocabulary file is a character.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import functools
import codecs
import json
from collections import Counter
import os.path
import _init_paths
from data_utils.utility import read_manifest
from utils.utility import add_arguments, print_arguments
parser = argparse.ArgumentParser(description=__doc__)
add_arg = functools.partial(add_arguments, argparser=parser)
# yapf: disable
add_arg('count_threshold', int, 0, "Truncation threshold for char counts.")
add_arg('vocab_path', str,
'data/librispeech/vocab.txt',
"Filepath to write the vocabulary.")
add_arg('manifest_paths', str,
None,
"Filepaths of manifests for building vocabulary. "
"You can provide multiple manifest files.",
nargs='+',
required=True)
# yapf: enable
args = parser.parse_args()
def count_manifest(counter, manifest_path):
manifest_jsons = read_manifest(manifest_path)
for line_json in manifest_jsons:
for char in line_json['text']:
counter.update(char)
def main():
print_arguments(args)
counter = Counter()
for manifest_path in args.manifest_paths:
count_manifest(counter, manifest_path)
count_sorted = sorted(counter.items(), key=lambda x: x[1], reverse=True)
with codecs.open(args.vocab_path, 'w', 'utf-8') as fout:
for char, count in count_sorted:
if count < args.count_threshold: break
fout.write(char + '\n')
if __name__ == '__main__':
main()
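The resulting vocabulary file holds one character per line in descending corpus frequency, and the write loop stops at the first character whose count drops below count_threshold (so the default of 0 keeps every character seen). Since manifest_paths is declared with nargs='+', several manifests can be merged in one pass; an invocation sketch:
python tools/build_vocab.py \
--count_threshold=0 \
--vocab_path='data/librispeech/vocab.txt' \
--manifest_paths data/librispeech/manifest.train data/librispeech/manifest.dev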
...@@ -4,47 +4,35 @@ from __future__ import division ...@@ -4,47 +4,35 @@ from __future__ import division
from __future__ import print_function from __future__ import print_function
import argparse import argparse
import functools
import _init_paths
from data_utils.normalizer import FeatureNormalizer from data_utils.normalizer import FeatureNormalizer
from data_utils.augmentor.augmentation import AugmentationPipeline from data_utils.augmentor.augmentation import AugmentationPipeline
from data_utils.featurizer.audio_featurizer import AudioFeaturizer from data_utils.featurizer.audio_featurizer import AudioFeaturizer
from utils.utility import add_arguments, print_arguments
parser = argparse.ArgumentParser(
description='Computing mean and stddev for feature normalizer.') parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument( add_arg = functools.partial(add_arguments, argparser=parser)
"--specgram_type", # yapf: disable
default='linear', add_arg('num_samples', int, 2000, "# of samples for statistics.")
type=str, add_arg('specgram_type', str,
help="Feature type of audio data: 'linear' (power spectrum)" 'linear',
" or 'mfcc'. (default: %(default)s)") "Audio feature type. Options: linear, mfcc.",
parser.add_argument( choices=['linear', 'mfcc'])
"--manifest_path", add_arg('manifest_path', str,
default='datasets/manifest.train', 'data/librispeech/manifest.train',
type=str, "Filepath of manifest to compute normalizer's mean and stddev.")
help="Manifest path for computing normalizer's mean and stddev." add_arg('output_path', str,
"(default: %(default)s)") 'data/librispeech/mean_std.npz',
parser.add_argument( "Filepath of write mean and stddev to (.npz).")
"--num_samples", # yapf: disable
default=2000,
type=int,
help="Number of samples for computing mean and stddev. "
"(default: %(default)s)")
parser.add_argument(
"--augmentation_config",
default='{}',
type=str,
help="Augmentation configuration in json-format. "
"(default: %(default)s)")
parser.add_argument(
"--output_file",
default='mean_std.npz',
type=str,
help="Filepath to write mean and std to (.npz)."
"(default: %(default)s)")
args = parser.parse_args() args = parser.parse_args()
def main(): def main():
augmentation_pipeline = AugmentationPipeline(args.augmentation_config) print_arguments(args)
augmentation_pipeline = AugmentationPipeline('{}')
audio_featurizer = AudioFeaturizer(specgram_type=args.specgram_type) audio_featurizer = AudioFeaturizer(specgram_type=args.specgram_type)
def augment_and_featurize(audio_segment): def augment_and_featurize(audio_segment):
...@@ -56,7 +44,7 @@ def main(): ...@@ -56,7 +44,7 @@ def main():
manifest_path=args.manifest_path, manifest_path=args.manifest_path,
featurize_func=augment_and_featurize, featurize_func=augment_and_featurize,
num_samples=args.num_samples) num_samples=args.num_samples)
normalizer.write_to_file(args.output_file) normalizer.write_to_file(args.output_path)
if __name__ == '__main__': if __name__ == '__main__':
......
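# Hypothetical invocation of the normalizer statistics script above (the
# script path tools/compute_mean_std.py is an assumption; defaults shown):
python tools/compute_mean_std.py \
    --num_samples=2000 \
    --specgram_type='linear' \
    --manifest_path='data/librispeech/manifest.train' \
    --output_path='data/librispeech/mean_std.npz'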
#! /usr/bin/env bash
BATCH_SIZE_PER_GPU=64
MIN_DURATION=6.0
MAX_DURATION=7.0
function join_by { local IFS="$1"; shift; echo "$*"; }
for NUM_GPUS in 16 8 4 2 1
do
DEVICES=$(join_by , $(seq 0 $(($NUM_GPUS-1))))
BATCH_SIZE=$(($BATCH_SIZE_PER_GPU * $NUM_GPUS))
CUDA_VISIBLE_DEVICES=$DEVICES \
python train.py \
--batch_size=$BATCH_SIZE \
--num_passes=1 \
--test_off=True \
--trainer_count=$NUM_GPUS \
--min_duration=$MIN_DURATION \
--max_duration=$MAX_DURATION > tmp.log 2>&1
  if [ $? -ne 0 ]; then
exit 1
fi
cat tmp.log | grep "Time" | awk '{print "GPU Num: " "'"$NUM_GPUS"'" " Time: "$3}'
rm tmp.log
done
"""Beam search parameters tuning for DeepSpeech2 model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import argparse
import functools
import paddle.v2 as paddle
import _init_paths
from data_utils.data import DataGenerator
from model_utils.model import DeepSpeech2Model
from utils.error_rate import wer
from utils.utility import add_arguments, print_arguments
parser = argparse.ArgumentParser(description=__doc__)
add_arg = functools.partial(add_arguments, argparser=parser)
# yapf: disable
add_arg('num_samples', int, 100, "# of samples to infer.")
add_arg('trainer_count', int, 8, "# of Trainers (CPUs or GPUs).")
add_arg('beam_size', int, 500, "Beam search width.")
add_arg('num_proc_bsearch', int, 12, "# of CPUs for beam search.")
add_arg('num_conv_layers', int, 2, "# of convolution layers.")
add_arg('num_rnn_layers', int, 3, "# of recurrent layers.")
add_arg('rnn_layer_size', int, 2048, "# of recurrent cells per layer.")
add_arg('num_alphas', int, 14, "# of alpha candidates for tuning.")
add_arg('num_betas', int, 20, "# of beta candidates for tuning.")
add_arg('alpha_from', float, 0.1, "Where alpha starts tuning from.")
add_arg('alpha_to', float, 0.36, "Where alpha ends tuning with.")
add_arg('beta_from', float, 0.05, "Where beta starts tuning from.")
add_arg('beta_to', float, 1.0, "Where beta ends tuning with.")
add_arg('cutoff_prob', float, 0.99, "Cutoff probability for pruning.")
add_arg('use_gru', bool, False, "Use GRUs instead of simple RNNs.")
add_arg('use_gpu', bool, True, "Use GPU or not.")
add_arg('share_rnn_weights',bool, True, "Share input-hidden weights across "
"bi-directional RNNs. Not for GRU.")
add_arg('tune_manifest', str,
'data/librispeech/manifest.dev',
"Filepath of manifest to tune.")
add_arg('mean_std_path', str,
'data/librispeech/mean_std.npz',
"Filepath of normalizer's mean & std.")
add_arg('vocab_path', str,
'data/librispeech/vocab.txt',
"Filepath of vocabulary.")
add_arg('lang_model_path', str,
'models/lm/common_crawl_00.prune01111.trie.klm',
"Filepath for language model.")
add_arg('model_path', str,
'./checkpoints/libri/params.latest.tar.gz',
"If None, the training starts from scratch, "
"otherwise, it resumes from the pre-trained model.")
add_arg('error_rate_type', str,
'wer',
"Error rate type for evaluation.",
choices=['wer', 'cer'])
add_arg('specgram_type', str,
'linear',
"Audio feature type. Options: linear, mfcc.",
choices=['linear', 'mfcc'])
# yapf: enable
args = parser.parse_args()
def tune():
"""Tune parameters alpha and beta on one minibatch."""
if not args.num_alphas >= 0:
raise ValueError("num_alphas must be non-negative!")
if not args.num_betas >= 0:
raise ValueError("num_betas must be non-negative!")
data_generator = DataGenerator(
vocab_filepath=args.vocab_path,
mean_std_filepath=args.mean_std_path,
augmentation_config='{}',
specgram_type=args.specgram_type,
num_threads=1)
batch_reader = data_generator.batch_reader_creator(
manifest_path=args.tune_manifest,
batch_size=args.num_samples,
sortagrad=False,
shuffle_method=None)
tune_data = batch_reader().next()
target_transcripts = [
''.join([data_generator.vocab_list[token] for token in transcript])
for _, transcript in tune_data
]
ds2_model = DeepSpeech2Model(
vocab_size=data_generator.vocab_size,
num_conv_layers=args.num_conv_layers,
num_rnn_layers=args.num_rnn_layers,
rnn_layer_size=args.rnn_layer_size,
use_gru=args.use_gru,
pretrained_model_path=args.model_path,
share_rnn_weights=args.share_rnn_weights)
# create grid for search
cand_alphas = np.linspace(args.alpha_from, args.alpha_to, args.num_alphas)
cand_betas = np.linspace(args.beta_from, args.beta_to, args.num_betas)
params_grid = [(alpha, beta) for alpha in cand_alphas
for beta in cand_betas]
## tune parameters in loop
for alpha, beta in params_grid:
result_transcripts = ds2_model.infer_batch(
infer_data=tune_data,
decoding_method='ctc_beam_search',
beam_alpha=alpha,
beam_beta=beta,
beam_size=args.beam_size,
cutoff_prob=args.cutoff_prob,
vocab_list=data_generator.vocab_list,
language_model_path=args.lang_model_path,
num_processes=args.num_proc_bsearch)
wer_sum, num_ins = 0.0, 0
for target, result in zip(target_transcripts, result_transcripts):
wer_sum += wer(target, result)
num_ins += 1
print("alpha = %f\tbeta = %f\tWER = %f" %
(alpha, beta, wer_sum / num_ins))
def main():
print_arguments(args)
paddle.init(use_gpu=args.use_gpu, trainer_count=args.trainer_count)
tune()
if __name__ == '__main__':
main()
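# Hypothetical invocation of the tuning script above, sweeping the default
# 14 x 20 (alpha, beta) grid on the dev manifest:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python tune.py \
    --trainer_count=8 \
    --alpha_from=0.1 --alpha_to=0.36 --num_alphas=14 \
    --beta_from=0.05 --beta_to=1.0 --num_betas=20 \
    --tune_manifest='data/librispeech/manifest.dev'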
@@ -4,156 +4,92 @@ from __future__ import division
from __future__ import print_function

import argparse
import functools
import paddle.v2 as paddle
from model_utils.model import DeepSpeech2Model
from data_utils.data import DataGenerator
from utils.utility import add_arguments, print_arguments

parser = argparse.ArgumentParser(description=__doc__)
add_arg = functools.partial(add_arguments, argparser=parser)
# yapf: disable
add_arg('batch_size',       int,    256,    "Minibatch size.")
add_arg('trainer_count',    int,    8,      "# of Trainers (CPUs or GPUs).")
add_arg('num_passes',       int,    200,    "# of training epochs.")
add_arg('num_proc_data',    int,    12,     "# of CPUs for data preprocessing.")
add_arg('num_conv_layers',  int,    2,      "# of convolution layers.")
add_arg('num_rnn_layers',   int,    3,      "# of recurrent layers.")
add_arg('rnn_layer_size',   int,    2048,   "# of recurrent cells per layer.")
add_arg('num_iter_print',   int,    100,    "Every # iterations for printing "
                                            "train cost.")
add_arg('learning_rate',    float,  5e-4,   "Learning rate.")
add_arg('max_duration',     float,  27.0,   "Longest audio duration allowed.")
add_arg('min_duration',     float,  0.0,    "Shortest audio duration allowed.")
add_arg('test_off',         bool,   False,  "Turn off testing.")
add_arg('use_sortagrad',    bool,   True,   "Use SortaGrad or not.")
add_arg('use_gpu',          bool,   True,   "Use GPU or not.")
add_arg('use_gru',          bool,   False,  "Use GRUs instead of simple RNNs.")
add_arg('is_local',         bool,   True,   "Use pserver or not.")
add_arg('share_rnn_weights',bool,   True,   "Share input-hidden weights across "
                                            "bi-directional RNNs. Not for GRU.")
add_arg('train_manifest',   str,
        'data/librispeech/manifest.train',
        "Filepath of train manifest.")
add_arg('dev_manifest',     str,
        'data/librispeech/manifest.dev-clean',
        "Filepath of validation manifest.")
add_arg('mean_std_path',    str,
        'data/librispeech/mean_std.npz',
        "Filepath of normalizer's mean & std.")
add_arg('vocab_path',       str,
        'data/librispeech/vocab.txt',
        "Filepath of vocabulary.")
add_arg('init_model_path',  str,
        None,
        "If None, the training starts from scratch, "
        "otherwise, it resumes from the pre-trained model.")
add_arg('output_model_dir', str,
        "./checkpoints/libri",
        "Directory for saving checkpoints.")
add_arg('augment_conf_path',str,
        'conf/augmentation.config',
        "Filepath of augmentation configuration file (json-format).")
add_arg('specgram_type',    str,
        'linear',
        "Audio feature type. Options: linear, mfcc.",
        choices=['linear', 'mfcc'])
add_arg('shuffle_method',   str,
        'batch_shuffle_clipped',
        "Shuffle method.",
        choices=['instance_shuffle', 'batch_shuffle', 'batch_shuffle_clipped'])
# yapf: enable
args = parser.parse_args()


def train():
    """DeepSpeech2 training."""
    train_generator = DataGenerator(
        vocab_filepath=args.vocab_path,
        mean_std_filepath=args.mean_std_path,
        augmentation_config=open(args.augment_conf_path, 'r').read(),
        max_duration=args.max_duration,
        min_duration=args.min_duration,
        specgram_type=args.specgram_type,
        num_threads=args.num_proc_data)
    dev_generator = DataGenerator(
        vocab_filepath=args.vocab_path,
        mean_std_filepath=args.mean_std_path,
        augmentation_config="{}",
        specgram_type=args.specgram_type,
        num_threads=args.num_proc_data)
    train_batch_reader = train_generator.batch_reader_creator(
        manifest_path=args.train_manifest,
        batch_size=args.batch_size,
        min_batch_size=args.trainer_count,
        sortagrad=args.use_sortagrad if args.init_model_path is None else False,
        shuffle_method=args.shuffle_method)
    dev_batch_reader = dev_generator.batch_reader_creator(
        manifest_path=args.dev_manifest,
        batch_size=args.batch_size,
        min_batch_size=1,  # must be 1, but will have errors.
        sortagrad=False,
@@ -164,20 +100,24 @@ def train():
        num_conv_layers=args.num_conv_layers,
        num_rnn_layers=args.num_rnn_layers,
        rnn_layer_size=args.rnn_layer_size,
        use_gru=args.use_gru,
        pretrained_model_path=args.init_model_path,
        share_rnn_weights=args.share_rnn_weights)
    ds2_model.train(
        train_batch_reader=train_batch_reader,
        dev_batch_reader=dev_batch_reader,
        feeding_dict=train_generator.feeding,
        learning_rate=args.learning_rate,
        gradient_clipping=400,
        num_passes=args.num_passes,
        num_iterations_print=args.num_iter_print,
        output_model_dir=args.output_model_dir,
        is_local=args.is_local,
        test_off=args.test_off)


def main():
    print_arguments(args)
    paddle.init(use_gpu=args.use_gpu, trainer_count=args.trainer_count)
    train()

...
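# Hypothetical single-node invocation of the training script above
# (defaults shown; adjust trainer_count to the number of visible GPUs):
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python train.py \
    --batch_size=256 \
    --trainer_count=8 \
    --num_passes=200 \
    --use_gpu=True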
"""Parameters tuning for DeepSpeech2 model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import distutils.util
import argparse
import multiprocessing
import paddle.v2 as paddle
from data_utils.data import DataGenerator
from model import DeepSpeech2Model
from error_rate import wer
import utils
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--batch_size",
default=128,
type=int,
help="Minibatch size for parameters tuning. (default: %(default)s)")
parser.add_argument(
"--num_conv_layers",
default=2,
type=int,
help="Convolution layer number. (default: %(default)s)")
parser.add_argument(
"--num_rnn_layers",
default=3,
type=int,
help="RNN layer number. (default: %(default)s)")
parser.add_argument(
"--rnn_layer_size",
default=512,
type=int,
help="RNN layer cell number. (default: %(default)s)")
parser.add_argument(
"--use_gpu",
default=True,
type=distutils.util.strtobool,
help="Use gpu or not. (default: %(default)s)")
parser.add_argument(
"--trainer_count",
default=8,
type=int,
help="Trainer number. (default: %(default)s)")
parser.add_argument(
"--num_threads_data",
default=1,
type=int,
help="Number of cpu threads for preprocessing data. (default: %(default)s)")
parser.add_argument(
"--num_processes_beam_search",
default=multiprocessing.cpu_count(),
type=int,
help="Number of cpu processes for beam search. (default: %(default)s)")
parser.add_argument(
"--specgram_type",
default='linear',
type=str,
help="Feature type of audio data: 'linear' (power spectrum)"
" or 'mfcc'. (default: %(default)s)")
parser.add_argument(
"--mean_std_filepath",
default='mean_std.npz',
type=str,
help="Manifest path for normalizer. (default: %(default)s)")
parser.add_argument(
"--tune_manifest_path",
default='datasets/manifest.dev',
type=str,
help="Manifest path for tuning. (default: %(default)s)")
parser.add_argument(
"--model_filepath",
default='checkpoints/params.latest.tar.gz',
type=str,
help="Model filepath. (default: %(default)s)")
parser.add_argument(
"--vocab_filepath",
default='datasets/vocab/eng_vocab.txt',
type=str,
help="Vocabulary filepath. (default: %(default)s)")
parser.add_argument(
"--beam_size",
default=500,
type=int,
help="Width for beam search decoding. (default: %(default)d)")
parser.add_argument(
"--language_model_path",
default="lm/data/common_crawl_00.prune01111.trie.klm",
type=str,
help="Path for language model. (default: %(default)s)")
parser.add_argument(
"--alpha_from",
default=0.1,
type=float,
help="Where alpha starts from. (default: %(default)f)")
parser.add_argument(
"--num_alphas",
default=14,
type=int,
help="Number of candidate alphas. (default: %(default)d)")
parser.add_argument(
"--alpha_to",
default=0.36,
type=float,
help="Where alpha ends with. (default: %(default)f)")
parser.add_argument(
"--beta_from",
default=0.05,
type=float,
help="Where beta starts from. (default: %(default)f)")
parser.add_argument(
"--num_betas",
default=20,
    type=int,
help="Number of candidate betas. (default: %(default)d)")
parser.add_argument(
"--beta_to",
default=1.0,
type=float,
help="Where beta ends with. (default: %(default)f)")
parser.add_argument(
"--cutoff_prob",
default=0.99,
type=float,
help="The cutoff probability of pruning"
"in beam search. (default: %(default)f)")
args = parser.parse_args()
def tune():
"""Tune parameters alpha and beta for the CTC beam search decoder
incrementally. The optimal parameters found so far are printed in real time
at the end of each minibatch, until all of the development data has been
taken into account. The tuning process can be terminated at any time once
the two parameters become stable.
"""
if not args.num_alphas >= 0:
raise ValueError("num_alphas must be non-negative!")
if not args.num_betas >= 0:
raise ValueError("num_betas must be non-negative!")
data_generator = DataGenerator(
vocab_filepath=args.vocab_filepath,
mean_std_filepath=args.mean_std_filepath,
augmentation_config='{}',
specgram_type=args.specgram_type,
num_threads=args.num_threads_data)
batch_reader = data_generator.batch_reader_creator(
manifest_path=args.tune_manifest_path,
batch_size=args.batch_size,
sortagrad=False,
shuffle_method=None)
ds2_model = DeepSpeech2Model(
vocab_size=data_generator.vocab_size,
num_conv_layers=args.num_conv_layers,
num_rnn_layers=args.num_rnn_layers,
rnn_layer_size=args.rnn_layer_size,
pretrained_model_path=args.model_filepath)
# create grid for search
cand_alphas = np.linspace(args.alpha_from, args.alpha_to, args.num_alphas)
cand_betas = np.linspace(args.beta_from, args.beta_to, args.num_betas)
params_grid = [(alpha, beta) for alpha in cand_alphas
for beta in cand_betas]
wer_sum = [0.0 for i in xrange(len(params_grid))]
ave_wer = [0.0 for i in xrange(len(params_grid))]
num_ins = 0
num_batches = 0
## incremental tuning parameters over multiple batches
for infer_data in batch_reader():
target_transcripts = [
''.join([data_generator.vocab_list[token] for token in transcript])
for _, transcript in infer_data
]
num_ins += len(target_transcripts)
# grid search
for index, (alpha, beta) in enumerate(params_grid):
result_transcripts = ds2_model.infer_batch(
infer_data=infer_data,
decode_method='beam_search',
beam_alpha=alpha,
beam_beta=beta,
beam_size=args.beam_size,
cutoff_prob=args.cutoff_prob,
vocab_list=data_generator.vocab_list,
language_model_path=args.language_model_path,
num_processes=args.num_processes_beam_search)
for target, result in zip(target_transcripts, result_transcripts):
wer_sum[index] += wer(target, result)
ave_wer[index] = wer_sum[index] / num_ins
print("alpha = %f, beta = %f, WER = %f" %
(alpha, beta, ave_wer[index]))
        # output on-line tuning result at the end of the current batch
ave_wer_min = min(ave_wer)
min_index = ave_wer.index(ave_wer_min)
print("Finish batch %d, optimal (alpha, beta, WER) = (%f, %f, %f)\n" %
(num_batches, params_grid[min_index][0],
params_grid[min_index][1], ave_wer_min))
num_batches += 1
def main():
utils.print_arguments(args)
paddle.init(use_gpu=args.use_gpu, trainer_count=args.trainer_count)
tune()
if __name__ == '__main__':
main()
@@ -10,47 +10,54 @@ import numpy as np
def _levenshtein_distance(ref, hyp):
    """Levenshtein distance is a string metric for measuring the difference
    between two sequences. Informally, the levenshtein distance is defined as
    the minimum number of single-character edits (substitutions, insertions or
    deletions) required to change one word into the other. We can naturally
    extend the edits to word level when calculating levenshtein distance for
    two sentences.
    """
    m = len(ref)
    n = len(hyp)

    # special case
    if ref == hyp:
        return 0
    if m == 0:
        return n
    if n == 0:
        return m

    if m < n:
        ref, hyp = hyp, ref
        m, n = n, m

    # use O(min(m, n)) space
    distance = np.zeros((2, n + 1), dtype=np.int32)

    # initialize distance matrix
    for j in xrange(n + 1):
        distance[0][j] = j

    # calculate levenshtein distance
    for i in xrange(1, m + 1):
        prev_row_idx = (i - 1) % 2
        cur_row_idx = i % 2
        distance[cur_row_idx][0] = i
        for j in xrange(1, n + 1):
            if ref[i - 1] == hyp[j - 1]:
                distance[cur_row_idx][j] = distance[prev_row_idx][j - 1]
            else:
                s_num = distance[prev_row_idx][j - 1] + 1
                i_num = distance[cur_row_idx][j - 1] + 1
                d_num = distance[prev_row_idx][j] + 1
                distance[cur_row_idx][j] = min(s_num, i_num, d_num)

    return distance[m % 2][n]


def wer(reference, hypothesis, ignore_case=False, delimiter=' '):
    """Calculate word error rate (WER). WER compares reference text and
    hypothesis text in word-level. WER is defined as:

    .. math::
@@ -65,8 +72,8 @@ def wer(reference, hypothesis, ignore_case=False, delimiter=' '):
        Iw is the number of words inserted,
        Nw is the number of words in the reference

    We can use levenshtein distance to calculate WER. Note that empty items
    will be removed when splitting sentences by delimiter.

    :param reference: The reference sentence.
    :type reference: basestring
@@ -95,7 +102,7 @@ def wer(reference, hypothesis, ignore_case=False, delimiter=' '):
    return wer


def cer(reference, hypothesis, ignore_case=False, remove_space=False):
    """Calculate character error rate (CER). CER compares reference text and
    hypothesis text in char-level. CER is defined as:
@@ -111,10 +118,10 @@ def cer(reference, hypothesis, ignore_case=False):
        Ic is the number of characters inserted
        Nc is the number of characters in the reference

    We can use levenshtein distance to calculate CER. Chinese input should be
    encoded to unicode. Note that the leading and trailing space characters
    will be truncated and multiple consecutive space characters in a sentence
    will be replaced by one space character.

    :param reference: The reference sentence.
    :type reference: basestring
@@ -122,6 +129,8 @@ def cer(reference, hypothesis, ignore_case=False):
    :type hypothesis: basestring
    :param ignore_case: Whether case-sensitive or not.
    :type ignore_case: bool
    :param remove_space: Whether to remove internal space characters.
    :type remove_space: bool
    :return: Character error rate.
    :rtype: float
    :raises ValueError: If the reference length is zero.
@@ -130,8 +139,12 @@ def cer(reference, hypothesis, ignore_case=False):
        reference = reference.lower()
        hypothesis = hypothesis.lower()

    join_char = ' '
    if remove_space:
        join_char = ''

    reference = join_char.join(filter(None, reference.split(' ')))
    hypothesis = join_char.join(filter(None, hypothesis.split(' ')))
    if len(reference) == 0:
        raise ValueError("Length of reference should be greater than 0.")
...
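# Minimal usage sketch of the wer()/cer() helpers above; the expected values
# follow the unit tests below. Both rates are edit_distance / reference_length,
# computed over words for WER and over characters for CER.
from utils.error_rate import wer, cer

print(wer('i love you', 'i live you'))                  # 1 sub / 3 words ~= 0.3333
print(cer('werewolf', 'weae wolf'))                     # 2 edits / 8 chars = 0.25
print(cer('werewolf', 'weae wolf', remove_space=True))  # 1 edit / 8 chars = 0.125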
@@ -5,22 +5,60 @@ from __future__ import division
from __future__ import print_function

import unittest
from utils import error_rate


class TestParse(unittest.TestCase):
    def test_wer_1(self):
        ref = 'i UM the PHONE IS i LEFT THE portable PHONE UPSTAIRS last night'
        hyp = 'i GOT IT TO the FULLEST i LOVE TO portable FROM OF STORES last '\
            'night'
        word_error_rate = error_rate.wer(ref, hyp)
        self.assertTrue(abs(word_error_rate - 0.769230769231) < 1e-6)

    def test_wer_2(self):
        ref = 'as any in england i would say said gamewell proudly that is '\
            'in his day'
        hyp = 'as any in england i would say said came well proudly that is '\
            'in his day'
        word_error_rate = error_rate.wer(ref, hyp)
        self.assertTrue(abs(word_error_rate - 0.1333333) < 1e-6)

    def test_wer_3(self):
        ref = 'the lieutenant governor lilburn w boggs afterward governor '\
            'was a pronounced mormon hater and throughout the period of '\
            'the troubles he manifested sympathy with the persecutors'
        hyp = 'the lieutenant governor little bit how bags afterward '\
            'governor was a pronounced warman hater and throughout the '\
            'period of th troubles he manifests sympathy with the '\
            'persecutors'
        word_error_rate = error_rate.wer(ref, hyp)
        self.assertTrue(abs(word_error_rate - 0.2692307692) < 1e-6)

    def test_wer_4(self):
        ref = 'the wood flamed up splendidly under the large brewing copper '\
            'and it sighed so deeply'
        hyp = 'the wood flame do splendidly under the large brewing copper '\
            'and its side so deeply'
        word_error_rate = error_rate.wer(ref, hyp)
        self.assertTrue(abs(word_error_rate - 0.2666666667) < 1e-6)

    def test_wer_5(self):
        ref = 'all the morning they trudged up the mountain path and at noon '\
            'unc and ojo sat on a fallen tree trunk and ate the last of '\
            'the bread which the old munchkin had placed in his pocket'
        hyp = 'all the morning they trudged up the mountain path and at noon '\
            'unc in ojo sat on a fallen tree trunk and ate the last of '\
            'the bread which the old munchkin had placed in his pocket'
        word_error_rate = error_rate.wer(ref, hyp)
        self.assertTrue(abs(word_error_rate - 0.027027027) < 1e-6)

    def test_wer_6(self):
        ref = 'i UM the PHONE IS i LEFT THE portable PHONE UPSTAIRS last night'
        word_error_rate = error_rate.wer(ref, ref)
        self.assertEqual(word_error_rate, 0.0)

    def test_wer_7(self):
        ref = ' '
        hyp = 'Hypothesis sentence'
        with self.assertRaises(ValueError):
@@ -33,22 +71,40 @@ class TestParse(unittest.TestCase):
        self.assertTrue(abs(char_error_rate - 0.25) < 1e-6)

    def test_cer_2(self):
        ref = 'werewolf'
        hyp = 'weae wolf'
        char_error_rate = error_rate.cer(ref, hyp, remove_space=True)
        self.assertTrue(abs(char_error_rate - 0.125) < 1e-6)

    def test_cer_3(self):
        ref = 'were wolf'
        hyp = 'were   wolf'
        char_error_rate = error_rate.cer(ref, hyp)
        self.assertTrue(abs(char_error_rate - 0.0) < 1e-6)

    def test_cer_4(self):
        ref = 'werewolf'
        char_error_rate = error_rate.cer(ref, ref)
        self.assertEqual(char_error_rate, 0.0)

    def test_cer_5(self):
        ref = u'我是中国人'
        hyp = u'我是 美洲人'
        char_error_rate = error_rate.cer(ref, hyp)
        self.assertTrue(abs(char_error_rate - 0.6) < 1e-6)

    def test_cer_6(self):
        ref = u'我 是 中 国 人'
        hyp = u'我 是 美 洲 人'
        char_error_rate = error_rate.cer(ref, hyp, remove_space=True)
        self.assertTrue(abs(char_error_rate - 0.4) < 1e-6)

    def test_cer_7(self):
        ref = u'我是中国人'
        char_error_rate = error_rate.cer(ref, ref)
        self.assertEqual(char_error_rate, 0.0)

    def test_cer_8(self):
        ref = ''
        hyp = 'Hypothesis'
        with self.assertRaises(ValueError):
...
@@ -3,6 +3,8 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import distutils.util


def print_arguments(args):
    """Print argparse's arguments.
@@ -10,16 +12,36 @@ def print_arguments(args):
    Usage:

    .. code-block:: python

        parser = argparse.ArgumentParser()
        parser.add_argument("name", default="John", type=str, help="User name.")
        args = parser.parse_args()
        print_arguments(args)

    :param args: Input argparse.Namespace for printing.
    :type args: argparse.Namespace
    """
    print("----------- Configuration Arguments -----------")
    for arg, value in sorted(vars(args).iteritems()):
        print("%s: %s" % (arg, value))
    print("------------------------------------------------")


def add_arguments(argname, type, default, help, argparser, **kwargs):
    """Add argparse's argument.

    Usage:

    .. code-block:: python

        parser = argparse.ArgumentParser()
        add_arguments("name", str, "John", "User name.", parser)
        args = parser.parse_args()
    """
    type = distutils.util.strtobool if type == bool else type
    argparser.add_argument(
        "--" + argname,
        default=default,
        type=type,
        help=help + ' Default: %(default)s.',
        **kwargs)
download() {
    URL=$1
    MD5=$2
    TARGET=$3

    # Skip the download if the target already exists with the expected MD5.
    if [ -e "$TARGET" ]; then
        md5_result=`md5sum "$TARGET" | awk -F[' '] '{print $1}'`
        if [ "$MD5" == "$md5_result" ]; then
            echo "$TARGET already exists, download skipped."
            return 0
        fi
    fi

    wget -c "$URL" -O "$TARGET"
    if [ $? -ne 0 ]; then
        return 1
    fi

    # Verify the checksum of the freshly downloaded file.
    md5_result=`md5sum "$TARGET" | awk -F[' '] '{print $1}'`
    if [ ! "$MD5" == "$md5_result" ]; then
        return 1
    fi
}
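# Hypothetical usage of download() above: fetch a file and verify its MD5.
# The URL and checksum below are placeholders, not real artifacts:
download 'http://example.com/model.tar.gz' \
         '0123456789abcdef0123456789abcdef' \
         'model.tar.gz' || { echo "Download or MD5 check failed."; exit 1; }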
# Deep Structured Semantic Models (DSSM)

DSSM uses a DNN to learn low-dimensional representation vectors for text in a continuous semantic space and to model the semantic similarity between two sentences.

This example demonstrates how to implement a generic DSSM model with PaddlePaddle for modeling the semantic similarity between two strings. The implementation supports a generic data format, so the model can be applied to real-world scenarios simply by replacing the data.
## Background

DSSM \[[1](#references)\] is a classic semantic model proposed by Microsoft Research in 2013 for learning the semantic distance between two texts. Broadly, the model also generalizes to scenarios such as:

1. CTR prediction, measuring the degree of association between a user search query and candidate web pages (documents).
2. Text relevance, measuring the degree of semantic correlation between two strings.
3. Recommendation, measuring the degree of association between a user and a recommended item.

DSSM has evolved into a framework that naturally models the distance between two records. For text relevance, for example, the semantic distance can be described by cosine similarity; for ranking search-engine results, a ranking model can be trained by attaching a rank loss on top of DSSM.
## Model Overview

In the original paper \[[1](#references)\], the DSSM model measures the latent semantic relation between a user search query and a set of documents. The model structure is as follows:

<p align="center">
<img src="./images/dssm.png"/><br/><br/>
Figure 1. Original DSSM architecture
</p>

Its core ideas are to **use a DNN to map high-dimensional feature vectors into continuous vectors in a low-dimensional space (the red boxes in the figure)** and to **measure the semantic relevance between the query and each candidate document with cosine similarity at the top layer**.

For the loss at the topmost layer, the original model uses a negative-sampling scheme similar to the one in Word2Vec: for each query, one positive example $D^+$ and four negative examples $D^-$ are drawn, the conditional probability of the positive document is computed over them, and the log-likelihood serves as the loss. This corresponds to the $P(D_1|Q)$ structure in Figure 1; see the original paper for details.
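Concretely, following the original paper (with $\gamma$ a smoothing factor of the softmax), the posterior probability of a document given the query, and the loss minimized over clicked (query, document) pairs, can be written as:

$$P(D|Q) = \frac{\exp\left(\gamma \cos(Q, D)\right)}{\sum_{D' \in \mathbf{D}} \exp\left(\gamma \cos(Q, D')\right)}, \qquad L = -\log \prod_{(Q, D^+)} P(D^+|Q)$$

where $\mathbf{D}$ consists of the positive example $D^+$ and the sampled negatives $D^-$.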
With subsequent refinements, the DSSM structure was simplified \[[3](#references)\] into:

<p align="center">
<img src="./images/dssm2.png" width="600"/><br/><br/>
Figure 2. Generic DSSM architecture
</p>

The blank boxes in the figure can be replaced by any model, such as fully connected layers (FC), a convolutional network (CNN), or a recurrent network (RNN). This structure is designed specifically to measure the semantic distance between two elements (for example, two strings).

In practice, the DSSM model serves as a basic building block that is combined with different loss functions to implement a concrete task, for example:

- In learning to rank, attach a pairwise rank loss to the structure in Figure 2 to obtain a ranking model.
- In CTR prediction, treat click/no-click as 0/1 binary classification and attach a cross-entropy loss to obtain a classification model.
- When a pair of strings needs a score, use the cosine similarity as the similarity measure to obtain a regression model.

This example aims to provide a fairly general, application-oriented solution. The supported task types are:

- Classification
- Regression within the range [-1, 1]
- Pairwise-Rank

For the network that produces the low-dimensional semantic vectors, three structures are supported:

- FC, multi-layer fully connected layers
- CNN, convolutional neural network
- RNN, recurrent neural network
## Model Implementation

The DSSM model can be implemented in three parts: the left and right DNN, and the loss function on top of them. In complex tasks the two DNN structures can differ; in the original paper, for instance, they learn the semantic vectors of the query and of the documents respectively, and since the two kinds of data differ, customizing each DNN structure accordingly is recommended.

For simplicity and generality, this example uses the same structure for both DNNs, so there are only the three options FC, CNN, and RNN.

Three kinds of loss functions are supported: classification, regression, and ranking. For the regression and ranking losses, the matching degree between the left and right sides is computed by cosine similarity (cossim); for classification, the predicted class distribution is computed by softmax.
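For reference, the cosine similarity used below for two semantic vectors $a$ and $b$ is

$$\mathrm{cossim}(a, b) = \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert} \in [-1, 1],$$

and its bounded range is also why the regression task targets values in [-1, 1].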
Many of these topics are covered in detail in other tutorials, for example:

- For text feature extraction with CNN and FC, see [text classification](https://github.com/PaddlePaddle/models/blob/develop/text_classification/README.md#模型详解)
- For RNN/GRU, see [Machine Translation](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.md#gated-recurrent-unit-gru)
- For pairwise rank, i.e. learning to rank, see [learn to rank](https://github.com/PaddlePaddle/models/blob/develop/ltr/README.md)

The underlying principles are not repeated here; the rest of this document focuses on implementing these structures with PaddlePaddle.

As shown in Figure 3, the regression and classification models have very similar structures:

<p align="center">
<img src="./images/dssm3.jpg"/><br/><br/>
Figure 3. DSSM for regression or classification
</p>

The most important components are the word embeddings, the two low-dimensional vector learners `(1)` and `(2)` in the figure (each can be implemented with any of RNN/CNN/FC), and the loss function on top.

The Pairwise Rank structure is more complex: it roughly contains two copies of the structure in Figure 3, as shown in Figure 4, plus the corresponding loss function:

- The overall idea is to use the same source to score the left and right targets separately, `(a)` and `(b)`; the learning objective is the order relation between (a) and (b).
- `(a)` and `(b)` are similar to the structure in Figure 3 and score a (source, target) pair.
- `(1)` and `(2)` are actually shared and represent the same source; the figure unfolds them into two copies for clarity.

<p align="center">
<img src="./images/dssm2.jpg"/><br/><br/>
Figure 4. DSSM for Pairwise Rank
</p>

Below are the concrete implementations of each part; all the code is contained in `./network_conf.py`.
### Creating the word embedding table for text
```python
def create_embedding(self, input, prefix=''):
'''
Create an embedding table whose name has a `prefix`.
'''
logger.info("create embedding table [%s] which dimention is %d" %
(prefix, self.dnn_dims[0]))
emb = paddle.layer.embedding(
input=input,
size=self.dnn_dims[0],
param_attr=ParamAttr(name='%s_emb.w' % prefix))
return emb
```
Since the input to the embedding table is the list of word IDs of a sentence, the embedding layer outputs a sequence of word vectors.

### CNN implementation
```python
def create_cnn(self, emb, prefix=''):
'''
A multi-layer CNN.
@emb: paddle.layer
output of the embedding layer
@prefix: str
prefix of layers' names, used to share parameters between more than one `cnn` parts.
'''
def create_conv(context_len, hidden_size, prefix):
key = "%s_%d_%d" % (prefix, context_len, hidden_size)
conv = paddle.networks.sequence_conv_pool(
input=emb,
context_len=context_len,
hidden_size=hidden_size,
# set parameter attr for parameter sharing
context_proj_param_attr=ParamAttr(name=key + 'contex_proj.w'),
fc_param_attr=ParamAttr(name=key + '_fc.w'),
fc_bias_attr=ParamAttr(name=key + '_fc.b'),
pool_bias_attr=ParamAttr(name=key + '_pool.b'))
return conv
logger.info('create a sequence_conv_pool which context width is 3')
conv_3 = create_conv(3, self.dnn_dims[1], "cnn")
logger.info('create a sequence_conv_pool which context width is 4')
conv_4 = create_conv(4, self.dnn_dims[1], "cnn")
return conv_3, conv_4
```
The CNN takes the sequence of word vectors produced by the embedding table, captures the key information of the original sentence through convolution and pooling, and finally outputs a semantic vector (which can be regarded as a sentence vector).

In this implementation, the sentence vectors learned by CNNs with window sizes 3 and 4 are summed element-wise to obtain the final sentence vector.
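`create_cnn` returns the two convolution outputs separately; a minimal sketch of the element-wise sum described above (this combination step is an assumption for illustration, not a quote from `network_conf.py`):

```python
# Hypothetical combination step: sum the window-3 and window-4 sentence
# vectors element-wise to obtain the final sentence vector.
sent_vec = paddle.layer.addto(input=[conv_3, conv_4])
```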
### RNN implementation

RNN is well suited to learning from variable-length sequences; using an RNN to learn sentence representations is nearly standard practice in natural language processing tasks.
```python
def create_rnn(self, emb, prefix=''):
'''
A GRU sentence vector learner.
'''
gru = paddle.layer.gru_memory(input=emb,)
sent_vec = paddle.layer.last_seq(gru)
return sent_vec
```
### FC implementation
```python
def create_fc(self, emb, prefix=''):
'''
A multi-layer fully connected neural networks.
@emb: paddle.layer
output of the embedding layer
@prefix: str
prefix of layers' names, used to share parameters between more than one `fc` parts.
'''
_input_layer = paddle.layer.pooling(
input=emb, pooling_type=paddle.pooling.Max())
fc = paddle.layer.fc(input=_input_layer, size=self.dnn_dims[1])
return fc
```
When building the FC variant, `paddle.layer.pooling` is first used to max-pool the word vector sequence, turning the variable-length sequence into a fixed-dimensional vector that serves as the semantic representation of the whole sentence. Max pooling reduces the influence of sentence length on the sentence representation.

### Multi-layer DNN implementation

After the semantic vector has been extracted by the CNN/RNN/FC part, several more fully connected layers can be stacked on top to form a deep DNN structure.
```python
def create_dnn(self, sent_vec, prefix):
# if more than three layers exists, a fc layer will be added.
if len(self.dnn_dims) > 1:
_input_layer = sent_vec
for id, dim in enumerate(self.dnn_dims[1:]):
name = "%s_fc_%d_%d" % (prefix, id, dim)
logger.info("create fc layer [%s] which dimention is %d" %
(name, dim))
fc = paddle.layer.fc(
input=_input_layer,
size=dim,
name=name,
act=paddle.activation.Tanh(),
param_attr=ParamAttr(name='%s.w' % name),
bias_attr=ParamAttr(name='%s.b' % name),
)
_input_layer = fc
return _input_layer
```
### Classification or regression

The structures for classification and regression are quite similar, so a single function can create both:
```python
def _build_classification_or_regression_model(self, is_classification):
'''
Build a classification/regression model, and the cost is returned.
A Classification has 3 inputs:
- source sentence
- target sentence
- classification label
'''
# prepare inputs.
assert self.class_num
source = paddle.layer.data(
name='source_input',
type=paddle.data_type.integer_value_sequence(self.vocab_sizes[0]))
target = paddle.layer.data(
name='target_input',
type=paddle.data_type.integer_value_sequence(self.vocab_sizes[1]))
label = paddle.layer.data(
name='label_input',
type=paddle.data_type.integer_value(self.class_num)
if is_classification else paddle.data_type.dense_input)
prefixs = '_ _'.split(
) if self.share_semantic_generator else 'left right'.split()
embed_prefixs = '_ _'.split(
) if self.share_embed else 'left right'.split()
word_vecs = []
for id, input in enumerate([source, target]):
x = self.create_embedding(input, prefix=embed_prefixs[id])
word_vecs.append(x)
semantics = []
for id, input in enumerate(word_vecs):
x = self.model_arch_creater(input, prefix=prefixs[id])
semantics.append(x)
concated_vector = paddle.layer.concat(semantics)
prediction = paddle.layer.fc(
input=concated_vector,
size=self.class_num,
act=paddle.activation.Softmax())
cost = paddle.layer.classification_cost(
input=prediction,
label=label) if is_classification else paddle.layer.mse_cost(
prediction, label)
return cost, prediction, label
```
### Pairwise Rank

Pairwise Rank reuses the DNN structure above: the same source is scored for similarity against two targets. If the left target scores higher, the prediction is 1; otherwise it is 0.
```python
def _build_rank_model(self):
'''
Build a pairwise rank model, and the cost is returned.
A pairwise rank model has 3 inputs:
- source sentence
- left_target sentence
- right_target sentence
- label, 1 if left_target should be sorted in front of right_target, otherwise 0.
'''
source = paddle.layer.data(
name='source_input',
type=paddle.data_type.integer_value_sequence(self.vocab_sizes[0]))
left_target = paddle.layer.data(
name='left_target_input',
type=paddle.data_type.integer_value_sequence(self.vocab_sizes[1]))
right_target = paddle.layer.data(
name='right_target_input',
type=paddle.data_type.integer_value_sequence(self.vocab_sizes[1]))
label = paddle.layer.data(
name='label_input', type=paddle.data_type.integer_value(1))
prefixs = '_ _ _'.split(
) if self.share_semantic_generator else 'source left right'.split()
embed_prefixs = '_ _'.split(
) if self.share_embed else 'source target target'.split()
word_vecs = []
for id, input in enumerate([source, left_target, right_target]):
x = self.create_embedding(input, prefix=embed_prefixs[id])
word_vecs.append(x)
semantics = []
for id, input in enumerate(word_vecs):
x = self.model_arch_creater(input, prefix=prefixs[id])
semantics.append(x)
# cossim score of source and left_target
left_score = paddle.layer.cos_sim(semantics[0], semantics[1])
# cossim score of source and right target
right_score = paddle.layer.cos_sim(semantics[0], semantics[2])
# rank cost
cost = paddle.layer.rank_cost(left_score, right_score, label=label)
# prediction = left_score - right_score
# but this operator is not supported currently.
# so AUC will not used.
return cost, None, None
```
## Data Format

Simple example data is provided in `./data`.

### Regression data format
```
# 3 fields each line:
# - source's word ids
# - target's word ids
# - target
<ids> \t <ids> \t <float>
```
For example:
```
3 6 10 \t 6 8 33 \t 0.7
6 0 \t 6 9 330 \t 0.03
```
### Classification data format
```
# 3 fields each line:
# - source's word ids
# - target's word ids
# - target
<ids> \t <ids> \t <label>
```
For example:
```
3 6 10 \t 6 8 33 \t 0
6 10 \t 8 3 1 \t 1
```
### Ranking data format
```
# 4 fields each line:
# - source's word ids
# - target1's word ids
# - target2's word ids
# - label
<ids> \t <ids> \t <ids> \t <label>
```
For example:
```
7 2 4 \t 2 10 12 \t 9 2 7 10 23 \t 0
7 2 4 \t 10 12 \t 9 2 21 23 \t 1
```
## Training

Run `python train.py -y 0 --model_arch 0` to train a classification FC model on the simple data in `./data/classification`. Other model configurations can be customized via command-line arguments; the full set of options is listed below:
```
usage: train.py [-h] [-i TRAIN_DATA_PATH] [-t TEST_DATA_PATH]
[-s SOURCE_DIC_PATH] [--target_dic_path TARGET_DIC_PATH]
[-b BATCH_SIZE] [-p NUM_PASSES] -y MODEL_TYPE -a MODEL_ARCH
[--share_network_between_source_target SHARE_NETWORK_BETWEEN_SOURCE_TARGET]
[--share_embed SHARE_EMBED] [--dnn_dims DNN_DIMS]
[--num_workers NUM_WORKERS] [--use_gpu USE_GPU] [-c CLASS_NUM]
[--model_output_prefix MODEL_OUTPUT_PREFIX]
[-g NUM_BATCHES_TO_LOG] [-e NUM_BATCHES_TO_TEST]
[-z NUM_BATCHES_TO_SAVE_MODEL]
PaddlePaddle DSSM example
optional arguments:
-h, --help show this help message and exit
-i TRAIN_DATA_PATH, --train_data_path TRAIN_DATA_PATH
path of training dataset
-t TEST_DATA_PATH, --test_data_path TEST_DATA_PATH
path of testing dataset
-s SOURCE_DIC_PATH, --source_dic_path SOURCE_DIC_PATH
path of the source's word dic
--target_dic_path TARGET_DIC_PATH
path of the target's word dic, if not set, the
`source_dic_path` will be used
-b BATCH_SIZE, --batch_size BATCH_SIZE
size of mini-batch (default:10)
-p NUM_PASSES, --num_passes NUM_PASSES
number of passes to run(default:10)
-y MODEL_TYPE, --model_type MODEL_TYPE
model type, 0 for classification, 1 for pairwise rank,
2 for regression (default: classification)
-a MODEL_ARCH, --model_arch MODEL_ARCH
model architecture, 1 for CNN, 0 for FC, 2 for RNN
--share_network_between_source_target SHARE_NETWORK_BETWEEN_SOURCE_TARGET
whether to share network parameters between source and
target
--share_embed SHARE_EMBED
whether to share word embedding between source and
target
--dnn_dims DNN_DIMS dimentions of dnn layers, default is '256,128,64,32',
which means create a 4-layer dnn, demention of each
layer is 256, 128, 64 and 32
--num_workers NUM_WORKERS
num worker threads, default 1
--use_gpu USE_GPU whether to use GPU devices (default: False)
-c CLASS_NUM, --class_num CLASS_NUM
number of categories for classification task.
--model_output_prefix MODEL_OUTPUT_PREFIX
prefix of the path for model to store, (default: ./)
-g NUM_BATCHES_TO_LOG, --num_batches_to_log NUM_BATCHES_TO_LOG
number of batches to output train log, (default: 100)
-e NUM_BATCHES_TO_TEST, --num_batches_to_test NUM_BATCHES_TO_TEST
number of batches to test, (default: 200)
-z NUM_BATCHES_TO_SAVE_MODEL, --num_batches_to_save_model NUM_BATCHES_TO_SAVE_MODEL
number of batches to output model, (default: 400)
```
The most important parameters are:

- `train_data_path` path of the training data
- `test_data_path` path of the test data (optional)
- `source_dic_path` path of the source dictionary
- `target_dic_path` path of the target dictionary
- `model_type` type of the model's loss function: 0 for classification, 1 for rank, 2 for regression
- `model_arch` model architecture: 0 for FC, 1 for CNN, 2 for RNN
- `dnn_dims` dimensions of the model's layers, `256,128,64,32` by default, i.e. a 4-layer DNN whose layers have the dimensions listed
## Inference with a trained model
```
usage: infer.py [-h] --model_path MODEL_PATH -i DATA_PATH -o
PREDICTION_OUTPUT_PATH -y MODEL_TYPE [-s SOURCE_DIC_PATH]
[--target_dic_path TARGET_DIC_PATH] -a MODEL_ARCH
[--share_network_between_source_target SHARE_NETWORK_BETWEEN_SOURCE_TARGET]
[--share_embed SHARE_EMBED] [--dnn_dims DNN_DIMS]
[-c CLASS_NUM]
PaddlePaddle DSSM infer
optional arguments:
-h, --help show this help message and exit
--model_path MODEL_PATH
path of model parameters file
-i DATA_PATH, --data_path DATA_PATH
path of the dataset to infer
-o PREDICTION_OUTPUT_PATH, --prediction_output_path PREDICTION_OUTPUT_PATH
path to output the prediction
-y MODEL_TYPE, --model_type MODEL_TYPE
model type, 0 for classification, 1 for pairwise rank,
2 for regression (default: classification)
-s SOURCE_DIC_PATH, --source_dic_path SOURCE_DIC_PATH
path of the source's word dic
--target_dic_path TARGET_DIC_PATH
path of the target's word dic, if not set, the
`source_dic_path` will be used
-a MODEL_ARCH, --model_arch MODEL_ARCH
model architecture, 1 for CNN, 0 for FC, 2 for RNN
--share_network_between_source_target SHARE_NETWORK_BETWEEN_SOURCE_TARGET
whether to share network parameters between source and
target
--share_embed SHARE_EMBED
whether to share word embedding between source and
target
--dnn_dims DNN_DIMS dimentions of dnn layers, default is '256,128,64,32',
which means create a 4-layer dnn, demention of each
layer is 256, 128, 64 and 32
-c CLASS_NUM, --class_num CLASS_NUM
number of categories for classification task.
```
Most parameters mirror those of `train.py`; the important ones are:

- `data_path` path of the data to run inference on
- `prediction_output_path` path to write the predictions to
## References
1. Huang P S, He X, Gao J, et al. Learning deep structured semantic models for web search using clickthrough data[C]//Proceedings of the 22nd ACM international conference on Conference on information & knowledge management. ACM, 2013: 2333-2338.
2. [Microsoft Learning to Rank Datasets](https://www.microsoft.com/en-us/research/project/mslr/)
3. [Gao J, He X, Deng L. Deep Learning for Web Search and Natural Language Processing[J]. Microsoft Research Technical Report, 2015.](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/wsdm2015.v3.pdf)
# Deep Structured Semantic Models (DSSM)

Deep Structured Semantic Models (DSSM) is a simple but powerful DNN-based model for matching web search queries with URL-based documents. This example demonstrates how to use PaddlePaddle to implement a generic DSSM model for modeling the semantic similarity between two strings.

## Background Introduction

DSSM \[[1](#references)\] is a classic semantic model proposed by Microsoft Research. It is used to learn the semantic distance between two texts. Broadly, the model also applies to the following scenarios:

1. CTR prediction, measuring the degree of association between a user search query and candidate web pages.
2. Text relevance, measuring the degree of semantic correlation between two strings.
3. Recommendation, measuring the degree of association between a user and a recommended item.

## Model Architecture

In the original paper \[[1](#references)\], the DSSM model uses the implicit semantic relation between a user search query and the documents as the metric. The model structure is as follows:

<p align="center">
<img src="./images/dssm.png"/><br/><br/>
Figure 1. DSSM in the original paper
</p>

With subsequent optimizations that simplify its structure \[[3](#references)\], the model becomes:

<p align="center">
<img src="./images/dssm2.png" width="600"/><br/><br/>
Figure 2. DSSM generic structure
</p>

The blank boxes in the figure can be replaced by any model, such as fully connected layers (FC), a convolutional network (CNN), or a recurrent network (RNN). The structure is designed to measure the semantic distance between two elements (such as strings).

In practice, the DSSM model serves as a basic building block, combined with different loss functions to achieve specific tasks, such as:

- In ranking systems, the pairwise rank loss function.
- In CTR estimation, binary classification on the click with a cross-entropy loss.
- In regression models, the cosine similarity used directly as the score.

## Model Implementation

At a high level, the DSSM model is composed of three components: the left and right DNN, and the loss function on top of them. In complex tasks, the structures of the left DNN and the right DNN can be different. In this example, we keep the two DNN structures the same, and we choose any of FC, CNN, and RNN for the DNN architecture.

Loss functions are supported for any of classification, regression, and ranking. For regression and ranking, the distance between the left and right DNN outputs is calculated by cosine similarity. For classification, the predicted distribution is calculated by softmax.

Related tutorials:

- For text feature extraction with CNN and FC, see [text classification](https://github.com/PaddlePaddle/models/blob/develop/text_classification/README.md#模型详解)
- For RNN/GRU, see [Machine Translation](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.md#gated-recurrent-unit-gru)
- For Pairwise Rank learning, please refer to [learn to rank](https://github.com/PaddlePaddle/models/blob/develop/ltr/README.md)

Figure 3 shows the general architecture for both regression and classification models.

<p align="center">
<img src="./images/dssm3.jpg"/><br/><br/>
Figure 3. DSSM for regression or classification
</p>

The structure of the Pairwise Rank is more complex, as shown in Figure 4.

<p align="center">
<img src="./images/dssm2.jpg"/><br/><br/>
Figure 4. DSSM for Pairwise Rank
</p>

Below, we describe how to build the DSSM model in PaddlePaddle. All the code is included in `./network_conf.py`.

### Create a word vector table for the text

```python
def create_embedding(self, input, prefix=''):
    '''
@@ -117,10 +77,9 @@ def create_embedding(self, input, prefix=''):
    return emb
```

Since the input to the embedding table is a list of the IDs of the words in a sentence, the embedding layer outputs a sequence of word vectors.

### CNN implementation

```python
def create_cnn(self, emb, prefix=''):
    '''
@@ -151,14 +110,11 @@ def create_cnn(self, emb, prefix=''):
    return conv_3, conv_4
```

The CNN accepts the word vector sequence from the embedding table, processes the data by convolution and pooling, and finally outputs a semantic vector.

### RNN implementation

RNN is suitable for learning from variable-length sequences.

```python
def create_rnn(self, emb, prefix=''):
@@ -170,7 +126,7 @@ def create_rnn(self, emb, prefix=''):
    return sent_vec
```

### FC implementation

```python
def create_fc(self, emb, prefix=''):
@@ -188,11 +144,9 @@ def create_fc(self, emb, prefix=''):
    return fc
```

In the construction of FC, we use `paddle.layer.pooling` to max-pool the word vector sequence, transforming the variable-length sequence into a fixed-dimensional vector.

### Multi-layer DNN implementation

After the semantic vector has been extracted by CNN/RNN/FC, several fully connected layers can be stacked on top to form a deep DNN structure.

```python
def create_dnn(self, sent_vec, prefix):
@@ -215,8 +169,8 @@ def create_dnn(self, sent_vec, prefix):
    return _input_layer
```

### Classification / Regression

The structures of classification and regression are similar; the function below can be used for both tasks.

```python
def _build_classification_or_regression_model(self, is_classification):
@@ -269,9 +223,9 @@ def _build_classification_or_regression_model(self, is_classification):
    prediction, label)
    return cost, prediction, label
```

### Pairwise Rank

Pairwise Rank reuses the DNN structure above: the same source is scored against two targets. If the left target scores higher, the prediction is 1; otherwise it is 0.

```python
def _build_rank_model(self):
@@ -323,10 +277,10 @@ def _build_rank_model(self):
    # so AUC will not used.
    return cost, None, None
```

## Data Format

Simple example data is provided in `./data`.

### Regression data format

```
# 3 fields each line:
# - source's word ids
@@ -335,13 +289,14 @@ def _build_rank_model(self):
<ids> \t <ids> \t <float>
```

An example of this format:

```
3 6 10 \t 6 8 33 \t 0.7
6 0 \t 6 9 330 \t 0.03
```

### Classification data format

```
# 3 fields each line:
# - source's word ids
@@ -350,7 +305,8 @@ def _build_rank_model(self):
<ids> \t <ids> \t <label>
```

An example of this format:

```
3 6 10 \t 6 8 33 \t 0
@@ -358,7 +314,7 @@ def _build_rank_model(self):
```

### Ranking data format

```
# 4 fields each line:
# - source's word ids
@@ -368,18 +324,17 @@ def _build_rank_model(self):
<ids> \t <ids> \t <ids> \t <label>
```

An example of this format:

```
7 2 4 \t 2 10 12 \t 9 2 7 10 23 \t 0
7 2 4 \t 10 12 \t 9 2 21 23 \t 1
```
``` ```
## 执行训练 ## Training
可以直接执行 `python train.py -y 0 --model_arch 0` 使用 `./data/classification` 目录里简单的数据来训练一个分类的FC模型。 We use `python train.py -y 0 --model_arch 0` with the data in `./data/classification` to train a DSSM model for classification.
其他模型结构也可以通过命令行实现定制,详细命令行参数如下
``` ```
usage: train.py [-h] [-i TRAIN_DATA_PATH] [-t TEST_DATA_PATH] usage: train.py [-h] [-i TRAIN_DATA_PATH] [-t TEST_DATA_PATH]
...@@ -438,17 +393,17 @@ optional arguments: ...@@ -438,17 +393,17 @@ optional arguments:
number of batches to output model, (default: 400) number of batches to output model, (default: 400)
``` ```
重要的参数描述如下 Parameter description:
- `train_data_path` 训练数据路径 - `train_data_path` Training data path
- `test_data_path` 测试数据路局,可以不设置 - `test_data_path` Test data path, optional
- `source_dic_path` 源字典字典路径 - `source_dic_path` Source dictionary path
- `target_dic_path`标字典路径 - `target_dic_path`Target dictionary path
- `model_type` 模型的损失函数的类型,分类0,排序1,回归2 - `model_type` The type of loss function of the model: classification 0, sort 1, regression 2
- `model_arch` 模型结构,FC 0, CNN 1, RNN 2 - `model_arch` Model structure: FC 0,CNN 1, RNN 2
- `dnn_dims` 模型各层的维度设置,默认为 `256,128,64,32`,即模型有4层,各层维度如上设置 - `dnn_dims` The dimension of each layer of the model is set, the default is `256,128,64,32`,with 4 layers.
## 用训练好的模型预测 ## To predict using the trained model
``` ```
usage: infer.py [-h] --model_path MODEL_PATH -i DATA_PATH -o usage: infer.py [-h] --model_path MODEL_PATH -i DATA_PATH -o
PREDICTION_OUTPUT_PATH -y MODEL_TYPE [-s SOURCE_DIC_PATH] PREDICTION_OUTPUT_PATH -y MODEL_TYPE [-s SOURCE_DIC_PATH]
...@@ -490,12 +445,12 @@ optional arguments: ...@@ -490,12 +445,12 @@ optional arguments:
number of categories for classification task. number of categories for classification task.
``` ```
部分参数可以参考 `train.py`,重要参数解释如下 Important parameters are
- `data_path` 需要预测的数据路径 - `data_path` Path for the data to predict
- `prediction_output_path` 预测的输出路径 - `prediction_output_path` Prediction output path
## 参考文献 ## References
1. Huang P S, He X, Gao J, et al. Learning deep structured semantic models for web search using clickthrough data[C]//Proceedings of the 22nd ACM international conference on Conference on information & knowledge management. ACM, 2013: 2333-2338. 1. Huang P S, He X, Gao J, et al. Learning deep structured semantic models for web search using clickthrough data[C]//Proceedings of the 22nd ACM international conference on Conference on information & knowledge management. ACM, 2013: 2333-2338.
2. [Microsoft Learning to Rank Datasets](https://www.microsoft.com/en-us/research/project/mslr/) 2. [Microsoft Learning to Rank Datasets](https://www.microsoft.com/en-us/research/project/mslr/)
......
# Deep Structured Semantic Models (DSSM)
Deep Structured Semantic Models (DSSM) is a simple but powerful DNN-based model for matching web search queries with URL-based documents. This example demonstrates how to use PaddlePaddle to implement a generic DSSM model for modeling the semantic similarity between two strings.
## Background Introduction
DSSM \[[1](#References)\] is a classic semantic model proposed by Microsoft Research. It is used to measure the semantic distance between two texts. Its common applications include:
1. CTR prediction: measuring the degree of association between a user search query and a candidate web page.
2. Text relevance: measuring the degree of semantic correlation between two strings.
3. Recommendation: measuring the degree of association between a user and a recommended item.
## Model Architecture
In the original paper \[[1](#References)\], the DSSM model uses the implicit semantic relation between the user search query and the document as its metric. The model structure is as follows:
<p align="center">
<img src="./images/dssm.png"/><br/><br/>
Figure 1. DSSM In the original paper
</p>
With subsequent optimizations that simplify its structure \[[3](#References)\], the model becomes:
<p align="center">
<img src="./images/dssm2.png" width="600"/><br/><br/>
Figure 2. DSSM generic structure
</p>
The blank box in the figure can be replaced by any model, such as a fully connected network (FC), a convolutional network (CNN), or an RNN. The structure is designed to measure the semantic distance between two elements (such as strings).
In practice, the DSSM model serves as a basic building block, combined with different loss functions to realize specific tasks, such as:
- In a ranking system, a pairwise rank loss.
- In CTR estimation, a cross-entropy loss for binary classification on clicks.
- In a regression model, the cosine similarity as the predicted similarity.
## Model Implementation
At a high level, the DSSM model is composed of three components: the left and right DNNs, and a loss function on top of them. In complex tasks, the structures of the left and right DNNs can differ; in this example we keep the two structures the same, and any of FC, CNN, and RNN can be chosen as the DNN architecture.
Three types of loss functions are supported: classification, regression, and ranking. In the regression and ranking losses, the matching degree between the left and right sides is computed by cosine similarity; in the classification task, the distribution over the predicted classes is computed by softmax.
Many of the components above are covered in detail in other tutorials:
- How CNN and FC extract text information: see [text classification](https://github.com/PaddlePaddle/models/blob/develop/text_classification/README.md#模型详解)
- RNN / GRU: see [Machine Translation](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.md#gated-recurrent-unit-gru)
- Pairwise Rank learning: see [learn to rank](https://github.com/PaddlePaddle/models/blob/develop/ltr/README.md)
We do not repeat these principles here; the rest of this article focuses on implementing the structures with PaddlePaddle.
Figure 3 shows the general architecture shared by the regression and classification models.
<p align="center">
<img src="./images/dssm3.jpg"/><br/><br/>
Figure 3. DSSM for REGRESSION or CLASSIFICATION
</p>
The most important components are the word embedding table and the two low-dimensional vector learners `(1)` and `(2)` in the figure (each can be implemented with any of RNN/CNN/FC), plus the loss function on top.
The structure of Pairwise Rank is more complex, as shown in Figure 4. It reuses two copies of the structure in Figure 3 and adds a rank loss on top:
- The overall idea is to score two targets `(a)` and `(b)` against the same source; the learning objective is the relative order of the two scores.
- `(a)` and `(b)` each follow the structure in Figure 3 and score one source-target pair.
- `(1)` and `(2)` share the same parameters and represent the same source; the figure unrolls them into two copies only for presentation.
<p align="center">
<img src="./images/dssm2.jpg"/><br/><br/>
Figure 4. DSSM for Pairwise Rank
</p>
Below, we describe how to implement and train the DSSM models in PaddlePaddle. All the code is included in `./network_conf.py`.
### Create a word vector table for the text
```python
def create_embedding(self, input, prefix=''):
'''
Create an embedding table whose name has a `prefix`.
'''
logger.info("create embedding table [%s] which dimention is %d" %
(prefix, self.dnn_dims[0]))
emb = paddle.layer.embedding(
input=input,
size=self.dnn_dims[0],
param_attr=ParamAttr(name='%s_emb.w' % prefix))
return emb
```
Since the input to the embedding table is the list of word IDs of a sentence, the embedding layer outputs the corresponding sequence of word vectors. A minimal illustration of the lookup follows.
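Conceptually, the lookup just indexes rows of a learnable table; a minimal NumPy sketch (illustrative only, with hypothetical sizes):
```python
import numpy as np

vocab_size, emb_dim = 1000, 8                    # hypothetical sizes
emb_table = np.random.rand(vocab_size, emb_dim)  # the learnable table

word_ids = [3, 6, 10]            # a sentence as word IDs
sent_emb = emb_table[word_ids]   # one row per word
print(sent_emb.shape)            # (3, 8): a sequence of word vectors
```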
### CNN implementation
```python
def create_cnn(self, emb, prefix=''):
'''
A multi-layer CNN.
@emb: paddle.layer
output of the embedding layer
@prefix: str
prefix of layers' names, used to share parameters between more than one `cnn` parts.
'''
def create_conv(context_len, hidden_size, prefix):
key = "%s_%d_%d" % (prefix, context_len, hidden_size)
conv = paddle.networks.sequence_conv_pool(
input=emb,
context_len=context_len,
hidden_size=hidden_size,
# set parameter attr for parameter sharing
context_proj_param_attr=ParamAttr(name=key + 'contex_proj.w'),
fc_param_attr=ParamAttr(name=key + '_fc.w'),
fc_bias_attr=ParamAttr(name=key + '_fc.b'),
pool_bias_attr=ParamAttr(name=key + '_pool.b'))
return conv
logger.info('create a sequence_conv_pool which context width is 3')
conv_3 = create_conv(3, self.dnn_dims[1], "cnn")
logger.info('create a sequence_conv_pool which context width is 4')
conv_4 = create_conv(4, self.dnn_dims[1], "cnn")
return conv_3, conv_4
```
The CNN takes the sequence of word vectors from the embedding table, captures the key information of the sentence through convolution and pooling, and finally outputs a semantic vector that can be regarded as a sentence vector. In this implementation, the sentence vectors learned by two CNNs with window sizes 3 and 4 are summed element-wise to form the final sentence vector; a sketch of the combination follows.
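`create_cnn` returns the two sentence vectors separately; one way to form their element-wise sum is `paddle.layer.addto`, as sketched here (the actual combination is done elsewhere in `network_conf.py`):
```python
# inside the model class, after create_cnn produced the two vectors:
conv_3, conv_4 = self.create_cnn(emb, prefix='cnn')
# element-wise sum of the two sentence vectors (both of size dnn_dims[1])
sent_vec = paddle.layer.addto(input=[conv_3, conv_4])
```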
### RNN implementation
RNNs are well suited to learning from variable-length sequences; using an RNN to encode sentence information is near-standard practice in natural language processing.
```python
def create_rnn(self, emb, prefix=''):
'''
A GRU sentence vector learner.
'''
gru = paddle.layer.gru_memory(input=emb,)
sent_vec = paddle.layer.last_seq(gru)
return sent_vec
```
### FC implementation
```python
def create_fc(self, emb, prefix=''):
'''
A multi-layer fully connected neural networks.
@emb: paddle.layer
output of the embedding layer
@prefix: str
prefix of layers' names, used to share parameters between more than one `fc` parts.
'''
_input_layer = paddle.layer.pooling(
input=emb, pooling_type=paddle.pooling.Max())
fc = paddle.layer.fc(input=_input_layer, size=self.dnn_dims[1])
return fc
```
In the construction of FC, we first use `paddle.layer.pooling` for max pooling over the word vector sequence, turning the variable-length sequence into a fixed-dimensional vector that represents the whole sentence; max pooling also reduces the influence of sentence length on the representation. A minimal illustration follows.
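Max pooling over time takes, for each embedding dimension, the maximum over all positions in the sentence; a minimal NumPy sketch:
```python
import numpy as np

sent_emb = np.random.rand(5, 8)  # 5 words, 8-dim word vectors
sent_vec = sent_emb.max(axis=0)  # fixed 8-dim vector, whatever the length
```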
### Multi-layer DNN implementation
After the CNN/RNN/FC extracts a semantic vector, several more fully connected layers can be stacked on top of it to form a deep DNN.
```python
def create_dnn(self, sent_vec, prefix):
# if more than three layers exists, a fc layer will be added.
if len(self.dnn_dims) > 1:
_input_layer = sent_vec
for id, dim in enumerate(self.dnn_dims[1:]):
name = "%s_fc_%d_%d" % (prefix, id, dim)
logger.info("create fc layer [%s] which dimention is %d" %
(name, dim))
fc = paddle.layer.fc(
input=_input_layer,
size=dim,
name=name,
act=paddle.activation.Tanh(),
param_attr=ParamAttr(name='%s.w' % name),
bias_attr=ParamAttr(name='%s.b' % name),
)
_input_layer = fc
return _input_layer
```
### Classification / Regression
The structures for classification and regression are similar, so the function below builds both.
```python
def _build_classification_or_regression_model(self, is_classification):
'''
Build a classification/regression model, and the cost is returned.
A Classification has 3 inputs:
- source sentence
- target sentence
- classification label
'''
# prepare inputs.
assert self.class_num
source = paddle.layer.data(
name='source_input',
type=paddle.data_type.integer_value_sequence(self.vocab_sizes[0]))
target = paddle.layer.data(
name='target_input',
type=paddle.data_type.integer_value_sequence(self.vocab_sizes[1]))
label = paddle.layer.data(
name='label_input',
type=paddle.data_type.integer_value(self.class_num)
if is_classification else paddle.data_type.dense_input)
prefixs = '_ _'.split(
) if self.share_semantic_generator else 'left right'.split()
embed_prefixs = '_ _'.split(
) if self.share_embed else 'left right'.split()
word_vecs = []
for id, input in enumerate([source, target]):
x = self.create_embedding(input, prefix=embed_prefixs[id])
word_vecs.append(x)
semantics = []
for id, input in enumerate(word_vecs):
x = self.model_arch_creater(input, prefix=prefixs[id])
semantics.append(x)
concated_vector = paddle.layer.concat(semantics)
prediction = paddle.layer.fc(
input=concated_vector,
size=self.class_num,
act=paddle.activation.Softmax())
cost = paddle.layer.classification_cost(
input=prediction,
label=label) if is_classification else paddle.layer.mse_cost(
prediction, label)
return cost, prediction, label
```
### Pairwise Rank
Pairwise Rank reuses the DNN structures above: the same source is scored against two targets. If the left target scores higher than the right one, the model predicts 1, otherwise 0.
```python
def _build_rank_model(self):
'''
Build a pairwise rank model, and the cost is returned.
A pairwise rank model has 3 inputs:
- source sentence
- left_target sentence
- right_target sentence
- label, 1 if left_target should be sorted in front of right_target, otherwise 0.
'''
source = paddle.layer.data(
name='source_input',
type=paddle.data_type.integer_value_sequence(self.vocab_sizes[0]))
left_target = paddle.layer.data(
name='left_target_input',
type=paddle.data_type.integer_value_sequence(self.vocab_sizes[1]))
right_target = paddle.layer.data(
name='right_target_input',
type=paddle.data_type.integer_value_sequence(self.vocab_sizes[1]))
label = paddle.layer.data(
name='label_input', type=paddle.data_type.integer_value(1))
prefixs = '_ _ _'.split(
) if self.share_semantic_generator else 'source left right'.split()
embed_prefixs = '_ _'.split(
) if self.share_embed else 'source target target'.split()
word_vecs = []
for id, input in enumerate([source, left_target, right_target]):
x = self.create_embedding(input, prefix=embed_prefixs[id])
word_vecs.append(x)
semantics = []
for id, input in enumerate(word_vecs):
x = self.model_arch_creater(input, prefix=prefixs[id])
semantics.append(x)
# cossim score of source and left_target
left_score = paddle.layer.cos_sim(semantics[0], semantics[1])
# cossim score of source and right target
right_score = paddle.layer.cos_sim(semantics[0], semantics[2])
# rank cost
cost = paddle.layer.rank_cost(left_score, right_score, label=label)
# prediction = left_score - right_score
# but this operator is not supported currently.
# so AUC will not used.
return cost, None, None
```
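To our understanding, `paddle.layer.rank_cost` follows a RankNet-style pairwise cost (consult the PaddlePaddle documentation for the exact definition): with left score $o_l$, right score $o_r$, and label $P \in \{0, 1\}$,
$$C = -P \cdot o + \log(1 + e^{o}), \qquad o = o_l - o_r,$$
so the cost becomes small when the sign of the score difference agrees with the label.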
## Data Format
Simple example data is provided under `./data`; the formats are described below.
### Regression data format
```
# 3 fields each line:
# - source's word ids
# - target's word ids
# - target
<ids> \t <ids> \t <float>
```
An example of this format:
```
3 6 10 \t 6 8 33 \t 0.7
6 0 \t 6 9 330 \t 0.03
```
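A minimal reader for this format might look like the following sketch (illustrative only; the helper name is hypothetical and not part of the project):
```python
def regression_reader(path):
    """Yield (source_ids, target_ids, score) triples from one data file."""
    def reader():
        with open(path) as f:
            for line in f:
                src, tgt, score = line.rstrip('\n').split('\t')
                yield ([int(i) for i in src.split()],
                       [int(i) for i in tgt.split()],
                       float(score))
    return reader
```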
### Classification data format
```
# 3 fields each line:
# - source's word ids
# - target's word ids
# - target
<ids> \t <ids> \t <label>
```
An example of this format:
```
3 6 10 \t 6 8 33 \t 0
6 10 \t 8 3 1 \t 1
```
### Ranking data format
```
# 4 fields each line:
# - source's word ids
# - target1's word ids
# - target2's word ids
# - label
<ids> \t <ids> \t <ids> \t <label>
```
An example of this format:
```
7 2 4 \t 2 10 12 \t 9 2 7 10 23 \t 0
7 2 4 \t 10 12 \t 9 2 21 23 \t 1
```
## Training
Running `python train.py -y 0 --model_arch 0` trains an FC classification model on the simple data in `./data/classification`. Other model structures and loss functions can be selected from the command line; the full options are:
```
usage: train.py [-h] [-i TRAIN_DATA_PATH] [-t TEST_DATA_PATH]
[-s SOURCE_DIC_PATH] [--target_dic_path TARGET_DIC_PATH]
[-b BATCH_SIZE] [-p NUM_PASSES] -y MODEL_TYPE -a MODEL_ARCH
[--share_network_between_source_target SHARE_NETWORK_BETWEEN_SOURCE_TARGET]
[--share_embed SHARE_EMBED] [--dnn_dims DNN_DIMS]
[--num_workers NUM_WORKERS] [--use_gpu USE_GPU] [-c CLASS_NUM]
[--model_output_prefix MODEL_OUTPUT_PREFIX]
[-g NUM_BATCHES_TO_LOG] [-e NUM_BATCHES_TO_TEST]
[-z NUM_BATCHES_TO_SAVE_MODEL]
PaddlePaddle DSSM example
optional arguments:
-h, --help show this help message and exit
-i TRAIN_DATA_PATH, --train_data_path TRAIN_DATA_PATH
path of training dataset
-t TEST_DATA_PATH, --test_data_path TEST_DATA_PATH
path of testing dataset
-s SOURCE_DIC_PATH, --source_dic_path SOURCE_DIC_PATH
path of the source's word dic
--target_dic_path TARGET_DIC_PATH
path of the target's word dic, if not set, the
`source_dic_path` will be used
-b BATCH_SIZE, --batch_size BATCH_SIZE
size of mini-batch (default:10)
-p NUM_PASSES, --num_passes NUM_PASSES
number of passes to run(default:10)
-y MODEL_TYPE, --model_type MODEL_TYPE
model type, 0 for classification, 1 for pairwise rank,
2 for regression (default: classification)
-a MODEL_ARCH, --model_arch MODEL_ARCH
model architecture, 1 for CNN, 0 for FC, 2 for RNN
--share_network_between_source_target SHARE_NETWORK_BETWEEN_SOURCE_TARGET
whether to share network parameters between source and
target
--share_embed SHARE_EMBED
whether to share word embedding between source and
target
--dnn_dims DNN_DIMS dimentions of dnn layers, default is '256,128,64,32',
which means create a 4-layer dnn, demention of each
layer is 256, 128, 64 and 32
--num_workers NUM_WORKERS
num worker threads, default 1
--use_gpu USE_GPU whether to use GPU devices (default: False)
-c CLASS_NUM, --class_num CLASS_NUM
number of categories for classification task.
--model_output_prefix MODEL_OUTPUT_PREFIX
prefix of the path for model to store, (default: ./)
-g NUM_BATCHES_TO_LOG, --num_batches_to_log NUM_BATCHES_TO_LOG
number of batches to output train log, (default: 100)
-e NUM_BATCHES_TO_TEST, --num_batches_to_test NUM_BATCHES_TO_TEST
number of batches to test, (default: 200)
-z NUM_BATCHES_TO_SAVE_MODEL, --num_batches_to_save_model NUM_BATCHES_TO_SAVE_MODEL
number of batches to output model, (default: 400)
```
Description of the important parameters:
- `train_data_path` Training data path
- `test_data_path` Test data path, optional
- `source_dic_path` Source dictionary path
- `target_dic_path` Target dictionary path; if not set, `source_dic_path` is used
- `model_type` Type of the model's loss function: 0 for classification, 1 for pairwise rank, 2 for regression
- `model_arch` Model architecture: 0 for FC, 1 for CNN, 2 for RNN
- `dnn_dims` Dimensions of the DNN layers; the default `256,128,64,32` creates a 4-layer DNN with those widths
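For example, a pairwise-rank model with an RNN backbone could be trained with a command like the following (the paths are placeholders):
```
python train.py -y 1 -a 2 \
    -i ./data/rank/train.txt \
    -s ./data/vocab.txt \
    --dnn_dims '256,128,64,32' \
    -b 32 -p 10
```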
## Prediction with a trained model
```
usage: infer.py [-h] --model_path MODEL_PATH -i DATA_PATH -o
PREDICTION_OUTPUT_PATH -y MODEL_TYPE [-s SOURCE_DIC_PATH]
[--target_dic_path TARGET_DIC_PATH] -a MODEL_ARCH
[--share_network_between_source_target SHARE_NETWORK_BETWEEN_SOURCE_TARGET]
[--share_embed SHARE_EMBED] [--dnn_dims DNN_DIMS]
[-c CLASS_NUM]
PaddlePaddle DSSM infer
optional arguments:
-h, --help show this help message and exit
--model_path MODEL_PATH
path of model parameters file
-i DATA_PATH, --data_path DATA_PATH
path of the dataset to infer
-o PREDICTION_OUTPUT_PATH, --prediction_output_path PREDICTION_OUTPUT_PATH
path to output the prediction
-y MODEL_TYPE, --model_type MODEL_TYPE
model type, 0 for classification, 1 for pairwise rank,
2 for regression (default: classification)
-s SOURCE_DIC_PATH, --source_dic_path SOURCE_DIC_PATH
path of the source's word dic
--target_dic_path TARGET_DIC_PATH
path of the target's word dic, if not set, the
`source_dic_path` will be used
-a MODEL_ARCH, --model_arch MODEL_ARCH
model architecture, 1 for CNN, 0 for FC, 2 for RNN
--share_network_between_source_target SHARE_NETWORK_BETWEEN_SOURCE_TARGET
whether to share network parameters between source and
target
--share_embed SHARE_EMBED
whether to share word embedding between source and
target
--dnn_dims DNN_DIMS dimentions of dnn layers, default is '256,128,64,32',
which means create a 4-layer dnn, demention of each
layer is 256, 128, 64 and 32
-c CLASS_NUM, --class_num CLASS_NUM
number of categories for classification task.
```
Most options mirror those of `train.py`; the important ones are:
- `data_path` Path of the data to predict on
- `prediction_output_path` Path for the prediction output
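A typical invocation might look like this (the paths are placeholders):
```
python infer.py --model_path ./models/dssm_pass_00000.tar.gz \
    -i ./data/classification/test.txt \
    -o ./predictions.txt \
    -y 0 -a 0 -c 2
```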
## References
1. Huang P S, He X, Gao J, et al. Learning deep structured semantic models for web search using clickthrough data[C]//Proceedings of the 22nd ACM international conference on Conference on information & knowledge management. ACM, 2013: 2333-2338.
2. [Microsoft Learning to Rank Datasets](https://www.microsoft.com/en-us/research/project/mslr/)
3. [Gao J, He X, Deng L. Deep Learning for Web Search and Natural Language Processing[J]. Microsoft Research Technical Report, 2015.](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/wsdm2015.v3.pdf)
...@@ -94,7 +94,7 @@ class Inferer(object):
     def __init__(self, param_path):
         logger.info("create DSSM model")
-        cost, prediction, label = DSSM(
+        prediction = DSSM(
             dnn_dims=layer_dims,
             vocab_sizes=[
                 len(load_dic(path))
...@@ -104,7 +104,8 @@ class Inferer(object):
             model_arch=args.model_arch,
             share_semantic_generator=args.share_network_between_source_target,
             class_num=args.class_num,
-            share_embed=args.share_embed)()
+            share_embed=args.share_embed,
+            is_infer=True)()
         # load parameter
         logger.info("load model parameters from %s" % param_path)
......
...@@ -102,8 +102,7 @@ class DSSM(object):
         '''
         A GRU sentence vector learner.
         '''
-        gru = paddle.layer.gru_memory(
-            input=emb, )
+        gru = paddle.networks.simple_gru(input=emb, size=256)
         sent_vec = paddle.layer.last_seq(gru)
         return sent_vec
...@@ -219,7 +218,7 @@ class DSSM(object):
             # but this operator is not supported currently.
             # so AUC will not used.
             return cost, None, label
-        return None, [left_score, right_score], label
+        return right_score
     def _build_classification_or_regression_model(self, is_classification):
         '''
...@@ -271,8 +270,8 @@ class DSSM(object):
                 input=prediction, label=label)
         else:
             prediction = paddle.layer.cos_sim(*semantics)
-            cost = paddle.layer.mse_cost(prediction, label)
+            cost = paddle.layer.square_error_cost(prediction, label)
             if not self.is_infer:
                 return cost, prediction, label
-            return None, prediction, label
+            return prediction
...@@ -19,7 +19,7 @@ def ndcg(score_list):
         n = len(score_list)
         cost = .0
         for i in range(n):
-            cost += float(score_list[i]) / np.log((i + 1) + 1)
+            cost += float(np.power(2, score_list[i])) / np.log((i + 1) + 1)
         return cost
     dcg_cost = dcg(score_list)
...@@ -28,14 +28,11 @@ def ndcg(score_list):
     return dcg_cost / ideal_cost
-class NdcgTest(unittest.TestCase):
-    def __init__(self):
-        pass
-    def runcase(self):
+class TestNDCG(unittest.TestCase):
+    def test_array(self):
         a = [3, 2, 3, 0, 1, 2]
         value = ndcg(a)
-        self.assertAlmostEqual(0.961, value, places=3)
+        self.assertAlmostEqual(0.9583, value, places=3)
 if __name__ == '__main__':
......
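As a sanity check on the corrected expected value above, the NDCG of `[3, 2, 3, 0, 1, 2]` under the exponential gain and natural-log discount used in the test can be recomputed standalone:
```python
import numpy as np

def dcg(scores):
    # gain 2^rel, discount ln(position + 1), positions starting at 1
    return sum(float(np.power(2, s)) / np.log(i + 2)
               for i, s in enumerate(scores))

scores = [3, 2, 3, 0, 1, 2]
print(round(dcg(scores) / dcg(sorted(scores, reverse=True)), 4))  # 0.9583
```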
# Memory-Enhanced Neural Machine Translation
Neural machine translation (NMT) with an **external memory** mechanism is an important extension of NMT models. It introduces a differentiable memory network as an extra memory unit, expanding the capacity or bandwidth of the model's internal working memory, helping to temporarily store and retrieve information in tasks such as translation, and improving model performance.
Such models are not limited to translation; they apply broadly to tasks that need a "large-capacity dynamic memory", e.g. machine reading comprehension / question answering, multi-turn dialogue, and long text generation. Moreover, "memory", as an essential part of cognition, can be used to strengthen many other machine learning models.
The external memory mechanism adopted here mainly refers to the **Neural Turing Machine** \[[1](#References)\] (described in detail below). Note that the Neural Turing Machine is only one attempt at simulating memory with neural networks. Memory mechanisms have long been studied, and recent deep learning work has produced a series of valuable designs, e.g. Memory Networks and Differentiable Neural Computers (DNC). This article only discusses and implements the Neural Turing Machine mechanism.
The implementation mainly follows the paper \[[2](#References)\] and assumes the reader has read and understood the [Machine Translation](https://github.com/PaddlePaddle/book/tree/develop/08.machine_translation) chapter of the PaddlePaddle Book.
## Model Overview
### A Brief Introduction to Memory Mechanisms
Memory is one of the key elements of cognition. It gives cognition coherence over time and makes complex cognition (such as reasoning and planning, as opposed to static perception) possible. A flexible memory mechanism is one of the key abilities machines need in order to mimic human intelligence.
#### Static memory
Any machine learning model natively has a certain static memory ability, whether it is parametric (the parameters are the memory) or non-parametric (the samples are the memory), a traditional SVM (the support vectors are the memory) or a neural network (the connection weights are the memory). However, this "memory" is almost entirely **static**: once training ends, the "memory" is frozen; at inference time the model is statically identical for every input and has no extra ability to remember information across time steps.
#### Dynamic memory 1 --- the hidden state vector in RNNs
When handling sequential cognition problems (e.g. natural language processing, sequential decision making), the processing at each time step depends on information from other time steps, so we need a persistent information path across time steps. Recurrent neural networks (RNNs) with a hidden state vector $h$ (or the cell state $c$ in LSTM) have exactly this kind of **dynamic memory** ability: at each time step, the model can retrieve "memories" of past time steps from $h$ or $c$, and can keep adding new information to update the memory. At inference time, different samples carry completely different memory contents ($h$ or $c$), hence "dynamic".
Admittedly, this intuitive reading of the LSTM cell state $c$ is not rigorous in several respects. From an optimization point of view, $c$ (like the linear leaky structure in GRU) is introduced to make the spectrum of the single-step Jacobian in the gradient computation closer to the identity matrix, alleviating long-range gradient decay and easing optimization. Still, it is harmless to understand it intuitively as an added "linear pathway" that makes the "memory channel" smoother: the LSTM cell state vector $c$ in Figure 1 (from [this article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)) can be viewed as exactly such a linear "memory channel" for information persistence.
<div align="center">
<img src="image/lstm_c_state.png" width=700><br/>
Figure 1. The LSTM cell state vector as a "memory channel"
</div>
#### Dynamic memory 2 --- the attention mechanism in Seq2Seq
The single vector $h$ or $c$ of the previous section, however, has a limited information bandwidth. In sequence-to-sequence generation models, the bottleneck is most visible when information moves from the encoder to the decoder: relying on one fixed-length state vector to encode an entire variable-length source sentence carries a large potential information loss.
\[[3](#References)\] proposed the attention mechanism to overcome this difficulty. During decoding, the decoder no longer depends only on the single sentence-level encoding vector from the encoder, but on a group of vectors holding memory information: each vector in the group is the encoding of one source token (e.g. $h_t$). A set of learnable attention weights dynamically allocates attention and reads information as a linear weighted sum, used for generating the symbols at different time steps of the output sequence (see the [Machine Translation](https://github.com/PaddlePaddle/book/tree/develop/08.machine_translation) chapter of the PaddlePaddle Book). This distribution of attention strength can be viewed as content-based addressing (see the addressing description in the Neural Turing Machines paper \[[1](#References)\]): the read strength at each position of the source sentence is determined by its content, yielding a "soft alignment" with the source sentence.
Compared with the single state vector of the previous section, this "vector group" holds more, and more precise, information; see the read-out formula right below. It can be viewed as an unbounded external memory that effectively widens the memory bandwidth. "Unbounded" means the number of vectors in the group is not fixed but varies with the number of source tokens. This external storage is initialized with the token state vectors when source encoding completes, and is read throughout the subsequent decoding process.
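In formula form, with attention weights $a_{t,i}$ over the token encodings $h_i$ (a standard formulation, consistent with \[[3](#References)\]), the vector read out at decoding step $t$ is the weighted sum
$$c_t = \sum_{i} a_{t,i} h_i, \qquad \sum_{i} a_{t,i} = 1.$$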
#### Dynamic memory 3 --- the Neural Turing Machine
The Turing machine and the Von Neumann architecture are the prototypes of computer architecture. Arithmetic units (e.g. algebraic computation), control units (e.g. logical branching), and memory together form the core operating mechanism of modern computers. Neural Turing Machines (NTM) \[[1](#References)\] attempt to simulate a differentiable Turing machine (i.e. one learnable by gradient descent) with neural networks, in pursuit of more complex intelligence. Ordinary machine learning models mostly ignore explicit dynamic storage; the NTM is meant to remedy exactly this potential shortcoming.
<div align="center">
<img src="image/turing_machine_cartoon.gif"><br/>
Figure 2. A cartoon of the Turing machine
</div>
The storage mechanism of a Turing machine is often pictured as read/write operations on a tape: a read head and a write head read information from, or write it onto, the tape; the movement of the tape and the actions and contents of the heads are governed by a controller (see Figure 2, from [here](http://www.worldofcomputing.net/theory/turing-machine.html)); the tape length is usually finite.
The NTM simulates the "tape" with a matrix $M \in \mathcal{R}^{n \times m}$, where $n$ is the number of memory vectors (also called memory slots) and $m$ is the length of each memory vector. A feedforward or recurrent neural network simulates the controller, which determines the read/write strength distribution over the memory slots, i.e. the addressing:
- Content-based addressing: the addressing strength depends on the content of each memory slot and on the content actually read or written;
- Location-based addressing: the addressing strength depends on the addressing strength of the previous operation (e.g. an offset);
- Hybrid addressing: a mixture of the above (e.g. linear interpolation);
(see the paper \[[1](#References)\] for details). Based on the addressing, the NTM writes into $M$ or reads out of $M$ for the rest of the network to use. The NTM architecture is shown in Figure 3, from \[[1](#References)\].
<div align="center">
<img src="image/neural_turing_machine_arch.png"><br/>
Figure 3. Neural Turing Machine architecture
</div>
Compared with the attention mechanism of the previous section, the NTM has several similarities:
- Both use an external memory in matrix (or vector-group) form.
- Both use differentiable addressing.
They differ in that:
- The NTM both reads and writes, making it a memory in the true sense; the attention mechanism initializes its storage when encoding completes (a simple cache, without a differentiable write) and afterwards only reads, never writes.
- The NTM combines content-based addressing with location-based addressing, which makes tasks requiring "sequential addressing" (e.g. sequence copying) easier; the attention mechanism uses content-based addressing only, to realize soft alignment.
- The NTM uses bounded storage; the attention mechanism uses unbounded storage.
#### Mixing the three memory mechanisms to strengthen the NMT model
Although the attention mechanism is by now standard in sequence-to-sequence models, its external storage only holds encoder information; inside the decoder, the information path still relies on the RNN's single state vector $h$ or $c$. It is therefore natural to use the NTM's external memory mechanism to supplement the decoder's internal single-vector information path.
We thus mix the three dynamic memory mechanisms above: the RNN's original state vector and the attention mechanism are kept, while a bounded external memory based on a simplified NTM is introduced to supplement the decoder's single-vector memory. The overall model follows the paper \[[2](#References)\]; the few implementation differences are detailed in [Other Discussions](#other-discussions).
One extra point worth understanding: why not simply widen the information bandwidth by increasing the dimension of $h$ or $c$?
- Increasing the dimension of $h$ or $c$ costs $O(n^2)$ storage and computation (the state-to-state transition matrix), whereas the NTM-based memory extension costs $O(n)$: addressing works at the granularity of memory slots, and the controller's parameters depend only on $m$ (the slot size).
- Memory access based on a single state vector has a single read/write strength and is therefore inherently **global**; the NTM mechanism is **local**, i.e. reads and writes essentially touch only some of the slots (the addressing distribution is sharp, with truly large strength on only a few slots). Locality makes memory access cleaner, with less interference.
### Network Structure
The overall network stacks a simplified Neural Turing Machine \[[1](#References)\] external memory module on top of the attention-based sequence-to-sequence structure (RNNsearch \[[3](#References)\]).
- The encoder is a standard (non-stacked) **bidirectional GRU**, not elaborated here.
- The decoder adopts essentially the same structure as the paper \[[2](#References)\]; see Figure 4 (adapted from \[[2](#References)\]).
<div align="center">
<img src="image/memory_enhanced_decoder.png" width=450><br/>
Figure 4. Decoder enhanced with an external memory
</div>
The decoder diagram is read as follows:
1. $M_{t-1}^B$ and $M_t^B$ are the bounded external memory matrix at the previous and the current time step. $\textrm{read}^B$ and $\textrm{write}$ are the corresponding read and write heads (with their controllers), and $r_t$ is the vector read out.
2. $M^S$ is the unbounded external memory matrix and $\textrm{read}^S$ the corresponding read head; together they implement the traditional attention mechanism, with read-out vector $c_t$.
3. $y_{t-1}$ is the decoder's previous output symbol, mapped through a word embedding as the input of the current step; $y_t$ is the probability distribution over the symbol decoded at the current step.
4. The part inside the dashed box (excluding $M^S$) can be viewed as a whole as the bounded external memory module. Apart from this part, the network is essentially identical to RNNsearch \[[3](#References)\] (with one small difference: the decoder state used for attention is refined by an extra hidden layer that also takes $y_{t-1}$ as input).
## Implementation
The algorithm is implemented in the following files:
- `external_memory.py`: implements the simplified **Neural Turing Machine** as the `ExternalMemory` class, exposing initialization and read/write functions.
- `model.py`: the model configuration functions, including the bidirectional GRU encoder (`bidirectional_gru_encoder`), the memory-enhanced decoder (`memory_enhanced_decoder`), and the memory-enhanced sequence-to-sequence model (`memory_enhanced_seq2seq`).
- `data_utils.py`: data-processing helpers.
- `train.py`: model training.
- `infer.py`: translation of a few sample sentences (model inference).
### The `ExternalMemory` class
The `ExternalMemory` class implements a generic, simplified **Neural Turing Machine**. Compared with a full NTM, it implements only content-based addressing (content addressing and interpolation) and omits location-based addressing (convolutional shift and sharpening); readers may extend it into a complete NTM.
The class is structured as follows:
```python
class ExternalMemory(object):
"""External neural memory class.
A simplified Neural Turing Machines (NTM) with only content-based
addressing (including content addressing and interpolation, but excluding
convolutional shift and sharpening). It serves as an external differential
memory bank, with differential write/read head controllers to store
and read information dynamically as needed. Simple feedforward networks are
used as the write/read head controllers.
For more details, please refer to
`Neural Turing Machines <https://arxiv.org/abs/1410.5401>`_.
"""
def __init__(self,
name,
mem_slot_size,
boot_layer,
initial_weight,
readonly=False,
enable_interpolation=True):
""" Initialization.
:param name: Memory name.
:type name: basestring
:param mem_slot_size: Size of memory slot/vector.
:type mem_slot_size: int
:param boot_layer: Boot layer for initializing the external memory. The
sequence layer has sequence length indicating the number
of memory slots, and size as memory slot size.
:type boot_layer: LayerOutput
:param initial_weight: Initializer for addressing weights.
:type initial_weight: LayerOutput
:param readonly: If true, the memory is read-only, and write function cannot
be called. Default is false.
:type readonly: bool
:param enable_interpolation: If set true, the read/write addressing weights
will be interpolated with the weights in the
last step, with the affine coefficients being
a learnable gate function.
:type enable_interpolation: bool
"""
def _content_addressing(self, key_vector):
"""Get write/read head's addressing weights via content-based addressing.
"""
pass
def _interpolation(self, head_name, key_vector, addressing_weight):
"""Interpolate between previous and current addressing weights.
"""
pass
def _get_addressing_weight(self, head_name, key_vector):
"""Get final addressing weights for read/write heads, including content
addressing and interpolation.
"""
pass
def write(self, write_key):
"""Write onto the external memory.
It cannot be called if "readonly" set True.
:param write_key: Key vector for write heads to generate writing
content and addressing signals.
:type write_key: LayerOutput
"""
pass
def read(self, read_key):
"""Read from the external memory.
:param read_key: Key vector for read head to generate addressing
signals.
:type read_key: LayerOutput
:return: Content (vector) read from external memory.
:rtype: LayerOutput
"""
pass
```
The private methods are:
- `_content_addressing`: computes the read/write heads' addressing strength via content-based addressing.
- `_interpolation`: updates the current addressing strength by interpolation addressing (a gated linear combination of the current and the previous step's addressing strength).
- `_get_addressing_weight`: invokes the two addressing operations above to obtain the final addressing strength for reading from or writing to the memory slots; the corresponding equations are summarized right below this list.
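Restating the comments in `external_memory.py` as formulas (with $k$ the key vector, $M$ the memory matrix, $\tilde{w}_t$ the pre-interpolation weight, and the softmax taken over the slots), content addressing and interpolation compute:
$$a = \tanh(WM + Uk), \qquad \tilde{w}_t = \textrm{softmax}(v^{T}a)$$
$$w_t = g \cdot \tilde{w}_t + (1-g) \cdot w_{t-1}, \qquad g = \textrm{sigmoid}(W_{g}k)$$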
The public interface is:
- `__init__`: instance initialization.
  - `name`: name of the external memory; instances created with the same name share one external memory.
  - `mem_slot_size`: dimension of a single memory slot (vector).
  - `boot_layer`: layer used to initialize the memory slots. It must be a sequence, whose length determines the number of memory slots.
  - `initial_weight`: used to initialize the addressing strength.
  - `readonly`: whether to enable read-only mode (e.g. when the instance is used as an attention mechanism). In read-only mode, `write` must not be called.
  - `enable_interpolation`: whether to enable interpolation addressing (it should be disabled when the instance is used as an attention mechanism).
- `write`: the write operation.
  - `write_key`: output of some layer; its information drives the write head's addressing and generates the actual content to write.
- `read`: the read operation.
  - `read_key`: output of some layer; its information drives the read head's addressing.
  - Returns: the information read out (directly usable as input to other layers).
Some key implementation points:
- The NTM's "external storage matrix" is implemented with `paddle.layer.memory` as a sequence: the sequence length is the number of memory slots and the sequence `size` is the slot (vector) size. The sequence relies on an external layer for initialization, and its number of slots equals the length of that layer's output sequence. Hence the class can implement not only bounded memory but also unbounded memory (a variable number of slots).
```python
self.external_memory = paddle.layer.memory(
name=self.name,
size=self.mem_slot_size,
boot_layer=boot_layer)
```
- The addressing logic of `ExternalMemory` is implemented in the private methods `_content_addressing` and `_interpolation`; reading and writing are implemented in `read` and `write`, which include the addressing steps. Read addressing and write addressing are independent, unlike \[[2](#References)\] where the two share one addressing strength; we drop that tying to keep the class more generic.
- For simplicity, the controller is not modularized separately but spread across the addressing and read/write functions. It mainly performs addressing and, during writes, generates the add/erase vectors: addressing lives in `_content_addressing` and `_interpolation`, while the add/erase vectors are generated inside `write`. All of these are simulated with simple feedforward networks; readers may try factoring the controller into its own module, or using a recurrent network as the controller.
- `ExternalMemory` supports a read-only mode, and interpolation addressing can be switched off, mainly so that the class can also serve as an equivalent implementation of the traditional attention mechanism.
- `ExternalMemory` can only be used together with `paddle.layer.recurrent_group`, inside a user-defined `step` function (see the code for examples); it cannot be used standalone.
### `memory_enhanced_seq2seq` and related functions
Three main functions are involved:
```python
def bidirectional_gru_encoder(input, size, word_vec_dim):
"""Bidirectional GRU encoder.
:params size: Hidden cell number in encoder rnn.
:type size: int
:params word_vec_dim: Word embedding size.
:type word_vec_dim: int
:return: Tuple of 1. concatenated forward and backward hidden sequence.
2. last state of backward rnn.
:rtype: tuple of LayerOutput
"""
pass
def memory_enhanced_decoder(input, target, initial_state, source_context, size,
word_vec_dim, dict_size, is_generating, beam_size):
"""GRU sequence decoder enhanced with external memory.
The "external memory" refers to two types of memories.
- Unbounded memory: i.e. attention mechanism in Seq2Seq.
- Bounded memory: i.e. external memory in NTM.
Both types of external memories can be implemented with
ExternalMemory class, and are both exploited in this enhanced RNN decoder.
The vanilla RNN/LSTM/GRU also has a narrow memory mechanism, namely the
hidden state vector (or cell state in LSTM) carrying information through
a span of sequence time, which is a successful design enriching the model
with the capability to "remember" things in the long run. However, such a
vector state is somewhat limited to a very narrow memory bandwidth. External
memory introduced here could easily increase the memory capacity with linear
complexity cost (rather than quadratic for vector state).
This enhanced decoder expands its "memory passage" through two
ExternalMemory objects:
- Bounded memory for handling long-term information exchange within decoder
itself. A direct expansion of traditional "vector" state.
- Unbounded memory for handling source language's token-wise information.
Exactly the attention mechanism over Seq2Seq.
Notice that we take the attention mechanism as a particular form of external
memory, with read-only memory bank initialized with encoder states, and a
read head with content-based addressing (attention). From this view point,
we arrive at a better understanding of attention mechanism itself and other
external memory, and a concise and unified implementation for them.
For more details about external memory, please refer to
`Neural Turing Machines <https://arxiv.org/abs/1410.5401>`_.
For more details about this memory-enhanced decoder, please
refer to `Memory-enhanced Decoder for Neural Machine Translation
<https://arxiv.org/abs/1606.02003>`_. This implementation is highly
correlated to this paper, but with minor differences (e.g. put "write"
before "read" to bypass a potential bug in V2 APIs. See
(`issue <https://github.com/PaddlePaddle/Paddle/issues/2061>`_).
"""
pass
def memory_enhanced_seq2seq(encoder_input, decoder_input, decoder_target,
hidden_size, word_vec_dim, dict_size, is_generating,
beam_size):
"""Seq2Seq Model enhanced with external memory.
The "external memory" refers to two types of memories.
- Unbounded memory: i.e. attention mechanism in Seq2Seq.
- Bounded memory: i.e. external memory in NTM.
Both types of external memories can be implemented with
ExternalMemory class, and are both exploited in this Seq2Seq model.
:params encoder_input: Encoder input.
:type encoder_input: LayerOutput
:params decoder_input: Decoder input.
:type decoder_input: LayerOutput
:params decoder_target: Decoder target.
:type decoder_target: LayerOutput
:params hidden_size: Hidden cell number, both in encoder and decoder rnn.
:type hidden_size: int
:params word_vec_dim: Word embedding size.
:type word_vec_dim: int
:param dict_size: Vocabulary size.
:type dict_size: int
:params is_generating: Whether for beam search inferencing (True) or
for training (False).
:type is_generating: bool
:params beam_size: Beam search width.
:type beam_size: int
:return: Cost layer if is_generating=False; Beam search layer if
is_generating = True.
:rtype: LayerOutput
"""
pass
```
- `bidirectional_gru_encoder` implements a single-layer bidirectional GRU encoder. It returns two outputs: the token-level encoding sequence (forward and backward concatenated) and the sentence-level encoding vector of the whole source sentence (backward only). The former is used to initialize the memory matrix of the decoder's attention; the latter initializes the decoder's state vector.
- `memory_enhanced_decoder` implements the GRU decoder enhanced with external memory. It uses the same `ExternalMemory` class for both kinds of external memory modules:
  - Unbounded external memory, i.e. the traditional attention mechanism: `ExternalMemory` with read-only on and interpolation off, whose storage matrix is initialized (`boot_layer`) from the encoder's first output. Its number of memory slots is therefore dynamic, determined by the number of source tokens:
```python
unbounded_memory = ExternalMemory(
name="unbounded_memory",
mem_slot_size=size * 2,
boot_layer=unbounded_memory_init,
initial_weight=unbounded_memory_weight_init,
readonly=True,
enable_interpolation=False)
```
  - Bounded external memory: `ExternalMemory` with read-only off and interpolation on. The encoder's first output is mean-pooled, expanded to a fixed sequence length, and perturbed with random noise (kept identical between training and inference) to initialize the storage matrix (`boot_layer`); the number of memory slots is therefore fixed:
```python
bounded_memory = ExternalMemory(
name="bounded_memory",
mem_slot_size=size,
boot_layer=bounded_memory_init,
initial_weight=bounded_memory_weight_init,
readonly=False,
enable_interpolation=True)
```
Note that in this implementation, the attention mechanism (the unbounded memory) and the Neural Turing Machine (the bounded memory) are realized by the same `ExternalMemory` class: the former **read-only**, the latter **readable and writable**. This is done only to unify our view of "attention" and "memory" and to keep the implementation concise and uniform; the attention mechanism could equally be implemented with `paddle.networks.simple_attention`.
- `memory_enhanced_seq2seq` defines the whole memory-enhanced sequence-to-sequence model and is the top-level configuration function. It first encodes the source sentence with `bidirectional_gru_encoder` and then decodes with `memory_enhanced_decoder`.
In addition, this implementation moves the `write` of `ExternalMemory` ahead of the `read` to work around a potential topology restriction (see this [Issue](https://github.com/PaddlePaddle/Paddle/issues/2061)); the two orders are essentially equivalent.
## Quick Start
### Custom Data
Data enters training through a parameter-free `reader()` iterator function, so we need to construct one `reader()` for the training data and one for the test data. A `reader()` uses `yield` to act as an iterator (so it can be consumed as `for instance in reader()`), for example:
```python
def reader():
for instance in data_list:
yield instance
```
Each sample yielded by `reader()` must be a triple: the encoder input token list (the source sequence, converted to IDs), the decoder input token list (the target sequence as IDs, shifted right by one position), and the decoder output token list (the target sequence as IDs); an illustrative instance follows this paragraph.
Tokenization and the construction of the ID dictionaries are left to the user.
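A minimal illustrative instance (the IDs below are hypothetical; `0`/`1` stand in for the start/end tokens):
```python
# (encoder input, decoder input, decoder output)
instance = ([2, 45, 7, 1],   # source sentence IDs, ending with <e>
            [0, 8, 31, 5],   # target IDs shifted right, starting with <s>
            [8, 31, 5, 1])   # target IDs, ending with <e>
```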
The PaddlePaddle API [paddle.paddle.wmt14](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/dataset/wmt14.py) provides, by default, a preprocessed, smaller [subset of the WMT14 English-French dataset](http://paddlepaddle.bj.bcebos.com/demo/wmt_shrinked_data/wmt14.tgz) (193319 training instances, 6003 test instances, dictionary size 30000), together with two reader creator functions:
```python
paddle.dataset.wmt14.train(dict_size)
paddle.dataset.wmt14.test(dict_size)
```
Calling either function returns the corresponding `reader()` for use by `paddle.trainer.SGD.train`. To use other data, build an analogous data creator following [paddle.paddle.wmt14](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/dataset/wmt14.py) and substitute its functions for `paddle.dataset.wmt14.train` and `paddle.dataset.wmt14.test`.
### Training
Run:
```bash
python mt_with_external_memory.py
```
or customize some of the parameters, e.g.:
```bash
python train.py \
--dict_size 30000 \
--word_vec_dim 512 \
--hidden_size 1024 \
--memory_slot_num 8 \
--use_gpu False \
--trainer_count 1 \
--num_passes 100 \
--batch_size 128 \
--memory_perturb_stddev 0.1
```
This runs the training script; model checkpoints are saved periodically under `./checkpoints`. The meaning of each option can be listed with:
```bash
python train.py --help
```
### Decoding
Run:
```bash
python infer.py
```
or customize some of the parameters, e.g.:
```bash
python infer.py \
--dict_size 30000 \
--word_vec_dim 512 \
--hidden_size 1024 \
--memory_slot_num 8 \
--use_gpu False \
--trainer_count 1 \
--memory_perturb_stddev 0.1 \
--infer_data_num 10 \
--model_filepath checkpoints/params.latest.tar.gz \
--beam_size 3
```
This runs the decoding script and produces sample translations. The meaning of each option can be listed with:
```bash
python infer.py --help
```
## Other Discussions
#### Differences from the implementation in the paper \[[2](#References)\]
The differences are as follows:
1. The content-based addressing formula differs: the paper uses $a = v^T(WM^B + Us)$, while this example uses $a = v^T \textrm{tanh}(WM^B + Us)$, to stay consistent with the attention addressing in \[[3](#References)\].
2. The initialization of the bounded external memory differs: the paper uses $M^B = \sigma(W\sum_{i=0}^{n}h_i)/n + V$ with $V_{i,j} \sim \mathcal{N}(0, 0.1)$, while this example uses $M^B = \sigma(\frac{1}{n}W\sum_{i=0}^{n}h_i) + V$.
3. The read/write addressing logic differs: in the paper, reads and writes share one addressing strength, which amounts to a weight-tying regularization; this example does not apply that regularization and addresses reads and writes independently.
4. The order of reads and writes within a time step differs: the paper reads first and then writes, while this example writes first and then reads; the two are essentially equivalent.
## References
1. Alex Graves, Greg Wayne, Ivo Danihelka, [Neural Turing Machines](https://arxiv.org/abs/1410.5401). arXiv preprint arXiv:1410.5401, 2014.
2. Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu, [Memory-enhanced Decoder for Neural Machine Translation](https://arxiv.org/abs/1606.02003). In Proceedings of EMNLP, 2016, pages 278–286.
3. Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473). arXiv preprint arXiv:1409.0473, 2014.
"""
Contains data utilities.
"""
def reader_append_wrapper(reader, append_tuple):
"""
Data reader wrapper for appending extra data to an existing reader.
"""
def new_reader():
for ins in reader():
yield ins + append_tuple
return new_reader
"""
External neural memory class.
"""
import paddle.v2 as paddle
class ExternalMemory(object):
"""External neural memory class.
A simplified Neural Turing Machines (NTM) with only content-based
addressing (including content addressing and interpolation, but excluding
convolutional shift and sharpening). It serves as an external differential
memory bank, with differential write/read head controllers to store
and read information dynamically. Simple feedforward networks are
used as the write/read head controllers.
The ExternalMemory class could be utilized by many neural network structures
to easily expand their memory bandwidth and accomplish a long-term memory
handling. Besides, some existing mechanism can be realized directly with
the ExternalMemory class, e.g. the attention mechanism in Seq2Seq (i.e. an
unbounded external memory).
Besides, the ExternalMemory class must be used together with
paddle.layer.recurrent_group (within its step function). It can never be
used in a standalone manner.
For more details, please refer to
`Neural Turing Machines <https://arxiv.org/abs/1410.5401>`_.
:param name: Memory name.
:type name: basestring
:param mem_slot_size: Size of memory slot/vector.
:type mem_slot_size: int
:param boot_layer: Boot layer for initializing the external memory. The
sequence layer has sequence length indicating the number
of memory slots, and size as memory slot size.
:type boot_layer: LayerOutput
:param initial_weight: Initializer for addressing weights.
:type initial_weight: LayerOutput
:param readonly: If true, the memory is read-only, and write function cannot
be called. Default is false.
:type readonly: bool
:param enable_interpolation: If set true, the read/write addressing weights
will be interpolated with the weights in the
last step, with the affine coefficients being
a learnable gate function.
:type enable_interpolation: bool
"""
def __init__(self,
name,
mem_slot_size,
boot_layer,
initial_weight,
readonly=False,
enable_interpolation=True):
self.name = name
self.mem_slot_size = mem_slot_size
self.readonly = readonly
self.enable_interpolation = enable_interpolation
self.external_memory = paddle.layer.memory(
name=self.name, size=self.mem_slot_size, boot_layer=boot_layer)
self.initial_weight = initial_weight
# set memory to constant when readonly=True
if self.readonly:
self.updated_external_memory = paddle.layer.mixed(
name=self.name,
input=[
paddle.layer.identity_projection(input=self.external_memory)
],
size=self.mem_slot_size)
def _content_addressing(self, key_vector):
"""Get write/read head's addressing weights via content-based addressing.
"""
# content-based addressing: a=tanh(W*M + U*key)
key_projection = paddle.layer.fc(
input=key_vector,
size=self.mem_slot_size,
act=paddle.activation.Linear(),
bias_attr=False)
key_proj_expanded = paddle.layer.expand(
input=key_projection, expand_as=self.external_memory)
memory_projection = paddle.layer.fc(
input=self.external_memory,
size=self.mem_slot_size,
act=paddle.activation.Linear(),
bias_attr=False)
merged_projection = paddle.layer.addto(
input=[key_proj_expanded, memory_projection],
act=paddle.activation.Tanh())
# softmax addressing weight: w=softmax(v^T a)
addressing_weight = paddle.layer.fc(
input=merged_projection,
size=1,
act=paddle.activation.SequenceSoftmax(),
bias_attr=False)
return addressing_weight
def _interpolation(self, head_name, key_vector, addressing_weight):
"""Interpolate between previous and current addressing weights.
"""
# prepare interpolation scalar gate: g=sigmoid(W*key)
gate = paddle.layer.fc(
input=key_vector,
size=1,
act=paddle.activation.Sigmoid(),
bias_attr=False)
# interpolation: w_t = g*w_t+(1-g)*w_{t-1}
last_addressing_weight = paddle.layer.memory(
name=self.name + "_addressing_weight_" + head_name,
size=1,
boot_layer=self.initial_weight)
interpolated_weight = paddle.layer.interpolation(
name=self.name + "_addressing_weight_" + head_name,
input=[addressing_weight, last_addressing_weight],
weight=paddle.layer.expand(input=gate, expand_as=addressing_weight))
return interpolated_weight
def _get_addressing_weight(self, head_name, key_vector):
"""Get final addressing weights for read/write heads, including content
addressing and interpolation.
"""
# current content-based addressing
addressing_weight = self._content_addressing(key_vector)
# interpolation with previous addressing weight
if self.enable_interpolation:
return self._interpolation(head_name, key_vector, addressing_weight)
else:
return addressing_weight
def write(self, write_key):
"""Write onto the external memory.
It cannot be called if "readonly" set True.
:param write_key: Key vector for write heads to generate writing
content and addressing signals.
:type write_key: LayerOutput
"""
# check readonly
if self.readonly:
raise ValueError("ExternalMemory with readonly=True cannot write.")
# get addressing weight for write head
write_weight = self._get_addressing_weight("write_head", write_key)
# prepare add_vector and erase_vector
erase_vector = paddle.layer.fc(
input=write_key,
size=self.mem_slot_size,
act=paddle.activation.Sigmoid(),
bias_attr=False)
add_vector = paddle.layer.fc(
input=write_key,
size=self.mem_slot_size,
act=paddle.activation.Sigmoid(),
bias_attr=False)
erase_vector_expand = paddle.layer.expand(
input=erase_vector, expand_as=self.external_memory)
add_vector_expand = paddle.layer.expand(
input=add_vector, expand_as=self.external_memory)
# prepare scaled add part and erase part
scaled_erase_vector_expand = paddle.layer.scaling(
weight=write_weight, input=erase_vector_expand)
erase_memory_part = paddle.layer.mixed(
input=paddle.layer.dotmul_operator(
a=self.external_memory,
b=scaled_erase_vector_expand,
scale=-1.0))
add_memory_part = paddle.layer.scaling(
weight=write_weight, input=add_vector_expand)
# update external memory
self.updated_external_memory = paddle.layer.addto(
input=[self.external_memory, add_memory_part, erase_memory_part],
name=self.name)
def read(self, read_key):
"""Read from the external memory.
:param read_key: Key vector for read head to generate addressing
signals.
:type read_key: LayerOutput
:return: Content (vector) read from external memory.
:rtype: LayerOutput
"""
# get addressing weight for read head
read_weight = self._get_addressing_weight("read_head", read_key)
# read content from external memory
scaled = paddle.layer.scaling(
weight=read_weight, input=self.updated_external_memory)
return paddle.layer.pooling(
input=scaled, pooling_type=paddle.pooling.Sum())
"""
Contains infering script for machine translation with external memory.
"""
import distutils.util
import argparse
import gzip
import random
import paddle.v2 as paddle
from external_memory import ExternalMemory
from model import memory_enhanced_seq2seq
from data_utils import reader_append_wrapper
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--dict_size",
default=30000,
type=int,
help="Vocabulary size. (default: %(default)s)")
parser.add_argument(
"--word_vec_dim",
default=512,
type=int,
help="Word embedding size. (default: %(default)s)")
parser.add_argument(
"--hidden_size",
default=1024,
type=int,
help="Hidden cell number in RNN. (default: %(default)s)")
parser.add_argument(
"--memory_slot_num",
default=8,
type=int,
help="External memory slot number. (default: %(default)s)")
parser.add_argument(
"--beam_size",
default=3,
type=int,
help="Beam search width. (default: %(default)s)")
parser.add_argument(
"--use_gpu",
default=False,
type=distutils.util.strtobool,
help="Use gpu or not. (default: %(default)s)")
parser.add_argument(
"--trainer_count",
default=1,
type=int,
help="Trainer number. (default: %(default)s)")
parser.add_argument(
"--batch_size",
default=5,
type=int,
help="Batch size. (default: %(default)s)")
parser.add_argument(
"--infer_data_num",
default=3,
type=int,
help="Instance num to infer. (default: %(default)s)")
parser.add_argument(
"--model_filepath",
default="checkpoints/params.latest.tar.gz",
type=str,
help="Model filepath. (default: %(default)s)")
parser.add_argument(
"--memory_perturb_stddev",
default=0.1,
type=float,
help="Memory perturb stddev for memory initialization."
"(default: %(default)s)")
args = parser.parse_args()
def parse_beam_search_result(beam_result, dictionary):
"""
Beam search result parser.
"""
sentence_list = []
sentence = []
for word in beam_result[1]:
if word != -1:
sentence.append(word)
else:
sentence_list.append(
' '.join([dictionary.get(word) for word in sentence[1:]]))
sentence = []
beam_probs = beam_result[0]
beam_size = len(beam_probs[0])
beam_sentences = [
sentence_list[i:i + beam_size]
for i in range(0, len(sentence_list), beam_size)
]
return beam_probs, beam_sentences
def infer():
"""
For inferencing.
"""
# create network config
source_words = paddle.layer.data(
name="source_words",
type=paddle.data_type.integer_value_sequence(args.dict_size))
beam_gen = memory_enhanced_seq2seq(
encoder_input=source_words,
decoder_input=None,
decoder_target=None,
hidden_size=args.hidden_size,
word_vec_dim=args.word_vec_dim,
dict_size=args.dict_size,
is_generating=True,
beam_size=args.beam_size)
# load parameters
parameters = paddle.parameters.Parameters.from_tar(
gzip.open(args.model_filepath))
# prepare infer data
infer_data = []
random.seed(0) # for keeping consistency across multiple runs
bounded_memory_perturbation = [[
random.gauss(0, args.memory_perturb_stddev) for i in xrange(args.hidden_size)
] for j in xrange(args.memory_slot_num)]
test_append_reader = reader_append_wrapper(
reader=paddle.dataset.wmt14.test(args.dict_size),
append_tuple=(bounded_memory_perturbation, ))
for i, item in enumerate(test_append_reader()):
if i < args.infer_data_num:
infer_data.append((item[0], item[3], ))
# run inference
beam_result = paddle.infer(
output_layer=beam_gen,
parameters=parameters,
input=infer_data,
field=['prob', 'id'])
# parse beam result and print
source_dict, target_dict = paddle.dataset.wmt14.get_dict(args.dict_size)
beam_probs, beam_sentences = parse_beam_search_result(beam_result,
target_dict)
for i in xrange(args.infer_data_num):
print "\n***************************************************\n"
print "src:", ' '.join(
[source_dict.get(word) for word in infer_data[i][0]]), "\n"
for j in xrange(args.beam_size):
print "prob = %f : %s" % (beam_probs[i][j], beam_sentences[i][j])
def main():
paddle.init(use_gpu=args.use_gpu, trainer_count=args.trainer_count)
infer()
if __name__ == '__main__':
main()
"""
Contains model configuration for external-memory-enhanced seq2seq.
The "external memory" refers to two types of memories.
- Unbounded memory: i.e. vanilla attention mechanism in Seq2Seq.
- Bounded memory: i.e. external memory in NTM.
Both types of external memories are exploited to enhance the vanilla
Seq2Seq neural machine translation.
The implementation primarily follows the paper
`Memory-enhanced Decoder for Neural Machine Translation
<https://arxiv.org/abs/1606.02003>`_,
with some minor differences (will be listed in README.md).
For details about "external memory", please also refer to
`Neural Turing Machines <https://arxiv.org/abs/1410.5401>`_.
"""
import paddle.v2 as paddle
from external_memory import ExternalMemory
def bidirectional_gru_encoder(input, size, word_vec_dim):
"""Bidirectional GRU encoder.
:params size: Hidden cell number in encoder rnn.
:type size: int
:params word_vec_dim: Word embedding size.
:type word_vec_dim: int
:return: Tuple of 1. concatenated forward and backward hidden sequence.
2. last state of backward rnn.
:rtype: tuple of LayerOutput
"""
# token embedding
embeddings = paddle.layer.embedding(input=input, size=word_vec_dim)
# token-level forward and backward encoding for attentions
forward = paddle.networks.simple_gru(
input=embeddings, size=size, reverse=False)
backward = paddle.networks.simple_gru(
input=embeddings, size=size, reverse=True)
forward_backward = paddle.layer.concat(input=[forward, backward])
# sequence-level encoding
backward_first = paddle.layer.first_seq(input=backward)
return forward_backward, backward_first
def memory_enhanced_decoder(input, target, initial_state, source_context, size,
word_vec_dim, dict_size, is_generating, beam_size):
"""GRU sequence decoder enhanced with external memory.
The "external memory" refers to two types of memories.
- Unbounded memory: i.e. attention mechanism in Seq2Seq.
- Bounded memory: i.e. external memory in NTM.
Both types of external memories can be implemented with
ExternalMemory class, and are both exploited in this enhanced RNN decoder.
The vanilla RNN/LSTM/GRU also has a narrow memory mechanism, namely the
hidden state vector (or cell state in LSTM) carrying information through
a span of sequence time, which is a successful design enriching the model
with the capability to "remember" things in the long run. However, such a
vector state is somewhat limited to a very narrow memory bandwidth. External
memory introduced here could easily increase the memory capacity with linear
complexity cost (rather than quadratic for vector state).
This enhanced decoder expands its "memory passage" through two
ExternalMemory objects:
- Bounded memory for handling long-term information exchange within decoder
itself. A direct expansion of traditional "vector" state.
- Unbounded memory for handling source language's token-wise information.
Exactly the attention mechanism over Seq2Seq.
Notice that we take the attention mechanism as a particular form of external
memory, with read-only memory bank initialized with encoder states, and a
read head with content-based addressing (attention). From this view point,
we arrive at a better understanding of attention mechanism itself and other
external memory, and a concise and unified implementation for them.
For more details about external memory, please refer to
`Neural Turing Machines <https://arxiv.org/abs/1410.5401>`_.
For more details about this memory-enhanced decoder, please
refer to `Memory-enhanced Decoder for Neural Machine Translation
<https://arxiv.org/abs/1606.02003>`_. This implementation is highly
correlated to this paper, but with minor differences (e.g. put "write"
before "read" to bypass a potential bug in V2 APIs. See
(`issue <https://github.com/PaddlePaddle/Paddle/issues/2061>`_).
:params input: Decoder input.
:type input: LayerOutput
:params target: Decoder target.
:type target: LayerOutput
:params initial_state: Initial hidden state.
:type initial_state: LayerOutput
    :params source_context: Group of context hidden states for each token in the
        source sentence, for the attention mechanism.
:type source_context: LayerOutput
:params size: Hidden cell number in decoder rnn.
:type size: int
:params word_vec_dim: Word embedding size.
:type word_vec_dim: int
:param dict_size: Vocabulary size.
:type dict_size: int
:params is_generating: Whether for beam search inferencing (True) or
for training (False).
:type is_generating: bool
:params beam_size: Beam search width.
:type beam_size: int
:return: Cost layer if is_generating=False; Beam search layer if
is_generating = True.
:rtype: LayerOutput
"""
# prepare initial bounded and unbounded memory
bounded_memory_slot_init = paddle.layer.fc(
input=paddle.layer.pooling(
input=source_context, pooling_type=paddle.pooling.Avg()),
size=size,
act=paddle.activation.Sigmoid())
bounded_memory_perturbation = paddle.layer.data(
name='bounded_memory_perturbation',
type=paddle.data_type.dense_vector_sequence(size))
bounded_memory_init = paddle.layer.addto(
input=[
paddle.layer.expand(
input=bounded_memory_slot_init,
expand_as=bounded_memory_perturbation),
bounded_memory_perturbation
],
act=paddle.activation.Linear())
bounded_memory_weight_init = paddle.layer.slope_intercept(
input=paddle.layer.fc(input=bounded_memory_init, size=1),
slope=0.0,
intercept=0.0)
unbounded_memory_init = source_context
unbounded_memory_weight_init = paddle.layer.slope_intercept(
input=paddle.layer.fc(input=unbounded_memory_init, size=1),
slope=0.0,
intercept=0.0)
    # prepare step function for recurrent group
def recurrent_decoder_step(cur_embedding):
# create hidden state, bounded and unbounded memory.
state = paddle.layer.memory(
name="gru_decoder", size=size, boot_layer=initial_state)
bounded_memory = ExternalMemory(
name="bounded_memory",
mem_slot_size=size,
boot_layer=bounded_memory_init,
initial_weight=bounded_memory_weight_init,
readonly=False,
enable_interpolation=True)
unbounded_memory = ExternalMemory(
name="unbounded_memory",
mem_slot_size=size * 2,
boot_layer=unbounded_memory_init,
initial_weight=unbounded_memory_weight_init,
readonly=True,
enable_interpolation=False)
# write bounded memory
bounded_memory.write(state)
# read bounded memory
bounded_memory_read = bounded_memory.read(state)
# prepare key for unbounded memory
key_for_unbounded_memory = paddle.layer.fc(
input=[bounded_memory_read, cur_embedding],
size=size,
act=paddle.activation.Tanh(),
bias_attr=False)
# read unbounded memory (i.e. attention mechanism)
context = unbounded_memory.read(key_for_unbounded_memory)
# gated recurrent unit
gru_inputs = paddle.layer.fc(
input=[context, cur_embedding, bounded_memory_read],
size=size * 3,
act=paddle.activation.Linear(),
bias_attr=False)
gru_output = paddle.layer.gru_step(
name="gru_decoder", input=gru_inputs, output_mem=state, size=size)
# step output
return paddle.layer.fc(
input=[gru_output, context, cur_embedding],
size=dict_size,
act=paddle.activation.Softmax(),
bias_attr=True)
if not is_generating:
target_embeddings = paddle.layer.embedding(
input=input,
size=word_vec_dim,
param_attr=paddle.attr.ParamAttr(name="_decoder_word_embedding"))
decoder_result = paddle.layer.recurrent_group(
name="decoder_group",
step=recurrent_decoder_step,
input=[target_embeddings])
cost = paddle.layer.classification_cost(
input=decoder_result, label=target)
return cost
else:
target_embeddings = paddle.layer.GeneratedInput(
size=dict_size,
embedding_name="_decoder_word_embedding",
embedding_size=word_vec_dim)
beam_gen = paddle.layer.beam_search(
name="decoder_group",
step=recurrent_decoder_step,
input=[target_embeddings],
bos_id=0,
eos_id=1,
beam_size=beam_size,
max_length=100)
return beam_gen
def memory_enhanced_seq2seq(encoder_input, decoder_input, decoder_target,
hidden_size, word_vec_dim, dict_size, is_generating,
beam_size):
"""Seq2Seq Model enhanced with external memory.
The "external memory" refers to two types of memories.
- Unbounded memory: i.e. attention mechanism in Seq2Seq.
- Bounded memory: i.e. external memory in NTM.
Both types of external memories can be implemented with
ExternalMemory class, and are both exploited in this Seq2Seq model.
Please refer to the function comments of memory_enhanced_decoder(...).
For more details about external memory, please refer to
`Neural Turing Machines <https://arxiv.org/abs/1410.5401>`_.
For more details about this memory-enhanced Seq2Seq, please
refer to `Memory-enhanced Decoder for Neural Machine Translation
<https://arxiv.org/abs/1606.02003>`_.
:params encoder_input: Encoder input.
:type encoder_input: LayerOutput
:params decoder_input: Decoder input.
:type decoder_input: LayerOutput
:params decoder_target: Decoder target.
:type decoder_target: LayerOutput
:params hidden_size: Hidden cell number, both in encoder and decoder rnn.
:type hidden_size: int
:params word_vec_dim: Word embedding size.
:type word_vec_dim: int
:param dict_size: Vocabulary size.
:type dict_size: int
:params is_generating: Whether for beam search inferencing (True) or
for training (False).
:type is_generating: bool
:params beam_size: Beam search width.
:type beam_size: int
:return: Cost layer if is_generating=False; Beam search layer if
is_generating = True.
:rtype: LayerOutput
"""
# encoder
context_encodings, sequence_encoding = bidirectional_gru_encoder(
input=encoder_input, size=hidden_size, word_vec_dim=word_vec_dim)
# decoder
return memory_enhanced_decoder(
input=decoder_input,
target=decoder_target,
initial_state=sequence_encoding,
source_context=context_encodings,
size=hidden_size,
word_vec_dim=word_vec_dim,
dict_size=dict_size,
is_generating=is_generating,
beam_size=beam_size)
"""
Contains training script for machine translation with external memory.
"""
import argparse
import sys
import os
import gzip
import distutils.util
import random
import paddle.v2 as paddle
from external_memory import ExternalMemory
from model import memory_enhanced_seq2seq
from data_utils import reader_append_wrapper
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--dict_size",
default=30000,
type=int,
help="Vocabulary size. (default: %(default)s)")
parser.add_argument(
"--word_vec_dim",
default=512,
type=int,
help="Word embedding size. (default: %(default)s)")
parser.add_argument(
"--hidden_size",
default=1024,
type=int,
help="Hidden cell number in RNN. (default: %(default)s)")
parser.add_argument(
"--memory_slot_num",
default=8,
type=int,
help="External memory slot number. (default: %(default)s)")
parser.add_argument(
"--use_gpu",
default=False,
type=distutils.util.strtobool,
help="Use gpu or not. (default: %(default)s)")
parser.add_argument(
"--trainer_count",
default=1,
type=int,
help="Trainer number. (default: %(default)s)")
parser.add_argument(
"--num_passes",
default=100,
type=int,
help="Training epochs. (default: %(default)s)")
parser.add_argument(
"--batch_size",
default=5,
type=int,
help="Batch size. (default: %(default)s)")
parser.add_argument(
"--memory_perturb_stddev",
default=0.1,
type=float,
help="Memory perturb stddev for memory initialization."
"(default: %(default)s)")
args = parser.parse_args()
def train():
"""
    Train the memory-enhanced seq2seq model.
"""
# create optimizer
optimizer = paddle.optimizer.Adam(
learning_rate=5e-5,
gradient_clipping_threshold=5,
regularization=paddle.optimizer.L2Regularization(rate=8e-4))
# create network config
source_words = paddle.layer.data(
name="source_words",
type=paddle.data_type.integer_value_sequence(args.dict_size))
target_words = paddle.layer.data(
name="target_words",
type=paddle.data_type.integer_value_sequence(args.dict_size))
target_next_words = paddle.layer.data(
name='target_next_words',
type=paddle.data_type.integer_value_sequence(args.dict_size))
cost = memory_enhanced_seq2seq(
encoder_input=source_words,
decoder_input=target_words,
decoder_target=target_next_words,
hidden_size=args.hidden_size,
word_vec_dim=args.word_vec_dim,
dict_size=args.dict_size,
is_generating=False,
beam_size=None)
# create parameters and trainer
parameters = paddle.parameters.create(cost)
trainer = paddle.trainer.SGD(
cost=cost, parameters=parameters, update_equation=optimizer)
# create data readers
feeding = {
"source_words": 0,
"target_words": 1,
"target_next_words": 2,
"bounded_memory_perturbation": 3
}
    random.seed(0)  # for keeping consistency across multiple runs
bounded_memory_perturbation = [[
random.gauss(0, args.memory_perturb_stddev)
for i in xrange(args.hidden_size)
] for j in xrange(args.memory_slot_num)]
train_append_reader = reader_append_wrapper(
reader=paddle.dataset.wmt14.train(args.dict_size),
append_tuple=(bounded_memory_perturbation, ))
train_batch_reader = paddle.batch(
reader=paddle.reader.shuffle(reader=train_append_reader, buf_size=8192),
batch_size=args.batch_size)
test_append_reader = reader_append_wrapper(
reader=paddle.dataset.wmt14.test(args.dict_size),
append_tuple=(bounded_memory_perturbation, ))
test_batch_reader = paddle.batch(
reader=paddle.reader.shuffle(reader=test_append_reader, buf_size=8192),
batch_size=args.batch_size)
# create event handler
def event_handler(event):
if isinstance(event, paddle.event.EndIteration):
if event.batch_id % 10 == 0:
print "Pass: %d, Batch: %d, TrainCost: %f, %s" % (
event.pass_id, event.batch_id, event.cost, event.metrics)
with gzip.open("checkpoints/params.latest.tar.gz", 'w') as f:
parameters.to_tar(f)
else:
sys.stdout.write('.')
sys.stdout.flush()
if isinstance(event, paddle.event.EndPass):
result = trainer.test(reader=test_batch_reader, feeding=feeding)
print "Pass: %d, TestCost: %f, %s" % (event.pass_id, result.cost,
result.metrics)
with gzip.open("checkpoints/params.pass-%d.tar.gz" % event.pass_id,
'w') as f:
parameters.to_tar(f)
# run train
if not os.path.exists('checkpoints'):
os.mkdir('checkpoints')
trainer.train(
reader=train_batch_reader,
event_handler=event_handler,
num_passes=args.num_passes,
feeding=feeding)
def main():
paddle.init(use_gpu=args.use_gpu, trainer_count=args.trainer_count)
train()
if __name__ == '__main__':
main()
# Neural Machine Translation Model
## Background
Machine translation uses computers to convert a source-language expression into an equivalent target-language expression. It is an important research direction in natural language processing with broad application demand, and its methods have evolved continuously. Traditional approaches are based mainly on rules or statistical models, requiring manually specified translation rules or hand-designed linguistic features, so their quality depends on how well people understand the source and target languages. In recent years, the rise of deep learning has made automatic feature learning possible. Deep learning first succeeded in image and speech recognition, and then set off a wave of research in natural language processing tasks such as machine translation. Deep learning models for machine translation directly learn the mapping from the source language to the target language, greatly reducing human involvement while significantly improving translation quality. This example shows how to build an end-to-end neural machine translation (NMT) model with a recurrent neural network (RNN) in PaddlePaddle.
## Model Overview
An RNN-based NMT model follows the encoder-decoder architecture, in which both the encoder and the decoder are recurrent neural networks. Unrolling the two RNNs along the time steps yields the model structure below:
<p align="center"><img src="images/encoder-decoder.png" width = "90%" align="center"/><br/>Figure 1. Encoder-decoder framework</p>
The input and output of an NMT model can be characters, words, or phrases. Without loss of generality, this example uses a word-based model to explain how the encoder and the decoder work:
- **Encoder**: encodes the source sentence into a vector that the decoder takes as input. The raw input is a sequence of word `id`s $w = {w_1, w_2, ..., w_T}$ in one-hot representation. To reduce the input dimensionality and capture semantic relations between words, the model learns a word embedding (i.e. a word vector) for each one-hot encoded word; for a detailed introduction to word vectors, see the [word vector](https://github.com/PaddlePaddle/book/blob/develop/04.word2vec/README.cn.md) chapter of PaddleBook. Finally, the RNN unit processes the input word by word to obtain the encoding vector of the whole sentence.
- **Decoder**: takes the encoder's output and decodes the target-language sequence $u = {u_1, u_2, ..., u_{T'}}$ word by word. At each time step, the RNN unit outputs a hidden vector, from which a `Softmax` normalization computes the conditional probability of the next target word, i.e. $P(u_t | w, u_1, u_2, ..., u_{t-1})$. Given the input $w$, the probability of its translation being $u$ is therefore
$$ P(u_1,u_2,...,u_{T'} | w) = \prod_{t=1}^{T'}p(u_t|w, u_1, u_2, \ldots, u_{t-1})$$
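For a three-word translation, this factorization expands to:
$$ P(u_1,u_2,u_3 | w) = p(u_1|w)\,p(u_2|w, u_1)\,p(u_3|w, u_1, u_2) $$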
Consider Chinese-to-English translation: the source language is Chinese and the target language is English. Below is a source sentence after word segmentation:
```
祝愿 祖国 繁荣 昌盛
```
Its corresponding English translation in the target language is:
```
Wish motherland rich and powerful
```
In the preprocessing stage, parallel corpus data of the source and target languages is prepared, and a dictionary is built for each language. In the training stage, the model is trained on such paired parallel sentences. In the testing stage, the model takes a Chinese sentence as input and automatically generates its English translation, which is then evaluated against reference translations. In machine translation, BLEU is one of the most popular automatic evaluation metrics.
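For reference (the formula below is the standard corpus-level BLEU and is not part of this example's code), BLEU combines modified $n$-gram precisions $p_n$ (typically up to $N=4$ with uniform weights $w_n = 1/N$) with a brevity penalty $\mathrm{BP}$ that penalizes candidates shorter than the reference:
$$ \mathrm{BLEU} = \mathrm{BP} \cdot \exp\Big(\sum_{n=1}^{N} w_n \log p_n\Big), \qquad \mathrm{BP} = \min\big(1,\; e^{1 - r/c}\big) $$
where $c$ is the candidate length and $r$ the reference length.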
### RNN Unit
The original RNN stores the hidden state in a single vector, but an RNN of this structure is prone to gradient vanishing during training and has difficulty modeling long-range dependencies. Improved units were therefore proposed, namely LSTM\[[1](#references)] and GRU\[[2](#references)]; both use gates to control what should be remembered and what forgotten, which largely solves the long-term dependency problem of sequence data. Taking the GRU used in this example, its basic structure is as follows:
<p align="center">
<img src="images/gru.png" width = "90%" align="center"/><br/>
Figure 2. GRU unit
</p>
Besides the hidden state, the GRU contains two gates: the update gate and the reset gate. At every time step, the gates and the hidden state are updated according to the formulas on the right side of Figure 2; these two gates determine how the state is updated.
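The figure's formulas are not reproduced in the text; for completeness, this is the standard GRU formulation in one common convention (bias terms omitted; some presentations swap the roles of $z_t$ and $1 - z_t$):
$$ z_t = \sigma(W_z x_t + U_z h_{t-1}), \qquad r_t = \sigma(W_r x_t + U_r h_{t-1}) $$
$$ \tilde{h}_t = \tanh\big(W x_t + U (r_t \odot h_{t-1})\big), \qquad h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t $$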
### Bidirectional Encoder
In the basic model above, when the encoder processes the input sentence sequentially, the state at the current time step contains only information from past inputs, with no sequence information from future time steps. For sequence modeling, however, future context also carries important information. A bidirectional encoder, shown in Figure 3, can capture the context of the current input from both directions:
<p align="center">
<img src="images/bidirectional-encoder.png" width = "90%" align="center"/><br/>
Figure 3. Bidirectional encoder structure
</p>
The bidirectional encoder\[[3](#references)\] shown in Figure 3 consists of two independent RNNs that encode the input sequence in the forward and backward directions respectively; the outputs of the two RNNs are then concatenated as the final encoding output.
In PaddlePaddle, a bidirectional encoder can be built conveniently with the relevant APIs:
```python
src_word_id = paddle.layer.data(
name='source_language_word',
type=paddle.data_type.integer_value_sequence(source_dict_dim))
# source embedding
src_embedding = paddle.layer.embedding(
input=src_word_id, size=word_vector_dim)
# bidirectional GRU as encoder
encoded_vector = paddle.networks.bidirectional_gru(
input=src_embedding,
size=encoder_size,
fwd_act=paddle.activation.Tanh(),
fwd_gate_act=paddle.activation.Sigmoid(),
bwd_act=paddle.activation.Tanh(),
bwd_gate_act=paddle.activation.Sigmoid(),
return_seq=True)
```
### Beam Search Algorithm
In the generation stage after training, the model decodes the corresponding target-language translation from a source-language input. A straightforward decoding strategy is to take the word with the highest conditional probability at each step as that step's output. But a locally optimal choice does not necessarily lead to the global optimum; that is, greedy decoding cannot guarantee that the resulting complete sentence has the highest probability, while searching the full solution space is prohibitively expensive. Beam search is commonly used to solve this problem. Beam search is a heuristic graph-search algorithm that controls the search width with a parameter $k$; its main steps are as follows (see the sketch after this list):
**1**. During decoding, always maintain $k$ decoded sub-sequences;
**2**. At an intermediate step $t$, for each of the $k$ sub-sequences, compute the probabilities of the next word and take the $k$ most probable words, combining them into $k^2$ new sub-sequences;
**3**. Keep the $k$ most probable of the combined sequences from step **2** to replace the original sub-sequences;
**4**. Iterate until $k$ complete sentences are obtained as candidates for the translation result.
For more on beam search, see the [beam search](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.cn.md#柱搜索算法) section of the [machine translation](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.cn.md) chapter in PaddleBook.
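The following is a minimal, framework-independent sketch of these steps. `step_probs` is a hypothetical callback returning the next-word distribution for a prefix, and `bos_id`/`eos_id` are assumed start/end token ids; none of these names come from this example's code.
```python
import heapq
import math

def beam_search(step_probs, k, max_len, bos_id=0, eos_id=1):
    """Return the k most probable sequences under a step-wise model.

    step_probs(prefix) -> iterable of (word_id, prob) for the next word.
    """
    beams = [(0.0, [bos_id])]  # k live hypotheses: (log prob, sequence)
    finished = []
    for _ in range(max_len):
        candidates = []
        for log_p, seq in beams:
            # expand each live hypothesis with its k most probable next words
            top_k = heapq.nlargest(k, step_probs(seq), key=lambda x: x[1])
            for word, p in top_k:
                candidates.append((log_p + math.log(p), seq + [word]))
        # of the up-to-k*k combined sequences, keep only the k best
        beams = []
        for log_p, seq in heapq.nlargest(k, candidates):
            if seq[-1] == eos_id:
                finished.append((log_p, seq))  # a complete sentence
            else:
                beams.append((log_p, seq))
        if len(finished) >= k or not beams:
            break
    return heapq.nlargest(k, finished or beams)
```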
### Decoder without Attention Mechanism
- The relevant sections of the [machine translation](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.cn.md) chapter in PaddleBook have already introduced the encoder-decoder structure with an attention mechanism; this example presents the encoder-decoder structure without attention. For more on the attention mechanism, readers may refer to PaddleBook and reference \[[3](#references)].
For popular RNN units, PaddlePaddle already provides good implementations that can be called directly. To perform custom operations at every time step of an RNN, use `recurrent_layer_group` in PaddlePaddle: first define a single-step logic function, then use `recurrent_group()` to apply that function repeatedly over the whole sequence. The attention-free decoder in this example is implemented with `recurrent_layer_group`; its single-step logic function `gru_decoder_without_attention()` is shown below:
```python
# the initialization state for decoder GRU
encoder_last = paddle.layer.last_seq(input=encoded_vector)
encoder_last_projected = paddle.layer.fc(
size=decoder_size, act=paddle.activation.Tanh(), input=encoder_last)
# the step function for decoder GRU
def gru_decoder_without_attention(enc_vec, current_word):
'''
Step function for gru decoder
:param enc_vec: encoded vector of source language
:type enc_vec: layer object
:param current_word: current input of decoder
:type current_word: layer object
'''
decoder_mem = paddle.layer.memory(
name="gru_decoder",
size=decoder_size,
boot_layer=encoder_last_projected)
context = paddle.layer.last_seq(input=enc_vec)
decoder_inputs = paddle.layer.fc(
size=decoder_size * 3, input=[context, current_word])
gru_step = paddle.layer.gru_step(
name="gru_decoder",
act=paddle.activation.Tanh(),
gate_act=paddle.activation.Sigmoid(),
input=decoder_inputs,
output_mem=decoder_mem,
size=decoder_size)
out = paddle.layer.fc(
size=target_dict_dim,
bias_attr=True,
act=paddle.activation.Softmax(),
input=gru_step)
return out
```
The decoder behaves quite differently in the training and testing stages:
- **Training stage**: the word embeddings of the target translation, `trg_embedding`, are passed as a parameter to the step function `gru_decoder_without_attention()`; `recurrent_group()` calls the step function over the sequence, and finally the cost between the target translation and the actual decoding is computed and returned;
- **Testing stage**: the decoder predicts the next word from the last generated word; `GeneratedInput()` automatically fetches the embeddings of the $k$ most probable words predicted by the model and passes them to the step function, and `beam_search()` invokes `gru_decoder_without_attention()` to perform the beam search and return the results.
The training and generation logic is implemented in the following `if-else` branches, respectively:
```python
group_input1 = paddle.layer.StaticInput(input=encoded_vector)
group_inputs = [group_input1]
decoder_group_name = "decoder_group"
if is_generating:
trg_embedding = paddle.layer.GeneratedInput(
size=target_dict_dim,
embedding_name="_target_language_embedding",
embedding_size=word_vector_dim)
group_inputs.append(trg_embedding)
beam_gen = paddle.layer.beam_search(
name=decoder_group_name,
step=gru_decoder_without_attention,
input=group_inputs,
bos_id=0,
eos_id=1,
beam_size=beam_size,
max_length=max_length)
return beam_gen
else:
trg_embedding = paddle.layer.embedding(
input=paddle.layer.data(
name="target_language_word",
type=paddle.data_type.integer_value_sequence(target_dict_dim)),
size=word_vector_dim,
param_attr=paddle.attr.ParamAttr(name="_target_language_embedding"))
group_inputs.append(trg_embedding)
decoder = paddle.layer.recurrent_group(
name=decoder_group_name,
step=gru_decoder_without_attention,
input=group_inputs)
lbl = paddle.layer.data(
name="target_language_next_word",
type=paddle.data_type.integer_value_sequence(target_dict_dim))
cost = paddle.layer.classification_cost(input=decoder, label=lbl)
return cost
```
## Data Preparation
The data used in this example comes from [WMT14](http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/), a parallel corpus of French-English translations. [bitexts](http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/data/bitexts.tgz) is used as the training data, and the [dev+test data](http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/data/dev+test.tgz) as the validation and test data. PaddlePaddle wraps a reading interface for this dataset; on the first run the program downloads it automatically, so no manual data preparation is needed.
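For instance, a few training samples can be inspected directly. This is a sketch assuming the `paddle.dataset.wmt14` interface used elsewhere in this example, where each sample is a tuple of (source word ids, target word ids, target next-word ids):
```python
import paddle.v2 as paddle

# iterate over the first few training samples; the corpus is downloaded
# automatically on first use
reader = paddle.dataset.wmt14.train(30000)
for i, sample in enumerate(reader()):
    print len(sample[0]), sample[0][:5]  # source length and first few ids
    if i == 2:
        break
```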
## Model Training and Testing
### Model Training
Starting model training is simple: just run `python train.py` in a command-line window. During training, the `train()` function in the `train.py` script carries out the following logic in order:
**a) Define the network, parse the network structure, and initialize the model parameters**
```python
# define the network topology.
cost = seq2seq_net(source_dict_dim, target_dict_dim)
parameters = paddle.parameters.create(cost)
```
**b) Set the optimization strategy for training and define the training data `reader`**
```python
# define optimization method
optimizer = paddle.optimizer.RMSProp(
learning_rate=1e-3,
gradient_clipping_threshold=10.0,
regularization=paddle.optimizer.L2Regularization(rate=8e-4))
# define the trainer instance
trainer = paddle.trainer.SGD(
cost=cost, parameters=parameters, update_equation=optimizer)
# define data reader
wmt14_reader = paddle.batch(
paddle.reader.shuffle(
paddle.dataset.wmt14.train(source_dict_dim), buf_size=8192),
batch_size=55)
```
**c) Define the event handler to print intermediate training results and save model snapshots**
```python
# define the event_handler callback
def event_handler(event):
if isinstance(event, paddle.event.EndIteration):
if not event.batch_id % 100 and event.batch_id:
        with gzip.open(
                os.path.join(save_path,
                             "nmt_without_att_%05d_batch_%05d.tar.gz" %
                             (event.pass_id, event.batch_id)), "w") as f:
            parameters.to_tar(f)
if event.batch_id and not event.batch_id % 10:
logger.info("Pass %d, Batch %d, Cost %f, %s" % (
event.pass_id, event.batch_id, event.cost, event.metrics))
```
**d) Start training**
```python
# start training
trainer.train(
reader=wmt14_reader, event_handler=event_handler, num_passes=2)
```
A sample of the output:
```text
Pass 0, Batch 0, Cost 267.674663, {'classification_error_evaluator': 1.0}
.........
Pass 0, Batch 10, Cost 172.892294, {'classification_error_evaluator': 0.953895092010498}
.........
Pass 0, Batch 20, Cost 177.989329, {'classification_error_evaluator': 0.9052488207817078}
.........
Pass 0, Batch 30, Cost 153.633665, {'classification_error_evaluator': 0.8643803596496582}
.........
Pass 0, Batch 40, Cost 168.170543, {'classification_error_evaluator': 0.8348183631896973}
```
### Generating Translation Results
Generating translations with a trained model is also very simple.
1. First, modify the arguments passed to the `generate` function in `main` of the `generate.py` script to choose which saved model to use for generation. The default arguments are:
```python
generate(
source_dict_dim=30000,
target_dict_dim=30000,
batch_size=20,
beam_size=3,
model_path="models/nmt_without_att_params_batch_00100.tar.gz")
```
2. Run the command `python generate.py` in a terminal; `generate()` in the script executes the following logic in order:
**a) Load the test samples**
```python
# load data samples for generation
gen_creator = paddle.dataset.wmt14.gen(source_dict_dim)
gen_data = []
for item in gen_creator():
gen_data.append((item[0], ))
```
**b) Initialize the model and run `infer()` to generate `beam search` translation results for each input sample**
```python
beam_gen = seq2seq_net(source_dict_dim, target_dict_dim, True)
with gzip.open(init_models_path) as f:
parameters = paddle.parameters.Parameters.from_tar(f)
# prob is the prediction probabilities, and id is the prediction word.
beam_result = paddle.infer(
output_layer=beam_gen,
parameters=parameters,
input=gen_data,
field=['prob', 'id'])
```
**c) Load the source- and target-language dictionaries, convert the sentences represented by `id` sequences back into the original languages, and output the results**
```python
beam_result = inferer.infer(input=test_batch, field=["prob", "id"])
gen_sen_idx = np.where(beam_result[1] == -1)[0]
assert len(gen_sen_idx) == len(test_batch) * beam_size
start_pos, end_pos = 1, 0
for i, sample in enumerate(test_batch):
print(" ".join([
src_dict[w] for w in sample[0][1:-1]
])) # skip the start and ending mark when print the source sentence
for j in xrange(beam_size):
end_pos = gen_sen_idx[i * beam_size + j]
print("%.4f\t%s" % (beam_result[0][i][j], " ".join(
trg_dict[w] for w in beam_result[1][start_pos:end_pos])))
start_pos = end_pos + 2
print("\n")
```
With the beam search width set to 3 and a French sentence as input, the corresponding translations are generated for the test data automatically, in the following format:
```text
Elles connaissent leur entreprise mieux que personne .
-3.754819 They know their business better than anyone . <e>
-4.445528 They know their businesses better than anyone . <e>
-5.026885 They know their business better than anybody . <e>
```
- The first line is the input source sentence.
- Lines 2 through beam_size + 1 are the `beam_size` translation candidates generated by beam search.
- Each of those lines is separated into two columns by "\t": the first column is the log probability of the sentence, and the second is the text of the translation.
- The symbol `<s>` marks the beginning of a sentence and `<e>` the end of a sentence; any word not contained in the dictionary is replaced by `<unk>`.
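Since the first column is a log probability, a candidate's probability is its exponential; for instance, the top candidate above has probability $e^{-3.754819} \approx 0.023$.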
With that, we have implemented a basic machine translation model on PaddlePaddle. As we have seen, PaddlePaddle provides a flexible and rich set of APIs that make it easy to configure all kinds of complex networks. Machine translation itself is a fast-moving field in which new methods and ideas keep emerging; after working through this example, interested readers can implement more sophisticated, better-performing translation models on the PaddlePaddle platform.
## References
[1] Sutskever I, Vinyals O, Le Q V. [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215)[J]. 2014, 4:3104-3112.
[2] Cho K, Van Merriënboer B, Gulcehre C, et al. [Learning phrase representations using RNN encoder-decoder for statistical machine translation](http://www.aclweb.org/anthology/D/D14/D14-1179.pdf)[C]. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014: 1724-1734.
[3] Bahdanau D, Cho K, Bengio Y. [Neural machine translation by jointly learning to align and translate](https://arxiv.org/abs/1409.0473)[C]. Proceedings of ICLR 2015, 2015
...@@ -40,57 +40,60 @@ ...@@ -40,57 +40,60 @@
<!-- This block will be replaced by each markdown file content. Please do not change lines below.--> <!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'> <div id="markdown" style='display:none'>
# 神经网络机器翻译模型 # Neural Machine Translation Model
## 背景介绍 ## Background Introduction
机器翻译利用计算机将源语言转换成目标语言的同义表达,是自然语言处理中重要的研究方向,有着广泛的应用需求,其实现方式也经历了不断地演化。传统机器翻译方法主要基于规则或统计模型,需要人为地指定翻译规则或设计语言特征,效果依赖于人对源语言与目标语言的理解程度。近些年来,深度学习的提出与迅速发展使得特征的自动学习成为可能。深度学习首先在图像识别和语音识别中取得成功,进而在机器翻译等自然语言处理领域中掀起了研究热潮。机器翻译中的深度学习模型直接学习源语言到目标语言的映射,大为减少了学习过程中人的介入,同时显著地提高了翻译质量。本例介绍在PaddlePaddle中如何利用循环神经网络(Recurrent Neural Network, RNN)构建一个端到端(End-to-End)的神经网络机器翻译(Neural Machine Translation, NMT)模型。 Neural Machine Translation (NMT) is a simple new architecture for getting machines to learn to translate. Traditional machine translation methods are mainly based on phrase-based statistical translation approaches that use separately engineered subcomponents rules or statistical models. NMT models use deep learning and representation learning. This example describes how to construct an end-to-end neural machine translation (NMT) model using the recurrent neural network (RNN) in PaddlePaddle.
## 模型概览 ## Model Overview
基于 RNN 的神经网络机器翻译模型遵循编码器-解码器结构,其中的编码器和解码器均是一个循环神经网络。将构成编码器和解码器的两个 RNN 沿时间步展开,得到如下的模型结构图: RNN-based neural machine translation follows the encoder-decoder architecture. A common choice for the encoder and decoder is the recurrent neural network (RNN), used by most NMT models. Below is an example diagram of a general approach for NMT.
<p align="center"><img src="images/encoder-decoder.png" width = "90%" align="center"/><br/>图 1. 编码器-解码器框架 </p> <p align="center"><img src="images/encoder-decoder.png" width = "90%" align="center"/><br/>Figure 1. Encoder - Decoder frame </ p>
神经机器翻译模型的输入输出可以是字符,也可以是词或者短语。不失一般性,本例以基于词的模型为例说明编码器/解码器的工作机制: The input and output of the neural machine translation model can be any of character, word or phrase. This example illustrates the word-based NMT.
- **编码器**:将源语言句子编码成一个向量,作为解码器的输入。解码器的原始输入是表示词的 `id` 序列 $w = {w_1, w_2, ..., w_T}$,用独热(One-hot)码表示。为了对输入进行降维,同时建立词语之间的语义关联,模型为热独码表示的单词学习一个词嵌入(Word Embedding)表示,也就是常说的词向量,关于词向量的详细介绍请参考 PaddleBook 的[词向量](https://github.com/PaddlePaddle/book/blob/develop/04.word2vec/README.cn.md)一章。最后 RNN 单元逐个词地处理输入,得到完整句子的编码向量。 - **Encoder**: Encodes the source language sentence into a vector as input to the decoder. The original input of the decoder is the `id` sequence $ w = {w_1, w_2, ..., w_T} $ of the word, expressed in the one-hot code. In order to reduce the input dimension, and to establish the semantic association between words, the model is a word that is expressed by hot independent code. Word embedding is a word vector. For more information about word vector, please refer to PaddleBook [word vector] (https://github.com/PaddlePaddle/book/blob/develop/04.word2vec/README.cn.md) chapter. Finally, the RNN unit processes the input word by word to get the encoding vector of the complete sentence.
- **解码器**:接受编码器的输入,逐个词地解码出目标语言序列 $u = {u_1, u_2, ..., u_{T'}}$。每个时间步,RNN 单元输出一个隐藏向量,之后经 `Softmax` 归一化计算出下一个目标词的条件概率,即 $P(u_i | w, u_1, u_2, ..., u_{t-1})$。因此,给定输入 $w$,其对应的翻译结果为 $u$ 的概率则为 - **Decoder**: Accepts the input of the encoder, decoding the target language sequence $ u = {u_1, u_2, ..., u_ {T '}} $ one by one. For each time step, the RNN unit outputs a hidden vector. Then the conditional probability of the next target word is calculated by `Softmax` normalization, i.e. $ P (u_i | w, u_1, u_2, ..., u_ {t- 1}) $. Thus, given the input $ w $, the corresponding translation result is $ u $
$$ P(u_1,u_2,...,u_{T'} | w) = \prod_{t=1}^{t={T'}}p(u_t|w, u_1, u_2, u_{t-1})$$ $$ P(u_1,u_2,...,u_{T'} | w) = \prod_{t=1}^{t={T'}}p(u_t|w, u_1, u_2, u_{t-1})$$
以中文到英文的翻译为例,源语言是中文,目标语言是英文。下面是一句源语言分词后的句子 In Chinese to English translation, for example, the source language is Chinese, and the target language is English. The following is a sentence after the source language word segmentation.
``` ```
祝愿 祖国 繁荣 昌盛 祝愿 祖国 繁荣 昌盛
``` ```
对应的目标语言英文翻译结果为: Corresponding target language English translation results for:
``` ```
Wish motherland rich and powerful Wish motherland rich and powerful
``` ```
在预处理阶段,准备源语言与目标语言互译的平行语料数据,并分别构建源语言和目标语言的词典;在训练阶段,用这样成对的平行语料训练模型;在模型测试阶段,输入中文句子,模型自动生成对应的英语翻译,然后将生成结果与标准翻译对比进行评估。在机器翻译领域,BLEU 是最流行的自动评估指标之一。 In the preprocessing step, we prepare the parallel corpus data of the source language and the target language. Then we construct the dictionaries of the source language and the target language respectively. In the training stage, we use the pairwise parallel corpus training model. In the model test stage, the model automatically generates the corresponding English translation, and then it evaluates the resulting results with standard translations. For the evaluation metric, BLEU is most commonly used.
### RNN 单元 ### RNN unit
RNN 的原始结构用一个向量来存储隐状态,然而这种结构的 RNN 在训练时容易发生梯度弥散(gradient vanishing),对于长时间的依赖关系难以建模。因此人们对 RNN 单元进行了改进,提出了 LSTM\[[1](#参考文献)] 和 GRU\[[2](#参考文献)],这两种单元以门来控制应该记住的和遗忘的信息,较好地解决了序列数据的长时依赖问题。以本例所用的 GRU 为例,其基本结构如下: The original structure of the RNN uses a vector to store the hidden state. However, the RNN of this structure is prone to have gradient vanishing problem, which is difficult to model for a long time. This issue can be addressed by using LSTM \[[1](#References)] and GRU (Gated Recurrent Unit) \[[2](#References)]. This solves the problem of long-term dependency by carefully forgetting the previous information. In this example, we demonstrate the GRU based model.
<p align="center"> <p align="center">
<img src="images/gru.png" width = "90%" align="center"/><br/> <img src="images/gru.png" width = "90%" align="center"/><br/>
图 2. GRU 单元 图 2. GRU 单元
</p> </p>
可以看到除了隐含状态以外,GRU 内部还包含了两个门:更新门(Update Gate)、重置门(Reset Gate)。在每一个时间步,门限和隐状态的更新由图 2 右侧的公式决定。这两个门限决定了状态以何种方式更新。 We can see that, in addition to the implicit state, the GRU also contains two gates: the Update Gate and the Reset Gate. At each time step, the update of the threshold and the hidden state is determined by the formula on the right side of Figure 2. These two thresholds determine how the state is updated.
### Bi-directional Encoder
In the above basic model, when the encoder sequentially processes the input sentence sequence, the state of the current time contains only the history input information without the sequence information of the future time. For sequence modeling, the context of the future also contains important information. With the bi-directional encoder (Figure 3), we can get both information at the same time:
### 双向编码器
在上述的基本模型中,编码器在顺序处理输入句子序列时,当前时刻的状态只包含了历史输入信息,而没有未来时刻的序列信息。而对于序列建模,未来时刻的上下文同样包含了重要的信息。可以使用如图 3 所示的这种双向编码器来同时获取当前时刻输入的上下文:
<p align="center"> <p align="center">
<img src="images/bidirectional-encoder.png" width = "90%" align="center"/><br/> <img src="images/bidirectional-encoder.png" width = "90%" align="center"/><br/>
图 3. 双向编码器结构示意图 Figure 3. Bi-directional encoder structure diagram
</p> </p>
图 3 所示的双向编码器\[[3](#参考文献)\]由两个独立的 RNN 构成,分别从前向和后向对输入序列进行编码,然后将两个 RNN 的输出合并在一起,作为最终的编码输出。
在 PaddlePaddle 中,双向编码器可以很方便地调用相关 APIs 实现: The bi-directional encoder \[[3](#References)\] shown in Figure 3 consists of two independent RNNs that encode the input sequence from the forward and backward, respectively. Then it combines the outputs of the two RNNs together, as the final encoding output.
In PaddlePaddle, bi-directional encoders can easily call using APIs:
```python ```python
src_word_id = paddle.layer.data( src_word_id = paddle.layer.data(
...@@ -112,24 +115,25 @@ encoded_vector = paddle.networks.bidirectional_gru( ...@@ -112,24 +115,25 @@ encoded_vector = paddle.networks.bidirectional_gru(
return_seq=True) return_seq=True)
``` ```
### 柱搜索(Beam Search) 算法 ### Beam Search Algorithm
训练完成后的生成阶段,模型根据源语言输入,解码生成对应的目标语言翻译结果。解码时,一个直接的方式是取每一步条件概率最大的词,作为当前时刻的输出。但局部最优并不一定能得到全局最优,即这种做法并不能保证最后得到的完整句子出现的概率最大。如果对解的全空间进行搜索,其代价又过大。为了解决这个问题,通常采用柱搜索(Beam Search)算法。柱搜索是一种启发式的图搜索算法,用一个参数 $k$ 控制搜索宽度,其要点如下: After the training is completed, the model will input and decode the corresponding target language translation result according to the source language. Decoding, a direct way is to take each step conditional probability of the largest word, as the current moment of output. But the local optimal does not necessarily guarantee the global optimal. If the search for the full space is large, the cost is too large. In order to solve this problem, beam search algorithm is commonly used. Beam search is a heuristic graph search algorithm that controls the search width with a parameter $ k $] as follows:
**1**. During decoding, always maintain $ k $ decoded sub-sequences;
**1**. 在解码的过程中,始终维护 $k$ 个已解码出的子序列; **2**. At the middle of time $ t $, for each sequence in the $ k $ sub-sequence, calculate the probability of the next word and take the maximum of $ k $ words with the largest probability, combining $ k ^ 2 $ New child sequence;
**2**. 在中间时刻 $t$, 对于 $k$ 个子序列中的每个序列,计算下一个词出现的概率并取概率最大的前 $k$ 个词,组合得到 $k^2$ 个新子序列; **3**. Take the maximum probability of $ k $ in these combination sequences to update the original subsequence;
**3**. 取 **2** 中这些组合序列中概率最大的前 $k$ 个以更新原来的子序列; **4**. iterate through it until you get $ k $ complete sentences as candidates for translation results.
**4**. 不断迭代下去,直至得到 $k$ 个完整的句子,作为翻译结果的候选。 For more information on beam search, refer to the [beam search] chapter in PaddleBook [machine translation] (https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.cn.md) (https://github.com/PaddlePaddle/book/blob/develop.org.machine_translation/README.cn.md# beam search algorithm) section.
关于柱搜索的更多介绍,可以参考 PaddleBook 中[机器翻译](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.cn.md)一章中[柱搜索](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.cn.md#柱搜索算法)一节。
### Decoder without Attention Mechanism
The [machine translation](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.cn.md) chapter of PaddleBook has already introduced the Encoder-Decoder architecture with an attention mechanism; this example presents the Encoder-Decoder architecture without attention. For more on the attention mechanism, please refer to PaddleBook and reference \[[3](#References)\].
For popular RNN units, PaddlePaddle provides efficient implementations that can be called directly. To perform custom operations at every time step of an RNN, use `recurrent_layer_group` in PaddlePaddle: first define the single-step logic as a function, then use `recurrent_group()` to call that function repeatedly over the whole sequence. The attention-free decoder in this example is implemented with `recurrent_layer_group`, and its single-step logic function `gru_decoder_without_attention()` is shown below:
```python
# the initialization state for decoder GRU
...
def gru_decoder_without_attention(enc_vec, current_word):
    ...
    return out
```
The decoder behaves quite differently in the training and testing phases:
- **Training phase**: the word embeddings of the target translation, `trg_embedding`, are passed as a parameter to the single-step logic `gru_decoder_without_attention()`; the function `recurrent_group()` calls the single-step logic in a loop, and finally the cost between the target translation and the actual decoding is computed and returned;
- **Testing phase**: the decoder predicts the next word based on the last generated word; `GeneratedInput()` automatically fetches the word embeddings of the $k$ most probable words predicted by the model and passes them to the single-step logic; the `beam_search()` function calls the single-step logic function `gru_decoder_without_attention()` to complete the beam search, which is returned as the result.
The training and generation logic are implemented in the following `if-else` branches:
```python
group_input1 = paddle.layer.StaticInput(input=encoded_vector)
...
return cost
```
## Data Preparation
The data used in this example comes from [WMT14](http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/), a parallel corpus of French-to-English translations. The [bitexts](http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/data/bitexts.tgz) data serves as training data, and the [dev+test data](http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/data/dev+test.tgz) as validation and test data. PaddlePaddle ships a reading interface for this data set; on the first run, the program downloads it automatically, so no manual data preparation is needed.
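As a quick sanity check, the reader can be iterated directly; this sketch assumes the paddle.v2-era `paddle.dataset.wmt14` interface, with an illustrative dictionary size:
```python
import paddle.v2 as paddle

# Iterate the built-in reader directly; the first call triggers the
# automatic download described above. 30000 is an illustrative dict size.
for i, sample in enumerate(paddle.dataset.wmt14.train(30000)()):
    print(sample)  # tuples of word-id sequences
    if i == 2:     # only inspect the first three samples
        break
```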
## Model Training and Testing
### Model Training
Starting model training is simple: just execute `python train.py` in a command-line window. During training, the `train()` function in the `train.py` script carries out the following steps in order:
**a) Define the network, parse the network topology, and initialize the model parameters.**
```python
# define the network topology.
cost = seq2seq_net(source_dict_dim, target_dict_dim)
parameters = paddle.parameters.create(cost)
```
**b) Set the optimization strategy for training and define the training data `reader`.**
```python
# define optimization method
...
wmt14_reader = paddle.batch(
    ...
    batch_size=55)
```
**c) Define the event handler to print intermediate training results and save snapshots of the model.**
```python
# define the event_handler callback
def event_handler(event):
    ...
        event.pass_id, event.batch_id, event.cost, event.metrics))
```
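The snippet above is abbreviated; a fuller handler in the same spirit might look like the following sketch, which assumes the `parameters` object created in step a). The logging cadence and snapshot path are illustrative:
```python
import gzip
import sys

def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        # print full progress every 10 batches, a dot otherwise
        if event.batch_id % 10 == 0:
            print("\nPass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics))
        else:
            sys.stdout.write('.')
            sys.stdout.flush()
    if isinstance(event, paddle.event.EndPass):
        # save a snapshot of the parameters after every pass
        with gzip.open('models/params_pass_%05d.tar.gz' % event.pass_id,
                       'w') as f:
            parameters.to_tar(f)
```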
**d) Start training.**
```python
# start training
trainer.train(
    reader=wmt14_reader, event_handler=event_handler, num_passes=2)
```
A sample of the output is:
```text
Pass 0, Batch 0, Cost 267.674663, {'classification_error_evaluator': 1.0}
...
Pass 0, Batch 30, Cost 153.633665, {'classification_error_evaluator': 0.86438035
Pass 0, Batch 40, Cost 168.170543, {'classification_error_evaluator': 0.8348183631896973}
```
### Generate Translation Results
Generating translated text with the trained model is also straightforward.
1. First, modify the parameters passed to the `generate` function in `main` of the `generate.py` script, to choose which saved model to use for generation. The default parameters are as follows:
```python
generate(
    ...
    model_path="models/nmt_without_att_params_batch_00100.tar.gz")
```
2. Execute the command `python generate.py` in the terminal. The `generate()` function in the script then performs the following steps in order:
**a) Load the test samples.**
```python
# load data samples for generation
...
gen_data.append((item[0], ))
```
**b) Initialize the model and execute `infer()` to generate beam-search translation results for every input sample.**
```python
beam_gen = seq2seq_net(source_dict_dim, target_dict_dim, True)
...
    field=['prob', 'id'])
```
**c) Load the source- and target-language dictionaries, convert the sentences represented as `id` sequences back into the original languages, and output the results.**
```python
beam_result = inferer.infer(input=test_batch, field=["prob", "id"])
...
print("\n")
```
With the beam-search width set to 3 and a French sentence as input, the corresponding translations are generated automatically for the test data, in the following output format:
```text
Elles connaissent leur entreprise mieux que personne .
...
-5.026885 They know their business better than anybody . <e>
```
- The first line is the input source-language sentence.
- Lines 2 through beam_size + 1 are the `beam_size` translation results generated by the beam search.
- Each of those lines is separated by "\t" into two columns: the first column is the log probability of the sentence, and the second column is the text of the translation result (see the parsing sketch below).
- The symbol `<s>` marks the beginning of a sentence and the symbol `<e>` the end of a sentence; any word not contained in the dictionary is replaced by the symbol `<unk>`.
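Given this format, one block of output can be parsed with a small sketch like the following (the helper name and the `beam_size` default are illustrative):
```python
def parse_block(lines, beam_size=3):
    """Parse one generated block: the first line is the source sentence,
    the next beam_size lines are 'log_prob<TAB>translation'."""
    source = lines[0]
    candidates = []
    for line in lines[1:1 + beam_size]:
        log_prob, text = line.split('\t', 1)
        candidates.append((float(log_prob), text.strip()))
    return source, candidates
```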
So far, we have implemented a basic machine translation model on PaddlePaddle. As we have seen, PaddlePaddle provides a flexible and rich set of APIs that make it easy to configure all kinds of complex networks. Machine translation itself is a rapidly developing field in which new methods and ideas keep emerging; after working through this example, interested readers can build more complex, better-performing translation models on the PaddlePaddle platform.
## References
[1] Sutskever I, Vinyals O, Le Q V. [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215)[J]. 2014, 4: 3104-3112.
[2] Cho K, Van Merriënboer B, Gulcehre C, et al. [Learning phrase representations using RNN encoder-decoder for statistical machine translation](http://www.aclweb.org/anthology/D/D14/D14-1179.pdf)[C]. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014: 1724-1734.
[3] Bahdanau D, Cho K, Bengio Y. [Neural machine translation by jointly learning to align and translate](https://arxiv.org/abs/1409.0473)[C]. Proceedings of ICLR 2015, 2015.
</div>
<!-- You can change the lines below now. -->
# Single Shot MultiBox Detector (SSD) Object Detection
## Introduction
Single Shot MultiBox Detector (SSD) is one of the newer and more refined algorithms for detecting objects in images \[[1](#References)\]. SSD is characterized by fast detection and high detection accuracy, and PaddlePaddle provides an integrated implementation of it. This example demonstrates how to use the SSD model in PaddlePaddle for object detection. We first provide a brief introduction to the SSD principle, then describe how to train, evaluate and test on the PASCAL VOC data set, and finally how to use SSD on a custom data set.
## SSD Architecture
SSD uses a convolutional neural network to achieve end-to-end detection: the input is the raw image and the output is the detection result, without relying on external tools or a separate feature-extraction process. A popular base network for SSD is VGG16 \[[2](#References)\]. SSD modifies the VGG16 network as follows.
1. The final fully connected layers fc6 and fc7 are turned into convolution layers, whose parameters are obtained from the original fc6 and fc7 parameters.
2. The parameters of the pool5 layer are changed from 2x2-s2 (kernel size 2x2, stride 2) to 3x3-s1-p1 (kernel size 3x3, stride 1, padding 1).
3. Priorbox layers are attached to the conv4\_3, conv7, conv8\_2, conv9\_2, conv10\_2, and pool11 layers. The main purpose of a priorbox layer is to generate a series of rectangular candidates based on its input feature map. A more detailed introduction to SSD can be found in the paper \[[1](#References)\].
Below is the overall structure of the model (300x300):
<p align="center">
<img src="images/ssd_network.png" width="900" height="250" hspace='10'/> <br/>
Figure 1. SSD network architecture
</p>
Each box in the figure represents a convolution layer, and the last two rectangles represent the summary of the convolution-layer outputs and the post-processing phase. Specifically, in the prediction phase the network outputs a set of candidate rectangles, each carrying two types of information: the position and the category score. The network produces thousands of predictions at various scales and aspect ratios and then performs non-maximum suppression, resulting in a handful of final detections.
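The non-maximum suppression step can be pictured with the following framework-free sketch; the box layout `[xmin, ymin, xmax, ymax, score]` is illustrative, and the 0.45 overlap threshold mirrors `NMS_THRESHOLD` in the configuration file shown later:
```python
def iou(a, b):
    """Jaccard overlap of two boxes in [xmin, ymin, xmax, ymax, score] form."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, overlap_threshold=0.45):
    """Greedy non-maximum suppression: keep a box only if it does not
    overlap an already-kept, higher-scoring box by more than the threshold."""
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) <= overlap_threshold for k in kept):
            kept.append(box)
    return kept
```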
## Example Overview
This example contains the following files:
<table>
<caption>Table 1. Directory structure</caption>
<tr><th>File</th><th>Description</th></tr>
<tr><td>train.py</td><td>Training script</td></tr>
<tr><td>eval.py</td><td>Evaluation</td></tr>
<tr><td>infer.py</td><td>Prediction using the trained model</td></tr>
<tr><td>visual.py</td><td>Visualization of the test results</td></tr>
<tr><td>image_util.py</td><td>Common functions required for image preprocessing</td></tr>
<tr><td>data_provider.py</td><td>Data processing script: generates the data required for training, evaluation, or detection</td></tr>
<tr><td>config/pascal_voc_conf.py</td><td> Neural network hyperparameter configuration file</td></tr>
<tr><td>data/label_list</td><td>Label list</td></tr>
<tr><td>data/prepare_voc_data.py</td><td>Prepare training PASCAL VOC data list</td></tr>
</table>
The training phase requires pre-processing of the data, including cropping and sampling; this is done in ```image_util.py``` and ```data_provider.py```, driven by the hyperparameters in ```config/pascal_voc_conf.py```. ```data/prepare_voc_data.py``` is used to generate the file lists for the training and test sets; the user needs to download and extract the data beforehand, and VOC2007 and VOC2012 are used by default.
## PASCAL VOC Data set
### Data Preparation
First download the data set. VOC2007\[[3](#References)\] contains both training and test data sets, while VOC2012\[[4](#References)\] contains only a training set. The downloaded data are stored in ```data/VOCdevkit/VOC2007``` and ```data/VOCdevkit/VOC2012```. Next, run ```data/prepare_voc_data.py``` to generate ```trainval.txt``` and ```test.txt```. The relevant function is as follows:
```python
def prepare_filelist(devkit_dir, years, output_dir):
trainval_list = []
test_list = []
for year in years:
trainval, test = walk_dir(devkit_dir, year)
trainval_list.extend(trainval)
test_list.extend(test)
random.shuffle(trainval_list)
with open(osp.join(output_dir, 'trainval.txt'), 'w') as ftrainval:
for item in trainval_list:
ftrainval.write(item[0] + ' ' + item[1] + '\n')
with open(osp.join(output_dir, 'test.txt'), 'w') as ftest:
for item in test_list:
ftest.write(item[0] + ' ' + item[1] + '\n')
```
The data in ```trainval.txt``` will look like:
```
VOCdevkit/VOC2007/JPEGImages/000005.jpg VOCdevkit/VOC2007/Annotations/000005.xml
VOCdevkit/VOC2007/JPEGImages/000007.jpg VOCdevkit/VOC2007/Annotations/000007.xml
VOCdevkit/VOC2007/JPEGImages/000009.jpg VOCdevkit/VOC2007/Annotations/000009.xml
```
The first field is the relative path of the image file, and the second field is the relative path of the corresponding label file.
### To Use the Pre-trained Model
We also provide a pre-trained model using VGG-16 with good performance. To use the model, download the file http://paddlepaddle.bj.bcebos.com/model_zoo/detection/ssd_model/vgg_model.tar.gz and place it as ```vgg/vgg_model.tar.gz```.
### Training
Next, run ```python train.py``` to train the model. Note that this example only supports the CUDA GPU environment and cannot be trained with CPU only, mainly because CPU-only training would be very slow.
```python
paddle.init(use_gpu=True, trainer_count=4)
data_args = data_provider.Settings(
data_dir='./data',
label_file='label_list',
resize_h=cfg.IMG_HEIGHT,
resize_w=cfg.IMG_WIDTH,
mean_value=[104,117,124])
train(train_file_list='./data/trainval.txt',
dev_file_list='./data/test.txt',
data_args=data_args,
init_model_path='./vgg/vgg_model.tar.gz')
```
Below is a description of this script:
1. Call ```paddle.init``` with 4 GPUs.
2. ```data_provider.Settings()``` is used to pass configuration parameters. In the ```config/pascal_voc_conf.py``` setting, 300x300 is a typical configuration that balances accuracy and efficiency; it can be extended to 512x512 by modifying the configuration file.
3. In the ```train()``` function, ```train_file_list``` specifies the training data list, ```dev_file_list``` specifies the evaluation data list, and ```init_model_path``` specifies the location of the pre-trained model.
4. The training process prints log information: each batch outputs the current pass number, the current batch cost and the mAP (mean Average Precision), and each training pass saves a model snapshot to the default directory ```checkpoints``` (which needs to be created in advance).
The following shows the mAP convergence of SSD300x300 on the VOC data set.
<p align="center">
<img src="images/SSD300x300_map.png" hspace='10'/> <br/>
Figure 2. SSD300x300 mAP convergence curve
</p>
### Model Assessment
Next, run ```python eval.py``` to evaluate the model.
```python
paddle.init(use_gpu=True, trainer_count=4) # use 4 gpus
data_args = data_provider.Settings(
data_dir='./data',
label_file='label_list',
resize_h=cfg.IMG_HEIGHT,
resize_w=cfg.IMG_WIDTH,
mean_value=[104, 117, 124])
eval(
eval_file_list='./data/test.txt',
batch_size=4,
data_args=data_args,
model_path='models/pass-00000.tar.gz')
```
### Object Detection
Run ```python infer.py``` to perform the object detection using the trained model.
```python
infer(
eval_file_list='./data/infer.txt',
save_path='infer.res',
data_args=data_args,
batch_size=4,
model_path='models/pass-00000.tar.gz',
threshold=0.3)
```
Here ```eval_file_list``` specifies the image path list, ```save_path``` specifies where to save the prediction results, and ```threshold``` is the confidence score below which detections are discarded. The saved results look like:
```
VOCdevkit/VOC2007/JPEGImages/006936.jpg 12 0.997844 131.255611777 162.271582842 396.475315094 334.0
VOCdevkit/VOC2007/JPEGImages/006936.jpg 14 0.998557 229.160234332 49.5991278887 314.098775387 312.913876176
VOCdevkit/VOC2007/JPEGImages/006936.jpg 14 0.372522 187.543615699 133.727034628 345.647156239 327.448492289
...
```
Each row contains four tab-separated fields: the path of the detected image, the category of the detected bounding box, the confidence score, and the four coordinate values (separated by spaces).
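Given this layout, the saved results can be read back with a small sketch like the following (the helper name is illustrative):
```python
def load_detections(path='infer.res'):
    """Read back lines of the form
    'image_path<TAB>label<TAB>confidence<TAB>xmin ymin xmax ymax'."""
    detections = []
    with open(path) as f:
        for line in f:
            img_path, label, conf, coords = line.strip().split('\t')
            xmin, ymin, xmax, ymax = map(float, coords.split())
            detections.append(
                (img_path, int(label), float(conf), (xmin, ymin, xmax, ymax)))
    return detections
```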
Below is an example of running ```python visual.py``` to visualize the model results. By default, the visualized images are saved in ```./visual_res```.
<p align="center">
<img src="images/vis_1.jpg" height=150 width=200 hspace='10'/>
<img src="images/vis_2.jpg" height=150 width=200 hspace='10'/>
<img src="images/vis_3.jpg" height=150 width=100 hspace='10'/>
<img src="images/vis_4.jpg" height=150 width=200 hspace='10'/> <br />
Figure 3. SSD300x300 Visualization Example
</p>
## To Use a Custom Data Set
Training the SSD model on a custom data set in PaddlePaddle is also easy: just provide the input in a format that ```train.txt``` can describe. A recommended structure for ```train.txt``` is:
```
image00000_file_path image00000_annotation_file_path
image00001_file_path image00001_annotation_file_path
image00002_file_path image00002_annotation_file_path
...
```
The first column is the image file path, and the second column is the path of the corresponding annotation file. When the annotations use the XML format, ```data_provider.py``` can process the data as follows.
```python
bbox_labels = []
root = xml.etree.ElementTree.parse(label_path).getroot()
for object in root.findall('object'):
bbox_sample = []
# start from 1
bbox_sample.append(float(settings.label_list.index(
object.find('name').text)))
bbox = object.find('bndbox')
difficult = float(object.find('difficult').text)
bbox_sample.append(float(bbox.find('xmin').text)/img_width)
bbox_sample.append(float(bbox.find('ymin').text)/img_height)
bbox_sample.append(float(bbox.find('xmax').text)/img_width)
bbox_sample.append(float(bbox.find('ymax').text)/img_height)
bbox_sample.append(difficult)
bbox_labels.append(bbox_sample)
```
Alternatively, the annotation data (e.g. image00000\_annotation\_file\_path) may be plain text of the following form:
```
label1 xmin1 ymin1 xmax1 ymax1
label2 xmin2 ymin2 xmax2 ymax2
...
```
Here each row corresponds to one object and has 5 fields. The first is the label (since the background is numbered 0, object labels need to start from 1), and the remaining four are the coordinates. Such files can be parsed as follows.
```python
bbox_labels = []
with open(label_path) as flabel:
for line in flabel:
bbox_sample = []
bbox = [float(i) for i in line.strip().split()]
label = bbox[0]
bbox_sample.append(label)
bbox_sample.append(bbox[1]/float(img_width))
bbox_sample.append(bbox[2]/float(img_height))
bbox_sample.append(bbox[3]/float(img_width))
bbox_sample.append(bbox[4]/float(img_height))
bbox_sample.append(0.0)
bbox_labels.append(bbox_sample)
```
Another important point: when the image size or the sizes of the objects change, the configuration of the network structure must be changed accordingly. Use ```config/pascal_voc_conf.py``` as the basis for creating a custom configuration file. For more details, please refer to \[[1](#References)\].
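For example, a custom configuration built in the same `easydict` style as ```config/pascal_voc_conf.py``` might start by adjusting the input resolution and class count; the values below are purely illustrative:
```python
from easydict import EasyDict as edict

__C = edict()
cfg = __C

# a hypothetical data set with 10 object classes plus background, 512x512 input
__C.IMG_WIDTH = 512
__C.IMG_HEIGHT = 512
__C.IMG_CHANNEL = 3
__C.CLASS_NUM = 11   # 10 classes + background
__C.BACKGROUND_ID = 0
```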
## References
1. Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. [SSD: Single shot multibox detector](https://arxiv.org/abs/1512.02325). European conference on computer vision. Springer, Cham, 2016.
2. Simonyan, Karen, and Andrew Zisserman. [Very deep convolutional networks for large-scale image recognition](https://arxiv.org/abs/1409.1556). arXiv preprint arXiv:1409.1556 (2014).
3. [The PASCAL Visual Object Classes Challenge 2007](http://host.robots.ox.ac.uk/pascal/VOC/voc2007/index.html)
4. [Visual Object Classes Challenge 2012 (VOC2012)](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html)
from easydict import EasyDict as edict
import numpy as np
__C = edict()
cfg = __C
__C.TRAIN = edict()
__C.IMG_WIDTH = 300
__C.IMG_HEIGHT = 300
__C.IMG_CHANNEL = 3
__C.CLASS_NUM = 21
__C.BACKGROUND_ID = 0
# training settings
__C.TRAIN.LEARNING_RATE = 0.001 / 4
__C.TRAIN.MOMENTUM = 0.9
__C.TRAIN.BATCH_SIZE = 32
__C.TRAIN.NUM_PASS = 200
__C.TRAIN.L2REGULARIZATION = 0.0005 * 4
__C.TRAIN.LEARNING_RATE_DECAY_A = 0.1
__C.TRAIN.LEARNING_RATE_DECAY_B = 16551 * 80
__C.TRAIN.LEARNING_RATE_SCHEDULE = 'discexp'
__C.NET = edict()
# configuration for multibox_loss_layer
__C.NET.MBLOSS = edict()
__C.NET.MBLOSS.OVERLAP_THRESHOLD = 0.5
__C.NET.MBLOSS.NEG_POS_RATIO = 3.0
__C.NET.MBLOSS.NEG_OVERLAP = 0.5
# configuration for detection_map
__C.NET.DETMAP = edict()
__C.NET.DETMAP.OVERLAP_THRESHOLD = 0.5
__C.NET.DETMAP.EVAL_DIFFICULT = False
__C.NET.DETMAP.AP_TYPE = "11point"
# configuration for detection_output_layer
__C.NET.DETOUT = edict()
__C.NET.DETOUT.CONFIDENCE_THRESHOLD = 0.01
__C.NET.DETOUT.NMS_THRESHOLD = 0.45
__C.NET.DETOUT.NMS_TOP_K = 400
__C.NET.DETOUT.KEEP_TOP_K = 200
# configuration for priorbox_layer from conv4_3
__C.NET.CONV4 = edict()
__C.NET.CONV4.PB = edict()
__C.NET.CONV4.PB.MIN_SIZE = [30]
__C.NET.CONV4.PB.ASPECT_RATIO = [2.]
__C.NET.CONV4.PB.VARIANCE = [0.1, 0.1, 0.2, 0.2]
# configuration for priorbox_layer from fc7
__C.NET.FC7 = edict()
__C.NET.FC7.PB = edict()
__C.NET.FC7.PB.MIN_SIZE = [60]
__C.NET.FC7.PB.MAX_SIZE = [114]
__C.NET.FC7.PB.ASPECT_RATIO = [2., 3.]
__C.NET.FC7.PB.VARIANCE = [0.1, 0.1, 0.2, 0.2]
# configuration for priorbox_layer from conv6_2
__C.NET.CONV6 = edict()
__C.NET.CONV6.PB = edict()
__C.NET.CONV6.PB.MIN_SIZE = [114]
__C.NET.CONV6.PB.MAX_SIZE = [168]
__C.NET.CONV6.PB.ASPECT_RATIO = [2., 3.]
__C.NET.CONV6.PB.VARIANCE = [0.1, 0.1, 0.2, 0.2]
# configuration for priorbox_layer from conv7_2
__C.NET.CONV7 = edict()
__C.NET.CONV7.PB = edict()
__C.NET.CONV7.PB.MIN_SIZE = [168]
__C.NET.CONV7.PB.MAX_SIZE = [222]
__C.NET.CONV7.PB.ASPECT_RATIO = [2., 3.]
__C.NET.CONV7.PB.VARIANCE = [0.1, 0.1, 0.2, 0.2]
# configuration for priorbox_layer from conv8_2
__C.NET.CONV8 = edict()
__C.NET.CONV8.PB = edict()
__C.NET.CONV8.PB.MIN_SIZE = [222]
__C.NET.CONV8.PB.MAX_SIZE = [276]
__C.NET.CONV8.PB.ASPECT_RATIO = [2., 3.]
__C.NET.CONV8.PB.VARIANCE = [0.1, 0.1, 0.2, 0.2]
# configuration for priorbox_layer from pool6
__C.NET.POOL6 = edict()
__C.NET.POOL6.PB = edict()
__C.NET.POOL6.PB.MIN_SIZE = [276]
__C.NET.POOL6.PB.MAX_SIZE = [330]
__C.NET.POOL6.PB.ASPECT_RATIO = [2., 3.]
__C.NET.POOL6.PB.VARIANCE = [0.1, 0.1, 0.2, 0.2]
background
aeroplane
bicycle
bird
boat
bottle
bus
car
cat
chair
cow
diningtable
dog
horse
motorbike
person
pottedplant
sheep
sofa
train
tvmonitor
import os
import os.path as osp
import re
import random
devkit_dir = './VOCdevkit'
years = ['2007', '2012']
def get_dir(devkit_dir, year, type):
return osp.join(devkit_dir, 'VOC' + year, type)
def walk_dir(devkit_dir, year):
filelist_dir = get_dir(devkit_dir, year, 'ImageSets/Main')
annotation_dir = get_dir(devkit_dir, year, 'Annotations')
img_dir = get_dir(devkit_dir, year, 'JPEGImages')
trainval_list = []
test_list = []
added = set()
for _, _, files in os.walk(filelist_dir):
for fname in files:
img_ann_list = []
if re.match('[a-z]+_trainval\.txt', fname):
img_ann_list = trainval_list
elif re.match('[a-z]+_test\.txt', fname):
img_ann_list = test_list
else:
continue
fpath = osp.join(filelist_dir, fname)
for line in open(fpath):
name_prefix = line.strip().split()[0]
if name_prefix in added:
continue
added.add(name_prefix)
ann_path = osp.join(annotation_dir, name_prefix + '.xml')
img_path = osp.join(img_dir, name_prefix + '.jpg')
assert os.path.isfile(ann_path), 'file %s not found.' % ann_path
assert os.path.isfile(img_path), 'file %s not found.' % img_path
img_ann_list.append((img_path, ann_path))
return trainval_list, test_list
def prepare_filelist(devkit_dir, years, output_dir):
trainval_list = []
test_list = []
for year in years:
trainval, test = walk_dir(devkit_dir, year)
trainval_list.extend(trainval)
test_list.extend(test)
random.shuffle(trainval_list)
with open(osp.join(output_dir, 'trainval.txt'), 'w') as ftrainval:
for item in trainval_list:
ftrainval.write(item[0] + ' ' + item[1] + '\n')
with open(osp.join(output_dir, 'test.txt'), 'w') as ftest:
for item in test_list:
ftest.write(item[0] + ' ' + item[1] + '\n')
prepare_filelist(devkit_dir, years, '.')
# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import image_util
from paddle.utils.image_util import *
import random
from PIL import Image
import numpy as np
import xml.etree.ElementTree
import os
class Settings(object):
def __init__(self, data_dir, label_file, resize_h, resize_w, mean_value):
self._data_dir = data_dir
self._label_list = []
label_fpath = os.path.join(data_dir, label_file)
for line in open(label_fpath):
self._label_list.append(line.strip())
self._resize_height = resize_h
self._resize_width = resize_w
self._img_mean = np.array(mean_value)[:, np.newaxis, np.newaxis].astype(
'float32')
@property
def data_dir(self):
return self._data_dir
@property
def label_list(self):
return self._label_list
@property
def resize_h(self):
return self._resize_height
@property
def resize_w(self):
return self._resize_width
@property
def img_mean(self):
return self._img_mean
def _reader_creator(settings, file_list, mode, shuffle):
def reader():
with open(file_list) as flist:
lines = [line.strip() for line in flist]
if shuffle:
random.shuffle(lines)
for line in lines:
if mode == 'train' or mode == 'test':
img_path, label_path = line.split()
img_path = os.path.join(settings.data_dir, img_path)
label_path = os.path.join(settings.data_dir, label_path)
elif mode == 'infer':
img_path = os.path.join(settings.data_dir, line)
img = Image.open(img_path)
img_width, img_height = img.size
img = np.array(img)
# layout: label | xmin | ymin | xmax | ymax | difficult
if mode == 'train' or mode == 'test':
bbox_labels = []
root = xml.etree.ElementTree.parse(label_path).getroot()
for object in root.findall('object'):
bbox_sample = []
# start from 1
bbox_sample.append(
float(
settings.label_list.index(
object.find('name').text)))
bbox = object.find('bndbox')
difficult = float(object.find('difficult').text)
bbox_sample.append(
float(bbox.find('xmin').text) / img_width)
bbox_sample.append(
float(bbox.find('ymin').text) / img_height)
bbox_sample.append(
float(bbox.find('xmax').text) / img_width)
bbox_sample.append(
float(bbox.find('ymax').text) / img_height)
bbox_sample.append(difficult)
bbox_labels.append(bbox_sample)
sample_labels = bbox_labels
if mode == 'train':
batch_sampler = []
# hard-code here
batch_sampler.append(
image_util.sampler(1, 1, 1.0, 1.0, 1.0, 1.0, 0.0,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.1,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.3,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.5,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.7,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.9,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.0,
1.0))
""" random crop """
sampled_bbox = image_util.generate_batch_samples(
batch_sampler, bbox_labels, img_width, img_height)
if len(sampled_bbox) > 0:
idx = int(random.uniform(0, len(sampled_bbox)))
img, sample_labels = image_util.crop_image(
img, bbox_labels, sampled_bbox[idx], img_width,
img_height)
img = Image.fromarray(img)
img = img.resize((settings.resize_w, settings.resize_h),
Image.ANTIALIAS)
img = np.array(img)
if mode == 'train':
mirror = int(random.uniform(0, 2))
if mirror == 1:
img = img[:, ::-1, :]
for i in xrange(len(sample_labels)):
tmp = sample_labels[i][1]
sample_labels[i][1] = 1 - sample_labels[i][3]
sample_labels[i][3] = 1 - tmp
if len(img.shape) == 3:
img = np.swapaxes(img, 1, 2)
img = np.swapaxes(img, 1, 0)
img = img.astype('float32')
img -= settings.img_mean
img = img.flatten()
if mode == 'train' or mode == 'test':
if mode == 'train' and len(sample_labels) == 0: continue
yield img.astype('float32'), sample_labels
elif mode == 'infer':
yield img.astype('float32')
return reader
def train(settings, file_list, shuffle=True):
return _reader_creator(settings, file_list, 'train', shuffle)
def test(settings, file_list):
return _reader_creator(settings, file_list, 'test', False)
def infer(settings, file_list):
return _reader_creator(settings, file_list, 'infer', False)
import paddle.v2 as paddle
import data_provider
import vgg_ssd_net
import os, sys
import gzip
from config.pascal_voc_conf import cfg
def eval(eval_file_list, batch_size, data_args, model_path):
cost, detect_out = vgg_ssd_net.net_conf(mode='eval')
assert os.path.isfile(model_path), 'Invalid model.'
parameters = paddle.parameters.Parameters.from_tar(gzip.open(model_path))
optimizer = paddle.optimizer.Momentum()
trainer = paddle.trainer.SGD(
cost=cost,
parameters=parameters,
extra_layers=[detect_out],
update_equation=optimizer)
feeding = {'image': 0, 'bbox': 1}
reader = paddle.batch(
data_provider.test(data_args, eval_file_list), batch_size=batch_size)
result = trainer.test(reader=reader, feeding=feeding)
print "TestCost: %f, Detection mAP=%g" % \
(result.cost, result.metrics['detection_evaluator'])
if __name__ == "__main__":
paddle.init(use_gpu=True, trainer_count=4) # use 4 gpus
data_args = data_provider.Settings(
data_dir='./data',
label_file='label_list',
resize_h=cfg.IMG_HEIGHT,
resize_w=cfg.IMG_WIDTH,
mean_value=[104, 117, 124])
eval(
eval_file_list='./data/test.txt',
batch_size=4,
data_args=data_args,
model_path='models/pass-00000.tar.gz')
from PIL import Image
import numpy as np
import random
import math
class sampler():
def __init__(self, max_sample, max_trial, min_scale, max_scale,
min_aspect_ratio, max_aspect_ratio, min_jaccard_overlap,
max_jaccard_overlap):
self.max_sample = max_sample
self.max_trial = max_trial
self.min_scale = min_scale
self.max_scale = max_scale
self.min_aspect_ratio = min_aspect_ratio
self.max_aspect_ratio = max_aspect_ratio
self.min_jaccard_overlap = min_jaccard_overlap
self.max_jaccard_overlap = max_jaccard_overlap
class bbox():
def __init__(self, xmin, ymin, xmax, ymax):
self.xmin = xmin
self.ymin = ymin
self.xmax = xmax
self.ymax = ymax
def bbox_area(src_bbox):
width = src_bbox.xmax - src_bbox.xmin
height = src_bbox.ymax - src_bbox.ymin
return width * height
def generate_sample(sampler):
scale = random.uniform(sampler.min_scale, sampler.max_scale)
min_aspect_ratio = max(sampler.min_aspect_ratio, (scale**2.0))
max_aspect_ratio = min(sampler.max_aspect_ratio, 1 / (scale**2.0))
aspect_ratio = random.uniform(min_aspect_ratio, max_aspect_ratio)
bbox_width = scale * (aspect_ratio**0.5)
bbox_height = scale / (aspect_ratio**0.5)
xmin_bound = 1 - bbox_width
ymin_bound = 1 - bbox_height
xmin = random.uniform(0, xmin_bound)
ymin = random.uniform(0, ymin_bound)
xmax = xmin + bbox_width
ymax = ymin + bbox_height
sampled_bbox = bbox(xmin, ymin, xmax, ymax)
return sampled_bbox
def jaccard_overlap(sample_bbox, object_bbox):
if sample_bbox.xmin >= object_bbox.xmax or \
sample_bbox.xmax <= object_bbox.xmin or \
sample_bbox.ymin >= object_bbox.ymax or \
sample_bbox.ymax <= object_bbox.ymin:
return 0
intersect_xmin = max(sample_bbox.xmin, object_bbox.xmin)
intersect_ymin = max(sample_bbox.ymin, object_bbox.ymin)
intersect_xmax = min(sample_bbox.xmax, object_bbox.xmax)
intersect_ymax = min(sample_bbox.ymax, object_bbox.ymax)
intersect_size = (intersect_xmax - intersect_xmin) * (
intersect_ymax - intersect_ymin)
sample_bbox_size = bbox_area(sample_bbox)
object_bbox_size = bbox_area(object_bbox)
overlap = intersect_size / (
sample_bbox_size + object_bbox_size - intersect_size)
return overlap
def satisfy_sample_constraint(sampler, sample_bbox, bbox_labels):
if sampler.min_jaccard_overlap == 0 and sampler.max_jaccard_overlap == 0:
return True
for i in range(len(bbox_labels)):
object_bbox = bbox(bbox_labels[i][1], bbox_labels[i][2],
bbox_labels[i][3], bbox_labels[i][4])
overlap = jaccard_overlap(sample_bbox, object_bbox)
if sampler.min_jaccard_overlap != 0 and \
overlap < sampler.min_jaccard_overlap:
continue
if sampler.max_jaccard_overlap != 0 and \
overlap > sampler.max_jaccard_overlap:
continue
return True
return False
def generate_batch_samples(batch_sampler, bbox_labels, image_width,
image_height):
sampled_bbox = []
index = []
c = 0
for sampler in batch_sampler:
found = 0
for i in range(sampler.max_trial):
if found >= sampler.max_sample:
break
sample_bbox = generate_sample(sampler)
if satisfy_sample_constraint(sampler, sample_bbox, bbox_labels):
sampled_bbox.append(sample_bbox)
found = found + 1
index.append(c)
c = c + 1
return sampled_bbox
def clip_bbox(src_bbox):
src_bbox.xmin = max(min(src_bbox.xmin, 1.0), 0.0)
src_bbox.ymin = max(min(src_bbox.ymin, 1.0), 0.0)
src_bbox.xmax = max(min(src_bbox.xmax, 1.0), 0.0)
src_bbox.ymax = max(min(src_bbox.ymax, 1.0), 0.0)
return src_bbox
def meet_emit_constraint(src_bbox, sample_bbox):
center_x = (src_bbox.xmax + src_bbox.xmin) / 2
center_y = (src_bbox.ymax + src_bbox.ymin) / 2
if center_x >= sample_bbox.xmin and \
center_x <= sample_bbox.xmax and \
center_y >= sample_bbox.ymin and \
center_y <= sample_bbox.ymax:
return True
return False
def transform_labels(bbox_labels, sample_bbox):
proj_bbox = bbox(0, 0, 0, 0)
sample_labels = []
for i in range(len(bbox_labels)):
sample_label = []
object_bbox = bbox(bbox_labels[i][1], bbox_labels[i][2],
bbox_labels[i][3], bbox_labels[i][4])
if not meet_emit_constraint(object_bbox, sample_bbox):
continue
sample_width = sample_bbox.xmax - sample_bbox.xmin
sample_height = sample_bbox.ymax - sample_bbox.ymin
proj_bbox.xmin = (object_bbox.xmin - sample_bbox.xmin) / sample_width
proj_bbox.ymin = (object_bbox.ymin - sample_bbox.ymin) / sample_height
proj_bbox.xmax = (object_bbox.xmax - sample_bbox.xmin) / sample_width
proj_bbox.ymax = (object_bbox.ymax - sample_bbox.ymin) / sample_height
proj_bbox = clip_bbox(proj_bbox)
if bbox_area(proj_bbox) > 0:
sample_label.append(bbox_labels[i][0])
sample_label.append(float(proj_bbox.xmin))
sample_label.append(float(proj_bbox.ymin))
sample_label.append(float(proj_bbox.xmax))
sample_label.append(float(proj_bbox.ymax))
sample_label.append(bbox_labels[i][5])
sample_labels.append(sample_label)
return sample_labels
def crop_image(img, bbox_labels, sample_bbox, image_width, image_height):
sample_bbox = clip_bbox(sample_bbox)
xmin = int(sample_bbox.xmin * image_width)
xmax = int(sample_bbox.xmax * image_width)
ymin = int(sample_bbox.ymin * image_height)
ymax = int(sample_bbox.ymax * image_height)
sample_img = img[ymin:ymax, xmin:xmax]
sample_labels = transform_labels(bbox_labels, sample_bbox)
return sample_img, sample_labels
<html>
<head>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
jax: ["input/TeX", "output/HTML-CSS"],
tex2jax: {
inlineMath: [ ['$','$'] ],
displayMath: [ ['$$','$$'] ],
processEscapes: true
},
"HTML-CSS": { availableFonts: ["TeX"] }
});
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js" async></script>
<script type="text/javascript" src="../.tools/theme/marked.js">
</script>
<link href="http://cdn.bootcss.com/highlight.js/9.9.0/styles/darcula.min.css" rel="stylesheet">
<script src="http://cdn.bootcss.com/highlight.js/9.9.0/highlight.min.js"></script>
<link href="http://cdn.bootcss.com/bootstrap/4.0.0-alpha.6/css/bootstrap.min.css" rel="stylesheet">
<link href="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/css/perfect-scrollbar.min.css" rel="stylesheet">
<link href="../.tools/theme/github-markdown.css" rel='stylesheet'>
</head>
<style type="text/css" >
.markdown-body {
box-sizing: border-box;
min-width: 200px;
max-width: 980px;
margin: 0 auto;
padding: 45px;
}
</style>
<body>
<div id="context" class="container-fluid markdown-body">
</div>
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
</div>
<!-- You can change the lines below now. -->
<script type="text/javascript">
marked.setOptions({
renderer: new marked.Renderer(),
gfm: true,
breaks: false,
smartypants: true,
highlight: function(code, lang) {
code = code.replace(/&amp;/g, "&")
code = code.replace(/&gt;/g, ">")
code = code.replace(/&lt;/g, "<")
code = code.replace(/&nbsp;/g, " ")
return hljs.highlightAuto(code, [lang]).value;
}
});
document.getElementById("context").innerHTML = marked(
document.getElementById("markdown").innerHTML)
</script>
</body>
import paddle.v2 as paddle
import data_provider
import vgg_ssd_net
import os, sys
import numpy as np
import gzip
from PIL import Image
from config.pascal_voc_conf import cfg
def _infer(inferer, infer_data, threshold):
ret = []
infer_res = inferer.infer(input=infer_data)
keep_inds = np.where(infer_res[:, 2] >= threshold)[0]
for idx in keep_inds:
ret.append([
infer_res[idx][0], infer_res[idx][1] - 1, infer_res[idx][2],
infer_res[idx][3], infer_res[idx][4], infer_res[idx][5],
infer_res[idx][6]
])
return ret
def save_batch_res(ret_res, img_w, img_h, fname_list, fout):
for det_res in ret_res:
img_idx = int(det_res[0])
label = int(det_res[1])
conf_score = det_res[2]
xmin = det_res[3] * img_w[img_idx]
ymin = det_res[4] * img_h[img_idx]
xmax = det_res[5] * img_w[img_idx]
ymax = det_res[6] * img_h[img_idx]
fout.write(fname_list[img_idx] + '\t' + str(label) + '\t' + str(
conf_score) + '\t' + str(xmin) + ' ' + str(ymin) + ' ' + str(xmax) +
' ' + str(ymax))
fout.write('\n')
def infer(eval_file_list, save_path, data_args, batch_size, model_path,
threshold):
detect_out = vgg_ssd_net.net_conf(mode='infer')
assert os.path.isfile(model_path), 'Invalid model.'
parameters = paddle.parameters.Parameters.from_tar(gzip.open(model_path))
inferer = paddle.inference.Inference(
output_layer=detect_out, parameters=parameters)
reader = data_provider.infer(data_args, eval_file_list)
all_fname_list = [line.strip() for line in open(eval_file_list).readlines()]
test_data = []
fname_list = []
img_w = []
img_h = []
idx = 0
"""Do inference batch by batch,
coords of bbox will be scaled based on image size
"""
with open(save_path, 'w') as fout:
for img in reader():
test_data.append([img])
fname_list.append(all_fname_list[idx])
w, h = Image.open(os.path.join('./data', fname_list[-1])).size
img_w.append(w)
img_h.append(h)
if len(test_data) == batch_size:
ret_res = _infer(inferer, test_data, threshold)
save_batch_res(ret_res, img_w, img_h, fname_list, fout)
test_data = []
fname_list = []
img_w = []
img_h = []
idx += 1
if len(test_data) > 0:
ret_res = _infer(inferer, test_data, threshold)
save_batch_res(ret_res, img_w, img_h, fname_list, fout)
if __name__ == "__main__":
paddle.init(use_gpu=True, trainer_count=1)
data_args = data_provider.Settings(
data_dir='./data',
label_file='label_list',
resize_h=cfg.IMG_HEIGHT,
resize_w=cfg.IMG_WIDTH,
mean_value=[104, 117, 124])
infer(
eval_file_list='./data/infer.txt',
save_path='infer.res',
data_args=data_args,
batch_size=4,
model_path='models/pass-00000.tar.gz',
threshold=0.3)
import paddle.v2 as paddle
import data_provider
import vgg_ssd_net
import os, sys
import gzip
import tarfile
from config.pascal_voc_conf import cfg
def train(train_file_list, dev_file_list, data_args, init_model_path):
optimizer = paddle.optimizer.Momentum(
momentum=cfg.TRAIN.MOMENTUM,
learning_rate=cfg.TRAIN.LEARNING_RATE,
regularization=paddle.optimizer.L2Regularization(
rate=cfg.TRAIN.L2REGULARIZATION),
learning_rate_decay_a=cfg.TRAIN.LEARNING_RATE_DECAY_A,
learning_rate_decay_b=cfg.TRAIN.LEARNING_RATE_DECAY_B,
learning_rate_schedule=cfg.TRAIN.LEARNING_RATE_SCHEDULE)
cost, detect_out = vgg_ssd_net.net_conf('train')
parameters = paddle.parameters.create(cost)
if not (init_model_path is None):
assert os.path.isfile(init_model_path), 'Invalid model.'
parameters.init_from_tar(gzip.open(init_model_path))
trainer = paddle.trainer.SGD(
cost=cost,
parameters=parameters,
extra_layers=[detect_out],
update_equation=optimizer)
feeding = {'image': 0, 'bbox': 1}
train_reader = paddle.batch(
data_provider.train(data_args, train_file_list),
batch_size=cfg.TRAIN.BATCH_SIZE) # generate a batch image each time
dev_reader = paddle.batch(
data_provider.test(data_args, dev_file_list),
batch_size=cfg.TRAIN.BATCH_SIZE)
def event_handler(event):
if isinstance(event, paddle.event.EndIteration):
if event.batch_id % 1 == 0:
print "\nPass %d, Batch %d, TrainCost %f, Detection mAP=%f" % \
(event.pass_id,
event.batch_id,
event.cost,
event.metrics['detection_evaluator'])
else:
sys.stdout.write('.')
sys.stdout.flush()
if isinstance(event, paddle.event.EndPass):
with gzip.open('checkpoints/params_pass_%05d.tar.gz' % \
event.pass_id, 'w') as f:
parameters.to_tar(f)
result = trainer.test(reader=dev_reader, feeding=feeding)
print "\nTest with Pass %d, TestCost: %f, Detection mAP=%g" % \
(event.pass_id,
result.cost,
result.metrics['detection_evaluator'])
trainer.train(
reader=train_reader,
event_handler=event_handler,
num_passes=cfg.TRAIN.NUM_PASS,
feeding=feeding)
if __name__ == "__main__":
paddle.init(use_gpu=True, trainer_count=4)
data_args = data_provider.Settings(
data_dir='./data',
label_file='label_list',
resize_h=cfg.IMG_HEIGHT,
resize_w=cfg.IMG_WIDTH,
mean_value=[104, 117, 124])
train(
train_file_list='./data/trainval.txt',
dev_file_list='./data/test.txt',
data_args=data_args,
init_model_path='./vgg/vgg_model.tar.gz')
import paddle.v2 as paddle
from config.pascal_voc_conf import cfg
def net_conf(mode):
"""Network configuration. Total three modes included 'train' 'eval'
and 'infer'. Loss and mAP evaluation layer will return if using 'train'
and 'eval'. In 'infer' mode, only detection output layer will be returned.
"""
default_l2regularization = cfg.TRAIN.L2REGULARIZATION
default_bias_attr = paddle.attr.ParamAttr(l2_rate=0.0, learning_rate=2.0)
default_static_bias_attr = paddle.attr.ParamAttr(is_static=True)
def get_param_attr(local_lr, regularization):
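        # A local learning rate of zero freezes the parameter (is_static),
        # letting the framework skip its gradient computation entirely.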
is_static = False
if local_lr == 0.0:
is_static = True
return paddle.attr.ParamAttr(
learning_rate=local_lr, l2_rate=regularization, is_static=is_static)
def conv_group(stack_num, name_list, input, filter_size_list, num_channels,
num_filters_list, stride_list, padding_list,
common_bias_attr, common_param_attr, common_act):
conv = input
in_channels = num_channels
for i in xrange(stack_num):
conv = paddle.layer.img_conv(
name=name_list[i],
input=conv,
filter_size=filter_size_list[i],
num_channels=in_channels,
num_filters=num_filters_list[i],
stride=stride_list[i],
padding=padding_list[i],
bias_attr=common_bias_attr,
param_attr=common_param_attr,
act=common_act)
in_channels = num_filters_list[i]
return conv
def vgg_block(idx_str, input, num_channels, num_filters, pool_size,
pool_stride, pool_pad):
layer_name = "conv%s_" % idx_str
stack_num = 3
        name_list = [layer_name + str(i + 1) for i in xrange(stack_num)]
conv = conv_group(stack_num, name_list, input, [3] * stack_num,
num_channels, [num_filters] * stack_num,
[1] * stack_num, [1] * stack_num, default_bias_attr,
get_param_attr(1, default_l2regularization),
paddle.activation.Relu())
pool = paddle.layer.img_pool(
input=conv,
pool_size=pool_size,
num_channels=num_filters,
pool_type=paddle.pooling.CudnnMax(),
stride=pool_stride,
padding=pool_pad)
return conv, pool
def mbox_block(layer_idx, input, num_channels, filter_size, loc_filters,
conf_filters):
mbox_loc_name = layer_idx + "_mbox_loc"
mbox_loc = paddle.layer.img_conv(
name=mbox_loc_name,
input=input,
filter_size=filter_size,
num_channels=num_channels,
num_filters=loc_filters,
stride=1,
padding=1,
bias_attr=default_bias_attr,
param_attr=get_param_attr(1, default_l2regularization),
act=paddle.activation.Identity())
mbox_conf_name = layer_idx + "_mbox_conf"
mbox_conf = paddle.layer.img_conv(
name=mbox_conf_name,
input=input,
filter_size=filter_size,
num_channels=num_channels,
num_filters=conf_filters,
stride=1,
padding=1,
bias_attr=default_bias_attr,
param_attr=get_param_attr(1, default_l2regularization),
act=paddle.activation.Identity())
return mbox_loc, mbox_conf
def ssd_block(layer_idx, input, img_shape, num_channels, num_filters1,
num_filters2, aspect_ratio, variance, min_size, max_size):
layer_name = "conv" + layer_idx + "_"
stack_num = 2
conv1_name = layer_name + "1"
conv2_name = layer_name + "2"
conv2 = conv_group(stack_num, [conv1_name, conv2_name], input, [1, 3],
num_channels, [num_filters1, num_filters2], [1, 2],
[0, 1], default_bias_attr,
get_param_attr(1, default_l2regularization),
paddle.activation.Relu())
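        # Priors per feature-map cell: two boxes per aspect ratio (r and 1/r),
        # one min-size box, and one extra box per max size. Each prior needs
        # 4 location offsets and cfg.CLASS_NUM confidence scores.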
loc_filters = (len(aspect_ratio) * 2 + 1 + len(max_size)) * 4
conf_filters = (
len(aspect_ratio) * 2 + 1 + len(max_size)) * cfg.CLASS_NUM
mbox_loc, mbox_conf = mbox_block(conv2_name, conv2, num_filters2, 3,
loc_filters, conf_filters)
mbox_priorbox = paddle.layer.priorbox(
input=conv2,
image=img_shape,
min_size=min_size,
max_size=max_size,
aspect_ratio=aspect_ratio,
variance=variance)
return conv2, mbox_loc, mbox_conf, mbox_priorbox
img = paddle.layer.data(
name='image',
type=paddle.data_type.dense_vector(cfg.IMG_CHANNEL * cfg.IMG_HEIGHT *
cfg.IMG_WIDTH),
height=cfg.IMG_HEIGHT,
width=cfg.IMG_WIDTH)
stack_num = 2
conv1_2 = conv_group(stack_num, ['conv1_1', 'conv1_2'], img,
[3] * stack_num, 3, [64] * stack_num, [1] * stack_num,
[1] * stack_num, default_static_bias_attr,
get_param_attr(0, 0), paddle.activation.Relu())
pool1 = paddle.layer.img_pool(
name="pool1",
input=conv1_2,
pool_type=paddle.pooling.CudnnMax(),
pool_size=2,
num_channels=64,
stride=2)
stack_num = 2
conv2_2 = conv_group(stack_num, ['conv2_1', 'conv2_2'], pool1, [3] *
stack_num, 64, [128] * stack_num, [1] * stack_num,
[1] * stack_num, default_static_bias_attr,
get_param_attr(0, 0), paddle.activation.Relu())
pool2 = paddle.layer.img_pool(
name="pool2",
input=conv2_2,
pool_type=paddle.pooling.CudnnMax(),
pool_size=2,
num_channels=128,
stride=2)
conv3_3, pool3 = vgg_block("3", pool2, 128, 256, 2, 2, 0)
conv4_3, pool4 = vgg_block("4", pool3, 256, 512, 2, 2, 0)
conv4_3_mbox_priorbox = paddle.layer.priorbox(
input=conv4_3,
image=img,
min_size=cfg.NET.CONV4.PB.MIN_SIZE,
aspect_ratio=cfg.NET.CONV4.PB.ASPECT_RATIO,
variance=cfg.NET.CONV4.PB.VARIANCE)
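    # Per the SSD paper, conv4_3 features have a larger magnitude than deeper
    # layers, so they are L2-normalized across channels with a learnable
    # per-channel scale initialized to 20.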
conv4_3_norm = paddle.layer.cross_channel_norm(
name="conv4_3_norm",
input=conv4_3,
param_attr=paddle.attr.ParamAttr(
initial_mean=20, initial_std=0, is_static=False, learning_rate=1))
conv4_3_norm_mbox_loc, conv4_3_norm_mbox_conf = \
mbox_block("conv4_3_norm", conv4_3_norm, 512, 3, 12, 63)
conv5_3, pool5 = vgg_block("5", pool4, 512, 512, 3, 1, 1)
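    # Following SSD, VGG's fully connected fc6/fc7 are recast as 3x3 and 1x1
    # convolutions so their weights can be reused for dense prediction.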
stack_num = 2
fc7 = conv_group(stack_num, ['fc6', 'fc7'], pool5, [3, 1], 512, [1024] *
stack_num, [1] * stack_num, [1, 0], default_bias_attr,
get_param_attr(1, default_l2regularization),
paddle.activation.Relu())
fc7_mbox_loc, fc7_mbox_conf = mbox_block("fc7", fc7, 1024, 3, 24, 126)
fc7_mbox_priorbox = paddle.layer.priorbox(
input=fc7,
image=img,
min_size=cfg.NET.FC7.PB.MIN_SIZE,
max_size=cfg.NET.FC7.PB.MAX_SIZE,
aspect_ratio=cfg.NET.FC7.PB.ASPECT_RATIO,
variance=cfg.NET.FC7.PB.VARIANCE)
conv6_2, conv6_2_mbox_loc, conv6_2_mbox_conf, conv6_2_mbox_priorbox = \
ssd_block("6", fc7, img, 1024, 256, 512,
cfg.NET.CONV6.PB.ASPECT_RATIO,
cfg.NET.CONV6.PB.VARIANCE,
cfg.NET.CONV6.PB.MIN_SIZE,
cfg.NET.CONV6.PB.MAX_SIZE)
conv7_2, conv7_2_mbox_loc, conv7_2_mbox_conf, conv7_2_mbox_priorbox = \
ssd_block("7", conv6_2, img, 512, 128, 256,
cfg.NET.CONV7.PB.ASPECT_RATIO,
cfg.NET.CONV7.PB.VARIANCE,
cfg.NET.CONV7.PB.MIN_SIZE,
cfg.NET.CONV7.PB.MAX_SIZE)
conv8_2, conv8_2_mbox_loc, conv8_2_mbox_conf, conv8_2_mbox_priorbox = \
ssd_block("8", conv7_2, img, 256, 128, 256,
cfg.NET.CONV8.PB.ASPECT_RATIO,
cfg.NET.CONV8.PB.VARIANCE,
cfg.NET.CONV8.PB.MIN_SIZE,
cfg.NET.CONV8.PB.MAX_SIZE)
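    # The last and coarsest prediction source: a 3x3 average pool on conv8_2.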
pool6 = paddle.layer.img_pool(
name="pool6",
input=conv8_2,
pool_size=3,
num_channels=256,
stride=1,
pool_type=paddle.pooling.Avg())
pool6_mbox_loc, pool6_mbox_conf = mbox_block("pool6", pool6, 256, 3, 24,
126)
pool6_mbox_priorbox = paddle.layer.priorbox(
input=pool6,
image=img,
min_size=cfg.NET.POOL6.PB.MIN_SIZE,
max_size=cfg.NET.POOL6.PB.MAX_SIZE,
aspect_ratio=cfg.NET.POOL6.PB.ASPECT_RATIO,
variance=cfg.NET.POOL6.PB.VARIANCE)
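    # Concatenate priors from all six source layers; the loc/conf input lists
    # below must keep the same layer order so predictions stay aligned.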
mbox_priorbox = paddle.layer.concat(
name="mbox_priorbox",
input=[
conv4_3_mbox_priorbox, fc7_mbox_priorbox, conv6_2_mbox_priorbox,
conv7_2_mbox_priorbox, conv8_2_mbox_priorbox, pool6_mbox_priorbox
])
loc_loss_input = [
conv4_3_norm_mbox_loc, fc7_mbox_loc, conv6_2_mbox_loc, conv7_2_mbox_loc,
conv8_2_mbox_loc, pool6_mbox_loc
]
conf_loss_input = [
conv4_3_norm_mbox_conf, fc7_mbox_conf, conv6_2_mbox_conf,
conv7_2_mbox_conf, conv8_2_mbox_conf, pool6_mbox_conf
]
detection_out = paddle.layer.detection_output(
input_loc=loc_loss_input,
input_conf=conf_loss_input,
priorbox=mbox_priorbox,
confidence_threshold=cfg.NET.DETOUT.CONFIDENCE_THRESHOLD,
nms_threshold=cfg.NET.DETOUT.NMS_THRESHOLD,
num_classes=cfg.CLASS_NUM,
nms_top_k=cfg.NET.DETOUT.NMS_TOP_K,
keep_top_k=cfg.NET.DETOUT.KEEP_TOP_K,
background_id=cfg.BACKGROUND_ID,
name="detection_output")
if mode == 'train' or mode == 'eval':
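        # Each 6-d element of the 'bbox' sequence is expected to pack one
        # ground-truth box as (label, xmin, ymin, xmax, ymax, difficult),
        # matching the layout produced by data_provider.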
bbox = paddle.layer.data(
name='bbox', type=paddle.data_type.dense_vector_sequence(6))
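        # The multibox loss matches priors to ground-truth boxes and applies
        # hard negative mining, keeping neg_pos_ratio negatives per positive.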
loss = paddle.layer.multibox_loss(
input_loc=loc_loss_input,
input_conf=conf_loss_input,
priorbox=mbox_priorbox,
label=bbox,
num_classes=cfg.CLASS_NUM,
overlap_threshold=cfg.NET.MBLOSS.OVERLAP_THRESHOLD,
neg_pos_ratio=cfg.NET.MBLOSS.NEG_POS_RATIO,
neg_overlap=cfg.NET.MBLOSS.NEG_OVERLAP,
background_id=cfg.BACKGROUND_ID,
name="multibox_loss")
paddle.evaluator.detection_map(
input=detection_out,
label=bbox,
overlap_threshold=cfg.NET.DETMAP.OVERLAP_THRESHOLD,
background_id=cfg.BACKGROUND_ID,
evaluate_difficult=cfg.NET.DETMAP.EVAL_DIFFICULT,
ap_type=cfg.NET.DETMAP.AP_TYPE,
name="detection_evaluator")
return loss, detection_out
elif mode == 'infer':
return detection_out
import cv2
import os
data_dir = './data'
infer_file = './infer.res'
out_dir = './visual_res'
# cv2.imwrite fails silently when the target directory is missing.
if not os.path.exists(out_dir):
    os.makedirs(out_dir)
path_to_im = dict()
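# First pass: decode each image once so that every detection for the same
# file is drawn onto a single canvas.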
for line in open(infer_file):
img_path, _, _, _ = line.strip().split('\t')
if img_path not in path_to_im:
im = cv2.imread(os.path.join(data_dir, img_path))
path_to_im[img_path] = im
for line in open(infer_file):
img_path, label, conf, bbox = line.strip().split('\t')
xmin, ymin, xmax, ymax = map(float, bbox.split(' '))
xmin = int(round(xmin))
ymin = int(round(ymin))
xmax = int(round(xmax))
ymax = int(round(ymax))
    img = path_to_im[img_path]
    # The original color formula assumed an xmin normalized to [0, 1]; the
    # saved coordinates are in pixels, so normalize by the image width first.
    ratio = min(max(float(xmin) / img.shape[1], 0.0), 1.0)
    cv2.rectangle(img, (xmin, ymin), (xmax, ymax),
                  (0, (1 - ratio) * 255, ratio * 255), 2)
for img_path in path_to_im:
im = path_to_im[img_path]
out_path = os.path.join(out_dir, os.path.basename(img_path))
cv2.imwrite(out_path, im)
print 'Done.'