# DeepSpeech2 on PaddlePaddle: Design Doc
We are planning to build Deep Speech 2 (DS2) \[[1](#references)\], a powerful Automatic Speech Recognition (ASR) engine, on PaddlePaddle. For the first-stage plan, we have the following short-term goals:
- Release a basic distributed implementation of DS2 on PaddlePaddle.
- Contribute a chapter of Deep Speech to PaddlePaddle Book.
Intensive system optimization and a low-latency inference library (see \[[1](#references)\] for details) are not covered in this first-stage plan.
## Table of Contents
- [Tasks](#tasks)
- [Task Dependency](#task-dependency)
- [Design Details](#design-details)
- [Overview](#overview)
- [Row Convolution](#row-convolution)
- [Beam Search With CTC and LM](#beam-search-with-ctc-and-lm)
- [Future Work](#future-work)
- [References](#references)
## Tasks
We roughly break down the project into 14 tasks:
1. Develop an **audio data provider**:
  - JSON filelist generator.
  - Audio file format transformer.
  - Spectrogram feature extraction, power normalization, etc. (see the sketch after this task list).
  - Batch data reader with SortaGrad.
  - Data augmentation (optional).
  - Prepare (one or more) public English datasets & baselines.
2. Create a **simplified DS2 model configuration**:
- With only fixed-length (by padding) audio sequences (otherwise need *Task 3*).
- With only bidirectional-GRU (otherwise need *Task 4*).
- With only greedy decoder (otherwise need *Task 5, 6*).
3. Add support for **variable-shaped** dense-vector (image) batches of input data.
  - Update `DenseScanner` in `dataprovider_converter.py`, etc.
4. Develop a new **lookahead-row-convolution layer** (see \[[1](#references)\] for details):
- Lookahead convolution windows.
- Within-row convolution, without kernels shared across rows.
5. Build a KenLM **language model** (5-gram) for the beam search decoder:
  - Use the KenLM toolkit.
  - Prepare the corpus & train the model.
  - Create inference interfaces (for Task 6).
6. Develop a **beam search decoder** with CTC + LM + WORDCOUNT:
- Beam search with CTC.
- Beam search with external custom scorer (e.g. LM).
- Try to design a more general beam search interface.
7. Develop a **Word Error Rate evaluator**:
  - Update `ctc_error_evaluator` (CER) to support WER (a minimal WER reference implementation is sketched after this task list).
8. Prepare an internal dataset for Mandarin (optional):
  - Dataset, baseline, evaluation details.
  - Particular data preprocessing for Mandarin.
  - Might require cooperation with the Speech Department.
9. Create **standard DS2 model configuration**:
- With variable-length audio sequences (need *Task 3*).
- With unidirectional-GRU + row-convolution (need *Task 4*).
- With CTC-LM beam search decoder (need *Task 5, 6*).
10. Make it run reliably on **clusters**.
11. Experiments and **benchmarking** (for accuracy, not efficiency):
- With public English dataset.
- With internal (Baidu) Mandarin dataset (optional).
12. Time **profiling** and optimization.
13. Prepare **docs**.
14. Prepare PaddlePaddle **Book** chapter with a simplified version.
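Two of the pieces above are concrete enough to sketch. For the spectrogram feature in *Task 1*, a short-time FFT followed by log-power normalization is a minimal starting point; the window/stride values below are illustrative assumptions, not a specification:

```python
import numpy as np

def spectrogram(samples, sample_rate=16000, window_ms=20, stride_ms=10):
    """Log-power spectrogram of a 1-D waveform (illustrative parameters)."""
    window = int(sample_rate * window_ms / 1000)   # frame length in samples
    stride = int(sample_rate * stride_ms / 1000)   # hop length in samples
    n_frames = 1 + (len(samples) - window) // stride
    frames = np.stack([samples[i * stride:i * stride + window]
                       for i in range(n_frames)])
    frames = frames * np.hanning(window)           # taper each frame
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(power + 1e-14)                   # simple power normalization
```

A SortaGrad-style batch reader would additionally sort utterances by duration in the first epoch before batching, then shuffle in later epochs.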
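For *Task 7*, WER is the word-level edit (Levenshtein) distance between the hypothesis and the reference, divided by the number of reference words. A minimal, framework-independent sketch (the eventual evaluator would live in `ctc_error_evaluator`):

```python
def wer(reference, hypothesis):
    """Word Error Rate = word-level edit distance / #reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                               # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                               # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / max(len(ref), 1)
```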
## Task Dependency
Tasks parallelizable within phases:
Roadmap | Description | Parallelizable Tasks
----------- | :------------------------------------ | :--------------------
Phase I | Simplified model & components | *Task 1* ~ *Task 8*
Phase II | Standard model & benchmarking & profiling | *Task 9* ~ *Task 12*
Phase III | Documentation | *Task 13* ~ *Task 14*
An issue for each task will be created later. Contributions, discussions, and comments are all highly appreciated and welcome!
## Design Details
### Overview
Traditional **ASR** (Automatic Speech Recognition) pipelines require significant human effort to elaborately tune multiple hand-engineered components (e.g. audio feature design, acoustic model, pronunciation model and language model). **Deep Speech 2** (**DS2**) \[[1](#references)\], however, trains such ASR models in an end-to-end manner, replacing most intermediate modules with a single deep network architecture. By scaling up both the data and model sizes, DS2 achieves a very significant performance boost.
Please read the Deep Speech 2 papers \[[1](#references),[2](#references)\] for more background.
The classical DS2 network contains 15 layers (from bottom to top):
- **Two** data layers (audio spectrogram, transcription text)
- **Three** 2D convolution layers
- **Seven** uni-directional simple-RNN layers
- **One** lookahead row convolution layer
- **One** fully-connected layer
- **One** CTC-loss layer
<div align="center">
<img src="image/ds2_network.png" width=350><br/>
Figure 1. Architecture of the Deep Speech 2 Network.
</div>
We do not have to stick to this 2-3-7-1-1-1 depth \[[2](#references)\]. Similar networks with different depths might also work well. In \[[1](#references)\], the authors use a different depth (e.g. 2-2-3-1-1-1) for the final experiments.
Key points about the layers:
- **Data Layers**:
  - Frame sequences of audio **spectrogram** features (computed with FFT).
  - Token sequences of **transcription** text (labels).
  - These two types of sequences do not have the same lengths; thus a CTC-loss layer is required.
- **2D Convolution Layers**:
  - Not only temporal convolution, but also **frequency convolution**. Like a 2D image convolution, but with one variable dimension (the temporal dimension).
  - Striding is applied only in the first convolution layer.
  - No pooling in any convolution layer.
- **Uni-directional RNNs**:
  - Uni-directional + row convolution: for low-latency inference.
  - Bi-directional + no row convolution: if we do not care about inference latency.
- **Row convolution**:
  - Looks only a few steps ahead into the features, instead of over the whole sequence as bi-directional RNNs do.
  - Not necessary with bi-directional RNNs.
  - "**Row**" means convolutions are done within each frequency dimension (row), and no convolution kernels are shared across rows.
- **Batch Normalization Layers**:
  - Added to all the above layers (except for the data and loss layers).
  - Sequence-wise normalization for RNNs: BatchNorm is performed only on the input-state projection and not the state-state projection, for efficiency.
Required Components | PaddlePaddle Support | Need to Develop
:------------------------------------- | :-------------------------------------- | :-----------------------
Data Layer I (Spectrogram) | Not supported yet. | TBD (Task 3)
Data Layer II (Transcription) | `paddle.data_type.integer_value_sequence` | -
2D Convolution Layer | `paddle.layer.image_conv_layer` | -
DataType Converter (vec2seq) | `paddle.layer.block_expand` | -
Bi-/Uni-directional RNNs | `paddle.layer.recurrent_group` | -
Row Convolution Layer | Not supported yet. | TBD (Task 4)
CTC-loss Layer | `paddle.layer.warp_ctc` | -
Batch Normalization Layer | `paddle.layer.batch_norm` | -
CTC-Beam search | Not supported yet. | TBD (Task 6)
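To make the mapping concrete, the snippet below sketches how the already-supported components in the table might be wired together for the simplified configuration (*Task 2*). It uses only the layer names listed above; all sizes and keyword arguments are illustrative assumptions, not verified PaddlePaddle signatures:

```python
import paddle.v2 as paddle

# Illustrative sizes (assumptions, not fixed by this design doc).
vocab_size, freq_bins, max_frames, hidden_size = 29, 161, 2000, 256

# Data Layer II: transcription tokens (already supported).
text = paddle.layer.data(
    name='transcript',
    type=paddle.data_type.integer_value_sequence(vocab_size))

# Data Layer I: spectrogram. Variable-shaped input is TBD (Task 3), so a
# fixed-length (padded) dense vector stands in, per the simplified config.
audio = paddle.layer.data(
    name='spectrogram',
    type=paddle.data_type.dense_vector(freq_bins * max_frames))

# 2D convolution over (time, frequency); name as listed in the table above,
# though the shipped API may spell it differently.
conv = paddle.layer.image_conv_layer(
    input=audio, num_channels=1, num_filters=32, filter_size=11, stride=3)

# DataType converter (vec2seq): feature map back to a time sequence.
seq = paddle.layer.block_expand(
    input=conv, num_channels=32, block_x=1, block_y=54,
    stride_x=1, stride_y=54)

# Recurrent stack; the real config would build a (bi-)GRU inside the group.
def step(x):
    return paddle.layer.fc(input=x, size=hidden_size)
rnn = paddle.layer.recurrent_group(step=step, input=seq)

# CTC loss against the transcription (blank appended as the extra class).
cost = paddle.layer.warp_ctc(input=rnn, label=text, size=vocab_size + 1)
```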
### Row Convolution
TODO by Assignees
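While the detailed design is pending, the Overview above already fixes the intended forward behavior: each output step sees the current frame plus a small lookahead window, and every feature row has its own kernel. A minimal NumPy sketch under those assumptions (zero padding at the end of the sequence, lookahead context size `C`):

```python
import numpy as np

def row_convolution(h, W):
    """Lookahead row convolution (forward pass sketch).

    h: (T, D) activations -- T time steps, D feature rows.
    W: (D, C) kernels -- one length-C lookahead kernel per row; no kernels
       are shared across rows, matching the description above.
    """
    T, D = h.shape
    C = W.shape[1]
    h_pad = np.vstack([h, np.zeros((C - 1, D))])   # zero-pad future steps
    out = np.empty_like(h)
    for t in range(T):
        # (C, D) lookahead window, multiplied row-wise by the (C, D) kernels
        out[t] = (h_pad[t:t + C] * W.T).sum(axis=0)
    return out
```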
### Beam Search with CTC and LM
<div align="center">
<img src="image/beam_search.png" width=600><br/>
Figure 2. Algorithm for CTC Beam Search Decoder.
</div>
- The **beam search decoder** for the DS2 CTC-trained network follows the approach in \[[3](#references)\], as shown in Figure 2, with two important modifications to its ambiguous parts:
  - 1) In the iterative computation of probabilities, the assignment operation is changed to accumulation, because one prefix may come from different paths;
  - 2) The condition `if l^+ not in A_prev then` after the probability computation is dropped, because it is hard to understand and seems unnecessary.
- An **external scorer** is passed into the decoder to evaluate a candidate prefix during decoding, whenever a whitespace character is appended in English decoding or any character is appended in Mandarin decoding.
- Such an external scorer may consist of a language model, a word count, or any other custom scorers.
- The **language model** is built in Task 5; its parameters should be carefully tuned to achieve the minimum WER/CER (cf. Task 7).
- This decoder needs to be **highly efficient**, both for convenient parameter tuning and for real-world speech recognition (a reference sketch follows below).
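The following is a minimal Python sketch of the decoder described above: a CTC prefix beam search in the spirit of \[[3](#references)\], with probabilities accumulated rather than assigned (modification 1) and without the `A_prev` membership condition (modification 2). The scorer hook fires when a space is appended, as in English decoding; all names are illustrative:

```python
from collections import defaultdict

def ctc_beam_search(probs, alphabet, beam_size=20, blank=0, scorer=None):
    """probs: T x V per-frame posteriors; alphabet[i] is the char of class i.
    scorer: optional callable(prefix_string) -> multiplicative score
    (e.g. LM + word count), applied whenever a space is appended."""
    # Each beam prefix keeps (p_blank, p_non_blank): the probability of all
    # paths ending in blank / ending in the prefix's last character.
    beams = {(): (1.0, 0.0)}
    for frame in probs:
        next_beams = defaultdict(lambda: [0.0, 0.0])
        for prefix, (p_b, p_nb) in beams.items():
            for c, p in enumerate(frame):
                if c == blank:
                    # blank keeps the prefix; accumulate (modification 1)
                    next_beams[prefix][0] += p * (p_b + p_nb)
                    continue
                new_prefix = prefix + (c,)
                if prefix and c == prefix[-1]:
                    # a repeated char collapses unless separated by a blank
                    next_beams[prefix][1] += p * p_nb
                    next_beams[new_prefix][1] += p * p_b
                else:
                    score = 1.0
                    if scorer is not None and alphabet[c] == ' ':
                        # external scorer on the completed words (English)
                        score = scorer(''.join(alphabet[i] for i in prefix))
                    next_beams[new_prefix][1] += score * p * (p_b + p_nb)
        # prune to the `beam_size` most probable prefixes
        top = sorted(next_beams.items(), key=lambda kv: sum(kv[1]),
                     reverse=True)[:beam_size]
        beams = {prefix: tuple(pair) for prefix, pair in top}
    best = max(beams.items(), key=lambda kv: sum(kv[1]))[0]
    return ''.join(alphabet[i] for i in best)
```

A hypothetical KenLM-backed scorer (assuming the `kenlm` Python bindings and the 5-gram model from Task 5, with the LM and word-count weights left as tunable parameters, cf. Task 7) could then be plugged in:

```python
# import kenlm                                   # hypothetical dependency
# lm = kenlm.Model('5gram.arpa')                 # LM trained in Task 5
# alpha, beta = 1.0, 1.0                         # weights to tune (cf. Task 7)
# scorer = lambda prefix: ((10 ** lm.score(prefix, bos=True, eos=False)) ** alpha
#                          * (len(prefix.split()) + 1) ** beta)
```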
## Future Work
- Efficiency Improvement
- Accuracy Improvement
- Low-latency Inference Library
- Large-scale benchmarking
## References
1. Dario Amodei, et al., [Deep Speech 2: End-to-End Speech Recognition in English and Mandarin](http://proceedings.mlr.press/v48/amodei16.pdf). ICML 2016.
2. Dario Amodei, et al., [Deep Speech 2: End-to-End Speech Recognition in English and Mandarin](https://arxiv.org/abs/1512.02595). arXiv:1512.02595.
3. Awni Y. Hannun, et al., [First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs](https://arxiv.org/abs/1408.2873). arXiv:1408.2873.
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>DeepSpeech2 on PaddlePaddle: Design Doc &mdash; PaddlePaddle documentation</title>
<link rel="stylesheet" href="../../_static/css/theme.css" type="text/css" />
<link rel="index" title="Index"
href="../../genindex.html"/>
<link rel="search" title="Search" href="../../search.html"/>
<link rel="top" title="PaddlePaddle documentation" href="../../index.html"/>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/css/perfect-scrollbar.min.css" type="text/css" />
<link rel="stylesheet" href="../../_static/css/override.css" type="text/css" />
<script>
var _hmt = _hmt || [];
(function() {
var hm = document.createElement("script");
hm.src = "//hm.baidu.com/hm.js?b9a314ab40d04d805655aab1deee08ba";
var s = document.getElementsByTagName("script")[0];
s.parentNode.insertBefore(hm, s);
})();
</script>
<script src="../../_static/js/modernizr.min.js"></script>
</head>
<body class="wy-body-for-nav" role="document">
<header class="site-header">
<div class="site-logo">
<a href="/"><img src="../../_static/images/PP_w.png"></a>
</div>
<div class="site-nav-links">
<div class="site-menu">
<a class="fork-on-github" href="https://github.com/PaddlePaddle/Paddle" target="_blank"><i class="fa fa-github"></i>Fork me on Github</a>
<div class="language-switcher dropdown">
<a type="button" data-toggle="dropdown">
<span>English</span>
<i class="fa fa-angle-up"></i>
<i class="fa fa-angle-down"></i>
</a>
<ul class="dropdown-menu">
<li><a href="/doc_cn">中文</a></li>
<li><a href="/doc">English</a></li>
</ul>
</div>
<ul class="site-page-links">
<li><a href="/">Home</a></li>
</ul>
</div>
<div class="doc-module">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../getstarted/index_en.html">GET STARTED</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../howto/index_en.html">HOW TO</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../api/index_en.html">API</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../mobile/index_en.html">MOBILE</a></li>
</ul>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="../../search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
</div>
</header>
<div class="main-content-wrap">
<nav class="doc-menu-vertical" role="navigation">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../getstarted/index_en.html">GET STARTED</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../getstarted/build_and_install/index_en.html">Install and Build</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../getstarted/build_and_install/pip_install_en.html">Install Using pip</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../getstarted/build_and_install/docker_install_en.html">Run in Docker Containers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/dev/build_en.html">Build using Docker</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../getstarted/build_and_install/build_from_source_en.html">Build from Sources</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../howto/index_en.html">HOW TO</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../howto/usage/cmd_parameter/index_en.html">Set Command-line Parameters</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cmd_parameter/use_case_en.html">Use Case</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cmd_parameter/arguments_en.html">Argument Outline</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cmd_parameter/detail_introduction_en.html">Detail Description</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/usage/cluster/cluster_train_en.html">Distributed Training</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cluster/fabric_en.html">fabric</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cluster/openmpi_en.html">openmpi</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cluster/k8s_en.html">kubernetes</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cluster/k8s_aws_en.html">kubernetes on AWS</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/dev/new_layer_en.html">Write New Layers</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/dev/contribute_to_paddle_en.html">Contribute Code</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/dev/write_docs_en.html">Contribute Documentation</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/deep_model/rnn/index_en.html">RNN Models</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../howto/deep_model/rnn/rnn_config_en.html">RNN Configuration</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/optimization/gpu_profiling_en.html">Tune GPU Performance</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../api/index_en.html">API</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/model_configs.html">Model Configuration</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/activation.html">Activation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/layer.html">Layers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/evaluators.html">Evaluators</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/optimizer.html">Optimizer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/pooling.html">Pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/networks.html">Networks</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/attr.html">Parameter Attribute</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/data.html">Data Reader Interface and DataSets</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/data/data_reader.html">Data Reader Interface</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/data/image.html">Image Interface</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/data/dataset.html">Dataset</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/run_logic.html">Training and Inference</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/fluid.html">Fluid</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/layers.html">Layers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/data_feeder.html">DataFeeder</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/executor.html">Executor</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/initializer.html">Initializer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/evaluator.html">Evaluator</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/nets.html">Nets</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/optimizer.html">Optimizer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/param_attr.html">ParamAttr</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/profiler.html">Profiler</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/regularizer.html">Regularizer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/io.html">IO</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../mobile/index_en.html">MOBILE</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../mobile/cross_compiling_for_android_en.html">Build PaddlePaddle for Android</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../mobile/cross_compiling_for_ios_en.html">Build PaddlePaddle for iOS</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../mobile/cross_compiling_for_raspberry_en.html">Build PaddlePaddle for Raspberry Pi</a></li>
</ul>
</li>
</ul>
</nav>
<section class="doc-content-wrap">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li>DeepSpeech2 on PaddlePaddle: Design Doc</li>
</ul>
</div>
<div class="wy-nav-content" id="doc-content">
<div class="rst-content">
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="section" id="deepspeech2-on-paddlepaddle-design-doc">
<span id="deepspeech2-on-paddlepaddle-design-doc"></span><h1>DeepSpeech2 on PaddlePaddle: Design Doc<a class="headerlink" href="#deepspeech2-on-paddlepaddle-design-doc" title="Permalink to this headline"></a></h1>
<p>We are planning to build Deep Speech 2 (DS2) [<a class="reference external" href="#references">1</a>], a powerful Automatic Speech Recognition (ASR) engine, on PaddlePaddle. For the first-stage plan, we have the following short-term goals:</p>
<ul class="simple">
<li>Release a basic distributed implementation of DS2 on PaddlePaddle.</li>
<li>Contribute a chapter of Deep Speech to PaddlePaddle Book.</li>
</ul>
<p>Intensive system optimization and low-latency inference library (details in [<a class="reference external" href="#references">1</a>]) are not yet covered in this first-stage plan.</p>
<div class="section" id="table-of-contents">
<span id="table-of-contents"></span><h2>Table of Contents<a class="headerlink" href="#table-of-contents" title="Permalink to this headline"></a></h2>
<ul class="simple">
<li><a class="reference external" href="#tasks">Tasks</a></li>
<li><a class="reference external" href="#task-dependency">Task Dependency</a></li>
<li><a class="reference external" href="#design-details">Design Details</a><ul>
<li><a class="reference external" href="#overview">Overview</a></li>
<li><a class="reference external" href="#row-convolution">Row Convolution</a></li>
<li><a class="reference external" href="#beam-search-with-ctc-and-lm">Beam Search With CTC and LM</a></li>
</ul>
</li>
<li><a class="reference external" href="#future-work">Future Work</a></li>
<li><a class="reference external" href="#references">References</a></li>
</ul>
</div>
<div class="section" id="tasks">
<span id="tasks"></span><h2>Tasks<a class="headerlink" href="#tasks" title="Permalink to this headline"></a></h2>
<p>We roughly break down the project into 14 tasks:</p>
<ol class="simple">
<li>Develop an <strong>audio data provider</strong>:<ul>
<li>Json filelist generator.</li>
<li>Audio file format transformer.</li>
<li>Spectrogram feature extraction, power normalization etc.</li>
<li>Batch data reader with SortaGrad.</li>
<li>Data augmentation (optional).</li>
<li>Prepare (one or more) public English data sets &amp; baseline.</li>
</ul>
</li>
<li>Create a <strong>simplified DS2 model configuration</strong>:<ul>
<li>With only fixed-length (by padding) audio sequences (otherwise need <em>Task 3</em>).</li>
<li>With only bidirectional-GRU (otherwise need <em>Task 4</em>).</li>
<li>With only greedy decoder (otherwise need <em>Task 5, 6</em>).</li>
</ul>
</li>
<li>Develop to support <strong>variable-shaped</strong> dense-vector (image) batches of input data.<ul>
<li>Update <code class="docutils literal"><span class="pre">DenseScanner</span></code> in <code class="docutils literal"><span class="pre">dataprovider_converter.py</span></code>, etc.</li>
</ul>
</li>
<li>Develop a new <strong>lookahead-row-convolution layer</strong> (See [<a class="reference external" href="#references">1</a>] for details):<ul>
<li>Lookahead convolution windows.</li>
<li>Within-row convolution, without kernels shared across rows.</li>
</ul>
</li>
<li>Build KenLM <strong>language model</strong> (5-gram) for beam search decoder:<ul>
<li>Use KenLM toolkit.</li>
<li>Prepare the corpus &amp; train the model.</li>
<li>Create infererence interfaces (for Task 6).</li>
</ul>
</li>
<li>Develop a <strong>beam search decoder</strong> with CTC + LM + WORDCOUNT:<ul>
<li>Beam search with CTC.</li>
<li>Beam search with external custom scorer (e.g. LM).</li>
<li>Try to design a more general beam search interface.</li>
</ul>
</li>
<li>Develop a <strong>Word Error Rate evaluator</strong>:<ul>
<li>update <code class="docutils literal"><span class="pre">ctc_error_evaluator</span></code>(CER) to support WER.</li>
</ul>
</li>
<li>Prepare internal dataset for Mandarin (optional):<ul>
<li>Dataset, baseline, evaluation details.</li>
<li>Particular data preprocessing for Mandarin.</li>
<li>Might need cooperating with the Speech Department.</li>
</ul>
</li>
<li>Create <strong>standard DS2 model configuration</strong>:<ul>
<li>With variable-length audio sequences (need <em>Task 3</em>).</li>
<li>With unidirectional-GRU + row-convolution (need <em>Task 4</em>).</li>
<li>With CTC-LM beam search decoder (need <em>Task 5, 6</em>).</li>
</ul>
</li>
<li>Make it run perfectly on <strong>clusters</strong>.</li>
<li>Experiments and <strong>benchmarking</strong> (for accuracy, not efficiency):<ul>
<li>With public English dataset.</li>
<li>With internal (Baidu) Mandarin dataset (optional).</li>
</ul>
</li>
<li>Time <strong>profiling</strong> and optimization.</li>
<li>Prepare <strong>docs</strong>.</li>
<li>Prepare PaddlePaddle <strong>Book</strong> chapter with a simplified version.</li>
</ol>
</div>
<div class="section" id="task-dependency">
<span id="task-dependency"></span><h2>Task Dependency<a class="headerlink" href="#task-dependency" title="Permalink to this headline"></a></h2>
<p>Tasks parallelizable within phases:</p>
<p>Roadmap | Description | Parallelizable Tasks
&#8212;&#8212;&#8212;&#8211; | :&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212; | :&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;
Phase I | Simplified model &amp; components | <em>Task 1</em> ~ <em>Task 8</em>
Phase II | Standard model &amp; benchmarking &amp; profiling | <em>Task 9</em> ~ <em>Task 12</em>
Phase III | Documentations | <em>Task13</em> ~ <em>Task14</em></p>
<p>Issue for each task will be created later. Contributions, discussions and comments are all highly appreciated and welcomed!</p>
</div>
<div class="section" id="design-details">
<span id="design-details"></span><h2>Design Details<a class="headerlink" href="#design-details" title="Permalink to this headline"></a></h2>
<div class="section" id="overview">
<span id="overview"></span><h3>Overview<a class="headerlink" href="#overview" title="Permalink to this headline"></a></h3>
<p>Traditional <strong>ASR</strong> (Automatic Speech Recognition) pipelines require great human efforts devoted to elaborately tuning multiple hand-engineered components (e.g. audio feature design, accoustic model, pronuncation model and language model etc.). <strong>Deep Speech 2</strong> (<strong>DS2</strong>) [<a class="reference external" href="#references">1</a>], however, trains such ASR models in an end-to-end manner, replacing most intermediate modules with only a single deep network architecture. With scaling up both the data and model sizes, DS2 achieves a very significant performance boost.</p>
<p>Please read Deep Speech 2 [<a class="reference external" href="#references">1</a>,<a class="reference external" href="#references">2</a>] paper for more background knowledge.</p>
<p>The classical DS2 network contains 15 layers (from bottom to top):</p>
<ul class="simple">
<li><strong>Two</strong> data layers (audio spectrogram, transcription text)</li>
<li><strong>Three</strong> 2D convolution layers</li>
<li><strong>Seven</strong> uni-directional simple-RNN layers</li>
<li><strong>One</strong> lookahead row convolution layers</li>
<li><strong>One</strong> fully-connected layers</li>
<li><strong>One</strong> CTC-loss layer</li>
</ul>
<div align="center">
<img src="image/ds2_network.png" width=350><br/>
Figure 1. Archetecture of Deep Speech 2 Network.
</div><p>We don&#8217;t have to persist on this 2-3-7-1-1-1 depth [<a class="reference external" href="#references">2</a>]. Similar networks with different depths might also work well. As in [<a class="reference external" href="#references">1</a>], authors use a different depth (e.g. 2-2-3-1-1-1) for final experiments.</p>
<p>Key ingredients about the layers:</p>
<ul class="simple">
<li><strong>Data Layers</strong>:<ul>
<li>Frame sequences data of audio <strong>spectrogram</strong> (with FFT).</li>
<li>Token sequences data of <strong>transcription</strong> text (labels).</li>
<li>These two type of sequences do not have the same lengthes, thus a CTC-loss layer is required.</li>
</ul>
</li>
<li><strong>2D Convolution Layers</strong>:<ul>
<li>Not only temporal convolution, but also <strong>frequency convolution</strong>. Like a 2D image convolution, but with a variable dimension (i.e. temporal dimension).</li>
<li>With striding for only the first convlution layer.</li>
<li>No pooling for all convolution layers.</li>
</ul>
</li>
<li><strong>Uni-directional RNNs</strong><ul>
<li>Uni-directional + row convolution: for low-latency inference.</li>
<li>Bi-direcitional + without row convolution: if we don&#8217;t care about the inference latency.</li>
</ul>
</li>
<li><strong>Row convolution</strong>:<ul>
<li>For looking only a few steps ahead into the feature, instead of looking into a whole sequence in bi-directional RNNs.</li>
<li>Not nessesary if with bi-direcitional RNNs.</li>
<li>&#8220;<strong>Row</strong>&#8221; means convolutions are done within each frequency dimension (row), and no convolution kernels shared across.</li>
</ul>
</li>
<li><strong>Batch Normalization Layers</strong>:<ul>
<li>Added to all above layers (except for data and loss layer).</li>
<li>Sequence-wise normalization for RNNs: BatchNorm only performed on input-state projection and not state-state projection, for efficiency consideration.</li>
</ul>
</li>
</ul>
<p>Required Components | PaddlePaddle Support | Need to Develop
:&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;- | :&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211; | :&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;
Data Layer I (Spectrogram) | Not supported yet. | TBD (Task 3)
Data Layer II (Transcription) | <code class="docutils literal"><span class="pre">paddle.data_type.integer_value_sequence</span></code> | -
2D Convolution Layer | <code class="docutils literal"><span class="pre">paddle.layer.image_conv_layer</span></code> | -
DataType Converter (vec2seq) | <code class="docutils literal"><span class="pre">paddle.layer.block_expand</span></code> | -
Bi-/Uni-directional RNNs | <code class="docutils literal"><span class="pre">paddle.layer.recurrent_group</span></code> | -
Row Convolution Layer | Not supported yet. | TBD (Task 4)
CTC-loss Layer | <code class="docutils literal"><span class="pre">paddle.layer.warp_ctc</span></code> | -
Batch Normalization Layer | <code class="docutils literal"><span class="pre">paddle.layer.batch_norm</span></code> | -
CTC-Beam search | Not supported yet. | TBD (Task 6)</p>
</div>
<div class="section" id="row-convolution">
<span id="row-convolution"></span><h3>Row Convolution<a class="headerlink" href="#row-convolution" title="Permalink to this headline"></a></h3>
<p>TODO by Assignees</p>
</div>
<div class="section" id="beam-search-with-ctc-and-lm">
<span id="beam-search-with-ctc-and-lm"></span><h3>Beam Search with CTC and LM<a class="headerlink" href="#beam-search-with-ctc-and-lm" title="Permalink to this headline"></a></h3>
<div align="center">
<img src="image/beam_search.png" width=600><br/>
Figure 2. Algorithm for CTC Beam Search Decoder.
</div><ul class="simple">
<li>The <strong>Beam Search Decoder</strong> for DS2 CTC-trained network follows the similar approach in [<a class="reference external" href="#references">3</a>] as shown in Figure 2, with two important modifications for the ambiguous parts:<ul>
<li><ol class="first">
<li>in the iterative computation of probabilities, the assignment operation is changed to accumulation for one prefix may comes from different paths;</li>
</ol>
</li>
<li><ol class="first">
<li>the if condition <code class="docutils literal"><span class="pre">if</span> <span class="pre">l^+</span> <span class="pre">not</span> <span class="pre">in</span> <span class="pre">A_prev</span> <span class="pre">then</span></code> after probabilities&#8217; computation is deprecated for it is hard to understand and seems unnecessary.</li>
</ol>
</li>
</ul>
</li>
<li>An <strong>external scorer</strong> would be passed into the decoder to evaluate a candidate prefix during decoding whenever a white space appended in English decoding and any character appended in Mandarin decoding.</li>
<li>Such external scorer consists of language model, word count or any other custom scorers.</li>
<li>The <strong>language model</strong> is built from Task 5, with parameters should be carefully tuned to achieve minimum WER/CER (c.f. Task 7)</li>
<li>This decoder needs to perform with <strong>high efficiency</strong> for the convenience of parameters tuning and speech recognition in reality.</li>
</ul>
</div>
</div>
<div class="section" id="future-work">
<span id="future-work"></span><h2>Future Work<a class="headerlink" href="#future-work" title="Permalink to this headline"></a></h2>
<ul class="simple">
<li>Efficiency Improvement</li>
<li>Accuracy Improvement</li>
<li>Low-latency Inference Library</li>
<li>Large-scale benchmarking</li>
</ul>
</div>
<div class="section" id="references">
<span id="references"></span><h2>References<a class="headerlink" href="#references" title="Permalink to this headline"></a></h2>
<ol class="simple">
<li>Dario Amodei, etc., <a class="reference external" href="http://proceedings.mlr.press/v48/amodei16.pdf">Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin</a>. ICML 2016.</li>
<li>Dario Amodei, etc., <a class="reference external" href="https://arxiv.org/abs/1512.02595">Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin</a>. arXiv:1512.02595.</li>
<li>Awni Y. Hannun, etc. <a class="reference external" href="https://arxiv.org/abs/1408.2873">First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs</a>. arXiv:1408.2873</li>
</ol>
</div>
</div>
</div>
</div>
<footer>
<hr/>
<div role="contentinfo">
<p>
&copy; Copyright 2016, PaddlePaddle developers.
</p>
</div>
Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a <a href="https://github.com/snide/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
</div>
</div>
</section>
</div>
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT:'../../',
VERSION:'',
COLLAPSE_INDEX:false,
FILE_SUFFIX:'.html',
HAS_SOURCE: true,
SOURCELINK_SUFFIX: ".txt",
};
</script>
<script type="text/javascript" src="../../_static/jquery.js"></script>
<script type="text/javascript" src="../../_static/underscore.js"></script>
<script type="text/javascript" src="../../_static/doctools.js"></script>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/javascript" src="../../_static/js/theme.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/js/perfect-scrollbar.jquery.min.js"></script>
<script src="../../_static/js/paddle_doc_init.js"></script>
</body>
</html>
\ No newline at end of file
因为 它太大了无法显示 source diff 。你可以改为 查看blob
# DeepSpeech2 on PaddlePaddle: Design Doc
We are planning to build Deep Speech 2 (DS2) \[[1](#references)\], a powerful Automatic Speech Recognition (ASR) engine, on PaddlePaddle. For the first-stage plan, we have the following short-term goals:
- Release a basic distributed implementation of DS2 on PaddlePaddle.
- Contribute a chapter of Deep Speech to PaddlePaddle Book.
Intensive system optimization and low-latency inference library (details in \[[1](#references)\]) are not yet covered in this first-stage plan.
## Table of Contents
- [Tasks](#tasks)
- [Task Dependency](#task-dependency)
- [Design Details](#design-details)
- [Overview](#overview)
- [Row Convolution](#row-convolution)
- [Beam Search With CTC and LM](#beam-search-with-ctc-and-lm)
- [Future Work](#future-work)
- [References](#references)
## Tasks
We roughly break down the project into 14 tasks:
1. Develop an **audio data provider**:
- Json filelist generator.
- Audio file format transformer.
- Spectrogram feature extraction, power normalization etc.
- Batch data reader with SortaGrad.
- Data augmentation (optional).
- Prepare (one or more) public English data sets & baseline.
2. Create a **simplified DS2 model configuration**:
- With only fixed-length (by padding) audio sequences (otherwise need *Task 3*).
- With only bidirectional-GRU (otherwise need *Task 4*).
- With only greedy decoder (otherwise need *Task 5, 6*).
3. Develop to support **variable-shaped** dense-vector (image) batches of input data.
- Update `DenseScanner` in `dataprovider_converter.py`, etc.
4. Develop a new **lookahead-row-convolution layer** (See \[[1](#references)\] for details):
- Lookahead convolution windows.
- Within-row convolution, without kernels shared across rows.
5. Build KenLM **language model** (5-gram) for beam search decoder:
- Use KenLM toolkit.
- Prepare the corpus & train the model.
- Create infererence interfaces (for Task 6).
6. Develop a **beam search decoder** with CTC + LM + WORDCOUNT:
- Beam search with CTC.
- Beam search with external custom scorer (e.g. LM).
- Try to design a more general beam search interface.
7. Develop a **Word Error Rate evaluator**:
- update `ctc_error_evaluator`(CER) to support WER.
8. Prepare internal dataset for Mandarin (optional):
- Dataset, baseline, evaluation details.
- Particular data preprocessing for Mandarin.
- Might need cooperating with the Speech Department.
9. Create **standard DS2 model configuration**:
- With variable-length audio sequences (need *Task 3*).
- With unidirectional-GRU + row-convolution (need *Task 4*).
- With CTC-LM beam search decoder (need *Task 5, 6*).
10. Make it run perfectly on **clusters**.
11. Experiments and **benchmarking** (for accuracy, not efficiency):
- With public English dataset.
- With internal (Baidu) Mandarin dataset (optional).
12. Time **profiling** and optimization.
13. Prepare **docs**.
14. Prepare PaddlePaddle **Book** chapter with a simplified version.
## Task Dependency
Tasks parallelizable within phases:
Roadmap | Description | Parallelizable Tasks
----------- | :------------------------------------ | :--------------------
Phase I | Simplified model & components | *Task 1* ~ *Task 8*
Phase II | Standard model & benchmarking & profiling | *Task 9* ~ *Task 12*
Phase III | Documentations | *Task13* ~ *Task14*
Issue for each task will be created later. Contributions, discussions and comments are all highly appreciated and welcomed!
## Design Details
### Overview
Traditional **ASR** (Automatic Speech Recognition) pipelines require great human efforts devoted to elaborately tuning multiple hand-engineered components (e.g. audio feature design, accoustic model, pronuncation model and language model etc.). **Deep Speech 2** (**DS2**) \[[1](#references)\], however, trains such ASR models in an end-to-end manner, replacing most intermediate modules with only a single deep network architecture. With scaling up both the data and model sizes, DS2 achieves a very significant performance boost.
Please read Deep Speech 2 \[[1](#references),[2](#references)\] paper for more background knowledge.
The classical DS2 network contains 15 layers (from bottom to top):
- **Two** data layers (audio spectrogram, transcription text)
- **Three** 2D convolution layers
- **Seven** uni-directional simple-RNN layers
- **One** lookahead row convolution layers
- **One** fully-connected layers
- **One** CTC-loss layer
<div align="center">
<img src="image/ds2_network.png" width=350><br/>
Figure 1. Archetecture of Deep Speech 2 Network.
</div>
We don't have to persist on this 2-3-7-1-1-1 depth \[[2](#references)\]. Similar networks with different depths might also work well. As in \[[1](#references)\], authors use a different depth (e.g. 2-2-3-1-1-1) for final experiments.
Key ingredients about the layers:
- **Data Layers**:
- Frame sequences data of audio **spectrogram** (with FFT).
- Token sequences data of **transcription** text (labels).
- These two type of sequences do not have the same lengthes, thus a CTC-loss layer is required.
- **2D Convolution Layers**:
- Not only temporal convolution, but also **frequency convolution**. Like a 2D image convolution, but with a variable dimension (i.e. temporal dimension).
- With striding for only the first convlution layer.
- No pooling for all convolution layers.
- **Uni-directional RNNs**
- Uni-directional + row convolution: for low-latency inference.
- Bi-direcitional + without row convolution: if we don't care about the inference latency.
- **Row convolution**:
- For looking only a few steps ahead into the feature, instead of looking into a whole sequence in bi-directional RNNs.
- Not nessesary if with bi-direcitional RNNs.
- "**Row**" means convolutions are done within each frequency dimension (row), and no convolution kernels shared across.
- **Batch Normalization Layers**:
- Added to all above layers (except for data and loss layer).
- Sequence-wise normalization for RNNs: BatchNorm only performed on input-state projection and not state-state projection, for efficiency consideration.
Required Components | PaddlePaddle Support | Need to Develop
:------------------------------------- | :-------------------------------------- | :-----------------------
Data Layer I (Spectrogram) | Not supported yet. | TBD (Task 3)
Data Layer II (Transcription) | `paddle.data_type.integer_value_sequence` | -
2D Convolution Layer | `paddle.layer.image_conv_layer` | -
DataType Converter (vec2seq) | `paddle.layer.block_expand` | -
Bi-/Uni-directional RNNs | `paddle.layer.recurrent_group` | -
Row Convolution Layer | Not supported yet. | TBD (Task 4)
CTC-loss Layer | `paddle.layer.warp_ctc` | -
Batch Normalization Layer | `paddle.layer.batch_norm` | -
CTC-Beam search | Not supported yet. | TBD (Task 6)
### Row Convolution
TODO by Assignees
### Beam Search with CTC and LM
<div align="center">
<img src="image/beam_search.png" width=600><br/>
Figure 2. Algorithm for CTC Beam Search Decoder.
</div>
- The **Beam Search Decoder** for DS2 CTC-trained network follows the similar approach in \[[3](#references)\] as shown in Figure 2, with two important modifications for the ambiguous parts:
- 1) in the iterative computation of probabilities, the assignment operation is changed to accumulation for one prefix may comes from different paths;
- 2) the if condition ```if l^+ not in A_prev then``` after probabilities' computation is deprecated for it is hard to understand and seems unnecessary.
- An **external scorer** would be passed into the decoder to evaluate a candidate prefix during decoding whenever a white space appended in English decoding and any character appended in Mandarin decoding.
- Such external scorer consists of language model, word count or any other custom scorers.
- The **language model** is built from Task 5, with parameters should be carefully tuned to achieve minimum WER/CER (c.f. Task 7)
- This decoder needs to perform with **high efficiency** for the convenience of parameters tuning and speech recognition in reality.
## Future Work
- Efficiency Improvement
- Accuracy Improvement
- Low-latency Inference Library
- Large-scale benchmarking
## References
1. Dario Amodei, etc., [Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin](http://proceedings.mlr.press/v48/amodei16.pdf). ICML 2016.
2. Dario Amodei, etc., [Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin](https://arxiv.org/abs/1512.02595). arXiv:1512.02595.
3. Awni Y. Hannun, etc. [First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs](https://arxiv.org/abs/1408.2873). arXiv:1408.2873
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>DeepSpeech2 on PaddlePaddle: Design Doc &mdash; PaddlePaddle 文档</title>
<link rel="stylesheet" href="../../_static/css/theme.css" type="text/css" />
<link rel="index" title="索引"
href="../../genindex.html"/>
<link rel="search" title="搜索" href="../../search.html"/>
<link rel="top" title="PaddlePaddle 文档" href="../../index.html"/>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/css/perfect-scrollbar.min.css" type="text/css" />
<link rel="stylesheet" href="../../_static/css/override.css" type="text/css" />
<script>
var _hmt = _hmt || [];
(function() {
var hm = document.createElement("script");
hm.src = "//hm.baidu.com/hm.js?b9a314ab40d04d805655aab1deee08ba";
var s = document.getElementsByTagName("script")[0];
s.parentNode.insertBefore(hm, s);
})();
</script>
<script src="../../_static/js/modernizr.min.js"></script>
</head>
<body class="wy-body-for-nav" role="document">
<header class="site-header">
<div class="site-logo">
<a href="/"><img src="../../_static/images/PP_w.png"></a>
</div>
<div class="site-nav-links">
<div class="site-menu">
<a class="fork-on-github" href="https://github.com/PaddlePaddle/Paddle" target="_blank"><i class="fa fa-github"></i>Fork me on Github</a>
<div class="language-switcher dropdown">
<a type="button" data-toggle="dropdown">
<span>English</span>
<i class="fa fa-angle-up"></i>
<i class="fa fa-angle-down"></i>
</a>
<ul class="dropdown-menu">
<li><a href="/doc_cn">中文</a></li>
<li><a href="/doc">English</a></li>
</ul>
</div>
<ul class="site-page-links">
<li><a href="/">Home</a></li>
</ul>
</div>
<div class="doc-module">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../getstarted/index_cn.html">新手入门</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../howto/index_cn.html">进阶指南</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../api/index_cn.html">API</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../faq/index_cn.html">FAQ</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../mobile/index_cn.html">MOBILE</a></li>
</ul>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="../../search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
</div>
</header>
<div class="main-content-wrap">
<nav class="doc-menu-vertical" role="navigation">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../getstarted/index_cn.html">新手入门</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../getstarted/build_and_install/index_cn.html">安装与编译</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../getstarted/build_and_install/pip_install_cn.html">使用pip安装</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../getstarted/build_and_install/docker_install_cn.html">使用Docker安装运行</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/dev/build_cn.html">用Docker编译和测试PaddlePaddle</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../getstarted/build_and_install/build_from_source_cn.html">从源码编译</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../getstarted/concepts/use_concepts_cn.html">基本使用概念</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../howto/index_cn.html">进阶指南</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../howto/usage/cmd_parameter/index_cn.html">设置命令行参数</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cmd_parameter/use_case_cn.html">使用案例</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cmd_parameter/arguments_cn.html">参数概述</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cmd_parameter/detail_introduction_cn.html">细节描述</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/usage/cluster/cluster_train_cn.html">分布式训练</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cluster/fabric_cn.html">fabric集群</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cluster/openmpi_cn.html">openmpi集群</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cluster/k8s_cn.html">kubernetes单机</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cluster/k8s_distributed_cn.html">kubernetes distributed分布式</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cluster/k8s_aws_cn.html">AWS上运行kubernetes集群训练</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/usage/capi/index_cn.html">PaddlePaddle C-API</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/capi/compile_paddle_lib_cn.html">编译 PaddlePaddle 预测库</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/capi/organization_of_the_inputs_cn.html">输入/输出数据组织</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/capi/workflow_of_capi_cn.html">C-API 使用流程</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/dev/contribute_to_paddle_cn.html">如何贡献代码</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/dev/write_docs_cn.html">如何贡献/修改文档</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/deep_model/rnn/index_cn.html">RNN相关模型</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../howto/deep_model/rnn/rnn_config_cn.html">RNN配置</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/deep_model/rnn/recurrent_group_cn.html">Recurrent Group教程</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/deep_model/rnn/hierarchical_layer_cn.html">支持双层序列作为输入的Layer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/deep_model/rnn/hrnn_rnn_api_compare_cn.html">单双层RNN API对比介绍</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/optimization/gpu_profiling_cn.html">GPU性能分析与调优</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../api/index_cn.html">API</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/model_configs.html">模型配置</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/activation.html">Activation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/layer.html">Layers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/evaluators.html">Evaluators</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/optimizer.html">Optimizer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/pooling.html">Pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/networks.html">Networks</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/attr.html">Parameter Attribute</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/data.html">数据访问</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/data/data_reader.html">Data Reader Interface</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/data/image.html">Image Interface</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/data/dataset.html">Dataset</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/run_logic.html">训练与应用</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/fluid.html">Fluid</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/layers.html">Layers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/data_feeder.html">DataFeeder</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/executor.html">Executor</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/initializer.html">Initializer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/evaluator.html">Evaluator</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/nets.html">Nets</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/optimizer.html">Optimizer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/param_attr.html">ParamAttr</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/profiler.html">Profiler</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/regularizer.html">Regularizer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/fluid/io.html">IO</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../faq/index_cn.html">FAQ</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../faq/build_and_install/index_cn.html">编译安装与单元测试</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../faq/model/index_cn.html">模型配置</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../faq/parameter/index_cn.html">参数设置</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../faq/local/index_cn.html">本地训练与预测</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../faq/cluster/index_cn.html">集群训练与预测</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../mobile/index_cn.html">MOBILE</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../mobile/cross_compiling_for_android_cn.html">Android平台编译指南</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../mobile/cross_compiling_for_ios_cn.html">iOS平台编译指南</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../mobile/cross_compiling_for_raspberry_cn.html">Raspberry Pi平台编译指南</a></li>
</ul>
</li>
</ul>
</nav>
<section class="doc-content-wrap">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li>DeepSpeech2 on PaddlePaddle: Design Doc</li>
</ul>
</div>
<div class="wy-nav-content" id="doc-content">
<div class="rst-content">
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="section" id="deepspeech2-on-paddlepaddle-design-doc">
<span id="deepspeech2-on-paddlepaddle-design-doc"></span><h1>DeepSpeech2 on PaddlePaddle: Design Doc<a class="headerlink" href="#deepspeech2-on-paddlepaddle-design-doc" title="永久链接至标题"></a></h1>
<p>We are planning to build Deep Speech 2 (DS2) [<a class="reference external" href="#references">1</a>], a powerful Automatic Speech Recognition (ASR) engine, on PaddlePaddle. For the first-stage plan, we have the following short-term goals:</p>
<ul class="simple">
<li>Release a basic distributed implementation of DS2 on PaddlePaddle.</li>
<li>Contribute a chapter of Deep Speech to PaddlePaddle Book.</li>
</ul>
<p>Intensive system optimization and low-latency inference library (details in [<a class="reference external" href="#references">1</a>]) are not yet covered in this first-stage plan.</p>
<div class="section" id="table-of-contents">
<span id="table-of-contents"></span><h2>Table of Contents<a class="headerlink" href="#table-of-contents" title="永久链接至标题"></a></h2>
<ul class="simple">
<li><a class="reference external" href="#tasks">Tasks</a></li>
<li><a class="reference external" href="#task-dependency">Task Dependency</a></li>
<li><a class="reference external" href="#design-details">Design Details</a><ul>
<li><a class="reference external" href="#overview">Overview</a></li>
<li><a class="reference external" href="#row-convolution">Row Convolution</a></li>
<li><a class="reference external" href="#beam-search-with-ctc-and-lm">Beam Search With CTC and LM</a></li>
</ul>
</li>
<li><a class="reference external" href="#future-work">Future Work</a></li>
<li><a class="reference external" href="#references">References</a></li>
</ul>
</div>
<div class="section" id="tasks">
<span id="tasks"></span><h2>Tasks<a class="headerlink" href="#tasks" title="永久链接至标题"></a></h2>
<p>We roughly break down the project into 14 tasks:</p>
<ol class="simple">
<li>Develop an <strong>audio data provider</strong>:<ul>
<li>Json filelist generator.</li>
<li>Audio file format transformer.</li>
<li>Spectrogram feature extraction, power normalization etc.</li>
<li>Batch data reader with SortaGrad.</li>
<li>Data augmentation (optional).</li>
<li>Prepare (one or more) public English data sets &amp; baseline.</li>
</ul>
</li>
<li>Create a <strong>simplified DS2 model configuration</strong>:<ul>
<li>With only fixed-length (by padding) audio sequences (otherwise need <em>Task 3</em>).</li>
<li>With only bidirectional-GRU (otherwise need <em>Task 4</em>).</li>
<li>With only greedy decoder (otherwise need <em>Task 5, 6</em>).</li>
</ul>
</li>
<li>Develop to support <strong>variable-shaped</strong> dense-vector (image) batches of input data.<ul>
<li>Update <code class="docutils literal"><span class="pre">DenseScanner</span></code> in <code class="docutils literal"><span class="pre">dataprovider_converter.py</span></code>, etc.</li>
</ul>
</li>
<li>Develop a new <strong>lookahead-row-convolution layer</strong> (See [<a class="reference external" href="#references">1</a>] for details):<ul>
<li>Lookahead convolution windows.</li>
<li>Within-row convolution, without kernels shared across rows.</li>
</ul>
</li>
<li>Build KenLM <strong>language model</strong> (5-gram) for beam search decoder:<ul>
<li>Use KenLM toolkit.</li>
<li>Prepare the corpus &amp; train the model.</li>
<li>Create infererence interfaces (for Task 6).</li>
</ul>
</li>
<li>Develop a <strong>beam search decoder</strong> with CTC + LM + WORDCOUNT:<ul>
<li>Beam search with CTC.</li>
<li>Beam search with external custom scorer (e.g. LM).</li>
<li>Try to design a more general beam search interface.</li>
</ul>
</li>
<li>Develop a <strong>Word Error Rate evaluator</strong>:<ul>
<li>Update <code class="docutils literal"><span class="pre">ctc_error_evaluator</span></code> (CER) to support WER.</li>
</ul>
</li>
<li>Prepare internal dataset for Mandarin (optional):<ul>
<li>Dataset, baseline, evaluation details.</li>
<li>Particular data preprocessing for Mandarin.</li>
<li>Might require cooperation with the Speech Department.</li>
</ul>
</li>
<li>Create <strong>standard DS2 model configuration</strong>:<ul>
<li>With variable-length audio sequences (need <em>Task 3</em>).</li>
<li>With unidirectional-GRU + row-convolution (need <em>Task 4</em>).</li>
<li>With CTC-LM beam search decoder (need <em>Task 5, 6</em>).</li>
</ul>
</li>
<li>Make DS2 run reliably on <strong>clusters</strong>.</li>
<li>Experiments and <strong>benchmarking</strong> (for accuracy, not efficiency):<ul>
<li>With public English dataset.</li>
<li>With internal (Baidu) Mandarin dataset (optional).</li>
</ul>
</li>
<li>Time <strong>profiling</strong> and optimization.</li>
<li>Prepare <strong>docs</strong>.</li>
<li>Prepare PaddlePaddle <strong>Book</strong> chapter with a simplified version.</li>
</ol>
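<p>As a concrete illustration of the feature-extraction step in <em>Task 1</em>, here is a minimal NumPy sketch of spectrogram computation; the function name and the window/stride defaults are illustrative assumptions, not values fixed by this design:</p>
<div class="highlight-python"><div class="highlight"><pre>
import numpy as np

def spectrogram(samples, sample_rate=16000, window_ms=20, stride_ms=10):
    """Log power spectrogram via a short-time FFT (illustrative sketch)."""
    window = int(sample_rate * window_ms / 1000)
    stride = int(sample_rate * stride_ms / 1000)
    n_frames = 1 + (len(samples) - window) // stride
    frames = np.stack([samples[i * stride : i * stride + window]
                       for i in range(n_frames)])
    frames = frames * np.hanning(window)            # taper each frame
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log1p(power)                          # compress dynamic range
</pre></div></div>
<p>For the SortaGrad batch reader, utterances would be sorted by duration during the first training epoch and shuffled in later epochs, as described in [<a class="reference external" href="#references">1</a>].</p>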
</div>
<div class="section" id="task-dependency">
<span id="task-dependency"></span><h2>Task Dependency<a class="headerlink" href="#task-dependency" title="永久链接至标题"></a></h2>
<p>Tasks parallelizable within phases:</p>
<table class="docutils">
<thead>
<tr><th>Roadmap</th><th>Description</th><th>Parallelizable Tasks</th></tr>
</thead>
<tbody>
<tr><td>Phase I</td><td>Simplified model &amp; components</td><td><em>Task 1</em> ~ <em>Task 8</em></td></tr>
<tr><td>Phase II</td><td>Standard model &amp; benchmarking &amp; profiling</td><td><em>Task 9</em> ~ <em>Task 12</em></td></tr>
<tr><td>Phase III</td><td>Documentation</td><td><em>Task 13</em> ~ <em>Task 14</em></td></tr>
</tbody>
</table>
<p>An issue for each task will be created later. Contributions, discussions and comments are all highly appreciated and welcome!</p>
</div>
<div class="section" id="design-details">
<span id="design-details"></span><h2>Design Details<a class="headerlink" href="#design-details" title="永久链接至标题"></a></h2>
<div class="section" id="overview">
<span id="overview"></span><h3>Overview<a class="headerlink" href="#overview" title="永久链接至标题"></a></h3>
<p>Traditional <strong>ASR</strong> (Automatic Speech Recognition) pipelines require significant human effort to elaborately tune multiple hand-engineered components (e.g. audio feature design, acoustic model, pronunciation model, language model). <strong>Deep Speech 2</strong> (<strong>DS2</strong>) [<a class="reference external" href="#references">1</a>], however, trains such ASR models in an end-to-end manner, replacing most intermediate modules with a single deep network architecture. By scaling up both data and model sizes, DS2 achieves a very significant performance boost.</p>
<p>Please read the Deep Speech 2 papers [<a class="reference external" href="#references">1</a>, <a class="reference external" href="#references">2</a>] for more background.</p>
<p>The classical DS2 network contains 15 layers (from bottom to top):</p>
<ul class="simple">
<li><strong>Two</strong> data layers (audio spectrogram, transcription text)</li>
<li><strong>Three</strong> 2D convolution layers</li>
<li><strong>Seven</strong> uni-directional simple-RNN layers</li>
<li><strong>One</strong> lookahead row-convolution layer</li>
<li><strong>One</strong> fully-connected layer</li>
<li><strong>One</strong> CTC-loss layer</li>
</ul>
<div align="center">
<img src="image/ds2_network.png" width=350><br/>
Figure 1. Architecture of Deep Speech 2 Network.
</div><p>We don&#8217;t have to stick to this 2-3-7-1-1-1 depth configuration [<a class="reference external" href="#references">2</a>]. Similar networks with different depths might also work well. As in [<a class="reference external" href="#references">1</a>], the authors use a different depth (e.g. 2-2-3-1-1-1) in their final experiments.</p>
<p>Key ingredients about the layers:</p>
<ul class="simple">
<li><strong>Data Layers</strong>:<ul>
<li>Frame sequences of audio <strong>spectrogram</strong> features (computed with FFT).</li>
<li>Token sequences of <strong>transcription</strong> text (labels).</li>
<li>These two types of sequences do not have the same lengths; thus a CTC-loss layer is required.</li>
</ul>
</li>
<li><strong>2D Convolution Layers</strong>:<ul>
<li>Not only temporal convolution but also <strong>frequency convolution</strong>; like a 2D image convolution, but with one variable-length dimension (the temporal one).</li>
<li>Striding is used only in the first convolution layer.</li>
<li>No pooling in any convolution layer.</li>
</ul>
</li>
<li><strong>Uni-directional RNNs</strong>:<ul>
<li>Uni-directional + row convolution: for low-latency inference.</li>
<li>Bi-directional + no row convolution: if we don&#8217;t care about inference latency.</li>
</ul>
</li>
<li><strong>Row convolution</strong>:<ul>
<li>Looks only a few steps ahead into future frames, instead of into the whole sequence as bi-directional RNNs do.</li>
<li>Not necessary when bi-directional RNNs are used.</li>
<li>&#8220;<strong>Row</strong>&#8221; means convolutions are done within each frequency dimension (row), with no convolution kernels shared across rows (a sketch follows this list).</li>
</ul>
</li>
<li><strong>Batch Normalization Layers</strong>:<ul>
<li>Added to all the above layers (except the data and loss layers).</li>
<li>Sequence-wise normalization for RNNs: BatchNorm is performed only on the input-state projection, not the state-state projection, for efficiency (a sketch follows the components table below).</li>
</ul>
</li>
</ul>
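<p>To make the row-convolution computation concrete (see the <strong>Row convolution</strong> ingredient above), here is a minimal NumPy sketch following the definition in [<a class="reference external" href="#references">1</a>]; the function name and the zero-padding choice are illustrative assumptions:</p>
<div class="highlight-python"><div class="highlight"><pre>
import numpy as np

def row_convolution(h, w):
    """Lookahead row convolution (sketch of the layer in Task 4).

    h: (T, D) activations of the topmost recurrent layer.
    w: (tau + 1, D) lookahead weights; each row (feature dimension)
       has its own kernel, i.e. no kernels are shared across rows.
    Output: r[t, d] = sum over j of w[j, d] * h[t + j, d].
    """
    tau = w.shape[0] - 1
    T, D = h.shape
    padded = np.vstack([h, np.zeros((tau, D))])  # zero-pad future frames
    r = np.zeros_like(h)
    for j in range(tau + 1):
        r += w[j] * padded[j : j + T]
    return r
</pre></div></div>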
<table class="docutils">
<thead>
<tr><th>Required Components</th><th>PaddlePaddle Support</th><th>Need to Develop</th></tr>
</thead>
<tbody>
<tr><td>Data Layer I (Spectrogram)</td><td>Not supported yet.</td><td>TBD (Task 3)</td></tr>
<tr><td>Data Layer II (Transcription)</td><td><code class="docutils literal"><span class="pre">paddle.data_type.integer_value_sequence</span></code></td><td>-</td></tr>
<tr><td>2D Convolution Layer</td><td><code class="docutils literal"><span class="pre">paddle.layer.image_conv_layer</span></code></td><td>-</td></tr>
<tr><td>DataType Converter (vec2seq)</td><td><code class="docutils literal"><span class="pre">paddle.layer.block_expand</span></code></td><td>-</td></tr>
<tr><td>Bi-/Uni-directional RNNs</td><td><code class="docutils literal"><span class="pre">paddle.layer.recurrent_group</span></code></td><td>-</td></tr>
<tr><td>Row Convolution Layer</td><td>Not supported yet.</td><td>TBD (Task 4)</td></tr>
<tr><td>CTC-loss Layer</td><td><code class="docutils literal"><span class="pre">paddle.layer.warp_ctc</span></code></td><td>-</td></tr>
<tr><td>Batch Normalization Layer</td><td><code class="docutils literal"><span class="pre">paddle.layer.batch_norm</span></code></td><td>-</td></tr>
<tr><td>CTC-Beam search</td><td>Not supported yet.</td><td>TBD (Task 6)</td></tr>
</tbody>
</table>
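<p>For the sequence-wise batch normalization noted above, the following is a minimal NumPy sketch of the forward computation (training-time statistics only; the helper name is an illustrative assumption):</p>
<div class="highlight-python"><div class="highlight"><pre>
import numpy as np

def sequence_wise_batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize the input-state projection of an RNN layer.

    x: (N, D) input-state projections of all time steps of all
       utterances in a minibatch, stacked along axis 0, so the
       statistics are computed jointly over items and time steps.
    The state-state (recurrent) projection is left unnormalized.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
</pre></div></div>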
</div>
<div class="section" id="row-convolution">
<span id="row-convolution"></span><h3>Row Convolution<a class="headerlink" href="#row-convolution" title="永久链接至标题"></a></h3>
<p>TODO by Assignees</p>
</div>
<div class="section" id="beam-search-with-ctc-and-lm">
<span id="beam-search-with-ctc-and-lm"></span><h3>Beam Search with CTC and LM<a class="headerlink" href="#beam-search-with-ctc-and-lm" title="永久链接至标题"></a></h3>
<div align="center">
<img src="image/beam_search.png" width=600><br/>
Figure 2. Algorithm for CTC Beam Search Decoder.
</div><ul class="simple">
<li>The <strong>Beam Search Decoder</strong> for the DS2 CTC-trained network follows an approach similar to [<a class="reference external" href="#references">3</a>], as shown in Figure 2, with two important modifications for the ambiguous parts (a sketch follows this list):<ol class="simple">
<li>In the iterative computation of probabilities, the assignment operation is changed to accumulation, because one prefix may be reached from different paths.</li>
<li>The condition <code class="docutils literal"><span class="pre">if</span> <span class="pre">l^+</span> <span class="pre">not</span> <span class="pre">in</span> <span class="pre">A_prev</span> <span class="pre">then</span></code> after the probability computation is dropped, since it is hard to understand and seems unnecessary.</li>
</ol>
</li>
<li>An <strong>external scorer</strong> is passed into the decoder to evaluate a candidate prefix during decoding, whenever a whitespace is appended in English decoding or any character is appended in Mandarin decoding.</li>
<li>Such an external scorer may combine a language model, a word count bonus, or any other custom scorers.</li>
<li>The <strong>language model</strong> is built in Task 5; its parameters should be carefully tuned to achieve the minimum WER/CER (c.f. Task 7).</li>
<li>This decoder needs to run with <strong>high efficiency</strong>, both for convenient parameter tuning and for real-world speech recognition.</li>
</ul>
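<p>A minimal NumPy sketch of such a decoder with both modifications applied (probabilities are <em>accumulated</em>, and the <code class="docutils literal"><span class="pre">if</span> <span class="pre">l^+</span> <span class="pre">not</span> <span class="pre">in</span> <span class="pre">A_prev</span></code> condition is dropped); the function name and the scorer interface are illustrative assumptions, not the final Task 6 API:</p>
<div class="highlight-python"><div class="highlight"><pre>
import numpy as np
from collections import defaultdict

def ctc_beam_search(probs, vocab, beam_size=20, blank=0,
                    space=' ', scorer=None):
    """CTC prefix beam search with an optional external scorer.

    probs:  (T, V) per-frame posteriors; column `blank` is CTC blank.
    vocab:  dict mapping each non-blank column index to a character.
    scorer: callable(prefix) -> multiplicative score, applied whenever
            a space is appended (e.g. LM probability and word count).
    """
    # prefix -> (p_blank, p_non_blank): total probability of all paths
    # collapsing to this prefix that end in blank / non-blank.
    beam = {'': (1.0, 0.0)}
    for t in range(probs.shape[0]):
        next_beam = defaultdict(lambda: [0.0, 0.0])
        for prefix, (p_b, p_nb) in beam.items():
            # Extend with blank: the prefix is unchanged.
            next_beam[prefix][0] += probs[t, blank] * (p_b + p_nb)
            for i, ch in vocab.items():
                p = probs[t, i]
                ext = prefix + ch
                bonus = scorer(ext) if (scorer and ch == space) else 1.0
                if prefix and prefix[-1] == ch:
                    # A repeated character either collapses into the same
                    # prefix or starts a new occurrence after a blank.
                    # Note the accumulation (+=): one prefix may be
                    # reached from several paths (modification 1).
                    next_beam[prefix][1] += p * p_nb
                    next_beam[ext][1] += bonus * p * p_b
                else:
                    next_beam[ext][1] += bonus * p * (p_b + p_nb)
        # Keep only the `beam_size` most probable prefixes.
        top = sorted(next_beam.items(),
                     key=lambda kv: kv[1][0] + kv[1][1],
                     reverse=True)[:beam_size]
        beam = {k: (v[0], v[1]) for k, v in top}
    best_prefix, (p_b, p_nb) = max(beam.items(),
                                   key=lambda kv: kv[1][0] + kv[1][1])
    return best_prefix, p_b + p_nb
</pre></div></div>
<p>An external scorer for English decoding could then look like <code class="docutils literal"><span class="pre">scorer(prefix)</span> <span class="pre">=</span> <span class="pre">p_lm(prefix)**alpha</span> <span class="pre">*</span> <span class="pre">count_words(prefix)**beta</span></code>, where <code class="docutils literal"><span class="pre">p_lm</span></code> would be the KenLM model from Task 5 and <code class="docutils literal"><span class="pre">alpha</span></code>, <code class="docutils literal"><span class="pre">beta</span></code> are weights to be tuned; these names are hypothetical.</p>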
</div>
</div>
<div class="section" id="future-work">
<span id="future-work"></span><h2>Future Work<a class="headerlink" href="#future-work" title="永久链接至标题"></a></h2>
<ul class="simple">
<li>Efficiency Improvement</li>
<li>Accuracy Improvement</li>
<li>Low-latency Inference Library</li>
<li>Large-scale benchmarking</li>
</ul>
</div>
<div class="section" id="references">
<span id="references"></span><h2>References<a class="headerlink" href="#references" title="永久链接至标题"></a></h2>
<ol class="simple">
<li>Dario Amodei, et al., <a class="reference external" href="http://proceedings.mlr.press/v48/amodei16.pdf">Deep Speech 2: End-to-End Speech Recognition in English and Mandarin</a>. ICML 2016.</li>
<li>Dario Amodei, et al., <a class="reference external" href="https://arxiv.org/abs/1512.02595">Deep Speech 2: End-to-End Speech Recognition in English and Mandarin</a>. arXiv:1512.02595.</li>
<li>Awni Y. Hannun, et al., <a class="reference external" href="https://arxiv.org/abs/1408.2873">First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs</a>. arXiv:1408.2873.</li>
</ol>
</div>
</div>
</div>
</div>