# DeepSpeech2 on PaddlePaddle

*DeepSpeech2 on PaddlePaddle* is an open-source implementation of an end-to-end Automatic Speech Recognition (ASR) engine, based on [Baidu's Deep Speech 2 paper](http://proceedings.mlr.press/v48/amodei16.pdf) and built on the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) platform. Our vision is to empower both industrial applications and academic research on speech recognition, via an easy-to-use, efficient and scalable implementation, including training, inference & testing modules, distributed [PaddleCloud](https://github.com/PaddlePaddle/cloud) training, and demo deployment. Several pre-trained models for both English and Mandarin are also released.

## Table of Contents
- [Installation](#installation)
- [Getting Started](#getting-started)
- [Data Preparation](#data-preparation)
- [Training a Model](#training-a-model)
- [Data Augmentation Pipeline](#data-augmentation-pipeline)
- [Inference and Evaluation](#inference-and-evaluation)
- [Running in Docker Container](#running-in-docker-container)
- [Distributed Cloud Training](#distributed-cloud-training)
- [Hyper-parameters Tuning](#hyper-parameters-tuning)
- [Training for Mandarin Language](#training-for-mandarin-language)
- [Trying Live Demo with Your Own Voice](#trying-live-demo-with-your-own-voice)
- [Released Models](#released-models)
- [Experiments and Benchmarks](#experiments-and-benchmarks)
- [Questions and Help](#questions-and-help)

## Installation

This project was developed with the PaddlePaddle V2 API, which is no longer officially maintained. To avoid the trouble of environment setup, [running in Docker container](#running-in-docker-container) is highly recommended. Otherwise, follow the guidelines below to install the dependencies manually. An update to the latest Paddle Fluid API will be released soon, so please keep an eye on this project.

### Prerequisites
- Python 2.7 (only version supported)
- PaddlePaddle 0.13

```bash
pip install paddlepaddle-gpu==0.13
```

### Setup
- Make sure these libraries or tools are installed: `pkg-config`, `flac`, `ogg`, `vorbis`, `boost` and `swig`, e.g. by installing them via `apt-get`:

```bash
sudo apt-get install -y pkg-config libflac-dev libogg-dev libvorbis-dev libboost-dev swig
```

- Run the setup script for the remaining dependencies:

```bash
git clone https://github.com/PaddlePaddle/DeepSpeech.git
cd DeepSpeech
sh setup.sh
```

## Getting Started

Several shell scripts provided in `./examples` will help you quickly try out most major modules, including data preparation, model training, case inference and model evaluation, with a few public datasets (e.g. [LibriSpeech](http://www.openslr.org/12/), [Aishell](http://www.openslr.org/33)). Reading these examples will also help you understand how to make the engine work with your own data.

Some of the scripts in `./examples` are configured with 8 GPUs. If you don't have 8 GPUs available, please modify `CUDA_VISIBLE_DEVICES` and `--trainer_count` accordingly. If you don't have any GPU available, please set `--use_gpu` to False to use CPUs instead. If an out-of-memory problem occurs, just reduce `--batch_size`.
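
For example, the relevant lines inside such a script might look like the following (a sketch, not the scripts' exact contents); trim the device list and `--trainer_count` to match your machine:

```bash
# Hypothetical excerpt from an example script; adjust to your hardware.
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -u train.py \
--use_gpu True \
--trainer_count 4 \
--batch_size 32
```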

Let's take a tiny sampled subset of the [LibriSpeech dataset](http://www.openslr.org/12/) as an example.

- Go to the directory:

    ```bash
    cd examples/tiny
    ```

    Notice that this is only a toy example with a tiny sampled subset of LibriSpeech. If you would like to try the complete dataset (training would take several days), please go to `examples/librispeech` instead.
- Prepare the data

    ```bash
    sh run_data.sh
    ```

    `run_data.sh` will download the dataset, generate manifests, collect the normalizer's statistics and build the vocabulary. Once data preparation is done, you will find the data (only part of LibriSpeech) downloaded to `~/.cache/paddle/dataset/speech/libri`, and the corresponding manifest files, a mean-stddev file and a vocabulary file generated in `./data/tiny`. It has to be run only once, the very first time you use this dataset; the results are reusable for all further experiments.
- Train your own ASR model

    ```bash
    sh run_train.sh
    ```

    `run_train.sh` will start a training job, with training logs printed to stdout and a model checkpoint for every pass/epoch saved to `./checkpoints/tiny`. These checkpoints can be used for resuming training, inference, evaluation and deployment.
- Case inference with an existing model

    ```bash
    sh run_infer.sh
    ```

    `run_infer.sh` will show some speech-to-text decoding results for several (default: 10) samples with the trained model. The performance might not be good yet, as the current model is only trained on a toy subset of LibriSpeech. To see results with a better model, you can download a well-trained model (trained for several days on the complete LibriSpeech) and run inference with it:

    ```bash
    sh run_infer_golden.sh
    ```
- Evaluate an existing model

    ```bash
    sh run_test.sh
    ```

    `run_test.sh` will evaluate the model using the Word Error Rate (or Character Error Rate) metric. Similarly, you can also download a well-trained model and test its performance:

    ```bash
    sh run_test_golden.sh
    ```

More detailed information is provided in the following sections. We wish you a happy journey with the *DeepSpeech2 on PaddlePaddle* ASR engine!


## Data Preparation

### Generate Manifest

*DeepSpeech2 on PaddlePaddle* accepts a textual **manifest** file as its dataset interface. A manifest file summarizes a set of speech data, with each line containing the metadata (e.g. filepath, transcription, duration) of one audio clip, in [JSON](http://www.json.org/) format, such as:

```
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0001.flac", "duration": 3.275, "text": "stuff it into you his belly counselled him"}
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0007.flac", "duration": 4.275, "text": "a cold lucid indifference reigned in his soul"}
```

To use your custom data, you only need to generate such manifest files summarizing the dataset. Given these manifests, training, inference and all other modules know where to access the audio files and their metadata, including the transcription labels.

For how to generate such manifest files, please refer to `data/librispeech/librispeech.py`, which downloads the data and generates manifest files for the LibriSpeech dataset. The sketch below illustrates the idea.
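
If your audio files and transcriptions are already on disk, a manifest can be produced with a few lines of shell. This is a minimal sketch under assumed inputs: a `transcripts.txt` with lines `<utterance-id> <text>`, audio stored as `audio/<utterance-id>.flac`, and the `soxi` tool (from SoX) available for reading durations:

```bash
# Hypothetical manifest generation; the file layout and paths are assumptions.
while read -r utt text; do
    audio="$PWD/audio/${utt}.flac"
    duration=$(soxi -D "$audio")   # clip duration in seconds
    printf '{"audio_filepath": "%s", "duration": %s, "text": "%s"}\n' \
           "$audio" "$duration" "$text"
done < transcripts.txt > manifest.custom
```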

### Compute Mean & Stddev for Normalizer

To perform z-score normalization (zero-mean, unit stddev) upon audio features, i.e. to transform each feature value $x$ into $(x - \mu) / \sigma$, we have to estimate the mean $\mu$ and standard deviation $\sigma$ of the features in advance with some training samples:

```bash
python tools/compute_mean_std.py \
--num_samples 2000 \
--specgram_type linear \
--manifest_paths data/librispeech/manifest.train \
--output_path data/librispeech/mean_std.npz
```

It will compute the mean and standard deviation of the power spectrum features over 2000 randomly sampled audio clips listed in `data/librispeech/manifest.train`, and save the results to `data/librispeech/mean_std.npz` for further usage.
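
To sanity-check the output, the saved statistics can be inspected directly (a sketch; the `mean` and `std` key names are an assumption about the `.npz` layout):

```bash
# Hypothetical quick check of the generated stats file.
python -c "import numpy as np; d = np.load('data/librispeech/mean_std.npz'); print(d['mean'].shape, d['std'].shape)"
```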


### Build Vocabulary

A vocabulary of possible characters is required to convert a transcription into a list of token indices for training and, in decoding, to convert a list of indices back into text. Such a character-based vocabulary can be built with `tools/build_vocab.py`.

```bash
python tools/build_vocab.py \
--count_threshold 0 \
--vocab_path data/librispeech/eng_vocab.txt \
--manifest_paths data/librispeech/manifest.train
```

It will write a vocabulary file `data/librispeech/eng_vocab.txt` covering all the transcription text in `data/librispeech/manifest.train`, without vocabulary truncation (`--count_threshold 0`).
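
Assuming the vocabulary file stores one character per line (an assumption about its layout), a quick inspection suffices to verify it:

```bash
# A sketch: peek at the generated vocabulary and count its size.
head -n 5 data/librispeech/eng_vocab.txt
wc -l data/librispeech/eng_vocab.txt
```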

### More Help

For more help on arguments:

```bash
python data/librispeech/librispeech.py --help
python tools/compute_mean_std.py --help
python tools/build_vocab.py --help
```

## Training a Model

`train.py` is the main caller of the training module. Examples of usage are shown below.

- Start training from scratch with 8 GPUs:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py --trainer_count 8
    ```

- Start training from scratch with 16 CPUs:

    ```bash
    python train.py --use_gpu False --trainer_count 16
    ```
- Resume training from a checkpoint:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python train.py \
    --init_model_path CHECKPOINT_PATH_TO_RESUME_FROM
    ```

For more help on arguments:

```bash
python train.py --help
```
or refer to `examples/librispeech/run_train.sh`.

## Data Augmentation Pipeline

Data augmentation has often been a highly effective technique for boosting deep learning performance. We augment the speech data by synthesizing new audio clips with small random perturbations (label-invariant transformations) applied to the raw audio. You don't have to do the synthesis yourself, as it is already embedded into the data provider and is done on the fly, randomly for each epoch during training.

Six optional augmentation components are provided; they can be selected, configured and inserted into the processing pipeline:

  - Volume Perturbation
  - Speed Perturbation
  - Shifting Perturbation
  - Online Bayesian Normalization
  - Noise Perturbation (requires background noise audio files)
  - Impulse Response (requires impulse audio files)

To inform the trainer which augmentation components are needed and in what order they should be applied, an *augmentation configuration file* in [JSON](http://www.json.org/) format must be prepared in advance. For example:

```
[{
    "type": "speed",
    "params": {"min_speed_rate": 0.95,
               "max_speed_rate": 1.05},
    "prob": 0.6
},
{
    "type": "shift",
    "params": {"min_shift_ms": -5,
               "max_shift_ms": 5},
    "prob": 0.8
}]
```

When the `--augment_conf_file` argument of `train.py` is set to the path of the above example configuration file, every audio clip in every epoch will be processed as follows: with 60% probability, it is first speed-perturbed with a speed rate sampled uniformly at random between 0.95 and 1.05; then, with 80% probability, it is shifted in time by a random offset sampled between -5 ms and 5 ms. Finally, the newly synthesized audio clip is fed into the feature extractor for further training.
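
For example, if the configuration above were saved as `conf/augmentation.config` (an assumed path), training could pick it up as follows:

```bash
# A sketch: pass the augmentation configuration to the trainer.
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python train.py \
--trainer_count 8 \
--augment_conf_file conf/augmentation.config
```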

For other configuration examples, please refer to `conf/augmentation.config.example`.

Be careful when using data augmentation: improper augmentation can harm training by enlarging the train-test gap.

## Inference and Evaluation

### Prepare Language Model

A language model is required to improve the decoder's performance. We have prepared two language models (with lossy compression) for users to download and try: one for English and one for Mandarin. They can be downloaded by simply running:

```bash
cd models/lm
sh download_lm_en.sh
sh download_lm_ch.sh
```

If you wish to train a better language model of your own, please refer to [KenLM](https://github.com/kpu/kenlm) for tutorials. Here we provide some tips on how we prepared our English and Mandarin language models; you can take them as a reference when training your own.

#### English LM

The English corpus is from the [Common Crawl Repository](http://commoncrawl.org) and can be downloaded from [statmt](http://data.statmt.org/ngrams/deduped_en). We use part en.00 to train our English language model. There are some preprocessing steps before training:

  * Characters not in \[A-Za-z0-9\s'\] (\s represents whitespace characters) are removed, and Arabic numerals are converted to English words (e.g. 1000 to one thousand).
  * Repeated whitespace characters are squeezed to one, and leading whitespace characters are removed. Notice that all transcriptions are lowercase, so all characters are converted to lowercase.
  * The top 400,000 most frequent words are selected to build the vocabulary, and the rest are replaced with 'UNKNOWNWORD'.

Now the preprocessing is done and we get a clean corpus to train the language model. Our released language model is trained with the arguments '-o 5 --prune 0 1 1 1 1'. '-o 5' means the maximum order of the language model is 5. '--prune 0 1 1 1 1' sets the count thresholds for each order; more specifically, it prunes singletons for orders two and higher. To save disk storage we convert the arpa file to a 'trie' binary file with the arguments '-a 22 -q 8 -b 8'. '-a' specifies the maximum number of leading bits of pointers in the 'trie' to chop. '-q' and '-b' are quantization parameters for probability and backoff.
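
Put together, the KenLM commands might look like this (a sketch; `cleaned_corpus.txt` and the output paths are placeholders):

```bash
# Hypothetical KenLM invocations matching the arguments described above.
lmplz -o 5 --prune 0 1 1 1 1 <cleaned_corpus.txt >lm.arpa    # train a 5-gram LM
build_binary -a 22 -q 8 -b 8 trie lm.arpa lm.trie.klm        # convert to 'trie' binary
```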

#### Mandarin LM

Different from the English language model, the Mandarin language model is character-based, i.e. each token is a Chinese character. We use an internal corpus to train the released Mandarin language models. This corpus contains billions of tokens. The preprocessing differs slightly from that for the English language model; the main steps include:

  * Leading and trailing whitespace characters are removed.
  * English and Chinese punctuation marks are removed.
  * A whitespace character is inserted between every two adjacent tokens.

Please notice that the released language models only contain simplified Chinese characters. After the preprocessing is done, we can begin to train the language model. The key training arguments are '-o 5 --prune 0 1 2 4 4' for the small LM and '-o 5' for the large LM. Please refer to the section above for the meaning of each argument. We also convert the arpa files to binary files using default settings.

### Speech-to-text Inference

An inference module caller `infer.py` is provided to infer, decode and visualize speech-to-text results for several given audio clips. It may help to give an intuitive and qualitative evaluation of the ASR model's performance.

- Inference with GPU:

    ```bash
    CUDA_VISIBLE_DEVICES=0 python infer.py --trainer_count 1
    ```

- Inference with CPUs:

    ```bash
    python infer.py --use_gpu False --trainer_count 12
    ```

We provide two types of CTC decoders: the *CTC greedy decoder* and the *CTC beam search decoder*. The *CTC greedy decoder* is an implementation of the simple best-path decoding algorithm, selecting at each timestep the most likely token, and is thus greedy and locally optimal. The [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873), by contrast, utilizes a heuristic breadth-first graph search to reach near-global optimality; it also requires a pre-trained KenLM language model for better scoring and ranking. The decoder type can be set with the argument `--decoding_method`.
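
For example (a sketch; the value `ctc_beam_search` is an assumption about the accepted choices):

```bash
# Hypothetical: decode with beam search instead of the default greedy decoding.
CUDA_VISIBLE_DEVICES=0 python infer.py --trainer_count 1 --decoding_method ctc_beam_search
```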

For more help on arguments:

```bash
python infer.py --help
```
or refer to `examples/librispeech/run_infer.sh`.

### Evaluate a Model

To evaluate a model's performance quantitatively, please run:

- Evaluation with GPUs:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python test.py --trainer_count 8
    ```

- Evaluation with CPUs:

    ```bash
    python test.py --use_gpu False --trainer_count 12
    ```

The error rate (default: word error rate; can be set with `--error_rate_type`) will be printed.

For more help on arguments:

```bash
python test.py --help
```
or refer to `examples/librispeech/run_test.sh`.

## Hyper-parameters Tuning

The hyper-parameters $\alpha$ (language model weight) and $\beta$ (word insertion weight) for the [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873) often have a significant impact on the decoder's performance. It is better to re-tune them on the validation set whenever the acoustic model is renewed.

`tools/tune.py` performs a 2-D grid search over the hyper-parameters $\alpha$ and $\beta$. You must provide the ranges of $\alpha$ and $\beta$, as well as the number of attempts for each.

- Tuning with GPU:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python tools/tune.py \
    --trainer_count 8 \
    --alpha_from 1.0 \
    --alpha_to 3.2 \
    --num_alphas 45 \
    --beta_from 0.1 \
    --beta_to 0.45 \
    --num_betas 8
    ```

- Tuning with CPU:

    ```bash
    python tools/tune.py --use_gpu False
    ```
The grid search will print the WER (word error rate) or CER (character error rate) at each point in the hyper-parameter space, and optionally draw the error surface. A proper hyper-parameter range should include the global minimum of the error surface for WER/CER, as illustrated in the following figure.

<p align="center">
<img src="docs/images/tuning_error_surface.png" width=550>
<br/>An example error surface for tuning on the dev-clean set of LibriSpeech
</p>

Usually, as the figure shows, the variation of the language model weight ($\alpha$) significantly affects the performance of the CTC beam search decoder. A better procedure is to first tune on several data batches (the number can be specified) to find the proper range of the hyper-parameters, and then switch to the whole validation set to carry out an accurate tuning.

After tuning, you can reset $\alpha$ and $\beta$ in the inference and evaluation modules to see if they really help improve the ASR performance (see the sketch below). For more help

```bash
python tools/tune.py --help
```
or refer to `examples/librispeech/run_tune.sh`.
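
To plug the tuned values back in, pass them to the inference or evaluation module (a sketch; the `--alpha`/`--beta` flag names and values are assumptions matching the search ranges above):

```bash
# Hypothetical: re-run inference with the tuned decoder weights.
CUDA_VISIBLE_DEVICES=0 python infer.py --trainer_count 1 --alpha 2.5 --beta 0.3
```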

## Running in Docker Container

Docker is an open source tool for building, shipping, and running distributed applications in an isolated environment. A Docker image for this project is provided on [hub.docker.com](https://hub.docker.com) with all the dependencies installed, including the pre-built PaddlePaddle, CTC decoders, and other necessary Python and third-party packages. This Docker image requires NVIDIA GPU support, so please make sure a GPU is available and [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) has been installed.

Take several steps to launch the Docker image:

- Download the Docker image

```bash
nvidia-docker pull paddlepaddle/deep_speech:latest-gpu
```

- Clone this repository

```bash
git clone https://github.com/PaddlePaddle/DeepSpeech.git
```

- Run the Docker image

```bash
sudo nvidia-docker run -it -v $(pwd)/DeepSpeech:/DeepSpeech paddlepaddle/deep_speech:latest-gpu /bin/bash
```
Now go back and start from the [Getting Started](#getting-started) section; you can execute training, inference and hyper-parameter tuning in the same way inside the Docker container.

## Distributed Cloud Training

We also provide a cloud training module for users to do distributed cluster training on [PaddleCloud](https://github.com/PaddlePaddle/cloud), achieving a much faster training speed with multiple machines. To get started, please first install the PaddleCloud client and register a PaddleCloud account, as described in [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#%E4%B8%8B%E8%BD%BD%E5%B9%B6%E9%85%8D%E7%BD%AEpaddlecloud).

Please take the following steps to submit a training job:

- Go to the directory:

    ```bash
    cd cloud
    ```
- Upload data:

    Data must be uploaded to the PaddleCloud filesystem to be accessed within a cloud job. `pcloud_upload_data.sh` helps with the data packing and uploading:

    ```bash
    sh pcloud_upload_data.sh
    ```

    Given input manifests, `pcloud_upload_data.sh` will:

    - Extract the audio files listed in the input manifests.
    - Pack them into a specified number of tar files.
    - Upload these tar files to PaddleCloud filesystem.
    - Create cloud manifests by replacing local filesystem paths with PaddleCloud filesystem paths. New manifests will be used to inform the cloud jobs of audio files' location and their meta information.

    This needs to be done only once, before the very first cloud training. Afterwards, the data is kept persistent in the cloud filesystem and is reusable for further job submissions.

    For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/DeepSpeech/tree/develop/cloud).

- Configure training arguments:

    Configure the cloud job parameters in `pcloud_submit.sh` (e.g. `NUM_NODES`, `NUM_GPUS`, `CLOUD_TRAIN_DIR`, `JOB_NAME` etc.) and then configure other hyper-parameters for training in `pcloud_train.sh` (just as you would for local training).

    For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/DeepSpeech/tree/develop/cloud).

- Submit the job:

    By running:

    ```bash
    sh pcloud_submit.sh
    ```
    a training job is submitted to PaddleCloud, with the job name printed to the console.

- Get training logs:

    Run this to list all the jobs you have submitted, as well as their running status:

    ```bash
    paddlecloud get jobs
    ```

    Run this to print the corresponding job's logs:
    ```bash
    paddlecloud logs -n 10000 $REPLACED_WITH_YOUR_ACTUAL_JOB_NAME
    ```

For more information about the usage of PaddleCloud, please refer to [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#提交任务).

For more information about the DeepSpeech2 training on PaddleCloud, please refer to
[Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/DeepSpeech/tree/develop/cloud).

## Training for Mandarin Language

The key steps of training for the Mandarin language are the same as those for English, and we also provide an example for Mandarin training with Aishell in `examples/aishell`. As mentioned above, please execute `sh run_data.sh`, `sh run_train.sh`, `sh run_test.sh` and `sh run_infer.sh` to do data preparation, training, testing and inference, respectively. We have also prepared a pre-trained model (downloaded by `./models/aishell/download_model.sh`) for users to try with `sh run_infer_golden.sh` and `sh run_test_golden.sh`. Notice that, different from the English LM, the Mandarin LM is character-based; please run `tools/tune.py` to find an optimal setting.

## Trying Live Demo with Your Own Voice

Until now, an ASR model has been trained and tested qualitatively (`infer.py`) and quantitatively (`test.py`) with existing audio files, but not yet with your own speech. `deploy/demo_server.py` and `deploy/demo_client.py` help you quickly build a real-time demo ASR engine with the trained model, enabling you to test and play around with the demo using your own voice.

To start the demo's server, please run this in one console:

```bash
CUDA_VISIBLE_DEVICES=0 \
python deploy/demo_server.py \
--trainer_count 1 \
--host_ip localhost \
--host_port 8086
```

On the machine that runs the demo's client (it may be a different machine), please do the following installation before moving on.

For example, on Mac OS X:

```bash
brew install portaudio
pip install pyaudio
pip install pynput
```

Then to start the client, please run this in another console:

```bash
CUDA_VISIBLE_DEVICES=0 \
python -u deploy/demo_client.py \
--host_ip 'localhost' \
--host_port 8086
```

Now, in the client console, press and hold the `whitespace` key and start speaking. When you finish your utterance, release the key to let the speech-to-text results show in the console. To quit the client, just press the `ESC` key.

Notice that `deploy/demo_client.py` must be run on a machine with a microphone device, while `deploy/demo_server.py` can be run on one without any audio recording hardware, e.g. any remote server machine. If the server and client run on two separate machines, just be careful to set the `host_ip` and `host_port` arguments to the actual accessible IP address and port; nothing needs to be changed if they run on a single machine.

Please also refer to `examples/mandarin/run_demo_server.sh`, which will first download a pre-trained Mandarin model (trained with 3000 hours of internal speech data) and then start the demo server with that model. By running `examples/mandarin/run_demo_client.sh`, you can speak Mandarin to test it. If you would like to try some other models, just update the `--model_path` argument in the script.

For more help on arguments:

```bash
python deploy/demo_server.py --help
python deploy/demo_client.py --help
```

## Released Models

#### Speech Models Released

Language  | Model Name | Training Data | Hours of Speech
:-----------: | :------------: | :----------: |  -------:
English  | [LibriSpeech Model](https://deepspeech.bj.bcebos.com/eng_models/librispeech_model.tar.gz) | [LibriSpeech Dataset](http://www.openslr.org/12/) | 960 h
English  | [BaiduEN8k Model](https://deepspeech.bj.bcebos.com/demo_models/baidu_en8k_model.tar.gz) | Baidu Internal English Dataset | 8628 h
Mandarin | [Aishell Model](https://deepspeech.bj.bcebos.com/mandarin_models/aishell_model.tar.gz) | [Aishell Dataset](http://www.openslr.org/33/) | 151 h
Mandarin | [BaiduCN1.2k Model](https://deepspeech.bj.bcebos.com/demo_models/baidu_cn1.2k_model.tar.gz) | Baidu Internal Mandarin Dataset | 1204 h

#### Language Models Released

Language Model | Training Data | Token Type | Size | Descriptions
:-------------:| :------------:| :-----: | -----: | :-----------------
[English LM](https://deepspeech.bj.bcebos.com/en_lm/common_crawl_00.prune01111.trie.klm) |  [CommonCrawl(en.00)](http://web-language-models.s3-website-us-east-1.amazonaws.com/ngrams/en/deduped/en.00.deduped.xz) | Word-based | 8.3 GB | Pruned with 0 1 1 1 1; <br/> About 1.85 billion n-grams; <br/> 'trie'  binary with '-a 22 -q 8 -b 8'
[Mandarin LM Small](https://deepspeech.bj.bcebos.com/zh_lm/zh_giga.no_cna_cmn.prune01244.klm) | Baidu Internal Corpus | Char-based | 2.8 GB | Pruned with 0 1 2 4 4; <br/> About 0.13 billion n-grams; <br/> 'probing' binary with default settings
[Mandarin LM Large](https://deepspeech.bj.bcebos.com/zh_lm/zhidao_giga.klm) | Baidu Internal Corpus | Char-based | 70.4 GB | No Pruning; <br/> About 3.7 billion n-grams; <br/> 'probing' binary with default settings

## Experiments and Benchmarks

#### Benchmark Results for English Models (Word Error Rate)

Test Set                | LibriSpeech Model | BaiduEN8K Model
:---------------------  | ---------------:  | -------------------:
LibriSpeech Test-Clean  |   6.85            |   5.41
LibriSpeech Test-Other  |   21.18           |   13.85
VoxForge American-Canadian | 12.12          |   7.13
VoxForge Commonwealth   |   19.82           |   14.93
VoxForge European       |   30.15           |   18.64
VoxForge Indian         |   53.73           |   25.51
Baidu Internal Testset  |   40.75           |   8.48

For reproducing the benchmark results on VoxForge data, we provide a script to download the data and generate the VoxForge dialect manifest files. Please go to `data/voxforge` and execute `sh run_data.sh` to get them. Notice that VoxForge data may keep updating, so the generated manifest files may differ from those we evaluated on.

#### Benchmark Results for Mandarin Model (Character Error Rate)

Test Set                |  BaiduCN1.2k Model
:---------------------  |  -------------------:
Baidu Internal Testset  |   12.64

#### Acceleration with Multiple GPUs

We compare the training time with 1, 2, 4, 8 and 16 Tesla K40m GPUs (on a subset of LibriSpeech samples whose audio durations are between 6.0 and 7.0 seconds), and a **near-linear** acceleration with multiple GPUs has been achieved. In the following figure, the time (in seconds) spent on training is printed on the blue bars.

<img src="docs/images/multi_gpu_speedup.png" width=450><br/>

| # of GPU  | Acceleration Rate |
| --------  | --------------:   |
| 1         | 1.00 X |
| 2         | 1.97 X |
| 4         | 3.74 X |
| 8         | 6.21 X |
|16         | 10.70 X |

`tools/profile.sh` provides such a profiling tool.

## Questions and Help

You are welcome to submit questions and bug reports in [Github Issues](https://github.com/PaddlePaddle/DeepSpeech/issues). You are also welcome to contribute to this project.