# DeepSpeech2 on PaddlePaddle

*DeepSpeech2 on PaddlePaddle* is an open-source implementation of an end-to-end Automatic Speech Recognition (ASR) engine, based on [Baidu's Deep Speech 2 paper](http://proceedings.mlr.press/v48/amodei16.pdf) and built on the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) platform. Our vision is to empower both industrial application and academic research on speech recognition via an easy-to-use, efficient and scalable implementation, including training, inference and testing modules, distributed [PaddleCloud](https://github.com/PaddlePaddle/cloud) training, and demo deployment. In addition, several pre-trained models for both English and Mandarin are also released.

## Table of Contents
- [Installation](#installation)
- [Getting Started](#getting-started)
- [Data Preparation](#data-preparation)
- [Training a Model](#training-a-model)
- [Data Augmentation Pipeline](#data-augmentation-pipeline)
- [Inference and Evaluation](#inference-and-evaluation)
- [Running in Docker Container](#running-in-docker-container)
- [Distributed Cloud Training](#distributed-cloud-training)
- [Hyper-parameters Tuning](#hyper-parameters-tuning)
- [Training for Mandarin Language](#training-for-mandarin-language)
- [Trying Live Demo with Your Own Voice](#trying-live-demo-with-your-own-voice)
- [Released Models](#released-models)
- [Experiments and Benchmarks](#experiments-and-benchmarks)
- [Questions and Help](#questions-and-help)

## Installation

To avoid the trouble of environment setup, [running in Docker container](#running-in-docker-container) is highly recommended. Otherwise, follow the guidelines below to install the dependencies manually.

### Prerequisites
- Only Python 2.7 is supported
- The latest version of PaddlePaddle (please refer to the [Installation Guide](https://github.com/PaddlePaddle/Paddle#installation))

### Setup

```bash
sudo apt-get install -y pkg-config libflac-dev libogg-dev libvorbis-dev swig
git clone https://github.com/PaddlePaddle/DeepSpeech.git
cd DeepSpeech
sh setup.sh
```

## Getting Started

Several shell scripts in `./examples` will help you quickly try out most major modules, including data preparation, model training, case inference and model evaluation, with a few public datasets (e.g. [LibriSpeech](http://www.openslr.org/12/), [Aishell](http://www.openslr.org/33)). Reading these examples will also help you understand how to make it work with your own data.

Some of the scripts in `./examples` are configured with 8 GPUs. If you don't have 8 GPUs available, please modify `CUDA_VISIBLE_DEVICES` and `--trainer_count`. If you don't have any GPU available, please set `--use_gpu` to False to use CPUs instead. Besides, if an out-of-memory problem occurs, just reduce `--batch_size` to fit.

Let's take a tiny sampled subset of the [LibriSpeech dataset](http://www.openslr.org/12/) as an example.

- Go to the directory

    ```bash
    cd examples/tiny
    ```

    Notice that this is only a toy example with a tiny sampled subset of LibriSpeech. If you would like to try the complete dataset (training would take several days), please go to `examples/librispeech` instead.
- Prepare the data

    ```bash
    sh run_data.sh
    ```

    `run_data.sh` will download the dataset, generate manifests, collect the normalizer's statistics and build the vocabulary. Once the data preparation is done, you will find the data (only part of LibriSpeech) downloaded in `~/.cache/paddle/dataset/speech/libri` and the corresponding manifest files generated in `./data/tiny`, as well as a mean-stddev file and a vocabulary file. This step only has to be run the very first time you use this dataset; the results are reusable for all further experiments.
- Train your own ASR model

    ```bash
    sh run_train.sh
    ```

    `run_train.sh` will start a training job, with training logs printed to stdout and a model checkpoint for every pass/epoch saved to `./checkpoints/tiny`. These checkpoints can be used for resuming training, inference, evaluation and deployment.
- Case inference with an existing model

    ```bash
    sh run_infer.sh
    ```

    `run_infer.sh` will show some speech-to-text decoding results for several (default: 10) samples with the trained model. The performance might not be good yet, as the current model is only trained with a toy subset of LibriSpeech. To see the results with a better model, you can download a well-trained model (trained for several days on the complete LibriSpeech) and do the inference:

    ```bash
    sh run_infer_golden.sh
    ```
- Evaluate an existing model

    ```bash
    sh run_test.sh
    ```

    `run_test.sh` will evaluate the model using the Word Error Rate (or Character Error Rate) metric. Similarly, you can also download a well-trained model and test its performance:

    ```bash
    sh run_test_golden.sh
    ```

More detailed information is provided in the following sections. Wish you a happy journey with the *DeepSpeech2 on PaddlePaddle* ASR engine!
## Data Preparation

### Generate Manifest

*DeepSpeech2 on PaddlePaddle* accepts a textual **manifest** file as its data set interface. A manifest file summarizes a set of speech data, with each line containing the metadata (e.g. file path, transcription, duration) of one audio clip in [JSON](http://www.json.org/) format, such as:

```
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0001.flac", "duration": 3.275, "text": "stuff it into you his belly counselled him"}
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0007.flac", "duration": 4.275, "text": "a cold lucid indifference reigned in his soul"}
```

To use your custom data, you only need to generate such manifest files to summarize the dataset. Given such summarized manifests, training, inference and all other modules can locate the audio files as well as their metadata, including the transcription labels.

For how to generate such manifest files, please refer to `data/librispeech/librispeech.py`, which will download the data and generate manifest files for the LibriSpeech dataset.
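
For custom data, a minimal manifest-generation sketch might look like the following (a hypothetical illustration, not a script from this repo; it assumes WAV audio files and a plain-text transcript with the same basename next to each audio file):

```python
import json
import os
import wave

def create_manifest(audio_dir, manifest_path):
    """Write one JSON line per audio clip: file path, duration and transcription."""
    with open(manifest_path, 'w') as out_file:
        for fname in sorted(os.listdir(audio_dir)):
            if not fname.endswith('.wav'):
                continue
            audio_path = os.path.join(audio_dir, fname)
            # Duration in seconds, read from the WAV header.
            wav_file = wave.open(audio_path)
            duration = wav_file.getnframes() / float(wav_file.getframerate())
            wav_file.close()
            # Transcript assumed to sit next to the audio file, e.g. foo.wav -> foo.txt.
            with open(os.path.splitext(audio_path)[0] + '.txt') as txt_file:
                text = txt_file.read().strip().lower()
            out_file.write(json.dumps({
                "audio_filepath": audio_path,
                "duration": duration,
                "text": text
            }) + '\n')

create_manifest('my_audio_dir', 'data/custom/manifest.train')  # example paths only
```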

### Compute Mean & Stddev for Normalizer

To perform z-score normalization (zero-mean, unit stddev) upon audio features, we have to estimate the mean and standard deviation of the features in advance, using some training samples:

```bash
python tools/compute_mean_std.py \
--num_samples 2000 \
--specgram_type linear \
--manifest_paths data/librispeech/manifest.train \
--output_path data/librispeech/mean_std.npz
```

It will compute the mean and standard deviation of the power spectrum features from 2000 randomly sampled audio clips listed in `data/librispeech/manifest.train` and save the results to `data/librispeech/mean_std.npz` for further use.
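
Conceptually, the saved statistics are later applied to each utterance's features as a z-score transform. A minimal sketch of that idea (for illustration only; it assumes the `.npz` file stores the statistics under the keys `mean` and `std`, which may differ from the actual keys used by the data provider):

```python
import numpy as np

stats = np.load('data/librispeech/mean_std.npz')
mean = stats['mean'].reshape(-1)  # assumed key name, for illustration
std = stats['std'].reshape(-1)    # assumed key name, for illustration

def normalize(features, eps=1e-14):
    """Apply z-score normalization to a [num_frames, feature_dim] feature matrix."""
    return (features - mean) / (std + eps)

# Example: normalize a dummy feature matrix with the matching feature dimension.
dummy_features = np.random.rand(100, mean.size)
normalized = normalize(dummy_features)
```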


### Build Vocabulary

A vocabulary of possible characters is required to convert transcriptions into lists of token indices for training and, in decoding, to convert lists of indices back into text. Such a character-based vocabulary can be built with `tools/build_vocab.py`.

```bash
python tools/build_vocab.py \
--count_threshold 0 \
--vocab_path data/librispeech/eng_vocab.txt \
--manifest_paths data/librispeech/manifest.train
```

It will write a vocabulary file `data/librispeech/eng_vocab.txt` built from all the transcription text in `data/librispeech/manifest.train`, without vocabulary truncation (`--count_threshold 0`).
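
During training and decoding, this vocabulary is simply a bidirectional mapping between characters and integer indices. A minimal sketch of the idea (an illustration only; it assumes one character per line in the vocabulary file):

```python
# Load the vocabulary: one character per line (assumed format, for illustration).
with open('data/librispeech/eng_vocab.txt') as vocab_file:
    vocab = [line.rstrip('\n') for line in vocab_file]

char_to_index = dict((ch, i) for i, ch in enumerate(vocab))
index_to_char = dict((i, ch) for i, ch in enumerate(vocab))

def text_to_ids(text):
    """Convert a transcription into a list of token indices."""
    return [char_to_index[ch] for ch in text if ch in char_to_index]

def ids_to_text(ids):
    """Convert a list of token indices back into text."""
    return ''.join(index_to_char[i] for i in ids)

print(ids_to_text(text_to_ids("a cold lucid indifference")))
```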

### More Help

For more help on arguments:

```bash
python data/librispeech/librispeech.py --help
python tools/compute_mean_std.py --help
python tools/build_vocab.py --help
```

## Training a Model

`train.py` is the main caller of the training module. Examples of usage are shown below.

- Start training from scratch with 8 GPUs:

    ```
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py --trainer_count 8
    ```

- Start training from scratch with 16 CPUs:

    ```
    python train.py --use_gpu False --trainer_count 16
    ```
- Resume training from a checkpoint:

    ```
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python train.py \
    --init_model_path CHECKPOINT_PATH_TO_RESUME_FROM
    ```

For more help on arguments:

```bash
python train.py --help
```
or refer to `examples/librispeech/run_train.sh`.

## Data Augmentation Pipeline

Data augmentation is often a highly effective technique for boosting deep learning performance. We augment the speech data by synthesizing new audio with small random perturbations (label-invariant transformations) applied to the raw audio. You don't have to do the synthesis yourself, as it is already embedded into the data provider; it is done on the fly and randomly for each epoch during training.

Six optional augmentation components are provided; they can be selected, configured and inserted into the processing pipeline:

  - Volume Perturbation
  - Speed Perturbation
  - Shifting Perturbation
  - Online Bayesian normalization
  - Noise Perturbation (requires background noise audio files)
  - Impulse Response (requires impulse response audio files)

In order to inform the trainer which augmentation components are needed and in what order they should be applied, you need to prepare an *augmentation configuration file* in [JSON](http://www.json.org/) format in advance. For example:

```
[{
    "type": "speed",
    "params": {"min_speed_rate": 0.95,
               "max_speed_rate": 1.05},
    "prob": 0.6
},
{
    "type": "shift",
    "params": {"min_shift_ms": -5,
               "max_shift_ms": 5},
    "prob": 0.8
}]
```

When the `--augment_conf_file` argument of `train.py` is set to the path of the above example configuration file, every audio clip in every epoch will be processed as follows: with 60% probability it will first be speed perturbed with a speed rate sampled uniformly between 0.95 and 1.05, and then with 80% probability it will be shifted in time by a randomly sampled offset between -5 ms and 5 ms. Finally, the newly synthesized audio clip will be fed into the feature extractor for further training.
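
Conceptually, the data provider walks through the configured components in order and applies each one with its own probability. A minimal sketch of that behaviour (a simplified illustration, not the project's actual augmentation code; the two perturbation functions are hypothetical placeholders):

```python
import json
import random

def speed_perturb(audio, params):
    # Placeholder: resample audio by a rate drawn from [min_speed_rate, max_speed_rate].
    return audio

def shift_perturb(audio, params):
    # Placeholder: shift audio in time by an offset drawn from [min_shift_ms, max_shift_ms].
    return audio

AUGMENTORS = {"speed": speed_perturb, "shift": shift_perturb}

def augment(audio, config_path):
    """Apply each configured augmentor, in order, with its own probability."""
    with open(config_path) as config_file:
        pipeline = json.load(config_file)
    for stage in pipeline:
        if random.random() < stage["prob"]:
            audio = AUGMENTORS[stage["type"]](audio, stage["params"])
    return audio
```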

For other configuration examples, please refer to `conf/augmentation.config.example`.

Be careful when using data augmentation, as improper augmentation can harm training by enlarging the train-test gap.

## Inference and Evaluation

### Prepare Language Model

A language model is required to improve the decoder's performance. We have prepared two language models (with lossy compression) for users to download and try: one for English and the other for Mandarin. Users can simply run the following to download the prepared language models:

```bash
cd models/lm
sh download_lm_en.sh
sh download_lm_ch.sh
```

If you wish to train your own, better language model, please refer to [KenLM](https://github.com/kpu/kenlm) for tutorials. Here we provide some tips on how we prepared our English and Mandarin language models; you can take them as a reference when training your own.

#### English LM

The English corpus is from the [Common Crawl Repository](http://commoncrawl.org) and you can download it from [statmt](http://data.statmt.org/ngrams/deduped_en). We use part en.00 to train our English language model. There are several preprocessing steps before training:

  * Characters not in \[A-Za-z0-9\s'\] (\s represents whitespace characters) are removed, and Arabic numerals are converted to English words (e.g. 1000 to one thousand).
  * Repeated whitespace characters are squeezed into one, and leading whitespace characters are removed. Notice that all transcriptions are lowercase, so all characters are converted to lowercase.
  * The top 400,000 most frequent words are selected to build the vocabulary, and the rest are replaced with 'UNKNOWNWORD'.
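
A minimal sketch of this kind of text normalization (a simplified illustration only; the number-to-words conversion is left to a caller-supplied function, and the actual preprocessing scripts may differ):

```python
import re

def normalize_line(line, number_to_words):
    """Normalize one corpus line for English LM training."""
    line = line.lower()
    # Spell out Arabic numerals, e.g. "1000" -> "one thousand".
    line = re.sub(r'\d+', lambda m: number_to_words(int(m.group())), line)
    # Drop characters outside [a-z0-9\s'].
    line = re.sub(r"[^a-z0-9\s']", '', line)
    # Squeeze repeated whitespace and strip leading/trailing whitespace.
    line = re.sub(r'\s+', ' ', line).strip()
    return line

print(normalize_line("He paid 1000 DOLLARS -- at once!", lambda n: "one thousand"))
```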

Now the preprocessing is done and we get a clean corpus to train the language model. Our released language model is trained with the arguments '-o 5 --prune 0 1 1 1 1'. '-o 5' means the maximum order of the language model is 5. '--prune 0 1 1 1 1' represents the count thresholds for each order; more specifically, it prunes singletons for orders two and higher. To save disk storage we convert the arpa file to a 'trie' binary file with the arguments '-a 22 -q 8 -b 8'. '-a' represents the maximum number of leading bits of pointers in the 'trie' to chop. '-q -b' are quantization parameters for probability and backoff.

#### Mandarin LM

Different from the English language model, the Mandarin language model is character-based, where each token is a Chinese character. We use an internal corpus to train the released Mandarin language models. The corpus contains billions of tokens. The preprocessing differs slightly from that for English, and the main steps include:

  * Leading and trailing whitespace characters are removed.
  * English and Chinese punctuation marks are removed.
  * A whitespace character is inserted between every two tokens (characters).
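
A minimal sketch of this character-level tokenization (an illustration only; the punctuation set below is just an example):

```python
# -*- coding: utf-8 -*-
import re

def tokenize_mandarin(line):
    """Strip the line, drop punctuation, and separate every character with a space."""
    line = line.strip()
    # Example punctuation set only; the real cleaning removes both English and Chinese punctuation.
    line = re.sub(u'[，。！？、；：,.!?;:"\'()（）]', u'', line)
    return u' '.join(ch for ch in line if not ch.isspace())

print(tokenize_mandarin(u'  今天天气很好。  ').encode('utf-8'))
```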

Please notice that the released language models only contain simplified Chinese characters. Once the preprocessing is done, we can begin to train the language model. The key training arguments are '-o 5 --prune 0 1 2 4 4' for the small LM and '-o 5' for the large LM. Please refer to the section above for the meaning of each argument. We also convert the arpa file to a binary file using default settings.

### Speech-to-text Inference

An inference module caller `infer.py` is provided to infer, decode and visualize speech-to-text results for several given audio clips. It may help you get an intuitive and qualitative feel for the ASR model's performance.

- Inference with GPU:

    ```bash
    CUDA_VISIBLE_DEVICES=0 python infer.py --trainer_count 1
    ```

- Inference with CPUs:

    ```bash
    python infer.py --use_gpu False --trainer_count 12
    ```

We provide two types of CTC decoders: a *CTC greedy decoder* and a *CTC beam search decoder*. The *CTC greedy decoder* is an implementation of the simple best-path decoding algorithm, selecting at each timestep the most likely token, and is thus greedy and locally optimal. The [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873), in contrast, utilizes a heuristic breadth-first graph search to reach near-global optimality; it also requires a pre-trained KenLM language model for better scoring and ranking. The decoder type can be set with the argument `--decoding_method`.
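
For intuition, CTC greedy (best-path) decoding just takes the most likely token at each timestep, collapses consecutive repeats and drops the blank symbol. A minimal sketch (an illustration only, not the project's decoder; it assumes the blank token sits at index 0, while the actual implementation may place it elsewhere):

```python
import numpy as np

def ctc_greedy_decode(probs, vocabulary, blank_index=0):
    """Best-path decoding: argmax per timestep, collapse repeats, remove blanks.

    probs: [num_timesteps, num_classes] per-timestep token probabilities.
    vocabulary: list of characters, excluding the blank token.
    """
    best_path = np.argmax(probs, axis=1)
    decoded = []
    previous = None
    for index in best_path:
        if index != previous and index != blank_index:
            decoded.append(vocabulary[index - 1])  # shift by one because index 0 is the blank
        previous = index
    return ''.join(decoded)

# Toy example with a 3-character vocabulary ('a', 'b', 'c') plus the blank at index 0.
vocabulary = ['a', 'b', 'c']
probs = np.array([[0.10, 0.80, 0.05, 0.05],   # 'a'
                  [0.10, 0.70, 0.10, 0.10],   # 'a' (repeat, collapsed)
                  [0.90, 0.05, 0.03, 0.02],   # blank
                  [0.10, 0.10, 0.10, 0.70]])  # 'c'
print(ctc_greedy_decode(probs, vocabulary))  # -> "ac"
```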

For more help on arguments:

```
python infer.py --help
```
or refer to `examples/librispeech/run_infer.sh`.

### Evaluate a Model

To evaluate a model's performance quantitatively, please run:

- Evaluation with GPUs:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python test.py --trainer_count 8
    ```

- Evaluation with CPUs:

    ```bash
    python test.py --use_gpu False --trainer_count 12
    ```

The error rate (default: word error rate; can be set with `--error_rate_type`) will be printed.
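
For reference, the word error rate is the word-level edit distance between the reference transcription and the hypothesis, divided by the number of reference words. A minimal sketch of the computation (an illustration only, not the project's error-rate utilities):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / number of reference words."""
    ref_words = reference.split()
    hyp_words = hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    distance = [[0] * (len(hyp_words) + 1) for _ in range(len(ref_words) + 1)]
    for i in range(len(ref_words) + 1):
        distance[i][0] = i
    for j in range(len(hyp_words) + 1):
        distance[0][j] = j
    for i in range(1, len(ref_words) + 1):
        for j in range(1, len(hyp_words) + 1):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            distance[i][j] = min(distance[i - 1][j] + 1,         # deletion
                                 distance[i][j - 1] + 1,         # insertion
                                 distance[i - 1][j - 1] + cost)  # substitution
    return float(distance[-1][-1]) / len(ref_words)

print(word_error_rate("a cold lucid indifference reigned in his soul",
                      "a cold lucid indifference rained in his soul"))  # 1 / 8 = 0.125
```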

For more help on arguments:

```bash
python test.py --help
```
or refer to `examples/librispeech/run_test.sh`.

## Hyper-parameters Tuning

The hyper-parameters $\alpha$ (language model weight) and $\beta$ (word insertion weight) for the [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873) often have a significant impact on the decoder's performance. It is better to re-tune them on the validation set whenever the acoustic model is updated.

`tools/tune.py` performs a 2-D grid search over the hyper-parameters $\alpha$ and $\beta$. You must provide the ranges of $\alpha$ and $\beta$, as well as the number of attempts for each (see the sketch after the examples below).

- Tuning with GPU:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python tools/tune.py \
    --trainer_count 8 \
    --alpha_from 1.0 \
    --alpha_to 3.2 \
    --num_alphas 45 \
    --beta_from 0.1 \
    --beta_to 0.45 \
    --num_betas 8
    ```

- Tuning with CPU:

    ```bash
    python tools/tune.py --use_gpu False
    ```
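
The `--*_from`, `--*_to` and `--num_*` arguments simply define evenly spaced grid points for each hyper-parameter; every ($\alpha$, $\beta$) pair on the grid is then decoded and scored. A minimal sketch of the search procedure (an illustration only; `decode_and_score` is a hypothetical stand-in for running the beam search decoder and measuring WER/CER):

```python
import numpy as np

def grid_search(decode_and_score,
                alpha_from=1.0, alpha_to=3.2, num_alphas=45,
                beta_from=0.1, beta_to=0.45, num_betas=8):
    """Evaluate every (alpha, beta) pair on the grid and return the best one."""
    alphas = np.linspace(alpha_from, alpha_to, num_alphas)
    betas = np.linspace(beta_from, beta_to, num_betas)
    results = []
    for alpha in alphas:
        for beta in betas:
            error_rate = decode_and_score(alpha, beta)  # e.g. WER on the validation batches
            results.append((error_rate, alpha, beta))
    return min(results)

# Toy stand-in for decoding: a smooth error surface with its minimum near (2.0, 0.3).
print(grid_search(lambda a, b: (a - 2.0) ** 2 + (b - 0.3) ** 2))
```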

The grid search will print the WER (word error rate) or CER (character error rate) at each point in the hyper-parameter space, and can optionally draw the error surface. A proper hyper-parameter range should include the global minimum of the error surface for WER/CER, as illustrated in the following figure.

<p align="center">
<img src="docs/images/tuning_error_surface.png" width=550>
<br/>An example error surface for tuning on the dev-clean set of LibriSpeech
</p>

Usually, as the figure shows, the variation of the language model weight ($\alpha$) significantly affects the performance of the CTC beam search decoder. A better procedure is to first tune on several data batches (the number can be specified) to find the proper range of the hyper-parameters, and then switch to the whole validation set to carry out an accurate tuning.

After tuning, you can reset $\alpha$ and $\beta$ in the inference and evaluation modules to see if they really help improve ASR performance. For more help on arguments:

```bash
python tools/tune.py --help
```
or refer to `examples/librispeech/run_tune.sh`.

## Running in Docker Container

Docker is an open source tool to build, ship, and run distributed applications in an isolated environment. A Docker image for this project has been provided on [hub.docker.com](https://hub.docker.com) with all the dependencies installed, including the pre-built PaddlePaddle, CTC decoders, and other necessary Python and third-party packages. This Docker image requires the support of an NVIDIA GPU, so please make sure one is available and that [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) has been installed.

Take the following steps to launch the Docker image:

- Download the Docker image

```bash
nvidia-docker pull paddlepaddle/models:deep-speech-2
```

- Clone this repository

```
git clone https://github.com/PaddlePaddle/models.git
```

- Run the Docker image

```bash
sudo nvidia-docker run -it -v $(pwd)/models:/models paddlepaddle/models:deep-speech-2 /bin/bash
```

Now go back and start from the [Getting Started](#getting-started) section; you can run training, inference and hyper-parameter tuning in the same way inside the Docker container.

## Distributed Cloud Training

We also provide a cloud training module for users to perform distributed cluster training on [PaddleCloud](https://github.com/PaddlePaddle/cloud), achieving a much faster training speed with multiple machines. To get started, please first install the PaddleCloud client and register a PaddleCloud account, as described in [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#%E4%B8%8B%E8%BD%BD%E5%B9%B6%E9%85%8D%E7%BD%AEpaddlecloud).

Please take the following steps to submit a training job:

- Go to the directory:

    ```bash
    cd cloud
    ```
- Upload data:

    Data must be uploaded to the PaddleCloud filesystem to be accessed within a cloud job. `pcloud_upload_data.sh` helps with the data packing and uploading:

    ```bash
    sh pcloud_upload_data.sh
    ```

    Given input manifests, `pcloud_upload_data.sh` will:

    - Extract the audio files listed in the input manifests.
    - Pack them into a specified number of tar files.
    - Upload these tar files to PaddleCloud filesystem.
    - Create cloud manifests by replacing local filesystem paths with PaddleCloud filesystem paths. New manifests will be used to inform the cloud jobs of audio files' location and their meta information.

    This only needs to be done once, before the very first cloud training. Afterwards, the data is kept persistent on the cloud filesystem and is reusable for further job submissions.

    For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).

- Configure training arguments:

    Configure the cloud job parameters in `pcloud_submit.sh` (e.g. `NUM_NODES`, `NUM_GPUS`, `CLOUD_TRAIN_DIR`, `JOB_NAME` etc.) and then configure other hyper-parameters for training in `pcloud_train.sh` (just as you do for local training).

    For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).

- Submit the job:

    By running:

    ```bash
    sh pcloud_submit.sh
    ```
    a training job will be submitted to PaddleCloud, with the job name printed to the console.

- Get training logs:

    Run this to list all the jobs you have submitted, as well as their running status:

    ```bash
    paddlecloud get jobs
    ```

    Run this to print the corresponding job's logs:

    ```bash
    paddlecloud logs -n 10000 $REPLACED_WITH_YOUR_ACTUAL_JOB_NAME
    ```

For more information about the usage of PaddleCloud, please refer to [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#提交任务).

For more information about the DeepSpeech2 training on PaddleCloud, please refer to
[Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).

## Training for Mandarin Language

The key steps of training for Mandarin are the same as those for English, and we have provided an example for Mandarin training with Aishell in `examples/aishell`. As mentioned above, please execute `sh run_data.sh`, `sh run_train.sh`, `sh run_test.sh` and `sh run_infer.sh` to do data preparation, training, testing and inference, respectively. We have also prepared a pre-trained model (downloaded by `./models/aishell/download_model.sh`) for users to try with `sh run_infer_golden.sh` and `sh run_test_golden.sh`. Notice that, different from the English LM, the Mandarin LM is character-based; please run `tools/tune.py` to find an optimal setting.

## Trying Live Demo with Your Own Voice

Until now, the ASR model has been trained and tested qualitatively (`infer.py`) and quantitatively (`test.py`) with existing audio files, but it has not yet been tried with your own speech. `deploy/demo_server.py` and `deploy/demo_client.py` help you quickly build a real-time demo ASR engine with the trained model, enabling you to test and play around with the demo using your own voice.

To start the demo's server, please run this in one console:

```bash
CUDA_VISIBLE_DEVICES=0 \
python deploy/demo_server.py \
--trainer_count 1 \
--host_ip localhost \
--host_port 8086
```

On the machine (which need not be the same machine) that runs the demo's client, please do the following installation before moving on.

For example, on Mac OS X:

```bash
brew install portaudio
pip install pyaudio
pip install pynput
```

Then to start the client, please run this in another console:

```bash
CUDA_VISIBLE_DEVICES=0 \
python -u deploy/demo_client.py \
--host_ip 'localhost' \
--host_port 8086
```

Now, in the client console, press and hold the `whitespace` key and start speaking. When you finish your utterance, release the key and the speech-to-text results will be shown in the console. To quit the client, just press the `ESC` key.

Notice that `deploy/demo_client.py` must be run on a machine with a microphone device, while `deploy/demo_server.py` can be run on one without any audio recording hardware, e.g. any remote server machine. Just be careful to set the `host_ip` and `host_port` arguments to an actually accessible IP address and port if the server and client are running on two separate machines. Nothing needs to be changed if they are running on a single machine.

Please also refer to `examples/mandarin/run_demo_server.sh`, which will first download a pre-trained Mandarin model (trained with 3000 hours of internal speech data) and then start the demo server with the model. By running `examples/mandarin/run_demo_client.sh`, you can speak Mandarin to test it. If you would like to try other models, just update the `--model_path` argument in the script.

For more help on arguments:

```bash
python deploy/demo_server.py --help
python deploy/demo_client.py --help
```

## Released Models

#### Speech Model Released

Language  | Model Name | Training Data | Hours of Speech
:-----------: | :------------: | :----------: |  -------:
English  | [LibriSpeech Model](http://cloud.dlnel.org/filepub/?uuid=117cde63-cd59-4948-8b80-df782555f7d6) | [LibriSpeech Dataset](http://www.openslr.org/12/) | 960 h
English  | [BaiduEN8k Model](http://cloud.dlnel.org/filepub/?uuid=37a1c211-ec47-494c-973c-31437a10ae90) | Baidu Internal English Dataset | 8628 h
Mandarin | [Aishell Model](http://cloud.dlnel.org/filepub/?uuid=61de63b9-6904-4809-ad95-0cc5104ab973) | [Aishell Dataset](http://www.openslr.org/33/) | 151 h
Mandarin | [BaiduCN1.2k Model](to-be-added) | Baidu Internal Mandarin Dataset | 1204 h

#### Language Model Released

Language Model | Training Data | Token Type | Size | Descriptions
:-------------:| :------------:| :-----: | -----: | :-----------------
[English LM](http://paddlepaddle.bj.bcebos.com/model_zoo/speech/common_crawl_00.prune01111.trie.klm) |  [CommonCrawl(en.00)](http://web-language-models.s3-website-us-east-1.amazonaws.com/ngrams/en/deduped/en.00.deduped.xz) | Word-based | 8.3 GB | Pruned with 0 1 1 1 1; <br/> About 1.85 billion n-grams; <br/> 'trie'  binary with '-a 22 -q 8 -b 8'
[Mandarin LM Small](http://cloud.dlnel.org/filepub/?uuid=d21861e4-4ed6-45bb-ad8e-ae417a43195e) | Baidu Internal Corpus | Char-based | 2.8 GB | Pruned with 0 1 2 4 4; <br/> About 0.13 billion n-grams; <br/> 'probing' binary with default settings
[Mandarin LM Large](http://cloud.dlnel.org/filepub/?uuid=245d02bb-cd01-4ebe-b079-b97be864ec37) | Baidu Internal Corpus | Char-based | 70.4 GB | No Pruning; <br/> About 3.7 billion n-grams; <br/> 'probing' binary with default settings

## Experiments and Benchmarks

#### Benchmark Results for English Models (Word Error Rate)

Test Set                | LibriSpeech Model | BaiduEN8K Model
:---------------------  | ---------------:  | -------------------:
LibriSpeech Test-Clean  |   7.73            |   6.63
LibriSpeech Test-Other  |   23.15           |   16.59
VoxForge American-Canadian | 12.30          |   7.46
VoxForge Commonwealth   |   20.03           |   16.23
VoxForge European       |   30.31           |   20.47
VoxForge Indian         |   55.47           |   28.15
Baidu Internal Testset  |   44.71           |   8.92

To reproduce the benchmark results on VoxForge data, we provide a script to download the data and generate VoxForge dialect manifest files. Please go to `data/voxforge` and execute `sh run_data.sh` to get the VoxForge dialect manifest files. Notice that VoxForge data keeps updating, so the generated manifest files may differ from those we evaluated on.

#### Benchmark Results for Mandarin Model (Character Error Rate)

Test Set                | Aishell Model     | BaiduCN1.2k Model
:---------------------  | ---------------:  | -------------------:
Baidu Internal Testset  |   -               |   15.49

#### Acceleration with Multiple GPUs

We compare the training time with 1, 2, 4, 8 and 16 Tesla K40m GPUs (on a subset of LibriSpeech samples whose audio durations are between 6.0 and 7.0 seconds). It shows that a **near-linear** acceleration with multiple GPUs has been achieved. In the following figure, the training time (in seconds) is printed on the blue bars.

<img src="docs/images/multi_gpu_speedup.png" width=450><br/>

| # of GPU  | Acceleration Rate |
| --------  | --------------:   |
| 1         | 1.00 X |
| 2         | 1.97 X |
| 4         | 3.74 X |
| 8         | 6.21 X |
|16         | 10.70 X |

A profiling tool for such benchmarking is provided in `tools/profile.sh`.

## Questions and Help

You are welcome to submit questions and bug reports in [GitHub Issues](https://github.com/PaddlePaddle/models/issues). You are also welcome to contribute to this project.