# DeepSpeech2 on PaddlePaddle

*DeepSpeech2 on PaddlePaddle* is an open-source implementation of an end-to-end Automatic Speech Recognition (ASR) engine, based on [Baidu's Deep Speech 2 paper](http://proceedings.mlr.press/v48/amodei16.pdf) and built on the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) platform. Our vision is to empower both industrial applications and academic research on speech recognition via an easy-to-use, efficient and scalable implementation, including training, inference and testing modules, distributed [PaddleCloud](https://github.com/PaddlePaddle/cloud) training, and demo deployment. In addition, several pre-trained models for both English and Mandarin have been released.

## Table of Contents
- [Installation](#installation)
- [Getting Started](#getting-started)
- [Data Preparation](#data-preparation)
- [Training a Model](#training-a-model)
- [Data Augmentation Pipeline](#data-augmentation-pipeline)
- [Inference and Evaluation](#inference-and-evaluation)
- [Running in Docker Container](#running-in-docker-container)
- [Distributed Cloud Training](#distributed-cloud-training)
- [Hyper-parameters Tuning](#hyper-parameters-tuning)
- [Training for Mandarin Language](#training-for-mandarin-language)
- [Trying Live Demo with Your Own Voice](#trying-live-demo-with-your-own-voice)
- [Released Models](#released-models)
- [Experiments and Benchmarks](#experiments-and-benchmarks)
- [Questions and Help](#questions-and-help)

## Installation

Because this project was developed with the PaddlePaddle V2 API, which is no longer officially maintained, we only support [running it in a Docker container](#running-in-docker-container) rather than building the environment from source. We are going to release an update to the latest Paddle Fluid API very soon, so please keep an eye on this project.

## Getting Started

Several shell scripts in `./examples` will help you quickly try out the major modules, including data preparation, model training, case inference and model evaluation, on a few public datasets (e.g. [LibriSpeech](http://www.openslr.org/12/), [Aishell](http://www.openslr.org/33)). Reading these examples will also help you understand how to make the engine work with your own data.

Some of the scripts in `./examples` are configured with 8 GPUs. If you don't have 8 GPUs available, please modify `CUDA_VISIBLE_DEVICES` and `--trainer_count` accordingly, as sketched below. If you don't have any GPU available, please set `--use_gpu` to False to use CPUs instead. If an out-of-memory problem occurs, just reduce `--batch_size` to fit.
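
For instance, a hedged sketch of how a 2-GPU machine might be configured, assuming the script invokes `train.py` as shown in the [Training a Model](#training-a-model) section (the batch size value here is purely illustrative):

```bash
# Adaptation sketch: 8-GPU defaults scaled down to 2 GPUs
CUDA_VISIBLE_DEVICES=0,1 \
python train.py \
--trainer_count 2 \
--batch_size 32
```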

Let's take a tiny sampled subset of the [LibriSpeech dataset](http://www.openslr.org/12/) as an example.

- Go to the directory

    ```bash
    cd examples/tiny
    ```

    Notice that this is only a toy example with a tiny sampled subset of LibriSpeech. If you would like to try the complete dataset (training would take several days), please go to `examples/librispeech` instead.
- Prepare the data

    ```bash
    sh run_data.sh
    ```

    `run_data.sh` will download the dataset, generate manifests, collect the normalizer's statistics and build the vocabulary. Once data preparation is done, you will find the data (only part of LibriSpeech) downloaded to `~/.cache/paddle/dataset/speech/libri`, the corresponding manifest files generated in `./data/tiny`, and a mean-stddev file and a vocabulary file alongside them. This step only has to be run the very first time you use this dataset; its outputs are reusable for all further experiments.
- Train your own ASR model

    ```bash
    sh run_train.sh
    ```

    `run_train.sh` will start a training job, with training logs printed to stdout and a model checkpoint saved to `./checkpoints/tiny` for every pass/epoch. These checkpoints can be used for resuming training, inference, evaluation and deployment.
- Case inference with an existing model

    ```bash
    sh run_infer.sh
    ```

    `run_infer.sh` will show speech-to-text decoding results for several (default: 10) samples with the trained model. The performance might not be good for now, as the current model is only trained on a toy subset of LibriSpeech. To see results with a better model, you can download a well-trained model (trained for several days on the complete LibriSpeech) and run inference with it:

    ```bash
    sh run_infer_golden.sh
    ```
- Evaluate an existing model

    ```bash
    sh run_test.sh
    ```

    `run_test.sh` will evaluate the model with the Word Error Rate (or Character Error Rate) metric. Similarly, you can also download a well-trained model and test its performance:

    ```bash
    sh run_test_golden.sh
    ```

More detailed information is provided in the following sections. We wish you a happy journey with the *DeepSpeech2 on PaddlePaddle* ASR engine!

## Data Preparation

### Generate Manifest

*DeepSpeech2 on PaddlePaddle* accepts a textual **manifest** file as its dataset interface. A manifest file summarizes a set of speech data, with each line containing the meta data (e.g. file path, transcription, duration) of one audio clip, in [JSON](http://www.json.org/) format, such as:

```
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0001.flac", "duration": 3.275, "text": "stuff it into you his belly counselled him"}
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0007.flac", "duration": 4.275, "text": "a cold lucid indifference reigned in his soul"}
```

To use your custom data, you only need to generate such manifest files to summarize the dataset. Given the manifests, training, inference and all other modules can locate the audio files, as well as their meta data, including the transcription labels.

For how to generate such manifest files, please refer to `data/librispeech/librispeech.py`, which downloads the data and generates manifest files for the LibriSpeech dataset. A minimal sketch for custom data is given below.
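
As a hedged sketch only: assuming SoX's `soxi` is installed and each audio clip has a sibling `.txt` transcription file (all paths here are hypothetical), a manifest could be generated like this:

```bash
# Emit one JSON line per clip: file path, duration (seconds), transcription.
# Assumes plain lowercase transcriptions with no characters needing JSON escaping.
for f in /path/to/wavs/*.wav; do
  duration=$(soxi -D "$f")      # clip duration in seconds, via SoX
  text=$(cat "${f%.wav}.txt")   # sibling transcription file
  printf '{"audio_filepath": "%s", "duration": %s, "text": "%s"}\n' \
    "$f" "$duration" "$text"
done > data/custom/manifest.train
```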

### Compute Mean & Stddev for Normalizer

To perform z-score normalization (zero mean, unit stddev) on audio features, we have to estimate the mean and standard deviation of the features in advance, using some training samples:

```bash
python tools/compute_mean_std.py \
--num_samples 2000 \
--specgram_type linear \
--manifest_paths data/librispeech/manifest.train \
--output_path data/librispeech/mean_std.npz
```

It will compute the mean and standard deviation of the power spectrum features from 2000 randomly sampled audio clips listed in `data/librispeech/manifest.train`, and save the results to `data/librispeech/mean_std.npz` for further use.
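
As a quick, optional sanity check (assuming NumPy is available), you can list the arrays stored in the generated archive without presuming their key names:

```bash
# Print the names of the arrays saved in the .npz archive
python -c "import numpy as np; print(np.load('data/librispeech/mean_std.npz').files)"
```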

### Build Vocabulary

A vocabulary of possible characters is required to convert a transcription into a list of token indices for training and, in decoding, to convert a list of indices back into text. Such a character-based vocabulary can be built with `tools/build_vocab.py`.

```bash
python tools/build_vocab.py \
--count_threshold 0 \
--vocab_path data/librispeech/eng_vocab.txt \
--manifest_paths data/librispeech/manifest.train
```

It will write a vocabulary file `data/librispeech/eng_vocab.txt` with all transcription text in `data/librispeech/manifest.train`, without vocabulary truncation (`--count_threshold 0`).

### More Help

For more help on arguments:

```bash
python data/librispeech/librispeech.py --help
python tools/compute_mean_std.py --help
python tools/build_vocab.py --help
```

## Training a Model

`train.py` is the main caller of the training module. Examples of usage are shown below.

- Start training from scratch with 8 GPUs:

    ```
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py --trainer_count 8
    ```

- Start training from scratch with 16 CPUs:

    ```
    python train.py --use_gpu False --trainer_count 16
    ```
- Resume training from a checkpoint:

    ```
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python train.py \
    --init_model_path CHECKPOINT_PATH_TO_RESUME_FROM
    ```

For more help on arguments:

```bash
python train.py --help
```
or refer to `examples/librispeech/run_train.sh`.

## Data Augmentation Pipeline

Data augmentation has often been a highly effective technique for boosting deep learning performance. We augment the speech data by synthesizing new audio clips with small random perturbations (label-invariant transformations) applied to the raw audio. You don't have to do the synthesis yourself, as it is already embedded in the data provider and is done on the fly, randomly for each epoch during training.

Six optional augmentation components are provided, to be selected, configured and inserted into the processing pipeline:

  - Volume Perturbation
  - Speed Perturbation
  - Shifting Perturbation
  - Online Bayesian Normalization
  - Noise Perturbation (requires background noise audio files)
  - Impulse Response (requires impulse audio files)

To inform the trainer which augmentation components are needed and in what order they should be applied, an *augmentation configuration file* in [JSON](http://www.json.org/) format must be prepared in advance. For example:

```
[{
    "type": "speed",
    "params": {"min_speed_rate": 0.95,
               "max_speed_rate": 1.05},
    "prob": 0.6
},
{
    "type": "shift",
    "params": {"min_shift_ms": -5,
               "max_shift_ms": 5},
    "prob": 0.8
}]
```

When the `--augment_conf_file` argument of `train.py` is set to the path of the above example configuration file, every audio clip in every epoch will be processed as follows: with 60% probability, it is first speed-perturbed with a speed rate uniformly sampled between 0.95 and 1.05; then, with 80% probability, it is shifted in time by a randomly sampled offset between -5 ms and 5 ms. Finally, the newly synthesized audio clip is fed into the feature extractor for further training.
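
For instance, a hedged sketch of launching training with the configuration above (the configuration file path is hypothetical; the remaining flags appear elsewhere in this README):

```bash
# Enable on-the-fly augmentation with the JSON pipeline configured above
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python train.py \
--trainer_count 8 \
--augment_conf_file conf/my_augmentation.config
```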

For other configuration examples, please refer to `conf/augmentation.config.example`.

Be careful when utilizing data augmentation: improper augmentation can harm training by enlarging the train-test gap.

## Inference and Evaluation

### Prepare Language Model

A language model is required to improve the decoder's performance. We have prepared two language models (with lossy compression) for users to download and try: one for English and one for Mandarin. Simply run the following to download the prepared language models:

```bash
cd models/lm
sh download_lm_en.sh
sh download_lm_ch.sh
```

If you wish to train your own, better language model, please refer to [KenLM](https://github.com/kpu/kenlm) for tutorials. Here we provide some tips on how we prepared our English and Mandarin language models; you can take them as a reference when training your own.

#### English LM

The English corpus comes from the [Common Crawl Repository](http://commoncrawl.org) and can be downloaded from [statmt](http://data.statmt.org/ngrams/deduped_en). We use part en.00 to train our English language model. There are some preprocessing steps before training:

  * Characters not in \[A-Za-z0-9\s'\] (where \s represents whitespace characters) are removed, and Arabic numerals are converted to English words (e.g. 1000 to one thousand).
  * Repeated whitespace characters are squeezed to one, and leading whitespace characters are removed. Notice that all transcriptions are lowercase, so all characters are converted to lowercase.
  * The top 400,000 most frequent words are selected to build the vocabulary, and the rest are replaced with 'UNKNOWNWORD'.

Now the preprocessing is done and we have a clean corpus to train the language model. Our released language model is trained with the arguments '-o 5 --prune 0 1 1 1 1'. '-o 5' means the max order of the language model is 5. '--prune 0 1 1 1 1' gives the count thresholds for each order; more specifically, it prunes singletons for orders two and higher. To save disk space, we convert the ARPA file to a 'trie' binary file with the arguments '-a 22 -q 8 -b 8'. '-a' is the maximum number of leading bits of pointers to chop in the 'trie'; '-q' and '-b' are quantization parameters for probability and backoff.
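
Put together, a hedged sketch of the KenLM commands these arguments imply (the corpus and intermediate file names are hypothetical):

```bash
# Train a 5-gram LM, pruning singletons for orders two and higher
lmplz -o 5 --prune 0 1 1 1 1 --text en00.clean.txt --arpa en5.arpa
# Quantize and pack the ARPA file into a 'trie' binary
build_binary -a 22 -q 8 -b 8 trie en5.arpa common_crawl_00.prune01111.trie.klm
```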

#### Mandarin LM

Different from the English language model, the Mandarin language model is character-based, where each token is a Chinese character. We use an internal corpus, containing billions of tokens, to train the released Mandarin language models. The preprocessing differs slightly from that for the English language model; the main steps include:

  * Leading and trailing whitespace characters are removed.
  * English and Chinese punctuation marks are removed.
  * A whitespace character is inserted between every two tokens.

Please note that the released language models only contain simplified Chinese characters. After the preprocessing is done, we can begin training the language model. The key training arguments are '-o 5 --prune 0 1 2 4 4' for the small LM and '-o 5' for the large LM. Please refer to the section above for the meaning of each argument. We also convert the ARPA file to a binary file using the default settings.
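
A hedged sketch for the small Mandarin LM, analogous to the English one above (the corpus file name is hypothetical; 'probing' is `build_binary`'s default binary type):

```bash
lmplz -o 5 --prune 0 1 2 4 4 --text zh_corpus.txt --arpa zh5.arpa
build_binary zh5.arpa zh_giga.no_cna_cmn.prune01244.klm
```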

### Speech-to-text Inference

An inference module caller `infer.py` is provided to infer, decode and print speech-to-text results for several given audio clips. It can help you get an intuitive and qualitative sense of the ASR model's performance.

- Inference with GPU:

    ```bash
    CUDA_VISIBLE_DEVICES=0 python infer.py --trainer_count 1
    ```

- Inference with CPUs:

    ```bash
    python infer.py --use_gpu False --trainer_count 12
    ```

We provide two types of CTC decoders: a *CTC greedy decoder* and a *CTC beam search decoder*. The *CTC greedy decoder* is an implementation of the simple best-path decoding algorithm, selecting the most likely token at each time step, and is thus greedy and locally optimal. The [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873), by contrast, uses a heuristic breadth-first graph search to approach global optimality; it also requires a pre-trained KenLM language model for better scoring and ranking. The decoder type can be set with the argument `--decoding_method`.
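
For example, a hedged sketch of beam-search decoding with the downloaded English LM (the `--decoding_method` value and the `--lang_model_path`, `--alpha` and `--beta` flags and values here are assumptions; please check them against `python infer.py --help`):

```bash
CUDA_VISIBLE_DEVICES=0 \
python infer.py \
--trainer_count 1 \
--decoding_method ctc_beam_search \
--lang_model_path models/lm/common_crawl_00.prune01111.trie.klm \
--alpha 2.15 \
--beta 0.35
```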

For more help on arguments:

```
python infer.py --help
```
or refer to `examples/librispeech/run_infer.sh`.

### Evaluate a Model

To evaluate a model's performance quantitatively, please run:

- Evaluation with GPUs:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python test.py --trainer_count 8
    ```

- Evaluation with CPUs:

    ```bash
    python test.py --use_gpu False --trainer_count 12
    ```

The error rate (default: word error rate; can be set with `--error_rate_type`) will be printed.

For more help on arguments:

```bash
python test.py --help
```
or refer to `examples/librispeech/run_test.sh`.

## Hyper-parameters Tuning

The hyper-parameters $\alpha$ (language model weight) and $\beta$ (word insertion weight) of the [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873) often have a significant impact on the decoder's performance. It is better to re-tune them on a validation set whenever the acoustic model is renewed.

`tools/tune.py` performs a 2-D grid search over the hyper-parameters $\alpha$ and $\beta$. You must provide the ranges of $\alpha$ and $\beta$, as well as the number of attempts for each.

- Tuning with GPU:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python tools/tune.py \
    --trainer_count 8 \
    --alpha_from 1.0 \
    --alpha_to 3.2 \
    --num_alphas 45 \
    --beta_from 0.1 \
    --beta_to 0.45 \
    --num_betas 8
    ```

- Tuning with CPU:

    ```bash
    python tools/tune.py --use_gpu False
    ```

The grid search will print the WER (word error rate) or CER (character error rate) at each point in the hyper-parameter space, and can optionally draw the error surface. A proper hyper-parameter range should include the global minimum of the WER/CER error surface, as illustrated in the following figure.

<p align="center">
<img src="docs/images/tuning_error_surface.png" width=550>
<br/>An example error surface for tuning on the dev-clean set of LibriSpeech
</p>

Usually, as the figure shows, the variation of the language model weight ($\alpha$) significantly affects the performance of the CTC beam search decoder. A better procedure is to first tune on several data batches (the number can be specified) to find the proper range of the hyper-parameters, and then switch to the whole validation set to carry out an accurate tuning.

After tuning, you can reset $\alpha$ and $\beta$ in the inference and evaluation modules to see whether they really help improve ASR performance, as sketched below.
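
A hedged example of plugging tuned values back into evaluation (the $\alpha$/$\beta$ values are hypothetical grid-search results; the `--alpha` and `--beta` flag names are assumptions to be checked against `python test.py --help`):

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python test.py \
--trainer_count 8 \
--alpha 2.15 \
--beta 0.35
```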

For more help on arguments:

```bash
python tools/tune.py --help
```
or refer to `examples/librispeech/run_tune.sh`.

## Running in Docker Container

Docker is an open source tool for building, shipping and running distributed applications in an isolated environment. A Docker image for this project is provided on [hub.docker.com](https://hub.docker.com) with all the dependencies installed, including the pre-built PaddlePaddle, CTC decoders and other necessary Python and third-party packages. The Docker image requires NVIDIA GPU support, so please make sure a GPU is available and [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) has been installed.

Take the following steps to launch the Docker image:

- Download the Docker image

```bash
nvidia-docker pull paddlepaddle/deep_speech:latest-gpu
```

- Clone this repository

```bash
git clone https://github.com/PaddlePaddle/DeepSpeech.git
```

- Run the Docker image

```bash
sudo nvidia-docker run -it -v $(pwd)/DeepSpeech:/DeepSpeech paddlepaddle/deep_speech:latest-gpu /bin/bash
```
Now go back and start from the [Getting Started](#getting-started) section; you can execute training, inference and hyper-parameters tuning in exactly the same way within the Docker container.

## Distributed Cloud Training

We also provide a cloud training module for users to perform distributed cluster training on [PaddleCloud](https://github.com/PaddlePaddle/cloud), achieving a much faster training speed with multiple machines. To get started, please first install the PaddleCloud client and register a PaddleCloud account, as described in [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#%E4%B8%8B%E8%BD%BD%E5%B9%B6%E9%85%8D%E7%BD%AEpaddlecloud).

Please take the following steps to submit a training job:

- Go to the directory:

    ```bash
    cd cloud
    ```
- Upload data:

    Data must be uploaded to the PaddleCloud filesystem to be accessed within a cloud job. `pcloud_upload_data.sh` helps with the data packing and uploading:

    ```bash
    sh pcloud_upload_data.sh
    ```

    Given input manifests, `pcloud_upload_data.sh` will:

    - Extract the audio files listed in the input manifests.
    - Pack them into a specified number of tar files.
    - Upload these tar files to PaddleCloud filesystem.
    - Create cloud manifests by replacing local filesystem paths with PaddleCloud filesystem paths. New manifests will be used to inform the cloud jobs of audio files' location and their meta information.

    This has to be done only once, before the very first cloud training. Afterwards, the data is kept persistent on the cloud filesystem and is reusable for further job submissions.

    For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/DeepSpeech/tree/develop/cloud).

- Configure training arguments:

    Configure the cloud job parameters in `pcloud_submit.sh` (e.g. `NUM_NODES`, `NUM_GPUS`, `CLOUD_TRAIN_DIR`, `JOB_NAME`, etc.), and then configure other training hyper-parameters in `pcloud_train.sh` (just as you would for local training).

    For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/DeepSpeech/tree/develop/cloud).

- Submit the job:

    By running:

    ```bash
    sh pcloud_submit.sh
    ```
    a training job is submitted to PaddleCloud, with the job name printed to the console.

- Get training logs:

    Run this to list all the jobs you have submitted, as well as their running status:

    ```bash
    paddlecloud get jobs
    ```

    Run this to print the corresponding job's logs:
    ```bash
    paddlecloud logs -n 10000 $REPLACED_WITH_YOUR_ACTUAL_JOB_NAME
    ```

For more information about the usage of PaddleCloud, please refer to [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#提交任务).

For more information about DeepSpeech2 training on PaddleCloud, please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/DeepSpeech/tree/develop/cloud).

## Training for Mandarin Language

The key steps for training on Mandarin are the same as those for English, and we have provided an example for Mandarin training with Aishell in `examples/aishell`. As mentioned above, please execute `sh run_data.sh`, `sh run_train.sh`, `sh run_test.sh` and `sh run_infer.sh` for data preparation, training, testing and inference, respectively; the full sequence is sketched below. We have also prepared a pre-trained model (downloaded by `./models/aishell/download_model.sh`) for users to try with `sh run_infer_golden.sh` and `sh run_test_golden.sh`. Notice that, unlike the English LM, the Mandarin LM is character-based; please run `tools/tune.py` to find an optimal setting.
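
The full sequence described above, run from the example directory:

```bash
cd examples/aishell
sh run_data.sh
sh run_train.sh
sh run_test.sh
sh run_infer.sh
```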

## Trying Live Demo with Your Own Voice

Until now, the ASR model has been trained and tested only qualitatively (`infer.py`) and quantitatively (`test.py`) with existing audio files, but not yet with your own speech. `deploy/demo_server.py` and `deploy/demo_client.py` help you quickly build a real-time demo ASR engine with the trained model, enabling you to test and play around with the demo using your own voice.

To start the demo's server, please run this in one console:

```bash
CUDA_VISIBLE_DEVICES=0 \
python deploy/demo_server.py \
--trainer_count 1 \
--host_ip localhost \
--host_port 8086
```

For the machine (which might not be the same machine) that runs the demo's client, please complete the following installation before moving on.

For example, on Mac OS X:

```bash
brew install portaudio
pip install pyaudio
pip install pynput
```

Then to start the client, please run this in another console:

```bash
CUDA_VISIBLE_DEVICES=0 \
python -u deploy/demo_client.py \
--host_ip 'localhost' \
--host_port 8086
```

Now, in the client console, press and hold the `whitespace` key and start speaking. When you finish your utterance, release the key, and the speech-to-text results will be shown in the console. To quit the client, just press the `ESC` key.

Notice that `deploy/demo_client.py` must be run on a machine with a microphone device, while `deploy/demo_server.py` can be run on one without any audio recording hardware, e.g. any remote server machine. If the server and client run on two separate machines, just set the `--host_ip` and `--host_port` arguments to the server's actual, accessible IP address and port, as sketched below. Nothing needs to be changed if they run on a single machine.
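
For instance, a hedged two-machine setup (the IP address is hypothetical; the port matches the server command above):

```bash
# On the client machine, pointing at a remote demo server
python -u deploy/demo_client.py --host_ip 192.168.1.20 --host_port 8086
```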

Please also refer to `examples/mandarin/run_demo_server.sh`, which will first download a pre-trained Mandarin model (trained with 3000 hours of internal speech data) and then start the demo server with it. By running `examples/mandarin/run_demo_client.sh`, you can then speak Mandarin to test the model. If you would like to try other models, just update the `--model_path` argument in the script.

For more help on arguments:

```bash
python deploy/demo_server.py --help
python deploy/demo_client.py --help
```

## Released Models

#### Released Speech Models

Language  | Model Name | Training Data | Hours of Speech
:-----------: | :------------: | :----------: |  -------:
English  | [LibriSpeech Model](https://deepspeech.bj.bcebos.com/eng_models/librispeech_model.tar.gz) | [LibriSpeech Dataset](http://www.openslr.org/12/) | 960 h
English  | [BaiduEN8k Model](https://deepspeech.bj.bcebos.com/demo_models/baidu_en8k_model.tar.gz) | Baidu Internal English Dataset | 8628 h
Mandarin | [Aishell Model](https://deepspeech.bj.bcebos.com/mandarin_models/aishell_model.tar.gz) | [Aishell Dataset](http://www.openslr.org/33/) | 151 h
Mandarin | [BaiduCN1.2k Model](https://deepspeech.bj.bcebos.com/demo_models/baidu_cn1.2k_model.tar.gz) | Baidu Internal Mandarin Dataset | 1204 h

#### Released Language Models

Language Model | Training Data | Token Type | Size | Description
:-------------:| :------------:| :-----: | -----: | :-----------------
[English LM](https://deepspeech.bj.bcebos.com/en_lm/common_crawl_00.prune01111.trie.klm) |  [CommonCrawl(en.00)](http://web-language-models.s3-website-us-east-1.amazonaws.com/ngrams/en/deduped/en.00.deduped.xz) | Word-based | 8.3 GB | Pruned with 0 1 1 1 1; <br/> About 1.85 billion n-grams; <br/> 'trie'  binary with '-a 22 -q 8 -b 8'
[Mandarin LM Small](https://deepspeech.bj.bcebos.com/zh_lm/zh_giga.no_cna_cmn.prune01244.klm) | Baidu Internal Corpus | Char-based | 2.8 GB | Pruned with 0 1 2 4 4; <br/> About 0.13 billion n-grams; <br/> 'probing' binary with default settings
[Mandarin LM Large](https://deepspeech.bj.bcebos.com/zh_lm/zhidao_giga.klm) | Baidu Internal Corpus | Char-based | 70.4 GB | No Pruning; <br/> About 3.7 billion n-grams; <br/> 'probing' binary with default settings

## Experiments and Benchmarks

#### Benchmark Results for English Models (Word Error Rate)

Test Set                | LibriSpeech Model | BaiduEN8K Model
:---------------------  | ---------------:  | -------------------:
LibriSpeech Test-Clean  |   6.85            |   5.41
LibriSpeech Test-Other  |   21.18           |   13.85
VoxForge American-Canadian | 12.12          |   7.13
VoxForge Commonwealth   |   19.82           |   14.93
VoxForge European       |   30.15           |   18.64
VoxForge Indian         |   53.73           |   25.51
Baidu Internal Testset  |   40.75           |   8.48

For reproducing benchmark results on VoxForge data, we provide a script to download the data and generate the VoxForge dialect manifest files. Please go to `data/voxforge` and execute `sh run_data.sh` to get them, as shown below. Notice that VoxForge data may keep updating, so the generated manifest files may differ from those we evaluated on.
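
```bash
cd data/voxforge
sh run_data.sh
```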

#### Benchmark Results for Mandarin Model (Character Error Rate)

Test Set                |  BaiduCN1.2k Model
:---------------------  |  -------------------:
Baidu Internal Testset  |   12.64

#### Acceleration with Multi-GPUs

We compare the training time on 1, 2, 4, 8 and 16 Tesla K40m GPUs (with a subset of LibriSpeech samples whose audio durations are between 6.0 and 7.0 seconds); the results show that a **near-linear** acceleration with multiple GPUs has been achieved. In the following figure, the training time (in seconds) is printed on the blue bars.

<img src="docs/images/multi_gpu_speedup.png" width=450><br/>

| # of GPU  | Acceleration Rate |
| --------  | --------------:   |
| 1         | 1.00 X |
| 2         | 1.97 X |
| 4         | 3.74 X |
| 8         | 6.21 X |
|16         | 10.70 X |

`tools/profile.sh` provides such a profiling tool.

## Questions and Help

You are welcome to submit questions and bug reports in [GitHub Issues](https://github.com/PaddlePaddle/DeepSpeech/issues). You are also welcome to contribute to this project.