# DeepSpeech2 on PaddlePaddle

*DeepSpeech2 on PaddlePaddle* is an open-source implementation of an end-to-end Automatic Speech Recognition (ASR) engine, based on [Baidu's Deep Speech 2 paper](http://proceedings.mlr.press/v48/amodei16.pdf) and built on the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) platform. Our vision is to empower both industrial applications and academic research on speech recognition, via an easy-to-use, efficient and scalable implementation, including training, inference and testing modules, distributed [PaddleCloud](https://github.com/PaddlePaddle/cloud) training, and demo deployment. In addition, several pre-trained models for both English and Mandarin are released.

## Table of Contents
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Getting Started](#getting-started)
- [Data Preparation](#data-preparation)
- [Training a Model](#training-a-model)
- [Data Augmentation Pipeline](#data-augmentation-pipeline)
- [Inference and Evaluation](#inference-and-evaluation)
- [Distributed Cloud Training](#distributed-cloud-training)
- [Hyper-parameters Tuning](#hyper-parameters-tuning)
- [Training for Mandarin Language](#training-for-mandarin-language)
- [Trying Live Demo with Your Own Voice](#trying-live-demo-with-your-own-voice)
- [Experiments and Benchmarks](#experiments-and-benchmarks)
- [Released Models](#released-models)
- [Questions and Help](#questions-and-help)

## Prerequisites
- Only Python 2.7 is supported
- The latest version of PaddlePaddle (please refer to the [Installation Guide](https://github.com/PaddlePaddle/Paddle#installation))

## Installation

Please install the [prerequisites](#prerequisites) above before moving on.

```
git clone https://github.com/PaddlePaddle/models.git
cd models/deep_speech_2
sh setup.sh
```

## Getting Started

Several shell scripts in `./examples` help us quickly try out the major modules, including data preparation, model training, case inference and model evaluation, with a few public datasets (e.g. [LibriSpeech](http://www.openslr.org/12/), [Aishell](https://github.com/kaldi-asr/kaldi/tree/master/egs/aishell)). Reading these examples will also help us understand how to make it work with our own data.

Some of the scripts in `./examples` are configured with 8 GPUs. If you don't have 8 GPUs available, please modify `CUDA_VISIBLE_DEVICES` and `--trainer_count`. If you don't have any GPU available, please set `--use_gpu` to False to use CPUs instead.

Let's take a tiny sampled subset of the [LibriSpeech dataset](http://www.openslr.org/12/) as an example.

- Go to the directory

    ```
    cd examples/tiny
    ```

    Notice that this is only a toy example with a tiny sampled subset of LibriSpeech. If we would like to try the complete dataset (training would take several days), please go to `examples/librispeech` instead.
- Prepare the data

    ```
    sh run_data.sh
    ```

    `run_data.sh` will download the dataset, generate manifests, collect the normalizer's statistics and build the vocabulary. Once the data preparation is done, we will find the data (only a part of LibriSpeech) downloaded to `~/.cache/paddle/dataset/speech/libri`, the corresponding manifest files generated in `./data/tiny`, as well as a mean-stddev file and a vocabulary file. The script has to be run only for the very first time we use this dataset; the results are reusable for all further experiments.
- Train your own ASR model

    ```
    sh run_train.sh
    ```

    `run_train.sh` will start a training job, with training logs printed to stdout and model checkpoints of every pass/epoch saved to `./checkpoints/tiny`. We can resume the training from these checkpoints, or use them for inference, evaluation and deployment.
- Case inference with an existing model

    ```
    sh run_infer.sh
    ```

    `run_infer.sh` will show us some speech-to-text decoding results for several (default: 10) samples with the trained model. The performance might not be good yet, as the current model is only trained on a toy subset of LibriSpeech. To see the results with a better model, we can download a well-trained model (trained for several days on the complete LibriSpeech) and do the inference:

    ```
    sh run_infer_golden.sh
    ```
- Evaluate an existing model

    ```
    sh run_test.sh
    ```

    `run_test.sh` will evaluate the model with the Word Error Rate (or Character Error Rate) metric. Similarly, we can also download a well-trained model and test its performance:

    ```
    sh run_test_golden.sh
    ```

More detailed information is provided in the following sections. We wish you a happy journey with the *DeepSpeech2 on PaddlePaddle* ASR engine!


## Data Preparation

### Generate Manifest

*DeepSpeech2 on PaddlePaddle* accepts a textual **manifest** file as its data set interface. A manifest file summarizes a set of speech data, with each line containing some meta data (e.g. filepath, transcription, duration) of one audio clip, in [JSON](http://www.json.org/) format, such as:

```
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0001.flac", "duration": 3.275, "text": "stuff it into you his belly counselled him"}
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0007.flac", "duration": 4.275, "text": "a cold lucid indifference reigned in his soul"}
```

To use your custom data, you only need to generate such manifest files to summarize the dataset. Given the manifests, training, inference and all other modules can locate the audio files, as well as their meta data including the transcription labels.

For how to generate such manifest files, please refer to `data/librispeech/librispeech.py`, which downloads the LibriSpeech dataset and generates its manifest files.
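
For custom data in WAV format, such a manifest can be written with a few lines of Python. The following is only a sketch with hypothetical paths and transcriptions, not a script shipped with this repository:

```
import json
import wave

def wav_duration(wav_path):
    # Duration in seconds = number of frames / frame rate.
    f = wave.open(wav_path, 'rb')
    try:
        return f.getnframes() / float(f.getframerate())
    finally:
        f.close()

def write_manifest(entries, manifest_path):
    # Write one JSON object per line, matching the format shown above.
    with open(manifest_path, 'w') as out:
        for audio_filepath, text in entries:
            out.write(json.dumps({
                "audio_filepath": audio_filepath,
                "duration": wav_duration(audio_filepath),
                "text": text,
            }) + '\n')

# Hypothetical usage:
write_manifest([('/data/custom/utt_0001.wav', 'hello world')],
               'data/custom/manifest.train')
```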

### Compute Mean & Stddev for Normalizer

To perform z-score normalization (zero mean, unit stddev) upon audio features, we have to estimate the mean and standard deviation of the features in advance, using some training samples:

```
python tools/compute_mean_std.py \
--num_samples 2000 \
--specgram_type linear \
--manifest_paths data/librispeech/manifest.train \
--output_path data/librispeech/mean_std.npz
```

It will compute the mean and standard deviation of the power spectrum features from 2000 randomly sampled audio clips listed in `data/librispeech/manifest.train`, and save the results to `data/librispeech/mean_std.npz` for further usage.
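
At training and inference time, every feature vector is then z-scored with these statistics. A minimal sketch of the normalization itself, assuming the `.npz` file stores its arrays under the keys `mean` and `std`:

```
import numpy as np

stats = np.load('data/librispeech/mean_std.npz')
mean, std = stats['mean'], stats['std']  # assumed key names

def normalize(features, eps=1e-14):
    # z-score normalization: zero mean, unit standard deviation,
    # with a small epsilon guarding against division by zero.
    return (features - mean) / (std + eps)
```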

### Build Vocabulary

A vocabulary of possible characters is required to convert transcriptions into lists of token indices for training, and to convert lists of indices back into text during decoding. Such a character-based vocabulary can be built with `tools/build_vocab.py`.

```
python tools/build_vocab.py \
--count_threshold 0 \
--vocab_path data/librispeech/eng_vocab.txt \
--manifest_paths data/librispeech/manifest.train
```

It will write a vocabulary file `data/librispeech/eng_vocab.txt` covering all the transcription text in `data/librispeech/manifest.train`, without vocabulary truncation (`--count_threshold 0`).
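
Conceptually, building such a character vocabulary amounts to counting characters over all manifest transcriptions and keeping those above the count threshold. A minimal sketch (not the actual implementation of `tools/build_vocab.py`):

```
import json
from collections import Counter

def build_vocab(manifest_path, count_threshold=0):
    counter = Counter()
    with open(manifest_path) as manifest:
        for line in manifest:
            # Each manifest line is a JSON object with a "text" field.
            counter.update(json.loads(line)["text"])
    # Keep every character seen more often than the threshold.
    return sorted(ch for ch, count in counter.items()
                  if count > count_threshold)
```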

### More Help

For more help on arguments:

```
python data/librispeech/librispeech.py --help
python tools/compute_mean_std.py --help
python tools/build_vocab.py --help
```

## Training a Model

`train.py` is the main caller of the training module. We show several examples of usage below.

- Start training from scratch with 8 GPUs:

    ```
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py --trainer_count 8
    ```

- Start training from scratch with 16 CPUs:

    ```
    python train.py --use_gpu False --trainer_count 16
    ```
- Resume training from a checkpoint:

    ```
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python train.py \
    --init_model_path CHECKPOINT_PATH_TO_RESUME_FROM
    ```

For more help on arguments:

```
python train.py --help
```
or refer to `examples/librispeech/run_train.sh`.

## Data Augmentation Pipeline

Data augmentation has often been a highly effective technique to boost deep learning performance. We augment the speech data by synthesizing new audio clips with small random perturbations (label-invariant transformations) applied to the raw audio. We don't have to do the synthesis ourselves, as it is embedded into the data provider and is done on the fly, randomly for each epoch during training.

Six optional augmentation components are provided, to be configured and inserted into the processing pipeline:

  - Volume Perturbation
  - Speed Perturbation
  - Shifting Perturbation
  - Online Bayesian normalization
  - Noise Perturbation (requires background noise audio files)
  - Impulse Response (requires impulse response audio files)

In order to inform the trainer which augmentation components are needed and in what order they should be applied, we are required to prepare an *augmentation configuration file* in [JSON](http://www.json.org/) format. For example:

```
[{
    "type": "speed",
    "params": {"min_speed_rate": 0.95,
               "max_speed_rate": 1.05},
    "prob": 0.6
},
{
    "type": "shift",
    "params": {"min_shift_ms": -5,
               "max_shift_ms": 5},
    "prob": 0.8
}]
```

When the `--augment_conf_file` argument of `train.py` is set to the path of the above example configuration file, every audio clip in every epoch will be processed as follows: with 60% chance, it will first be speed perturbed with a speed rate sampled uniformly at random between 0.95 and 1.05; then, with 80% chance, it will be shifted in time by a randomly sampled offset between -5 ms and 5 ms. Finally, the newly synthesized audio clip will be fed into the feature extractor for further training.
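
Conceptually, each configured component is applied independently with its own probability, in the listed order. A minimal sketch of this behavior, where `augmentors` is a hypothetical registry mapping each `type` to its transformation function (not the repository's actual code):

```
import json
import random

def augment(audio, config_json, augmentors):
    # Walk the pipeline in the configured order; each component
    # fires independently with its own probability "prob".
    for entry in json.loads(config_json):
        if random.random() < entry["prob"]:
            audio = augmentors[entry["type"]](audio, **entry["params"])
    return audio
```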

For other configuration examples, please refer to `conf/augmentation.config.example`.

Be careful when utilizing data augmentation, as improper augmentation will harm training by enlarging the train-test gap.

## Inference and Evaluation

### Prepare Language Model

A language model is required to improve the decoder's performance. We have prepared two language models (with lossy compression) for users to download and try: one for English and one for Mandarin. We can simply run the following to download the prepared language models:

```
cd models/lm
sh download_lm_en.sh
sh download_lm_ch.sh
```
If you wish to train a better language model of your own, please refer to [KenLM](https://github.com/kpu/kenlm) for tutorials.
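
Once downloaded, a language model can be sanity-checked from Python with the [kenlm](https://github.com/kpu/kenlm) bindings, assuming they are installed (the model file name below is hypothetical):

```
import kenlm

lm = kenlm.Model('models/lm/your_language_model.klm')  # hypothetical path
# Total log10 probability of the sentence under the language model.
print(lm.score('a cold lucid indifference reigned in his soul'))
```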

TODO: any other requirements or tips to add?

### Speech-to-text Inference

An inference module caller `infer.py` is provided to infer, decode and display speech-to-text results for several given audio clips. It helps to give an intuitive and qualitative evaluation of the ASR model's performance.

- Inference with GPU:

    ```
    CUDA_VISIBLE_DEVICES=0 python infer.py --trainer_count 1
    ```

- Inference with CPUs:

    ```
    python infer.py --use_gpu False --trainer_count 12
    ```

We provide two types of CTC decoders: a *CTC greedy decoder* and a *CTC beam search decoder*. The *CTC greedy decoder* is an implementation of the simple best-path decoding algorithm, selecting at each timestep the most likely token, and is thus greedy and locally optimal. The [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873), by contrast, utilizes a heuristic breadth-first graph search to reach near-global optimality; it also requires a pre-trained KenLM language model for better scoring and ranking. The decoder type can be set with the argument `--decoding_method`.
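
For intuition, best-path (greedy) decoding can be sketched in a few lines. This is a simplification rather than the repository's actual decoder, and the blank's index here is an assumption:

```
import numpy as np

def ctc_greedy_decode(probs, vocabulary, blank_index=0):
    # probs: (num_time_steps, num_classes) per-timestep probabilities.
    # Step 1: pick the most likely class at every timestep.
    best_path = np.argmax(probs, axis=1)
    # Step 2: collapse consecutive repeats; step 3: drop blanks.
    result = []
    previous = blank_index
    for index in best_path:
        if index != previous and index != blank_index:
            result.append(vocabulary[index])
        previous = index
    return ''.join(result)
```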

For more help on arguments:

```
python infer.py --help
```
or refer to `examples/librispeech/run_infer.sh`.

### Evaluate a Model

To evaluate a model's performance quantitatively, we can run:

- Evaluation with GPUs:

    ```
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python test.py --trainer_count 8
    ```

- Evaluation with CPUs:

    ```
    python test.py --use_gpu False --trainer_count 12
    ```

The error rate (default: word error rate; can be set with `--error_rate_type`) will be printed.
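
For reference, the word error rate is the word-level edit (Levenshtein) distance between the reference and the hypothesis, divided by the number of reference words. A minimal sketch (not the repository's actual implementation):

```
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for the edit distance between word lists.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        dist[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,          # deletion
                             dist[i][j - 1] + 1,          # insertion
                             dist[i - 1][j - 1] + cost)   # substitution
    return dist[len(ref)][len(hyp)] / float(len(ref))
```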

For more help on arguments:

```
python test.py --help
```
or refer to `examples/librispeech/run_test.sh`.

## Hyper-parameters Tuning

The hyper-parameters $\alpha$ (coefficient for the language model scorer) and $\beta$ (coefficient for the word count scorer) of the [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873) often have a significant impact on the decoder's performance. It is advisable to re-tune them on a validation set whenever the acoustic model is renewed.
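
In beam search decoding, each candidate transcription $W$ for an utterance $X$ is typically ranked by a score of the form $\log p_{AM}(W|X) + \alpha \log p_{LM}(W) + \beta \, wc(W)$, following the scoring scheme of the paper linked above, where $wc(W)$ is the word count of $W$: a larger $\alpha$ puts more trust in the language model, while a larger $\beta$ favors longer transcriptions.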

`tools/tune.py` performs a 2-D grid search over the hyper-parameters $\alpha$ and $\beta$. We must provide the ranges of $\alpha$ and $\beta$, as well as the number of attempts for each (a sketch of how such a grid is enumerated follows the examples below).

- Tuning with GPU:

    ```
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python tools/tune.py \
    --trainer_count 8 \
    --alpha_from 0.1 \
    --alpha_to 0.36 \
    --num_alphas 14 \
    --beta_from 0.05 \
    --beta_to 1.0 \
    --num_betas 20
    ```

- Tuning with CPU:

    ```
    python tools/tune.py --use_gpu False
    ```
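
Conceptually, the searched grid is just the Cartesian product of evenly spaced $\alpha$ and $\beta$ values. A minimal sketch of how the settings above span it, with `error_rate` as a hypothetical stand-in for decoding the validation set (not the actual implementation of `tools/tune.py`):

```
import itertools
import numpy as np

def error_rate(alpha, beta):
    # Hypothetical: decode the validation set with this (alpha, beta)
    # and return the measured word error rate.
    raise NotImplementedError

alphas = np.linspace(0.1, 0.36, num=14)  # --alpha_from/--alpha_to/--num_alphas
betas = np.linspace(0.05, 1.0, num=20)   # --beta_from/--beta_to/--num_betas

# 14 x 20 = 280 grid points in total; keep the best-scoring pair.
best_alpha, best_beta = min(itertools.product(alphas, betas),
                            key=lambda pair: error_rate(*pair))
```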

After tuning, we can reset $\alpha$ and $\beta$ in the inference and evaluation modules to see if they really help improve the ASR performance.

For more help on arguments:

```
python tools/tune.py --help
```
or refer to `examples/librispeech/run_tune.sh`.

TODO: add figure.

## Distributed Cloud Training

We provide a cloud training module for users to do distributed cluster training on [PaddleCloud](https://github.com/PaddlePaddle/cloud), to achieve a much faster training speed with multiple machines. To get started, please first install the PaddleCloud client and register a PaddleCloud account, as described in [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#%E4%B8%8B%E8%BD%BD%E5%B9%B6%E9%85%8D%E7%BD%AEpaddlecloud).

Then, we take the following steps to submit a training job:

- Go to the directory:

    ```
    cd cloud
    ```
- Upload data:

    Data must be uploaded to the PaddleCloud filesystem to be accessed within a cloud job. `pcloud_upload_data.sh` helps do the data packing and uploading:

    ```
    sh pcloud_upload_data.sh
    ```

    Given input manifests, `pcloud_upload_data.sh` will:

    - Extract the audio files listed in the input manifests.
    - Pack them into a specified number of tar files.
    - Upload these tar files to PaddleCloud filesystem.
    - Create cloud manifests by replacing local filesystem paths with PaddleCloud filesystem paths. New manifests will be used to inform the cloud jobs of audio files' location and their meta information.

    This only needs to be done once, before the very first cloud training. Afterwards, the data is kept persistent on the cloud filesystem and is reusable for further job submissions.

    For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).

- Configure training arguments:

    Configure the cloud job parameters in `pcloud_submit.sh` (e.g. `NUM_NODES`, `NUM_GPUS`, `CLOUD_TRAIN_DIR`, `JOB_NAME` etc.) and then configure other hyper-parameters for training in `pcloud_train.sh` (just as what you do for local training).

    For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).

- Submit the job:

    By running:

    ```
    sh pcloud_submit.sh
    ```
    we submit a training job to PaddleCloud. The job name will be printed when the submission is finished, and the training job will then be running on PaddleCloud.

- Get training logs:

    Run this to list all the jobs you have submitted, as well as their running status:

    ```
    paddlecloud get jobs
    ```

    Run this to print the corresponding job's logs:

    ```
    paddlecloud logs -n 10000 $REPLACED_WITH_YOUR_ACTUAL_JOB_NAME
    ```

For more information about the usage of PaddleCloud, please refer to [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#提交任务).

For more information about the DeepSpeech2 training on PaddleCloud, please refer to
[Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).

## Training for Mandarin Language

TODO: to be added

## Trying Live Demo with Your Own Voice

Until now, we have trained and tested our ASR model qualitatively (`infer.py`) and quantitatively (`test.py`) with existing audio files, but we have not yet tried the model with our own speech. `deploy/demo_server.py` and `deploy/demo_client.py` help to quickly build up a real-time demo ASR engine with a trained model, enabling us to test and play around with the demo using our own voice.

We start the demo's server in one console by:

```
CUDA_VISIBLE_DEVICES=0 \
python deploy/demo_server.py \
--trainer_count 1 \
--host_ip localhost \
--host_port 8086
```

On the machine (not necessarily the same machine) that will run the demo's client, we have to do the following installation before moving on.

For example, on Mac OS X:

```
brew install portaudio
pip install pyaudio
pip install pynput
```

Then we can start the client in another console by:

```
CUDA_VISIBLE_DEVICES=0 \
python -u deploy/demo_client.py \
--host_ip 'localhost' \
--host_port 8086
```

Now, in the client console, press and hold the `whitespace` key and start speaking. Release the key when the utterance is finished, and the speech-to-text results will be shown in the console. To quit the client, just press the `ESC` key.

Notice that `deploy/demo_client.py` must be run on a machine with a microphone device, while `deploy/demo_server.py` can be run on one without any audio recording hardware, e.g. any remote server machine. If the server and client are running on two separate machines, just be careful to set the `host_ip` and `host_port` arguments to an actually accessible IP address and port. Nothing needs to be done if they are running on one single machine.

We can also refer to `examples/mandarin/run_demo_server.sh`, which will first download a pre-trained Mandarin model (trained with 3000 hours of internal speech data) and then start the demo server with it. By running `examples/mandarin/run_demo_client.sh`, we can speak Mandarin to test it. If we would like to try some other models, just update the `--model_path` argument in the script.

For more help on arguments:

```
python deploy/demo_server.py --help
python deploy/demo_client.py --help
```

## Experiments and Benchmarks

TODO: to be added

## Released Models

TODO: to be added

## Questions and Help

You are welcome to submit questions and bug reports in [GitHub Issues](https://github.com/PaddlePaddle/models/issues). You are also welcome to contribute to this project.