# DeepSpeech2 on PaddlePaddle

*DeepSpeech2 on PaddlePaddle* is an open-source implementation of an end-to-end Automatic Speech Recognition (ASR) engine, based on [Baidu's Deep Speech 2 paper](http://proceedings.mlr.press/v48/amodei16.pdf) and built on the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) platform. Our vision is to empower both industrial applications and academic research on speech recognition, via an easy-to-use, efficient and scalable implementation, including training, inference and testing modules, distributed [PaddleCloud](https://github.com/PaddlePaddle/cloud) training, and demo deployment. In addition, several pre-trained models for both English and Mandarin have been released.

## Table of Contents
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Getting Started](#getting-started)
- [Data Preparation](#data-preparation)
- [Training a Model](#training-a-model)
- [Data Augmentation Pipeline](#data-augmentation-pipeline)
- [Inference and Evaluation](#inference-and-evaluation)
- [Distributed Cloud Training](#distributed-cloud-training)
- [Hyper-parameters Tuning](#hyper-parameters-tuning)
- [Training for Mandarin Language](#training-for-mandarin-language)
- [Trying Live Demo with Your Own Voice](#trying-live-demo-with-your-own-voice)
- [Experiments and Benchmarks](#experiments-and-benchmarks)
- [Released Models](#released-models)
- [Questions and Help](#questions-and-help)

## Prerequisites
- Python 2.7 only (Python 3 is not supported)
- The latest version of PaddlePaddle (please refer to the [Installation Guide](https://github.com/PaddlePaddle/Paddle#installation))

## Installation

Please make sure the above [prerequisites](#prerequisites) have been satisfied before moving on.

```bash
git clone https://github.com/PaddlePaddle/models.git
cd models/deep_speech_2
sh setup.sh
```

## Getting Started

Several shell scripts provided in `./examples` will help you quickly try out the major modules, including data preparation, model training, case inference and model evaluation, on a few public datasets (e.g. [LibriSpeech](http://www.openslr.org/12/), [Aishell](http://www.openslr.org/33)). Reading these examples will also help you understand how to make it work with your own data.

Some of the scripts in `./examples` are configured to use 8 GPUs. If you don't have 8 GPUs available, please modify `CUDA_VISIBLE_DEVICES` and `--trainer_count` accordingly. If you don't have any GPU available, please set `--use_gpu` to False to use CPUs instead. If an out-of-memory problem occurs, just reduce `--batch_size` until it fits, as sketched below.
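
For instance, a sketch of how one of those training commands might be adapted to a 2-GPU machine (all flag values here are illustrative):

```bash
# Expose only two GPUs and match --trainer_count to them;
# reduce --batch_size further if memory is still tight.
CUDA_VISIBLE_DEVICES=0,1 \
python train.py \
--use_gpu True \
--trainer_count 2 \
--batch_size 32
```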

Let's take a tiny sampled subset of the [LibriSpeech dataset](http://www.openslr.org/12/) as an example.

- Go to the directory

    ```bash
    cd examples/tiny
    ```

    Notice that this is only a toy example with a tiny sampled subset of LibriSpeech. If you would like to try the complete dataset (training would take several days), please go to `examples/librispeech` instead.
- Prepare the data

    ```bash
    sh run_data.sh
    ```

    `run_data.sh` will download the dataset, generate manifests, collect the normalizer's statistics and build the vocabulary. Once the data preparation is done, you will find the data (only part of LibriSpeech) downloaded to `~/.cache/paddle/dataset/speech/libri` and the corresponding manifest files generated in `./data/tiny`, together with a mean-stddev file and a vocabulary file. This step only has to be run the very first time you use this dataset; the results are then reusable for all further experiments.
- Train your own ASR model

    ```bash
    sh run_train.sh
    ```

    `run_train.sh` will start a training job, with training logs printed to stdout and a model checkpoint saved for every pass/epoch to `./checkpoints/tiny`. These checkpoints can be used for resuming training, inference, evaluation and deployment.
- Case inference with an existing model

    ```bash
    sh run_infer.sh
    ```

    `run_infer.sh` will show speech-to-text decoding results for several (default: 10) samples with the trained model. The performance might not be good yet, as the current model is only trained on a toy subset of LibriSpeech. To see the results with a better model, you can download a well-trained model (trained for several days on the complete LibriSpeech) and run the inference:

    ```bash
    sh run_infer_golden.sh
    ```
- Evaluate an existing model

    ```bash
    sh run_test.sh
    ```

    `run_test.sh` will evaluate the model using the Word Error Rate (or Character Error Rate) metric. Similarly, you can also download a well-trained model and test its performance:

    ```bash
    sh run_test_golden.sh
    ```

More detailed information is provided in the following sections. Wish you a happy journey with the *DeepSpeech2 on PaddlePaddle* ASR engine!

## Data Preparation

### Generate Manifest

*DeepSpeech2 on PaddlePaddle* accepts a textual **manifest** file as its dataset interface. A manifest file summarizes a set of speech data, with each line containing the metadata (e.g. file path, transcription, duration) of one audio clip, in [JSON](http://www.json.org/) format, such as:

```
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0001.flac", "duration": 3.275, "text": "stuff it into you his belly counselled him"}
{"audio_filepath": "/home/work/.cache/paddle/Libri/134686/1089-134686-0007.flac", "duration": 4.275, "text": "a cold lucid indifference reigned in his soul"}
```

To use your custom data, you only need to generate such manifest files to summarize the dataset. Given such summarized manifests, training, inference and all other modules can locate the audio files, as well as their metadata, including the transcription labels.

For how to generate such manifest files, please refer to `data/librispeech/librispeech.py`, which downloads the data and generates manifest files for the LibriSpeech dataset.
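
As a rough sketch, a manifest for your own data could also be assembled with a shell loop like the following, assuming SoX's `soxi` tool is installed, each `.wav` clip has a same-named `.txt` transcript beside it, and the transcripts contain no unescaped double quotes (all paths are illustrative):

```bash
# Emit one JSON line per audio clip: file path, duration (via soxi -D), text.
for wav in /data/my_corpus/*.wav; do
    duration=$(soxi -D "$wav")
    text=$(cat "${wav%.wav}.txt")
    printf '{"audio_filepath": "%s", "duration": %s, "text": "%s"}\n' \
        "$wav" "$duration" "$text"
done > manifest.my_corpus
```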

### Compute Mean & Stddev for Normalizer

To perform z-score normalization (zero-mean, unit stddev) on audio features, we have to estimate the mean and standard deviation of the features in advance, using some training samples:

```bash
python tools/compute_mean_std.py \
--num_samples 2000 \
--specgram_type linear \
--manifest_paths data/librispeech/manifest.train \
--output_path data/librispeech/mean_std.npz
```

It will compute the mean and standard deviation of the power spectrum features from 2000 randomly sampled audio clips listed in `data/librispeech/manifest.train` and save the results to `data/librispeech/mean_std.npz` for further usage.
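
If you want to double-check the result, the output is an ordinary NumPy `.npz` archive; a quick way to list what it contains (assuming NumPy is available, and without presuming the key names, which are not documented here):

```bash
python -c "import numpy as np; print(np.load('data/librispeech/mean_std.npz').files)"
```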


### Build Vocabulary

A vocabulary of possible characters is required to convert a transcription into a list of token indices for training and, in decoding, to convert a list of indices back to text. Such a character-based vocabulary can be built with `tools/build_vocab.py`.

```bash
python tools/build_vocab.py \
--count_threshold 0 \
--vocab_path data/librispeech/eng_vocab.txt \
--manifest_paths data/librispeech/manifest.train
```

It will write a vocabulary file `data/librispeech/eng_vocab.txt` covering all the transcription text in `data/librispeech/manifest.train`, without vocabulary truncation (`--count_threshold 0`).
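
Since the vocabulary is character-based, the resulting file should hold one character token per line (an assumption based on the description above); a quick look at the first few entries:

```bash
head -n 5 data/librispeech/eng_vocab.txt
```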

### More Help

For more help on arguments:

```bash
python data/librispeech/librispeech.py --help
python tools/compute_mean_std.py --help
python tools/build_vocab.py --help
```

## Training a Model

`train.py` is the main caller of the training module. Examples of usage are shown below.

- Start training from scratch with 8 GPUs:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py --trainer_count 8
    ```

- Start training from scratch with 16 CPUs:

    ```bash
    python train.py --use_gpu False --trainer_count 16
    ```
- Resume training from a checkpoint:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python train.py \
    --init_model_path CHECKPOINT_PATH_TO_RESUME_FROM
    ```

For more help on arguments:

```bash
python train.py --help
```
or refer to `examples/librispeech/run_train.sh`.

## Data Augmentation Pipeline

Data augmentation is often a highly effective technique for boosting deep learning performance. We augment the speech data by synthesizing new audio with small random perturbations (label-invariant transformations) applied to the raw audio. You don't have to do the synthesis yourself, as it is already embedded in the data provider and is done on the fly, randomly for each epoch during training.

Six optional augmentation components are provided and can be selected, configured and inserted into the processing pipeline:

  - Volume Perturbation
  - Speed Perturbation
  - Shifting Perturbation
  - Online Bayesian normalization
  - Noise Perturbation (requires background noise audio files)
  - Impulse Response (requires impulse audio files)

To inform the trainer which augmentation components are needed and in what order they should be applied, you need to prepare an *augmentation configuration file* in [JSON](http://www.json.org/) format in advance. For example:

```
[{
    "type": "speed",
    "params": {"min_speed_rate": 0.95,
               "max_speed_rate": 1.05},
    "prob": 0.6
},
{
    "type": "shift",
    "params": {"min_shift_ms": -5,
               "max_shift_ms": 5},
    "prob": 0.8
}]
```

When the `--augment_conf_file` argument of `train.py` is set to the path of the above example configuration file, every audio clip in every epoch will be processed as follows: with a 60% chance, it will first be speed perturbed with a speed rate sampled uniformly between 0.95 and 1.05, and then, with an 80% chance, it will be shifted in time by a random offset sampled between -5 ms and 5 ms. Finally, the newly synthesized audio clip will be fed into the feature extractor for further training.
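
A minimal sketch of wiring such a file into training, reusing the `--augment_conf_file` argument named above (the config file name is illustrative):

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python train.py \
--trainer_count 8 \
--augment_conf_file conf/my_augmentation.config
```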

For other configuration examples, please refer to `conf/augmentation.config.example`.

Be careful when utilizing data augmentation, as improper augmentation can harm training by enlarging the train-test gap.

## Inference and Evaluation

### Prepare Language Model

A language model is required to improve the decoder's performance. We have prepared two language models (with lossy compression) for users to download and try: one for English and the other for Mandarin. Users can simply run the following to download the prepared language models:

```bash
cd models/lm
sh download_lm_en.sh
sh download_lm_ch.sh
```
If you wish to train a better language model of your own, please refer to [KenLM](https://github.com/kpu/kenlm) for tutorials.
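
For instance, a rough sketch of building a 5-gram ARPA model and binarizing it with KenLM's command-line tools (this assumes KenLM has been compiled, and `corpus.txt` stands for your own training text):

```bash
# Estimate a 5-gram LM from a text corpus, then binarize it for fast loading.
lmplz -o 5 < corpus.txt > my_lm.arpa
build_binary my_lm.arpa my_lm.binary
```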

TODO: any other requirements or tips to add?

### Speech-to-text Inference

An inference module caller `infer.py` is provided to infer, decode and visualize speech-to-text results for several given audio clips. It may help to form an intuitive and qualitative evaluation of the ASR model's performance.

- Inference with GPU:

    ```bash
    CUDA_VISIBLE_DEVICES=0 python infer.py --trainer_count 1
    ```

- Inference with CPUs:

    ```bash
    python infer.py --use_gpu False --trainer_count 12
    ```

We provide two types of CTC decoders: the *CTC greedy decoder* and the *CTC beam search decoder*. The *CTC greedy decoder* is an implementation of the simple best-path decoding algorithm, selecting the most likely token at each timestep, and is therefore greedy and locally optimal. The [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873), by contrast, utilizes a heuristic breadth-first graph search to reach near-global optimality; it also requires a pre-trained KenLM language model for better scoring and ranking. The decoder type can be set with the argument `--decoding_method`.
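
For example, a sketch of selecting the beam search decoder at inference time (the value `ctc_beam_search` is an assumption; check `python infer.py --help` for the exact accepted values):

```bash
CUDA_VISIBLE_DEVICES=0 \
python infer.py \
--trainer_count 1 \
--decoding_method ctc_beam_search
```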

For more help on arguments:

```bash
python infer.py --help
```
or refer to `examples/librispeech/run_infer.sh`.

### Evaluate a Model

To evaluate a model's performance quantitatively, please run:

- Evaluation with GPUs:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python test.py --trainer_count 8
    ```

- Evaluation with CPUs:

    ```bash
    python test.py --use_gpu False --trainer_count 12
    ```

The error rate (word error rate by default; the metric can be switched with `--error_rate_type`) will be printed, as in the sketch below.
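
For instance, a sketch of switching to character error rate (the value `cer` is an assumption; check `python test.py --help` for the exact accepted values):

```bash
python test.py \
--use_gpu False \
--trainer_count 12 \
--error_rate_type cer
```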

For more help on arguments:

```bash
python test.py --help
```
or refer to `examples/librispeech/run_test.sh`.

## Hyper-parameters Tuning

The hyper-parameters $\alpha$ (coefficient for the language model scorer) and $\beta$ (coefficient for the word count scorer) of the [*CTC beam search decoder*](https://arxiv.org/abs/1408.2873) often have a significant impact on the decoder's performance. It is better to re-tune them on a validation set whenever the acoustic model is updated.

`tools/tune.py` performs a 2-D grid search over the hyper-parameters $\alpha$ and $\beta$. You must provide the ranges of $\alpha$ and $\beta$, as well as the number of attempts for each.

- Tuning with GPU:

    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python tools/tune.py \
    --trainer_count 8 \
    --alpha_from 0.1 \
    --alpha_to 0.36 \
    --num_alphas 14 \
    --beta_from 0.05 \
    --beta_to 1.0 \
    --num_betas 20
    ```

- Tuning with CPU:

    ```bash
    python tools/tune.py --use_gpu False
    ```

After tuning, you can reset $\alpha$ and $\beta$ in the inference and evaluation modules to see if they really help improve ASR performance, as sketched below.
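
For example, a sketch of plugging tuned values back into inference, assuming `infer.py` exposes the tuned coefficients as `--alpha` and `--beta` (the numbers are illustrative):

```bash
CUDA_VISIBLE_DEVICES=0 \
python infer.py \
--trainer_count 1 \
--alpha 0.36 \
--beta 0.25
```

For more help on arguments: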

```bash
python tools/tune.py --help
```
or refer to `examples/librispeech/run_tune.sh`.

TODO: add figure.

## Distributed Cloud Training

We also provide a cloud training module for users to perform distributed cluster training on [PaddleCloud](https://github.com/PaddlePaddle/cloud), to achieve much faster training with multiple machines. To get started, please first install the PaddleCloud client and register a PaddleCloud account, as described in [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#%E4%B8%8B%E8%BD%BD%E5%B9%B6%E9%85%8D%E7%BD%AEpaddlecloud).

Please take the following steps to submit a training job:

- Go to the directory:

    ```bash
    cd cloud
    ```
- Upload data:

    Data must be uploaded to the PaddleCloud filesystem to be accessed within a cloud job. `pcloud_upload_data.sh` handles the data packing and uploading:

    ```bash
    sh pcloud_upload_data.sh
    ```

    Given input manifests, `pcloud_upload_data.sh` will:

    - Extract the audio files listed in the input manifests.
    - Pack them into a specified number of tar files.
    - Upload these tar files to the PaddleCloud filesystem.
    - Create cloud manifests by replacing local filesystem paths with PaddleCloud filesystem paths. The new manifests will be used to inform the cloud jobs of the audio files' locations and their metadata.

    This only needs to be done once, before the very first cloud training. Afterwards, the data is kept persistent on the cloud filesystem and is reusable for further job submissions.

    For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).

- Configure training arguments:

    Configure the cloud job parameters in `pcloud_submit.sh` (e.g. `NUM_NODES`, `NUM_GPUS`, `CLOUD_TRAIN_DIR`, `JOB_NAME`, etc.), and then configure the other training hyper-parameters in `pcloud_train.sh` (just as you would for local training).

    For argument details please refer to [Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).

- Submit the job:

    By running:

    ```bash
    sh pcloud_submit.sh
    ```
    a training job is submitted to PaddleCloud, with the job name printed to the console.

- Get training logs:

    Run this to list all the jobs you have submitted, as well as their running status:

    ```bash
    paddlecloud get jobs
    ```

    Run the following to print the corresponding job's logs:
    ```bash
    paddlecloud logs -n 10000 $REPLACED_WITH_YOUR_ACTUAL_JOB_NAME
    ```

For more information about the usage of PaddleCloud, please refer to [PaddleCloud Usage](https://github.com/PaddlePaddle/cloud/blob/develop/doc/usage_cn.md#提交任务).

For more information about DeepSpeech2 training on PaddleCloud, please refer to
[Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).

## Training for Mandarin Language

TODO: to be added

## Trying Live Demo with Your Own Voice

So far, an ASR model has been trained and tested qualitatively (`infer.py`) and quantitatively (`test.py`) with existing audio files, but not yet with your own speech. `deploy/demo_server.py` and `deploy/demo_client.py` help you quickly build a real-time demo ASR engine with the trained model, enabling you to test and play around with the demo using your own voice.

To start the demo's server, please run this in one console:

```bash
CUDA_VISIBLE_DEVICES=0 \
python deploy/demo_server.py \
--trainer_count 1 \
--host_ip localhost \
--host_port 8086
```

On the machine that runs the demo's client (it might not be the same machine), please complete the following installation before moving on.

For example, on macOS:

```bash
brew install portaudio
pip install pyaudio
pip install pynput
```
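
On Ubuntu, an equivalent setup might look like this (the package name is an assumption):

```bash
sudo apt-get install portaudio19-dev
pip install pyaudio
pip install pynput
```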

Then, to start the client, please run this in another console:

```bash
CUDA_VISIBLE_DEVICES=0 \
python -u deploy/demo_client.py \
--host_ip 'localhost' \
--host_port 8086
```

Now, in the client console, press and hold the `whitespace` key and start speaking. When you finish your utterance, release the key, and the speech-to-text results will be shown in the console. To quit the client, just press the `ESC` key.

Notice that `deploy/demo_client.py` must be run on a machine with a microphone device, while `deploy/demo_server.py` can be run on one without any audio recording hardware, e.g. a remote server machine. If the server and client are running on two separate machines, just be careful to set the `host_ip` and `host_port` arguments to the actual accessible IP address and port. Nothing needs to be changed if they are running on a single machine.
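
For instance, a server intended to accept remote clients might bind to all network interfaces (a sketch; binding to `0.0.0.0` is an assumption, and the port is illustrative):

```bash
CUDA_VISIBLE_DEVICES=0 \
python deploy/demo_server.py \
--trainer_count 1 \
--host_ip 0.0.0.0 \
--host_port 8086
```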

Please also refer to `examples/mandarin/run_demo_server.sh`, which will first download a pre-trained Mandarin model (trained with 3000 hours of internal speech data) and then start the demo server with that model. By running `examples/mandarin/run_demo_client.sh`, you can speak Mandarin to test it. If you would like to try other models, just update the `--model_path` argument in the script.

For more help on arguments:

```bash
python deploy/demo_server.py --help
python deploy/demo_client.py --help
```

## Experiments and Benchmarks

TODO: to be added

## Released Models

TODO: to be added

## Questions and Help

You are welcome to submit questions and bug reports in [Github Issues](https://github.com/PaddlePaddle/models/issues). You are also welcome to contribute to this project.