# DeepSpeech2 on PaddlePaddle

## Installation

```
sh setup.sh
export LD_LIBRARY_PATH=$PADDLE_INSTALL_DIR/Paddle/third_party/install/warpctc/lib:$LD_LIBRARY_PATH
```

Please replace `$PADDLE_INSTALL_DIR` with your own PaddlePaddle installation directory.

## Usage

### Preparing Data

```
cd datasets
sh run_all.sh
cd ..
```

`sh run_all.sh` prepares all ASR datasets (currently, only LibriSpeech is available). After it finishes, several manifest files in JSON format, each summarizing a dataset, will have been generated.

A manifest file summarizes a speech dataset: each of its lines contains, in JSON format, the metadata (i.e. audio file path, transcript text, and audio duration) of one audio file in the dataset. The manifest files serve as the interface telling our system where to find the speech samples and what to read from them.
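
For illustration, a single line of a manifest file might look like the following (the exact JSON key names here are our assumption and may differ from what the preparation scripts actually emit):

```
{"audio_filepath": "/data/LibriSpeech/dev-clean/1272/128104/1272-128104-0000.flac", "duration": 5.86, "text": "mister quilter is the apostle of the middle classes"}
```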


More help for arguments:

```
python datasets/librispeech/librispeech.py --help
```

### Preparing for Training

```
python tools/compute_mean_std.py
```

It computes the mean and standard deviation of the audio features, and saves them to a file named `./mean_std.npz` by default. This file is used in both training and inference. The default audio feature is the power spectrum; the MFCC feature is also supported. To train and infer with MFCC features, generate this file by

```
python tools/compute_mean_std.py --specgram_type mfcc
```

and specify `--specgram_type mfcc` when running `train.py`, `infer.py`, `evaluate.py` or `tune.py`.
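
For example, to train on MFCC features after generating the MFCC mean and standard deviation file above:

```
python train.py --specgram_type mfcc
```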

More help for arguments:

```
python tools/compute_mean_std.py --help
```

### Training

For GPU Training:

```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py
```

For CPU Training:

```
python train.py --use_gpu=False
```

More help for arguments:

```
python train.py --help
```

### Preparing Language Model

The subsequent steps (inference, parameter tuning, and evaluation) require a language model during decoding.
A compressed language model is provided and can be accessed by

```
cd ./lm
sh run.sh
cd ..
```

### Inference

For GPU inference:

```
CUDA_VISIBLE_DEVICES=0 python infer.py
```

For CPU inference:

```
python infer.py --use_gpu=False
```

More help for arguments:

```
python infer.py --help
```

### Evaluating

```
CUDA_VISIBLE_DEVICES=0 python evaluate.py
```

More help for arguments:

```
python evaluate.py --help
```

### Parameters Tuning

Usually, the parameters $\alpha$ and $\beta$ for the CTC [prefix beam search](https://arxiv.org/abs/1408.2873) decoder need to be tuned after retraining the acoustic model.

For GPU tuning:

```
CUDA_VISIBLE_DEVICES=0 python tune.py
```

For CPU tuning:

```
python tune.py --use_gpu=False
```

More help for arguments:

```
python tune.py --help
```

Then set the decoder parameters to the tuned values before running inference or evaluation.
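
For example, assuming the tuned values are $\alpha = 2.15$ and $\beta = 0.35$, and assuming `infer.py` exposes `--alpha` and `--beta` arguments for the decoder parameters (an assumption; please check `python infer.py --help` for the actual argument names):

```
CUDA_VISIBLE_DEVICES=0 python infer.py --alpha 2.15 --beta 0.35
```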

### Playing with the ASR Demo

A real-time ASR demo is built for users to try out the ASR model with their own voice. Please install the following dependencies on the machine that will run the demo's client (they are not needed on the machine running the demo's server).

For example, on Mac OS X:

```
brew install portaudio
pip install pyaudio
pip install pynput
```
After an acoustic model and a language model are prepared, we can first start the demo's server:

```
CUDA_VISIBLE_DEVICES=0 python demo_server.py
```
Then, in another console, start the demo's client:

```
python demo_client.py
```
On the client console, press and hold the space bar to start talking, and release it when you finish your speech. The decoding results (the inferred transcription) will then be displayed.

The server and the client can also be started on two separate machines, e.g. `demo_client.py` usually runs on a machine with a microphone, while `demo_server.py` usually runs on a remote server with powerful GPUs. First make sure that the two machines have network access to each other, and then use `--host_ip` and `--host_port` in both `demo_server.py` and `demo_client.py` to specify the server machine's actual IP address (instead of the default `localhost`) and TCP port.
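
For example, assuming the server machine's IP address is `192.168.1.10` and TCP port `8086` is free (both values are merely illustrative):

```
# On the server machine:
CUDA_VISIBLE_DEVICES=0 python demo_server.py --host_ip 192.168.1.10 --host_port 8086

# On the client machine:
python demo_client.py --host_ip 192.168.1.10 --host_port 8086
```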


## PaddleCloud Training

If you wish to train DeepSpeech2 on PaddleCloud, please refer to
[Train DeepSpeech2 on PaddleCloud](https://github.com/PaddlePaddle/models/tree/develop/deep_speech_2/cloud).