@@ -42,45 +42,45 @@ Let's take a tiny sampled subset of [LibriSpeech dataset](http://www.openslr.org
- Go to the directory
```bash
cd examples/tiny
```
Notice that this is only a toy example with a tiny sampled subset of LibriSpeech. To try the complete dataset (training would take several days), go to `examples/librispeech` instead.
- Prepare the data
```bash
sh run_data.sh
```
`run_data.sh` will download the dataset, generate manifests, collect the normalizer's statistics and build the vocabulary. Once the data preparation is done, we will find the data (only part of LibriSpeech) downloaded in `~/.cache/paddle/dataset/speech/libri`, the corresponding manifest files generated in `./data/tiny`, as well as a mean-stddev file and a vocabulary file. The script only has to be run the very first time we use this dataset; its outputs are reusable for all further experiments.
- Train your own ASR model
```bash
sh run_train.sh
```
`run_train.sh` will start a training job, with training logs printed to stdout and a model checkpoint for every pass/epoch saved to `./checkpoints/tiny`. We can resume training from these checkpoints, or use them for inference, evaluation and deployment.
- Case inference with an existing model
```bash
sh run_infer.sh
```
`run_infer.sh` will show us some speech-to-text decoding results for several (default: 10) samples with the trained model. The performance might not be good yet, as the current model is only trained on a toy subset of LibriSpeech. To see the results with a better model, we can download a well-trained model (trained for several days on the complete LibriSpeech) and run inference with it:
```bash
sh run_infer_golden.sh
```
- Evaluate an existing model
```bash
sh run_test.sh
```
`run_test.sh` will evaluate the model with the Word Error Rate (or Character Error Rate) metric; a rough sketch of how such an error rate is computed follows this list. Similarly, we can also download a well-trained model and test its performance:
```bash
sh run_test_golden.sh
```
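For reference, the Word Error Rate used by the evaluation step is the edit (Levenshtein) distance between the reference and hypothesis word sequences, divided by the number of reference words; Character Error Rate does the same at the character level. Below is a minimal illustrative Python sketch of this computation, not the project's implementation, and the function name `word_error_rate` is just a placeholder.
```python
# Minimal illustrative sketch of Word Error Rate (WER); not the project's code.
def word_error_rate(reference, hypothesis):
    """Edit distance between word sequences, divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the cat sat", "the cat sit"))  # 1 error / 3 words ≈ 0.33
```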
...
@@ -106,7 +106,7 @@ For how to generate such manifest files, please refer to `data/librispeech/libri
To perform z-score normalization (zero-mean, unit stddev) upon audio features, we first have to estimate the mean and standard deviation of the features from some training samples:
```bash
python tools/compute_mean_std.py \
--num_samples 2000 \
--specgram_type linear \
...
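To illustrate what this normalization amounts to, here is a rough numpy sketch; the feature shapes, dummy data and helper names are assumptions for illustration only and do not reflect the project's actual implementation:
```python
import numpy as np

# Rough sketch of z-score feature normalization; shapes are hypothetical.
# `sample_features` stands in for spectrogram features extracted from a few
# thousand training utterances, each of shape (feature_dim, num_frames).
sample_features = [np.random.rand(161, 200) for _ in range(100)]  # dummy data

stacked = np.concatenate(sample_features, axis=1)  # pool frames of all samples
mean = stacked.mean(axis=1, keepdims=True)         # per-dimension mean
std = stacked.std(axis=1, keepdims=True)           # per-dimension stddev

def normalize(features, eps=1e-20):
    """Zero-mean, unit-stddev normalization of one utterance's features."""
    return (features - mean) / (std + eps)
```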
@@ -121,7 +121,7 @@ It will compute the mean and standard deviation of power spectrum feature with 2
A vocabulary of possible characters is required to convert the transcriptions into lists of token indices for training and, in decoding, to convert lists of indices back into text. Such a character-based vocabulary can be built with `tools/build_vocab.py`.
```bash
python tools/build_vocab.py \
--count_threshold 0 \
--vocab_path data/librispeech/eng_vocab.txt \
...
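As a rough illustration of how such a character vocabulary is used for the text-to-indices and indices-to-text conversion (a simplified sketch with hypothetical helper names, not the project's actual tokenization code):
```python
# Simplified sketch of character-level tokenization with a vocabulary file;
# assumes one character per line, with the line number used as the token index.
def load_vocab(vocab_path):
    with open(vocab_path, encoding="utf8") as f:
        chars = [line.rstrip("\n") for line in f]
    return {ch: idx for idx, ch in enumerate(chars)}, chars

def text_to_ids(text, char_to_id):
    return [char_to_id[ch] for ch in text if ch in char_to_id]

def ids_to_text(ids, id_to_char):
    return "".join(id_to_char[i] for i in ids)

# Hypothetical usage with the vocabulary built above:
# char_to_id, id_to_char = load_vocab("data/librispeech/eng_vocab.txt")
# ids = text_to_ids("hello world", char_to_id)
# print(ids_to_text(ids, id_to_char))
```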
@@ -134,7 +134,7 @@ It will write a vocabulary file `data/librispeech/eng_vocab.txt` with all transc
@@ -212,7 +212,7 @@ Be careful when we are utilizing the data augmentation technique, as improper au
A language model is required to improve the decoder's performance. We have prepared two language models (with lossy compression) for users to download and try: one for English and the other for Mandarin. We can simply run the following to download the prepared language models:
```bash
cd models/lm
sh download_lm_en.sh
sh download_lm_ch.sh
...
@@ -227,13 +227,13 @@ An inference module called `infer.py` is provided for us to infer, decode and vi
@@ -266,7 +266,7 @@ The error rate (default: word error rate; can be set with `--error_rate_type`) w
For more help on arguments:
```bash
python test.py --help
```
or refer to `examples/librispeech/run_test.sh`.
...
@@ -279,7 +279,7 @@ The hyper-parameters $\alpha$ (coefficient for language model scorer) and $\beta
- Tuning with GPU:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python tools/tune.py \
--trainer_count 8 \
...
@@ -293,13 +293,13 @@ The hyper-parameters $\alpha$ (coefficient for language model scorer) and $\beta
- Tuning with CPU:
```bash
python tools/tune.py --use_gpu False
```
After tuning, we can reset $\alpha$ and $\beta$ in the inference and evaluation modules to see if they really help improve the ASR performance.
For more help on arguments:
```bash
python tune.py --help
```
or refer to `examples/librispeech/run_tune.sh`.
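For intuition on what is being tuned: in the common formulation of CTC beam-search decoding with an external language model (which may differ in detail from the exact expression used here), each candidate transcription $W$ for audio $X$ is scored roughly as

$$\log P_{\text{acoustic}}(W \mid X) + \alpha \log P_{\text{LM}}(W) + \beta \,|W|,$$

where $|W|$ is the word count of $W$, so $\alpha$ weights the language model and $\beta$ acts as a word-count (insertion) bonus.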
...
@@ -314,14 +314,14 @@ Then, we take the following steps to submit a training job:
- Go to the directory:
```bash
cd cloud
```
- Upload data:
Data must be uploaded to the PaddleCloud filesystem so that it can be accessed within a cloud job. `pcloud_upload_data.sh` helps with the data packing and uploading:
```bash
sh pcloud_upload_data.sh
```
...
@@ -346,7 +346,7 @@ Then, we take the following steps to submit a training job:
By running:
```bash
sh pcloud_submit.sh
```
we submit a training job to PaddleCloud. The job name will be printed when the submission finishes, and the training job will then be running on PaddleCloud.
...
@@ -355,12 +355,12 @@ Then, we take the following steps to submit a training job:
Run this to list all the jobs you have submitted, as well as their running status:
```bash
paddlecloud get jobs
```
Run this and the corresponding job's logs will be printed.