# SpeedySpeech with CSMSC
This example contains code used to train a [SpeedySpeech](http://arxiv.org/abs/2008.03802) model with the [Chinese Standard Mandarin Speech Corpus](https://www.data-baker.com/open_source.html). NOTE that we only implement the student part of the SpeedySpeech model. The ground truth alignment used to train the model is extracted from the dataset using [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner).

## Dataset
### Download and Extract
Download CSMSC from its [Official Website](https://test.data-baker.com/data/index/source).

### Get MFA Result and Extract
We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for SpeedySpeech.
You can download it from here: [baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz), or train your own MFA model by referring to the [use_mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/use_mfa) in our repo.

## Get Started
Assume the path to the dataset is `~/datasets/BZNSYP`.
Assume the path to the MFA result of CSMSC is `./baker_alignment_tone`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
    - synthesize waveform from `metadata.jsonl`.
    - synthesize waveform from text file.
5. inference using static model.
```bash
./run.sh
```
You can choose a range of stages to run, or set `stage` equal to `stop-stage` to run a single stage. For example, the following command only preprocesses the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.

```text
dump
├── dev
│   ├── norm
│   └── raw
├── test
│   ├── norm
│   └── raw
└── train
    ├── norm
    ├── raw
    └── feats_stats.npy
```

The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and a `raw` subfolder. The `raw` folder contains the log magnitude of the mel spectrogram of each utterance, while the `norm` folder contains the normalized spectrograms. The statistics used to normalize the spectrograms are computed from the training set and stored in `dump/train/feats_stats.npy`.

There is also a `metadata.jsonl` in each subfolder. It is a table-like file that contains the phones, tones, durations, spectrogram path, and id of each utterance.
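
As a sketch of how such a file can be consumed, the snippet below writes a single sample line and reads it back. The sample line and its field names are illustrative, not the exact schema produced by preprocessing.

```shell
# Write one illustrative metadata line (field names are assumptions, not the exact schema)
cat > /tmp/metadata_sample.jsonl <<'EOF'
{"utt_id": "009901", "phones": ["h", "ao3"], "tones": ["0", "3"], "durations": [3, 5], "feats": "dump/train/norm/feats/009901.npy"}
EOF
# Each line is a standalone JSON object, so the file can be read line by line
python3 -c 'import json; rec = json.loads(open("/tmp/metadata_sample.jsonl").readline()); print(rec["utt_id"], len(rec["phones"]))'
```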

### Model Training
`./local/train.sh` calls `${BIN_DIR}/train.py`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1
```
Here's the complete help message.
```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
                [--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
                [--ngpu NGPU] [--verbose VERBOSE]
                [--use-relative-path USE_RELATIVE_PATH]
                [--phones-dict PHONES_DICT] [--tones-dict TONES_DICT]

Train a SpeedySpeech model with a single speaker dataset.

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       config file.
  --train-metadata TRAIN_METADATA
                        training data.
  --dev-metadata DEV_METADATA
                        dev data.
  --output-dir OUTPUT_DIR
                        output dir.
  --ngpu NGPU           if ngpu == 0, use cpu.
  --verbose VERBOSE     verbose.
  --use-relative-path USE_RELATIVE_PATH
                        whether use relative path in metadata
  --phones-dict PHONES_DICT
                        phone vocabulary file.
  --tones-dict TONES_DICT
                        tone vocabulary file.
```

1. `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml`.
2. `--train-metadata` and `--dev-metadata` should be the metadata file in the normalized subfolder of `train` and `dev` in the `dump` folder.
3. `--output-dir` is the directory to save the results of the experiment. Checkpoints are saved in `checkpoints/` inside this directory.
4. `--ngpu` is the number of gpus to use; if ngpu == 0, the cpu is used.
5. `--phones-dict` is the path of the phone vocabulary file.
6. `--tones-dict` is the path of the tone vocabulary file.
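
Putting the flags together, a direct invocation of `train.py` might look like the sketch below. The metadata and dictionary paths are illustrative assumptions and depend on where the preprocessing stage wrote them; `run.sh` / `local/train.sh` normally assemble these for you.

```shell
# A hedged sketch, not the exact invocation used by local/train.sh; paths are illustrative
python3 ${BIN_DIR}/train.py \
    --config=conf/default.yaml \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --output-dir=exp/default \
    --ngpu=1 \
    --phones-dict=dump/phone_id_map.txt \
    --tones-dict=dump/tone_id_map.txt
```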

### Synthesizing
We use [parallel wavegan](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/voc1) as the neural vocoder.
Download pretrained parallel wavegan model from [pwg_baker_ckpt_0.4.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/pwgan/pwg_baker_ckpt_0.4.zip) and unzip it.
```bash
unzip pwg_baker_ckpt_0.4.zip
```
The Parallel WaveGAN checkpoint contains the files listed below.
```text
pwg_baker_ckpt_0.4
├── pwg_default.yaml               # default config used to train parallel wavegan
├── pwg_snapshot_iter_400000.pdz   # model parameters of parallel wavegan
└── pwg_stats.npy                  # statistics used to normalize spectrogram when training parallel wavegan
```
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize.py [-h] [--speedyspeech-config SPEEDYSPEECH_CONFIG]
                     [--speedyspeech-checkpoint SPEEDYSPEECH_CHECKPOINT]
                     [--speedyspeech-stat SPEEDYSPEECH_STAT]
                     [--pwg-config PWG_CONFIG]
                     [--pwg-checkpoint PWG_CHECKPOINT] [--pwg-stat PWG_STAT]
                     [--phones-dict PHONES_DICT] [--tones-dict TONES_DICT]
                     [--test-metadata TEST_METADATA] [--output-dir OUTPUT_DIR]
                     [--inference-dir INFERENCE_DIR] [--ngpu NGPU]
                     [--verbose VERBOSE]

Synthesize with speedyspeech & parallel wavegan.

optional arguments:
  -h, --help            show this help message and exit
  --speedyspeech-config SPEEDYSPEECH_CONFIG
                        config file for speedyspeech.
  --speedyspeech-checkpoint SPEEDYSPEECH_CHECKPOINT
                        speedyspeech checkpoint to load.
  --speedyspeech-stat SPEEDYSPEECH_STAT
                        mean and standard deviation used to normalize
                        spectrogram when training speedyspeech.
  --pwg-config PWG_CONFIG
                        config file for parallelwavegan.
  --pwg-checkpoint PWG_CHECKPOINT
                        parallel wavegan generator parameters to load.
  --pwg-stat PWG_STAT   mean and standard deviation used to normalize
                        spectrogram when training speedyspeech.
  --phones-dict PHONES_DICT
                        phone vocabulary file.
  --tones-dict TONES_DICT
                        tone vocabulary file.
  --test-metadata TEST_METADATA
                        test metadata
  --output-dir OUTPUT_DIR
                        output dir
  --inference-dir INFERENCE_DIR
                        dir to save inference models
  --ngpu NGPU           if ngpu == 0, use cpu.
  --verbose VERBOSE     verbose
```
`./local/synthesize_e2e.sh` calls `${BIN_DIR}/synthesize_e2e.py`, which can synthesize waveform from text file.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize_e2e.py [-h] [--speedyspeech-config SPEEDYSPEECH_CONFIG]
                         [--speedyspeech-checkpoint SPEEDYSPEECH_CHECKPOINT]
                         [--speedyspeech-stat SPEEDYSPEECH_STAT]
                         [--pwg-config PWG_CONFIG]
                         [--pwg-checkpoint PWG_CHECKPOINT]
                         [--pwg-stat PWG_STAT] [--text TEXT]
                         [--phones-dict PHONES_DICT] [--tones-dict TONES_DICT]
                         [--output-dir OUTPUT_DIR]
                         [--inference-dir INFERENCE_DIR] [--verbose VERBOSE]
                         [--ngpu NGPU]

Synthesize with speedyspeech & parallel wavegan.

optional arguments:
  -h, --help            show this help message and exit
  --speedyspeech-config SPEEDYSPEECH_CONFIG
                        config file for speedyspeech.
  --speedyspeech-checkpoint SPEEDYSPEECH_CHECKPOINT
                        speedyspeech checkpoint to load.
  --speedyspeech-stat SPEEDYSPEECH_STAT
                        mean and standard deviation used to normalize
                        spectrogram when training speedyspeech.
  --pwg-config PWG_CONFIG
                        config file for parallelwavegan.
  --pwg-checkpoint PWG_CHECKPOINT
                        parallel wavegan checkpoint to load.
  --pwg-stat PWG_STAT   mean and standard deviation used to normalize
                        spectrogram when training speedyspeech.
  --text TEXT           text to synthesize, a 'utt_id sentence' pair per line
  --phones-dict PHONES_DICT
                        phone vocabulary file.
  --tones-dict TONES_DICT
                        tone vocabulary file.
  --output-dir OUTPUT_DIR
                        output dir
  --inference-dir INFERENCE_DIR
                        dir to save inference models
  --verbose VERBOSE     verbose
  --ngpu NGPU           if ngpu == 0, use cpu.
```
1. `--speedyspeech-config`, `--speedyspeech-checkpoint`, `--speedyspeech-stat` are arguments for speedyspeech, which correspond to the 3 files in the speedyspeech pretrained model.
2. `--pwg-config`, `--pwg-checkpoint`, `--pwg-stat` are arguments for parallel wavegan, which correspond to the 3 files in the parallel wavegan pretrained model.
3. `--text` is the text file, which contains sentences to synthesize.
4. `--output-dir` is the directory to save synthesized audio files.
5. `--inference-dir` is the directory to save the exported model, which can be used with Paddle Inference.
6. `--ngpu` is the number of gpus to use; if ngpu == 0, the cpu is used.
7. `--phones-dict` is the path of the phone vocabulary file.
8. `--tones-dict` is the path of the tone vocabulary file.
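
The `--text` file pairs an utterance id with a sentence on each line. A minimal sketch of creating such a file (the ids and sentences here are made up):

```shell
# Create a hypothetical input file in the "utt_id sentence" one-pair-per-line format
printf '%s\n' '001 欢迎使用语音合成系统。' '002 今天天气真不错。' > /tmp/sentences_sample.txt
cat /tmp/sentences_sample.txt
```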

### Inferencing
After synthesizing, static models of SpeedySpeech and PWGAN can be found in `${train_output_path}/inference`.
`./local/inference.sh` calls `${BIN_DIR}/inference.py`, which provides a Paddle static model inference example for SpeedySpeech + PWGAN synthesis.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/inference.sh ${train_output_path}
```

## Pretrained Model
A pretrained SpeedySpeech model trained with no silence at the edges of audio: [speedyspeech_nosil_baker_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/speedyspeech/speedyspeech_nosil_baker_ckpt_0.5.zip).

The static model can be downloaded here: [speedyspeech_nosil_baker_static_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/speedyspeech/speedyspeech_nosil_baker_static_0.5.zip).

Model | Step | eval/loss | eval/l1_loss | eval/duration_loss | eval/ssim_loss
:-------------:| :------------:| :-----: | :-----: | :--------:|:--------:
default (1 gpu)| 11400 |0.83655|0.42324|0.03211|0.38119

SpeedySpeech checkpoint contains files listed below.
```text
speedyspeech_nosil_baker_ckpt_0.5
├── default.yaml            # default config used to train speedyspeech
├── feats_stats.npy         # statistics used to normalize spectrogram when training speedyspeech
├── phone_id_map.txt        # phone vocabulary file when training speedyspeech
├── snapshot_iter_11400.pdz # model parameters and optimizer states
└── tone_id_map.txt         # tone vocabulary file when training speedyspeech
```
You can use the following script to synthesize `${BIN_DIR}/../sentences.txt` using the pretrained SpeedySpeech and Parallel WaveGAN models.
```bash
source path.sh

FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
  --speedyspeech-config=speedyspeech_nosil_baker_ckpt_0.5/default.yaml \
  --speedyspeech-checkpoint=speedyspeech_nosil_baker_ckpt_0.5/snapshot_iter_11400.pdz \
  --speedyspeech-stat=speedyspeech_nosil_baker_ckpt_0.5/feats_stats.npy \
  --pwg-config=pwg_baker_ckpt_0.4/pwg_default.yaml \
  --pwg-checkpoint=pwg_baker_ckpt_0.4/pwg_snapshot_iter_400000.pdz \
  --pwg-stat=pwg_baker_ckpt_0.4/pwg_stats.npy \
  --text=${BIN_DIR}/../sentences.txt \
  --output-dir=exp/default/test_e2e \
  --inference-dir=exp/default/inference \
  --phones-dict=speedyspeech_nosil_baker_ckpt_0.5/phone_id_map.txt \
  --tones-dict=speedyspeech_nosil_baker_ckpt_0.5/tone_id_map.txt
```