# Deep Voice 3

PaddlePaddle dynamic graph implementation of Deep Voice 3, a convolutional network based text-to-speech generative model. The implementation is based on [Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning](https://arxiv.org/abs/1710.07654).

We implement Deep Voice 3 in Paddle Fluid's dynamic graph mode, which is convenient for building flexible network architectures.

## Dataset

We experiment with the LJSpeech dataset. Download and unzip [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).

```bash
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```
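
The extracted `LJSpeech-1.1` folder pairs each `.wav` file with its transcript in a pipe-delimited metadata file (one record per line: file id, raw transcript, normalized transcript). A minimal sketch of parsing one such record (the helper name is ours, not part of this project):

```python
# Sketch: parse one record of LJSpeech's metadata file.
# Format: <file id>|<raw transcript>|<normalized transcript>
def parse_metadata_line(line):
    fname, raw, normalized = line.strip().split("|")
    return fname, raw, normalized

example = "LJ001-0001|Printing, in the only sense|Printing, in the only sense"
fname, raw, normalized = parse_metadata_line(example)
print(fname)  # LJ001-0001
```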

## Model Architecture

![Deep Voice 3 model architecture](./images/model_architecture.png)

The model consists of an encoder, a decoder and a converter (and a speaker embedding for multispeaker models). The encoder and the decoder together form the seq2seq part of the model, and the converter forms the postnet part.
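
The data flow among the three components can be sketched as follows. This is a conceptual outline only, not the actual implementation; all class interfaces and return values here are illustrative placeholders:

```python
# Conceptual sketch of the Deep Voice 3 component layout
# (illustrative only; not the real Parakeet classes).

class Encoder:
    # text -> (keys, values) consumed by the decoder's attention
    def __call__(self, text):
        return "keys", "values"

class Decoder:
    # autoregressively predicts mel spectrogram frames while attending
    # over the encoder outputs; its hidden states feed the postnet
    def __call__(self, keys, values, mel_inputs):
        return "mel_outputs", "decoder_states"

class Converter:
    # postnet: maps decoder states to a linear spectrogram
    def __call__(self, decoder_states):
        return "linear_spectrogram"

def deep_voice_3(text, mel_inputs):
    keys, values = Encoder()(text)          # seq2seq part: encoder...
    mel, states = Decoder()(keys, values, mel_inputs)  # ...and decoder
    linear = Converter()(states)            # postnet part
    return mel, linear
```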

## Project Structure

```text
├── data.py          data processing
├── configs/         (example) configuration files
├── sentences.txt    sample sentences
├── synthesis.py     script to synthesize waveform from text
├── train.py         script to train a model
└── utils.py         utility functions
```

## Train

Train the model using `train.py`. For usage, run `python train.py --help`.

```text
usage: train.py [-h] [-c CONFIG] [-s DATA] [--checkpoint CHECKPOINT]
                [-o OUTPUT] [-g DEVICE]

Train a Deep Voice 3 model with LJSpeech dataset.

optional arguments:
  -h, --help                      show this help message and exit
  -c CONFIG, --config CONFIG      experiment config
  -s DATA, --data DATA            The path of the LJSpeech dataset.
  --checkpoint CHECKPOINT         checkpoint to load
  -o OUTPUT, --output OUTPUT      The directory to save result.
  -g DEVICE, --device DEVICE      device to use
```

- `--config` is the configuration file to use. The provided `ljspeech.yaml` can be used directly, or you can change values in it and train the model with a different configuration.
- `--data` is the path of the LJSpeech dataset: the folder extracted from the downloaded archive (the one that contains `metadata.csv`).
- `--checkpoint` is the path of a checkpoint. If provided, the model loads the checkpoint before training.
- `--output` is the directory where all results are saved. The structure of the output directory is shown below.

```text
├── checkpoints      # checkpoint
├── log              # tensorboard log
└── states           # train and evaluation results
    ├── alignments   # attention alignment plots
    ├── lin_spec     # linear spectrogram
    ├── mel_spec     # mel spectrogram
    └── waveform     # waveform (.wav files)
```

If the `checkpoints` directory is not empty and `--checkpoint` is not specified, training resumes from the latest checkpoint automatically.
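
The "latest checkpoint" selection can be sketched as follows, assuming the `model_step_<step>` naming pattern that appears in this README's synthesis example (the real training script may implement this differently):

```python
import re

# Sketch: given checkpoint file names like "model_step_000010000",
# pick the one with the highest training step.
def latest_checkpoint(names):
    steps = []
    for name in names:
        m = re.match(r"model_step_(\d+)", name)
        if m:
            steps.append((int(m.group(1)), name))
    return max(steps)[1] if steps else None

print(latest_checkpoint(["model_step_000010000", "model_step_005000000"]))
# model_step_005000000
```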

- `--device` is the device (gpu id) to use for training. `-1` means CPU.

Example script:

```bash
python train.py --config=configs/ljspeech.yaml --data=./LJSpeech-1.1/ --output=experiment --device=0
```

You can monitor the training log with TensorBoard:

```bash
cd experiment/log
tensorboard --logdir=.
```

## Synthesis
```text
usage: synthesis.py [-h] [-c CONFIG] [-g DEVICE] checkpoint text output_path

Synthesize waveform from a checkpoint.

positional arguments:
  checkpoint            checkpoint to load.
  text                  text file to synthesize
  output_path           path to save results

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        experiment config.
  -g DEVICE, --device DEVICE
                        device to use
```

- `--config` is the configuration file to use. Use the same configuration with which you trained your model.
- `checkpoint` is the checkpoint to load.
- `text` is the text file to synthesize.
- `output_path` is the directory to save results; it contains the generated audio files (`*.wav`) and attention plots (`*.png`) for each sentence.
- `--device` is the device (gpu id) to use for synthesis. `-1` means CPU.

Example script:

```bash
python synthesis.py --config=configs/ljspeech.yaml --device=0 experiment/checkpoints/model_step_005000000 sentences.txt generated
```