PaddlePaddle dynamic graph implementation of WaveNet, a convolutional network based vocoder. WaveNet was originally proposed in [WaveNet: A Generative Model for Raw Audio](https://arxiv.org/abs/1609.03499). However, in this experiment, the implementation follows the teacher model in [ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech](https://arxiv.org/abs/1807.07281).
## Dataset
...
...
```bash
tar xjvf LJSpeech-1.1.tar.bz2
```
## Train
Train the model using train.py. For help on usage, try `python train.py --help`.
1. `--config` is the configuration file to use. The provided configurations can be used directly, and you can also change some values in the configuration file to train the model with a different configuration.
2. `--data` is the path of the LJSpeech dataset, the folder extracted from the downloaded archive (the folder which contains metadata.txt).
3. `--resume` is the path of the checkpoint. If it is provided, the model loads the checkpoint before training.
4. `--output` is the directory to save results; all results are saved in this directory. The structure of the output directory is shown below.
```text
...
...
```
5. `--device` is the device (gpu id) to use for training. `-1` means CPU.
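For example, a training run might be launched as in the sketch below. The config filename, dataset path, and checkpoint path are hypothetical placeholders, not files from the repository; only the flags documented above are used.

```bash
# Train from scratch; results are written to ./experiment (hypothetical paths).
python train.py --config=./config.yaml --data=./LJSpeech-1.1/ --output=./experiment --device=0

# Resume training from a previously saved checkpoint (hypothetical checkpoint path).
python train.py --config=./config.yaml --data=./LJSpeech-1.1/ --output=./experiment --device=0 \
    --resume=./experiment/checkpoints/step-500000
```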
## Synthesis

```text
Synthesize valid data from LJSpeech with a WaveNet model.

positional arguments:
  checkpoint            checkpoint to load.
...
...
```
1. `--config` is the configuration file to use. You should use the same configuration with which you trained your model.
2. `--data` is the path of the LJSpeech dataset. A dataset is not needed for synthesis, but since the input is a mel spectrogram, we need to get mel spectrograms from audio files.
3. `checkpoint` is the checkpoint to load.
4. `output_path` is the directory to save results; it contains the generated audio files (`*.wav`).
5. `--device` is the device (gpu id) to use for synthesis. `-1` means CPU.
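Putting it together, a synthesis run might look like the sketch below; the config, checkpoint, and output paths are hypothetical placeholders, and `checkpoint` and `output_path` are the positional arguments described above.

```bash
# Synthesize waveforms from the validation data, writing *.wav files to ./generated
# (hypothetical paths for the config, checkpoint, and output directory).
python synthesis.py --config=./config.yaml --data=./LJSpeech-1.1/ --device=0 \
    ./experiment/checkpoints/step-500000 ./generated
```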