From 00cd98437fe075ee336315b4df02058bb443bf4f Mon Sep 17 00:00:00 2001
From: zhaokexin01
Date: Fri, 6 Mar 2020 08:45:54 +0800
Subject: [PATCH] Update README.md

---
 examples/wavenet/README.md | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/examples/wavenet/README.md b/examples/wavenet/README.md
index caed5d9..2064d1b 100644
--- a/examples/wavenet/README.md
+++ b/examples/wavenet/README.md
@@ -1,6 +1,6 @@
-# Wavenet
+# WaveNet
 
-Paddle implementation of wavenet in dynamic graph, a convolutional network based vocoder. Wavenet is proposed in [WaveNet: A Generative Model for Raw Audio](https://arxiv.org/abs/1609.03499), but in thie experiment, the implementation follows the teacher model in [ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech](arxiv.org/abs/1807.07281).
+PaddlePaddle dynamic graph implementation of WaveNet, a convolutional network based vocoder. WaveNet is originally proposed in [WaveNet: A Generative Model for Raw Audio](https://arxiv.org/abs/1609.03499). However, in this experiment, the implementation follows the teacher model in [ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech](arxiv.org/abs/1807.07281).
 
 ## Dataset
 
@@ -24,13 +24,13 @@ tar xjvf LJSpeech-1.1.tar.bz2
 
 ## Train
 
-Train the model using train.py, follow the usage displayed by `python train.py --help`.
+Train the model using train.py. For help on usage, try `python train.py --help`.
 
 ```text
 usage: train.py [-h] [--data DATA] [--config CONFIG] [--output OUTPUT]
                 [--device DEVICE] [--resume RESUME]
 
-Train a wavenet model with LJSpeech.
+Train a WaveNet model with LJSpeech.
 
 optional arguments:
   -h, --help       show this help message and exit
@@ -43,7 +43,7 @@ optional arguments:
 
 1. `--config` is the configuration file to use. The provided configurations can be used directly. And you can change some values in the configuration file and train the model with a different config.
 2. `--data` is the path of the LJSpeech dataset, the extracted folder from the downloaded archive (the folder which contains metadata.txt).
-3. `--resume` is the path of the checkpoint. If it is provided, the model would load the checkpoint before trainig.
+3. `--resume` is the path of the checkpoint. If it is provided, the model would load the checkpoint before training.
 4. `--output` is the directory to save results, all result are saved in this directory. The structure of the output directory is shown below.
 
 ```text
@@ -53,13 +53,13 @@ optional arguments:
 
 5. `--device` is the device (gpu id) to use for training. `-1` means CPU.
 
-example script:
+Example script:
 
 ```bash
 python train.py --config=./configs/wavenet_single_gaussian.yaml --data=./LJSpeech-1.1/ --output=experiment --device=0
 ```
 
-You can monitor training log via tensorboard, using the script below.
+You can monitor training log via TensorBoard, using the script below.
 
 ```bash
 cd experiment/log
@@ -71,7 +71,7 @@ tensorboard --logdir=.
 usage: synthesis.py [-h] [--data DATA] [--config CONFIG] [--device DEVICE]
                     checkpoint output
 
-Synthesize valid data from LJspeech with a wavenet model.
+Synthesize valid data from LJspeech with a WaveNet model.
 
 positional arguments:
   checkpoint            checkpoint to load.
@@ -85,12 +85,12 @@ optional arguments:
 ```
 
 1. `--config` is the configuration file to use. You should use the same configuration with which you train you model.
-2. `--data` is the path of the LJspeech dataset. A dataset is not needed for synthesis, but since the input is mel spectrogram, we need to get mel spectrogram from audio files. 
+2. `--data` is the path of the LJspeech dataset. A dataset is not needed for synthesis, but since the input is mel spectrogram, we need to get mel spectrogram from audio files.
 3. `checkpoint` is the checkpoint to load.
 4. `output_path` is the directory to save results. The output path contains the generated audio files (`*.wav`).
 5. `--device` is the device (gpu id) to use for training. `-1` means CPU.
 
-example script:
+Example script:
 
 ```bash
 python synthesis.py --config=./configs/wavenet_single_gaussian.yaml --data=./LJSpeech-1.1/ --device=0 experiment/checkpoints/step_500000 generated
--
GitLab
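For reference, the README text above documents a `--resume` option for train.py but only shows a fresh training run. A minimal sketch of resuming from a saved checkpoint, assuming checkpoints are written under `experiment/checkpoints/` as in the synthesis example (the step number here is purely illustrative, not from the original):

```bash
# Sketch only: resume training from an existing checkpoint.
# Assumes a previous run wrote checkpoints to experiment/checkpoints/
# (the same layout the synthesis example uses); replace step_500000
# with the step you actually want to resume from.
python train.py \
    --config=./configs/wavenet_single_gaussian.yaml \
    --data=./LJSpeech-1.1/ \
    --output=experiment \
    --device=0 \
    --resume=experiment/checkpoints/step_500000
```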