From e26b0c104e9dc3c6a56ecc11d6c5b784c423c24e Mon Sep 17 00:00:00 2001
From: zhaokexin01
Date: Thu, 5 Mar 2020 09:51:58 +0800
Subject: [PATCH] Update README.md

---
 examples/clarinet/README.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/examples/clarinet/README.md b/examples/clarinet/README.md
index 6046327..58bca99 100644
--- a/examples/clarinet/README.md
+++ b/examples/clarinet/README.md
@@ -1,6 +1,6 @@
 # Clarinet

-Paddle implementation of clarinet in dynamic graph, a convolutional network based vocoder. The implementation is based on the paper [ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech](arxiv.org/abs/1807.07281).
+PaddlePaddle dynamic graph implementation of ClariNet, a convolutional network based vocoder. The implementation is based on the paper [ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech](arxiv.org/abs/1807.07281).

 ## Dataset

@@ -30,7 +30,7 @@ Train the model using train.py, follow the usage displayed by `python train.py -
 usage: train.py [-h] [--config CONFIG] [--device DEVICE] [--output OUTPUT]
                 [--data DATA] [--resume RESUME] [--wavenet WAVENET]

-train a clarinet model with LJspeech and a trained wavenet model.
+train a ClariNet model with LJspeech and a trained WaveNet model.

 optional arguments:
   -h, --help            show this help message and exit
@@ -54,12 +54,12 @@ optional arguments:
 ```

 5. `--device` is the device (gpu id) to use for training. `-1` means CPU.
-6. `--wavenet` is the path of the wavenet checkpoint to load. if you do not specify `--resume`, then this must be provided.
+6. `--wavenet` is the path of the wavenet checkpoint to load. If you do not specify `--resume`, then this must be provided.

-Before you start training a clarinet model, you should have trained a wavenet model with single gaussian as output distribution. Make sure the config for teacher matches that for the trained model.
+Before you start training a ClariNet model, you should have trained a WaveNet model with single Gaussian output distribution. Make sure the config of the teacher model matches that of the trained model.

-example script:
+Example script:

 ```bash
 python train.py --config=./configs/clarinet_ljspeech.yaml --data=./LJSpeech-1.1/ --output=experiment --device=0 --conditioner=wavenet_checkpoint/conditioner --conditioner=wavenet_checkpoint/teacher
@@ -77,7 +77,7 @@ tensorboard --logdir=.
 usage: synthesis.py [-h] [--config CONFIG] [--device DEVICE] [--data DATA]
                     checkpoint output

-train a clarinet model with LJspeech and a trained wavenet model.
+train a ClariNet model with LJspeech and a trained WaveNet model.

 positional arguments:
   checkpoint            checkpoint to load from.
@@ -96,7 +96,7 @@ optional arguments:
 4. `output_path` is the directory to save results. The output path contains the generated audio files (`*.wav`).
 5. `--device` is the device (gpu id) to use for training. `-1` means CPU.

-example script:
+Example script:

 ```bash
 python synthesis.py --config=./configs/wavenet_single_gaussian.yaml --data=./LJSpeech-1.1/ --device=0 experiment/checkpoints/step_500000 generated
--
GitLab