# Clarinet
PaddlePaddle dynamic graph implementation of ClariNet, a convolutional network based vocoder. The implementation is based on the paper [ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech](arxiv.org/abs/1807.07281).
## Dataset
## Train

Train the model using train.py, following the usage displayed by `python train.py --help`.

```text
usage: train.py [-h] [--config CONFIG] [--device DEVICE] [--output OUTPUT]
                [--data DATA] [--resume RESUME] [--wavenet WAVENET]
train a ClariNet model with LJspeech and a trained WaveNet model.
optional arguments:
  -h, --help         show this help message and exit
```
5. `--device` is the device (gpu id) to use for training. `-1` means CPU.
6. `--wavenet` is the path of the wavenet checkpoint to load. If you do not specify `--resume`, then this must be provided.
Before you start training a ClariNet model, you should have trained a WaveNet model with single Gaussian output distribution. Make sure the config of the teacher model matches that of the trained model.
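If a run is interrupted, item 6 above implies you can restart from a previously saved ClariNet checkpoint with `--resume` instead of pointing `--wavenet` at the teacher again. A minimal sketch, assuming checkpoints are written under the output directory; the checkpoint path and step number are hypothetical:

```bash
# Resume an interrupted run from its own checkpoint rather than starting
# from a WaveNet teacher checkpoint. The path below is hypothetical and
# depends on where train.py saves checkpoints under --output.
python train.py --config=./configs/clarinet_ljspeech.yaml --data=./LJSpeech-1.1/ \
    --output=experiment --device=0 --resume=experiment/checkpoints/step_100000
```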
Example script:
```bash
python train.py --config=./configs/clarinet_ljspeech.yaml --data=./LJSpeech-1.1/ --output=experiment --device=0 --conditioner=wavenet_checkpoint/conditioner --conditioner=wavenet_checkpoint/teacher
```

Training progress can be monitored with TensorBoard by running `tensorboard --logdir=.` inside the log directory of the experiment output.

## Synthesis

```text
usage: synthesis.py [-h] [--config CONFIG] [--device DEVICE] [--data DATA]
                    checkpoint output
train a ClariNet model with LJspeech and a trained WaveNet model.
positional arguments:
  checkpoint       checkpoint to load from.
```
4. `output` is the directory to save results. The output path contains the generated audio files (`*.wav`).
5. `--device` is the device (gpu id) to use for synthesis. `-1` means CPU.
Example script:
```bash
python synthesis.py --config=./configs/wavenet_single_gaussian.yaml --data=./LJSpeech-1.1/ --device=0 experiment/checkpoints/step_500000 generated
```
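As a quick sanity check on the results, any audio tool can report the format of the generated files. A small sketch using SoX's `soxi`, assuming SoX is installed and the `generated` output directory from the example above:

```bash
# Print sample rate, channel count and duration of each synthesized utterance.
soxi generated/*.wav
```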