Commit 53d7c34c authored by chenfeiyu

add entries for wavenet and clarinet in parakeet/README.md

Parent f6d7016d
@@ -6,10 +6,10 @@ Parakeet aims to provide a flexible, efficient and state-of-the-art text-to-spee
<img src="images/logo.png" width=450 /> <br>
</div>
In particular, it features the latest [WaveFlow](https://arxiv.org/abs/1912.01219) model proposed by Baidu Research.
- WaveFlow can synthesize 22.05 kHz high-fidelity speech around 40x faster than real-time on an Nvidia V100 GPU without engineered inference kernels, which is faster than WaveGlow and several orders of magnitude faster than WaveNet.
- WaveFlow is a small-footprint flow-based model for raw audio. It has only 5.9M parameters, which is 15x smaller than WaveGlow (87.9M) and comparable to WaveNet (4.6M).
- WaveFlow is directly trained with maximum likelihood without probability density distillation and auxiliary losses as used in Parallel WaveNet and ClariNet, which simplifies the training pipeline and reduces the cost of development (see the sketch below).
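
The maximum-likelihood training mentioned in the last bullet is the standard change-of-variables objective for flow-based models: an invertible transform maps the waveform to a latent, and the loss is the negative log-likelihood under a simple prior plus the log-determinant of the transform's Jacobian. Below is a minimal NumPy sketch of that objective; `flow_forward` is a hypothetical placeholder for the invertible transform, not Parakeet's actual API.

```python
import numpy as np

def flow_nll(x, flow_forward):
    """Negative log-likelihood of a flow-based audio model (sketch).

    `flow_forward` is a hypothetical invertible transform that maps a batch of
    waveforms x (shape [batch, samples]) to latents z of the same shape and
    returns log|det dz/dx| per example.
    """
    z, log_det = flow_forward(x)
    # log p(z) under a standard Gaussian prior, summed over samples
    log_pz = -0.5 * (z ** 2 + np.log(2.0 * np.pi)).sum(axis=-1)
    log_px = log_pz + log_det          # change-of-variables formula
    return -log_px.mean()              # minimizing NLL = maximizing likelihood

# Toy usage: an identity "flow" with zero log-determinant.
identity_flow = lambda x: (x, np.zeros(x.shape[0]))
x = np.random.randn(4, 22050)          # a toy batch of 1-second 22.05 kHz clips
print(flow_nll(x, identity_flow))
```

Because this objective is exact, no teacher model or auxiliary losses are needed, which is the simplification the bullet refers to.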
### Setup
@@ -43,10 +43,12 @@ nltk.download("cmudict")
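
The hunk header above carries the Setup section's context line `nltk.download("cmudict")`: the toolkit's text frontend relies on NLTK's CMU Pronouncing Dictionary for pronunciation lookups. A minimal sketch of that one-time download step (the extra `punkt` download is an assumption; only `cmudict` appears in this diff):

```python
# One-time NLTK data setup referenced by the Setup section above.
import nltk

nltk.download("cmudict")  # CMU Pronouncing Dictionary, used for pronunciation lookups
nltk.download("punkt")    # sentence tokenizer; an assumption, not shown in this diff
```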
## Related Research
- [Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning](https://arxiv.org/abs/1710.07654).
- [Neural Speech Synthesis with Transformer Network](https://arxiv.org/abs/1809.08895).
- [FastSpeech: Fast, Robust and Controllable Text to Speech](https://arxiv.org/abs/1905.09263).
- [WaveFlow: A Compact Flow-based Model for Raw Audio](https://arxiv.org/abs/1912.01219).
- [WaveNet: A Generative Model for Raw Audio](https://arxiv.org/abs/1609.03499).
- [ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech](https://arxiv.org/abs/1807.07281).
## Examples
@@ -54,6 +56,8 @@ nltk.download("cmudict")
- [Train a TransformerTTS model with ljspeech dataset](./examples/transformer_tts)
- [Train a FastSpeech model with ljspeech dataset](./examples/fastspeech)
- [Train a WaveFlow model with ljspeech dataset](./examples/waveflow)
- [Train a WaveNet model with ljspeech dataset](./examples/wavenet)
- [Train a ClariNet model with ljspeech dataset](./examples/clarinet)
## Copyright and License
...