([简体中文](./README_cn.md)|English)



<p align="center">
  <img src="./docs/images/PaddleSpeech_logo.png" />
</p>

<p align="center">
    <a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache%202-red.svg"></a>
    <a href="https://github.com/PaddlePaddle/PaddleSpeech/releases"><img src="https://img.shields.io/github/v/release/PaddlePaddle/PaddleSpeech?color=ffa"></a>
    <a href="support os"><img src="https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-pink.svg"></a>
    <a href=""><img src="https://img.shields.io/badge/python-3.7+-aff.svg"></a>
    <a href="https://github.com/PaddlePaddle/PaddleSpeech/graphs/contributors"><img src="https://img.shields.io/github/contributors/PaddlePaddle/PaddleSpeech?color=9ea"></a>
    <a href="https://github.com/PaddlePaddle/PaddleSpeech/commits"><img src="https://img.shields.io/github/commit-activity/m/PaddlePaddle/PaddleSpeech?color=3af"></a>
    <a href="https://github.com/PaddlePaddle/PaddleSpeech/issues"><img src="https://img.shields.io/github/issues/PaddlePaddle/PaddleSpeech?color=9cc"></a>
    <a href="https://github.com/PaddlePaddle/PaddleSpeech/stargazers"><img src="https://img.shields.io/github/stars/PaddlePaddle/PaddleSpeech?color=ccf"></a>
    <a href="=https://pypi.org/project/paddlespeech/"><img src="https://img.shields.io/pypi/dm/PaddleSpeech"></a>
    <a href="=https://pypi.org/project/paddlespeech/"><img src="https://static.pepy.tech/badge/paddlespeech"></a>
    <a href="https://huggingface.co/spaces"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"></a>
</p>
<div align="center">  
<h3>
  <a href="#quick-start"> Quick Start </a>
  | <a href="#quick-start-server"> Quick Start Server </a>
  | <a href="#quick-start-streaming-server"> Quick Start Streaming Server</a>
  <br/>
  <a href="#documents"> Documents </a>
  | <a href="#model-list"> Models List </a>
</h3>
</div>



**PaddleSpeech** is an open-source toolkit on the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) platform for a variety of critical tasks in speech and audio, with state-of-the-art and influential models.

##### Speech Recognition

<div align = "center">
<table style="width:100%">
  <thead>
    <tr>
      <th> Input Audio  </th>
      <th width="550"> Recognition Result  </th>
    </tr>
  </thead>
  <tbody>
   <tr>
      <td align = "center">
      <a href="https://paddlespeech.bj.bcebos.com/PaddleAudio/en.wav" rel="nofollow">
            <img align="center" src="./docs/images/audio_icon.png" width="200 style="max-width: 100%;"></a><br>
      </td>
      <td >I knocked at the door on the ancient side of the building.</td>
    </tr>
    <tr>
      <td align = "center">
      <a href="https://paddlespeech.bj.bcebos.com/PaddleAudio/zh.wav" rel="nofollow">
            <img align="center" src="./docs/images/audio_icon.png" width="200" style="max-width: 100%;"></a><br>
      </td>
      <td>我认为跑步最重要的就是给我带来了身体健康。</td>
    </tr>
  </tbody>
</table>

</div>

##### Speech Translation (English to Chinese)

<div align = "center">
<table style="width:100%">
  <thead>
    <tr>
      <th> Input Audio  </th>
      <th width="550"> Translations Result  </th>
    </tr>
  </thead>
  <tbody>
   <tr>
      <td align = "center">
      <a href="https://paddlespeech.bj.bcebos.com/PaddleAudio/en.wav" rel="nofollow">
            <img align="center" src="./docs/images/audio_icon.png" width="200 style="max-width: 100%;"></a><br>
      </td>
      <td >我 在 这栋 建筑 的 古老 门上 敲门。</td>
    </tr>
  </tbody>
</table>

</div>

##### Text-to-Speech
<div align = "center">
<table style="width:100%">
  <thead>
    <tr>
      <th width="550" > Input Text</th>
      <th>Synthetic Audio</th>
    </tr>
  </thead>
  <tbody>
   <tr>
      <td >Life was like a box of chocolates, you never know what you're gonna get.</td>
      <td align = "center">
      <a href="https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/tacotron2_ljspeech_waveflow_samples_0.2/sentence_1.wav" rel="nofollow">
            <img align="center" src="./docs/images/audio_icon.png" width="200" style="max-width: 100%;"></a><br>
      </td>
    </tr>
    <tr>
      <td >早上好,今天是2020/10/29,最低温度是-3°C。</td>
      <td align = "center">
      <a href="https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/parakeet_espnet_fs2_pwg_demo/tn_g2p/parakeet/001.wav" rel="nofollow">
            <img align="center" src="./docs/images/audio_icon.png" width="200" style="max-width: 100%;"></a><br>
      </td>
    </tr>
    <tr>
      <td >季姬寂,集鸡,鸡即棘鸡。棘鸡饥叽,季姬及箕稷济鸡。鸡既济,跻姬笈,季姬忌,急咭鸡,鸡急,继圾几,季姬急,即籍箕击鸡,箕疾击几伎,伎即齑,鸡叽集几基,季姬急极屐击鸡,鸡既殛,季姬激,即记《季姬击鸡记》。</td>
      <td align = "center">
      <a href="https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/jijiji.wav" rel="nofollow">
            <img align="center" src="./docs/images/audio_icon.png" width="200" style="max-width: 100%;"></a><br>
      </td>
    </tr>
  </tbody>
</table>

</div>

For more synthesized audios, please refer to [PaddleSpeech Text-to-Speech samples](https://paddlespeech.readthedocs.io/en/latest/tts/demo.html).

##### Punctuation Restoration
<div align = "center">
<table style="width:100%">
  <thead>
    <tr>
      <th width="390"> Input Text </th>
      <th width="390"> Output Text </th>
    </tr>
  </thead>
  <tbody>
   <tr>
      <td>今天的天气真不错啊你下午有空吗我想约你一起去吃饭</td>
      <td>今天的天气真不错啊!你下午有空吗?我想约你一起去吃饭。</td>
    </tr>
  </tbody>
</table>

</div>


### Features

With an easy-to-use, efficient, flexible, and scalable implementation, our vision is to empower both industrial applications and academic research, covering training, inference & testing modules, and the deployment process. More specifically, this toolkit features:
- 📦  **Ease of Use**: low barriers to installation, and a [CLI](#quick-start) is available to quick-start your journey.
- 🏆  **Align to the State-of-the-Art**: we provide high-speed and ultra-lightweight models, as well as cutting-edge technology. 
- 💯  **Rule-based Chinese frontend**: our frontend contains Text Normalization and Grapheme-to-Phoneme (G2P, including Polyphone and Tone Sandhi). Moreover, we use self-defined linguistic rules to adapt to the Chinese context.
- **Varieties of Functions that Vitalize both Industry and Academia**:
  - 🛎️  *Implementation of critical audio tasks*: this toolkit contains audio functions like Audio Classification, Speech Translation, Automatic Speech Recognition, Text-to-Speech Synthesis, etc.
  - 🔬  *Integration of mainstream models and datasets*: the toolkit implements modules that participate in the whole pipeline of speech tasks, and uses mainstream datasets like LibriSpeech, LJSpeech, AIShell, CSMSC, etc. See also the [model list](#model-list) for more details.
  - 🧩  *Cascaded model applications*: as an extension of the typical traditional audio tasks, we combine the workflows of the aforementioned tasks with other fields like Natural Language Processing (NLP) and Computer Vision (CV).

### 🔥 Hot Activities

<!---
2021.12.14: We would like to have an online courses to introduce basics and research of speech, as well as code practice with `paddlespeech`. Please pay attention to our [Calendar](https://www.paddlepaddle.org.cn/live).
--->

- 2021.12.21~12.24

  4-Day Live Courses: an in-depth interpretation of PaddleSpeech!

  **Course videos and related materials: https://aistudio.baidu.com/aistudio/education/group/info/25130**


### Recent Update

- 👏🏻  2022.04.28: PaddleSpeech Streaming Server is available for Automatic Speech Recognition and Text-to-Speech.
- 👏🏻  2022.03.28: PaddleSpeech Server is available for Audio Classification, Automatic Speech Recognition and Text-to-Speech.
- 👏🏻  2022.03.28: PaddleSpeech CLI is available for Speaker Verification.
- 🤗  2021.12.14: Our PaddleSpeech [ASR](https://huggingface.co/spaces/KPatrick/PaddleSpeechASR) and [TTS](https://huggingface.co/spaces/KPatrick/PaddleSpeechTTS) Demos on Hugging Face Spaces are available!
- 👏🏻  2021.12.10: PaddleSpeech CLI is available for Audio Classification, Automatic Speech Recognition, Speech Translation (English to Chinese) and Text-to-Speech.

### Community
- Scan the QR code below with WeChat (reply 【语音】 after your friend request is approved) to join the official technical exchange group. We look forward to your participation.

<div align="center">
<img src="https://raw.githubusercontent.com/yt605155624/lanceTest/main/images/wechat_4.jpg"  width = "300"  />
</div>

## Installation

We strongly recommend that users install PaddleSpeech on **Linux** with *Python >= 3.7*.
At present, **Linux** supports the CLI for all of our tasks, while **Mac OSX** and **Windows** only support the PaddleSpeech CLI for Audio Classification, Speech-to-Text, and Text-to-Speech. To install `PaddleSpeech`, please see [installation](./docs/source/install.md); a minimal pip-based sketch is shown below.
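For the common case, the following is a minimal pip-based sketch (assuming Python >= 3.7 and pip are already set up; version pins and extra system dependencies are covered in the installation doc):

```shell
# Minimal install sketch; see ./docs/source/install.md for the authoritative steps.
pip install --upgrade pip

# Install the PaddlePaddle framework first (CPU build shown here).
pip install paddlepaddle

# Then install PaddleSpeech itself from PyPI.
pip install paddlespeech
```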


<a name="quickstart"></a>
## Quick Start

Developers can try out our models with the [PaddleSpeech Command Line](./paddlespeech/cli/README.md). Change `--input` to test your own audio or text.

**Audio Classification**     
```shell
paddlespeech cls --input input.wav
```

**Speaker Verification**
```shell
paddlespeech vector --task spk --input input_16k.wav
```

**Automatic Speech Recognition**
```shell
paddlespeech asr --lang zh --input input_16k.wav
```
- A web demo for Automatic Speech Recognition is integrated into [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See the demo: [ASR Demo](https://huggingface.co/spaces/KPatrick/PaddleSpeechASR)

**Speech Translation** (English to Chinese)
(not supported on Mac and Windows yet)
```shell
paddlespeech st --input input_16k.wav
```

**Text-to-Speech** 
```shell
paddlespeech tts --input "你好,欢迎使用飞桨深度学习框架!" --output output.wav
```
- A web demo for Text-to-Speech is integrated into [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See the demo: [TTS Demo](https://huggingface.co/spaces/KPatrick/PaddleSpeechTTS)

**Text Postprocessing** 
- Punctuation Restoration
  ```bash
  paddlespeech text --task punc --input 今天的天气真不错啊你下午有空吗我想约你一起去吃饭
  ```

**Batch Process**
```shell
echo -e "1 欢迎光临。\n2 谢谢惠顾。" | paddlespeech tts
```
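The same batch interface can read from a file. A small illustrative variant (`sentences.txt` is a hypothetical file whose lines follow the same `id sentence` layout as the `echo` example above):

```shell
# Hypothetical input file: each line is "<utterance-id> <sentence>",
# mirroring the echo example above.
cat sentences.txt | paddlespeech tts
```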

**Shell Pipeline**   
- ASR + Punctuation Restoration
```shell
paddlespeech asr --input ./zh.wav | paddlespeech text --task punc
```

For more command lines, please see: [demos](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/demos)

If you want to try more functions like training and tuning, please have a look at [Speech-to-Text Quick Start](./docs/source/asr/quick_start.md) and [Text-to-Speech Quick Start](./docs/source/tts/quick_start.md).
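Besides the CLI, the same tasks are also exposed as executor classes in Python. Below is a minimal sketch, assuming the executor-style API described in the [CLI README](./paddlespeech/cli/README.md); exact parameter names (e.g. `audio_file`, `output`) may differ slightly between releases:

```python
from paddlespeech.cli.asr.infer import ASRExecutor
from paddlespeech.cli.tts.infer import TTSExecutor

# Speech recognition: transcribe a 16 kHz wav file (Mandarin model by default).
asr = ASRExecutor()
text = asr(audio_file="input_16k.wav", lang="zh")
print(text)

# Text-to-speech: synthesize a sentence and write the waveform to output.wav.
tts = TTSExecutor()
tts(text="你好,欢迎使用飞桨深度学习框架!", output="output.wav")
```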


<a name="quickstartserver"></a>
## Quick Start Server

Developers can try out our speech server with the [PaddleSpeech Server Command Line](./paddlespeech/server/README.md).

**Start server**     

```shell
paddlespeech_server start --config_file ./paddlespeech/server/conf/application.yaml
```

**Access Speech Recognition Services**     

```shell
paddlespeech_client asr --server_ip 127.0.0.1 --port 8090 --input input_16k.wav
```

**Access Text to Speech Services**     

```shell
paddlespeech_client tts --server_ip 127.0.0.1 --port 8090 --input "您好,欢迎使用百度飞桨语音合成服务。" --output output.wav
```

**Access Audio Classification Services**     
```shell
paddlespeech_client cls --server_ip 127.0.0.1 --port 8090 --input input.wav
```


For more information about server command lines, please see: [speech server demos](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/demos/speech_server)


<a name="quickstartstreamingserver"></a>
## Quick Start Streaming Server

Developers can try out the [streaming ASR](./demos/streaming_asr_server/README.md) and [streaming TTS](./demos/streaming_tts_server/README.md) servers.

**Start Streaming Speech Recognition Server**

```shell
paddlespeech_server start --config_file ./demos/streaming_asr_server/conf/application.yaml
```

**Access Streaming Speech Recognition Services**     

```shell
paddlespeech_client asr_online --server_ip 127.0.0.1 --port 8090 --input input_16k.wav
```

**Start Streaming Text to Speech Server**

```shell
paddlespeech_server start --config_file ./demos/streaming_tts_server/conf/tts_online_application.yaml
```

**Access Streaming Text to Speech Services**     

```shell
paddlespeech_client tts_online --server_ip 127.0.0.1 --port 8092 --protocol http --input "您好,欢迎使用百度飞桨语音合成服务。" --output output.wav
```

For more information, please see: [streaming ASR](./demos/streaming_asr_server/README.md) and [streaming TTS](./demos/streaming_tts_server/README.md).

<a name="ModelList"></a>

## Model List

PaddleSpeech supports a series of the most popular models. They are summarized in [released models](./docs/source/released_model.md) together with the available pretrained models.

<a name="SpeechToText"></a>

**Speech-to-Text** contains *Acoustic Model*, *Language Model*, and *Speech Translation*, with the following details:

<table style="width:100%">
  <thead>
    <tr>
      <th>Speech-to-Text Module Type</th>
      <th>Dataset</th>
      <th>Model Type</th>
      <th>Link</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="4">Speech Recogination</td>
      <td rowspan="2" >Aishell</td>
      <td >DeepSpeech2 RNN + Conv based Models</td>
      <td>
      <a href = "./examples/aishell/asr0">deepspeech2-aishell</a>
      </td>
    </tr>
    <tr>
      <td>Transformer based Attention Models </td>
      <td>
      <a href = "./examples/aishell/asr1">u2.transformer.conformer-aishell</a>
      </td>
    </tr>
    <tr>
M
      <td>Transformer based Attention Models </td>
      <td>
H
M
      </td>
小湉湉's avatar
小湉湉 已提交
354 355 356 357 358 359 360
  <tr>
      <td>TIMIT</td>
      <td>Unified Streaming & Non-streaming Two-pass</td>
      <td>
    <a href = "./examples/timit/asr1"> u2-timit</a>
      </td>
  </tr>
  <tr>
  <td>Alignment</td>
  <td>THCHS30</td>
  <td>MFA</td>
  <td>
  <a href = ".examples/thchs30/align0">mfa-thchs30</a>
  </td>
  </tr>
   <tr>
      <td rowspan="1">Language Model</td>
      <td colspan = "2">Ngram Language Model</td>
      <td>
      <a href = "./examples/other/ngram_lm">kenlm</a>
      </td>
    </tr>
  <tr>
      <td rowspan="2">Speech Translation (English to Chinese)</td> 
      <td rowspan="2">TED En-Zh</td>
      <td>Transformer + ASR MTL</td>
      <td>
      <a href = "./examples/ted_en_zh/st0">transformer-ted</a>
      </td>
  </tr>
  <tr>
      <td>FAT + Transformer + ASR MTL</td>
      <td>
      <a href = "./examples/ted_en_zh/st1">fat-st-ted</a>
      </td>
  </tr>
  </tbody>
</table>

<a name="TextToSpeech"></a>

**Text-to-Speech** in PaddleSpeech mainly contains three modules: *Text Frontend*, *Acoustic Model* and *Vocoder*. Acoustic Model and Vocoder models are listed as follows:

<table>
  <thead>
    <tr>
      <th> Text-to-Speech Module Type </th>
      <th> Model Type </th>
      <th> Dataset </th>
      <th> Link </th>
    </tr>
  </thead>
  <tbody>
    <tr>
    <td> Text Frontend </td>
    <td colspan="2"> &emsp; </td>
    <td>
    <a href = "./examples/other/tn">tn</a> / <a href = "./examples/other/g2p">g2p</a>
    </td>
    </tr>
    <tr>
      <td rowspan="4">Acoustic Model</td>
      <td>Tacotron2</td>
      <td>LJSpeech / CSMSC</td>
      <td>
      <a href = "./examples/ljspeech/tts0">tacotron2-ljspeech</a> / <a href = "./examples/csmsc/tts0">tacotron2-csmsc</a>
      </td>
    </tr>
    <tr>
      <td>Transformer TTS</td>
      <td>LJSpeech</td>
      <td>
      <a href = "./examples/ljspeech/tts1">transformer-ljspeech</a>
      </td>
    </tr>
    <tr>
      <td>SpeedySpeech</td>
      <td>CSMSC</td>
      <td >
      <a href = "./examples/csmsc/tts2">speedyspeech-csmsc</a>
      </td>
    </tr>
    <tr>
      <td>FastSpeech2</td>
      <td>LJSpeech / VCTK / CSMSC / AISHELL-3</td>
      <td>
      <a href = "./examples/ljspeech/tts3">fastspeech2-ljspeech</a> / <a href = "./examples/vctk/tts3">fastspeech2-vctk</a> / <a href = "./examples/csmsc/tts3">fastspeech2-csmsc</a> / <a href = "./examples/aishell3/tts3">fastspeech2-aishell3</a>
      </td>
    </tr>
   <tr>
      <td rowspan="6">Vocoder</td>
      <td >WaveFlow</td>
      <td >LJSpeech</td>
      <td>
      <a href = "./examples/ljspeech/voc0">waveflow-ljspeech</a>
      </td>
    </tr>
    <tr>
      <td >Parallel WaveGAN</td>
      <td >LJSpeech / VCTK / CSMSC / AISHELL-3</td>
      <td>
      <a href = "./examples/ljspeech/voc1">PWGAN-ljspeech</a> / <a href = "./examples/vctk/voc1">PWGAN-vctk</a> / <a href = "./examples/csmsc/voc1">PWGAN-csmsc</a> /  <a href = "./examples/aishell3/voc1">PWGAN-aishell3</a>
      </td>
    </tr>
    <tr>
      <td >Multi Band MelGAN</td>
      <td >CSMSC</td>
      <td>
      <a href = "./examples/csmsc/voc3">Multi Band MelGAN-csmsc</a> 
      </td>
    </tr> 
    <tr>
      <td >Style MelGAN</td>
      <td >CSMSC</td>
      <td>
      <a href = "./examples/csmsc/voc4">Style MelGAN-csmsc</a> 
      </td>
    </tr>
    <tr>
      <td >HiFiGAN</td>
      <td >LJSpeech / VCTK / CSMSC / AISHELL-3</td>
      <td>
      <a href = "./examples/ljspeech/voc5">HiFiGAN-ljspeech</a> / <a href = "./examples/vctk/voc5">HiFiGAN-vctk</a> / <a href = "./examples/csmsc/voc5">HiFiGAN-csmsc</a> / <a href = "./examples/aishell3/voc5">HiFiGAN-aishell3</a>
      </td>
    </tr>
    <tr>
      <td >WaveRNN</td>
      <td >CSMSC</td>
      <td>
      <a href = "./examples/csmsc/voc6">WaveRNN-csmsc</a>
      </td>
    </tr>
    <tr>
      <td rowspan="3">Voice Cloning</td>
      <td>GE2E</td>
      <td >Librispeech, etc.</td>
      <td>
      <a href = "./examples/other/ge2e">ge2e</a>
      </td>
    </tr>
    <tr>
      <td>GE2E + Tacotron2</td>
      <td>AISHELL-3</td>
      <td>
      <a href = "./examples/aishell3/vc0">ge2e-tacotron2-aishell3</a>
      </td>
    </tr>
    <tr>
      <td>GE2E + FastSpeech2</td>
      <td>AISHELL-3</td>
      <td>
      <a href = "./examples/aishell3/vc1">ge2e-fastspeech2-aishell3</a>
      </td>
    </tr>
  </tbody>
</table>

<a name="AudioClassification"></a>

**Audio Classification**

<table style="width:100%">
  <thead>
    <tr>
      <th> Task </th>
      <th> Dataset </th>
      <th> Model Type </th>
      <th> Link </th>
    </tr>
  </thead>
  <tbody>
  <tr>
      <td>Audio Classification</td>
      <td>ESC-50</td>
      <td>PANN</td>
      <td>
      <a href = "./examples/esc50/cls0">pann-esc50</a>
      </td>
    </tr>
  </tbody>
</table>

<a name="SpeakerVerification"></a>

**Speaker Verification**

<table style="width:100%">
  <thead>
    <tr>
      <th> Task </th>
      <th> Dataset </th>
      <th> Model Type </th>
      <th> Link </th>
    </tr>
  </thead>
  <tbody>
  <tr>
      <td>Speaker Verification</td>
      <td>VoxCeleb12</td>
      <td>ECAPA-TDNN</td>
      <td>
      <a href = "./examples/voxceleb/sv0">ecapa-tdnn-voxceleb12</a>
      </td>
    </tr>
  </tbody>
</table>

<a name="PunctuationRestoration"></a>

**Punctuation Restoration**

<table style="width:100%">
  <thead>
    <tr>
      <th> Task </th>
      <th> Dataset </th>
      <th> Model Type </th>
      <th> Link </th>
    </tr>
  </thead>
  <tbody>
  <tr>
      <td>Punctuation Restoration</td>
      <td>IWLST2012_zh</td>
      <td>Ernie Linear</td>
      <td>
      <a href = "./examples/iwslt2012/punc0">iwslt2012-punc0</a>
      </td>
    </tr>
  </tbody>
</table>

## Documents

Normally, [Speech SoTA](https://paperswithcode.com/area/speech), [Audio SoTA](https://paperswithcode.com/area/audio) and [Music SoTA](https://paperswithcode.com/area/music) give you an overview of the hot academic topics in the related areas. To focus on the tasks in PaddleSpeech, you will find the following guidelines helpful for grasping the core ideas.

- [Installation](./docs/source/install.md)
- [Quick Start](#quickstart)
- [Some Demos](./demos/README.md)
- Tutorials
  - [Automatic Speech Recognition](./docs/source/asr/quick_start.md)
    - [Introduction](./docs/source/asr/models_introduction.md)
    - [Data Preparation](./docs/source/asr/data_preparation.md)
    - [Ngram LM](./docs/source/asr/ngram_lm.md)
  - [Text-to-Speech](./docs/source/tts/quick_start.md)
    - [Introduction](./docs/source/tts/models_introduction.md)
    - [Advanced Usage](./docs/source/tts/advanced_usage.md)
    - [Chinese Rule Based Text Frontend](./docs/source/tts/zh_text_frontend.md)
    - [Test Audio Samples](https://paddlespeech.readthedocs.io/en/latest/tts/demo.html)
  - Speaker Verification
    - [Audio Searching](./demos/audio_searching/README.md)
    - [Speaker Verification](./demos/speaker_verification/README.md)
  - [Audio Classification](./demos/audio_tagging/README.md)
  - [Speech Translation](./demos/speech_translation/README.md)
  - [Speech Server](./demos/speech_server/README.md)
- [Released Models](./docs/source/released_model.md)
  - [Speech-to-Text](#SpeechToText)
  - [Text-to-Speech](#TextToSpeech)
  - [Audio Classification](#AudioClassification)
  - [Speaker Verification](#SpeakerVerification)
  - [Punctuation Restoration](#PunctuationRestoration)
- [Community](#Community)
- [Welcome to contribute](#contribution)
- [License](#License)

The Text-to-Speech module was originally called [Parakeet](https://github.com/PaddlePaddle/Parakeet) and has now been merged into this repository. If you are interested in academic research on this task, please see the [TTS research overview](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/docs/source/tts#overview). Also, [this document](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/tts/models_introduction.md) is a good guide to the pipeline components.


## ⭐ Examples
- **[PaddleBoBo](https://github.com/JiehangXie/PaddleBoBo): Use PaddleSpeech TTS to generate a virtual human voice.**
  
<div align="center"><a href="https://www.bilibili.com/video/BV1cL411V71o?share_source=copy_web"><img src="https://ai-studio-static-online.cdn.bcebos.com/06fd746ab32042f398fb6f33f873e6869e846fe63c214596ae37860fe8103720" / width="500px"></a></div>

- [PaddleSpeech Demo Video](https://paddlespeech.readthedocs.io/en/latest/demo_video.html)

- **[VTuberTalk](https://github.com/jerryuhoo/VTuberTalk): Use PaddleSpeech TTS and ASR to clone voices from videos.**

<div align="center">
<img src="https://raw.githubusercontent.com/jerryuhoo/VTuberTalk/main/gui/gui.png"  width = "500px"  />
</div>


## Citation

To cite PaddleSpeech for research, please use the following format.
```tex
@inproceedings{zhang2022paddlespeech,
    title = {PaddleSpeech: An Easy-to-Use All-in-One Speech Toolkit},
    author = {Hui Zhang and Tian Yuan and Junkun Chen and Xintong Li and Renjie Zheng and Yuxin Huang and Xiaojie Chen and Enlei Gong and Zeyu Chen and Xiaoguang Hu and Dianhai Yu and Yanjun Ma and Liang Huang},
    booktitle = {Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations},
    year = {2022},
    publisher = {Association for Computational Linguistics},
}

@inproceedings{zheng2021fused,
  title={Fused acoustic and text encoding for multimodal bilingual pretraining and speech translation},
  author={Zheng, Renjie and Chen, Junkun and Ma, Mingbo and Huang, Liang},
  booktitle={International Conference on Machine Learning},
  pages={12736--12746},
  year={2021},
  organization={PMLR}
}
```

<a name="contribution"></a>
## Contribute to PaddleSpeech

You are warmly welcomed to submit questions in [discussions](https://github.com/PaddlePaddle/PaddleSpeech/discussions) and bug reports in [issues](https://github.com/PaddlePaddle/PaddleSpeech/issues)! We also highly appreciate your willingness to contribute to this project!

### Contributors
<p align="center">
<a href="https://github.com/zh794390558"><img src="https://avatars.githubusercontent.com/u/3038472?v=4" width=75 height=75></a>
<a href="https://github.com/Jackwaterveg"><img src="https://avatars.githubusercontent.com/u/87408988?v=4" width=75 height=75></a>
<a href="https://github.com/yt605155624"><img src="https://avatars.githubusercontent.com/u/24568452?v=4" width=75 height=75></a>
<a href="https://github.com/kuke"><img src="https://avatars.githubusercontent.com/u/3064195?v=4" width=75 height=75></a>
<a href="https://github.com/xinghai-sun"><img src="https://avatars.githubusercontent.com/u/7038341?v=4" width=75 height=75></a>
<a href="https://github.com/pkuyym"><img src="https://avatars.githubusercontent.com/u/5782283?v=4" width=75 height=75></a>
<a href="https://github.com/KPatr1ck"><img src="https://avatars.githubusercontent.com/u/22954146?v=4" width=75 height=75></a>
<a href="https://github.com/LittleChenCc"><img src="https://avatars.githubusercontent.com/u/10339970?v=4" width=75 height=75></a>
<a href="https://github.com/745165806"><img src="https://avatars.githubusercontent.com/u/20623194?v=4" width=75 height=75></a>
<a href="https://github.com/Mingxue-Xu"><img src="https://avatars.githubusercontent.com/u/92848346?v=4" width=75 height=75></a>
<a href="https://github.com/chrisxu2016"><img src="https://avatars.githubusercontent.com/u/18379485?v=4" width=75 height=75></a>
<a href="https://github.com/lfchener"><img src="https://avatars.githubusercontent.com/u/6771821?v=4" width=75 height=75></a>
<a href="https://github.com/luotao1"><img src="https://avatars.githubusercontent.com/u/6836917?v=4" width=75 height=75></a>
<a href="https://github.com/wanghaoshuang"><img src="https://avatars.githubusercontent.com/u/7534971?v=4" width=75 height=75></a>
<a href="https://github.com/gongel"><img src="https://avatars.githubusercontent.com/u/24390500?v=4" width=75 height=75></a>
<a href="https://github.com/mmglove"><img src="https://avatars.githubusercontent.com/u/38800877?v=4" width=75 height=75></a>
<a href="https://github.com/iclementine"><img src="https://avatars.githubusercontent.com/u/16222986?v=4" width=75 height=75></a>
<a href="https://github.com/ZeyuChen"><img src="https://avatars.githubusercontent.com/u/1371212?v=4" width=75 height=75></a>
<a href="https://github.com/AK391"><img src="https://avatars.githubusercontent.com/u/81195143?v=4" width=75 height=75></a>
<a href="https://github.com/qingqing01"><img src="https://avatars.githubusercontent.com/u/7845005?v=4" width=75 height=75></a>
<a href="https://github.com/ericxk"><img src="https://avatars.githubusercontent.com/u/4719594?v=4" width=75 height=75></a>
<a href="https://github.com/kvinwang"><img src="https://avatars.githubusercontent.com/u/6442159?v=4" width=75 height=75></a>
<a href="https://github.com/jiqiren11"><img src="https://avatars.githubusercontent.com/u/82639260?v=4" width=75 height=75></a>
<a href="https://github.com/AshishKarel"><img src="https://avatars.githubusercontent.com/u/58069375?v=4" width=75 height=75></a>
<a href="https://github.com/chesterkuo"><img src="https://avatars.githubusercontent.com/u/6285069?v=4" width=75 height=75></a>
<a href="https://github.com/tensor-tang"><img src="https://avatars.githubusercontent.com/u/21351065?v=4" width=75 height=75></a>
<a href="https://github.com/hysunflower"><img src="https://avatars.githubusercontent.com/u/52739577?v=4" width=75 height=75></a>  
<a href="https://github.com/wwhu"><img src="https://avatars.githubusercontent.com/u/6081200?v=4" width=75 height=75></a>
<a href="https://github.com/lispc"><img src="https://avatars.githubusercontent.com/u/2833376?v=4" width=75 height=75></a>
<a href="https://github.com/jerryuhoo"><img src="https://avatars.githubusercontent.com/u/24245709?v=4" width=75 height=75></a>
<a href="https://github.com/harisankarh"><img src="https://avatars.githubusercontent.com/u/1307053?v=4" width=75 height=75></a>
<a href="https://github.com/Jackiexiao"><img src="https://avatars.githubusercontent.com/u/18050469?v=4" width=75 height=75></a>
<a href="https://github.com/limpidezza"><img src="https://avatars.githubusercontent.com/u/71760778?v=4" width=75 height=75></a>
</p>

## Acknowledgement

- Many thanks to [yeyupiaoling](https://github.com/yeyupiaoling)/[PPASR](https://github.com/yeyupiaoling/PPASR)/[PaddlePaddle-DeepSpeech](https://github.com/yeyupiaoling/PaddlePaddle-DeepSpeech)/[VoiceprintRecognition-PaddlePaddle](https://github.com/yeyupiaoling/VoiceprintRecognition-PaddlePaddle)/[AudioClassification-PaddlePaddle](https://github.com/yeyupiaoling/AudioClassification-PaddlePaddle) for years of attention, constructive advice and great help.
- Many thanks to [mymagicpower](https://github.com/mymagicpower) for the Java implementation of ASR for [short](https://github.com/mymagicpower/AIAS/tree/main/3_audio_sdks/asr_sdk) and [long](https://github.com/mymagicpower/AIAS/tree/main/3_audio_sdks/asr_long_audio_sdk) audio files.
- Many thanks to [JiehangXie](https://github.com/JiehangXie)/[PaddleBoBo](https://github.com/JiehangXie/PaddleBoBo) for developing a Virtual Uploader (VUP)/Virtual YouTuber (VTuber) with the PaddleSpeech TTS function.
- Many thanks to [745165806](https://github.com/745165806)/[PaddleSpeechTask](https://github.com/745165806/PaddleSpeechTask) for contributing the Punctuation Restoration model.
- Many thanks to [kslz](https://github.com/745165806) for the supplementary Chinese documentation.
- Many thanks to [awmmmm](https://github.com/awmmmm) for contributing the fastspeech2 aishell3 conformer pretrained model.
- Many thanks to [phecda-xu](https://github.com/phecda-xu)/[PaddleDubbing](https://github.com/phecda-xu/PaddleDubbing) for developing a dubbing tool with a GUI based on the PaddleSpeech TTS model.
- Many thanks to [jerryuhoo](https://github.com/jerryuhoo)/[VTuberTalk](https://github.com/jerryuhoo/VTuberTalk) for developing a GUI tool based on PaddleSpeech TTS and code for making datasets from videos based on PaddleSpeech ASR.

In addition, PaddleSpeech depends on many open source repositories. See [references](./docs/source/reference.md) for more information.

<a name="License"></a>
## License

PaddleSpeech is provided under the [Apache-2.0 License](./LICENSE).