Commit 11317999, authored by Hui Zhang

Merge branch 'develop' into audio_dev_merge

---
name: "\U0001F41B S2T Bug Report"
about: Create a report to help us improve
title: "[S2T]XXXX"
labels: Bug, S2T
assignees: zh794390558
---
...@@ -27,7 +27,7 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu]
- GCC/G++ Version [e.g. 8.3]
- Python Version [e.g. 3.7]
......
---
name: "\U0001F41B TTS Bug Report"
about: Create a report to help us improve
title: "[TTS]XXXX"
labels: Bug, T2S
assignees: yt605155624
---
For support and discussions, please use our [Discourse forums](https://github.com/PaddlePaddle/DeepSpeech/discussions).
If you've found a bug then please create an issue with the following information:
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu]
- GCC/G++ Version [e.g. 8.3]
- Python Version [e.g. 3.7]
- PaddlePaddle Version [e.g. 2.0.0]
- Model Version [e.g. 2.0.0]
- GPU/DRIVER Information [e.g. Tesla V100-SXM2-32GB/440.64.00]
- CUDA/CUDNN Version [e.g. cuda-10.2]
- MKL Version
- TensorRT Version
**Additional context**
Add any other context about the problem here.
---
name: "\U0001F680 Feature Request"
about: As a user, I want to request a New Feature on the product.
title: ''
labels: feature request
assignees: D-DanielYang, iftaken
---
## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the feature you'd like:**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered:**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
---
name: "\U0001F9E9 Others"
about: Report any other non-support related issues.
title: ''
labels: ''
assignees: ''
---
## Others
<!--
你可以在这里提出任何前面几类模板不适用的问题,包括但不限于:优化性建议、框架使用体验反馈、版本兼容性问题、报错信息不清楚等。
You can report any issues that are not applicable to the previous types of templates, including but not limited to: enhancement suggestions, feedback on the use of the framework, version compatibility issues, unclear error information, etc.
-->
---
name: "\U0001F914 Ask a Question"
about: I want to ask a question.
title: ''
labels: Question
assignees: ''
---
## General Question
<!--
Before asking a question, make sure you have:
- Baidu/Google your question.
- Searched open and closed [GitHub issues](https://github.com/PaddlePaddle/PaddleSpeech/issues?q=is%3Aissue)
- Read the documentation:
- [Readme](https://github.com/PaddlePaddle/PaddleSpeech)
- [Doc](https://paddlespeech.readthedocs.io/)
-->
# Changelog
Date: 2022-3-22, Author: yt605155624.
Add features to: CLI:
- Support aishell3_hifigan, vctk_hifigan
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1587
Date: 2022-3-09, Author: yt605155624.
Add features to: T2S:
- Add ljspeech hifigan egs.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1549
Date: 2022-3-08, Author: yt605155624.
Add features to: T2S:
- Add aishell3 hifigan egs.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1545
Date: 2022-3-08, Author: yt605155624.
Add features to: T2S:
- Add vctk hifigan egs.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1544
Date: 2022-1-29, Author: yt605155624.
Add features to: T2S:
- Update aishell3 vc0 with new Tacotron2.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1419
Date: 2022-1-29, Author: yt605155624.
Add features to: T2S:
- Add ljspeech Tacotron2.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1416
Date: 2022-1-24, Author: yt605155624.
Add features to: T2S:
- Add csmsc WaveRNN.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1379
Date: 2022-1-19, Author: yt605155624.
Add features to: T2S:
- Add csmsc Tacotron2.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1314
Date: 2022-1-10, Author: Jackwaterveg.
Add features to: CLI:
- Support English (librispeech/asr1/transformer).
- Support choosing `decode_method` for conformer and transformer models.
- Refactor the config, using the unified config.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1297
***
Date: 2022-1-17, Author: Jackwaterveg.
Add features to: CLI:
- Support deepspeech2 online/offline model(aishell).
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1356
***
Date: 2022-1-24, Author: Jackwaterveg.
Add features to: ctc_decoders:
- Support online ctc prefix-beam search decoder.
- Unified ctc online decoder and ctc offline decoder.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/821
***
([简体中文](./README_cn.md)|English)
<p align="center">
<img src="./docs/images/PaddleSpeech_logo.png" />
...@@ -160,15 +159,20 @@ Via the easy-to-use, efficient, flexible and scalable implementation, our vision
- 🧩 *Cascaded models application*: as an extension of the typical traditional audio tasks, we combine the workflows of the aforementioned tasks with other fields like Natural language processing (NLP) and Computer Vision (CV).
### Recent Update
- ⚡ 2022.08.25: Release TTS [finetune](./examples/other/tts_finetune/tts3) example.
- 🔥 2022.08.22: Add ERNIE-SAT models: [ERNIE-SAT-vctk](./examples/vctk/ernie_sat), [ERNIE-SAT-aishell3](./examples/aishell3/ernie_sat), [ERNIE-SAT-zh_en](./examples/aishell3_vctk/ernie_sat).
- 🔥 2022.08.15: Add [g2pW](https://github.com/GitYCC/g2pW) into the TTS Chinese Text Frontend.
- 🔥 2022.08.09: Release [Chinese English mixed TTS](./examples/zh_en_tts/tts3).
- ⚡ 2022.08.03: Add ONNXRuntime infer for TTS CLI.
- 🎉 2022.07.18: Release VITS: [VITS-csmsc](./examples/csmsc/vits), [VITS-aishell3](./examples/aishell3/vits), [VITS-VC](./examples/aishell3/vits-vc).
- 🎉 2022.06.22: All TTS models support ONNX format.
- 🍀 2022.06.17: Add [PaddleSpeech Web Demo](./demos/speech_web).
- 👑 2022.05.13: Release [PP-ASR](./docs/source/asr/PPASR.md), [PP-TTS](./docs/source/tts/PPTTS.md), [PP-VPR](docs/source/vpr/PPVPR.md).
- 👏🏻 2022.05.06: `PaddleSpeech Streaming Server` is available for `Streaming ASR` (with `Punctuation Restoration` and `Token Timestamp`) and `Text-to-Speech`.
- 👏🏻 2022.05.06: `PaddleSpeech Server` is available for `Audio Classification`, `Automatic Speech Recognition`, `Text-to-Speech`, `Speaker Verification` and `Punctuation Restoration`.
- 👏🏻 2022.03.28: `PaddleSpeech CLI` is available for `Speaker Verification`.
- 🤗 2021.12.14: [ASR](https://huggingface.co/spaces/KPatrick/PaddleSpeechASR) and [TTS](https://huggingface.co/spaces/KPatrick/PaddleSpeechTTS) Demos on Hugging Face Spaces are available!
- 👏🏻 2021.12.10: `PaddleSpeech CLI` is available for `Audio Classification`, `Automatic Speech Recognition`, `Speech Translation (English to Chinese)` and `Text-to-Speech`.
### Community
- Scan the QR code below with your WeChat to join the official technical exchange group, get the bonus (more than 20 GB of learning materials, such as papers, code and videos) and the live links of the lessons. We look forward to your participation.
...@@ -180,62 +184,191 @@ Via the easy-to-use, efficient, flexible and scalable implementation, our vision
## Installation
We strongly recommend installing PaddleSpeech on **Linux** with *python>=3.7* and *paddlepaddle>=2.3.1*.
To install `PaddleSpeech`, please see [installation](./docs/source/install.md).
### **Dependency Introduction**
+ gcc >= 4.8.5
+ paddlepaddle >= 2.3.1
+ python >= 3.7
+ OS support: Linux(recommend), Windows, Mac OSX
PaddleSpeech depends on paddlepaddle. For installation, please refer to the official website of [paddlepaddle](https://www.paddlepaddle.org.cn/en) and choose the version that matches your machine. Here is an example for the CPU version.
```bash
pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
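If you want to be sure the paddlepaddle installation works before installing PaddleSpeech, a quick optional sanity check (assuming the package installed by the command above) is:
```python
# Optional sanity check for the paddlepaddle installation.
# paddle.utils.run_check() reports whether PaddlePaddle is installed
# correctly and which devices (CPU/GPU) it can use.
import paddle

print(paddle.__version__)  # expect a version >= 2.3.1, as recommended above
paddle.utils.run_check()
```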
There are two quick ways to install PaddleSpeech: pip installation and compilation from source (recommended).
### pip install
```shell
pip install pytest-runner
pip install paddlespeech
```
### source code compilation
```shell
git clone https://github.com/PaddlePaddle/PaddleSpeech.git
cd PaddleSpeech
pip install pytest-runner
pip install .
```
For other installation issues, such as the conda environment, librosa dependencies, gcc problems, or kaldi installation, please refer to this [installation document](./docs/source/install.md). If you encounter problems during installation, you can leave a message on [#2150](https://github.com/PaddlePaddle/PaddleSpeech/issues/2150) and look for related issues there.
<a name="quickstart"></a> <a name="quickstart"></a>
## Quick Start ## Quick Start
Developers can have a try of our models with [PaddleSpeech Command Line](./paddlespeech/cli/README.md). Change `--input` to test your own audio/text. Developers can have a try of our models with [PaddleSpeech Command Line](./paddlespeech/cli/README.md) or Python. Change `--input` to test your own audio/text and support 16k wav format audio.
**You can also quickly experience it in AI Studio 👉🏻 [PaddleSpeech API Demo](https://aistudio.baidu.com/aistudio/projectdetail/4353348?sUid=2470186&shared=1&ts=1660876445786)**
Test audio sample download
**Audio Classification**
```shell ```shell
paddlespeech cls --input input.wav wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/zh.wav
wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/en.wav
``` ```
**Speaker Verification** ### Automatic Speech Recognition
<details><summary>&emsp;(Click to expand)Open Source Speech Recognition</summary>
**command line experience**
```shell
paddlespeech asr --lang zh --input zh.wav
``` ```
**Python API experience**
```python
>>> from paddlespeech.cli.asr.infer import ASRExecutor
>>> asr = ASRExecutor()
>>> result = asr(audio_file="zh.wav")
>>> print(result)
我认为跑步最重要的就是给我带来了身体健康
``` ```
</details>
### Text-to-Speech
<details><summary>&emsp;Open Source Speech Synthesis</summary>
Output 24k sample rate wav format audio
**command line experience**
```shell
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!" --output output.wav
```
**Python API experience**
```python
>>> from paddlespeech.cli.tts.infer import TTSExecutor
>>> tts = TTSExecutor()
>>> tts(text="今天天气十分不错。", output="output.wav")
```
- You can also try it on [Huggingface Spaces](https://huggingface.co/spaces): [TTS Demo](https://huggingface.co/spaces/KPatrick/PaddleSpeechTTS)
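To double-check the synthesized file programmatically, a minimal sketch using the third-party `soundfile` package (an assumption of this example, not a PaddleSpeech API) is:
```python
# Inspect the synthesized audio; assumes `pip install soundfile`.
import soundfile as sf

audio, sample_rate = sf.read("output.wav")
print(sample_rate)   # expected to be 24000, matching the 24k output noted above
print(audio.shape)   # number of samples in the generated waveform
```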
</details>
### Audio Classification
<details><summary>&emsp;An open-domain sound classification tool</summary>
Sound classification model based on 527 categories of AudioSet dataset
**command line experience**
```shell
paddlespeech cls --input zh.wav
```
**Python API experience**
```python
>>> from paddlespeech.cli.cls.infer import CLSExecutor
>>> cls = CLSExecutor()
>>> result = cls(audio_file="zh.wav")
>>> print(result)
Speech 0.9027186632156372
```
</details>
### Voiceprint Extraction
<details><summary>&emsp;Industrial-grade voiceprint extraction tool</summary>
**command line experience**
```shell
paddlespeech vector --task spk --input zh.wav
```
**Python API experience**
```python
>>> from paddlespeech.cli.vector import VectorExecutor
>>> vec = VectorExecutor()
>>> result = vec(audio_file="zh.wav")
>>> print(result)  # 187-dimensional vector
[ -0.19083306 9.474295 -14.122263 -2.0916545 0.04848729
4.9295826 1.4780062 0.3733844 10.695862 3.2697146
-4.48199 -0.6617882 -9.170393 -11.1568775 -1.2358263 ...]
```
</details>
### Punctuation Restoration
<details><summary>&emsp;Quick recovery of text punctuation, works with ASR models</summary>
**command line experience**
```shell
paddlespeech text --task punc --input 今天的天气真不错啊你下午有空吗我想约你一起去吃饭
```
**Python API experience**
```python
>>> from paddlespeech.cli.text.infer import TextExecutor
>>> text_punc = TextExecutor()
>>> result = text_punc(text="今天的天气真不错啊你下午有空吗我想约你一起去吃饭")
今天的天气真不错啊,你下午有空吗?我想约你一起去吃饭
```
</details>
### Speech Translation
<details><summary>&emsp;End-to-end English to Chinese Speech Translation Tool</summary>
Uses pre-compiled kaldi-related tools; currently only supported on Ubuntu.
**command line experience**
```shell
paddlespeech st --input en.wav
```
**Python API experience**
```python
>>> from paddlespeech.cli.st.infer import STExecutor
>>> st = STExecutor()
>>> result = st(audio_file="en.wav")
['我 在 这栋 建筑 的 古老 门上 敲门 。']
```
</details>
**Batch Process**
```
echo -e "1 欢迎光临。\n2 谢谢惠顾。" | paddlespeech tts
```
**Shell Pipeline**
- ASR + Punctuation Restoration
```
paddlespeech asr --input ./zh.wav | paddlespeech text --task punc
```
For more command lines, please see: [demos](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/demos)
If you want to try more functions like training and tuning, please have a look at [Speech-to-Text Quick Start](./docs/source/asr/quick_start.md) and [Text-to-Speech Quick Start](./docs/source/tts/quick_start.md).
<a name="quickstartserver"></a> <a name="quickstartserver"></a>
...@@ -243,10 +376,12 @@ If you want to try more functions like training and tuning, please have a look a ...@@ -243,10 +376,12 @@ If you want to try more functions like training and tuning, please have a look a
Developers can have a try of our speech server with [PaddleSpeech Server Command Line](./paddlespeech/server/README.md). Developers can have a try of our speech server with [PaddleSpeech Server Command Line](./paddlespeech/server/README.md).
**You can try it quickly in AI Studio (recommend): [SpeechServer](https://aistudio.baidu.com/aistudio/projectdetail/4354592?sUid=2470186&shared=1&ts=1660877827034)**
**Start server** **Start server**
```shell ```shell
paddlespeech_server start --config_file ./paddlespeech/server/conf/application.yaml paddlespeech_server start --config_file ./demos/speech_server/conf/application.yaml
``` ```
**Access Speech Recognition Services** **Access Speech Recognition Services**
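As a sketch of how a client call to the started server might look (the command and flags below follow the PaddleSpeech server demos; the IP/port are assumptions taken from a default `application.yaml` and may differ in your setup):
```shell
# Hypothetical client call; adjust --server_ip/--port to match your application.yaml.
paddlespeech_client asr --server_ip 127.0.0.1 --port 8090 --input ./zh.wav
```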
...@@ -404,7 +539,7 @@ PaddleSpeech supports a series of most popular models. They are summarized in [r
</td>
</tr>
<tr>
<td rowspan="5">Acoustic Model</td>
<td>Tacotron2</td>
<td>LJSpeech / CSMSC</td>
<td>
...@@ -427,9 +562,16 @@ PaddleSpeech supports a series of most popular models. They are summarized in [r
</tr>
<tr>
<td>FastSpeech2</td>
<td>LJSpeech / VCTK / CSMSC / AISHELL-3 / ZH_EN / finetune</td>
<td>
<a href = "./examples/ljspeech/tts3">fastspeech2-ljspeech</a> / <a href = "./examples/vctk/tts3">fastspeech2-vctk</a> / <a href = "./examples/csmsc/tts3">fastspeech2-csmsc</a> / <a href = "./examples/aishell3/tts3">fastspeech2-aishell3</a> / <a href = "./examples/zh_en_tts/tts3">fastspeech2-zh_en</a> / <a href = "./examples/other/tts_finetune/tts3">fastspeech2-finetune</a>
</td>
</tr>
<tr>
<td>ERNIE-SAT</td>
<td>VCTK / AISHELL-3 / ZH_EN</td>
<td>
<a href = "./examples/vctk/ernie_sat">ERNIE-SAT-vctk</a> / <a href = "./examples/aishell3/ernie_sat">ERNIE-SAT-aishell3</a> / <a href = "./examples/aishell3_vctk/ernie_sat">ERNIE-SAT-zh_en</a>
</td>
</tr>
<tr>
...@@ -462,47 +604,61 @@ PaddleSpeech supports a series of most popular models. They are summarized in [r
</td>
</tr>
<tr>
<td>HiFiGAN</td>
<td>LJSpeech / VCTK / CSMSC / AISHELL-3</td>
<td>
<a href = "./examples/ljspeech/voc5">HiFiGAN-ljspeech</a> / <a href = "./examples/vctk/voc5">HiFiGAN-vctk</a> / <a href = "./examples/csmsc/voc5">HiFiGAN-csmsc</a> / <a href = "./examples/aishell3/voc5">HiFiGAN-aishell3</a>
</td>
</tr>
<tr>
<td>WaveRNN</td>
<td>CSMSC</td>
<td>
<a href = "./examples/csmsc/voc6">WaveRNN-csmsc</a>
</td>
</tr>
<tr>
<td rowspan="5">Voice Cloning</td>
<td>GE2E</td>
<td>Librispeech, etc.</td>
<td>
<a href = "./examples/other/ge2e">GE2E</a>
</td>
</tr>
<tr>
<td>SV2TTS (GE2E + Tacotron2)</td>
<td>AISHELL-3</td>
<td>
<a href = "./examples/aishell3/vc0">VC0</a>
</td>
</tr>
<tr>
<td>SV2TTS (GE2E + FastSpeech2)</td>
<td>AISHELL-3</td>
<td>
<a href = "./examples/aishell3/vc1">VC1</a>
</td>
</tr>
<tr> <tr>
<td>SV2TTS (ECAPA-TDNN + FastSpeech2)</td>
<td>AISHELL-3</td>
<td>
<a href = "./examples/aishell3/vc2">VC2</a>
</td>
</tr>
<tr>
<td>GE2E + VITS</td>
<td>AISHELL-3</td>
<td>
<a href = "./examples/aishell3/vits-vc">VITS-VC</a>
</td>
</tr>
<tr>
<td rowspan="3">End-to-End</td> <td rowspan="3">End-to-End</td>
<td>VITS</td> <td>VITS</td>
<td >CSMSC</td> <td>CSMSC / AISHELL-3</td>
<td> <td>
<a href = "./examples/csmsc/vits">VITS-csmsc</a> <a href = "./examples/csmsc/vits">VITS-csmsc</a> / <a href = "./examples/aishell3/vits">VITS-aishell3</a>
</td> </td>
</tr> </tr>
</tbody> </tbody>
...@@ -662,44 +818,79 @@ You are warmly welcome to submit questions in [discussions](https://github.com/P
### Contributors
<p align="center">
<a href="https://github.com/zh794390558"><img src="https://avatars.githubusercontent.com/u/3038472?v=4" width=75 height=75></a> <a href="https://github.com/zh794390558"><img src="https://avatars.githubusercontent.com/u/3038472?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Jackwaterveg"><img src="https://avatars.githubusercontent.com/u/87408988?v=4" width=75 height=75></a> <a href="https://github.com/Jackwaterveg"><img src="https://avatars.githubusercontent.com/u/87408988?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/yt605155624"><img src="https://avatars.githubusercontent.com/u/24568452?v=4" width=75 height=75></a> <a href="https://github.com/yt605155624"><img src="https://avatars.githubusercontent.com/u/24568452?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/kuke"><img src="https://avatars.githubusercontent.com/u/3064195?v=4" width=75 height=75></a> <a href="https://github.com/Honei"><img src="https://avatars.githubusercontent.com/u/11361692?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/xinghai-sun"><img src="https://avatars.githubusercontent.com/u/7038341?v=4" width=75 height=75></a> <a href="https://github.com/KPatr1ck"><img src="https://avatars.githubusercontent.com/u/22954146?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/pkuyym"><img src="https://avatars.githubusercontent.com/u/5782283?v=4" width=75 height=75></a> <a href="https://github.com/kuke"><img src="https://avatars.githubusercontent.com/u/3064195?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/KPatr1ck"><img src="https://avatars.githubusercontent.com/u/22954146?v=4" width=75 height=75></a> <a href="https://github.com/lym0302"><img src="https://avatars.githubusercontent.com/u/34430015?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/LittleChenCc"><img src="https://avatars.githubusercontent.com/u/10339970?v=4" width=75 height=75></a> <a href="https://github.com/SmileGoat"><img src="https://avatars.githubusercontent.com/u/56786796?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/745165806"><img src="https://avatars.githubusercontent.com/u/20623194?v=4" width=75 height=75></a> <a href="https://github.com/xinghai-sun"><img src="https://avatars.githubusercontent.com/u/7038341?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Mingxue-Xu"><img src="https://avatars.githubusercontent.com/u/92848346?v=4" width=75 height=75></a> <a href="https://github.com/pkuyym"><img src="https://avatars.githubusercontent.com/u/5782283?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/chrisxu2016"><img src="https://avatars.githubusercontent.com/u/18379485?v=4" width=75 height=75></a> <a href="https://github.com/LittleChenCc"><img src="https://avatars.githubusercontent.com/u/10339970?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/lfchener"><img src="https://avatars.githubusercontent.com/u/6771821?v=4" width=75 height=75></a> <a href="https://github.com/qingen"><img src="https://avatars.githubusercontent.com/u/3139179?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/luotao1"><img src="https://avatars.githubusercontent.com/u/6836917?v=4" width=75 height=75></a> <a href="https://github.com/D-DanielYang"><img src="https://avatars.githubusercontent.com/u/23690325?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/wanghaoshuang"><img src="https://avatars.githubusercontent.com/u/7534971?v=4" width=75 height=75></a> <a href="https://github.com/Mingxue-Xu"><img src="https://avatars.githubusercontent.com/u/92848346?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/gongel"><img src="https://avatars.githubusercontent.com/u/24390500?v=4" width=75 height=75></a> <a href="https://github.com/745165806"><img src="https://avatars.githubusercontent.com/u/20623194?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/mmglove"><img src="https://avatars.githubusercontent.com/u/38800877?v=4" width=75 height=75></a> <a href="https://github.com/jerryuhoo"><img src="https://avatars.githubusercontent.com/u/24245709?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/iclementine"><img src="https://avatars.githubusercontent.com/u/16222986?v=4" width=75 height=75></a> <a href="https://github.com/WilliamZhang06"><img src="https://avatars.githubusercontent.com/u/97937340?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ZeyuChen"><img src="https://avatars.githubusercontent.com/u/1371212?v=4" width=75 height=75></a> <a href="https://github.com/chrisxu2016"><img src="https://avatars.githubusercontent.com/u/18379485?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/AK391"><img src="https://avatars.githubusercontent.com/u/81195143?v=4" width=75 height=75></a> <a href="https://github.com/iftaken"><img src="https://avatars.githubusercontent.com/u/30135920?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/qingqing01"><img src="https://avatars.githubusercontent.com/u/7845005?v=4" width=75 height=75></a> <a href="https://github.com/lfchener"><img src="https://avatars.githubusercontent.com/u/6771821?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ericxk"><img src="https://avatars.githubusercontent.com/u/4719594?v=4" width=75 height=75></a> <a href="https://github.com/BarryKCL"><img src="https://avatars.githubusercontent.com/u/48039828?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/kvinwang"><img src="https://avatars.githubusercontent.com/u/6442159?v=4" width=75 height=75></a> <a href="https://github.com/mmglove"><img src="https://avatars.githubusercontent.com/u/38800877?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/jiqiren11"><img src="https://avatars.githubusercontent.com/u/82639260?v=4" width=75 height=75></a> <a href="https://github.com/gongel"><img src="https://avatars.githubusercontent.com/u/24390500?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/AshishKarel"><img src="https://avatars.githubusercontent.com/u/58069375?v=4" width=75 height=75></a> <a href="https://github.com/luotao1"><img src="https://avatars.githubusercontent.com/u/6836917?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/chesterkuo"><img src="https://avatars.githubusercontent.com/u/6285069?v=4" width=75 height=75></a> <a href="https://github.com/wanghaoshuang"><img src="https://avatars.githubusercontent.com/u/7534971?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/tensor-tang"><img src="https://avatars.githubusercontent.com/u/21351065?v=4" width=75 height=75></a> <a href="https://github.com/kslz"><img src="https://avatars.githubusercontent.com/u/54951765?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/hysunflower"><img src="https://avatars.githubusercontent.com/u/52739577?v=4" width=75 height=75></a> <a href="https://github.com/JiehangXie"><img src="https://avatars.githubusercontent.com/u/51190264?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/wwhu"><img src="https://avatars.githubusercontent.com/u/6081200?v=4" width=75 height=75></a> <a href="https://github.com/david-95"><img src="https://avatars.githubusercontent.com/u/15189190?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/lispc"><img src="https://avatars.githubusercontent.com/u/2833376?v=4" width=75 height=75></a> <a href="https://github.com/THUzyt21"><img src="https://avatars.githubusercontent.com/u/91456992?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/jerryuhoo"><img src="https://avatars.githubusercontent.com/u/24245709?v=4" width=75 height=75></a> <a href="https://github.com/buchongyu2"><img src="https://avatars.githubusercontent.com/u/29157444?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/harisankarh"><img src="https://avatars.githubusercontent.com/u/1307053?v=4" width=75 height=75></a> <a href="https://github.com/iclementine"><img src="https://avatars.githubusercontent.com/u/16222986?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Jackiexiao"><img src="https://avatars.githubusercontent.com/u/18050469?v=4" width=75 height=75></a> <a href="https://github.com/phecda-xu"><img src="https://avatars.githubusercontent.com/u/46859427?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/limpidezza"><img src="https://avatars.githubusercontent.com/u/71760778?v=4" width=75 height=75></a> <a href="https://github.com/freeliuzc"><img src="https://avatars.githubusercontent.com/u/23568094?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ZeyuChen"><img src="https://avatars.githubusercontent.com/u/1371212?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ccrrong"><img src="https://avatars.githubusercontent.com/u/101700995?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/AK391"><img src="https://avatars.githubusercontent.com/u/81195143?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/qingqing01"><img src="https://avatars.githubusercontent.com/u/7845005?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/0x45f"><img src="https://avatars.githubusercontent.com/u/23097963?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/vpegasus"><img src="https://avatars.githubusercontent.com/u/22723154?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ericxk"><img src="https://avatars.githubusercontent.com/u/4719594?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Betterman-qs"><img src="https://avatars.githubusercontent.com/u/61459181?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/sneaxiy"><img src="https://avatars.githubusercontent.com/u/32832641?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Doubledongli"><img src="https://avatars.githubusercontent.com/u/20540661?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/apps/dependabot"><img src="https://avatars.githubusercontent.com/in/29110?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/kvinwang"><img src="https://avatars.githubusercontent.com/u/6442159?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/chenkui164"><img src="https://avatars.githubusercontent.com/u/34813030?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/PaddleZhang"><img src="https://avatars.githubusercontent.com/u/97284124?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/billishyahao"><img src="https://avatars.githubusercontent.com/u/96406262?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/BrightXiaoHan"><img src="https://avatars.githubusercontent.com/u/25839309?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/jiqiren11"><img src="https://avatars.githubusercontent.com/u/82639260?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ryanrussell"><img src="https://avatars.githubusercontent.com/u/523300?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/GT-ZhangAcer"><img src="https://avatars.githubusercontent.com/u/46156734?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/tensor-tang"><img src="https://avatars.githubusercontent.com/u/21351065?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/hysunflower"><img src="https://avatars.githubusercontent.com/u/52739577?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/oyjxer"><img src="https://avatars.githubusercontent.com/u/16233945?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/JamesLim-sy"><img src="https://avatars.githubusercontent.com/u/61349199?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/limpidezza"><img src="https://avatars.githubusercontent.com/u/71760778?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/windstamp"><img src="https://avatars.githubusercontent.com/u/34057289?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/AshishKarel"><img src="https://avatars.githubusercontent.com/u/58069375?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/chesterkuo"><img src="https://avatars.githubusercontent.com/u/6285069?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/YDX-2147483647"><img src="https://avatars.githubusercontent.com/u/73375426?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/AdamBear"><img src="https://avatars.githubusercontent.com/u/2288870?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/wwhu"><img src="https://avatars.githubusercontent.com/u/6081200?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/lispc"><img src="https://avatars.githubusercontent.com/u/2833376?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/harisankarh"><img src="https://avatars.githubusercontent.com/u/1307053?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/pengzhendong"><img src="https://avatars.githubusercontent.com/u/10704539?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Jackiexiao"><img src="https://avatars.githubusercontent.com/u/18050469?s=60&v=4" width=75 height=75></a>
</p>
## Acknowledgement
- Many thanks to [HighCWu](https://github.com/HighCWu) for adding [VITS-aishell3](./examples/aishell3/vits) and [VITS-VC](./examples/aishell3/vits-vc) examples.
- Many thanks to [david-95](https://github.com/david-95) for improving TTS, fixing the multi-punctuation bug, and contributing programs and data.
- Many thanks to [BarryKCL](https://github.com/BarryKCL) for improving the TTS Chinese frontend based on [G2PW](https://github.com/GitYCC/g2pW).
- Many thanks to [yeyupiaoling](https://github.com/yeyupiaoling)/[PPASR](https://github.com/yeyupiaoling/PPASR)/[PaddlePaddle-DeepSpeech](https://github.com/yeyupiaoling/PaddlePaddle-DeepSpeech)/[VoiceprintRecognition-PaddlePaddle](https://github.com/yeyupiaoling/VoiceprintRecognition-PaddlePaddle)/[AudioClassification-PaddlePaddle](https://github.com/yeyupiaoling/AudioClassification-PaddlePaddle) for years of attention, constructive advice and great help.
- Many thanks to [mymagicpower](https://github.com/mymagicpower) for the Java implementation of ASR for [short](https://github.com/mymagicpower/AIAS/tree/main/3_audio_sdks/asr_sdk) and [long](https://github.com/mymagicpower/AIAS/tree/main/3_audio_sdks/asr_long_audio_sdk) audio files.
- Many thanks to [JiehangXie](https://github.com/JiehangXie)/[PaddleBoBo](https://github.com/JiehangXie/PaddleBoBo) for developing a Virtual Uploader (VUP)/Virtual YouTuber (VTuber) with the PaddleSpeech TTS function.
......
(简体中文|[English](./README.md))
<p align="center">
<img src="./docs/images/PaddleSpeech_logo.png" />
...@@ -165,13 +164,37 @@
- 🧩 Cascaded models application: as an extension of traditional speech tasks, we combine speech with Natural Language Processing, Computer Vision and other tasks to build industrial-grade applications closer to real-world needs.
### Recent Events
❗️Important❗️ PaddlePaddle Smart Finance industry live course series
✅ Covers four mainstream finance scenarios: intelligent risk control, intelligent operation and maintenance, intelligent marketing, and intelligent customer service
📆 September 6 - September 29, every Tuesday and Thursday at 19:00
+ In-depth insights into the smart finance industry
+ 8 curated live courses combining theory and practice
+ 10+ teaching examples and hands-on practice from real industry scenarios
+ Plus free compute resources, completion certificates and other gifts
Scan the QR code to sign up and save the live-stream link, and exchange ideas with industry experts
<div align="center">
<img src="https://user-images.githubusercontent.com/30135920/188431897-a02f028f-dd13-41e8-8ff6-749468cdc850.jpg" width = "200" />
</div>
### Recent Update
- ⚡ 2022.08.25: Release the TTS [finetune](./examples/other/tts_finetune/tts3) example.
- 🔥 2022.08.22: Add ERNIE-SAT models: [ERNIE-SAT-vctk](./examples/vctk/ernie_sat), [ERNIE-SAT-aishell3](./examples/aishell3/ernie_sat), [ERNIE-SAT-zh_en](./examples/aishell3_vctk/ernie_sat).
- 🔥 2022.08.15: Introduce [g2pW](https://github.com/GitYCC/g2pW) into the TTS Chinese text frontend.
- 🔥 2022.08.09: Release [Chinese-English mixed TTS](./examples/zh_en_tts/tts3).
- ⚡ 2022.08.03: Add ONNXRuntime inference to the TTS CLI.
- 🎉 2022.07.18: Release VITS models: [VITS-csmsc](./examples/csmsc/vits), [VITS-aishell3](./examples/aishell3/vits), [VITS-VC](./examples/aishell3/vits-vc).
- 🎉 2022.06.22: All TTS models support the ONNX format.
- 🍀 2022.06.17: Add the [PaddleSpeech Web Demo](./demos/speech_web).
- 👑 2022.05.13: PaddleSpeech releases the [PP-ASR](./docs/source/asr/PPASR_cn.md) streaming speech recognition system, the [PP-TTS](./docs/source/tts/PPTTS_cn.md) streaming speech synthesis system, and the [PP-VPR](docs/source/vpr/PPVPR_cn.md) full-pipeline voiceprint recognition system.
- 👏🏻 2022.05.06: PaddleSpeech Streaming Server is online! It covers speech recognition (with punctuation restoration and timestamps) and speech synthesis.
- 👏🏻 2022.05.06: PaddleSpeech Server is online! It covers audio classification, speech recognition, speech synthesis, speaker verification and punctuation restoration.
- 👏🏻 2022.03.28: PaddleSpeech CLI covers audio classification, speech recognition, speech translation (English to Chinese), speech synthesis and speaker verification.
- 🤗 2021.12.14: PaddleSpeech [ASR](https://huggingface.co/spaces/KPatrick/PaddleSpeechASR) and [TTS](https://huggingface.co/spaces/KPatrick/PaddleSpeechTTS) demos are available on Hugging Face Spaces!
- 👏🏻 2021.12.10: PaddleSpeech CLI supports audio classification, speech recognition, speech translation (English to Chinese) and speech synthesis.
### 🔥 Join the technical exchange group for member benefits
...@@ -196,13 +219,13 @@
+ python >= 3.7
+ linux (recommended), mac, windows
PaddleSpeech depends on paddlepaddle. For installation, please refer to the [paddlepaddle official website](https://www.paddlepaddle.org.cn/) and choose the version that matches your machine. Here is an example for the CPU version; other versions can be installed according to your machine.
```shell
pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
There are two quick ways to install PaddleSpeech: pip installation and compilation from source (recommended).
### pip install
```shell
...@@ -222,10 +245,9 @@ pip install .
<a name="快速开始"></a> <a name="快速开始"></a>
## 快速开始 ## 快速开始
安装完成后,开发者可以通过命令行或者 Python 快速开始,命令行模式下改变 `--input` 可以尝试用自己的音频或文本测试,支持 16k wav 格式音频。
安装完成后,开发者可以通过命令行或者Python快速开始,命令行模式下改变 `--input` 可以尝试用自己的音频或文本测试,支持16k wav格式音频。 你也可以在 `aistudio` 中快速体验 👉🏻[一键预测,快速上手 Speech 开发任务](https://aistudio.baidu.com/aistudio/projectdetail/4353348?sUid=2470186&shared=1&ts=1660878142250)
你也可以在`aistudio`中快速体验 👉🏻[PaddleSpeech API Demo ](https://aistudio.baidu.com/aistudio/projectdetail/4281335?shared=1)
测试音频示例下载 测试音频示例下载
```shell ```shell
...@@ -281,7 +303,7 @@ Python API 一键预测 ...@@ -281,7 +303,7 @@ Python API 一键预测
<details><summary>&emsp;An open-domain sound classification tool for multiple scenarios</summary>
A sound classification model based on the 527 categories of the AudioSet dataset
One-click command-line experience
...@@ -350,7 +372,7 @@ Python API 一键预测
<details><summary>&emsp;An end-to-end English-to-Chinese speech translation tool</summary>
Uses pre-compiled kaldi-related tools; only supported on Ubuntu
One-click command-line experience
...@@ -370,14 +392,15 @@ python API 一键预测
</details>
<a name="快速使用服务"></a> <a name="快速使用服务"></a>
## 快速使用服务 ## 快速使用服务
安装完成后,开发者可以通过命令行一键启动语音识别,语音合成,音频分类三种服务。 安装完成后,开发者可以通过命令行一键启动语音识别,语音合成,音频分类等多种服务。
你可以在 AI Studio 中快速体验:[SpeechServer 一键部署](https://aistudio.baidu.com/aistudio/projectdetail/4354592?sUid=2470186&shared=1&ts=1660878208266)
**启动服务** **启动服务**
```shell ```shell
paddlespeech_server start --config_file ./paddlespeech/server/conf/application.yaml paddlespeech_server start --config_file ./demos/speech_server/conf/application.yaml
``` ```
**访问语音识别服务** **访问语音识别服务**
...@@ -529,7 +552,7 @@ PaddleSpeech 的 **语音合成** 主要包含三个模块:文本前端、声 ...@@ -529,7 +552,7 @@ PaddleSpeech 的 **语音合成** 主要包含三个模块:文本前端、声
</td>
</tr>
<tr>
<td rowspan="5">Acoustic Model</td>
<td>Tacotron2</td>
<td>LJSpeech / CSMSC</td>
<td>
...@@ -552,9 +575,16 @@ PaddleSpeech 的 **语音合成** 主要包含三个模块:文本前端、声
</tr>
<tr>
<td>FastSpeech2</td>
<td>LJSpeech / VCTK / CSMSC / AISHELL-3 / ZH_EN / finetune</td>
<td>
<a href = "./examples/ljspeech/tts3">fastspeech2-ljspeech</a> / <a href = "./examples/vctk/tts3">fastspeech2-vctk</a> / <a href = "./examples/csmsc/tts3">fastspeech2-csmsc</a> / <a href = "./examples/aishell3/tts3">fastspeech2-aishell3</a> / <a href = "./examples/zh_en_tts/tts3">fastspeech2-zh_en</a> / <a href = "./examples/other/tts_finetune/tts3">fastspeech2-finetune</a>
</td>
</tr>
<tr>
<td>ERNIE-SAT</td>
<td>VCTK / AISHELL-3 / ZH_EN</td>
<td>
<a href = "./examples/vctk/ernie_sat">ERNIE-SAT-vctk</a> / <a href = "./examples/aishell3/ernie_sat">ERNIE-SAT-aishell3</a> / <a href = "./examples/aishell3_vctk/ernie_sat">ERNIE-SAT-zh_en</a>
</td>
</tr>
<tr>
...@@ -601,34 +631,47 @@ PaddleSpeech 的 **语音合成** 主要包含三个模块:文本前端、声
</td>
</tr>
<tr>
<td rowspan="5">Voice Cloning</td>
<td>GE2E</td>
<td>Librispeech, etc.</td>
<td>
<a href = "./examples/other/ge2e">GE2E</a>
</td>
</tr>
<tr>
<td>SV2TTS (GE2E + Tacotron2)</td>
<td>AISHELL-3</td>
<td>
<a href = "./examples/aishell3/vc0">VC0</a>
</td>
</tr>
<tr>
<td>SV2TTS (GE2E + FastSpeech2)</td>
<td>AISHELL-3</td>
<td>
<a href = "./examples/aishell3/vc1">VC1</a>
</td>
</tr>
<tr>
<td>SV2TTS (ECAPA-TDNN + FastSpeech2)</td>
<td>AISHELL-3</td>
<td>
<a href = "./examples/aishell3/vc2">VC2</a>
</td>
</tr>
<tr>
<td>GE2E + VITS</td>
<td>AISHELL-3</td>
<td>
<a href = "./examples/aishell3/vits-vc">VITS-VC</a>
</td>
</tr>
<tr>
<td rowspan="3">End-to-End</td>
<td>VITS</td>
<td>CSMSC / AISHELL-3</td>
<td>
<a href = "./examples/csmsc/vits">VITS-csmsc</a> / <a href = "./examples/aishell3/vits">VITS-aishell3</a>
</td>
</tr>
</tbody>
...@@ -796,44 +839,79 @@ PaddleSpeech 的 **语音合成** 主要包含三个模块:文本前端、声
### Contributors
<p align="center">
<a href="https://github.com/zh794390558"><img src="https://avatars.githubusercontent.com/u/3038472?v=4" width=75 height=75></a> <a href="https://github.com/zh794390558"><img src="https://avatars.githubusercontent.com/u/3038472?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Jackwaterveg"><img src="https://avatars.githubusercontent.com/u/87408988?v=4" width=75 height=75></a> <a href="https://github.com/Jackwaterveg"><img src="https://avatars.githubusercontent.com/u/87408988?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/yt605155624"><img src="https://avatars.githubusercontent.com/u/24568452?v=4" width=75 height=75></a> <a href="https://github.com/yt605155624"><img src="https://avatars.githubusercontent.com/u/24568452?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/kuke"><img src="https://avatars.githubusercontent.com/u/3064195?v=4" width=75 height=75></a> <a href="https://github.com/Honei"><img src="https://avatars.githubusercontent.com/u/11361692?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/xinghai-sun"><img src="https://avatars.githubusercontent.com/u/7038341?v=4" width=75 height=75></a> <a href="https://github.com/KPatr1ck"><img src="https://avatars.githubusercontent.com/u/22954146?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/pkuyym"><img src="https://avatars.githubusercontent.com/u/5782283?v=4" width=75 height=75></a> <a href="https://github.com/kuke"><img src="https://avatars.githubusercontent.com/u/3064195?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/KPatr1ck"><img src="https://avatars.githubusercontent.com/u/22954146?v=4" width=75 height=75></a> <a href="https://github.com/lym0302"><img src="https://avatars.githubusercontent.com/u/34430015?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/LittleChenCc"><img src="https://avatars.githubusercontent.com/u/10339970?v=4" width=75 height=75></a> <a href="https://github.com/SmileGoat"><img src="https://avatars.githubusercontent.com/u/56786796?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/745165806"><img src="https://avatars.githubusercontent.com/u/20623194?v=4" width=75 height=75></a> <a href="https://github.com/xinghai-sun"><img src="https://avatars.githubusercontent.com/u/7038341?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Mingxue-Xu"><img src="https://avatars.githubusercontent.com/u/92848346?v=4" width=75 height=75></a> <a href="https://github.com/pkuyym"><img src="https://avatars.githubusercontent.com/u/5782283?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/chrisxu2016"><img src="https://avatars.githubusercontent.com/u/18379485?v=4" width=75 height=75></a> <a href="https://github.com/LittleChenCc"><img src="https://avatars.githubusercontent.com/u/10339970?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/lfchener"><img src="https://avatars.githubusercontent.com/u/6771821?v=4" width=75 height=75></a> <a href="https://github.com/qingen"><img src="https://avatars.githubusercontent.com/u/3139179?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/luotao1"><img src="https://avatars.githubusercontent.com/u/6836917?v=4" width=75 height=75></a> <a href="https://github.com/D-DanielYang"><img src="https://avatars.githubusercontent.com/u/23690325?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/wanghaoshuang"><img src="https://avatars.githubusercontent.com/u/7534971?v=4" width=75 height=75></a> <a href="https://github.com/Mingxue-Xu"><img src="https://avatars.githubusercontent.com/u/92848346?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/gongel"><img src="https://avatars.githubusercontent.com/u/24390500?v=4" width=75 height=75></a> <a href="https://github.com/745165806"><img src="https://avatars.githubusercontent.com/u/20623194?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/mmglove"><img src="https://avatars.githubusercontent.com/u/38800877?v=4" width=75 height=75></a> <a href="https://github.com/jerryuhoo"><img src="https://avatars.githubusercontent.com/u/24245709?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/iclementine"><img src="https://avatars.githubusercontent.com/u/16222986?v=4" width=75 height=75></a> <a href="https://github.com/WilliamZhang06"><img src="https://avatars.githubusercontent.com/u/97937340?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ZeyuChen"><img src="https://avatars.githubusercontent.com/u/1371212?v=4" width=75 height=75></a> <a href="https://github.com/chrisxu2016"><img src="https://avatars.githubusercontent.com/u/18379485?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/AK391"><img src="https://avatars.githubusercontent.com/u/81195143?v=4" width=75 height=75></a> <a href="https://github.com/iftaken"><img src="https://avatars.githubusercontent.com/u/30135920?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/qingqing01"><img src="https://avatars.githubusercontent.com/u/7845005?v=4" width=75 height=75></a> <a href="https://github.com/lfchener"><img src="https://avatars.githubusercontent.com/u/6771821?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ericxk"><img src="https://avatars.githubusercontent.com/u/4719594?v=4" width=75 height=75></a> <a href="https://github.com/BarryKCL"><img src="https://avatars.githubusercontent.com/u/48039828?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/kvinwang"><img src="https://avatars.githubusercontent.com/u/6442159?v=4" width=75 height=75></a> <a href="https://github.com/mmglove"><img src="https://avatars.githubusercontent.com/u/38800877?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/jiqiren11"><img src="https://avatars.githubusercontent.com/u/82639260?v=4" width=75 height=75></a> <a href="https://github.com/gongel"><img src="https://avatars.githubusercontent.com/u/24390500?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/AshishKarel"><img src="https://avatars.githubusercontent.com/u/58069375?v=4" width=75 height=75></a> <a href="https://github.com/luotao1"><img src="https://avatars.githubusercontent.com/u/6836917?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/chesterkuo"><img src="https://avatars.githubusercontent.com/u/6285069?v=4" width=75 height=75></a> <a href="https://github.com/wanghaoshuang"><img src="https://avatars.githubusercontent.com/u/7534971?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/tensor-tang"><img src="https://avatars.githubusercontent.com/u/21351065?v=4" width=75 height=75></a> <a href="https://github.com/kslz"><img src="https://avatars.githubusercontent.com/u/54951765?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/hysunflower"><img src="https://avatars.githubusercontent.com/u/52739577?v=4" width=75 height=75></a> <a href="https://github.com/JiehangXie"><img src="https://avatars.githubusercontent.com/u/51190264?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/wwhu"><img src="https://avatars.githubusercontent.com/u/6081200?v=4" width=75 height=75></a> <a href="https://github.com/david-95"><img src="https://avatars.githubusercontent.com/u/15189190?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/lispc"><img src="https://avatars.githubusercontent.com/u/2833376?v=4" width=75 height=75></a> <a href="https://github.com/THUzyt21"><img src="https://avatars.githubusercontent.com/u/91456992?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/jerryuhoo"><img src="https://avatars.githubusercontent.com/u/24245709?v=4" width=75 height=75></a> <a href="https://github.com/buchongyu2"><img src="https://avatars.githubusercontent.com/u/29157444?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/harisankarh"><img src="https://avatars.githubusercontent.com/u/1307053?v=4" width=75 height=75></a> <a href="https://github.com/iclementine"><img src="https://avatars.githubusercontent.com/u/16222986?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Jackiexiao"><img src="https://avatars.githubusercontent.com/u/18050469?v=4" width=75 height=75></a> <a href="https://github.com/phecda-xu"><img src="https://avatars.githubusercontent.com/u/46859427?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/limpidezza"><img src="https://avatars.githubusercontent.com/u/71760778?v=4" width=75 height=75></a> <a href="https://github.com/freeliuzc"><img src="https://avatars.githubusercontent.com/u/23568094?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ZeyuChen"><img src="https://avatars.githubusercontent.com/u/1371212?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ccrrong"><img src="https://avatars.githubusercontent.com/u/101700995?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/AK391"><img src="https://avatars.githubusercontent.com/u/81195143?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/qingqing01"><img src="https://avatars.githubusercontent.com/u/7845005?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/0x45f"><img src="https://avatars.githubusercontent.com/u/23097963?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/vpegasus"><img src="https://avatars.githubusercontent.com/u/22723154?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ericxk"><img src="https://avatars.githubusercontent.com/u/4719594?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Betterman-qs"><img src="https://avatars.githubusercontent.com/u/61459181?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/sneaxiy"><img src="https://avatars.githubusercontent.com/u/32832641?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Doubledongli"><img src="https://avatars.githubusercontent.com/u/20540661?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/apps/dependabot"><img src="https://avatars.githubusercontent.com/in/29110?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/kvinwang"><img src="https://avatars.githubusercontent.com/u/6442159?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/chenkui164"><img src="https://avatars.githubusercontent.com/u/34813030?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/PaddleZhang"><img src="https://avatars.githubusercontent.com/u/97284124?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/billishyahao"><img src="https://avatars.githubusercontent.com/u/96406262?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/BrightXiaoHan"><img src="https://avatars.githubusercontent.com/u/25839309?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/jiqiren11"><img src="https://avatars.githubusercontent.com/u/82639260?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/ryanrussell"><img src="https://avatars.githubusercontent.com/u/523300?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/GT-ZhangAcer"><img src="https://avatars.githubusercontent.com/u/46156734?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/tensor-tang"><img src="https://avatars.githubusercontent.com/u/21351065?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/hysunflower"><img src="https://avatars.githubusercontent.com/u/52739577?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/oyjxer"><img src="https://avatars.githubusercontent.com/u/16233945?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/JamesLim-sy"><img src="https://avatars.githubusercontent.com/u/61349199?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/limpidezza"><img src="https://avatars.githubusercontent.com/u/71760778?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/windstamp"><img src="https://avatars.githubusercontent.com/u/34057289?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/AshishKarel"><img src="https://avatars.githubusercontent.com/u/58069375?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/chesterkuo"><img src="https://avatars.githubusercontent.com/u/6285069?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/YDX-2147483647"><img src="https://avatars.githubusercontent.com/u/73375426?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/AdamBear"><img src="https://avatars.githubusercontent.com/u/2288870?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/wwhu"><img src="https://avatars.githubusercontent.com/u/6081200?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/lispc"><img src="https://avatars.githubusercontent.com/u/2833376?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/harisankarh"><img src="https://avatars.githubusercontent.com/u/1307053?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/pengzhendong"><img src="https://avatars.githubusercontent.com/u/10704539?s=60&v=4" width=75 height=75></a>
<a href="https://github.com/Jackiexiao"><img src="https://avatars.githubusercontent.com/u/18050469?s=60&v=4" width=75 height=75></a>
</p>

## 致谢
- 非常感谢 [HighCWu](https://github.com/HighCWu) 新增 [VITS-aishell3](./examples/aishell3/vits) 和 [VITS-VC](./examples/aishell3/vits-vc) 代码示例。
- 非常感谢 [david-95](https://github.com/david-95) 修复句尾多标点符号出错的问题,贡献补充多条程序和数据。
- 非常感谢 [BarryKCL](https://github.com/BarryKCL) 基于 [G2PW](https://github.com/GitYCC/g2pW) 对 TTS 中文文本前端的优化。
- 非常感谢 [yeyupiaoling](https://github.com/yeyupiaoling)/[PPASR](https://github.com/yeyupiaoling/PPASR)/[PaddlePaddle-DeepSpeech](https://github.com/yeyupiaoling/PaddlePaddle-DeepSpeech)/[VoiceprintRecognition-PaddlePaddle](https://github.com/yeyupiaoling/VoiceprintRecognition-PaddlePaddle)/[AudioClassification-PaddlePaddle](https://github.com/yeyupiaoling/AudioClassification-PaddlePaddle) 多年来的关注和建议,以及在诸多问题上的帮助。
- 非常感谢 [mymagicpower](https://github.com/mymagicpower) 采用 PaddleSpeech 对 ASR 的[短语音](https://github.com/mymagicpower/AIAS/tree/main/3_audio_sdks/asr_sdk)和[长语音](https://github.com/mymagicpower/AIAS/tree/main/3_audio_sdks/asr_long_audio_sdk)进行 Java 实现。
- 非常感谢 [JiehangXie](https://github.com/JiehangXie)/[PaddleBoBo](https://github.com/JiehangXie/PaddleBoBo) 采用 PaddleSpeech 语音合成功能实现 Virtual Uploader(VUP)/Virtual YouTuber(VTuber) 虚拟主播。
......
...@@ -226,6 +226,12 @@ recall and elapsed time statistics are shown in the following figure:

The retrieval framework based on Milvus takes about 2.9 milliseconds to retrieve at a 90% recall rate, and feature extraction takes about 500 milliseconds (the test audio is about 5 seconds long), so a single audio query takes about 503 milliseconds in total, which meets the needs of most application scenarios.

* computing the embedding takes about 500 ms
* retrieval with cosine similarity takes about 2.9 ms
* a full query takes about 503 ms in total

> the test audio is 5 seconds long
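To reproduce a breakdown like this on your own setup, a minimal timing sketch is shown below; `extract_embedding` and `search_topk` are hypothetical placeholders standing in for this demo's feature-extraction and Milvus retrieval calls, so only the measurement pattern itself is meant to carry over.

```python
import time


def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed time in milliseconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - start) * 1000.0


# Placeholder stand-ins: swap in this demo's real feature-extraction and
# Milvus search calls when measuring on your own setup.
def extract_embedding(wav_path):
    return [0.0] * 2048


def search_topk(embedding):
    return []


emb, t_embed = timed(extract_embedding, "test_5s.wav")
_, t_search = timed(search_topk, emb)
print(f"embedding: {t_embed:.1f} ms, retrieval: {t_search:.1f} ms, "
      f"total: {t_embed + t_search:.1f} ms")
```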
### 6. Pretrained Models

Here is a list of pretrained models released by PaddleSpeech:
......
...@@ -26,8 +26,9 @@ def get_audios(path):
    """
    supported_formats = [".wav", ".mp3", ".ogg", ".flac", ".m4a"]
    return [
        item
        for sublist in [[os.path.join(dir, file) for file in files]
                        for dir, _, files in list(os.walk(path))]
        for item in sublist if os.path.splitext(item)[1] in supported_formats
    ]
......
([简体中文](./README_cn.md)|English)
# Metaverse
## Introduction
Metaverse is a new form of Internet application and social interaction that integrates virtual reality with a variety of other new technologies.
......
(简体中文|[English](./README.md))
# Metaverse
## 简介
Metaverse 是一种融合多种新技术、结合虚拟现实而产生的新型互联网应用和社交形式。
这个演示是一个让图片中的名人“说话”的实现。通过组合 `PaddleSpeech` 的 `TTS` 模块和 `PaddleGAN`,我们把安装步骤和相关模块集成到了一个 shell 脚本中。
## 使用
您可以使用 `PaddleSpeech` 的 `TTS` 模块和 `PaddleGAN`,让您最喜欢的人说出指定的内容,并构建您的虚拟人。
运行 `run.sh` 完成所有基本程序,包括安装。
```bash
./run.sh
```
`run.sh` 会先执行 `source path.sh` 来设置好环境变量。
如果您想尝试自己的句子,请替换 `sentences.txt` 中的句子。
如果您想尝试自己的图像,请替换 shell 脚本中的 `download/Lamarr.png`。
结果已展示在我们的 [notebook](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/tutorial/tts/tts_tutorial.ipynb) 中。
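下面是一个仅供参考的自定义示意(其中 `my_sentences.txt` 和 `my_face.png` 均为假设的文件名,并非本仓库自带):

```bash
# 示意脚本:文件名均为假设,句子文件的具体格式请以仓库中原有的 sentences.txt 为准
cp my_sentences.txt sentences.txt      # 替换要合成的句子
cp my_face.png download/Lamarr.png     # 替换要“开口说话”的图片
./run.sh                               # 重新运行完整流程(含安装)
```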
...@@ -19,6 +19,7 @@ The input of this cli demo should be a WAV file(`.wav`), and the sample rate mus
Here are sample files for this demo that can be downloaded:
```bash
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/85236145389.wav
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/123456789.wav
```
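As a quick sanity check, the short sketch below prints the sample rate and duration of a downloaded file so you can compare them against the input requirements above (it assumes the `soundfile` package is available in your environment, as it is for PaddleSpeech itself):

```python
import soundfile as sf

# Assumes the wget commands above were run in the current directory.
wav, sr = sf.read("85236145389.wav")
print(f"sample rate: {sr} Hz, duration: {len(wav) / sr:.2f} s")
```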
### 3. Usage
......
...@@ -19,6 +19,7 @@
```bash
# 该音频的内容是数字串 85236145389
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/85236145389.wav
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/123456789.wav
```
### 3. 使用方法
- 命令行 (推荐使用)
......
...@@ -61,7 +61,7 @@ tts_python:
    phones_dict:
    tones_dict:
    speaker_dict:
    spk_id: 0

    # voc (vocoder) choices=['pwgan_csmsc', 'pwgan_ljspeech', 'pwgan_aishell3',
    #                        'pwgan_vctk', 'mb_melgan_csmsc', 'style_melgan_csmsc',
...@@ -87,7 +87,7 @@ tts_inference:
    phones_dict:
    tones_dict:
    speaker_dict:
    spk_id: 0

    am_predictor_conf:
        device: # set 'gpu:id' or 'cpu'
...@@ -401,4 +401,4 @@ curl -X 'GET' \
  "code": 0,
  "result":"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
  "message": "ok"
```
\ No newline at end of file
...@@ -3,48 +3,48 @@ ...@@ -3,48 +3,48 @@
# 2. 接收录音音频,返回识别结果 # 2. 接收录音音频,返回识别结果
# 3. 接收ASR识别结果,返回NLP对话结果 # 3. 接收ASR识别结果,返回NLP对话结果
# 4. 接收NLP对话结果,返回TTS音频 # 4. 接收NLP对话结果,返回TTS音频
import argparse
import base64 import base64
import yaml
import os
import json
import datetime import datetime
import json
import os
from typing import List
import aiofiles
import librosa import librosa
import soundfile as sf import soundfile as sf
import numpy as np
import argparse
import uvicorn import uvicorn
import aiofiles from fastapi import FastAPI
from typing import Optional, List from fastapi import File
from pydantic import BaseModel from fastapi import Form
from fastapi import FastAPI, Header, File, UploadFile, Form, Cookie, WebSocket, WebSocketDisconnect from fastapi import UploadFile
from fastapi import WebSocket
from fastapi import WebSocketDisconnect
from fastapi.responses import StreamingResponse from fastapi.responses import StreamingResponse
from starlette.responses import FileResponse from pydantic import BaseModel
from starlette.middleware.cors import CORSMiddleware
from starlette.requests import Request
from starlette.websockets import WebSocketState as WebSocketState
from src.AudioManeger import AudioMannger from src.AudioManeger import AudioMannger
from src.util import *
from src.robot import Robot from src.robot import Robot
from src.WebsocketManeger import ConnectionManager
from src.SpeechBase.vpr import VPR from src.SpeechBase.vpr import VPR
from src.util import *
from src.WebsocketManeger import ConnectionManager
from starlette.middleware.cors import CORSMiddleware
from starlette.requests import Request
from starlette.responses import FileResponse
from starlette.websockets import WebSocketState as WebSocketState
from paddlespeech.server.engine.asr.online.python.asr_engine import PaddleASRConnectionHanddler from paddlespeech.server.engine.asr.online.python.asr_engine import PaddleASRConnectionHanddler
from paddlespeech.server.utils.audio_process import float2pcm from paddlespeech.server.utils.audio_process import float2pcm
# 解析配置 # 解析配置
parser = argparse.ArgumentParser( parser = argparse.ArgumentParser(prog='PaddleSpeechDemo', add_help=True)
prog='PaddleSpeechDemo', add_help=True)
parser.add_argument( parser.add_argument(
"--port", "--port",
action="store", action="store",
type=int, type=int,
help="port of the app", help="port of the app",
default=8010, default=8010,
required=False) required=False)
args = parser.parse_args() args = parser.parse_args()
port = args.port port = args.port
...@@ -60,39 +60,41 @@ ie_model_path = "source/model" ...@@ -60,39 +60,41 @@ ie_model_path = "source/model"
UPLOAD_PATH = "source/vpr" UPLOAD_PATH = "source/vpr"
WAV_PATH = "source/wav" WAV_PATH = "source/wav"
base_sources = [UPLOAD_PATH, WAV_PATH]
base_sources = [
UPLOAD_PATH, WAV_PATH
]
for path in base_sources: for path in base_sources:
os.makedirs(path, exist_ok=True) os.makedirs(path, exist_ok=True)
# 初始化 # 初始化
app = FastAPI() app = FastAPI()
chatbot = Robot(asr_config, tts_config, asr_init_path, ie_model_path=ie_model_path) chatbot = Robot(
asr_config, tts_config, asr_init_path, ie_model_path=ie_model_path)
manager = ConnectionManager() manager = ConnectionManager()
aumanager = AudioMannger(chatbot) aumanager = AudioMannger(chatbot)
aumanager.init() aumanager.init()
vpr = VPR(db_path, dim = 192, top_k = 5) vpr = VPR(db_path, dim=192, top_k=5)
# 服务配置 # 服务配置
class NlpBase(BaseModel): class NlpBase(BaseModel):
chat: str chat: str
class TtsBase(BaseModel): class TtsBase(BaseModel):
text: str text: str
class Audios: class Audios:
def __init__(self) -> None: def __init__(self) -> None:
self.audios = b"" self.audios = b""
audios = Audios() audios = Audios()
###################################################################### ######################################################################
########################### ASR 服务 ################################# ########################### ASR 服务 #################################
##################################################################### #####################################################################
# 接收文件,返回ASR结果 # 接收文件,返回ASR结果
# 上传文件 # 上传文件
@app.post("/asr/offline") @app.post("/asr/offline")
...@@ -101,7 +103,8 @@ async def speech2textOffline(files: List[UploadFile]): ...@@ -101,7 +103,8 @@ async def speech2textOffline(files: List[UploadFile]):
asr_res = "" asr_res = ""
for file in files[:1]: for file in files[:1]:
# 生成时间戳 # 生成时间戳
now_name = "asr_offline_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav" now_name = "asr_offline_" + datetime.datetime.strftime(
datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name) out_file_path = os.path.join(WAV_PATH, now_name)
async with aiofiles.open(out_file_path, 'wb') as out_file: async with aiofiles.open(out_file_path, 'wb') as out_file:
content = await file.read() # async read content = await file.read() # async read
...@@ -110,10 +113,9 @@ async def speech2textOffline(files: List[UploadFile]): ...@@ -110,10 +113,9 @@ async def speech2textOffline(files: List[UploadFile]):
# 返回ASR识别结果 # 返回ASR识别结果
asr_res = chatbot.speech2text(out_file_path) asr_res = chatbot.speech2text(out_file_path)
return SuccessRequest(result=asr_res) return SuccessRequest(result=asr_res)
# else:
# return ErrorRequest(message="文件不是.wav格式")
return ErrorRequest(message="上传文件为空") return ErrorRequest(message="上传文件为空")
# 接收文件,同时将wav强制转成16k, int16类型 # 接收文件,同时将wav强制转成16k, int16类型
@app.post("/asr/offlinefile") @app.post("/asr/offlinefile")
async def speech2textOfflineFile(files: List[UploadFile]): async def speech2textOfflineFile(files: List[UploadFile]):
...@@ -121,7 +123,8 @@ async def speech2textOfflineFile(files: List[UploadFile]): ...@@ -121,7 +123,8 @@ async def speech2textOfflineFile(files: List[UploadFile]):
asr_res = "" asr_res = ""
for file in files[:1]: for file in files[:1]:
# 生成时间戳 # 生成时间戳
now_name = "asr_offline_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav" now_name = "asr_offline_" + datetime.datetime.strftime(
datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name) out_file_path = os.path.join(WAV_PATH, now_name)
async with aiofiles.open(out_file_path, 'wb') as out_file: async with aiofiles.open(out_file_path, 'wb') as out_file:
content = await file.read() # async read content = await file.read() # async read
...@@ -132,22 +135,18 @@ async def speech2textOfflineFile(files: List[UploadFile]): ...@@ -132,22 +135,18 @@ async def speech2textOfflineFile(files: List[UploadFile]):
wav = float2pcm(wav) # float32 to int16 wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes wav_bytes = wav.tobytes() # to bytes
wav_base64 = base64.b64encode(wav_bytes).decode('utf8') wav_base64 = base64.b64encode(wav_bytes).decode('utf8')
# 将文件重新写入 # 将文件重新写入
now_name = now_name[:-4] + "_16k" + ".wav" now_name = now_name[:-4] + "_16k" + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name) out_file_path = os.path.join(WAV_PATH, now_name)
sf.write(out_file_path,wav,16000) sf.write(out_file_path, wav, 16000)
# 返回ASR识别结果 # 返回ASR识别结果
asr_res = chatbot.speech2text(out_file_path) asr_res = chatbot.speech2text(out_file_path)
response_res = { response_res = {"asr_result": asr_res, "wav_base64": wav_base64}
"asr_result": asr_res,
"wav_base64": wav_base64
}
return SuccessRequest(result=response_res) return SuccessRequest(result=response_res)
return ErrorRequest(message="上传文件为空")
return ErrorRequest(message="上传文件为空")
# 流式接收测试 # 流式接收测试
...@@ -161,15 +160,17 @@ async def speech2textOnlineRecive(files: List[UploadFile]): ...@@ -161,15 +160,17 @@ async def speech2textOnlineRecive(files: List[UploadFile]):
print(f"audios长度变化: {len(audios.audios)}") print(f"audios长度变化: {len(audios.audios)}")
return SuccessRequest(message="接收成功") return SuccessRequest(message="接收成功")
# 采集环境噪音大小 # 采集环境噪音大小
@app.post("/asr/collectEnv") @app.post("/asr/collectEnv")
async def collectEnv(files: List[UploadFile]): async def collectEnv(files: List[UploadFile]):
for file in files[:1]: for file in files[:1]:
content = await file.read() # async read content = await file.read() # async read
# 初始化, wav 前44字节是头部信息 # 初始化, wav 前44字节是头部信息
aumanager.compute_env_volume(content[44:]) aumanager.compute_env_volume(content[44:])
vad_ = aumanager.vad_threshold vad_ = aumanager.vad_threshold
return SuccessRequest(result=vad_,message="采集环境噪音成功") return SuccessRequest(result=vad_, message="采集环境噪音成功")
# 停止录音 # 停止录音
@app.get("/asr/stopRecord") @app.get("/asr/stopRecord")
...@@ -179,6 +180,7 @@ async def stopRecord(): ...@@ -179,6 +180,7 @@ async def stopRecord():
print("Online录音暂停") print("Online录音暂停")
return SuccessRequest(message="停止成功") return SuccessRequest(message="停止成功")
# 恢复录音 # 恢复录音
@app.get("/asr/resumeRecord") @app.get("/asr/resumeRecord")
async def resumeRecord(): async def resumeRecord():
...@@ -187,7 +189,7 @@ async def resumeRecord(): ...@@ -187,7 +189,7 @@ async def resumeRecord():
return SuccessRequest(message="Online录音恢复") return SuccessRequest(message="Online录音恢复")
# 聊天用的ASR # 聊天用的 ASR
@app.websocket("/ws/asr/offlineStream") @app.websocket("/ws/asr/offlineStream")
async def websocket_endpoint(websocket: WebSocket): async def websocket_endpoint(websocket: WebSocket):
await manager.connect(websocket) await manager.connect(websocket)
...@@ -210,9 +212,9 @@ async def websocket_endpoint(websocket: WebSocket): ...@@ -210,9 +212,9 @@ async def websocket_endpoint(websocket: WebSocket):
# print(f"用户-{user}-离开") # print(f"用户-{user}-离开")
# Online识别的ASR # 流式识别的 ASR
@app.websocket('/ws/asr/onlineStream') @app.websocket('/ws/asr/onlineStream')
async def websocket_endpoint(websocket: WebSocket): async def websocket_endpoint_online(websocket: WebSocket):
"""PaddleSpeech Online ASR Server api """PaddleSpeech Online ASR Server api
Args: Args:
...@@ -298,12 +300,14 @@ async def websocket_endpoint(websocket: WebSocket): ...@@ -298,12 +300,14 @@ async def websocket_endpoint(websocket: WebSocket):
except WebSocketDisconnect: except WebSocketDisconnect:
pass pass
###################################################################### ######################################################################
########################### NLP 服务 ################################# ########################### NLP 服务 #################################
##################################################################### #####################################################################
@app.post("/nlp/chat") @app.post("/nlp/chat")
async def chatOffline(nlp_base:NlpBase): async def chatOffline(nlp_base: NlpBase):
chat = nlp_base.chat chat = nlp_base.chat
if not chat: if not chat:
return ErrorRequest(message="传入文本为空") return ErrorRequest(message="传入文本为空")
...@@ -311,8 +315,9 @@ async def chatOffline(nlp_base:NlpBase): ...@@ -311,8 +315,9 @@ async def chatOffline(nlp_base:NlpBase):
res = chatbot.chat(chat) res = chatbot.chat(chat)
return SuccessRequest(result=res) return SuccessRequest(result=res)
@app.post("/nlp/ie") @app.post("/nlp/ie")
async def ieOffline(nlp_base:NlpBase): async def ieOffline(nlp_base: NlpBase):
nlp_text = nlp_base.chat nlp_text = nlp_base.chat
if not nlp_text: if not nlp_text:
return ErrorRequest(message="传入文本为空") return ErrorRequest(message="传入文本为空")
...@@ -320,17 +325,20 @@ async def ieOffline(nlp_base:NlpBase): ...@@ -320,17 +325,20 @@ async def ieOffline(nlp_base:NlpBase):
res = chatbot.ie(nlp_text) res = chatbot.ie(nlp_text)
return SuccessRequest(result=res) return SuccessRequest(result=res)
###################################################################### ######################################################################
########################### TTS 服务 ################################# ########################### TTS 服务 #################################
##################################################################### #####################################################################
@app.post("/tts/offline") @app.post("/tts/offline")
async def text2speechOffline(tts_base:TtsBase): async def text2speechOffline(tts_base: TtsBase):
text = tts_base.text text = tts_base.text
if not text: if not text:
return ErrorRequest(message="文本为空") return ErrorRequest(message="文本为空")
else: else:
now_name = "tts_"+ datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav" now_name = "tts_" + datetime.datetime.strftime(
datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name) out_file_path = os.path.join(WAV_PATH, now_name)
# 保存为文件,再转成base64传输 # 保存为文件,再转成base64传输
chatbot.text2speech(text, outpath=out_file_path) chatbot.text2speech(text, outpath=out_file_path)
...@@ -339,12 +347,14 @@ async def text2speechOffline(tts_base:TtsBase): ...@@ -339,12 +347,14 @@ async def text2speechOffline(tts_base:TtsBase):
base_str = base64.b64encode(data_bin) base_str = base64.b64encode(data_bin)
return SuccessRequest(result=base_str) return SuccessRequest(result=base_str)
# http流式TTS # http流式TTS
@app.post("/tts/online") @app.post("/tts/online")
async def stream_tts(request_body: TtsBase): async def stream_tts(request_body: TtsBase):
text = request_body.text text = request_body.text
return StreamingResponse(chatbot.text2speechStreamBytes(text=text)) return StreamingResponse(chatbot.text2speechStreamBytes(text=text))
# ws流式TTS # ws流式TTS
@app.websocket("/ws/tts/online") @app.websocket("/ws/tts/online")
async def stream_ttsWS(websocket: WebSocket): async def stream_ttsWS(websocket: WebSocket):
...@@ -356,17 +366,11 @@ async def stream_ttsWS(websocket: WebSocket): ...@@ -356,17 +366,11 @@ async def stream_ttsWS(websocket: WebSocket):
if text: if text:
for sub_wav in chatbot.text2speechStream(text=text): for sub_wav in chatbot.text2speechStream(text=text):
# print("发送sub wav: ", len(sub_wav)) # print("发送sub wav: ", len(sub_wav))
res = { res = {"wav": sub_wav, "done": False}
"wav": sub_wav,
"done": False
}
await websocket.send_json(res) await websocket.send_json(res)
# 输送结束 # 输送结束
res = { res = {"wav": sub_wav, "done": True}
"wav": sub_wav,
"done": True
}
await websocket.send_json(res) await websocket.send_json(res)
# manager.disconnect(websocket) # manager.disconnect(websocket)
...@@ -396,8 +400,9 @@ async def vpr_enroll(table_name: str=None, ...@@ -396,8 +400,9 @@ async def vpr_enroll(table_name: str=None,
return {'status': False, 'msg': "spk_id can not be None"} return {'status': False, 'msg': "spk_id can not be None"}
# Save the upload data to server. # Save the upload data to server.
content = await audio.read() content = await audio.read()
now_name = "vpr_enroll_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav" now_name = "vpr_enroll_" + datetime.datetime.strftime(
audio_path = os.path.join(UPLOAD_PATH, now_name) datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
audio_path = os.path.join(UPLOAD_PATH, now_name)
with open(audio_path, "wb+") as f: with open(audio_path, "wb+") as f:
f.write(content) f.write(content)
...@@ -413,20 +418,19 @@ async def vpr_recog(request: Request, ...@@ -413,20 +418,19 @@ async def vpr_recog(request: Request,
audio: UploadFile=File(...)): audio: UploadFile=File(...)):
# Voice print recognition online # Voice print recognition online
# try: # try:
# Save the upload data to server. # Save the upload data to server.
content = await audio.read() content = await audio.read()
now_name = "vpr_query_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav" now_name = "vpr_query_" + datetime.datetime.strftime(
query_audio_path = os.path.join(UPLOAD_PATH, now_name) datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
query_audio_path = os.path.join(UPLOAD_PATH, now_name)
with open(query_audio_path, "wb+") as f: with open(query_audio_path, "wb+") as f:
f.write(content) f.write(content)
spk_ids, paths, scores = vpr.do_search_vpr(query_audio_path) spk_ids, paths, scores = vpr.do_search_vpr(query_audio_path)
res = dict(zip(spk_ids, zip(paths, scores))) res = dict(zip(spk_ids, zip(paths, scores)))
# Sort results by distance metric, closest distances first # Sort results by distance metric, closest distances first
res = sorted(res.items(), key=lambda item: item[1][1], reverse=True) res = sorted(res.items(), key=lambda item: item[1][1], reverse=True)
return res return res
# except Exception as e:
# return {'status': False, 'msg': e}, 400
@app.post('/vpr/del') @app.post('/vpr/del')
...@@ -460,17 +464,18 @@ async def vpr_database64(vprId: int): ...@@ -460,17 +464,18 @@ async def vpr_database64(vprId: int):
return {'status': False, 'msg': "vpr_id can not be None"} return {'status': False, 'msg': "vpr_id can not be None"}
audio_path = vpr.do_get_wav(vprId) audio_path = vpr.do_get_wav(vprId)
# 返回base64 # 返回base64
# 将文件转成16k, 16bit类型的wav文件 # 将文件转成16k, 16bit类型的wav文件
wav, sr = librosa.load(audio_path, sr=16000) wav, sr = librosa.load(audio_path, sr=16000)
wav = float2pcm(wav) # float32 to int16 wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes wav_bytes = wav.tobytes() # to bytes
wav_base64 = base64.b64encode(wav_bytes).decode('utf8') wav_base64 = base64.b64encode(wav_bytes).decode('utf8')
return SuccessRequest(result=wav_base64) return SuccessRequest(result=wav_base64)
except Exception as e: except Exception as e:
return {'status': False, 'msg': e}, 400 return {'status': False, 'msg': e}, 400
@app.get('/vpr/data') @app.get('/vpr/data')
async def vpr_data(vprId: int): async def vpr_data(vprId: int):
# Get the audio file from path by spk_id in MySQL # Get the audio file from path by spk_id in MySQL
...@@ -482,11 +487,6 @@ async def vpr_data(vprId: int): ...@@ -482,11 +487,6 @@ async def vpr_data(vprId: int):
except Exception as e: except Exception as e:
return {'status': False, 'msg': e}, 400 return {'status': False, 'msg': e}, 400
if __name__ == '__main__': if __name__ == '__main__':
uvicorn.run(app=app, host='0.0.0.0', port=port) uvicorn.run(app=app, host='0.0.0.0', port=port)
aiofiles
faiss-cpu
fastapi
librosa
numpy
paddlenlp
paddlepaddle
paddlespeech
pydantic
python-multipart
scikit_learn
SoundFile
starlette
uvicorn
\ No newline at end of file
import imp import datetime
from queue import Queue
import numpy as np
import os import os
import wave import wave
import random
import datetime import numpy as np
from .util import randName from .util import randName
class AudioMannger: class AudioMannger:
def __init__(self, robot, frame_length=160, frame=10, data_width=2, vad_default = 300): def __init__(self,
robot,
frame_length=160,
frame=10,
data_width=2,
vad_default=300):
# 二进制 pcm 流 # 二进制 pcm 流
self.audios = b'' self.audios = b''
self.asr_result = "" self.asr_result = ""
...@@ -20,8 +24,9 @@ class AudioMannger: ...@@ -20,8 +24,9 @@ class AudioMannger:
os.makedirs(self.file_dir, exist_ok=True) os.makedirs(self.file_dir, exist_ok=True)
self.vad_deafult = vad_default self.vad_deafult = vad_default
self.vad_threshold = vad_default self.vad_threshold = vad_default
self.vad_threshold_path = os.path.join(self.file_dir, "vad_threshold.npy") self.vad_threshold_path = os.path.join(self.file_dir,
"vad_threshold.npy")
# 10ms 一帧 # 10ms 一帧
self.frame_length = frame_length self.frame_length = frame_length
# 10帧,检测一次 vad # 10帧,检测一次 vad
...@@ -30,67 +35,64 @@ class AudioMannger: ...@@ -30,67 +35,64 @@ class AudioMannger:
self.data_width = data_width self.data_width = data_width
# window # window
self.window_length = frame_length * frame * data_width self.window_length = frame_length * frame * data_width
# 是否开始录音 # 是否开始录音
self.on_asr = False self.on_asr = False
self.silence_cnt = 0 self.silence_cnt = 0
self.max_silence_cnt = 4 self.max_silence_cnt = 4
self.is_pause = False # 录音暂停与恢复 self.is_pause = False # 录音暂停与恢复
def init(self): def init(self):
if os.path.exists(self.vad_threshold_path): if os.path.exists(self.vad_threshold_path):
# 平均响度文件存在 # 平均响度文件存在
self.vad_threshold = np.load(self.vad_threshold_path) self.vad_threshold = np.load(self.vad_threshold_path)
def clear_audio(self): def clear_audio(self):
# 清空 pcm 累积片段与 asr 识别结果 # 清空 pcm 累积片段与 asr 识别结果
self.audios = b'' self.audios = b''
def clear_asr(self): def clear_asr(self):
self.asr_result = "" self.asr_result = ""
def compute_chunk_volume(self, start_index, pcm_bins): def compute_chunk_volume(self, start_index, pcm_bins):
# 根据帧长计算能量平均值 # 根据帧长计算能量平均值
pcm_bin = pcm_bins[start_index: start_index + self.window_length] pcm_bin = pcm_bins[start_index:start_index + self.window_length]
# 转成 numpy # 转成 numpy
pcm_np = np.frombuffer(pcm_bin, np.int16) pcm_np = np.frombuffer(pcm_bin, np.int16)
# 归一化 + 计算响度 # 归一化 + 计算响度
x = pcm_np.astype(np.float32) x = pcm_np.astype(np.float32)
x = np.abs(x) x = np.abs(x)
return np.mean(x) return np.mean(x)
def is_speech(self, start_index, pcm_bins): def is_speech(self, start_index, pcm_bins):
# 检查是否没 # 检查是否没
if start_index > len(pcm_bins): if start_index > len(pcm_bins):
return False return False
# 检查从这个 start 开始是否为静音帧 # 检查从这个 start 开始是否为静音帧
energy = self.compute_chunk_volume(start_index=start_index, pcm_bins=pcm_bins) energy = self.compute_chunk_volume(
start_index=start_index, pcm_bins=pcm_bins)
# print(energy) # print(energy)
if energy > self.vad_threshold: if energy > self.vad_threshold:
return True return True
else: else:
return False return False
def compute_env_volume(self, pcm_bins): def compute_env_volume(self, pcm_bins):
max_energy = 0 max_energy = 0
start = 0 start = 0
while start < len(pcm_bins): while start < len(pcm_bins):
energy = self.compute_chunk_volume(start_index=start, pcm_bins=pcm_bins) energy = self.compute_chunk_volume(
start_index=start, pcm_bins=pcm_bins)
if energy > max_energy: if energy > max_energy:
max_energy = energy max_energy = energy
start += self.window_length start += self.window_length
self.vad_threshold = max_energy + 100 if max_energy > self.vad_deafult else self.vad_deafult self.vad_threshold = max_energy + 100 if max_energy > self.vad_deafult else self.vad_deafult
# 保存成文件 # 保存成文件
np.save(self.vad_threshold_path, self.vad_threshold) np.save(self.vad_threshold_path, self.vad_threshold)
print(f"vad 阈值大小: {self.vad_threshold}") print(f"vad 阈值大小: {self.vad_threshold}")
print(f"环境采样保存: {os.path.realpath(self.vad_threshold_path)}") print(f"环境采样保存: {os.path.realpath(self.vad_threshold_path)}")
def stream_asr(self, pcm_bin): def stream_asr(self, pcm_bin):
# 先把 pcm_bin 送进去做端点检测 # 先把 pcm_bin 送进去做端点检测
start = 0 start = 0
...@@ -99,7 +101,7 @@ class AudioMannger: ...@@ -99,7 +101,7 @@ class AudioMannger:
self.on_asr = True self.on_asr = True
self.silence_cnt = 0 self.silence_cnt = 0
print("录音中") print("录音中")
self.audios += pcm_bin[ start : start + self.window_length] self.audios += pcm_bin[start:start + self.window_length]
else: else:
if self.on_asr: if self.on_asr:
self.silence_cnt += 1 self.silence_cnt += 1
...@@ -110,41 +112,42 @@ class AudioMannger: ...@@ -110,41 +112,42 @@ class AudioMannger:
print("录音停止") print("录音停止")
# audios 保存为 wav, 送入 ASR # audios 保存为 wav, 送入 ASR
if len(self.audios) > 2 * 16000: if len(self.audios) > 2 * 16000:
file_path = os.path.join(self.file_dir, "asr_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav") file_path = os.path.join(
self.file_dir,
"asr_" + datetime.datetime.strftime(
datetime.datetime.now(),
'%Y%m%d%H%M%S') + randName() + ".wav")
self.save_audio(file_path=file_path) self.save_audio(file_path=file_path)
self.asr_result = self.robot.speech2text(file_path) self.asr_result = self.robot.speech2text(file_path)
self.clear_audio() self.clear_audio()
return self.asr_result return self.asr_result
else: else:
# 正常接收 # 正常接收
print("录音中 静音") print("录音中 静音")
self.audios += pcm_bin[ start : start + self.window_length] self.audios += pcm_bin[start:start + self.window_length]
start += self.window_length start += self.window_length
return "" return ""
def save_audio(self, file_path): def save_audio(self, file_path):
print("保存音频") print("保存音频")
wf = wave.open(file_path, 'wb') # 创建一个音频文件,名字为“01.wav" wf = wave.open(file_path, 'wb') # 创建一个音频文件,名字为“01.wav"
wf.setnchannels(1) # 设置声道数为2 wf.setnchannels(1) # 设置声道数为2
wf.setsampwidth(2) # 设置采样深度为 wf.setsampwidth(2) # 设置采样深度为
wf.setframerate(16000) # 设置采样率为16000 wf.setframerate(16000) # 设置采样率为16000
# 将数据写入创建的音频文件 # 将数据写入创建的音频文件
wf.writeframes(self.audios) wf.writeframes(self.audios)
# 写完后将文件关闭 # 写完后将文件关闭
wf.close() wf.close()
def end(self): def end(self):
# audios 保存为 wav, 送入 ASR # audios 保存为 wav, 送入 ASR
file_path = os.path.join(self.file_dir, "asr.wav") file_path = os.path.join(self.file_dir, "asr.wav")
self.save_audio(file_path=file_path) self.save_audio(file_path=file_path)
return self.robot.speech2text(file_path) return self.robot.speech2text(file_path)
def stop(self): def stop(self):
self.is_pause = True self.is_pause = True
self.audios = b'' self.audios = b''
def resume(self): def resume(self):
self.is_pause = False self.is_pause = False
\ No newline at end of file
from re import sub
import numpy as np import numpy as np
import paddle
import librosa
import soundfile
from paddlespeech.server.engine.asr.online.python.asr_engine import ASREngine from paddlespeech.server.engine.asr.online.python.asr_engine import ASREngine
from paddlespeech.server.engine.asr.online.python.asr_engine import PaddleASRConnectionHanddler from paddlespeech.server.engine.asr.online.python.asr_engine import PaddleASRConnectionHanddler
from paddlespeech.server.utils.config import get_config from paddlespeech.server.utils.config import get_config
def readWave(samples): def readWave(samples):
x_len = len(samples) x_len = len(samples)
...@@ -31,20 +28,23 @@ def readWave(samples): ...@@ -31,20 +28,23 @@ def readWave(samples):
class ASR: class ASR:
def __init__(self, config_path, ) -> None: def __init__(
self,
config_path, ) -> None:
self.config = get_config(config_path)['asr_online'] self.config = get_config(config_path)['asr_online']
self.engine = ASREngine() self.engine = ASREngine()
self.engine.init(self.config) self.engine.init(self.config)
self.connection_handler = PaddleASRConnectionHanddler(self.engine) self.connection_handler = PaddleASRConnectionHanddler(self.engine)
def offlineASR(self, samples, sample_rate=16000): def offlineASR(self, samples, sample_rate=16000):
x_chunk, x_chunk_lens = self.engine.preprocess(samples=samples, sample_rate=sample_rate) x_chunk, x_chunk_lens = self.engine.preprocess(
samples=samples, sample_rate=sample_rate)
self.engine.run(x_chunk, x_chunk_lens) self.engine.run(x_chunk, x_chunk_lens)
result = self.engine.postprocess() result = self.engine.postprocess()
self.engine.reset() self.engine.reset()
return result return result
def onlineASR(self, samples:bytes=None, is_finished=False): def onlineASR(self, samples: bytes=None, is_finished=False):
if not is_finished: if not is_finished:
# 流式开始 # 流式开始
self.connection_handler.extract_feat(samples) self.connection_handler.extract_feat(samples)
...@@ -58,5 +58,3 @@ class ASR: ...@@ -58,5 +58,3 @@ class ASR:
asr_results = self.connection_handler.get_result() asr_results = self.connection_handler.get_result()
self.connection_handler.reset() self.connection_handler.reset()
return asr_results return asr_results
\ No newline at end of file
from paddlenlp import Taskflow


class NLP:
    def __init__(self, ie_model_path=None):
        schema = ["时间", "出发地", "目的地", "费用"]
        if ie_model_path:
            self.ie_model = Taskflow(
                "information_extraction",
                schema=schema,
                task_path=ie_model_path)
        else:
            self.ie_model = Taskflow("information_extraction", schema=schema)

        self.dialogue_model = Taskflow("dialogue")

    def chat(self, text):
        result = self.dialogue_model([text])
        return result[0]

    def ie(self, text):
        result = self.ie_model(text)
        return result
\ No newline at end of file
import base64 import base64
import sqlite3
import os import os
import sqlite3
import numpy as np import numpy as np
from pkg_resources import resource_stream
def dict_factory(cursor, row): def dict_factory(cursor, row):
d = {} d = {}
for idx, col in enumerate(cursor.description): for idx, col in enumerate(cursor.description):
d[col[0]] = row[idx] d[col[0]] = row[idx]
return d return d
class DataBase(object): class DataBase(object):
def __init__(self, db_path:str): def __init__(self, db_path: str):
db_path = os.path.realpath(db_path) db_path = os.path.realpath(db_path)
if os.path.exists(db_path): if os.path.exists(db_path):
...@@ -21,12 +22,12 @@ class DataBase(object): ...@@ -21,12 +22,12 @@ class DataBase(object):
db_path_dir = os.path.dirname(db_path) db_path_dir = os.path.dirname(db_path)
os.makedirs(db_path_dir, exist_ok=True) os.makedirs(db_path_dir, exist_ok=True)
self.db_path = db_path self.db_path = db_path
self.conn = sqlite3.connect(self.db_path) self.conn = sqlite3.connect(self.db_path)
self.conn.row_factory = dict_factory self.conn.row_factory = dict_factory
self.cursor = self.conn.cursor() self.cursor = self.conn.cursor()
self.init_database() self.init_database()
def init_database(self): def init_database(self):
""" """
初始化数据库, 若表不存在则创建 初始化数据库, 若表不存在则创建
...@@ -41,20 +42,21 @@ class DataBase(object): ...@@ -41,20 +42,21 @@ class DataBase(object):
""" """
self.cursor.execute(sql) self.cursor.execute(sql)
self.conn.commit() self.conn.commit()
def execute_base(self, sql, data_dict): def execute_base(self, sql, data_dict):
self.cursor.execute(sql, data_dict) self.cursor.execute(sql, data_dict)
self.conn.commit() self.conn.commit()
def insert_one(self, username, vector_base64:str, wav_path): def insert_one(self, username, vector_base64: str, wav_path):
if not os.path.exists(wav_path): if not os.path.exists(wav_path):
return None, "wav not exists" return None, "wav not exists"
else: else:
sql = f""" sql = """
insert into insert into
vprtable (username, vector, wavpath) vprtable (username, vector, wavpath)
values (?, ?, ?) values (?, ?, ?)
""" """
try: try:
self.cursor.execute(sql, (username, vector_base64, wav_path)) self.cursor.execute(sql, (username, vector_base64, wav_path))
self.conn.commit() self.conn.commit()
...@@ -63,25 +65,27 @@ class DataBase(object): ...@@ -63,25 +65,27 @@ class DataBase(object):
except Exception as e: except Exception as e:
print(e) print(e)
return None, e return None, e
def select_all(self): def select_all(self):
sql = """ sql = """
SELECT * from vprtable SELECT * from vprtable
""" """
result = self.cursor.execute(sql).fetchall() result = self.cursor.execute(sql).fetchall()
return result return result
def select_by_id(self, vpr_id): def select_by_id(self, vpr_id):
sql = f""" sql = f"""
SELECT * from vprtable WHERE `id` = {vpr_id} SELECT * from vprtable WHERE `id` = {vpr_id}
""" """
result = self.cursor.execute(sql).fetchall() result = self.cursor.execute(sql).fetchall()
return result return result
def select_by_username(self, username): def select_by_username(self, username):
sql = f""" sql = f"""
SELECT * from vprtable WHERE `username` = '{username}' SELECT * from vprtable WHERE `username` = '{username}'
""" """
result = self.cursor.execute(sql).fetchall() result = self.cursor.execute(sql).fetchall()
return result return result
...@@ -89,28 +93,30 @@ class DataBase(object): ...@@ -89,28 +93,30 @@ class DataBase(object):
sql = f""" sql = f"""
DELETE from vprtable WHERE `username`='{username}' DELETE from vprtable WHERE `username`='{username}'
""" """
self.cursor.execute(sql) self.cursor.execute(sql)
self.conn.commit() self.conn.commit()
def drop_all(self): def drop_all(self):
sql = f""" sql = """
DELETE from vprtable DELETE from vprtable
""" """
self.cursor.execute(sql) self.cursor.execute(sql)
self.conn.commit() self.conn.commit()
def drop_table(self): def drop_table(self):
sql = f""" sql = """
DROP TABLE vprtable DROP TABLE vprtable
""" """
self.cursor.execute(sql) self.cursor.execute(sql)
self.conn.commit() self.conn.commit()
def encode_vector(self, vector:np.ndarray): def encode_vector(self, vector: np.ndarray):
return base64.b64encode(vector).decode('utf8') return base64.b64encode(vector).decode('utf8')
def decode_vector(self, vector_base64, dtype=np.float32): def decode_vector(self, vector_base64, dtype=np.float32):
b = base64.b64decode(vector_base64) b = base64.b64decode(vector_base64)
vc = np.frombuffer(b, dtype=dtype) vc = np.frombuffer(b, dtype=dtype)
return vc return vc
\ No newline at end of file
...@@ -5,18 +5,19 @@ ...@@ -5,18 +5,19 @@
# 2. 加载模型 # 2. 加载模型
# 3. 端到端推理 # 3. 端到端推理
# 4. 流式推理 # 4. 流式推理
import base64 import base64
import math
import logging import logging
import math
import numpy as np import numpy as np
from paddlespeech.server.utils.onnx_infer import get_sess
from paddlespeech.t2s.frontend.zh_frontend import Frontend from paddlespeech.server.engine.tts.online.onnx.tts_engine import TTSEngine
from paddlespeech.server.utils.util import denorm, get_chunks
from paddlespeech.server.utils.audio_process import float2pcm from paddlespeech.server.utils.audio_process import float2pcm
from paddlespeech.server.utils.config import get_config from paddlespeech.server.utils.config import get_config
from paddlespeech.server.utils.util import denorm
from paddlespeech.server.utils.util import get_chunks
from paddlespeech.t2s.frontend.zh_frontend import Frontend
from paddlespeech.server.engine.tts.online.onnx.tts_engine import TTSEngine
class TTS: class TTS:
def __init__(self, config_path): def __init__(self, config_path):
...@@ -26,12 +27,12 @@ class TTS: ...@@ -26,12 +27,12 @@ class TTS:
self.engine.init(self.config) self.engine.init(self.config)
self.executor = self.engine.executor self.executor = self.engine.executor
#self.engine.warm_up() #self.engine.warm_up()
# 前端初始化 # 前端初始化
self.frontend = Frontend( self.frontend = Frontend(
phone_vocab_path=self.engine.executor.phones_dict, phone_vocab_path=self.engine.executor.phones_dict,
tone_vocab_path=None) tone_vocab_path=None)
def depadding(self, data, chunk_num, chunk_id, block, pad, upsample): def depadding(self, data, chunk_num, chunk_id, block, pad, upsample):
""" """
Streaming inference removes the result of pad inference Streaming inference removes the result of pad inference
...@@ -48,39 +49,37 @@ class TTS: ...@@ -48,39 +49,37 @@ class TTS:
data = data[front_pad * upsample:(front_pad + block) * upsample] data = data[front_pad * upsample:(front_pad + block) * upsample]
return data return data
def offlineTTS(self, text): def offlineTTS(self, text):
get_tone_ids = False get_tone_ids = False
merge_sentences = False merge_sentences = False
input_ids = self.frontend.get_input_ids( input_ids = self.frontend.get_input_ids(
text, text, merge_sentences=merge_sentences, get_tone_ids=get_tone_ids)
merge_sentences=merge_sentences,
get_tone_ids=get_tone_ids)
phone_ids = input_ids["phone_ids"] phone_ids = input_ids["phone_ids"]
wav_list = [] wav_list = []
for i in range(len(phone_ids)): for i in range(len(phone_ids)):
orig_hs = self.engine.executor.am_encoder_infer_sess.run( orig_hs = self.engine.executor.am_encoder_infer_sess.run(
None, input_feed={'text': phone_ids[i].numpy()} None, input_feed={'text': phone_ids[i].numpy()})
)
hs = orig_hs[0] hs = orig_hs[0]
am_decoder_output = self.engine.executor.am_decoder_sess.run( am_decoder_output = self.engine.executor.am_decoder_sess.run(
None, input_feed={'xs': hs}) None, input_feed={'xs': hs})
am_postnet_output = self.engine.executor.am_postnet_sess.run( am_postnet_output = self.engine.executor.am_postnet_sess.run(
None, None,
input_feed={ input_feed={
'xs': np.transpose(am_decoder_output[0], (0, 2, 1)) 'xs': np.transpose(am_decoder_output[0], (0, 2, 1))
}) })
am_output_data = am_decoder_output + np.transpose( am_output_data = am_decoder_output + np.transpose(
am_postnet_output[0], (0, 2, 1)) am_postnet_output[0], (0, 2, 1))
normalized_mel = am_output_data[0][0] normalized_mel = am_output_data[0][0]
mel = denorm(normalized_mel, self.engine.executor.am_mu, self.engine.executor.am_std) mel = denorm(normalized_mel, self.engine.executor.am_mu,
self.engine.executor.am_std)
wav = self.engine.executor.voc_sess.run( wav = self.engine.executor.voc_sess.run(
output_names=None, input_feed={'logmel': mel})[0] output_names=None, input_feed={'logmel': mel})[0]
wav_list.append(wav) wav_list.append(wav)
wavs = np.concatenate(wav_list) wavs = np.concatenate(wav_list)
return wavs return wavs
def streamTTS(self, text): def streamTTS(self, text):
get_tone_ids = False get_tone_ids = False
...@@ -88,9 +87,7 @@ class TTS: ...@@ -88,9 +87,7 @@ class TTS:
# front # front
input_ids = self.frontend.get_input_ids( input_ids = self.frontend.get_input_ids(
text, text, merge_sentences=merge_sentences, get_tone_ids=get_tone_ids)
merge_sentences=merge_sentences,
get_tone_ids=get_tone_ids)
phone_ids = input_ids["phone_ids"] phone_ids = input_ids["phone_ids"]
for i in range(len(phone_ids)): for i in range(len(phone_ids)):
...@@ -105,14 +102,15 @@ class TTS: ...@@ -105,14 +102,15 @@ class TTS:
mel = mel[0] mel = mel[0]
# voc streaming # voc streaming
mel_chunks = get_chunks(mel, self.config.voc_block, self.config.voc_pad, "voc") mel_chunks = get_chunks(mel, self.config.voc_block,
self.config.voc_pad, "voc")
voc_chunk_num = len(mel_chunks) voc_chunk_num = len(mel_chunks)
for i, mel_chunk in enumerate(mel_chunks): for i, mel_chunk in enumerate(mel_chunks):
sub_wav = self.executor.voc_sess.run( sub_wav = self.executor.voc_sess.run(
output_names=None, input_feed={'logmel': mel_chunk}) output_names=None, input_feed={'logmel': mel_chunk})
sub_wav = self.depadding(sub_wav[0], voc_chunk_num, i, sub_wav = self.depadding(
self.config.voc_block, self.config.voc_pad, sub_wav[0], voc_chunk_num, i, self.config.voc_block,
self.config.voc_upsample) self.config.voc_pad, self.config.voc_upsample)
yield self.after_process(sub_wav) yield self.after_process(sub_wav)
...@@ -130,7 +128,8 @@ class TTS: ...@@ -130,7 +128,8 @@ class TTS:
end = min(self.config.voc_block + self.config.voc_pad, mel_len) end = min(self.config.voc_block + self.config.voc_pad, mel_len)
# streaming am # streaming am
hss = get_chunks(orig_hs, self.config.am_block, self.config.am_pad, "am") hss = get_chunks(orig_hs, self.config.am_block,
self.config.am_pad, "am")
am_chunk_num = len(hss) am_chunk_num = len(hss)
for i, hs in enumerate(hss): for i, hs in enumerate(hss):
am_decoder_output = self.executor.am_decoder_sess.run( am_decoder_output = self.executor.am_decoder_sess.run(
...@@ -147,7 +146,8 @@ class TTS: ...@@ -147,7 +146,8 @@ class TTS:
sub_mel = denorm(normalized_mel, self.executor.am_mu, sub_mel = denorm(normalized_mel, self.executor.am_mu,
self.executor.am_std) self.executor.am_std)
sub_mel = self.depadding(sub_mel, am_chunk_num, i, sub_mel = self.depadding(sub_mel, am_chunk_num, i,
self.config.am_block, self.config.am_pad, 1) self.config.am_block,
self.config.am_pad, 1)
if i == 0: if i == 0:
mel_streaming = sub_mel mel_streaming = sub_mel
...@@ -165,23 +165,22 @@ class TTS: ...@@ -165,23 +165,22 @@ class TTS:
output_names=None, input_feed={'logmel': voc_chunk}) output_names=None, input_feed={'logmel': voc_chunk})
sub_wav = self.depadding( sub_wav = self.depadding(
sub_wav[0], voc_chunk_num, voc_chunk_id, sub_wav[0], voc_chunk_num, voc_chunk_id,
self.config.voc_block, self.config.voc_pad, self.config.voc_upsample) self.config.voc_block, self.config.voc_pad,
self.config.voc_upsample)
yield self.after_process(sub_wav) yield self.after_process(sub_wav)
voc_chunk_id += 1 voc_chunk_id += 1
start = max( start = max(0, voc_chunk_id * self.config.voc_block -
0, voc_chunk_id * self.config.voc_block - self.config.voc_pad) self.config.voc_pad)
end = min( end = min((voc_chunk_id + 1) * self.config.voc_block +
(voc_chunk_id + 1) * self.config.voc_block + self.config.voc_pad, self.config.voc_pad, mel_len)
mel_len)
else: else:
logging.error( logging.error(
"Only support fastspeech2_csmsc or fastspeech2_cnndecoder_csmsc on streaming tts." "Only support fastspeech2_csmsc or fastspeech2_cnndecoder_csmsc on streaming tts."
) )
def streamTTSBytes(self, text): def streamTTSBytes(self, text):
for wav in self.engine.executor.infer( for wav in self.engine.executor.infer(
text=text, text=text,
...@@ -191,19 +190,14 @@ class TTS: ...@@ -191,19 +190,14 @@ class TTS:
wav = float2pcm(wav) # float32 to int16 wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes wav_bytes = wav.tobytes() # to bytes
yield wav_bytes yield wav_bytes
def after_process(self, wav): def after_process(self, wav):
# for tvm # for tvm
wav = float2pcm(wav) # float32 to int16 wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes wav_bytes = wav.tobytes() # to bytes
wav_base64 = base64.b64encode(wav_bytes).decode('utf8') # to base64 wav_base64 = base64.b64encode(wav_bytes).decode('utf8') # to base64
return wav_base64 return wav_base64
def streamTTS_TVM(self, text): def streamTTS_TVM(self, text):
# 用 TVM 优化 # 用 TVM 优化
pass pass
\ No newline at end of file
# vpr Demo 没有使用 mysql 与 muilvs, 仅用于docker演示 # vpr Demo 没有使用 mysql 与 muilvs, 仅用于docker演示
import logging import logging
import faiss import faiss
from matplotlib import use
import numpy as np import numpy as np
from .sql_helper import DataBase from .sql_helper import DataBase
from .vpr_encode import get_audio_embedding from .vpr_encode import get_audio_embedding
class VPR: class VPR:
def __init__(self, db_path, dim, top_k) -> None: def __init__(self, db_path, dim, top_k) -> None:
# 初始化 # 初始化
...@@ -14,15 +16,15 @@ class VPR: ...@@ -14,15 +16,15 @@ class VPR:
self.top_k = top_k self.top_k = top_k
self.dtype = np.float32 self.dtype = np.float32
self.vpr_idx = 0 self.vpr_idx = 0
# db 初始化 # db 初始化
self.db = DataBase(db_path) self.db = DataBase(db_path)
# faiss 初始化 # faiss 初始化
index_ip = faiss.IndexFlatIP(dim) index_ip = faiss.IndexFlatIP(dim)
self.index_ip = faiss.IndexIDMap(index_ip) self.index_ip = faiss.IndexIDMap(index_ip)
self.init() self.init()
def init(self): def init(self):
# demo 初始化,把 mysql中的向量注册到 faiss 中 # demo 初始化,把 mysql中的向量注册到 faiss 中
sql_dbs = self.db.select_all() sql_dbs = self.db.select_all()
...@@ -34,12 +36,13 @@ class VPR: ...@@ -34,12 +36,13 @@ class VPR:
if len(vc.shape) == 1: if len(vc.shape) == 1:
vc = np.expand_dims(vc, axis=0) vc = np.expand_dims(vc, axis=0)
# 构建数据库 # 构建数据库
self.index_ip.add_with_ids(vc, np.array((idx,)).astype('int64')) self.index_ip.add_with_ids(vc, np.array(
(idx, )).astype('int64'))
logging.info("faiss 构建完毕") logging.info("faiss 构建完毕")
def faiss_enroll(self, idx, vc): def faiss_enroll(self, idx, vc):
self.index_ip.add_with_ids(vc, np.array((idx,)).astype('int64')) self.index_ip.add_with_ids(vc, np.array((idx, )).astype('int64'))
def vpr_enroll(self, username, wav_path): def vpr_enroll(self, username, wav_path):
# 注册声纹 # 注册声纹
emb = get_audio_embedding(wav_path) emb = get_audio_embedding(wav_path)
...@@ -53,21 +56,22 @@ class VPR: ...@@ -53,21 +56,22 @@ class VPR:
else: else:
last_idx, mess = None last_idx, mess = None
return last_idx return last_idx
def vpr_recog(self, wav_path): def vpr_recog(self, wav_path):
# 识别声纹 # 识别声纹
emb_search = get_audio_embedding(wav_path) emb_search = get_audio_embedding(wav_path)
if emb_search is not None: if emb_search is not None:
emb_search = np.expand_dims(emb_search, axis=0) emb_search = np.expand_dims(emb_search, axis=0)
D, I = self.index_ip.search(emb_search, self.top_k) D, I = self.index_ip.search(emb_search, self.top_k)
D = D.tolist()[0] D = D.tolist()[0]
I = I.tolist()[0] I = I.tolist()[0]
return [(round(D[i] * 100, 2 ), I[i]) for i in range(len(D)) if I[i] != -1] return [(round(D[i] * 100, 2), I[i]) for i in range(len(D))
if I[i] != -1]
else: else:
logging.error("识别失败") logging.error("识别失败")
return None return None
def do_search_vpr(self, wav_path): def do_search_vpr(self, wav_path):
spk_ids, paths, scores = [], [], [] spk_ids, paths, scores = [], [], []
recog_result = self.vpr_recog(wav_path) recog_result = self.vpr_recog(wav_path)
...@@ -78,41 +82,39 @@ class VPR: ...@@ -78,41 +82,39 @@ class VPR:
scores.append(score) scores.append(score)
paths.append("") paths.append("")
return spk_ids, paths, scores return spk_ids, paths, scores
def vpr_del(self, username): def vpr_del(self, username):
# 根据用户username, 删除声纹 # 根据用户username, 删除声纹
# 查用户ID,删除对应向量 # 查用户ID,删除对应向量
res = self.db.select_by_username(username) res = self.db.select_by_username(username)
for r in res: for r in res:
idx = r['id'] idx = r['id']
self.index_ip.remove_ids(np.array((idx,)).astype('int64')) self.index_ip.remove_ids(np.array((idx, )).astype('int64'))
self.db.drop_by_username(username) self.db.drop_by_username(username)
def vpr_list(self): def vpr_list(self):
# 获取数据列表 # 获取数据列表
return self.db.select_all() return self.db.select_all()
def do_list(self): def do_list(self):
spk_ids, vpr_ids = [], [] spk_ids, vpr_ids = [], []
for res in self.db.select_all(): for res in self.db.select_all():
spk_ids.append(res['username']) spk_ids.append(res['username'])
vpr_ids.append(res['id']) vpr_ids.append(res['id'])
return spk_ids, vpr_ids return spk_ids, vpr_ids
def do_get_wav(self, vpr_idx): def do_get_wav(self, vpr_idx):
res = self.db.select_by_id(vpr_idx) res = self.db.select_by_id(vpr_idx)
return res[0]['wavpath'] return res[0]['wavpath']
def vpr_data(self, idx): def vpr_data(self, idx):
# 获取对应ID的数据 # 获取对应ID的数据
res = self.db.select_by_id(idx) res = self.db.select_by_id(idx)
return res return res
def vpr_droptable(self): def vpr_droptable(self):
# 删除表 # 删除表
self.db.drop_table() self.db.drop_table()
# 清空 faiss # 清空 faiss
self.index_ip.reset() self.index_ip.reset()
import logging

import numpy as np
from paddlespeech.cli.vector import VectorExecutor

vector_executor = VectorExecutor()


def get_audio_embedding(path):
    """
    Use vpr_inference to generate embedding of audio
...@@ -16,5 +19,3 @@ def get_audio_embedding(path):
    except Exception as e:
        logging.error(f"Error with embedding:{e}")
        return None
\ No newline at end of file
...@@ -2,6 +2,7 @@ from typing import List
from fastapi import WebSocket


class ConnectionManager:
    def __init__(self):
        # 存放激活的ws连接对象
...@@ -28,4 +29,4 @@ class ConnectionManager:
            await connection.send_text(message)


manager = ConnectionManager()
\ No newline at end of file
from paddlespeech.cli.asr.infer import ASRExecutor
import soundfile as sf
import os import os
import librosa
import soundfile as sf
from src.SpeechBase.asr import ASR from src.SpeechBase.asr import ASR
from src.SpeechBase.tts import TTS
from src.SpeechBase.nlp import NLP from src.SpeechBase.nlp import NLP
from src.SpeechBase.tts import TTS
from paddlespeech.cli.asr.infer import ASRExecutor
class Robot: class Robot:
def __init__(self, asr_config, tts_config,asr_init_path, def __init__(self,
asr_config,
tts_config,
asr_init_path,
ie_model_path=None) -> None: ie_model_path=None) -> None:
self.nlp = NLP(ie_model_path=ie_model_path) self.nlp = NLP(ie_model_path=ie_model_path)
self.asr = ASR(config_path=asr_config) self.asr = ASR(config_path=asr_config)
self.tts = TTS(config_path=tts_config) self.tts = TTS(config_path=tts_config)
self.tts_sample_rate = 24000 self.tts_sample_rate = 24000
self.asr_sample_rate = 16000 self.asr_sample_rate = 16000
# streaming ASR is less accurate than the end-to-end model, so the streaming and end-to-end models are kept separate here # streaming ASR is less accurate than the end-to-end model, so the streaming and end-to-end models are kept separate here
self.asr_model = ASRExecutor() self.asr_model = ASRExecutor()
self.asr_name = "conformer_wenetspeech" self.asr_name = "conformer_wenetspeech"
self.warm_up_asrmodel(asr_init_path) self.warm_up_asrmodel(asr_init_path)
def warm_up_asrmodel(self, asr_init_path): def warm_up_asrmodel(self, asr_init_path):
if not os.path.exists(asr_init_path): if not os.path.exists(asr_init_path):
path_dir = os.path.dirname(asr_init_path) path_dir = os.path.dirname(asr_init_path)
if not os.path.exists(path_dir): if not os.path.exists(path_dir):
os.makedirs(path_dir, exist_ok=True) os.makedirs(path_dir, exist_ok=True)
# generate the initial audio with TTS, sample rate 24000 # generate the initial audio with TTS, sample rate 24000
text = "生成初始音频" text = "生成初始音频"
self.text2speech(text, asr_init_path) self.text2speech(text, asr_init_path)
# initialize the asr model # initialize the asr model
self.asr_model(asr_init_path, model=self.asr_name,lang='zh', self.asr_model(
sample_rate=16000, force_yes=True) asr_init_path,
model=self.asr_name,
lang='zh',
sample_rate=16000,
force_yes=True)
def speech2text(self, audio_file): def speech2text(self, audio_file):
self.asr_model.preprocess(self.asr_name, audio_file) self.asr_model.preprocess(self.asr_name, audio_file)
self.asr_model.infer(self.asr_name) self.asr_model.infer(self.asr_name)
res = self.asr_model.postprocess() res = self.asr_model.postprocess()
return res return res
def text2speech(self, text, outpath): def text2speech(self, text, outpath):
wav = self.tts.offlineTTS(text) wav = self.tts.offlineTTS(text)
sf.write( sf.write(outpath, wav, samplerate=self.tts_sample_rate)
outpath, wav, samplerate=self.tts_sample_rate)
res = wav res = wav
return res return res
def text2speechStream(self, text): def text2speechStream(self, text):
for sub_wav_base64 in self.tts.streamTTS(text=text): for sub_wav_base64 in self.tts.streamTTS(text=text):
yield sub_wav_base64 yield sub_wav_base64
def text2speechStreamBytes(self, text): def text2speechStreamBytes(self, text):
for wav_bytes in self.tts.streamTTSBytes(text=text): for wav_bytes in self.tts.streamTTSBytes(text=text):
yield wav_bytes yield wav_bytes
...@@ -66,5 +70,3 @@ class Robot: ...@@ -66,5 +70,3 @@ class Robot:
def ie(self, text): def ie(self, text):
result = self.nlp.ie(text) result = self.nlp.ie(text)
return result return result
\ No newline at end of file
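A hedged usage sketch of the `Robot` class above — the import path, config files, and wav paths are placeholders, not files from this diff:

```python
# Module path and file names below are assumptions for illustration.
from src.SpeechBase.robot import Robot

robot = Robot(
    asr_config="conf/asr_application.yaml",    # placeholder config
    tts_config="conf/tts_application.yaml",    # placeholder config
    asr_init_path="data/asr_init.wav")         # generated by warm_up_asrmodel

# offline ASR on a 16 kHz wav
text = robot.speech2text("data/query.wav")

# offline TTS to a 24 kHz wav
robot.text2speech("今天天气不错", outpath="data/reply.wav")

# streaming TTS: iterate base64-encoded wav chunks
for chunk in robot.text2speechStream("欢迎使用语音服务"):
    pass  # e.g. forward each chunk to a websocket client
```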
import random import random
def randName(n=5): def randName(n=5):
return "".join(random.sample('zyxwvutsrqponmlkjihgfedcba',n)) return "".join(random.sample('zyxwvutsrqponmlkjihgfedcba', n))
def SuccessRequest(result=None, message="ok"): def SuccessRequest(result=None, message="ok"):
return { return {"code": 0, "result": result, "message": message}
"code": 0,
"result":result,
"message": message
}
def ErrorRequest(result=None, message="error"): def ErrorRequest(result=None, message="error"):
return { return {"code": -1, "result": result, "message": message}
"code": -1,
"result":result,
"message": message
}
\ No newline at end of file
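These helpers typically wrap FastAPI endpoint responses; a sketch with a hypothetical route (the import path is an assumption):

```python
# Hypothetical endpoint showing how SuccessRequest / ErrorRequest are used.
from fastapi import FastAPI

from src.util import ErrorRequest, SuccessRequest  # import path assumed

app = FastAPI()


@app.get("/vpr/list")
async def vpr_list():
    try:
        spk_ids, vpr_ids = ["alice", "bob"], [1, 2]  # stand-in for a DB query
        return SuccessRequest(result={"spk_ids": spk_ids, "vpr_ids": vpr_ids})
    except Exception as e:
        return ErrorRequest(message=str(e))
```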
([简体中文](./README_cn.md)|English)
# Story Talker # Story Talker
## Introduction ## Introduction
Storybooks are very important children's enlightenment books, but parents usually don't have enough time to read storybooks for their children. For very young children, they may not understand the Chinese characters in storybooks. Or sometimes, children just want to "listen" but don't want to "read". Storybooks are very important children's enlightenment books, but parents usually don't have enough time to read storybooks for their children. For very young children, they may not understand the Chinese characters in storybooks. Or sometimes, children just want to "listen" but don't want to "read".
......
(简体中文|[English](./README.md))
# Story Talker
## 简介
故事书是非常重要的儿童启蒙书,但家长通常没有足够的时间为孩子读故事书。对于非常小的孩子,他们可能不理解故事书中的汉字。或有时,孩子们只是想“听”,而不想“读”。
您可以使用 `PaddleOCR` 获取故事书的文本,并通过 `PaddleSpeech` 的 `TTS` 模块进行阅读。
## 使用
运行以下命令行开始:
```
./run.sh
```
结果已显示在 [notebook](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/tutorial/tts/tts_tutorial.ipynb)
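The pipeline this README describes (OCR the page, then synthesize the recognized text) can be sketched in a few lines. This is illustrative only, not the demo's actual `run.sh`/notebook code; the image path is a placeholder and the result unpacking assumes a recent PaddleOCR release:

```python
# Illustrative sketch: recognise the text on a storybook page with PaddleOCR,
# then read it out with PaddleSpeech TTS.
from paddleocr import PaddleOCR
from paddlespeech.cli.tts import TTSExecutor

ocr = PaddleOCR(lang="ch")   # Chinese text recognition
tts = TTSExecutor()

result = ocr.ocr("storybook_page.jpg")  # placeholder image path
# result[0] is a list of [box, (text, score)] items in recent PaddleOCR versions
page_text = "".join(item[1][0] for item in result[0])

tts(text=page_text, output="story.wav", lang="zh")
```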
...@@ -28,6 +28,7 @@ asr_online: ...@@ -28,6 +28,7 @@ asr_online:
sample_rate: 16000 sample_rate: 16000
cfg_path: cfg_path:
decode_method: decode_method:
num_decoding_left_chunks: -1
force_yes: True force_yes: True
device: 'cpu' # cpu or gpu:id device: 'cpu' # cpu or gpu:id
decode_method: "attention_rescoring" decode_method: "attention_rescoring"
......
...@@ -34,7 +34,7 @@ if __name__ == '__main__': ...@@ -34,7 +34,7 @@ if __name__ == '__main__':
n = 0 n = 0
for m in rtfs: for m in rtfs:
# not accurate, may have duplicate log # not accurate, may have duplicate log
n += 1 n += 1
T += m['T'] T += m['T']
P += m['P'] P += m['P']
......
...@@ -29,7 +29,7 @@ tts_online: ...@@ -29,7 +29,7 @@ tts_online:
phones_dict: phones_dict:
tones_dict: tones_dict:
speaker_dict: speaker_dict:
spk_id: 0
# voc (vocoder) choices=['mb_melgan_csmsc, hifigan_csmsc'] # voc (vocoder) choices=['mb_melgan_csmsc, hifigan_csmsc']
# Both mb_melgan_csmsc and hifigan_csmsc support streaming voc inference # Both mb_melgan_csmsc and hifigan_csmsc support streaming voc inference
...@@ -70,7 +70,6 @@ tts_online-onnx: ...@@ -70,7 +70,6 @@ tts_online-onnx:
phones_dict: phones_dict:
tones_dict: tones_dict:
speaker_dict: speaker_dict:
spk_id: 0
am_sample_rate: 24000 am_sample_rate: 24000
am_sess_conf: am_sess_conf:
device: "cpu" # set 'gpu:id' or 'cpu' device: "cpu" # set 'gpu:id' or 'cpu'
......
...@@ -29,7 +29,7 @@ tts_online: ...@@ -29,7 +29,7 @@ tts_online:
phones_dict: phones_dict:
tones_dict: tones_dict:
speaker_dict: speaker_dict:
spk_id: 0
# voc (vocoder) choices=['mb_melgan_csmsc, hifigan_csmsc'] # voc (vocoder) choices=['mb_melgan_csmsc, hifigan_csmsc']
# Both mb_melgan_csmsc and hifigan_csmsc support streaming voc inference # Both mb_melgan_csmsc and hifigan_csmsc support streaming voc inference
...@@ -70,7 +70,6 @@ tts_online-onnx: ...@@ -70,7 +70,6 @@ tts_online-onnx:
phones_dict: phones_dict:
tones_dict: tones_dict:
speaker_dict: speaker_dict:
spk_id: 0
am_sample_rate: 24000 am_sample_rate: 24000
am_sess_conf: am_sess_conf:
device: "cpu" # set 'gpu:id' or 'cpu' device: "cpu" # set 'gpu:id' or 'cpu'
......
([简体中文](./README_cn.md)|English)
# Style FastSpeech2 # Style FastSpeech2
## Introduction ## Introduction
[FastSpeech2](https://arxiv.org/abs/2006.04558) is a classical acoustic model for Text-to-Speech synthesis, which introduces controllable speech input, including `phoneme duration`, `energy` and `pitch`. [FastSpeech2](https://arxiv.org/abs/2006.04558) is a classical acoustic model for Text-to-Speech synthesis, which introduces controllable speech input, including `phoneme duration`, `energy` and `pitch`.
......
(简体中文|[English](./README.md))
# Style FastSpeech2
## 简介
[FastSpeech2](https://arxiv.org/abs/2006.04558) 是用于语音合成的经典声学模型,它引入了可控语音输入,包括 `phoneme duration`、`energy` 和 `pitch`。
在预测阶段,您可以更改这些变量以获得一些有趣的结果。
例如:
1. `FastSpeech2` 中的 `duration` 可以控制音频的速度 ,并保持 `pitch` 。(在某些语音工具中,增加速度将增加音调,反之亦然。)
2. 当我们将一个句子的 `pitch` 设置为平均值并将音素的 `tones` 设置为 `1` 时,我们将获得 `robot-style` 的音色。
3. 当我们提高成年女性的 `pitch` (比例固定)时,我们会得到 `child-style` 的音色。
句子中不同音素的 `duration``pitch` 可以具有不同的比例。您可以设置不同的音阶比例来强调或削弱某些音素的发音。
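A minimal illustration of the three controls described above, written as plain array operations on FastSpeech2's predicted values. This is not the repo's `style_syn.py`; it assumes `durations` and `pitch` are per-phoneme NumPy arrays produced by the acoustic model:

```python
import numpy as np


def slow_down(durations, scale=1.3):
    # longer phoneme durations -> slower speech, pitch left untouched
    return durations * scale


def robot_style(pitch):
    # flatten pitch to its utterance-level mean -> robot-like timbre
    return np.full_like(pitch, pitch.mean())


def child_style(pitch, ratio=1.3):
    # raise pitch by a fixed ratio -> child-like timbre
    return pitch * ratio
```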
## 运行
运行以下命令行开始:
```
./run.sh
```
`run.sh`, 会首先执行 `source path.sh` 去设置好环境变量。
如果您想尝试您的句子,请替换 `sentences.txt`中的句子。
更多的细节,请查看 `style_syn.py`
语音样例可以在 [style-control-in-fastspeech2](https://paddlespeech.readthedocs.io/en/latest/tts/demo.html#style-control-in-fastspeech2) 查看。
...@@ -16,8 +16,8 @@ You can choose one way from easy, medium and hard to install paddlespeech. ...@@ -16,8 +16,8 @@ You can choose one way from easy, medium and hard to install paddlespeech.
The input of this demo should be a text of the specific language that can be passed via argument. The input of this demo should be a text of the specific language that can be passed via argument.
### 3. Usage ### 3. Usage
- Command Line (Recommended) - Command Line (Recommended)
The default acoustic model is `Fastspeech2`, the default vocoder is `HiFiGAN`, and the default inference method is dygraph inference.
- Chinese - Chinese
The default acoustic model is `Fastspeech2`, and the default vocoder is `Parallel WaveGAN`.
```bash ```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!" paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!"
``` ```
...@@ -45,7 +45,33 @@ The input of this demo should be a text of the specific language that can be pas ...@@ -45,7 +45,33 @@ The input of this demo should be a text of the specific language that can be pas
You can change `spk_id` here. You can change `spk_id` here.
```bash ```bash
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "hello, boys" --lang en --spk_id 0 paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "hello, boys" --lang en --spk_id 0
``` ```
- Chinese English Mixed, multi-speaker
You can change `spk_id` here.
```bash
# The `am` must be `fastspeech2_mix`!
# The `lang` must be `mix`!
# The voc must be a Chinese dataset's voc for now!
# spk 174 is csmsc, spk 175 is ljspeech
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_aishell3 --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174_aishell3.wav
paddlespeech tts --am fastspeech2_mix --voc pwgan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175_pwgan.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175.wav
```
- Use ONNXRuntime infer:
```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!" --output default.wav --use_onnx True
paddlespeech tts --am speedyspeech_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output ss.wav --use_onnx True
paddlespeech tts --voc mb_melgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output mb.wav --use_onnx True
paddlespeech tts --voc pwgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc pwgan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc hifigan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc pwgan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc hifigan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc hifigan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_hifigan.wav --use_onnx True
```
Usage: Usage:
```bash ```bash
...@@ -68,6 +94,8 @@ The input of this demo should be a text of the specific language that can be pas ...@@ -68,6 +94,8 @@ The input of this demo should be a text of the specific language that can be pas
- `lang`: Language of tts task. Default: `zh`. - `lang`: Language of tts task. Default: `zh`.
- `device`: Choose device to execute model inference. Default: default device of paddlepaddle in current environment. - `device`: Choose device to execute model inference. Default: default device of paddlepaddle in current environment.
- `output`: Output wave filepath. Default: `output.wav`. - `output`: Output wave filepath. Default: `output.wav`.
- `use_onnx`: whether to use ONNXRuntime inference.
- `fs`: sample rate for ONNX models when using specified model files.
Output: Output:
```bash ```bash
...@@ -75,54 +103,76 @@ The input of this demo should be a text of the specific language that can be pas ...@@ -75,54 +103,76 @@ The input of this demo should be a text of the specific language that can be pas
``` ```
- Python API - Python API
```python - Dygraph infer:
import paddle ```python
from paddlespeech.cli.tts import TTSExecutor import paddle
from paddlespeech.cli.tts import TTSExecutor
tts_executor = TTSExecutor() tts_executor = TTSExecutor()
wav_file = tts_executor( wav_file = tts_executor(
text='今天的天气不错啊', text='今天的天气不错啊',
output='output.wav', output='output.wav',
am='fastspeech2_csmsc', am='fastspeech2_csmsc',
am_config=None, am_config=None,
am_ckpt=None, am_ckpt=None,
am_stat=None, am_stat=None,
spk_id=0, spk_id=0,
phones_dict=None, phones_dict=None,
tones_dict=None, tones_dict=None,
speaker_dict=None, speaker_dict=None,
voc='pwgan_csmsc', voc='pwgan_csmsc',
voc_config=None, voc_config=None,
voc_ckpt=None, voc_ckpt=None,
voc_stat=None, voc_stat=None,
lang='zh', lang='zh',
device=paddle.get_device()) device=paddle.get_device())
print('Wave file has been generated: {}'.format(wav_file)) print('Wave file has been generated: {}'.format(wav_file))
``` ```
- ONNXRuntime infer:
```python
from paddlespeech.cli.tts import TTSExecutor
tts_executor = TTSExecutor()
wav_file = tts_executor(
text='对数据集进行预处理',
output='output.wav',
am='fastspeech2_csmsc',
voc='hifigan_csmsc',
lang='zh',
use_onnx=True,
cpu_threads=2)
```
Output: Output:
```bash ```bash
Wave file has been generated: output.wav Wave file has been generated: output.wav
``` ```
### 4. Pretrained Models ### 4. Pretrained Models
Here is a list of pretrained models released by PaddleSpeech that can be used by command and python API: Here is a list of pretrained models released by PaddleSpeech that can be used by command and python API:
- Acoustic model - Acoustic model
| Model | Language | Model | Language |
| :--- | :---: | | :--- | :---: |
| speedyspeech_csmsc| zh | speedyspeech_csmsc | zh |
| fastspeech2_csmsc| zh | fastspeech2_csmsc | zh |
| fastspeech2_aishell3| zh | fastspeech2_ljspeech | en |
| fastspeech2_ljspeech| en | fastspeech2_aishell3 | zh |
| fastspeech2_vctk| en | fastspeech2_vctk | en |
| fastspeech2_cnndecoder_csmsc | zh |
| fastspeech2_mix | mix |
| tacotron2_csmsc | zh |
| tacotron2_ljspeech | en |
- Vocoder - Vocoder
| Model | Language | Model | Language |
| :--- | :---: | | :--- | :---: |
| pwgan_csmsc| zh | pwgan_csmsc | zh |
| pwgan_aishell3| zh | pwgan_ljspeech | en |
| pwgan_ljspeech| en | pwgan_aishell3 | zh |
| pwgan_vctk| en | pwgan_vctk | en |
| mb_melgan_csmsc| zh | mb_melgan_csmsc | zh |
| style_melgan_csmsc | zh |
| hifigan_csmsc | zh |
| hifigan_ljspeech | en |
| hifigan_aishell3 | zh |
| hifigan_vctk | en |
| wavernn_csmsc | zh |
(简体中文|[English](./README.md)) (简体中文|[English](./README.md))
# 语音合成 # 语音合成
## 介绍 ## 介绍
语音合成是一种自然语言建模过程,其将文本转换为语音以进行音频演示。 语音合成是一种自然语言建模过程,其将文本转换为语音以进行音频演示。
这个 demo 是一个从给定文本生成音频的实现,它可以通过使用 `PaddleSpeech` 的单个命令或 python 中的几行代码来实现。 这个 demo 是一个从给定文本生成音频的实现,它可以通过使用 `PaddleSpeech` 的单个命令或 python 中的几行代码来实现。
## 使用方法 ## 使用方法
### 1. 安装 ### 1. 安装
请看[安装文档](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install_cn.md) 请看[安装文档](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install_cn.md)
你可以从 easy,medium,hard 三方式中选择一种方式安装。 你可以从 easy,medium,hard 三方式中选择一种方式安装。
### 2. 准备输入 ### 2. 准备输入
这个 demo 的输入是通过参数传递的特定语言的文本。 这个 demo 的输入是通过参数传递的特定语言的文本。
### 3. 使用方法 ### 3. 使用方法
- 命令行 (推荐使用) - 命令行 (推荐使用)
默认的声学模型是 `Fastspeech2`,默认的声码器是 `HiFiGAN`,默认推理方式是动态图推理。
- 中文 - 中文
默认的声学模型是 `Fastspeech2`,默认的声码器是 `Parallel WaveGAN`.
```bash ```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!" paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!"
``` ```
...@@ -34,7 +31,7 @@ ...@@ -34,7 +31,7 @@
``` ```
- 中文, 多说话人 - 中文, 多说话人
你可以改变 `spk_id` 你可以改变 `spk_id`
```bash ```bash
paddlespeech tts --am fastspeech2_aishell3 --voc pwgan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 paddlespeech tts --am fastspeech2_aishell3 --voc pwgan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0
``` ```
...@@ -45,10 +42,36 @@ ...@@ -45,10 +42,36 @@
``` ```
- 英文,多说话人 - 英文,多说话人
你可以改变 `spk_id` 你可以改变 `spk_id`
```bash ```bash
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "hello, boys" --lang en --spk_id 0 paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "hello, boys" --lang en --spk_id 0
``` ```
- 中英文混合,多说话人
你可以改变 `spk_id`
```bash
# The `am` must be `fastspeech2_mix`!
# The `lang` must be `mix`!
# The voc must be a Chinese dataset's voc for now!
# spk 174 is csmsc, spk 175 is ljspeech
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_aishell3 --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174_aishell3.wav
paddlespeech tts --am fastspeech2_mix --voc pwgan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175_pwgan.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175.wav
```
- 使用 ONNXRuntime 推理:
```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!" --output default.wav --use_onnx True
paddlespeech tts --am speedyspeech_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output ss.wav --use_onnx True
paddlespeech tts --voc mb_melgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output mb.wav --use_onnx True
paddlespeech tts --voc pwgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc pwgan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc hifigan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc pwgan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc hifigan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc hifigan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_hifigan.wav --use_onnx True
```
使用方法: 使用方法:
```bash ```bash
...@@ -71,6 +94,8 @@ ...@@ -71,6 +94,8 @@
- `lang`:TTS 任务的语言, 默认值:`zh` - `lang`:TTS 任务的语言, 默认值:`zh`
- `device`:执行预测的设备, 默认值:当前系统下 paddlepaddle 的默认 device。 - `device`:执行预测的设备, 默认值:当前系统下 paddlepaddle 的默认 device。
- `output`:输出音频的路径, 默认值:`output.wav` - `output`:输出音频的路径, 默认值:`output.wav`
- `use_onnx`: 是否使用 ONNXRuntime 进行推理。
- `fs`: 使用特定 ONNX 模型时的采样率。
输出: 输出:
```bash ```bash
...@@ -78,31 +103,44 @@ ...@@ -78,31 +103,44 @@
``` ```
- Python API - Python API
```python - 动态图推理:
import paddle ```python
from paddlespeech.cli.tts import TTSExecutor import paddle
from paddlespeech.cli.tts import TTSExecutor
tts_executor = TTSExecutor() tts_executor = TTSExecutor()
wav_file = tts_executor( wav_file = tts_executor(
text='今天的天气不错啊', text='今天的天气不错啊',
output='output.wav', output='output.wav',
am='fastspeech2_csmsc', am='fastspeech2_csmsc',
am_config=None, am_config=None,
am_ckpt=None, am_ckpt=None,
am_stat=None, am_stat=None,
spk_id=0, spk_id=0,
phones_dict=None, phones_dict=None,
tones_dict=None, tones_dict=None,
speaker_dict=None, speaker_dict=None,
voc='pwgan_csmsc', voc='pwgan_csmsc',
voc_config=None, voc_config=None,
voc_ckpt=None, voc_ckpt=None,
voc_stat=None, voc_stat=None,
lang='zh', lang='zh',
device=paddle.get_device()) device=paddle.get_device())
print('Wave file has been generated: {}'.format(wav_file)) print('Wave file has been generated: {}'.format(wav_file))
``` ```
- ONNXRuntime 推理:
```python
from paddlespeech.cli.tts import TTSExecutor
tts_executor = TTSExecutor()
wav_file = tts_executor(
text='对数据集进行预处理',
output='output.wav',
am='fastspeech2_csmsc',
voc='hifigan_csmsc',
lang='zh',
use_onnx=True,
cpu_threads=2)
```
输出: 输出:
```bash ```bash
Wave file has been generated: output.wav Wave file has been generated: output.wav
...@@ -112,19 +150,29 @@ ...@@ -112,19 +150,29 @@
以下是 PaddleSpeech 提供的可以被命令行和 python API 使用的预训练模型列表: 以下是 PaddleSpeech 提供的可以被命令行和 python API 使用的预训练模型列表:
- 声学模型 - 声学模型
| 模型 | 语言 | 模型 | 语言 |
| :--- | :---: | | :--- | :---: |
| speedyspeech_csmsc| zh | speedyspeech_csmsc | zh |
| fastspeech2_csmsc| zh | fastspeech2_csmsc | zh |
| fastspeech2_aishell3| zh | fastspeech2_ljspeech | en |
| fastspeech2_ljspeech| en | fastspeech2_aishell3 | zh |
| fastspeech2_vctk| en | fastspeech2_vctk | en |
| fastspeech2_cnndecoder_csmsc | zh |
| fastspeech2_mix | mix |
| tacotron2_csmsc | zh |
| tacotron2_ljspeech | en |
- 声码器 - 声码器
| 模型 | 语言 | 模型 | 语言 |
| :--- | :---: | | :--- | :---: |
| pwgan_csmsc| zh | pwgan_csmsc | zh |
| pwgan_aishell3| zh | pwgan_ljspeech | en |
| pwgan_ljspeech| en | pwgan_aishell3 | zh |
| pwgan_vctk| en | pwgan_vctk | en |
| mb_melgan_csmsc| zh | mb_melgan_csmsc | zh |
| style_melgan_csmsc | zh |
| hifigan_csmsc | zh |
| hifigan_ljspeech | en |
| hifigan_aishell3 | zh |
| hifigan_vctk | en |
| wavernn_csmsc | zh |
myst-parser braceexpand
numpydoc colorlog
recommonmark>=0.5.0
sphinx
sphinx-autobuild
sphinx-markdown-tables
sphinx_rtd_theme
paddlepaddle>=2.2.2
editdistance editdistance
fastapi
g2p_en g2p_en
g2pM g2pM
h5py h5py
...@@ -14,40 +9,45 @@ inflect ...@@ -14,40 +9,45 @@ inflect
jieba jieba
jsonlines jsonlines
kaldiio kaldiio
keyboard
librosa==0.8.1 librosa==0.8.1
loguru loguru
matplotlib matplotlib
myst-parser
nara_wpe nara_wpe
numpydoc
onnxruntime==1.10.0 onnxruntime==1.10.0
opencc opencc
pandas
paddlenlp paddlenlp
paddlepaddle>=2.2.2
paddlespeech_feat paddlespeech_feat
pandas
pathos == 0.2.8
pattern_singleton
Pillow>=9.0.0 Pillow>=9.0.0
praatio==5.0.0 praatio==5.0.0
pypinyin prettytable
pypinyin<=0.44.0
pypinyin-dict pypinyin-dict
python-dateutil python-dateutil
pyworld==0.2.12 pyworld==0.2.12
recommonmark>=0.5.0
resampy==0.2.2 resampy==0.2.2
sacrebleu sacrebleu
scipy scipy
sentencepiece~=0.1.96 sentencepiece~=0.1.96
soundfile~=0.10 soundfile~=0.10
sphinx
sphinx-autobuild
sphinx-markdown-tables
sphinx_rtd_theme
textgrid textgrid
timer timer
tqdm tqdm
typeguard typeguard
uvicorn
visualdl visualdl
webrtcvad webrtcvad
websockets
yacs~=0.1.8 yacs~=0.1.8
prettytable
zhon zhon
colorlog
pathos == 0.2.8
fastapi
websockets
keyboard
uvicorn
pattern_singleton
braceexpand
\ No newline at end of file
...@@ -20,10 +20,11 @@ ...@@ -20,10 +20,11 @@
# If extensions (or modules to document with autodoc) are in another directory, # If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the # add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here. # documentation root, use os.path.abspath to make it absolute, like shown here.
import os
import sys
import recommonmark.parser import recommonmark.parser
import sphinx_rtd_theme import sphinx_rtd_theme
import sys
import os
sys.path.insert(0, os.path.abspath('../..')) sys.path.insert(0, os.path.abspath('../..'))
autodoc_mock_imports = ["soundfile", "librosa"] autodoc_mock_imports = ["soundfile", "librosa"]
......
...@@ -42,9 +42,11 @@ SpeedySpeech| CSMSC | [speedyspeech-csmsc](https://github.com/PaddlePaddle/Paddl ...@@ -42,9 +42,11 @@ SpeedySpeech| CSMSC | [speedyspeech-csmsc](https://github.com/PaddlePaddle/Paddl
FastSpeech2| CSMSC |[fastspeech2-csmsc](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/tts3)|[fastspeech2_nosil_baker_ckpt_0.4.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_baker_ckpt_0.4.zip)|[fastspeech2_csmsc_static_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_csmsc_static_0.2.0.zip) </br> [fastspeech2_csmsc_onnx_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_csmsc_onnx_0.2.0.zip)|157MB| FastSpeech2| CSMSC |[fastspeech2-csmsc](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/tts3)|[fastspeech2_nosil_baker_ckpt_0.4.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_baker_ckpt_0.4.zip)|[fastspeech2_csmsc_static_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_csmsc_static_0.2.0.zip) </br> [fastspeech2_csmsc_onnx_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_csmsc_onnx_0.2.0.zip)|157MB|
FastSpeech2-Conformer| CSMSC |[fastspeech2-csmsc](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/tts3)|[fastspeech2_conformer_baker_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_conformer_baker_ckpt_0.5.zip)||| FastSpeech2-Conformer| CSMSC |[fastspeech2-csmsc](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/tts3)|[fastspeech2_conformer_baker_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_conformer_baker_ckpt_0.5.zip)|||
FastSpeech2-CNNDecoder| CSMSC| [fastspeech2-csmsc](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/tts3)| [fastspeech2_cnndecoder_csmsc_ckpt_1.0.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_ckpt_1.0.0.zip) | [fastspeech2_cnndecoder_csmsc_static_1.0.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_static_1.0.0.zip) </br>[fastspeech2_cnndecoder_csmsc_streaming_static_1.0.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_static_1.0.0.zip) </br>[fastspeech2_cnndecoder_csmsc_onnx_1.0.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_onnx_1.0.0.zip) </br>[fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip) | 84MB| FastSpeech2-CNNDecoder| CSMSC| [fastspeech2-csmsc](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/tts3)| [fastspeech2_cnndecoder_csmsc_ckpt_1.0.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_ckpt_1.0.0.zip) | [fastspeech2_cnndecoder_csmsc_static_1.0.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_static_1.0.0.zip) </br>[fastspeech2_cnndecoder_csmsc_streaming_static_1.0.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_static_1.0.0.zip) </br>[fastspeech2_cnndecoder_csmsc_onnx_1.0.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_onnx_1.0.0.zip) </br>[fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip) | 84MB|
FastSpeech2| AISHELL-3 |[fastspeech2-aishell3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/tts3)|[fastspeech2_nosil_aishell3_ckpt_0.4.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_aishell3_ckpt_0.4.zip)|[fastspeech2_aishell3_static_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_aishell3_static_1.1.0.zip) </br> [fastspeech2_aishell3_onnx_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_aishell3_onnx_1.1.0.zip)|147MB| FastSpeech2| AISHELL-3 |[fastspeech2-aishell3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/tts3)|[fastspeech2_aishell3_ckpt_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_aishell3_ckpt_1.1.0.zip)|[fastspeech2_aishell3_static_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_aishell3_static_1.1.0.zip) </br> [fastspeech2_aishell3_onnx_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_aishell3_onnx_1.1.0.zip)|147MB|
FastSpeech2| LJSpeech |[fastspeech2-ljspeech](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/ljspeech/tts3)|[fastspeech2_nosil_ljspeech_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_ljspeech_ckpt_0.5.zip)|[fastspeech2_ljspeech_static_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_ljspeech_static_1.1.0.zip) </br> [fastspeech2_ljspeech_onnx_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_ljspeech_onnx_1.1.0.zip)|145MB| FastSpeech2| LJSpeech |[fastspeech2-ljspeech](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/ljspeech/tts3)|[fastspeech2_nosil_ljspeech_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_ljspeech_ckpt_0.5.zip)|[fastspeech2_ljspeech_static_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_ljspeech_static_1.1.0.zip) </br> [fastspeech2_ljspeech_onnx_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_ljspeech_onnx_1.1.0.zip)|145MB|
FastSpeech2| VCTK |[fastspeech2-vctk](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/vctk/tts3)|[fastspeech2_nosil_vctk_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_vctk_ckpt_0.5.zip)|[fastspeech2_vctk_static_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_vctk_static_1.1.0.zip) </br> [fastspeech2_vctk_onnx_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_vctk_onnx_1.1.0.zip) | 145MB| FastSpeech2| VCTK |[fastspeech2-vctk](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/vctk/tts3)|[fastspeech2_vctk_ckpt_1.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_vctk_ckpt_1.2.0.zip)|[fastspeech2_vctk_static_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_vctk_static_1.1.0.zip) </br> [fastspeech2_vctk_onnx_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_vctk_onnx_1.1.0.zip) | 145MB|
FastSpeech2| ZH_EN |[fastspeech2-zh_en](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/zh_en_tts/tts3)|[fastspeech2_mix_ckpt_1.2.0.zip](https://paddlespeech.bj.bcebos.com/t2s/chinse_english_mixed/models/fastspeech2_mix_ckpt_1.2.0.zip)|[fastspeech2_mix_static_0.2.0.zip](https://paddlespeech.bj.bcebos.com/t2s/chinse_english_mixed/models/fastspeech2_mix_static_0.2.0.zip) </br> [fastspeech2_mix_onnx_0.2.0.zip](https://paddlespeech.bj.bcebos.com/t2s/chinse_english_mixed/models/fastspeech2_mix_onnx_0.2.0.zip) | 145MB|
### Vocoders ### Vocoders
Model Type | Dataset| Example Link | Pretrained Models| Static/ONNX Models|Size (static) Model Type | Dataset| Example Link | Pretrained Models| Static/ONNX Models|Size (static)
...@@ -67,7 +69,7 @@ WaveRNN | CSMSC |[WaveRNN-csmsc](https://github.com/PaddlePaddle/PaddleSpeech/tr ...@@ -67,7 +69,7 @@ WaveRNN | CSMSC |[WaveRNN-csmsc](https://github.com/PaddlePaddle/PaddleSpeech/tr
Model Type | Dataset| Example Link | Pretrained Models Model Type | Dataset| Example Link | Pretrained Models
:-------------:| :------------:| :-----: | :-----: | :-------------:| :------------:| :-----: | :-----: |
GE2E| AISHELL-3, etc. |[ge2e](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/ge2e)|[ge2e_ckpt_0.3.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ge2e/ge2e_ckpt_0.3.zip) GE2E| AISHELL-3, etc. |[ge2e](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/ge2e)|[ge2e_ckpt_0.3.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ge2e/ge2e_ckpt_0.3.zip)
GE2E + Tactron2| AISHELL-3 |[ge2e-tactron2-aishell3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vc0)|[tacotron2_aishell3_ckpt_vc0_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/tacotron2/tacotron2_aishell3_ckpt_vc0_0.2.0.zip) GE2E + Tacotron2| AISHELL-3 |[ge2e-Tacotron2-aishell3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vc0)|[tacotron2_aishell3_ckpt_vc0_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/tacotron2/tacotron2_aishell3_ckpt_vc0_0.2.0.zip)
GE2E + FastSpeech2 | AISHELL-3 |[ge2e-fastspeech2-aishell3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vc1)|[fastspeech2_nosil_aishell3_vc1_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_aishell3_vc1_ckpt_0.5.zip) GE2E + FastSpeech2 | AISHELL-3 |[ge2e-fastspeech2-aishell3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vc1)|[fastspeech2_nosil_aishell3_vc1_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_aishell3_vc1_ckpt_0.5.zip)
......
...@@ -7,7 +7,7 @@ The examples in PaddleSpeech are mainly classified by datasets, the TTS datasets ...@@ -7,7 +7,7 @@ The examples in PaddleSpeech are mainly classified by datasets, the TTS datasets
* VCTK (English multiple speakers) * VCTK (English multiple speakers)
The models in PaddleSpeech TTS have the following mapping relationship: The models in PaddleSpeech TTS have the following mapping relationship:
* tts0 - Tactron2 * tts0 - Tacotron2
* tts1 - TransformerTTS * tts1 - TransformerTTS
* tts2 - SpeedySpeech * tts2 - SpeedySpeech
* tts3 - FastSpeech2 * tts3 - FastSpeech2
...@@ -17,7 +17,7 @@ The models in PaddleSpeech TTS have the following mapping relationship: ...@@ -17,7 +17,7 @@ The models in PaddleSpeech TTS have the following mapping relationship:
* voc3 - MultiBand MelGAN * voc3 - MultiBand MelGAN
* voc4 - Style MelGAN * voc4 - Style MelGAN
* voc5 - HiFiGAN * voc5 - HiFiGAN
* vc0 - Tactron2 Voice Clone with GE2E * vc0 - Tacotron2 Voice Clone with GE2E
* vc1 - FastSpeech2 Voice Clone with GE2E * vc1 - FastSpeech2 Voice Clone with GE2E
## Quick Start ## Quick Start
......
...@@ -9,7 +9,7 @@ ...@@ -9,7 +9,7 @@
PaddleSpeech 的 TTS 模型具有以下映射关系: PaddleSpeech 的 TTS 模型具有以下映射关系:
* tts0 - Tactron2 * tts0 - Tacotron2
* tts1 - TransformerTTS * tts1 - TransformerTTS
* tts2 - SpeedySpeech * tts2 - SpeedySpeech
* tts3 - FastSpeech2 * tts3 - FastSpeech2
...@@ -19,7 +19,7 @@ PaddleSpeech 的 TTS 模型具有以下映射关系: ...@@ -19,7 +19,7 @@ PaddleSpeech 的 TTS 模型具有以下映射关系:
* voc3 - MultiBand MelGAN * voc3 - MultiBand MelGAN
* voc4 - Style MelGAN * voc4 - Style MelGAN
* voc5 - HiFiGAN * voc5 - HiFiGAN
* vc0 - Tactron2 Voice Clone with GE2E * vc0 - Tacotron2 Voice Clone with GE2E
* vc1 - FastSpeech2 Voice Clone with GE2E * vc1 - FastSpeech2 Voice Clone with GE2E
## 快速开始 ## 快速开始
......
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
- [Disambiguation of Chinese Polyphones in an End-to-End Framework with Semantic Features Extracted by Pre-trained BERT](https://www1.se.cuhk.edu.hk/~hccl/publications/pub/201909_INTERSPEECH_DongyangDAI.pdf) - [Disambiguation of Chinese Polyphones in an End-to-End Framework with Semantic Features Extracted by Pre-trained BERT](https://www1.se.cuhk.edu.hk/~hccl/publications/pub/201909_INTERSPEECH_DongyangDAI.pdf)
- [Polyphone Disambiguation in Mandarin Chinese with Semi-Supervised Learning](https://www.isca-speech.org/archive/pdfs/interspeech_2021/shi21d_interspeech.pdf) - [Polyphone Disambiguation in Mandarin Chinese with Semi-Supervised Learning](https://www.isca-speech.org/archive/pdfs/interspeech_2021/shi21d_interspeech.pdf)
* github: https://github.com/PaperMechanica/SemiPPL * github: https://github.com/PaperMechanica/SemiPPL
- [WikipediaHomographData](https://github.com/google-research-datasets/WikipediaHomographData)
### Text Normalization ### Text Normalization
#### English #### English
- [applenob/text_normalization](https://github.com/applenob/text_normalization) - [applenob/text_normalization](https://github.com/applenob/text_normalization)
......
...@@ -769,7 +769,7 @@ ...@@ -769,7 +769,7 @@
"```\n", "```\n",
"我们在每个数据集的 README.md 介绍了子目录和模型的对应关系, 在 TTS 中有如下对应关系:\n", "我们在每个数据集的 README.md 介绍了子目录和模型的对应关系, 在 TTS 中有如下对应关系:\n",
"```text\n", "```text\n",
"tts0 - Tactron2\n", "tts0 - Tacotron2\n",
"tts1 - TransformerTTS\n", "tts1 - TransformerTTS\n",
"tts2 - SpeedySpeech\n", "tts2 - SpeedySpeech\n",
"tts3 - FastSpeech2\n", "tts3 - FastSpeech2\n",
......
...@@ -197,7 +197,7 @@ In some situations, you want to use the trained model to do the inference for th ...@@ -197,7 +197,7 @@ In some situations, you want to use the trained model to do the inference for th
```bash ```bash
if [ ${stage} -le 6 ] && [ ${stop_stage} -ge 6 ]; then if [ ${stage} -le 6 ] && [ ${stop_stage} -ge 6 ]; then
# test a single .wav file # test a single .wav file
CUDA_VISIBLE_DEVICES=0 ./local/test_wav.sh ${conf_path} exp/${ckpt}/checkpoints/${avg_ckpt} ${model_type} ${audio_file} CUDA_VISIBLE_DEVICES=0 ./local/test_wav.sh ${conf_path} ${decode_conf_path} exp/${ckpt}/checkpoints/${avg_ckpt} ${model_type} ${audio_file}
fi fi
``` ```
you can train the model by yourself, or you can download the pretrained model by the script below: you can train the model by yourself, or you can download the pretrained model by the script below:
...@@ -211,5 +211,5 @@ wget -nc https://paddlespeech.bj.bcebos.com/datasets/single_wav/zh/demo_01_03.wa ...@@ -211,5 +211,5 @@ wget -nc https://paddlespeech.bj.bcebos.com/datasets/single_wav/zh/demo_01_03.wa
``` ```
You need to prepare an audio file or use the audio demo above, please confirm the sample rate of the audio is 16K. You can get the result of the audio demo by running the script below. You need to prepare an audio file or use the audio demo above, please confirm the sample rate of the audio is 16K. You can get the result of the audio demo by running the script below.
```bash ```bash
CUDA_VISIBLE_DEVICES= ./local/test_wav.sh conf/deepspeech2.yaml exp/deepspeech2/checkpoints/avg_1 data/demo_01_03.wav CUDA_VISIBLE_DEVICES= ./local/test_wav.sh conf/deepspeech2.yaml conf/tuning/decode.yaml exp/deepspeech2/checkpoints/avg_1 data/demo_01_03.wav
``` ```
# Aishell3 # Aishell3
* tts0 - Tactron2 * tts0 - Tacotron2
* tts1 - TransformerTTS * tts1 - TransformerTTS
* tts2 - SpeedySpeech * tts2 - SpeedySpeech
* tts3 - FastSpeech2 * tts3 - FastSpeech2
...@@ -8,5 +8,7 @@ ...@@ -8,5 +8,7 @@
* voc1 - Parallel WaveGAN * voc1 - Parallel WaveGAN
* voc2 - MelGAN * voc2 - MelGAN
* voc3 - MultiBand MelGAN * voc3 - MultiBand MelGAN
* vc0 - Tactron2 Voice Cloning with GE2E * vc0 - Tacotron2 Voice Cloning with GE2E
* vc1 - FastSpeech2 Voice Cloning with GE2E * vc1 - FastSpeech2 Voice Cloning with GE2E
* vc2 - FastSpeech2 Voice Cloning with ECAPA-TDNN
* ernie_sat - ERNIE-SAT
# ERNIE SAT with AISHELL3 dataset # ERNIE-SAT with AISHELL-3 dataset
ERNIE-SAT is a speech-text joint pretraining framework that achieves SOTA results in cross-lingual multi-speaker speech synthesis and cross-lingual speech editing tasks. It can be applied to a series of scenarios such as speech editing, personalized speech synthesis, and voice cloning.
## Model Framework
In ERNIE-SAT, we propose two innovations:
- In the pretraining stage, the phonemes corresponding to Chinese and English are used as input to achieve cross-lingual and personalized soft phoneme mapping.
- Joint mask learning over speech and text is used to align speech and text.
<p align="center">
<img src="https://user-images.githubusercontent.com/24568452/186110814-1b9c6618-a0ab-4c0c-bb3d-3d860b0e8cc2.png" />
</p>
## Dataset
### Download and Extract
Download AISHELL-3 from its [Official Website](http://www.aishelltech.com/aishell_3) and extract it to `~/datasets`. Then the dataset is in the directory `~/datasets/data_aishell3`.
### Get MFA Result and Extract
We use [MFA2.x](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for aishell3_fastspeech2.
You can download it from [aishell3_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/aishell3_alignment_tone.tar.gz), or train your own MFA model by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) (which currently uses MFA1.x) in our repo.
## Get Started
Assume the path to the dataset is `~/datasets/data_aishell3`.
Assume the path to the MFA result of AISHELL-3 is `./aishell3_alignment_tone`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
- synthesize waveform from `metadata.jsonl`.
- synthesize waveform from text file.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage. For example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done. A `dump` folder is created in the current directory. The structure of the dump folder is listed below.
```text
dump
├── dev
│ ├── norm
│ └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│ ├── norm
│ └── raw
└── train
├── norm
├── raw
└── speech_stats.npy
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The `raw` folder contains the speech features of each utterance, while the `norm` folder contains the normalized ones. The statistics used to normalize features are computed from the training set, which is located in `dump/train/*_stats.npy`.
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, speech_lengths, durations, the path of speech features, speaker, and id of each utterance.
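A quick way to inspect those fields (a sketch; the exact key names can differ slightly between examples):

```python
# Peek at the first record of the normalised training metadata.
import jsonlines

with jsonlines.open("dump/train/norm/metadata.jsonl") as reader:
    first = reader.read()

# expect keys along the lines of: utt_id, phones, text_lengths,
# speech_lengths, durations, speech (feature path), spk_id/speaker
print(sorted(first.keys()))
```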
### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
### Synthesizing
We use [HiFiGAN](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/voc5) as the neural vocoder.
Download pretrained HiFiGAN model from [hifigan_aishell3_ckpt_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/hifigan/hifigan_aishell3_ckpt_0.2.0.zip) and unzip it.
```bash
unzip hifigan_aishell3_ckpt_0.2.0.zip
```
HiFiGAN checkpoint contains files listed below.
```text
hifigan_aishell3_ckpt_0.2.0
├── default.yaml # default config used to train HiFiGAN
├── feats_stats.npy # statistics used to normalize spectrogram when training HiFiGAN
└── snapshot_iter_2500000.pdz # generator parameters of HiFiGAN
```
`./local/synthesize.sh` calls `${BIN_DIR}/../synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
## Speech Synthesis and Speech Editing
### Prepare
**prepare aligner**
```bash
mkdir -p tools/aligner
cd tools
# download MFA
wget https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner/releases/download/v1.0.1/montreal-forced-aligner_linux.tar.gz
# extract MFA
tar xvf montreal-forced-aligner_linux.tar.gz
# fix .so of MFA
cd montreal-forced-aligner/lib
ln -snf libpython3.6m.so.1.0 libpython3.6m.so
cd -
# download align models and dicts
cd aligner
wget https://paddlespeech.bj.bcebos.com/MFA/ernie_sat/aishell3_model.zip
wget https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/simple.lexicon
wget https://paddlespeech.bj.bcebos.com/MFA/ernie_sat/vctk_model.zip
wget https://paddlespeech.bj.bcebos.com/MFA/LJSpeech-1.1/cmudict-0.7b
cd ../../
```
**prepare pretrained FastSpeech2 models**
ERNIE-SAT uses FastSpeech2 as the phoneme duration predictor:
```bash
mkdir download
cd download
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_conformer_baker_ckpt_0.5.zip
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_ljspeech_ckpt_0.5.zip
unzip fastspeech2_conformer_baker_ckpt_0.5.zip
unzip fastspeech2_nosil_ljspeech_ckpt_0.5.zip
cd ../
```
**prepare source data**
```bash
mkdir source
cd source
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/SSB03540307.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/SSB03540428.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/LJ050-0278.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/p243_313.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/p299_096.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/this_was_not_the_show_for_me.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/README.md
cd ../
```
You can check the text of the downloaded wavs in `source/README.md`.
### Speech Synthesis and Speech Editing
```bash
./run.sh --stage 3 --stop-stage 3 --gpus 0
```
`stage 3` of `run.sh` calls `local/synthesize_e2e.sh`; its `stage 0` performs **Speech Synthesis** and its `stage 1` performs **Speech Editing**.
You can modify `--wav_path`, `--old_str`, and `--new_str` yourself. `--old_str` should be the text corresponding to the audio of `--wav_path`, and `--new_str` should be designed according to `--task_name`; both `--source_lang` and `--target_lang` should be `zh` for a model trained with the AISHELL-3 dataset.
## Pretrained Model
Pretrained ERNIE-SAT model:
- [erniesat_aishell3_ckpt_1.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/erniesat_aishell3_ckpt_1.2.0.zip)
Model | Step | eval/mlm_loss | eval/loss
:-------------:| :------------:| :-----: | :-----:
default| 8(gpu) x 289500|51.723782|51.723782
# This configuration was tested on 8 GPUs (A100) with 80 GB of GPU memory.
# It takes around 3 days to finish the training. You can adjust
# batch_size and num_workers here, and ngpu in local/train.sh, for your machine.
########################################################### ###########################################################
# FEATURE EXTRACTION SETTING # # FEATURE EXTRACTION SETTING #
########################################################### ###########################################################
...@@ -21,8 +24,8 @@ mlm_prob: 0.8 ...@@ -21,8 +24,8 @@ mlm_prob: 0.8
########################################################### ###########################################################
# DATA SETTING # # DATA SETTING #
########################################################### ###########################################################
batch_size: 20 batch_size: 40
num_workers: 2 num_workers: 8
########################################################### ###########################################################
# MODEL SETTING # # MODEL SETTING #
...@@ -280,4 +283,4 @@ token_list: ...@@ -280,4 +283,4 @@ token_list:
- o3 - o3
- iang5 - iang5
- ei5 - ei5
- <sos/eos> - <sos/eos>
\ No newline at end of file
...@@ -4,28 +4,11 @@ config_path=$1 ...@@ -4,28 +4,11 @@ config_path=$1
train_output_path=$2 train_output_path=$2
ckpt_name=$3 ckpt_name=$3
stage=1 stage=0
stop_stage=1 stop_stage=0
# pwgan
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize.py \
--erniesat_config=${config_path} \
--erniesat_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--erniesat_stat=dump/train/speech_stats.npy \
--voc=pwgan_aishell3 \
--voc_config=pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
--voc_stat=pwg_aishell3_ckpt_0.5/feats_stats.npy \
--test_metadata=dump/test/norm/metadata.jsonl \
--output_dir=${train_output_path}/test \
--phones_dict=dump/phone_id_map.txt
fi
# hifigan # hifigan
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \ FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \ FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize.py \ python3 ${BIN_DIR}/synthesize.py \
......
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
stage=0
stop_stage=1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
echo 'speech synthesize !'
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--task_name=synthesize \
--wav_path=source/SSB03540307.wav \
--old_str='请播放歌曲小苹果' \
--new_str='歌曲真好听' \
--source_lang=zh \
--target_lang=zh \
--erniesat_config=${config_path} \
--phones_dict=dump/phone_id_map.txt \
--erniesat_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--erniesat_stat=dump/train/speech_stats.npy \
--voc=hifigan_aishell3 \
--voc_config=hifigan_aishell3_ckpt_0.2.0/default.yaml \
--voc_ckpt=hifigan_aishell3_ckpt_0.2.0/snapshot_iter_2500000.pdz \
--voc_stat=hifigan_aishell3_ckpt_0.2.0/feats_stats.npy \
--output_name=exp/pred_gen.wav
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
echo 'speech edit !'
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--task_name=edit \
--wav_path=source/SSB03540428.wav \
--old_str='今天天气很好' \
--new_str='今天心情很好' \
--source_lang=zh \
--target_lang=zh \
--erniesat_config=${config_path} \
--phones_dict=dump/phone_id_map.txt \
--erniesat_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--erniesat_stat=dump/train/speech_stats.npy \
--voc=hifigan_aishell3 \
--voc_config=hifigan_aishell3_ckpt_0.2.0/default.yaml \
--voc_ckpt=hifigan_aishell3_ckpt_0.2.0/snapshot_iter_2500000.pdz \
--voc_stat=hifigan_aishell3_ckpt_0.2.0/feats_stats.npy \
--output_name=exp/pred_edit.wav
fi
...@@ -8,5 +8,5 @@ python3 ${BIN_DIR}/train.py \ ...@@ -8,5 +8,5 @@ python3 ${BIN_DIR}/train.py \
--dev-metadata=dump/dev/norm/metadata.jsonl \ --dev-metadata=dump/dev/norm/metadata.jsonl \
--config=${config_path} \ --config=${config_path} \
--output-dir=${train_output_path} \ --output-dir=${train_output_path} \
--ngpu=2 \ --ngpu=8 \
--phones-dict=dump/phone_id_map.txt --phones-dict=dump/phone_id_map.txt
\ No newline at end of file
...@@ -3,13 +3,13 @@ ...@@ -3,13 +3,13 @@
set -e set -e
source path.sh source path.sh
gpus=0,1 gpus=0,1,2,3,4,5,6,7
stage=0 stage=0
stop_stage=100 stop_stage=100
conf_path=conf/default.yaml conf_path=conf/default.yaml
train_output_path=exp/default train_output_path=exp/default
ckpt_name=snapshot_iter_153.pdz ckpt_name=snapshot_iter_289500.pdz
# with the following command, you can choose the stage range you want to run # with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0` # such as `./run.sh --stage 0 --stop-stage 0`
...@@ -30,3 +30,7 @@ if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then ...@@ -30,3 +30,7 @@ if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
# synthesize, vocoder is pwgan # synthesize, vocoder is pwgan
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1 CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi
...@@ -217,7 +217,7 @@ optional arguments: ...@@ -217,7 +217,7 @@ optional arguments:
## Pretrained Model ## Pretrained Model
Pretrained FastSpeech2 model with no silence in the edge of audios: Pretrained FastSpeech2 model with no silence in the edge of audios:
- [fastspeech2_nosil_aishell3_ckpt_0.4.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_aishell3_ckpt_0.4.zip) - [fastspeech2_aishell3_ckpt_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_aishell3_ckpt_1.1.0.zip)
- [fastspeech2_conformer_aishell3_ckpt_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_conformer_aishell3_ckpt_0.2.0.zip) (Thanks for [@awmmmm](https://github.com/awmmmm)'s contribution) - [fastspeech2_conformer_aishell3_ckpt_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_conformer_aishell3_ckpt_0.2.0.zip) (Thanks for [@awmmmm](https://github.com/awmmmm)'s contribution)
The static model can be downloaded here: The static model can be downloaded here:
...@@ -229,9 +229,11 @@ The ONNX model can be downloaded here: ...@@ -229,9 +229,11 @@ The ONNX model can be downloaded here:
FastSpeech2 checkpoint contains files listed below. FastSpeech2 checkpoint contains files listed below.
```text ```text
fastspeech2_nosil_aishell3_ckpt_0.4 fastspeech2_aishell3_ckpt_1.1.0
├── default.yaml # default config used to train fastspeech2 ├── default.yaml # default config used to train fastspeech2
├── energy_stats.npy # statistics used to normalize energy when training fastspeech2
├── phone_id_map.txt # phone vocabulary file when training fastspeech2 ├── phone_id_map.txt # phone vocabulary file when training fastspeech2
├── pitch_stats.npy # statistics used to normalize pitch when training fastspeech2
├── snapshot_iter_96400.pdz # model parameters and optimizer states ├── snapshot_iter_96400.pdz # model parameters and optimizer states
├── speaker_id_map.txt # speaker id map file when training a multi-speaker fastspeech2 ├── speaker_id_map.txt # speaker id map file when training a multi-speaker fastspeech2
└── speech_stats.npy # statistics used to normalize spectrogram when training fastspeech2 └── speech_stats.npy # statistics used to normalize spectrogram when training fastspeech2
...@@ -244,9 +246,9 @@ FLAGS_allocator_strategy=naive_best_fit \ ...@@ -244,9 +246,9 @@ FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \ FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \ python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=fastspeech2_aishell3 \ --am=fastspeech2_aishell3 \
--am_config=fastspeech2_nosil_aishell3_ckpt_0.4/default.yaml \ --am_config=fastspeech2_aishell3_ckpt_1.1.0/default.yaml \
--am_ckpt=fastspeech2_nosil_aishell3_ckpt_0.4/snapshot_iter_96400.pdz \ --am_ckpt=fastspeech2_aishell3_ckpt_1.1.0/snapshot_iter_96400.pdz \
--am_stat=fastspeech2_nosil_aishell3_ckpt_0.4/speech_stats.npy \ --am_stat=fastspeech2_aishell3_ckpt_1.1.0/speech_stats.npy \
--voc=pwgan_aishell3 \ --voc=pwgan_aishell3 \
--voc_config=pwg_aishell3_ckpt_0.5/default.yaml \ --voc_config=pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \ --voc_ckpt=pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
...@@ -254,9 +256,8 @@ python3 ${BIN_DIR}/../synthesize_e2e.py \ ...@@ -254,9 +256,8 @@ python3 ${BIN_DIR}/../synthesize_e2e.py \
--lang=zh \ --lang=zh \
--text=${BIN_DIR}/../sentences.txt \ --text=${BIN_DIR}/../sentences.txt \
--output_dir=exp/default/test_e2e \ --output_dir=exp/default/test_e2e \
--phones_dict=fastspeech2_nosil_aishell3_ckpt_0.4/phone_id_map.txt \ --phones_dict=fastspeech2_aishell3_ckpt_1.1.0/phone_id_map.txt \
--speaker_dict=fastspeech2_nosil_aishell3_ckpt_0.4/speaker_id_map.txt \ --speaker_dict=fastspeech2_aishell3_ckpt_1.1.0/speaker_id_map.txt \
--spk_id=0 \ --spk_id=0 \
--inference_dir=exp/default/inference --inference_dir=exp/default/inference
``` ```
...@@ -38,7 +38,7 @@ if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then ...@@ -38,7 +38,7 @@ if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
--am=fastspeech2_aishell3 \ --am=fastspeech2_aishell3 \
--am_config=${config_path} \ --am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \ --am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=fastspeech2_nosil_aishell3_ckpt_0.4/speech_stats.npy \ --am_stat=dump/train/speech_stats.npy \
--voc=hifigan_aishell3 \ --voc=hifigan_aishell3 \
--voc_config=hifigan_aishell3_ckpt_0.2.0/default.yaml \ --voc_config=hifigan_aishell3_ckpt_0.2.0/default.yaml \
--voc_ckpt=hifigan_aishell3_ckpt_0.2.0/snapshot_iter_2500000.pdz \ --voc_ckpt=hifigan_aishell3_ckpt_0.2.0/snapshot_iter_2500000.pdz \
...@@ -46,8 +46,8 @@ if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then ...@@ -46,8 +46,8 @@ if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
--lang=zh \ --lang=zh \
--text=${BIN_DIR}/../sentences.txt \ --text=${BIN_DIR}/../sentences.txt \
--output_dir=${train_output_path}/test_e2e \ --output_dir=${train_output_path}/test_e2e \
--phones_dict=fastspeech2_nosil_aishell3_ckpt_0.4/phone_id_map.txt \ --phones_dict=dump/phone_id_map.txt \
--speaker_dict=fastspeech2_nosil_aishell3_ckpt_0.4/speaker_id_map.txt \ --speaker_dict=dump/speaker_id_map.txt \
--spk_id=0 \ --spk_id=0 \
--inference_dir=${train_output_path}/inference --inference_dir=${train_output_path}/inference
fi fi
...@@ -44,8 +44,8 @@ fi ...@@ -44,8 +44,8 @@ fi
if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then
# install paddle2onnx # install paddle2onnx
version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}') version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}')
if [[ -z "$version" || ${version} != '0.9.8' ]]; then if [[ -z "$version" || ${version} != '1.0.0' ]]; then
pip install paddle2onnx==0.9.8 pip install paddle2onnx==1.0.0
fi fi
./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_aishell3 ./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_aishell3
# considering the balance between speed and quality, we recommend that you use hifigan as vocoder # considering the balance between speed and quality, we recommend that you use hifigan as vocoder
......
...@@ -99,7 +99,7 @@ CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_p ...@@ -99,7 +99,7 @@ CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_p
The synthesizing step is very similar to that one of [tts3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/tts3), but we should set `--voice-cloning=True` when calling `${BIN_DIR}/../synthesize.py`. The synthesizing step is very similar to that one of [tts3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/tts3), but we should set `--voice-cloning=True` when calling `${BIN_DIR}/../synthesize.py`.
### Voice Cloning ### Voice Cloning
Assume there are some reference audios in `./ref_audio` Assume there are some reference audios in `./ref_audio`
```text ```text
ref_audio ref_audio
├── 001238.wav ├── 001238.wav
...@@ -116,7 +116,7 @@ CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_outpu ...@@ -116,7 +116,7 @@ CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_outpu
Model | Step | eval/loss | eval/l1_loss | eval/duration_loss | eval/pitch_loss| eval/energy_loss Model | Step | eval/loss | eval/l1_loss | eval/duration_loss | eval/pitch_loss| eval/energy_loss
:-------------:| :------------:| :-----: | :-----: | :--------: |:--------:|:---------: :-------------:| :------------:| :-----: | :-----: | :--------: |:--------:|:---------:
default|2(gpu) x 96400|0.99699|0.62013|0.53057|0.11954| 0.20426| default|2(gpu) x 96400|0.99699|0.62013|0.053057|0.11954| 0.20426|
FastSpeech2 checkpoint contains files listed below. FastSpeech2 checkpoint contains files listed below.
(There is no need for `speaker_id_map.txt` here ) (There is no need for `speaker_id_map.txt` here )
......
# FastSpeech2 + AISHELL-3 Voice Cloning (ECAPA-TDNN)
This example contains code used to train a [FastSpeech2](https://arxiv.org/abs/2006.04558) model with [AISHELL-3](http://www.aishelltech.com/aishell_3). The trained model can be used in the Voice Cloning task. We refer to the model structure of [Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis](https://arxiv.org/pdf/1806.04558.pdf). The general steps are as follows:
1. Speaker Encoder: We use a speaker verification task to train a speaker encoder. Since transcriptions are not needed for this task, it can use more datasets than `FastSpeech2`; refer to [ECAPA-TDNN](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/voxceleb/sv0).
2. Synthesizer: We use the trained speaker encoder to generate a speaker embedding for each sentence in AISHELL-3. This embedding is an extra input of `FastSpeech2`, which is concatenated with the encoder outputs.
3. Vocoder: We use [Parallel WaveGAN](http://arxiv.org/abs/1910.11480) as the neural vocoder; refer to [voc1](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/voc1).
## Dataset
### Download and Extract
Download AISHELL-3 from its [Official Website](http://www.aishelltech.com/aishell_3) and extract it to `~/datasets`. Then the dataset is in the directory `~/datasets/data_aishell3`.
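If you downloaded the corpus as a single archive, extraction might look like the sketch below; the archive name `data_aishell3.tgz` and its internal layout are assumptions, so adjust the paths to whatever the official site actually provides.
```bash
# Minimal sketch (archive name and layout are assumptions, not confirmed by this example).
mkdir -p ~/datasets/data_aishell3
tar -xzf data_aishell3.tgz -C ~/datasets/data_aishell3
# If the archive already contains a top-level data_aishell3/ directory, extract to ~/datasets instead:
# tar -xzf data_aishell3.tgz -C ~/datasets
ls ~/datasets/data_aishell3
```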
### Get MFA Result and Extract
We use [MFA2.x](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for aishell3_fastspeech2.
You can download the precomputed alignment from [aishell3_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/aishell3_alignment_tone.tar.gz), or train an MFA model yourself by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) (which currently uses MFA1.x) in our repo.
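A minimal sketch for fetching and unpacking the published alignment result; the scripts below expect it at `./aishell3_alignment_tone` (rename the extracted folder if it unpacks under a different name).
```bash
# Download the precomputed MFA alignments and unpack them next to run.sh.
wget https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/aishell3_alignment_tone.tar.gz
tar -xzf aishell3_alignment_tone.tar.gz
# preprocess.sh reads ./aishell3_alignment_tone via --inputdir
ls aishell3_alignment_tone | head
```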
## Get Started
Assume the path to the dataset is `~/datasets/data_aishell3`.
Assume the path to the MFA result of AISHELL-3 is `./aishell3_alignment_tone`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize waveform from `metadata.jsonl`.
5. start a voice cloning inference.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage, for example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.
```text
dump
├── dev
│ ├── norm
│ └── raw
├── embed
│ ├── SSB0005
│ ├── SSB0009
│ ├── ...
│ └── ...
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│ ├── norm
│ └── raw
└── train
├── energy_stats.npy
├── norm
├── pitch_stats.npy
├── raw
└── speech_stats.npy
```
The `embed` folder contains the generated speaker embedding for each sentence in AISHELL-3; it mirrors the directory structure of the wav files, and each embedding is stored in `.npy` format.
Computing the utterance embeddings can take x hours.
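To sanity-check one of the generated embeddings, you can load it with NumPy; the utterance path below is hypothetical, and the expected dimension follows `spk_embed_dim: 192` in this example's `conf/default.yaml`.
```bash
# Minimal sketch: print the shape of one speaker embedding (the file name is hypothetical).
python3 -c "
import numpy as np
emb = np.load('dump/embed/SSB0005/SSB00050001.npy')
print(emb.shape)  # expected to match spk_embed_dim in conf/default.yaml (192 here)
"
```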
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The raw folder contains the speech, pitch, and energy features of each utterance, while the norm folder contains the normalized ones. The statistics used to normalize the features are computed from the training set and are located in `dump/train/*_stats.npy`.
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, speech_lengths, durations, the path of speech features, the path of pitch features, the path of energy features, the speaker, and the id of each utterance.
The preprocessing step is very similar to that of [tts3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/tts3), but there is one more `ECAPA-TDNN/inference` step here.
### Model Training
`./local/train.sh` calls `${BIN_DIR}/train.py`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
The training step is very similar to that of [tts3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/tts3), but we should set `--voice-cloning=True` when calling `${BIN_DIR}/train.py`.
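A sketch of what `./local/train.sh` effectively runs (with `${conf_path}` and `${train_output_path}` as set in `run.sh`):
```bash
python3 ${BIN_DIR}/train.py \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --config=${conf_path} \
    --output-dir=${train_output_path} \
    --ngpu=2 \
    --phones-dict=dump/phone_id_map.txt \
    --voice-cloning=True
```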
### Synthesizing
We use [parallel wavegan](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/voc1) as the neural vocoder.
Download pretrained parallel wavegan model from [pwg_aishell3_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/pwgan/pwg_aishell3_ckpt_0.5.zip) and unzip it.
```bash
unzip pwg_aishell3_ckpt_0.5.zip
```
Parallel WaveGAN checkpoint contains files listed below.
```text
pwg_aishell3_ckpt_0.5
├── default.yaml # default config used to train parallel wavegan
├── feats_stats.npy # statistics used to normalize spectrogram when training parallel wavegan
└── snapshot_iter_1000000.pdz # generator parameters of parallel wavegan
```
`./local/synthesize.sh` calls `${BIN_DIR}/../synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
The synthesizing step is very similar to that of [tts3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/tts3), but we should set `--voice-cloning=True` when calling `${BIN_DIR}/../synthesize.py`.
### Voice Cloning
Assume there are some reference audios in `./ref_audio` (the format must be wav here)
```text
ref_audio
├── 001238.wav
├── LJ015-0254.wav
└── audio_self_test.wav
```
`./local/voice_cloning.sh` calls `${BIN_DIR}/../voice_cloning.py`
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} ${ref_audio_dir}
```
## Pretrained Model
- [fastspeech2_aishell3_ckpt_vc2_1.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_aishell3_ckpt_vc2_1.2.0.zip)
Model | Step | eval/loss | eval/l1_loss | eval/duration_loss | eval/pitch_loss| eval/energy_loss
:-------------:| :------------:| :-----: | :-----: | :--------: |:--------:|:---------:
default|2(gpu) x 96400|0.991855|0.599517|0.052142|0.094877| 0.245318|
FastSpeech2 checkpoint contains files listed below.
(There is no need for `speaker_id_map.txt` here )
```text
fastspeech2_aishell3_ckpt_vc2_1.2.0
├── default.yaml # default config used to train fastspeech2
├── energy_stats.npy # statistics used to normalize energy when training fastspeech2
├── phone_id_map.txt # phone vocabulary file when training fastspeech2
├── pitch_stats.npy # statistics used to normalize pitch when training fastspeech2
├── snapshot_iter_96400.pdz # model parameters and optimizer states
└── speech_stats.npy # statistics used to normalize spectrogram when training fastspeech2
```
###########################################################
# FEATURE EXTRACTION SETTING #
###########################################################
fs: 24000 # sr
n_fft: 2048 # FFT size (samples).
n_shift: 300 # Hop size (samples). 12.5ms
win_length: 1200 # Window length (samples). 50ms
# If set to null, it will be the same as fft_size.
window: "hann" # Window function.
# Only used for feats_type != raw
fmin: 80 # Minimum frequency of Mel basis.
fmax: 7600 # Maximum frequency of Mel basis.
n_mels: 80 # The number of mel basis.
# Only used for the model using pitch features (e.g. FastSpeech2)
f0min: 80 # Minimum f0 for pitch extraction.
f0max: 400 # Maximum f0 for pitch extraction.
###########################################################
# DATA SETTING #
###########################################################
batch_size: 64
num_workers: 2
###########################################################
# MODEL SETTING #
###########################################################
model:
adim: 384 # attention dimension
aheads: 2 # number of attention heads
elayers: 4 # number of encoder layers
eunits: 1536 # number of encoder ff units
dlayers: 4 # number of decoder layers
dunits: 1536 # number of decoder ff units
positionwise_layer_type: conv1d # type of position-wise layer
positionwise_conv_kernel_size: 3 # kernel size of position wise conv layer
duration_predictor_layers: 2 # number of layers of duration predictor
duration_predictor_chans: 256 # number of channels of duration predictor
duration_predictor_kernel_size: 3 # filter size of duration predictor
postnet_layers: 5 # number of layers of postnset
postnet_filts: 5 # filter size of conv layers in postnet
postnet_chans: 256 # number of channels of conv layers in postnet
use_scaled_pos_enc: True # whether to use scaled positional encoding
encoder_normalize_before: True # whether to perform layer normalization before the input
decoder_normalize_before: True # whether to perform layer normalization before the input
reduction_factor: 1 # reduction factor
init_type: xavier_uniform # initialization type
init_enc_alpha: 1.0 # initial value of alpha of encoder scaled position encoding
init_dec_alpha: 1.0 # initial value of alpha of decoder scaled position encoding
transformer_enc_dropout_rate: 0.2 # dropout rate for transformer encoder layer
transformer_enc_positional_dropout_rate: 0.2 # dropout rate for transformer encoder positional encoding
transformer_enc_attn_dropout_rate: 0.2 # dropout rate for transformer encoder attention layer
transformer_dec_dropout_rate: 0.2 # dropout rate for transformer decoder layer
transformer_dec_positional_dropout_rate: 0.2 # dropout rate for transformer decoder positional encoding
transformer_dec_attn_dropout_rate: 0.2 # dropout rate for transformer decoder attention layer
pitch_predictor_layers: 5 # number of conv layers in pitch predictor
pitch_predictor_chans: 256 # number of channels of conv layers in pitch predictor
pitch_predictor_kernel_size: 5 # kernel size of conv leyers in pitch predictor
pitch_predictor_dropout: 0.5 # dropout rate in pitch predictor
pitch_embed_kernel_size: 1 # kernel size of conv embedding layer for pitch
pitch_embed_dropout: 0.0 # dropout rate after conv embedding layer for pitch
stop_gradient_from_pitch_predictor: True # whether to stop the gradient from pitch predictor to encoder
energy_predictor_layers: 2 # number of conv layers in energy predictor
energy_predictor_chans: 256 # number of channels of conv layers in energy predictor
energy_predictor_kernel_size: 3 # kernel size of conv leyers in energy predictor
energy_predictor_dropout: 0.5 # dropout rate in energy predictor
energy_embed_kernel_size: 1 # kernel size of conv embedding layer for energy
energy_embed_dropout: 0.0 # dropout rate after conv embedding layer for energy
stop_gradient_from_energy_predictor: False # whether to stop the gradient from energy predictor to encoder
spk_embed_dim: 192 # speaker embedding dimension
spk_embed_integration_type: concat # speaker embedding integration type
###########################################################
# UPDATER SETTING #
###########################################################
updater:
use_masking: True # whether to apply masking for padded part in loss calculation
###########################################################
# OPTIMIZER SETTING #
###########################################################
optimizer:
optim: adam # optimizer type
learning_rate: 0.001 # learning rate
###########################################################
# TRAINING SETTING #
###########################################################
max_epoch: 200
num_snapshots: 5
###########################################################
# OTHER SETTING #
###########################################################
seed: 10086
#!/bin/bash
stage=0
stop_stage=100
config_path=$1
# gen speaker embedding
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
python3 ${BIN_DIR}/vc2_infer.py \
--input=~/datasets/data_aishell3/train/wav/ \
--output=dump/embed \
--num-cpu=20
fi
# copy from tts3/preprocess
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
# get durations from MFA's result
echo "Generate durations.txt from MFA results ..."
python3 ${MAIN_ROOT}/utils/gen_duration_from_textgrid.py \
--inputdir=./aishell3_alignment_tone \
--output durations.txt \
--config=${config_path}
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
# extract features
echo "Extract features ..."
python3 ${BIN_DIR}/preprocess.py \
--dataset=aishell3 \
--rootdir=~/datasets/data_aishell3/ \
--dumpdir=dump \
--dur-file=durations.txt \
--config=${config_path} \
--num-cpu=20 \
--cut-sil=True \
--spk_emb_dir=dump/embed
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
# get features' stats(mean and std)
echo "Get features' stats ..."
python3 ${MAIN_ROOT}/utils/compute_statistics.py \
--metadata=dump/train/raw/metadata.jsonl \
--field-name="speech"
python3 ${MAIN_ROOT}/utils/compute_statistics.py \
--metadata=dump/train/raw/metadata.jsonl \
--field-name="pitch"
python3 ${MAIN_ROOT}/utils/compute_statistics.py \
--metadata=dump/train/raw/metadata.jsonl \
--field-name="energy"
fi
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
# normalize and convert phone/speaker to id; dev and test should use train's stats
echo "Normalize ..."
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/train/raw/metadata.jsonl \
--dumpdir=dump/train/norm \
--speech-stats=dump/train/speech_stats.npy \
--pitch-stats=dump/train/pitch_stats.npy \
--energy-stats=dump/train/energy_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/dev/raw/metadata.jsonl \
--dumpdir=dump/dev/norm \
--speech-stats=dump/train/speech_stats.npy \
--pitch-stats=dump/train/pitch_stats.npy \
--energy-stats=dump/train/energy_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/test/raw/metadata.jsonl \
--dumpdir=dump/test/norm \
--speech-stats=dump/train/speech_stats.npy \
--pitch-stats=dump/train/pitch_stats.npy \
--energy-stats=dump/train/energy_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt
fi
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize.py \
--am=fastspeech2_aishell3 \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=pwgan_aishell3 \
--voc_config=pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
--voc_stat=pwg_aishell3_ckpt_0.5/feats_stats.npy \
--test_metadata=dump/test/norm/metadata.jsonl \
--output_dir=${train_output_path}/test \
--phones_dict=dump/phone_id_map.txt \
--speaker_dict=dump/speaker_id_map.txt \
--voice-cloning=True
#!/bin/bash
config_path=$1
train_output_path=$2
python3 ${BIN_DIR}/train.py \
--train-metadata=dump/train/norm/metadata.jsonl \
--dev-metadata=dump/dev/norm/metadata.jsonl \
--config=${config_path} \
--output-dir=${train_output_path} \
--ngpu=2 \
--phones-dict=dump/phone_id_map.txt \
--voice-cloning=True
\ No newline at end of file
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
ref_audio_dir=$4
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../voice_cloning.py \
--am=fastspeech2_aishell3 \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=pwgan_aishell3 \
--voc_config=pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
--voc_stat=pwg_aishell3_ckpt_0.5/feats_stats.npy \
--text="凯莫瑞安联合体的经济崩溃迫在眉睫。" \
--input-dir=${ref_audio_dir} \
--output-dir=${train_output_path}/vc_syn \
--phones-dict=dump/phone_id_map.txt \
--use_ecapa=True
#!/bin/bash
export MAIN_ROOT=`realpath ${PWD}/../../../`
export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH}
export LC_ALL=C
export PYTHONDONTWRITEBYTECODE=1
# Use UTF-8 in Python to avoid UnicodeDecodeError when LC_ALL=C
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH}
MODEL=fastspeech2
export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL}
#!/bin/bash
set -e
source path.sh
gpus=0,1
stage=0
stop_stage=100
conf_path=conf/default.yaml
train_output_path=exp/default
ckpt_name=snapshot_iter_96400.pdz
ref_audio_dir=ref_audio
# with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0`
# this cannot be mixed with positional arguments `$1`, `$2` ...
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
# prepare data
CUDA_VISIBLE_DEVICES=${gpus} ./local/preprocess.sh ${conf_path} || exit -1
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
# train model, all `ckpt` under `train_output_path/checkpoints/` dir
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
# synthesize, vocoder is pwgan
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
# synthesize, vocoder is pwgan
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} ${ref_audio_dir} || exit -1
fi
# VITS with AISHELL-3
This example contains code used to train a [VITS](https://arxiv.org/abs/2106.06103) model with [AISHELL-3](http://www.aishelltech.com/aishell_3). The trained model can be used in the Voice Cloning task. We refer to the model structure of [Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis](https://arxiv.org/pdf/1806.04558.pdf). The general steps are as follows:
1. Speaker Encoder: We use a speaker verification task to train a speaker encoder. Since transcriptions are not needed for this task, it can use more datasets than `VITS`; refer to [ge2e](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/ge2e).
2. Synthesizer and Vocoder: We use the trained speaker encoder to generate a speaker embedding for each sentence in AISHELL-3. This embedding is an extra input of `VITS`, which is concatenated with the encoder outputs. The vocoder is part of `VITS` due to its special structure.
## Dataset
### Download and Extract
Download AISHELL-3 from its [Official Website](http://www.aishelltech.com/aishell_3) and extract it to `~/datasets`. Then the dataset is in the directory `~/datasets/data_aishell3`.
### Get MFA Result and Extract
We use [MFA2.x](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get phonemes for VITS; the durations from MFA are not needed here.
You can download the precomputed alignment from [aishell3_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/aishell3_alignment_tone.tar.gz), or train an MFA model yourself by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) (which currently uses MFA1.x) in our repo.
## Pretrained GE2E Model
We use a pretrained GE2E model to generate a speaker embedding for each sentence.
Download the pretrained GE2E model from [ge2e_ckpt_0.3.zip](https://bj.bcebos.com/paddlespeech/Parakeet/released_models/ge2e/ge2e_ckpt_0.3.zip) and `unzip` it.
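A minimal sketch of that step; `run.sh` later refers to the unpacked checkpoint as `./ge2e_ckpt_0.3/step-3000000`.
```bash
wget https://bj.bcebos.com/paddlespeech/Parakeet/released_models/ge2e/ge2e_ckpt_0.3.zip
unzip ge2e_ckpt_0.3.zip
ls ge2e_ckpt_0.3
```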
## Get Started
Assume the path to the dataset is `~/datasets/data_aishell3`.
Assume the path to the MFA result of AISHELL-3 is `./aishell3_alignment_tone`.
Assume the path to the pretrained ge2e model is `./ge2e_ckpt_0.3`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize waveform from `metadata.jsonl`.
5. start a voice cloning inference.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage, for example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/preprocess.sh ${conf_path} ${ge2e_ckpt_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.
```text
dump
├── dev
│   ├── norm
│   └── raw
├── embed
│ ├── SSB0005
│ ├── SSB0009
│ ├── ...
│ └── ...
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│   ├── norm
│   └── raw
└── train
├── feats_stats.npy
├── norm
└── raw
```
The `embed` folder contains the generated speaker embedding for each sentence in AISHELL-3; it mirrors the directory structure of the wav files, and each embedding is stored in `.npy` format.
Computing the utterance embeddings can take x hours.
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The raw folder contains the wave and linear spectrogram of each utterance, while the norm folder contains the normalized ones. The statistics used to normalize the features are computed from the training set and are located in `dump/train/feats_stats.npy`.
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, feats, feats_lengths, the path of linear spectrogram features, the path of raw waves, speaker, and the id of each utterance.
The preprocessing step is very similar to that of [vits](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vits), but there is one more `ge2e/inference` step here.
### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
The training step is very similar to that of [vits](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vits), but we should set `--voice-cloning=True` when calling `${BIN_DIR}/train.py`.
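For reference, `./local/train.sh` in this example first builds the `monotonic_align` extension and then launches training with voice cloning enabled; a sketch (variables as set in `run.sh`):
```bash
# Build the monotonic_align extension once, then start training.
cd ${MAIN_ROOT}/paddlespeech/t2s/models/vits/monotonic_align
python3 setup.py build_ext --inplace
cd -
python3 ${BIN_DIR}/train.py \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --config=${conf_path} \
    --output-dir=${train_output_path} \
    --ngpu=4 \
    --phones-dict=dump/phone_id_map.txt \
    --voice-cloning=True
```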
### Synthesizing
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize.py [-h] [--config CONFIG] [--ckpt CKPT]
[--phones_dict PHONES_DICT] [--speaker_dict SPEAKER_DICT]
[--voice-cloning VOICE_CLONING] [--ngpu NGPU]
[--test_metadata TEST_METADATA] [--output_dir OUTPUT_DIR]
Synthesize with VITS
optional arguments:
-h, --help show this help message and exit
--config CONFIG Config of VITS.
--ckpt CKPT Checkpoint file of VITS.
--phones_dict PHONES_DICT
phone vocabulary file.
--speaker_dict SPEAKER_DICT
speaker id map file.
--voice-cloning VOICE_CLONING
whether training voice cloning model.
--ngpu NGPU if ngpu == 0, use cpu.
--test_metadata TEST_METADATA
test metadata.
--output_dir OUTPUT_DIR
output dir.
```
The synthesizing step is very similar to that of [vits](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vits), but we should set `--voice-cloning=True` when calling `${BIN_DIR}/synthesize.py`.
### Voice Cloning
Assume there are some reference audios in `./ref_audio`
```text
ref_audio
├── 001238.wav
├── LJ015-0254.wav
└── audio_self_test.mp3
```
`./local/voice_cloning.sh` calls `${BIN_DIR}/voice_cloning.py`
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} ${ge2e_params_path} ${add_blank} ${ref_audio_dir}
```
If you want to convert the voice of a source audio file to that of the reference speaker, run:
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} ${ge2e_params_path} ${add_blank} ${ref_audio_dir} ${src_audio_path}
```
<!-- TODO display these after we trained the model -->
<!--
## Pretrained Model
The pretrained model can be downloaded here:
- [vits_vc_aishell3_ckpt_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/vits/vits_vc_aishell3_ckpt_1.1.0.zip) (add_blank=true)
VITS checkpoint contains files listed below.
(There is no need for `speaker_id_map.txt` here )
```text
vits_vc_aishell3_ckpt_1.1.0
├── default.yaml # default config used to train vits
├── phone_id_map.txt # phone vocabulary file when training vits
└── snapshot_iter_333000.pdz # model parameters and optimizer states
```
P.S.: This checkpoint is not good enough yet; a better one is still being trained.
-->
# This configuration was tested on 4 GPUs (V100) with 32GB GPU
# memory. It takes around 2 weeks to finish the training,
# but a 100k-iteration model should already generate reasonable results.
###########################################################
# FEATURE EXTRACTION SETTING #
###########################################################
fs: 22050 # sr
n_fft: 1024 # FFT size (samples).
n_shift: 256 # Hop size (samples). 12.5ms
win_length: null # Window length (samples). 50ms
# If set to null, it will be the same as fft_size.
window: "hann" # Window function.
##########################################################
# TTS MODEL SETTING #
##########################################################
model:
# generator related
generator_type: vits_generator
generator_params:
hidden_channels: 192
spk_embed_dim: 256
global_channels: 256
segment_size: 32
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: "conv1d"
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: "rel_pos"
text_encoder_self_attention_layer_type: "rel_selfattn"
text_encoder_activation_type: "swish"
text_encoder_normalize_before: True
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: True
use_conformer_conv_in_text_encoder: False
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales: [8, 8, 2, 2]
decoder_upsample_kernel_sizes: [16, 16, 4, 4]
decoder_resblock_kernel_sizes: [3, 7, 11]
decoder_resblock_dilations: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
use_weight_norm_in_decoder: True
posterior_encoder_kernel_size: 5
posterior_encoder_layers: 16
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: True
flow_flows: 4
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: True
use_only_mean_in_flow: True
stochastic_duration_predictor_kernel_size: 3
stochastic_duration_predictor_dropout_rate: 0.5
stochastic_duration_predictor_flows: 4
stochastic_duration_predictor_dds_conv_layers: 3
# discriminator related
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: "AvgPool1D"
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes: [15, 41, 5, 3]
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: True
downsample_scales: [2, 2, 4, 4, 1]
nonlinear_activation: "leakyrelu"
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: True
use_spectral_norm: False
follow_official_norm: False
periods: [2, 3, 5, 7, 11]
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes: [5, 3]
channels: 32
downsample_scales: [3, 3, 3, 3, 1]
max_downsample_channels: 1024
bias: True
nonlinear_activation: "leakyrelu"
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: True
use_spectral_norm: False
# others
sampling_rate: 22050 # needed in the inference for saving wav
cache_generator_outputs: True # whether to cache generator outputs in the training
###########################################################
# LOSS SETTING #
###########################################################
# loss function related
generator_adv_loss_params:
average_by_discriminators: False # whether to average loss value by #discriminators
loss_type: mse # loss type, "mse" or "hinge"
discriminator_adv_loss_params:
average_by_discriminators: False # whether to average loss value by #discriminators
loss_type: mse # loss type, "mse" or "hinge"
feat_match_loss_params:
average_by_discriminators: False # whether to average loss value by #discriminators
average_by_layers: False # whether to average loss value by #layers of each discriminator
include_final_outputs: True # whether to include final outputs for loss calculation
mel_loss_params:
fs: 22050 # must be the same as the training data
fft_size: 1024 # fft points
hop_size: 256 # hop size
win_length: null # window length
window: hann # window type
num_mels: 80 # number of Mel basis
fmin: 0 # minimum frequency for Mel basis
fmax: null # maximum frequency for Mel basis
log_base: null # null represent natural log
###########################################################
# ADVERSARIAL LOSS SETTING #
###########################################################
lambda_adv: 1.0 # loss scaling coefficient for adversarial loss
lambda_mel: 45.0 # loss scaling coefficient for Mel loss
lambda_feat_match: 2.0 # loss scaling coefficient for feat match loss
lambda_dur: 1.0 # loss scaling coefficient for duration loss
lambda_kl: 1.0 # loss scaling coefficient for KL divergence loss
# others
sampling_rate: 22050 # needed in the inference for saving wav
cache_generator_outputs: True # whether to cache generator outputs in the training
###########################################################
# DATA LOADER SETTING #
###########################################################
batch_size: 50 # Batch size.
num_workers: 4 # Number of workers in DataLoader.
##########################################################
# OPTIMIZER & SCHEDULER SETTING #
##########################################################
# optimizer setting for generator
generator_optimizer_params:
beta1: 0.8
beta2: 0.99
epsilon: 1.0e-9
weight_decay: 0.0
generator_scheduler: exponential_decay
generator_scheduler_params:
learning_rate: 2.0e-4
gamma: 0.999875
# optimizer setting for discriminator
discriminator_optimizer_params:
beta1: 0.8
beta2: 0.99
epsilon: 1.0e-9
weight_decay: 0.0
discriminator_scheduler: exponential_decay
discriminator_scheduler_params:
learning_rate: 2.0e-4
gamma: 0.999875
generator_first: False # whether to start updating generator first
##########################################################
# OTHER TRAINING SETTING #
##########################################################
num_snapshots: 10 # max number of snapshots to keep while training
train_max_steps: 350000 # Number of training steps. == total_iters / ngpus, total_iters = 1000000
save_interval_steps: 1000 # Interval steps to save checkpoint.
eval_interval_steps: 250 # Interval steps to evaluate the network.
seed: 777 # random seed number
#!/bin/bash
stage=0
stop_stage=100
config_path=$1
add_blank=$2
ge2e_ckpt_path=$3
# gen speaker embedding
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
python3 ${MAIN_ROOT}/paddlespeech/vector/exps/ge2e/inference.py \
--input=~/datasets/data_aishell3/train/wav/ \
--output=dump/embed \
--checkpoint_path=${ge2e_ckpt_path}
fi
# copy from tts3/preprocess
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
# get durations from MFA's result
echo "Generate durations.txt from MFA results ..."
python3 ${MAIN_ROOT}/utils/gen_duration_from_textgrid.py \
--inputdir=./aishell3_alignment_tone \
--output durations.txt \
--config=${config_path}
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
# extract features
echo "Extract features ..."
python3 ${BIN_DIR}/preprocess.py \
--dataset=aishell3 \
--rootdir=~/datasets/data_aishell3/ \
--dumpdir=dump \
--dur-file=durations.txt \
--config=${config_path} \
--num-cpu=20 \
--cut-sil=True \
--spk_emb_dir=dump/embed
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
# get features' stats(mean and std)
echo "Get features' stats ..."
python3 ${MAIN_ROOT}/utils/compute_statistics.py \
--metadata=dump/train/raw/metadata.jsonl \
--field-name="feats"
fi
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
# normalize and convert phone/speaker to id; dev and test should use train's stats
echo "Normalize ..."
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/train/raw/metadata.jsonl \
--dumpdir=dump/train/norm \
--feats-stats=dump/train/feats_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt \
--add-blank=${add_blank} \
--skip-wav-copy
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/dev/raw/metadata.jsonl \
--dumpdir=dump/dev/norm \
--feats-stats=dump/train/feats_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt \
--add-blank=${add_blank} \
--skip-wav-copy
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/test/raw/metadata.jsonl \
--dumpdir=dump/test/norm \
--feats-stats=dump/train/feats_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt \
--add-blank=${add_blank} \
--skip-wav-copy
fi
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
stage=0
stop_stage=0
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize.py \
--config=${config_path} \
--ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--phones_dict=dump/phone_id_map.txt \
--test_metadata=dump/test/norm/metadata.jsonl \
--output_dir=${train_output_path}/test \
--voice-cloning=True
fi
#!/bin/bash
config_path=$1
train_output_path=$2
# install monotonic_align
cd ${MAIN_ROOT}/paddlespeech/t2s/models/vits/monotonic_align
python3 setup.py build_ext --inplace
cd -
python3 ${BIN_DIR}/train.py \
--train-metadata=dump/train/norm/metadata.jsonl \
--dev-metadata=dump/dev/norm/metadata.jsonl \
--config=${config_path} \
--output-dir=${train_output_path} \
--ngpu=4 \
--phones-dict=dump/phone_id_map.txt \
--voice-cloning=True
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
ge2e_params_path=$4
add_blank=$5
ref_audio_dir=$6
src_audio_path=$7
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/voice_cloning.py \
--config=${config_path} \
--ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--ge2e_params_path=${ge2e_params_path} \
--phones_dict=dump/phone_id_map.txt \
--text="凯莫瑞安联合体的经济崩溃迫在眉睫。" \
--audio-path=${src_audio_path} \
--input-dir=${ref_audio_dir} \
--output-dir=${train_output_path}/vc_syn \
--add-blank=${add_blank}
#!/bin/bash #!/bin/bash
export MAIN_ROOT=`realpath ${PWD}/../../` export MAIN_ROOT=`realpath ${PWD}/../../../`
export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH} export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH}
export LC_ALL=C export LC_ALL=C
...@@ -9,5 +9,5 @@ export PYTHONDONTWRITEBYTECODE=1 ...@@ -9,5 +9,5 @@ export PYTHONDONTWRITEBYTECODE=1
export PYTHONIOENCODING=UTF-8 export PYTHONIOENCODING=UTF-8
export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH} export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH}
MODEL=ernie_sat MODEL=vits
export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL} export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL}
\ No newline at end of file
#!/bin/bash
set -e
source path.sh
gpus=0,1,2,3
stage=0
stop_stage=100
conf_path=conf/default.yaml
train_output_path=exp/default
ckpt_name=snapshot_iter_153.pdz
add_blank=true
ref_audio_dir=ref_audio
src_audio_path=''
# not include ".pdparams" here
ge2e_ckpt_path=./ge2e_ckpt_0.3/step-3000000
# include ".pdparams" here
ge2e_params_path=${ge2e_ckpt_path}.pdparams
# with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0`
# this cannot be mixed with positional arguments `$1`, `$2` ...
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
# prepare data
CUDA_VISIBLE_DEVICES=${gpus} ./local/preprocess.sh ${conf_path} ${add_blank} ${ge2e_ckpt_path} || exit -1
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
# train model, all `ckpt` under `train_output_path/checkpoints/` dir
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} \
${ge2e_params_path} ${add_blank} ${ref_audio_dir} ${src_audio_path} || exit -1
fi
# VITS with AISHELL-3
This example contains code used to train a [VITS](https://arxiv.org/abs/2106.06103) model with [AISHELL-3](http://www.aishelltech.com/aishell_3).
AISHELL-3 is a large-scale and high-fidelity multi-speaker Mandarin speech corpus that could be used to train multi-speaker Text-to-Speech (TTS) systems.
We use AISHELL-3 to train a multi-speaker VITS model here.
## Dataset
### Download and Extract
Download AISHELL-3 from its [Official Website](http://www.aishelltech.com/aishell_3) and extract it to `~/datasets`. Then the dataset is in the directory `~/datasets/data_aishell3`.
### Get MFA Result and Extract
We use [MFA2.x](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get phonemes for VITS; the durations from MFA are not needed here.
You can download the precomputed alignment from [aishell3_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/aishell3_alignment_tone.tar.gz), or train an MFA model yourself by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) (which currently uses MFA1.x) in our repo.
## Get Started
Assume the path to the dataset is `~/datasets/data_aishell3`.
Assume the path to the MFA result of AISHELL-3 is `./aishell3_alignment_tone`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
- synthesize waveform from `metadata.jsonl`.
- synthesize waveform from a text file.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage, for example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.
```text
dump
├── dev
│   ├── norm
│   └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│   ├── norm
│   └── raw
└── train
├── feats_stats.npy
├── norm
└── raw
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The raw folder contains the wave and linear spectrogram of each utterance, while the norm folder contains the normalized ones. The statistics used to normalize the features are computed from the training set and are located in `dump/train/feats_stats.npy`.
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, feats, feats_lengths, the path of linear spectrogram features, the path of raw waves, speaker, and the id of each utterance.
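Since each line of `metadata.jsonl` is a single JSON object, you can inspect a record with nothing but the standard library; a quick sketch, assuming preprocessing has already produced `dump/train/norm/metadata.jsonl`:
```bash
# Print the field names of the first metadata record.
python3 -c "
import json
with open('dump/train/norm/metadata.jsonl') as f:
    record = json.loads(f.readline())
print(sorted(record.keys()))
"
```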
### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
Here's the complete help message.
```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
[--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
[--ngpu NGPU] [--phones-dict PHONES_DICT]
[--speaker-dict SPEAKER_DICT] [--voice-cloning VOICE_CLONING]
Train a VITS model.
optional arguments:
-h, --help show this help message and exit
--config CONFIG config file to overwrite default config.
--train-metadata TRAIN_METADATA
training data.
--dev-metadata DEV_METADATA
dev data.
--output-dir OUTPUT_DIR
output dir.
--ngpu NGPU if ngpu == 0, use cpu.
--phones-dict PHONES_DICT
phone vocabulary file.
--speaker-dict SPEAKER_DICT
speaker id map file for multiple speaker model.
--voice-cloning VOICE_CLONING
whether training voice cloning model.
```
1. `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml`.
2. `--train-metadata` and `--dev-metadata` should be the metadata file in the normalized subfolder of `train` and `dev` in the `dump` folder.
3. `--output-dir` is the directory to save the results of the experiment. Checkpoints are saved in `checkpoints/` inside this directory.
4. `--ngpu` is the number of gpus to use, if ngpu == 0, use cpu.
5. `--phones-dict` is the path of the phone vocabulary file.
6. `--speaker-dict` is the path of the speaker id map file when training a multi-speaker VITS.
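Putting these arguments together, a direct invocation would look roughly like the sketch below (paths assume the default `dump` layout produced by the preprocessing stage):
```bash
python3 ${BIN_DIR}/train.py \
    --config=conf/default.yaml \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --output-dir=exp/default \
    --ngpu=4 \
    --phones-dict=dump/phone_id_map.txt \
    --speaker-dict=dump/speaker_id_map.txt
```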
### Synthesizing
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize.py [-h] [--config CONFIG] [--ckpt CKPT]
[--phones_dict PHONES_DICT] [--speaker_dict SPEAKER_DICT]
[--voice-cloning VOICE_CLONING] [--ngpu NGPU]
[--test_metadata TEST_METADATA] [--output_dir OUTPUT_DIR]
Synthesize with VITS
optional arguments:
-h, --help show this help message and exit
--config CONFIG Config of VITS.
--ckpt CKPT Checkpoint file of VITS.
--phones_dict PHONES_DICT
phone vocabulary file.
--speaker_dict SPEAKER_DICT
speaker id map file.
--voice-cloning VOICE_CLONING
whether training voice cloning model.
--ngpu NGPU if ngpu == 0, use cpu.
--test_metadata TEST_METADATA
test metadata.
--output_dir OUTPUT_DIR
output dir.
```
`./local/synthesize_e2e.sh` calls `${BIN_DIR}/synthesize_e2e.py`, which can synthesize waveform from text file.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize_e2e.py [-h] [--config CONFIG] [--ckpt CKPT]
[--phones_dict PHONES_DICT]
[--speaker_dict SPEAKER_DICT] [--spk_id SPK_ID]
[--lang LANG]
[--inference_dir INFERENCE_DIR] [--ngpu NGPU]
[--text TEXT] [--output_dir OUTPUT_DIR]
Synthesize with VITS
optional arguments:
-h, --help show this help message and exit
--config CONFIG Config of VITS.
--ckpt CKPT Checkpoint file of VITS.
--phones_dict PHONES_DICT
phone vocabulary file.
--speaker_dict SPEAKER_DICT
speaker id map file.
--spk_id SPK_ID spk id for multi speaker acoustic model
--lang LANG Choose model language. zh or en
--inference_dir INFERENCE_DIR
dir to save inference models
--ngpu NGPU if ngpu == 0, use cpu.
--text TEXT text to synthesize, a 'utt_id sentence' pair per line.
--output_dir OUTPUT_DIR
output dir.
```
1. `--config`, `--ckpt`, `--phones_dict` and `--speaker_dict` are arguments for the acoustic model, which correspond to the 4 files in the VITS pretrained model.
2. `--lang` is the model language, which can be `zh` or `en`.
3. `--test_metadata` should be the metadata file in the normalized subfolder of `test` in the `dump` folder.
4. `--text` is the text file, which contains sentences to synthesize.
5. `--output_dir` is the directory to save synthesized audio files.
6. `--ngpu` is the number of gpus to use, if ngpu == 0, use cpu.
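For example, with a trained checkpoint under `exp/default/checkpoints/`, the call issued by `./local/synthesize_e2e.sh` is essentially the sketch below (the checkpoint name and `add_blank` value are the defaults from `run.sh`):
```bash
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
    --config=conf/default.yaml \
    --ckpt=exp/default/checkpoints/snapshot_iter_153.pdz \
    --phones_dict=dump/phone_id_map.txt \
    --speaker_dict=dump/speaker_id_map.txt \
    --spk_id=0 \
    --output_dir=exp/default/test_e2e \
    --text=${BIN_DIR}/../sentences.txt \
    --add-blank=true
```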
<!-- TODO display these after we trained the model -->
<!--
## Pretrained Model
The pretrained model can be downloaded here:
- [vits_aishell3_ckpt_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/vits/vits_aishell3_ckpt_1.1.0.zip) (add_blank=true)
VITS checkpoint contains files listed below.
```text
vits_aishell3_ckpt_1.1.0
├── default.yaml # default config used to train vits
├── phone_id_map.txt # phone vocabulary file when training vits
├── speaker_id_map.txt # speaker id map file when training a multi-speaker vits
└── snapshot_iter_333000.pdz # model parameters and optimizer states
```
P.S.: This checkpoint is not good enough yet; a better one is still being trained.
You can use the following script to synthesize the sentences in `${BIN_DIR}/../sentences.txt` with the pretrained VITS model.
```bash
source path.sh
add_blank=true
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--config=vits_aishell3_ckpt_1.1.0/default.yaml \
--ckpt=vits_aishell3_ckpt_1.1.0/snapshot_iter_333000.pdz \
--phones_dict=vits_aishell3_ckpt_1.1.0/phone_id_map.txt \
--speaker_dict=vits_aishell3_ckpt_1.1.0/speaker_id_map.txt \
--output_dir=exp/default/test_e2e \
--text=${BIN_DIR}/../sentences.txt \
--add-blank=${add_blank}
```
-->
# This configuration was tested on 4 GPUs (V100) with 32GB GPU
# memory. It takes around 2 weeks to finish the training,
# but a 100k-iteration model should already generate reasonable results.
###########################################################
# FEATURE EXTRACTION SETTING #
###########################################################
fs: 22050 # sr
n_fft: 1024 # FFT size (samples).
n_shift: 256 # Hop size (samples). 12.5ms
win_length: null # Window length (samples). 50ms
# If set to null, it will be the same as fft_size.
window: "hann" # Window function.
##########################################################
# TTS MODEL SETTING #
##########################################################
model:
# generator related
generator_type: vits_generator
generator_params:
hidden_channels: 192
global_channels: 256
segment_size: 32
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: "conv1d"
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: "rel_pos"
text_encoder_self_attention_layer_type: "rel_selfattn"
text_encoder_activation_type: "swish"
text_encoder_normalize_before: True
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: True
use_conformer_conv_in_text_encoder: False
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales: [8, 8, 2, 2]
decoder_upsample_kernel_sizes: [16, 16, 4, 4]
decoder_resblock_kernel_sizes: [3, 7, 11]
decoder_resblock_dilations: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
use_weight_norm_in_decoder: True
posterior_encoder_kernel_size: 5
posterior_encoder_layers: 16
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: True
flow_flows: 4
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: True
use_only_mean_in_flow: True
stochastic_duration_predictor_kernel_size: 3
stochastic_duration_predictor_dropout_rate: 0.5
stochastic_duration_predictor_flows: 4
stochastic_duration_predictor_dds_conv_layers: 3
# discriminator related
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: "AvgPool1D"
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes: [15, 41, 5, 3]
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: True
downsample_scales: [2, 2, 4, 4, 1]
nonlinear_activation: "leakyrelu"
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: True
use_spectral_norm: False
follow_official_norm: False
periods: [2, 3, 5, 7, 11]
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes: [5, 3]
channels: 32
downsample_scales: [3, 3, 3, 3, 1]
max_downsample_channels: 1024
bias: True
nonlinear_activation: "leakyrelu"
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: True
use_spectral_norm: False
# others
sampling_rate: 22050 # needed in the inference for saving wav
cache_generator_outputs: True # whether to cache generator outputs in the training
###########################################################
# LOSS SETTING #
###########################################################
# loss function related
generator_adv_loss_params:
average_by_discriminators: False # whether to average loss value by #discriminators
loss_type: mse # loss type, "mse" or "hinge"
discriminator_adv_loss_params:
average_by_discriminators: False # whether to average loss value by #discriminators
loss_type: mse # loss type, "mse" or "hinge"
feat_match_loss_params:
average_by_discriminators: False # whether to average loss value by #discriminators
average_by_layers: False # whether to average loss value by #layers of each discriminator
include_final_outputs: True # whether to include final outputs for loss calculation
mel_loss_params:
fs: 22050 # must be the same as the training data
fft_size: 1024 # fft points
hop_size: 256 # hop size
win_length: null # window length
window: hann # window type
num_mels: 80 # number of Mel basis
fmin: 0 # minimum frequency for Mel basis
fmax: null # maximum frequency for Mel basis
log_base: null # null represent natural log
###########################################################
# ADVERSARIAL LOSS SETTING #
###########################################################
lambda_adv: 1.0 # loss scaling coefficient for adversarial loss
lambda_mel: 45.0 # loss scaling coefficient for Mel loss
lambda_feat_match: 2.0 # loss scaling coefficient for feat match loss
lambda_dur: 1.0 # loss scaling coefficient for duration loss
lambda_kl: 1.0 # loss scaling coefficient for KL divergence loss
# others
sampling_rate: 22050 # needed in the inference for saving wav
cache_generator_outputs: True # whether to cache generator outputs in the training
###########################################################
# DATA LOADER SETTING #
###########################################################
batch_size: 50 # Batch size.
num_workers: 4 # Number of workers in DataLoader.
##########################################################
# OPTIMIZER & SCHEDULER SETTING #
##########################################################
# optimizer setting for generator
generator_optimizer_params:
beta1: 0.8
beta2: 0.99
epsilon: 1.0e-9
weight_decay: 0.0
generator_scheduler: exponential_decay
generator_scheduler_params:
learning_rate: 2.0e-4
gamma: 0.999875
# optimizer setting for discriminator
discriminator_optimizer_params:
beta1: 0.8
beta2: 0.99
epsilon: 1.0e-9
weight_decay: 0.0
discriminator_scheduler: exponential_decay
discriminator_scheduler_params:
learning_rate: 2.0e-4
gamma: 0.999875
generator_first: False # whether to start updating generator first
##########################################################
# OTHER TRAINING SETTING #
##########################################################
num_snapshots: 10 # max number of snapshots to keep while training
train_max_steps: 350000 # Number of training steps. == total_iters / ngpus, total_iters = 1000000
save_interval_steps: 1000 # Interval steps to save checkpoint.
eval_interval_steps: 250 # Interval steps to evaluate the network.
seed: 777 # random seed number
#!/bin/bash
stage=0
stop_stage=100
config_path=$1
add_blank=$2
# copy from tts3/preprocess
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
# get durations from MFA's result
echo "Generate durations.txt from MFA results ..."
python3 ${MAIN_ROOT}/utils/gen_duration_from_textgrid.py \
--inputdir=./aishell3_alignment_tone \
--output durations.txt \
--config=${config_path}
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
# extract features
echo "Extract features ..."
python3 ${BIN_DIR}/preprocess.py \
--dataset=aishell3 \
--rootdir=~/datasets/data_aishell3/ \
--dumpdir=dump \
--dur-file=durations.txt \
--config=${config_path} \
--num-cpu=20 \
--cut-sil=True
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
# get features' stats(mean and std)
echo "Get features' stats ..."
python3 ${MAIN_ROOT}/utils/compute_statistics.py \
--metadata=dump/train/raw/metadata.jsonl \
--field-name="feats"
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
# normalize and convert phone/speaker to id; dev and test should use train's stats
echo "Normalize ..."
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/train/raw/metadata.jsonl \
--dumpdir=dump/train/norm \
--feats-stats=dump/train/feats_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt \
--add-blank=${add_blank} \
--skip-wav-copy
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/dev/raw/metadata.jsonl \
--dumpdir=dump/dev/norm \
--feats-stats=dump/train/feats_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt \
--add-blank=${add_blank} \
--skip-wav-copy
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/test/raw/metadata.jsonl \
--dumpdir=dump/test/norm \
--feats-stats=dump/train/feats_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt \
--add-blank=${add_blank} \
--skip-wav-copy
fi
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
stage=0
stop_stage=0
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize.py \
--config=${config_path} \
--ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--phones_dict=dump/phone_id_map.txt \
--speaker_dict=dump/speaker_id_map.txt \
--test_metadata=dump/test/norm/metadata.jsonl \
--output_dir=${train_output_path}/test
fi
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
add_blank=$4
stage=0
stop_stage=0
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--config=${config_path} \
--ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--phones_dict=dump/phone_id_map.txt \
--speaker_dict=dump/speaker_id_map.txt \
--spk_id=0 \
--output_dir=${train_output_path}/test_e2e \
--text=${BIN_DIR}/../sentences.txt \
--add-blank=${add_blank}
fi
#!/bin/bash
config_path=$1
train_output_path=$2
# install monotonic_align
cd ${MAIN_ROOT}/paddlespeech/t2s/models/vits/monotonic_align
python3 setup.py build_ext --inplace
cd -
python3 ${BIN_DIR}/train.py \
--train-metadata=dump/train/norm/metadata.jsonl \
--dev-metadata=dump/dev/norm/metadata.jsonl \
--config=${config_path} \
--output-dir=${train_output_path} \
--ngpu=4 \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt
#!/bin/bash
export MAIN_ROOT=`realpath ${PWD}/../../../`
export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH}
export LC_ALL=C
export PYTHONDONTWRITEBYTECODE=1
# Use UTF-8 in Python to avoid UnicodeDecodeError when LC_ALL=C
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH}
MODEL=vits
export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL}
\ No newline at end of file
#!/bin/bash
set -e
source path.sh
gpus=0,1,2,3
stage=0
stop_stage=100
conf_path=conf/default.yaml
train_output_path=exp/default
ckpt_name=snapshot_iter_153.pdz
add_blank=true
# with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0`
# this cannot be used together with `$1`, `$2` ...
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
# prepare data
./local/preprocess.sh ${conf_path} ${add_blank}|| exit -1
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
# train model, all `ckpt` under `train_output_path/checkpoints/` dir
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} ${add_blank}|| exit -1
fi
# Mixed Chinese and English TTS with AISHELL3 and VCTK datasets # Mixed Chinese and English TTS with AISHELL3 and VCTK datasets
* ernie_sat - ERNIE-SAT
# ERNIE SAT with AISHELL3 and VCTK dataset # ERNIE-SAT with VCTK dataset
ERNIE-SAT is a speech-text joint pretraining framework that achieves SOTA results in cross-lingual multi-speaker speech synthesis and cross-lingual speech editing tasks. It can be applied to a series of scenarios such as speech editing, personalized speech synthesis, and voice cloning.
## Model Framework
In ERNIE-SAT, we propose two innovations:
- In the pretraining process, the phonemes corresponding to Chinese and English are used as input to achieve cross-language and personalized soft phoneme mapping
- Joint masked learning of speech and text is used to realize the alignment of speech and text
<p align="center">
<img src="https://user-images.githubusercontent.com/24568452/186110814-1b9c6618-a0ab-4c0c-bb3d-3d860b0e8cc2.png" />
</p>
## Dataset
### Download and Extract
Download all datasets and extract them to `~/datasets`:
- The aishell3 dataset is in the directory `~/datasets/data_aishell3`
- The vctk dataset is in the directory `~/datasets/VCTK-Corpus-0.92`
### Get MFA Result and Extract
We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for the fastspeech2 training.
You can download them from here:
- [aishell3_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/aishell3_alignment_tone.tar.gz)
- [vctk_alignment.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/VCTK-Corpus-0.92/vctk_alignment.tar.gz)
Or train your own MFA model by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) (use MFA1.x for now) in our repo.
## Get Started
Assume the paths to the datasets are:
- `~/datasets/data_aishell3`
- `~/datasets/VCTK-Corpus-0.92`
Assume the paths to the MFA results of the datasets are:
- `./aishell3_alignment_tone`
- `./vctk_alignment`
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
- synthesize waveform from `metadata.jsonl`.
- synthesize waveform from text file.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage. For example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.
```text
dump
├── dev
│ ├── norm
│ └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│ ├── norm
│ └── raw
└── train
├── norm
├── raw
└── speech_stats.npy
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The raw folder contains speech features of each utterance, while the norm folder contains normalized ones. The statistics used to normalize features are computed from the training set, which is located in `dump/train/*_stats.npy`.
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, speech_lengths, durations, the path of speech features, speaker, and id of each utterance.
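For reference, here is a minimal sketch of inspecting these artifacts with plain Python; the exact field names used below (`utt_id`, `speaker`, `speech_lengths`) are assumptions based on the description above:
```python
import json
import numpy as np

# each line of metadata.jsonl is one JSON object describing an utterance
with open("dump/train/raw/metadata.jsonl") as f:
    records = [json.loads(line) for line in f]
first = records[0]
print(first.get("utt_id"), first.get("speaker"), first.get("speech_lengths"))

# statistics computed on the training set, also used to normalize dev/test;
# assumed layout: row 0 is the mean, row 1 is the standard deviation
stats = np.load("dump/train/speech_stats.npy")
mean, std = stats[0], stats[1]
dummy_feats = np.random.randn(100, mean.shape[-1])
normalized = (dummy_feats - mean) / std  # what normalize.py applies to raw features
```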
### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
### Synthesizing
We use [HiFiGAN](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/voc5) as the neural vocoder.
Download the pretrained HiFiGAN model from [hifigan_aishell3_ckpt_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/hifigan/hifigan_aishell3_ckpt_0.2.0.zip) and unzip it.
```bash
unzip hifigan_aishell3_ckpt_0.2.0.zip
```
The HiFiGAN checkpoint contains the files listed below.
```text
hifigan_aishell3_ckpt_0.2.0
├── default.yaml # default config used to train HiFiGAN
├── feats_stats.npy # statistics used to normalize spectrogram when training HiFiGAN
└── snapshot_iter_2500000.pdz # generator parameters of HiFiGAN
```
`./local/synthesize.sh` calls `${BIN_DIR}/../synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
## Speech Synthesis and Speech Editing
### Prepare
**prepare aligner**
```bash
mkdir -p tools/aligner
cd tools
# download MFA
wget https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner/releases/download/v1.0.1/montreal-forced-aligner_linux.tar.gz
# extract MFA
tar xvf montreal-forced-aligner_linux.tar.gz
# fix .so of MFA
cd montreal-forced-aligner/lib
ln -snf libpython3.6m.so.1.0 libpython3.6m.so
cd -
# download align models and dicts
cd aligner
wget https://paddlespeech.bj.bcebos.com/MFA/ernie_sat/aishell3_model.zip
wget https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/simple.lexicon
wget https://paddlespeech.bj.bcebos.com/MFA/ernie_sat/vctk_model.zip
wget https://paddlespeech.bj.bcebos.com/MFA/LJSpeech-1.1/cmudict-0.7b
cd ../../
```
**prepare pretrained FastSpeech2 models**
ERNIE-SAT uses FastSpeech2 as the phoneme duration predictor:
```bash
mkdir download
cd download
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_conformer_baker_ckpt_0.5.zip
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_ljspeech_ckpt_0.5.zip
unzip fastspeech2_conformer_baker_ckpt_0.5.zip
unzip fastspeech2_nosil_ljspeech_ckpt_0.5.zip
cd ../
```
**prepare source data**
```bash
mkdir source
cd source
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/SSB03540307.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/SSB03540428.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/LJ050-0278.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/p243_313.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/p299_096.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/this_was_not_the_show_for_me.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/README.md
cd ../
```
You can check the text of downloaded wavs in `source/README.md`.
### Cross Language Voice Cloning
```bash
./run.sh --stage 3 --stop-stage 3 --gpus 0
```
`stage 3` of `run.sh` calls `local/synthesize_e2e.sh`.
You can modify `--wav_path`, `--old_str`, and `--new_str` yourself. `--old_str` should be the text corresponding to the audio of `--wav_path`, `--new_str` should be designed according to `--task_name`, and `--source_lang` and `--target_lang` should be different in this example.
## Pretrained Model
Pretrained ERNIE-SAT model:
- [erniesat_aishell3_vctk_ckpt_1.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/erniesat_aishell3_vctk_ckpt_1.2.0.zip)
Model | Step | eval/text_mlm_loss | eval/mlm_loss | eval/loss
:---: | :---: | :---: | :---: | :---:
default | 8 (gpu) x 489000 | 0.000001 | 52.477642 | 52.477642
# This configuration was tested on 8 GPUs (A100) with 80 GB of GPU memory.
# It takes around 4 days to finish the training. You can adjust
# batch_size and num_workers here, and ngpu in local/train.sh, for your machine.
########################################################### ###########################################################
# FEATURE EXTRACTION SETTING # # FEATURE EXTRACTION SETTING #
########################################################### ###########################################################
...@@ -21,8 +24,8 @@ mlm_prob: 0.8 ...@@ -21,8 +24,8 @@ mlm_prob: 0.8
########################################################### ###########################################################
# DATA SETTING # # DATA SETTING #
########################################################### ###########################################################
batch_size: 20 batch_size: 40
num_workers: 2 num_workers: 8
########################################################### ###########################################################
# MODEL SETTING # # MODEL SETTING #
...@@ -79,7 +82,7 @@ grad_clip: 1.0 ...@@ -79,7 +82,7 @@ grad_clip: 1.0
########################################################### ###########################################################
# TRAINING SETTING # # TRAINING SETTING #
########################################################### ###########################################################
max_epoch: 700 max_epoch: 1500
num_snapshots: 50 num_snapshots: 50
########################################################### ###########################################################
......
...@@ -4,28 +4,11 @@ config_path=$1 ...@@ -4,28 +4,11 @@ config_path=$1
train_output_path=$2 train_output_path=$2
ckpt_name=$3 ckpt_name=$3
stage=1 stage=0
stop_stage=1 stop_stage=0
# pwgan
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize.py \
--erniesat_config=${config_path} \
--erniesat_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--erniesat_stat=dump/train/speech_stats.npy \
--voc=pwgan_aishell3 \
--voc_config=pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
--voc_stat=pwg_aishell3_ckpt_0.5/feats_stats.npy \
--test_metadata=dump/test/norm/metadata.jsonl \
--output_dir=${train_output_path}/test \
--phones_dict=dump/phone_id_map.txt
fi
# hifigan # hifigan
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \ FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \ FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize.py \ python3 ${BIN_DIR}/synthesize.py \
......
# not ready yet
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
stage=0
stop_stage=1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
echo 'speech cross language from en to zh !'
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--task_name=synthesize \
--wav_path=source/p243_313.wav \
--old_str='For that reason cover should not be given' \
--new_str='今天天气很好' \
--source_lang=en \
--target_lang=zh \
--erniesat_config=${config_path} \
--phones_dict=dump/phone_id_map.txt \
--erniesat_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--erniesat_stat=dump/train/speech_stats.npy \
--voc=hifigan_aishell3 \
--voc_config=hifigan_aishell3_ckpt_0.2.0/default.yaml \
--voc_ckpt=hifigan_aishell3_ckpt_0.2.0/snapshot_iter_2500000.pdz \
--voc_stat=hifigan_aishell3_ckpt_0.2.0/feats_stats.npy \
--output_name=exp/pred_clone_en_zh.wav
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
echo 'speech cross language from zh to en !'
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--task_name=synthesize \
--wav_path=source/SSB03540307.wav \
--old_str='请播放歌曲小苹果' \
--new_str="Thank you" \
--source_lang=zh \
--target_lang=en \
--erniesat_config=${config_path} \
--phones_dict=dump/phone_id_map.txt \
--erniesat_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--erniesat_stat=dump/train/speech_stats.npy \
--voc=hifigan_aishell3 \
--voc_config=hifigan_aishell3_ckpt_0.2.0/default.yaml \
--voc_ckpt=hifigan_aishell3_ckpt_0.2.0/snapshot_iter_2500000.pdz \
--voc_stat=hifigan_aishell3_ckpt_0.2.0/feats_stats.npy \
--output_name=exp/pred_clone_zh_en.wav
fi
...@@ -8,5 +8,5 @@ python3 ${BIN_DIR}/train.py \ ...@@ -8,5 +8,5 @@ python3 ${BIN_DIR}/train.py \
--dev-metadata=dump/dev/norm/metadata.jsonl \ --dev-metadata=dump/dev/norm/metadata.jsonl \
--config=${config_path} \ --config=${config_path} \
--output-dir=${train_output_path} \ --output-dir=${train_output_path} \
--ngpu=2 \ --ngpu=8 \
--phones-dict=dump/phone_id_map.txt --phones-dict=dump/phone_id_map.txt
\ No newline at end of file
...@@ -3,13 +3,13 @@ ...@@ -3,13 +3,13 @@
set -e set -e
source path.sh source path.sh
gpus=0,1 gpus=0,1,2,3,4,5,6,7
stage=0 stage=0
stop_stage=100 stop_stage=100
conf_path=conf/default.yaml conf_path=conf/default.yaml
train_output_path=exp/default train_output_path=exp/default
ckpt_name=snapshot_iter_153.pdz ckpt_name=snapshot_iter_489000.pdz
# with the following command, you can choose the stage range you want to run # with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0` # such as `./run.sh --stage 0 --stop-stage 0`
...@@ -30,3 +30,7 @@ if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then ...@@ -30,3 +30,7 @@ if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
# synthesize, vocoder is pwgan # synthesize, vocoder is pwgan
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1 CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi
# CSMSC # CSMSC
* tts0 - Tactron2 * tts0 - Tacotron2
* tts1 - TransformerTTS * tts1 - TransformerTTS
* tts2 - SpeedySpeech * tts2 - SpeedySpeech
* tts3 - FastSpeech2 * tts3 - FastSpeech2
......
...@@ -46,8 +46,8 @@ fi ...@@ -46,8 +46,8 @@ fi
if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then
# install paddle2onnx # install paddle2onnx
version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}') version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}')
if [[ -z "$version" || ${version} != '0.9.8' ]]; then if [[ -z "$version" || ${version} != '1.0.0' ]]; then
pip install paddle2onnx==0.9.8 pip install paddle2onnx==1.0.0
fi fi
./local/paddle2onnx.sh ${train_output_path} inference inference_onnx speedyspeech_csmsc ./local/paddle2onnx.sh ${train_output_path} inference inference_onnx speedyspeech_csmsc
# considering the balance between speed and quality, we recommend that you use hifigan as vocoder # considering the balance between speed and quality, we recommend that you use hifigan as vocoder
......
...@@ -46,8 +46,8 @@ fi ...@@ -46,8 +46,8 @@ fi
if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then
# install paddle2onnx # install paddle2onnx
version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}') version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}')
if [[ -z "$version" || ${version} != '0.9.8' ]]; then if [[ -z "$version" || ${version} != '1.0.0' ]]; then
pip install paddle2onnx==0.9.8 pip install paddle2onnx==1.0.0
fi fi
./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_csmsc ./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_csmsc
# considering the balance between speed and quality, we recommend that you use hifigan as vocoder # considering the balance between speed and quality, we recommend that you use hifigan as vocoder
......
...@@ -59,8 +59,8 @@ fi ...@@ -59,8 +59,8 @@ fi
if [ ${stage} -le 7 ] && [ ${stop_stage} -ge 7 ]; then if [ ${stage} -le 7 ] && [ ${stop_stage} -ge 7 ]; then
# install paddle2onnx # install paddle2onnx
version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}') version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}')
if [[ -z "$version" || ${version} != '0.9.8' ]]; then if [[ -z "$version" || ${version} != '1.0.0' ]]; then
pip install paddle2onnx==0.9.8 pip install paddle2onnx==1.0.0
fi fi
./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_csmsc ./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_csmsc
# considering the balance between speed and quality, we recommend that you use hifigan as vocoder # considering the balance between speed and quality, we recommend that you use hifigan as vocoder
...@@ -79,8 +79,8 @@ fi ...@@ -79,8 +79,8 @@ fi
if [ ${stage} -le 9 ] && [ ${stop_stage} -ge 9 ]; then if [ ${stage} -le 9 ] && [ ${stop_stage} -ge 9 ]; then
# install paddle2onnx # install paddle2onnx
version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}') version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}')
if [[ -z "$version" || ${version} != '0.9.8' ]]; then if [[ -z "$version" || ${version} != '1.0.0' ]]; then
pip install paddle2onnx==0.9.8 pip install paddle2onnx==1.0.0
fi fi
# streaming acoustic model # streaming acoustic model
./local/paddle2onnx.sh ${train_output_path} inference_streaming inference_onnx_streaming fastspeech2_csmsc_am_encoder_infer ./local/paddle2onnx.sh ${train_output_path} inference_streaming inference_onnx_streaming fastspeech2_csmsc_am_encoder_infer
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
set -e set -e
source path.sh source path.sh
gpus=0,1 gpus=0,1,2,3
stage=0 stage=0
stop_stage=100 stop_stage=100
......
ERNIE-SAT is a cross-lingual, cross-modal speech-language pretrained model that handles both Chinese and English. It achieves leading results on tasks such as speech editing, personalized speech synthesis, and cross-lingual speech synthesis, and can be applied to a range of scenarios including speech editing, personalized synthesis, voice cloning, and simultaneous interpretation. This project is provided for research use.
## Model Framework
In ERNIE-SAT, we propose two innovations:
- In the pretraining process, the phonemes corresponding to Chinese and English are used as input to achieve cross-lingual and personalized soft phoneme mapping
- Joint masked learning of speech and text is used to realize the alignment of speech and text
![ERNIE-SAT framework](.meta/framework.png)
## Usage
### 1. Install PaddlePaddle and Dependencies
- The code of this project is based on Paddle (version>=2.0)
- This project also provides the option of loading a torch version of the vocoder
- torch version>=1.8
- Install HTK: after registering at the [official site](https://htk.eng.cam.ac.uk/), you can download a recent version of HTK (e.g. 3.4.1). Historical versions are also available from the [HTK download archive](https://htk.eng.cam.ac.uk/ftp/software/)
- 1. Register an account and download htk
- 2. Extract the htk archive and **put it into the tools folder in the project root, under the folder name htk**
- 3. **Note**: if you downloaded version 3.4.1 or later, you need to edit the HTKLib/HRec.c file and **modify lines 1626 and 1650**, changing **dur<=0 to dur<0 in both lines**, as shown below:
```bash
Taking htk 3.4.1 as an example:
(1) Line 1626: if (dur<=0 && labid != splabid) HError(8522,"LatFromPaths: Align have dur<=0");
Change to: if (dur<0 && labid != splabid) HError(8522,"LatFromPaths: Align have dur<0");
(2) Line 1650: if (dur<=0 && labid != splabid) HError(8522,"LatFromPaths: Align have dur<=0 ");
Change to: if (dur<0 && labid != splabid) HError(8522,"LatFromPaths: Align have dur<0 ");
```
- 4. **Compile**: see the README file inside the extracted htk directory for details (HTK cannot run properly unless it is compiled)
- Install ParallelWaveGAN: see the [official repo](https://github.com/kan-bayashi/ParallelWaveGAN); following its installation instructions, simply git clone the ParallelWaveGAN project **in the project root** and install its dependencies.
- Install other dependencies: **sox, libsndfile**
### 2. Pretrained Models
The pretrained ERNIE-SAT models are listed below:
- [ERNIE-SAT_ZH](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/old/model-ernie-sat-base-zh.tar.gz)
- [ERNIE-SAT_EN](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/old/model-ernie-sat-base-en.tar.gz)
- [ERNIE-SAT_ZH_and_EN](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/old/model-ernie-sat-base-en_zh.tar.gz)
Create a pretrained_model folder, download the ERNIE-SAT pretrained models above, and extract them:
```bash
mkdir pretrained_model
cd pretrained_model
tar -zxvf model-ernie-sat-base-en.tar.gz
tar -zxvf model-ernie-sat-base-zh.tar.gz
tar -zxvf model-ernie-sat-base-en_zh.tar.gz
```
### 3. Downloads
1. This project uses Parallel WaveGAN as the vocoder:
- [pwg_aishell3_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/pwgan/pwg_aishell3_ckpt_0.5.zip)
Create a download folder, download the pretrained vocoder model above, and extract it:
```bash
mkdir download
cd download
unzip pwg_aishell3_ckpt_0.5.zip
```
2. This project uses [FastSpeech2](https://arxiv.org/abs/2006.04558) as the phoneme duration predictor:
- [fastspeech2_conformer_baker_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_conformer_baker_ckpt_0.5.zip) used for Chinese
- [fastspeech2_nosil_ljspeech_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_ljspeech_ckpt_0.5.zip) used for English
Download the pretrained fastspeech2 models above and extract them:
```bash
cd download
unzip fastspeech2_conformer_baker_ckpt_0.5.zip
unzip fastspeech2_nosil_ljspeech_ckpt_0.5.zip
```
3. This project uses HTK to obtain alignment information between the input audio and text:
- [aligner.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/old/aligner.zip)
Download the file above into the tools folder and extract it:
```bash
cd tools
unzip aligner.zip
```
### 4. Inference
This project currently open-sources the inference code for speech editing, personalized speech synthesis, and cross-lingual speech synthesis; more will be released later.
Note: the vocoder used for English synthesis currently defaults to vctk_parallel_wavegan.v1.long, which can be found at [this link](https://github.com/kan-bayashi/ParallelWaveGAN); if the use_pt_vocoder parameter is set to False, the paddle version of the vocoder is used for English.
We provide specific audio files together with their corresponding text and phoneme files:
- prompt_wav: the provided audio files
- prompt/dev: the text and phoneme files corresponding to the audio above (a reading sketch follows the listings below)
```text
prompt_wav
├── p299_096.wav # sample audio file 1
├── p243_313.wav # sample audio file 2
└── ...
```
```text
prompt/dev
├── text # transcripts of the sample audio
├── wav.scp # paths of the sample audio
├── mfa_text # phonemes of the sample audio
├── mfa_start # start time of each phoneme in the sample audio
└── mfa_end # end time of each phoneme in the sample audio
```
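For illustration, a minimal sketch of reading these files, assuming each one stores one utterance per line (the utterance id followed by its value(s)):
```python
def read_2col(path):
    """Read a 'uid value...' file into a dict of uid -> value string."""
    table = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            uid, value = line.strip().split(maxsplit=1)
            table[uid] = value
    return table

texts = read_2col("prompt/dev/text")         # uid -> transcript
wav_paths = read_2col("prompt/dev/wav.scp")  # uid -> wav path
mfa_start = {uid: [float(t) for t in v.split()]  # uid -> phoneme start times (s)
             for uid, v in read_2col("prompt/dev/mfa_start").items()}
```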
1. `--am` is the acoustic model, in the format {model_name}_{dataset}
2. `--am_config`, `--am_checkpoint`, `--am_stat`, and `--phones_dict` are parameters of the acoustic model, corresponding to the 4 files in the fastspeech2 pretrained model.
3. `--voc` is the vocoder, in the format {model_name}_{dataset}
4. `--voc_config`, `--voc_checkpoint`, and `--voc_stat` are parameters of the vocoder, corresponding to the 3 files in the parallel wavegan pretrained model.
5. `--lang` is the language of the model, either `zh` or `en`.
6. `--ngpu` is the number of GPUs to use; if ngpu==0, the CPU is used.
7. `--model_name` is the model name
8. `--uid` is the id of the specific prompt audio
9. `--new_str` is the input text (restricted to specific texts in this release for now)
10. `--prefix` is the path of the text and phoneme files corresponding to the specific audio
11. `--source_lang`, the source language
12. `--target_lang`, the target language
13. `--output_name`, the name of the synthesized audio file
14. `--task_name`, the task name, one of: speech editing, personalized speech synthesis, and cross-lingual speech synthesis
Run the following scripts to run the experiments:
```shell
./run_sedit_en.sh # speech editing task (English)
./run_gen_en.sh # personalized speech synthesis task (English)
./run_clone_en_to_zh.sh # cross-lingual speech synthesis task (voice cloning from English to Chinese)
```
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Usage:
align.py wavfile trsfile outwordfile outphonefile
"""
import os
import sys
PHONEME = 'tools/aligner/english_envir/english2phoneme/phoneme'
MODEL_DIR_EN = 'tools/aligner/english'
MODEL_DIR_ZH = 'tools/aligner/mandarin'
HVITE = 'tools/htk/HTKTools/HVite'
HCOPY = 'tools/htk/HTKTools/HCopy'
def get_unk_phns(word_str: str):
tmpbase = '/tmp/tp.'
f = open(tmpbase + 'temp.words', 'w')
f.write(word_str)
f.close()
os.system(PHONEME + ' ' + tmpbase + 'temp.words' + ' ' + tmpbase +
'temp.phons')
f = open(tmpbase + 'temp.phons', 'r')
lines2 = f.readline().strip().split()
f.close()
phns = []
for phn in lines2:
phons = phn.replace('\n', '').replace(' ', '')
seq = []
j = 0
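        # Map the letter-to-sound output to ARPAbet-like phones: lowercase
        # letters are single-character phones ('j' -> JH, 'h' -> HH, others are
        # simply uppercased); anything else is read as a two-character code,
        # with 'AX' mapped to 'AH0' and other vowels given a default stress '1'.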
while (j < len(phons)):
if (phons[j] > 'Z'):
if (phons[j] == 'j'):
seq.append('JH')
elif (phons[j] == 'h'):
seq.append('HH')
else:
seq.append(phons[j].upper())
j += 1
else:
p = phons[j:j + 2]
if (p == 'WH'):
seq.append('W')
elif (p in ['TH', 'SH', 'HH', 'DH', 'CH', 'ZH', 'NG']):
seq.append(p)
elif (p == 'AX'):
seq.append('AH0')
else:
seq.append(p + '1')
j += 2
phns.extend(seq)
return phns
def words2phns(line: str):
'''
Args:
line (str): input text.
eg: for that reason cover is impossible to be given.
Returns:
List[str]: phones of input text.
eg:
['F', 'AO1', 'R', 'DH', 'AE1', 'T', 'R', 'IY1', 'Z', 'AH0', 'N', 'K', 'AH1', 'V', 'ER0',
'IH1', 'Z', 'IH2', 'M', 'P', 'AA1', 'S', 'AH0', 'B', 'AH0', 'L', 'T', 'UW1', 'B', 'IY1',
'G', 'IH1', 'V', 'AH0', 'N']
Dict(str, str): key - idx_word
value - phones
eg:
{'0_FOR': ['F', 'AO1', 'R'], '1_THAT': ['DH', 'AE1', 'T'], '2_REASON': ['R', 'IY1', 'Z', 'AH0', 'N'],
'3_COVER': ['K', 'AH1', 'V', 'ER0'], '4_IS': ['IH1', 'Z'], '5_IMPOSSIBLE': ['IH2', 'M', 'P', 'AA1', 'S', 'AH0', 'B', 'AH0', 'L'],
'6_TO': ['T', 'UW1'], '7_BE': ['B', 'IY1'], '8_GIVEN': ['G', 'IH1', 'V', 'AH0', 'N']}
'''
dictfile = MODEL_DIR_EN + '/dict'
line = line.strip()
words = []
for pun in [',', '.', ':', ';', '!', '?', '"', '(', ')', '--', '---']:
line = line.replace(pun, ' ')
for wrd in line.split():
if (wrd[-1] == '-'):
wrd = wrd[:-1]
if (wrd[0] == "'"):
wrd = wrd[1:]
if wrd:
words.append(wrd)
ds = set([])
word2phns_dict = {}
with open(dictfile, 'r') as fid:
for line in fid:
word = line.split()[0]
ds.add(word)
if word not in word2phns_dict.keys():
word2phns_dict[word] = " ".join(line.split()[1:])
phns = []
wrd2phns = {}
for index, wrd in enumerate(words):
if wrd == '[MASK]':
wrd2phns[str(index) + "_" + wrd] = [wrd]
phns.append(wrd)
elif (wrd.upper() not in ds):
wrd2phns[str(index) + "_" + wrd.upper()] = get_unk_phns(wrd)
phns.extend(get_unk_phns(wrd))
else:
wrd2phns[str(index) +
"_" + wrd.upper()] = word2phns_dict[wrd.upper()].split()
phns.extend(word2phns_dict[wrd.upper()].split())
return phns, wrd2phns
def words2phns_zh(line: str):
dictfile = MODEL_DIR_ZH + '/dict'
line = line.strip()
words = []
for pun in [
',', '.', ':', ';', '!', '?', '"', '(', ')', '--', '---', u',',
u'。', u':', u';', u'!', u'?', u'(', u')'
]:
line = line.replace(pun, ' ')
for wrd in line.split():
if (wrd[-1] == '-'):
wrd = wrd[:-1]
if (wrd[0] == "'"):
wrd = wrd[1:]
if wrd:
words.append(wrd)
ds = set([])
word2phns_dict = {}
with open(dictfile, 'r') as fid:
for line in fid:
word = line.split()[0]
ds.add(word)
if word not in word2phns_dict.keys():
word2phns_dict[word] = " ".join(line.split()[1:])
phns = []
wrd2phns = {}
for index, wrd in enumerate(words):
if wrd == '[MASK]':
wrd2phns[str(index) + "_" + wrd] = [wrd]
phns.append(wrd)
elif (wrd.upper() not in ds):
print("出现非法词错误,请输入正确的文本...")
else:
wrd2phns[str(index) + "_" + wrd] = word2phns_dict[wrd].split()
phns.extend(word2phns_dict[wrd].split())
return phns, wrd2phns
def prep_txt_zh(line: str, tmpbase: str, dictfile: str):
words = []
line = line.strip()
for pun in [
',', '.', ':', ';', '!', '?', '"', '(', ')', '--', '---', u',',
u'。', u':', u';', u'!', u'?', u'(', u')'
]:
line = line.replace(pun, ' ')
for wrd in line.split():
if (wrd[-1] == '-'):
wrd = wrd[:-1]
if (wrd[0] == "'"):
wrd = wrd[1:]
if wrd:
words.append(wrd)
ds = set([])
with open(dictfile, 'r') as fid:
for line in fid:
ds.add(line.split()[0])
unk_words = set([])
with open(tmpbase + '.txt', 'w') as fwid:
for wrd in words:
if (wrd not in ds):
unk_words.add(wrd)
fwid.write(wrd + ' ')
fwid.write('\n')
return unk_words
def prep_txt_en(line: str, tmpbase, dictfile):
words = []
line = line.strip()
for pun in [',', '.', ':', ';', '!', '?', '"', '(', ')', '--', '---']:
line = line.replace(pun, ' ')
for wrd in line.split():
if (wrd[-1] == '-'):
wrd = wrd[:-1]
if (wrd[0] == "'"):
wrd = wrd[1:]
if wrd:
words.append(wrd)
ds = set([])
with open(dictfile, 'r') as fid:
for line in fid:
ds.add(line.split()[0])
unk_words = set([])
with open(tmpbase + '.txt', 'w') as fwid:
for wrd in words:
if (wrd.upper() not in ds):
unk_words.add(wrd.upper())
fwid.write(wrd + ' ')
fwid.write('\n')
    #generate pronunciations for unknown words using 'letter to sound'
with open(tmpbase + '_unk.words', 'w') as fwid:
for unk in unk_words:
fwid.write(unk + '\n')
try:
os.system(PHONEME + ' ' + tmpbase + '_unk.words' + ' ' + tmpbase +
'_unk.phons')
except Exception:
print('english2phoneme error!')
sys.exit(1)
#add unknown words to the standard dictionary, generate a tmp dictionary for alignment
fw = open(tmpbase + '.dict', 'w')
with open(dictfile, 'r') as fid:
for line in fid:
fw.write(line)
f = open(tmpbase + '_unk.words', 'r')
lines1 = f.readlines()
f.close()
f = open(tmpbase + '_unk.phons', 'r')
lines2 = f.readlines()
f.close()
for i in range(len(lines1)):
wrd = lines1[i].replace('\n', '')
phons = lines2[i].replace('\n', '').replace(' ', '')
seq = []
j = 0
while (j < len(phons)):
if (phons[j] > 'Z'):
if (phons[j] == 'j'):
seq.append('JH')
elif (phons[j] == 'h'):
seq.append('HH')
else:
seq.append(phons[j].upper())
j += 1
else:
p = phons[j:j + 2]
if (p == 'WH'):
seq.append('W')
elif (p in ['TH', 'SH', 'HH', 'DH', 'CH', 'ZH', 'NG']):
seq.append(p)
elif (p == 'AX'):
seq.append('AH0')
else:
seq.append(p + '1')
j += 2
fw.write(wrd + ' ')
for s in seq:
fw.write(' ' + s)
fw.write('\n')
fw.close()
def prep_mlf(txt: str, tmpbase: str):
with open(tmpbase + '.mlf', 'w') as fwid:
fwid.write('#!MLF!#\n')
fwid.write('"' + tmpbase + '.lab"\n')
fwid.write('sp\n')
wrds = txt.split()
for wrd in wrds:
fwid.write(wrd.upper() + '\n')
fwid.write('sp\n')
fwid.write('.\n')
def _get_user():
return os.path.expanduser('~').split("/")[-1]
def alignment(wav_path: str, text: str):
'''
intervals: List[phn, start, end]
'''
tmpbase = '/tmp/' + _get_user() + '_' + str(os.getpid())
#prepare wav and trs files
try:
os.system('sox ' + wav_path + ' -r 16000 ' + tmpbase + '.wav remix -')
except Exception:
print('sox error!')
return None
#prepare clean_transcript file
try:
prep_txt_en(line=text, tmpbase=tmpbase, dictfile=MODEL_DIR_EN + '/dict')
except Exception:
print('prep_txt error!')
return None
#prepare mlf file
try:
with open(tmpbase + '.txt', 'r') as fid:
txt = fid.readline()
prep_mlf(txt, tmpbase)
except Exception:
print('prep_mlf error!')
return None
#prepare scp
try:
os.system(HCOPY + ' -C ' + MODEL_DIR_EN + '/16000/config ' + tmpbase +
'.wav' + ' ' + tmpbase + '.plp')
except Exception:
print('HCopy error!')
return None
#run alignment
try:
os.system(HVITE + ' -a -m -t 10000.0 10000.0 100000.0 -I ' + tmpbase +
'.mlf -H ' + MODEL_DIR_EN + '/16000/macros -H ' + MODEL_DIR_EN
+ '/16000/hmmdefs -i ' + tmpbase + '.aligned ' + tmpbase +
'.dict ' + MODEL_DIR_EN + '/monophones ' + tmpbase +
'.plp 2>&1 > /dev/null')
except Exception:
print('HVite error!')
return None
with open(tmpbase + '.txt', 'r') as fid:
words = fid.readline().strip().split()
words = txt.strip().split()
words.reverse()
with open(tmpbase + '.aligned', 'r') as fid:
lines = fid.readlines()
i = 2
intervals = []
word2phns = {}
current_word = ''
index = 0
while (i < len(lines)):
splited_line = lines[i].strip().split()
if (len(splited_line) >= 4) and (splited_line[0] != splited_line[1]):
phn = splited_line[2]
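            # HTK reports start/end times in 100 ns units; the expressions below
            # convert them to seconds (the +125 appears to compensate for the
            # 12.5 ms frame shift of the analysis window).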
pst = (int(splited_line[0]) / 1000 + 125) / 10000
pen = (int(splited_line[1]) / 1000 + 125) / 10000
intervals.append([phn, pst, pen])
# splited_line[-1]!='sp'
if len(splited_line) == 5:
current_word = str(index) + '_' + splited_line[-1]
word2phns[current_word] = phn
index += 1
elif len(splited_line) == 4:
word2phns[current_word] += ' ' + phn
i += 1
return intervals, word2phns
def alignment_zh(wav_path: str, text: str):
tmpbase = '/tmp/' + _get_user() + '_' + str(os.getpid())
#prepare wav and trs files
try:
os.system('sox ' + wav_path + ' -r 16000 -b 16 ' + tmpbase +
'.wav remix -')
except Exception:
print('sox error!')
return None
#prepare clean_transcript file
try:
unk_words = prep_txt_zh(
line=text, tmpbase=tmpbase, dictfile=MODEL_DIR_ZH + '/dict')
if unk_words:
print('Error! Please add the following words to dictionary:')
for unk in unk_words:
print("非法words: ", unk)
except Exception:
print('prep_txt error!')
return None
#prepare mlf file
try:
with open(tmpbase + '.txt', 'r') as fid:
txt = fid.readline()
prep_mlf(txt, tmpbase)
except Exception:
print('prep_mlf error!')
return None
#prepare scp
try:
os.system(HCOPY + ' -C ' + MODEL_DIR_ZH + '/16000/config ' + tmpbase +
'.wav' + ' ' + tmpbase + '.plp')
except Exception:
print('HCopy error!')
return None
#run alignment
try:
os.system(HVITE + ' -a -m -t 10000.0 10000.0 100000.0 -I ' + tmpbase +
'.mlf -H ' + MODEL_DIR_ZH + '/16000/macros -H ' + MODEL_DIR_ZH
+ '/16000/hmmdefs -i ' + tmpbase + '.aligned ' + MODEL_DIR_ZH
+ '/dict ' + MODEL_DIR_ZH + '/monophones ' + tmpbase +
'.plp 2>&1 > /dev/null')
except Exception:
print('HVite error!')
return None
with open(tmpbase + '.txt', 'r') as fid:
words = fid.readline().strip().split()
words = txt.strip().split()
words.reverse()
with open(tmpbase + '.aligned', 'r') as fid:
lines = fid.readlines()
i = 2
intervals = []
word2phns = {}
current_word = ''
index = 0
while (i < len(lines)):
splited_line = lines[i].strip().split()
if (len(splited_line) >= 4) and (splited_line[0] != splited_line[1]):
phn = splited_line[2]
pst = (int(splited_line[0]) / 1000 + 125) / 10000
pen = (int(splited_line[1]) / 1000 + 125) / 10000
intervals.append([phn, pst, pen])
# splited_line[-1]!='sp'
if len(splited_line) == 5:
current_word = str(index) + '_' + splited_line[-1]
word2phns[current_word] = phn
index += 1
elif len(splited_line) == 4:
word2phns[current_word] += ' ' + phn
i += 1
return intervals, word2phns
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import random
from typing import Dict
from typing import List
import librosa
import numpy as np
import paddle
import soundfile as sf
from align import alignment
from align import alignment_zh
from align import words2phns
from align import words2phns_zh
from paddle import nn
from sedit_arg_parser import parse_args
from utils import eval_durs
from utils import get_voc_out
from utils import is_chinese
from utils import load_num_sequence_text
from utils import read_2col_text
from paddlespeech.t2s.datasets.am_batch_fn import build_mlm_collate_fn
from paddlespeech.t2s.models.ernie_sat.mlm import build_model_from_file
random.seed(0)
np.random.seed(0)
def get_wav(wav_path: str,
source_lang: str='english',
target_lang: str='english',
model_name: str="paddle_checkpoint_en",
old_str: str="",
new_str: str="",
non_autoreg: bool=True):
wav_org, output_feat, old_span_bdy, new_span_bdy, fs, hop_length = get_mlm_output(
source_lang=source_lang,
target_lang=target_lang,
model_name=model_name,
wav_path=wav_path,
old_str=old_str,
new_str=new_str,
use_teacher_forcing=non_autoreg)
masked_feat = output_feat[new_span_bdy[0]:new_span_bdy[1]]
alt_wav = get_voc_out(masked_feat)
old_time_bdy = [hop_length * x for x in old_span_bdy]
wav_replaced = np.concatenate(
[wav_org[:old_time_bdy[0]], alt_wav, wav_org[old_time_bdy[1]:]])
data_dict = {"origin": wav_org, "output": wav_replaced}
return data_dict
def load_model(model_name: str="paddle_checkpoint_en"):
config_path = './pretrained_model/{}/config.yaml'.format(model_name)
model_path = './pretrained_model/{}/model.pdparams'.format(model_name)
mlm_model, conf = build_model_from_file(
config_file=config_path, model_file=model_path)
return mlm_model, conf
def read_data(uid: str, prefix: os.PathLike):
    # get the text corresponding to uid
mfa_text = read_2col_text(prefix + '/text')[uid]
    # get the wav path corresponding to uid
mfa_wav_path = read_2col_text(prefix + '/wav.scp')[uid]
if not os.path.isabs(mfa_wav_path):
mfa_wav_path = prefix + mfa_wav_path
return mfa_text, mfa_wav_path
def get_align_data(uid: str, prefix: os.PathLike):
mfa_path = prefix + "mfa_"
mfa_text = read_2col_text(mfa_path + 'text')[uid]
mfa_start = load_num_sequence_text(
mfa_path + 'start', loader_type='text_float')[uid]
mfa_end = load_num_sequence_text(
mfa_path + 'end', loader_type='text_float')[uid]
mfa_wav_path = read_2col_text(mfa_path + 'wav.scp')[uid]
return mfa_text, mfa_start, mfa_end, mfa_wav_path
# get the range of mel frames that need to be masked
def get_masked_mel_bdy(mfa_start: List[float],
mfa_end: List[float],
fs: int,
hop_length: int,
span_to_repl: List[List[int]]):
align_start = np.array(mfa_start)
align_end = np.array(mfa_end)
align_start = np.floor(fs * align_start / hop_length).astype('int')
align_end = np.floor(fs * align_end / hop_length).astype('int')
if span_to_repl[0] >= len(mfa_start):
span_bdy = [align_end[-1], align_end[-1]]
else:
span_bdy = [
align_start[span_to_repl[0]], align_end[span_to_repl[1] - 1]
]
return span_bdy, align_start, align_end
def recover_dict(word2phns: Dict[str, str], tp_word2phns: Dict[str, str]):
dic = {}
keys_to_del = []
exist_idx = []
sp_count = 0
add_sp_count = 0
for key in word2phns.keys():
idx, wrd = key.split('_')
if wrd == 'sp':
sp_count += 1
exist_idx.append(int(idx))
else:
keys_to_del.append(key)
for key in keys_to_del:
del word2phns[key]
cur_id = 0
for key in tp_word2phns.keys():
if cur_id in exist_idx:
dic[str(cur_id) + "_sp"] = 'sp'
cur_id += 1
add_sp_count += 1
idx, wrd = key.split('_')
dic[str(cur_id) + "_" + wrd] = tp_word2phns[key]
cur_id += 1
if add_sp_count + 1 == sp_count:
dic[str(cur_id) + "_sp"] = 'sp'
add_sp_count += 1
assert add_sp_count == sp_count, "sp are not added in dic"
return dic
def get_max_idx(dic):
return sorted([int(key.split('_')[0]) for key in dic.keys()])[-1]
def get_phns_and_spans(wav_path: str,
old_str: str="",
new_str: str="",
source_lang: str="english",
target_lang: str="english"):
is_append = (old_str == new_str[:len(old_str)])
old_phns, mfa_start, mfa_end = [], [], []
# source
if source_lang == "english":
intervals, word2phns = alignment(wav_path, old_str)
elif source_lang == "chinese":
intervals, word2phns = alignment_zh(wav_path, old_str)
_, tp_word2phns = words2phns_zh(old_str)
for key, value in tp_word2phns.items():
idx, wrd = key.split('_')
cur_val = " ".join(value)
tp_word2phns[key] = cur_val
word2phns = recover_dict(word2phns, tp_word2phns)
else:
assert source_lang == "chinese" or source_lang == "english", \
"source_lang is wrong..."
for item in intervals:
old_phns.append(item[0])
mfa_start.append(float(item[1]))
mfa_end.append(float(item[2]))
# target
if is_append and (source_lang != target_lang):
cross_lingual_clone = True
else:
cross_lingual_clone = False
if cross_lingual_clone:
str_origin = new_str[:len(old_str)]
str_append = new_str[len(old_str):]
if target_lang == "chinese":
phns_origin, origin_word2phns = words2phns(str_origin)
phns_append, append_word2phns_tmp = words2phns_zh(str_append)
elif target_lang == "english":
            # original sentence
phns_origin, origin_word2phns = words2phns_zh(str_origin)
            # sentence to be cloned
phns_append, append_word2phns_tmp = words2phns(str_append)
else:
assert target_lang == "chinese" or target_lang == "english", \
"cloning is not support for this language, please check it."
new_phns = phns_origin + phns_append
append_word2phns = {}
length = len(origin_word2phns)
for key, value in append_word2phns_tmp.items():
idx, wrd = key.split('_')
append_word2phns[str(int(idx) + length) + '_' + wrd] = value
new_word2phns = origin_word2phns.copy()
new_word2phns.update(append_word2phns)
else:
if source_lang == target_lang and target_lang == "english":
new_phns, new_word2phns = words2phns(new_str)
elif source_lang == target_lang and target_lang == "chinese":
new_phns, new_word2phns = words2phns_zh(new_str)
else:
assert source_lang == target_lang, \
"source language is not same with target language..."
span_to_repl = [0, len(old_phns) - 1]
span_to_add = [0, len(new_phns) - 1]
left_idx = 0
new_phns_left = []
sp_count = 0
# find the left different index
for key in word2phns.keys():
idx, wrd = key.split('_')
if wrd == 'sp':
sp_count += 1
new_phns_left.append('sp')
else:
idx = str(int(idx) - sp_count)
if idx + '_' + wrd in new_word2phns:
left_idx += len(new_word2phns[idx + '_' + wrd])
new_phns_left.extend(word2phns[key].split())
else:
span_to_repl[0] = len(new_phns_left)
span_to_add[0] = len(new_phns_left)
break
# reverse word2phns and new_word2phns
right_idx = 0
new_phns_right = []
sp_count = 0
word2phns_max_idx = get_max_idx(word2phns)
new_word2phns_max_idx = get_max_idx(new_word2phns)
new_phns_mid = []
if is_append:
new_phns_right = []
new_phns_mid = new_phns[left_idx:]
span_to_repl[0] = len(new_phns_left)
span_to_add[0] = len(new_phns_left)
span_to_add[1] = len(new_phns_left) + len(new_phns_mid)
span_to_repl[1] = len(old_phns) - len(new_phns_right)
# speech edit
else:
for key in list(word2phns.keys())[::-1]:
idx, wrd = key.split('_')
if wrd == 'sp':
sp_count += 1
new_phns_right = ['sp'] + new_phns_right
else:
idx = str(new_word2phns_max_idx - (word2phns_max_idx - int(idx)
- sp_count))
if idx + '_' + wrd in new_word2phns:
right_idx -= len(new_word2phns[idx + '_' + wrd])
new_phns_right = word2phns[key].split() + new_phns_right
else:
span_to_repl[1] = len(old_phns) - len(new_phns_right)
new_phns_mid = new_phns[left_idx:right_idx]
span_to_add[1] = len(new_phns_left) + len(new_phns_mid)
if len(new_phns_mid) == 0:
span_to_add[1] = min(span_to_add[1] + 1, len(new_phns))
span_to_add[0] = max(0, span_to_add[0] - 1)
span_to_repl[0] = max(0, span_to_repl[0] - 1)
span_to_repl[1] = min(span_to_repl[1] + 1,
len(old_phns))
break
new_phns = new_phns_left + new_phns_mid + new_phns_right
'''
For that reason cover should not be given.
For that reason cover is impossible to be given.
span_to_repl: [17, 23] "should not"
span_to_add: [17, 30] "is impossible to"
'''
return mfa_start, mfa_end, old_phns, new_phns, span_to_repl, span_to_add
# the durations obtained from MFA and those from the fs2 duration_predictor may differ
# here we compute a scaling factor between the predicted and the ground-truth durations
def get_dur_adj_factor(orig_dur: List[int],
pred_dur: List[int],
phns: List[str]):
length = 0
factor_list = []
for orig, pred, phn in zip(orig_dur, pred_dur, phns):
if pred == 0 or phn == 'sp':
continue
else:
factor_list.append(orig / pred)
factor_list = np.array(factor_list)
factor_list.sort()
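    # Use a trimmed mean of the original/predicted duration ratios: with fewer
    # than 5 usable phones fall back to a factor of 1, otherwise drop the two
    # smallest and two largest ratios before averaging.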
if len(factor_list) < 5:
return 1
length = 2
avg = np.average(factor_list[length:-length])
return avg
def prep_feats_with_dur(wav_path: str,
source_lang: str="English",
target_lang: str="English",
old_str: str="",
new_str: str="",
mask_reconstruct: bool=False,
duration_adjust: bool=True,
start_end_sp: bool=False,
fs: int=24000,
hop_length: int=300):
'''
Returns:
np.ndarray: new wav, replace the part to be edited in original wav with 0
List[str]: new phones
List[float]: mfa start of new wav
List[float]: mfa end of new wav
List[int]: masked mel boundary of original wav
List[int]: masked mel boundary of new wav
'''
wav_org, _ = librosa.load(wav_path, sr=fs)
mfa_start, mfa_end, old_phns, new_phns, span_to_repl, span_to_add = get_phns_and_spans(
wav_path=wav_path,
old_str=old_str,
new_str=new_str,
source_lang=source_lang,
target_lang=target_lang)
if start_end_sp:
if new_phns[-1] != 'sp':
new_phns = new_phns + ['sp']
    # Chinese phones are not necessarily all in the fastspeech2 dictionary, so sp is used instead
if target_lang == "english" or target_lang == "chinese":
old_durs = eval_durs(old_phns, target_lang=source_lang)
else:
assert target_lang == "chinese" or target_lang == "english", \
"calculate duration_predict is not support for this language..."
orig_old_durs = [e - s for e, s in zip(mfa_end, mfa_start)]
if '[MASK]' in new_str:
new_phns = old_phns
span_to_add = span_to_repl
d_factor_left = get_dur_adj_factor(
orig_dur=orig_old_durs[:span_to_repl[0]],
pred_dur=old_durs[:span_to_repl[0]],
phns=old_phns[:span_to_repl[0]])
d_factor_right = get_dur_adj_factor(
orig_dur=orig_old_durs[span_to_repl[1]:],
pred_dur=old_durs[span_to_repl[1]:],
phns=old_phns[span_to_repl[1]:])
d_factor = (d_factor_left + d_factor_right) / 2
new_durs_adjusted = [d_factor * i for i in old_durs]
else:
if duration_adjust:
d_factor = get_dur_adj_factor(
orig_dur=orig_old_durs, pred_dur=old_durs, phns=old_phns)
d_factor = d_factor * 1.25
else:
d_factor = 1
if target_lang == "english" or target_lang == "chinese":
new_durs = eval_durs(new_phns, target_lang=target_lang)
else:
assert target_lang == "chinese" or target_lang == "english", \
"calculate duration_predict is not support for this language..."
new_durs_adjusted = [d_factor * i for i in new_durs]
new_span_dur_sum = sum(new_durs_adjusted[span_to_add[0]:span_to_add[1]])
old_span_dur_sum = sum(orig_old_durs[span_to_repl[0]:span_to_repl[1]])
dur_offset = new_span_dur_sum - old_span_dur_sum
new_mfa_start = mfa_start[:span_to_repl[0]]
new_mfa_end = mfa_end[:span_to_repl[0]]
for i in new_durs_adjusted[span_to_add[0]:span_to_add[1]]:
if len(new_mfa_end) == 0:
new_mfa_start.append(0)
new_mfa_end.append(i)
else:
new_mfa_start.append(new_mfa_end[-1])
new_mfa_end.append(new_mfa_end[-1] + i)
new_mfa_start += [i + dur_offset for i in mfa_start[span_to_repl[1]:]]
new_mfa_end += [i + dur_offset for i in mfa_end[span_to_repl[1]:]]
# 3. get new wav
    # appending after the original sentence
if span_to_repl[0] >= len(mfa_start):
left_idx = len(wav_org)
right_idx = left_idx
    # replacing within the original sentence
else:
left_idx = int(np.floor(mfa_start[span_to_repl[0]] * fs))
right_idx = int(np.ceil(mfa_end[span_to_repl[1] - 1] * fs))
blank_wav = np.zeros(
(int(np.ceil(new_span_dur_sum * fs)), ), dtype=wav_org.dtype)
    # in the original audio, the part to be edited is replaced with blank audio whose length is decided by the fs2 duration_predictor
new_wav = np.concatenate(
[wav_org[:left_idx], blank_wav, wav_org[right_idx:]])
# 4. get old and new mel span to be mask
# [92, 92]
old_span_bdy, mfa_start, mfa_end = get_masked_mel_bdy(
mfa_start=mfa_start,
mfa_end=mfa_end,
fs=fs,
hop_length=hop_length,
span_to_repl=span_to_repl)
# [92, 174]
    # new_mfa_start, new_mfa_end: time-level start/end times -> frame level
new_span_bdy, new_mfa_start, new_mfa_end = get_masked_mel_bdy(
mfa_start=new_mfa_start,
mfa_end=new_mfa_end,
fs=fs,
hop_length=hop_length,
span_to_repl=span_to_add)
    # old_span_bdy and new_span_bdy are frame-level ranges
return new_wav, new_phns, new_mfa_start, new_mfa_end, old_span_bdy, new_span_bdy
def prep_feats(wav_path: str,
source_lang: str="english",
target_lang: str="english",
old_str: str="",
new_str: str="",
duration_adjust: bool=True,
start_end_sp: bool=False,
mask_reconstruct: bool=False,
fs: int=24000,
hop_length: int=300,
token_list: List[str]=[]):
wav, phns, mfa_start, mfa_end, old_span_bdy, new_span_bdy = prep_feats_with_dur(
source_lang=source_lang,
target_lang=target_lang,
old_str=old_str,
new_str=new_str,
wav_path=wav_path,
duration_adjust=duration_adjust,
start_end_sp=start_end_sp,
mask_reconstruct=mask_reconstruct,
fs=fs,
hop_length=hop_length)
token_to_id = {item: i for i, item in enumerate(token_list)}
text = np.array(
list(map(lambda x: token_to_id.get(x, token_to_id['<unk>']), phns)))
span_bdy = np.array(new_span_bdy)
batch = [('1', {
"speech": wav,
"align_start": mfa_start,
"align_end": mfa_end,
"text": text,
"span_bdy": span_bdy
})]
return batch, old_span_bdy, new_span_bdy
def decode_with_model(mlm_model: nn.Layer,
collate_fn,
wav_path: str,
source_lang: str="english",
target_lang: str="english",
old_str: str="",
new_str: str="",
use_teacher_forcing: bool=False,
duration_adjust: bool=True,
start_end_sp: bool=False,
fs: int=24000,
hop_length: int=300,
token_list: List[str]=[]):
batch, old_span_bdy, new_span_bdy = prep_feats(
source_lang=source_lang,
target_lang=target_lang,
wav_path=wav_path,
old_str=old_str,
new_str=new_str,
duration_adjust=duration_adjust,
start_end_sp=start_end_sp,
fs=fs,
hop_length=hop_length,
token_list=token_list)
feats = collate_fn(batch)[1]
if 'text_masked_pos' in feats.keys():
feats.pop('text_masked_pos')
output = mlm_model.inference(
text=feats['text'],
speech=feats['speech'],
masked_pos=feats['masked_pos'],
speech_mask=feats['speech_mask'],
text_mask=feats['text_mask'],
speech_seg_pos=feats['speech_seg_pos'],
text_seg_pos=feats['text_seg_pos'],
span_bdy=new_span_bdy,
use_teacher_forcing=use_teacher_forcing)
    # concatenate the output audio
output_feat = paddle.concat(x=output, axis=0)
wav_org, _ = librosa.load(wav_path, sr=fs)
return wav_org, output_feat, old_span_bdy, new_span_bdy, fs, hop_length
def get_mlm_output(wav_path: str,
model_name: str="paddle_checkpoint_en",
source_lang: str="english",
target_lang: str="english",
old_str: str="",
new_str: str="",
use_teacher_forcing: bool=False,
duration_adjust: bool=True,
start_end_sp: bool=False):
mlm_model, train_conf = load_model(model_name)
mlm_model.eval()
collate_fn = build_mlm_collate_fn(
sr=train_conf.feats_extract_conf['fs'],
n_fft=train_conf.feats_extract_conf['n_fft'],
hop_length=train_conf.feats_extract_conf['hop_length'],
win_length=train_conf.feats_extract_conf['win_length'],
n_mels=train_conf.feats_extract_conf['n_mels'],
fmin=train_conf.feats_extract_conf['fmin'],
fmax=train_conf.feats_extract_conf['fmax'],
mlm_prob=train_conf['mlm_prob'],
mean_phn_span=train_conf['mean_phn_span'],
seg_emb=train_conf.encoder_conf['input_layer'] == 'sega_mlm')
return decode_with_model(
source_lang=source_lang,
target_lang=target_lang,
mlm_model=mlm_model,
collate_fn=collate_fn,
wav_path=wav_path,
old_str=old_str,
new_str=new_str,
use_teacher_forcing=use_teacher_forcing,
duration_adjust=duration_adjust,
start_end_sp=start_end_sp,
fs=train_conf.feats_extract_conf['fs'],
hop_length=train_conf.feats_extract_conf['hop_length'],
token_list=train_conf.token_list)
def evaluate(uid: str,
source_lang: str="english",
target_lang: str="english",
prefix: os.PathLike="./prompt/dev/",
model_name: str="paddle_checkpoint_en",
new_str: str="",
prompt_decoding: bool=False,
task_name: str=None):
# get origin text and path of origin wav
old_str, wav_path = read_data(uid=uid, prefix=prefix)
if task_name == 'edit':
new_str = new_str
elif task_name == 'synthesize':
new_str = old_str + new_str
else:
new_str = old_str + ' '.join([ch for ch in new_str if is_chinese(ch)])
print('new_str is ', new_str)
results_dict = get_wav(
source_lang=source_lang,
target_lang=target_lang,
model_name=model_name,
wav_path=wav_path,
old_str=old_str,
new_str=new_str)
return results_dict
if __name__ == "__main__":
# parse config and args
args = parse_args()
data_dict = evaluate(
uid=args.uid,
source_lang=args.source_lang,
target_lang=args.target_lang,
prefix=args.prefix,
model_name=args.model_name,
new_str=args.new_str,
task_name=args.task_name)
sf.write(args.output_name, data_dict['output'], samplerate=24000)
print("finished...")
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import random
from typing import Dict
from typing import List
import librosa
import numpy as np
import paddle
import soundfile as sf
import yaml
from align import alignment
from align import alignment_zh
from align import words2phns
from align import words2phns_zh
from paddle import nn
from sedit_arg_parser import parse_args
from utils import eval_durs
from utils import get_voc_out
from utils import is_chinese
from utils import load_num_sequence_text
from utils import read_2col_text
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.am_batch_fn import build_mlm_collate_fn
from paddlespeech.t2s.models.ernie_sat.ernie_sat import ErnieSAT
random.seed(0)
np.random.seed(0)
def get_wav(wav_path: str,
source_lang: str='english',
target_lang: str='english',
model_name: str="paddle_checkpoint_en",
old_str: str="",
new_str: str="",
non_autoreg: bool=True):
wav_org, output_feat, old_span_bdy, new_span_bdy, fs, hop_length = get_mlm_output(
source_lang=source_lang,
target_lang=target_lang,
model_name=model_name,
wav_path=wav_path,
old_str=old_str,
new_str=new_str,
use_teacher_forcing=non_autoreg)
masked_feat = output_feat[new_span_bdy[0]:new_span_bdy[1]]
alt_wav = get_voc_out(masked_feat)
old_time_bdy = [hop_length * x for x in old_span_bdy]
wav_replaced = np.concatenate(
[wav_org[:old_time_bdy[0]], alt_wav, wav_org[old_time_bdy[1]:]])
data_dict = {"origin": wav_org, "output": wav_replaced}
return data_dict
def load_model(model_name: str="paddle_checkpoint_en"):
config_path = './pretrained_model/{}/default.yaml'.format(model_name)
model_path = './pretrained_model/{}/model.pdparams'.format(model_name)
with open(config_path) as f:
conf = CfgNode(yaml.safe_load(f))
token_list = list(conf.token_list)
vocab_size = len(token_list)
odim = conf.n_mels
mlm_model = ErnieSAT(idim=vocab_size, odim=odim, **conf["model"])
state_dict = paddle.load(model_path)
new_state_dict = {}
for key, value in state_dict.items():
new_key = "model." + key
new_state_dict[new_key] = value
mlm_model.set_state_dict(new_state_dict)
mlm_model.eval()
return mlm_model, conf
def read_data(uid: str, prefix: os.PathLike):
    # get the text corresponding to uid
mfa_text = read_2col_text(prefix + '/text')[uid]
    # get the wav path corresponding to uid
mfa_wav_path = read_2col_text(prefix + '/wav.scp')[uid]
if not os.path.isabs(mfa_wav_path):
mfa_wav_path = prefix + mfa_wav_path
return mfa_text, mfa_wav_path
def get_align_data(uid: str, prefix: os.PathLike):
mfa_path = prefix + "mfa_"
mfa_text = read_2col_text(mfa_path + 'text')[uid]
mfa_start = load_num_sequence_text(
mfa_path + 'start', loader_type='text_float')[uid]
mfa_end = load_num_sequence_text(
mfa_path + 'end', loader_type='text_float')[uid]
mfa_wav_path = read_2col_text(mfa_path + 'wav.scp')[uid]
return mfa_text, mfa_start, mfa_end, mfa_wav_path
# get the range of mel frames that need to be masked
def get_masked_mel_bdy(mfa_start: List[float],
mfa_end: List[float],
fs: int,
hop_length: int,
span_to_repl: List[List[int]]):
align_start = np.array(mfa_start)
align_end = np.array(mfa_end)
align_start = np.floor(fs * align_start / hop_length).astype('int')
align_end = np.floor(fs * align_end / hop_length).astype('int')
if span_to_repl[0] >= len(mfa_start):
span_bdy = [align_end[-1], align_end[-1]]
else:
span_bdy = [
align_start[span_to_repl[0]], align_end[span_to_repl[1] - 1]
]
return span_bdy, align_start, align_end
def recover_dict(word2phns: Dict[str, str], tp_word2phns: Dict[str, str]):
dic = {}
keys_to_del = []
exist_idx = []
sp_count = 0
add_sp_count = 0
for key in word2phns.keys():
idx, wrd = key.split('_')
if wrd == 'sp':
sp_count += 1
exist_idx.append(int(idx))
else:
keys_to_del.append(key)
for key in keys_to_del:
del word2phns[key]
cur_id = 0
for key in tp_word2phns.keys():
if cur_id in exist_idx:
dic[str(cur_id) + "_sp"] = 'sp'
cur_id += 1
add_sp_count += 1
idx, wrd = key.split('_')
dic[str(cur_id) + "_" + wrd] = tp_word2phns[key]
cur_id += 1
if add_sp_count + 1 == sp_count:
dic[str(cur_id) + "_sp"] = 'sp'
add_sp_count += 1
assert add_sp_count == sp_count, "sp are not added in dic"
return dic
def get_max_idx(dic):
return sorted([int(key.split('_')[0]) for key in dic.keys()])[-1]
def get_phns_and_spans(wav_path: str,
old_str: str="",
new_str: str="",
source_lang: str="english",
target_lang: str="english"):
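    # old_str being a prefix of new_str means we append new content (synthesis / cloning) rather than edit in place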
is_append = (old_str == new_str[:len(old_str)])
old_phns, mfa_start, mfa_end = [], [], []
# source
if source_lang == "english":
intervals, word2phns = alignment(wav_path, old_str)
elif source_lang == "chinese":
intervals, word2phns = alignment_zh(wav_path, old_str)
_, tp_word2phns = words2phns_zh(old_str)
for key, value in tp_word2phns.items():
idx, wrd = key.split('_')
cur_val = " ".join(value)
tp_word2phns[key] = cur_val
word2phns = recover_dict(word2phns, tp_word2phns)
else:
assert source_lang == "chinese" or source_lang == "english", \
"source_lang is wrong..."
for item in intervals:
old_phns.append(item[0])
mfa_start.append(float(item[1]))
mfa_end.append(float(item[2]))
# target
if is_append and (source_lang != target_lang):
cross_lingual_clone = True
else:
cross_lingual_clone = False
if cross_lingual_clone:
str_origin = new_str[:len(old_str)]
str_append = new_str[len(old_str):]
if target_lang == "chinese":
phns_origin, origin_word2phns = words2phns(str_origin)
phns_append, append_word2phns_tmp = words2phns_zh(str_append)
elif target_lang == "english":
            # original sentence
phns_origin, origin_word2phns = words2phns_zh(str_origin)
            # sentence to clone
phns_append, append_word2phns_tmp = words2phns(str_append)
else:
assert target_lang == "chinese" or target_lang == "english", \
"cloning is not support for this language, please check it."
new_phns = phns_origin + phns_append
append_word2phns = {}
length = len(origin_word2phns)
for key, value in append_word2phns_tmp.items():
idx, wrd = key.split('_')
append_word2phns[str(int(idx) + length) + '_' + wrd] = value
new_word2phns = origin_word2phns.copy()
new_word2phns.update(append_word2phns)
else:
if source_lang == target_lang and target_lang == "english":
new_phns, new_word2phns = words2phns(new_str)
elif source_lang == target_lang and target_lang == "chinese":
new_phns, new_word2phns = words2phns_zh(new_str)
else:
assert source_lang == target_lang, \
"source language is not same with target language..."
span_to_repl = [0, len(old_phns) - 1]
span_to_add = [0, len(new_phns) - 1]
left_idx = 0
new_phns_left = []
sp_count = 0
# find the left different index
for key in word2phns.keys():
idx, wrd = key.split('_')
if wrd == 'sp':
sp_count += 1
new_phns_left.append('sp')
else:
idx = str(int(idx) - sp_count)
if idx + '_' + wrd in new_word2phns:
left_idx += len(new_word2phns[idx + '_' + wrd])
new_phns_left.extend(word2phns[key].split())
else:
span_to_repl[0] = len(new_phns_left)
span_to_add[0] = len(new_phns_left)
break
# reverse word2phns and new_word2phns
right_idx = 0
new_phns_right = []
sp_count = 0
word2phns_max_idx = get_max_idx(word2phns)
new_word2phns_max_idx = get_max_idx(new_word2phns)
new_phns_mid = []
if is_append:
new_phns_right = []
new_phns_mid = new_phns[left_idx:]
span_to_repl[0] = len(new_phns_left)
span_to_add[0] = len(new_phns_left)
span_to_add[1] = len(new_phns_left) + len(new_phns_mid)
span_to_repl[1] = len(old_phns) - len(new_phns_right)
# speech edit
else:
for key in list(word2phns.keys())[::-1]:
idx, wrd = key.split('_')
if wrd == 'sp':
sp_count += 1
new_phns_right = ['sp'] + new_phns_right
else:
idx = str(new_word2phns_max_idx - (word2phns_max_idx - int(idx)
- sp_count))
if idx + '_' + wrd in new_word2phns:
right_idx -= len(new_word2phns[idx + '_' + wrd])
new_phns_right = word2phns[key].split() + new_phns_right
else:
span_to_repl[1] = len(old_phns) - len(new_phns_right)
new_phns_mid = new_phns[left_idx:right_idx]
span_to_add[1] = len(new_phns_left) + len(new_phns_mid)
if len(new_phns_mid) == 0:
span_to_add[1] = min(span_to_add[1] + 1, len(new_phns))
span_to_add[0] = max(0, span_to_add[0] - 1)
span_to_repl[0] = max(0, span_to_repl[0] - 1)
span_to_repl[1] = min(span_to_repl[1] + 1,
len(old_phns))
break
new_phns = new_phns_left + new_phns_mid + new_phns_right
'''
For that reason cover should not be given.
For that reason cover is impossible to be given.
span_to_repl: [17, 23] "should not"
span_to_add: [17, 30] "is impossible to"
'''
return mfa_start, mfa_end, old_phns, new_phns, span_to_repl, span_to_add
# the durations from MFA may differ from those predicted by the duration_predictor of fastspeech2,
# so compute a scaling factor between the predicted and the ground-truth durations
def get_dur_adj_factor(orig_dur: List[int],
pred_dur: List[int],
phns: List[str]):
length = 0
factor_list = []
for orig, pred, phn in zip(orig_dur, pred_dur, phns):
if pred == 0 or phn == 'sp':
continue
else:
factor_list.append(orig / pred)
factor_list = np.array(factor_list)
factor_list.sort()
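    # with fewer than 5 usable phones fall back to no scaling; otherwise average after dropping the 2 smallest and 2 largest factors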
if len(factor_list) < 5:
return 1
length = 2
avg = np.average(factor_list[length:-length])
return avg
def prep_feats_with_dur(wav_path: str,
source_lang: str="English",
target_lang: str="English",
old_str: str="",
new_str: str="",
mask_reconstruct: bool=False,
duration_adjust: bool=True,
start_end_sp: bool=False,
fs: int=24000,
hop_length: int=300):
'''
Returns:
np.ndarray: new wav, replace the part to be edited in original wav with 0
List[str]: new phones
List[float]: mfa start of new wav
List[float]: mfa end of new wav
List[int]: masked mel boundary of original wav
List[int]: masked mel boundary of new wav
'''
wav_org, _ = librosa.load(wav_path, sr=fs)
mfa_start, mfa_end, old_phns, new_phns, span_to_repl, span_to_add = get_phns_and_spans(
wav_path=wav_path,
old_str=old_str,
new_str=new_str,
source_lang=source_lang,
target_lang=target_lang)
if start_end_sp:
if new_phns[-1] != 'sp':
new_phns = new_phns + ['sp']
    # Chinese phones are not necessarily all in the fastspeech2 dictionary, so replace missing ones with sp
if target_lang == "english" or target_lang == "chinese":
old_durs = eval_durs(old_phns, target_lang=source_lang)
else:
assert target_lang == "chinese" or target_lang == "english", \
"calculate duration_predict is not support for this language..."
orig_old_durs = [e - s for e, s in zip(mfa_end, mfa_start)]
if '[MASK]' in new_str:
new_phns = old_phns
span_to_add = span_to_repl
d_factor_left = get_dur_adj_factor(
orig_dur=orig_old_durs[:span_to_repl[0]],
pred_dur=old_durs[:span_to_repl[0]],
phns=old_phns[:span_to_repl[0]])
d_factor_right = get_dur_adj_factor(
orig_dur=orig_old_durs[span_to_repl[1]:],
pred_dur=old_durs[span_to_repl[1]:],
phns=old_phns[span_to_repl[1]:])
d_factor = (d_factor_left + d_factor_right) / 2
new_durs_adjusted = [d_factor * i for i in old_durs]
else:
if duration_adjust:
d_factor = get_dur_adj_factor(
orig_dur=orig_old_durs, pred_dur=old_durs, phns=old_phns)
d_factor = d_factor * 1.25
else:
d_factor = 1
if target_lang == "english" or target_lang == "chinese":
new_durs = eval_durs(new_phns, target_lang=target_lang)
else:
assert target_lang == "chinese" or target_lang == "english", \
"calculate duration_predict is not support for this language..."
new_durs_adjusted = [d_factor * i for i in new_durs]
new_span_dur_sum = sum(new_durs_adjusted[span_to_add[0]:span_to_add[1]])
old_span_dur_sum = sum(orig_old_durs[span_to_repl[0]:span_to_repl[1]])
dur_offset = new_span_dur_sum - old_span_dur_sum
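    # dur_offset is added to the timestamps of the untouched part after the edited span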
new_mfa_start = mfa_start[:span_to_repl[0]]
new_mfa_end = mfa_end[:span_to_repl[0]]
for i in new_durs_adjusted[span_to_add[0]:span_to_add[1]]:
if len(new_mfa_end) == 0:
new_mfa_start.append(0)
new_mfa_end.append(i)
else:
new_mfa_start.append(new_mfa_end[-1])
new_mfa_end.append(new_mfa_end[-1] + i)
new_mfa_start += [i + dur_offset for i in mfa_start[span_to_repl[1]:]]
new_mfa_end += [i + dur_offset for i in mfa_end[span_to_repl[1]:]]
# 3. get new wav
    # appending after the original sentence
if span_to_repl[0] >= len(mfa_start):
left_idx = len(wav_org)
right_idx = left_idx
    # replacing in the middle of the original sentence
else:
left_idx = int(np.floor(mfa_start[span_to_repl[0]] * fs))
right_idx = int(np.ceil(mfa_end[span_to_repl[1] - 1] * fs))
blank_wav = np.zeros(
(int(np.ceil(new_span_dur_sum * fs)), ), dtype=wav_org.dtype)
    # in the original audio, replace the segment to be edited with silence; its length is decided by the fastspeech2 duration_predictor
new_wav = np.concatenate(
[wav_org[:left_idx], blank_wav, wav_org[right_idx:]])
# 4. get old and new mel span to be mask
# [92, 92]
old_span_bdy, mfa_start, mfa_end = get_masked_mel_bdy(
mfa_start=mfa_start,
mfa_end=mfa_end,
fs=fs,
hop_length=hop_length,
span_to_repl=span_to_repl)
# [92, 174]
    # new_mfa_start, new_mfa_end: convert time-level start/end times to frame level
new_span_bdy, new_mfa_start, new_mfa_end = get_masked_mel_bdy(
mfa_start=new_mfa_start,
mfa_end=new_mfa_end,
fs=fs,
hop_length=hop_length,
span_to_repl=span_to_add)
    # old_span_bdy and new_span_bdy are frame-level ranges
return new_wav, new_phns, new_mfa_start, new_mfa_end, old_span_bdy, new_span_bdy
def prep_feats(wav_path: str,
source_lang: str="english",
target_lang: str="english",
old_str: str="",
new_str: str="",
duration_adjust: bool=True,
start_end_sp: bool=False,
mask_reconstruct: bool=False,
fs: int=24000,
hop_length: int=300,
token_list: List[str]=[]):
wav, phns, mfa_start, mfa_end, old_span_bdy, new_span_bdy = prep_feats_with_dur(
source_lang=source_lang,
target_lang=target_lang,
old_str=old_str,
new_str=new_str,
wav_path=wav_path,
duration_adjust=duration_adjust,
start_end_sp=start_end_sp,
mask_reconstruct=mask_reconstruct,
fs=fs,
hop_length=hop_length)
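    # map phones to token ids; phones missing from the vocabulary fall back to '<unk>'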
token_to_id = {item: i for i, item in enumerate(token_list)}
text = np.array(
list(map(lambda x: token_to_id.get(x, token_to_id['<unk>']), phns)))
span_bdy = np.array(new_span_bdy)
batch = [('1', {
"speech": wav,
"align_start": mfa_start,
"align_end": mfa_end,
"text": text,
"span_bdy": span_bdy
})]
return batch, old_span_bdy, new_span_bdy
def decode_with_model(mlm_model: nn.Layer,
collate_fn,
wav_path: str,
source_lang: str="english",
target_lang: str="english",
old_str: str="",
new_str: str="",
use_teacher_forcing: bool=False,
duration_adjust: bool=True,
start_end_sp: bool=False,
fs: int=24000,
hop_length: int=300,
token_list: List[str]=[]):
batch, old_span_bdy, new_span_bdy = prep_feats(
source_lang=source_lang,
target_lang=target_lang,
wav_path=wav_path,
old_str=old_str,
new_str=new_str,
duration_adjust=duration_adjust,
start_end_sp=start_end_sp,
fs=fs,
hop_length=hop_length,
token_list=token_list)
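    # collate_fn returns a tuple; the batched feature dict is its second element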
feats = collate_fn(batch)[1]
if 'text_masked_pos' in feats.keys():
feats.pop('text_masked_pos')
output = mlm_model.inference(
text=feats['text'],
speech=feats['speech'],
masked_pos=feats['masked_pos'],
speech_mask=feats['speech_mask'],
text_mask=feats['text_mask'],
speech_seg_pos=feats['speech_seg_pos'],
text_seg_pos=feats['text_seg_pos'],
span_bdy=new_span_bdy,
use_teacher_forcing=use_teacher_forcing)
    # concatenate the output features
output_feat = paddle.concat(x=output, axis=0)
wav_org, _ = librosa.load(wav_path, sr=fs)
return wav_org, output_feat, old_span_bdy, new_span_bdy, fs, hop_length
def get_mlm_output(wav_path: str,
model_name: str="paddle_checkpoint_en",
source_lang: str="english",
target_lang: str="english",
old_str: str="",
new_str: str="",
use_teacher_forcing: bool=False,
duration_adjust: bool=True,
start_end_sp: bool=False):
mlm_model, train_conf = load_model(model_name)
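    # build the MLM collate function with the same feature extraction settings as training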
collate_fn = build_mlm_collate_fn(
sr=train_conf.fs,
n_fft=train_conf.n_fft,
hop_length=train_conf.n_shift,
win_length=train_conf.win_length,
n_mels=train_conf.n_mels,
fmin=train_conf.fmin,
fmax=train_conf.fmax,
mlm_prob=train_conf.mlm_prob,
mean_phn_span=train_conf.mean_phn_span,
seg_emb=train_conf.model['enc_input_layer'] == 'sega_mlm')
return decode_with_model(
source_lang=source_lang,
target_lang=target_lang,
mlm_model=mlm_model,
collate_fn=collate_fn,
wav_path=wav_path,
old_str=old_str,
new_str=new_str,
use_teacher_forcing=use_teacher_forcing,
duration_adjust=duration_adjust,
start_end_sp=start_end_sp,
fs=train_conf.fs,
hop_length=train_conf.n_shift,
token_list=train_conf.token_list)
def evaluate(uid: str,
source_lang: str="english",
target_lang: str="english",
prefix: os.PathLike="./prompt/dev/",
model_name: str="paddle_checkpoint_en",
new_str: str="",
prompt_decoding: bool=False,
task_name: str=None):
# get origin text and path of origin wav
old_str, wav_path = read_data(uid=uid, prefix=prefix)
if task_name == 'edit':
new_str = new_str
elif task_name == 'synthesize':
new_str = old_str + new_str
else:
new_str = old_str + ' '.join([ch for ch in new_str if is_chinese(ch)])
print('new_str is ', new_str)
results_dict = get_wav(
source_lang=source_lang,
target_lang=target_lang,
model_name=model_name,
wav_path=wav_path,
old_str=old_str,
new_str=new_str)
return results_dict
if __name__ == "__main__":
# parse config and args
args = parse_args()
data_dict = evaluate(
uid=args.uid,
source_lang=args.source_lang,
target_lang=args.target_lang,
prefix=args.prefix,
model_name=args.model_name,
new_str=args.new_str,
task_name=args.task_name)
sf.write(args.output_name, data_dict['output'], samplerate=24000)
print("finished...")
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
def parse_args():
# parse args and config and redirect to train_sp
parser = argparse.ArgumentParser(
description="Synthesize with acoustic model & vocoder")
# acoustic model
parser.add_argument(
'--am',
type=str,
default='fastspeech2_csmsc',
choices=[
'speedyspeech_csmsc', 'fastspeech2_csmsc', 'fastspeech2_ljspeech',
'fastspeech2_aishell3', 'fastspeech2_vctk', 'tacotron2_csmsc',
'tacotron2_ljspeech', 'tacotron2_aishell3'
],
help='Choose acoustic model type of tts task.')
parser.add_argument(
'--am_config',
type=str,
default=None,
        help='Config of acoustic model. Use default config when it is None.')
parser.add_argument(
'--am_ckpt',
type=str,
default=None,
help='Checkpoint file of acoustic model.')
parser.add_argument(
"--am_stat",
type=str,
default=None,
help="mean and standard deviation used to normalize spectrogram when training acoustic model."
)
parser.add_argument(
"--phones_dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--tones_dict", type=str, default=None, help="tone vocabulary file.")
parser.add_argument(
"--speaker_dict", type=str, default=None, help="speaker id map file.")
# vocoder
parser.add_argument(
'--voc',
type=str,
default='pwgan_aishell3',
choices=[
'pwgan_csmsc', 'pwgan_ljspeech', 'pwgan_aishell3', 'pwgan_vctk',
'mb_melgan_csmsc', 'wavernn_csmsc', 'hifigan_csmsc',
'hifigan_ljspeech', 'hifigan_aishell3', 'hifigan_vctk',
'style_melgan_csmsc'
],
help='Choose vocoder type of tts task.')
parser.add_argument(
'--voc_config',
type=str,
default=None,
        help='Config of voc. Use default config when it is None.')
parser.add_argument(
'--voc_ckpt', type=str, default=None, help='Checkpoint file of voc.')
parser.add_argument(
"--voc_stat",
type=str,
default=None,
help="mean and standard deviation used to normalize spectrogram when training voc."
)
# other
parser.add_argument(
"--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
parser.add_argument("--model_name", type=str, help="model name")
parser.add_argument("--uid", type=str, help="uid")
parser.add_argument("--new_str", type=str, help="new string")
parser.add_argument("--prefix", type=str, help="prefix")
parser.add_argument(
"--source_lang", type=str, default="english", help="source language")
parser.add_argument(
"--target_lang", type=str, default="english", help="target language")
parser.add_argument("--output_name", type=str, help="output name")
parser.add_argument("--task_name", type=str, help="task name")
# pre
args = parser.parse_args()
return args
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from pathlib import Path
from typing import Dict
from typing import List
from typing import Union
import numpy as np
import paddle
import yaml
from sedit_arg_parser import parse_args
from yacs.config import CfgNode
from paddlespeech.t2s.exps.syn_utils import get_am_inference
from paddlespeech.t2s.exps.syn_utils import get_voc_inference
def read_2col_text(path: Union[Path, str]) -> Dict[str, str]:
"""Read a text file having 2 column as dict object.
Examples:
wav.scp:
key1 /some/path/a.wav
key2 /some/path/b.wav
>>> read_2col_text('wav.scp')
{'key1': '/some/path/a.wav', 'key2': '/some/path/b.wav'}
"""
data = {}
with Path(path).open("r", encoding="utf-8") as f:
for linenum, line in enumerate(f, 1):
sps = line.rstrip().split(maxsplit=1)
if len(sps) == 1:
k, v = sps[0], ""
else:
k, v = sps
if k in data:
raise RuntimeError(f"{k} is duplicated ({path}:{linenum})")
data[k] = v
return data
def load_num_sequence_text(path: Union[Path, str], loader_type: str="csv_int"
) -> Dict[str, List[Union[float, int]]]:
"""Read a text file indicating sequences of number
Examples:
key1 1 2 3
key2 34 5 6
>>> d = load_num_sequence_text('text')
>>> np.testing.assert_array_equal(d["key1"], np.array([1, 2, 3]))
"""
if loader_type == "text_int":
delimiter = " "
dtype = int
elif loader_type == "text_float":
delimiter = " "
dtype = float
elif loader_type == "csv_int":
delimiter = ","
dtype = int
elif loader_type == "csv_float":
delimiter = ","
dtype = float
else:
raise ValueError(f"Not supported loader_type={loader_type}")
# path looks like:
# utta 1,0
# uttb 3,4,5
# -> return {'utta': np.ndarray([1, 0]),
# 'uttb': np.ndarray([3, 4, 5])}
    d = read_2col_text(path)
# Using for-loop instead of dict-comprehension for debuggability
retval = {}
for k, v in d.items():
try:
retval[k] = [dtype(i) for i in v.split(delimiter)]
except TypeError:
print(f'Error happened with path="{path}", id="{k}", value="{v}"')
raise
return retval
def is_chinese(ch):
if u'\u4e00' <= ch <= u'\u9fff':
return True
else:
return False
def get_voc_out(mel):
# vocoder
args = parse_args()
with open(args.voc_config) as f:
voc_config = CfgNode(yaml.safe_load(f))
voc_inference = get_voc_inference(
voc=args.voc,
voc_config=voc_config,
voc_ckpt=args.voc_ckpt,
voc_stat=args.voc_stat)
with paddle.no_grad():
wav = voc_inference(mel)
return np.squeeze(wav)
def eval_durs(phns, target_lang="chinese", fs=24000, hop_length=300):
args = parse_args()
if target_lang == 'english':
args.am = "fastspeech2_ljspeech"
args.am_config = "download/fastspeech2_nosil_ljspeech_ckpt_0.5/default.yaml"
args.am_ckpt = "download/fastspeech2_nosil_ljspeech_ckpt_0.5/snapshot_iter_100000.pdz"
args.am_stat = "download/fastspeech2_nosil_ljspeech_ckpt_0.5/speech_stats.npy"
args.phones_dict = "download/fastspeech2_nosil_ljspeech_ckpt_0.5/phone_id_map.txt"
elif target_lang == 'chinese':
args.am = "fastspeech2_csmsc"
args.am_config = "download/fastspeech2_conformer_baker_ckpt_0.5/conformer.yaml"
args.am_ckpt = "download/fastspeech2_conformer_baker_ckpt_0.5/snapshot_iter_76000.pdz"
args.am_stat = "download/fastspeech2_conformer_baker_ckpt_0.5/speech_stats.npy"
args.phones_dict = "download/fastspeech2_conformer_baker_ckpt_0.5/phone_id_map.txt"
if args.ngpu == 0:
paddle.set_device("cpu")
elif args.ngpu > 0:
paddle.set_device("gpu")
else:
print("ngpu should >= 0 !")
# Init body.
with open(args.am_config) as f:
am_config = CfgNode(yaml.safe_load(f))
am_inference, am = get_am_inference(
am=args.am,
am_config=am_config,
am_ckpt=args.am_ckpt,
am_stat=args.am_stat,
phones_dict=args.phones_dict,
tones_dict=args.tones_dict,
speaker_dict=args.speaker_dict,
return_am=True)
vocab_phones = {}
with open(args.phones_dict, "r") as f:
phn_id = [line.strip().split() for line in f.readlines()]
for tone, id in phn_id:
vocab_phones[tone] = int(id)
vocab_size = len(vocab_phones)
phonemes = [phn if phn in vocab_phones else "sp" for phn in phns]
phone_ids = [vocab_phones[item] for item in phonemes]
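    # append one extra phone id (the last vocabulary entry); its predicted duration is dropped below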
phone_ids.append(vocab_size - 1)
phone_ids = paddle.to_tensor(np.array(phone_ids, np.int64))
_, d_outs, _, _ = am.inference(phone_ids, spk_id=None, spk_emb=None)
pre_d_outs = d_outs
phu_durs_new = pre_d_outs * hop_length / fs
phu_durs_new = phu_durs_new.tolist()[:-1]
return phu_durs_new
p243_new For that reason cover should not be given.
Prompt_003_new This was not the show for me.
p299_096 We are trying to establish a date.
p243_new ../../prompt_wav/p243_313.wav
Prompt_003_new ../../prompt_wav/this_was_not_the_show_for_me.wav
p299_096 ../../prompt_wav/p299_096.wav
#!/bin/bash
set -e
source path.sh
# en --> zh speech synthesis
# use Prompt_003_new as the prompt speech ("This was not the show for me.") to synthesize: '今天天气很好'
# NOTE: new_str must consist of Chinese characters; otherwise preprocessing keeps only the Chinese characters, i.e. the preprocessed Chinese text is synthesized.
python local/inference.py \
--task_name=cross-lingual_clone \
--model_name=paddle_checkpoint_dual_mask_enzh \
--uid=Prompt_003_new \
--new_str='今天天气很好.' \
--prefix='./prompt/dev/' \
--source_lang=english \
--target_lang=chinese \
--output_name=pred_clone.wav \
--voc=pwgan_aishell3 \
--voc_config=download/pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=download/pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
--voc_stat=download/pwg_aishell3_ckpt_0.5/feats_stats.npy \
--am=fastspeech2_csmsc \
--am_config=download/fastspeech2_conformer_baker_ckpt_0.5/conformer.yaml \
--am_ckpt=download/fastspeech2_conformer_baker_ckpt_0.5/snapshot_iter_76000.pdz \
--am_stat=download/fastspeech2_conformer_baker_ckpt_0.5/speech_stats.npy \
--phones_dict=download/fastspeech2_conformer_baker_ckpt_0.5/phone_id_map.txt
#!/bin/bash
set -e
source path.sh
# en --> zh speech synthesis
# use Prompt_003_new as the prompt speech ("This was not the show for me.") to synthesize: '今天天气很好'
# NOTE: new_str must consist of Chinese characters; otherwise preprocessing keeps only the Chinese characters, i.e. the preprocessed Chinese text is synthesized.
python local/inference_new.py \
--task_name=cross-lingual_clone \
--model_name=paddle_checkpoint_dual_mask_enzh \
--uid=Prompt_003_new \
--new_str='今天天气很好.' \
--prefix='./prompt/dev/' \
--source_lang=english \
--target_lang=chinese \
--output_name=pred_clone.wav \
--voc=pwgan_aishell3 \
--voc_config=download/pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=download/pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
--voc_stat=download/pwg_aishell3_ckpt_0.5/feats_stats.npy \
--am=fastspeech2_csmsc \
--am_config=download/fastspeech2_conformer_baker_ckpt_0.5/conformer.yaml \
--am_ckpt=download/fastspeech2_conformer_baker_ckpt_0.5/snapshot_iter_76000.pdz \
--am_stat=download/fastspeech2_conformer_baker_ckpt_0.5/speech_stats.npy \
--phones_dict=download/fastspeech2_conformer_baker_ckpt_0.5/phone_id_map.txt
#!/bin/bash
set -e
source path.sh
# English-only speech synthesis
# example: use the speech of p299_096 as the prompt ("We are trying to establish a date.") to synthesize: 'I enjoy my life, do you?'
python local/inference.py \
--task_name=synthesize \
--model_name=paddle_checkpoint_en \
--uid=p299_096 \
--new_str='I enjoy my life, do you?' \
--prefix='./prompt/dev/' \
--source_lang=english \
--target_lang=english \
--output_name=pred_gen.wav \
--voc=pwgan_aishell3 \
--voc_config=download/pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=download/pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
--voc_stat=download/pwg_aishell3_ckpt_0.5/feats_stats.npy \
--am=fastspeech2_ljspeech \
--am_config=download/fastspeech2_nosil_ljspeech_ckpt_0.5/default.yaml \
--am_ckpt=download/fastspeech2_nosil_ljspeech_ckpt_0.5/snapshot_iter_100000.pdz \
--am_stat=download/fastspeech2_nosil_ljspeech_ckpt_0.5/speech_stats.npy \
--phones_dict=download/fastspeech2_nosil_ljspeech_ckpt_0.5/phone_id_map.txt
#!/bin/bash
set -e
source path.sh
# English-only speech synthesis
# example: use the speech of p299_096 as the prompt ("We are trying to establish a date.") to synthesize: 'I enjoy my life, do you?'
python local/inference_new.py \
--task_name=synthesize \
--model_name=paddle_checkpoint_en \
--uid=p299_096 \
--new_str='I enjoy my life, do you?' \
--prefix='./prompt/dev/' \
--source_lang=english \
--target_lang=english \
--output_name=pred_gen.wav \
--voc=pwgan_aishell3 \
--voc_config=download/pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=download/pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
--voc_stat=download/pwg_aishell3_ckpt_0.5/feats_stats.npy \
--am=fastspeech2_ljspeech \
--am_config=download/fastspeech2_nosil_ljspeech_ckpt_0.5/default.yaml \
--am_ckpt=download/fastspeech2_nosil_ljspeech_ckpt_0.5/snapshot_iter_100000.pdz \
--am_stat=download/fastspeech2_nosil_ljspeech_ckpt_0.5/speech_stats.npy \
--phones_dict=download/fastspeech2_nosil_ljspeech_ckpt_0.5/phone_id_map.txt
#!/bin/bash
set -e
source path.sh
# English-only speech editing
# example: edit the original speech of p243_new ("For that reason cover should not be given.") into the speech for 'for that reason cover is impossible to be given.'
# NOTE: the speech editing task currently supports replacing or inserting text at only 1 position in a sentence
python local/inference.py \
--task_name=edit \
--model_name=paddle_checkpoint_en \
--uid=p243_new \
--new_str='for that reason cover is impossible to be given.' \
--prefix='./prompt/dev/' \
--source_lang=english \
--target_lang=english \
--output_name=pred_edit.wav \
--voc=pwgan_aishell3 \
--voc_config=download/pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=download/pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
--voc_stat=download/pwg_aishell3_ckpt_0.5/feats_stats.npy \
--am=fastspeech2_ljspeech \
--am_config=download/fastspeech2_nosil_ljspeech_ckpt_0.5/default.yaml \
--am_ckpt=download/fastspeech2_nosil_ljspeech_ckpt_0.5/snapshot_iter_100000.pdz \
--am_stat=download/fastspeech2_nosil_ljspeech_ckpt_0.5/speech_stats.npy \
--phones_dict=download/fastspeech2_nosil_ljspeech_ckpt_0.5/phone_id_map.txt
#!/bin/bash
set -e
source path.sh
# English-only speech editing
# example: edit the original speech of p243_new ("For that reason cover should not be given.") into the speech for 'for that reason cover is impossible to be given.'
# NOTE: the speech editing task currently supports replacing or inserting text at only 1 position in a sentence
python local/inference_new.py \
--task_name=edit \
--model_name=paddle_checkpoint_en \
--uid=p243_new \
--new_str='for that reason cover is impossible to be given.' \
--prefix='./prompt/dev/' \
--source_lang=english \
--target_lang=english \
--output_name=pred_edit.wav \
--voc=pwgan_aishell3 \
--voc_config=download/pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=download/pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
--voc_stat=download/pwg_aishell3_ckpt_0.5/feats_stats.npy \
--am=fastspeech2_ljspeech \
--am_config=download/fastspeech2_nosil_ljspeech_ckpt_0.5/default.yaml \
--am_ckpt=download/fastspeech2_nosil_ljspeech_ckpt_0.5/snapshot_iter_100000.pdz \
--am_stat=download/fastspeech2_nosil_ljspeech_ckpt_0.5/speech_stats.npy \
--phones_dict=download/fastspeech2_nosil_ljspeech_ckpt_0.5/phone_id_map.txt
#!/bin/bash
rm -rf *.wav
./run_sedit_en.sh          # speech editing task (English)
./run_gen_en.sh            # personalized speech synthesis task (English)
./run_clone_en_to_zh.sh    # cross-lingual speech synthesis task (English-to-Chinese voice cloning)
\ No newline at end of file
#!/bin/bash
rm -rf *.wav
./run_sedit_en_new.sh          # speech editing task (English)
./run_gen_en_new.sh            # personalized speech synthesis task (English)
./run_clone_en_to_zh_new.sh    # cross-lingual speech synthesis task (English-to-Chinese voice cloning)
\ No newline at end of file
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
# Data # # Data #
########################################### ###########################################
dataset: 'paddlespeech.audio.datasets:HeySnips' dataset: 'paddlespeech.audio.datasets:HeySnips'
data_dir: '/PATH/TO/DATA/hey_snips_research_6k_en_train_eval_clean_ter' data_dir: '../tests/hey_snips_research_6k_en_train_eval_clean_ter'
############################################ ############################################
# Network Architecture # # Network Architecture #
...@@ -46,4 +46,4 @@ num_workers: 16 ...@@ -46,4 +46,4 @@ num_workers: 16
checkpoint: './checkpoint/epoch_100/model.pdparams' checkpoint: './checkpoint/epoch_100/model.pdparams'
score_file: './scores.txt' score_file: './scores.txt'
stats_file: './stats.0.txt' stats_file: './stats.0.txt'
img_file: './det.png' img_file: './det.png'
\ No newline at end of file
...@@ -18,12 +18,62 @@ ...@@ -18,12 +18,62 @@
./run.sh --stage 3 --stop-stage 3 ./run.sh --stage 3 --stop-stage 3
``` ```
## Pretrained Model ## Pretrained Model
The pretrained model can be downloaded here [ernie_linear_p3_iwslt2012_zh_ckpt_0.1.1.zip](https://paddlespeech.bj.bcebos.com/text/ernie_linear_p3_iwslt2012_zh_ckpt_0.1.1.zip). The pretrained model can be downloaded here:
[ernie_linear_p3_iwslt2012_zh_ckpt_0.1.1.zip](https://paddlespeech.bj.bcebos.com/text/ernie_linear_p3_iwslt2012_zh_ckpt_0.1.1.zip)
[ernie-3.0-base.tar.gz](https://paddlespeech.bj.bcebos.com/punc_restore/ernie-3.0-base.tar.gz)
[ernie-3.0-medium.tar.gz](https://paddlespeech.bj.bcebos.com/punc_restore/ernie-3.0-medium.tar.gz)
[ernie-3.0-micro.tar.gz](https://paddlespeech.bj.bcebos.com/punc_restore/ernie-3.0-micro.tar.gz)
[ernie-mini.tar.gz](https://paddlespeech.bj.bcebos.com/punc_restore/ernie-mini.tar.gz)
[ernie-nano.tar.gz](https://paddlespeech.bj.bcebos.com/punc_restore/ernie-nano.tar.gz)
[ernie-tiny.tar.gz](https://paddlespeech.bj.bcebos.com/punc_restore/ernie-tiny.tar.gz)
### Test Result ### Test Result
- Ernie - Ernie 1.0
| |COMMA | PERIOD | QUESTION | OVERALL| | |COMMA | PERIOD | QUESTION | OVERALL|
|:-----:|:-----:|:-----:|:-----:|:-----:| |:-----:|:-----:|:-----:|:-----:|:-----:|
|Precision |0.510955 |0.526462 |0.820755 |0.619391| |Precision |0.510955 |0.526462 |0.820755 |0.619391|
|Recall |0.517433 |0.564179 |0.861386 |0.647666| |Recall |0.517433 |0.564179 |0.861386 |0.647666|
|F1 |0.514173 |0.544669 |0.840580 |0.633141| |F1 |0.514173 |0.544669 |0.840580 |0.633141|
- Ernie-tiny
| |COMMA | PERIOD | QUESTION | OVERALL|
|:-----:|:-----:|:-----:|:-----:|:-----:|
|Precision |0.733177 |0.721448 |0.754717 |0.736447|
|Recall |0.380740 |0.524646 |0.733945 |0.546443|
|F1 |0.501204 |0.607506 |0.744186 |0.617632|
- Ernie-3.0-base-zh
| |COMMA | PERIOD | QUESTION | OVERALL|
|:-----:|:-----:|:-----:|:-----:|:-----:|
|Precision |0.805947 |0.764160 |0.858491 |0.809532|
|Recall |0.399070 |0.567978 |0.850467 |0.605838|
|F1 |0.533817 |0.651623 |0.854460 |0.679967|
- Ernie-3.0-medium-zh
| |COMMA | PERIOD | QUESTION | OVERALL|
|:-----:|:-----:|:-----:|:-----:|:-----:|
|Precision |0.730829 |0.699164 |0.707547 |0.712514|
|Recall |0.388196 |0.533286 |0.797872 |0.573118|
|F1 |0.507058 |0.605062 |0.750000 |0.620707|
- Ernie-3.0-mini-zh
| |COMMA | PERIOD | QUESTION | OVERALL|
|:-----:|:-----:|:-----:|:-----:|:-----:|
|Precision |0.757433 |0.708449 |0.707547 |0.724477|
|Recall |0.355752 |0.506977 |0.735294 |0.532674|
|F1 |0.484121 |0.591015 |0.721154 |0.598763|
- Ernie-3.0-micro-zh
| |COMMA | PERIOD | QUESTION | OVERALL|
|:-----:|:-----:|:-----:|:-----:|:-----:|
|Precision |0.733959 |0.679666 |0.726415 |0.713347|
|Recall |0.332742 |0.483487 |0.712963 |0.509731|
|F1 |0.457896 |0.565033 |0.719626 |0.580852|
- Ernie-3.0-nano-zh
| |COMMA | PERIOD | QUESTION | OVERALL|
|:-----:|:-----:|:-----:|:-----:|:-----:|
|Precision |0.693271 |0.682451 |0.754717 |0.710146|
|Recall |0.327784 |0.491968 |0.666667 |0.495473|
|F1 |0.445114 |0.571762 |0.707965 |0.574947|
import argparse
def process_sentence(line):
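    # insert a space between every pair of adjacent characters in the line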
if line == '':
return ''
res = line[0]
for i in range(1, len(line)):
res += (' ' + line[i])
return res
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Input filename")
    parser.add_argument('-input_file')
    parser.add_argument('-output_file')
    sentence_cnt = 0
    args = parser.parse_args()
with open(args.input_file, 'r') as f:
with open(args.output_file, 'w') as write_f:
while True:
line = f.readline()
if line:
sentence_cnt += 1
write_f.write(process_sentence(line))
else:
break
print('preprocess over')
print('total sentences number:', sentence_cnt)
# LJSpeech # LJSpeech
* tts0 - Tactron2 * tts0 - Tacotron2
* tts1 - TransformerTTS * tts1 - TransformerTTS
* tts2 - SpeedySpeech * tts2 - SpeedySpeech
* tts3 - FastSpeech2 * tts3 - FastSpeech2
......
...@@ -46,8 +46,8 @@ fi ...@@ -46,8 +46,8 @@ fi
if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then
# install paddle2onnx # install paddle2onnx
version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}') version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}')
if [[ -z "$version" || ${version} != '0.9.8' ]]; then if [[ -z "$version" || ${version} != '1.0.0' ]]; then
pip install paddle2onnx==0.9.8 pip install paddle2onnx==1.0.0
fi fi
./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_ljspeech ./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_ljspeech
# considering the balance between speed and quality, we recommend that you use hifigan as vocoder # considering the balance between speed and quality, we recommend that you use hifigan as vocoder
......
...@@ -7,18 +7,21 @@ We use `WER` as an evaluation criterion. ...@@ -7,18 +7,21 @@ We use `WER` as an evaluation criterion.
# Start # Start
Run the command below to get the results of the test. Run the command below to get the results of the test.
```bash ```bash
cd ../../../tools
bash extras/install_sclite.sh
cd -
./run.sh ./run.sh
``` ```
The `avg WER` of g2p is: 0.028952373312476395 The `avg WER` of g2p is: 0.024075726733983775
```text ```text
,--------------------------------------------------------------------. ,--------------------------------------------------------------------.
| ./exp/g2p/text.g2p | | ./exp/g2p/text.g2p |
|--------------------------------------------------------------------| |--------------------------------------------------------------------|
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err | | SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
|--------+-----------------+-----------------------------------------| | Sum/Avg| 9996 299181 | 97.6 2.4 0.0 0.0 2.4 49.0 |
| Sum/Avg| 9996 299181 | 97.2 2.8 0.0 0.1 2.9 53.3 |
`--------------------------------------------------------------------' `--------------------------------------------------------------------'
``` ```
# -*- encoding:utf-8 -*-
import re
import sys
'''
@author: david_95
Assume you executed the g2p test twice and the WER rates differ; you would like to see which sentence errors caused the rate to go up.
Put the two test results (exp/g2p) into two directories, e.g. exp/prefolder and exp/curfolder,
then run this program as "python compare_badcase.py prefolder curfolder"
and you will get the differences between the two runs: uuid, phonetics, and the Chinese sample sentences.
example: python compare_badcase.py exp/g2p_laotouzi exp/g2p
in this example, exp/g2p_laotouzi and exp/g2p are two folders holding the results of two g2p tests
'''
def compare(prefolder, curfolder):
'''
    compare the text.g2p.pra files in two folders.
    In the result, P1 is prefolder and P2 is curfolder, following the order of the command-line arguments.
'''
linecnt = 0
pre_block = []
cur_block = []
zh_lines = []
with open(prefolder + "/text.g2p.pra", "r") as pre_file, open(
curfolder + "/text.g2p.pra", "r") as cur_file:
for pre_line, cur_line in zip(pre_file, cur_file):
linecnt += 1
if linecnt < 11: #skip non-data head in files
continue
else:
pre_block.append(pre_line.strip())
cur_block.append(cur_line.strip())
if pre_line.strip().startswith(
"Eval:") and pre_line.strip() != cur_line.strip():
uuid = pre_block[-5].replace("id: (baker_", "").replace(")",
"")
with open("data/g2p/text", 'r') as txt:
conlines = txt.readlines()
for line in conlines:
if line.strip().startswith(uuid.strip()):
print(line)
zh_lines.append(re.sub(r"#[1234]", "", line))
break
print("*" + cur_block[-3]) # ref
print("P1 " + pre_block[-2])
print("P2 " + cur_block[-2])
print("P1 " + pre_block[-1])
print("P2 " + cur_block[-1] + "\n\n")
pre_block = []
cur_block = []
print("\n")
print(str.join("\n", zh_lines))
if __name__ == '__main__':
assert len(
sys.argv) == 3, "Usage: python compare_badcase.py %prefolder %curfolder"
compare(sys.argv[1], sys.argv[2])
...@@ -5,6 +5,9 @@ We use `CER` as an evaluation criterion. ...@@ -5,6 +5,9 @@ We use `CER` as an evaluation criterion.
## Start ## Start
Run the command below to get the results of the test. Run the command below to get the results of the test.
```bash ```bash
cd ../../../tools
bash extras/install_sclite.sh
cd -
./run.sh ./run.sh
``` ```
The `avg CER` of text normalization is: 0.00730093543235227 The `avg CER` of text normalization is: 0.00730093543235227
......
# Finetune your own AM based on FastSpeech2 with AISHELL-3.
This example shows how to finetune your own AM based on FastSpeech2 with AISHELL-3. We use part of csmsc's data (top 200) as finetune data in this example. The example is implemented according to this [discussion](https://github.com/PaddlePaddle/PaddleSpeech/discussions/1842). Thanks to the developer for the idea.
We use AISHELL-3 to train a multi-speaker fastspeech2 model. You can refer to [examples/aishell3/tts3](https://github.com/lym0302/PaddleSpeech/tree/develop/examples/aishell3/tts3) to train a multi-speaker fastspeech2 model from scratch.
## Prepare
### Download Pretrained Fastspeech2 model
Assume the path to the model is `./pretrained_models`. Download pretrained fastspeech2 model with aishell3: [fastspeech2_aishell3_ckpt_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_aishell3_ckpt_1.1.0.zip).
```bash
mkdir -p pretrained_models && cd pretrained_models
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_aishell3_ckpt_1.1.0.zip
unzip fastspeech2_aishell3_ckpt_1.1.0.zip
cd ../
```
### Download MFA tools and pretrained model
Assume the path to the MFA tool is `./tools`. Download [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner/releases/download/v1.0.1/montreal-forced-aligner_linux.tar.gz) and pretrained MFA models with aishell3: [aishell3_model.zip](https://paddlespeech.bj.bcebos.com/MFA/ernie_sat/aishell3_model.zip).
```bash
mkdir -p tools && cd tools
# mfa tool
wget https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner/releases/download/v1.0.1/montreal-forced-aligner_linux.tar.gz
tar xvf montreal-forced-aligner_linux.tar.gz
cp montreal-forced-aligner/lib/libpython3.6m.so.1.0 montreal-forced-aligner/lib/libpython3.6m.so
# pretrained mfa model
mkdir -p aligner && cd aligner
wget https://paddlespeech.bj.bcebos.com/MFA/ernie_sat/aishell3_model.zip
unzip aishell3_model.zip
wget https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/simple.lexicon
cd ../../
```
### Prepare your data
Assume the path to the dataset is `./input`. This directory contains audio files (*.wav) and a label file (labels.txt). The audio files are in wav format. Each line of the label file has the format: utt_id|pinyin. Here is an example using the first 200 utterances of csmsc.
```bash
mkdir -p input && cd input
wget https://paddlespeech.bj.bcebos.com/datasets/csmsc_mini.zip
unzip csmsc_mini.zip
cd ../
```
When "Prepare" done. The structure of the current directory is listed below.
```text
├── input
│ ├── csmsc_mini
│ │ ├── 000001.wav
│ │ ├── 000002.wav
│ │ ├── 000003.wav
│ │ ├── ...
│ │ ├── 000200.wav
│ │ ├── labels.txt
│ └── csmsc_mini.zip
├── pretrained_models
│ ├── fastspeech2_aishell3_ckpt_1.1.0
│ │ ├── default.yaml
│ │ ├── energy_stats.npy
│ │ ├── phone_id_map.txt
│ │ ├── pitch_stats.npy
│ │ ├── snapshot_iter_96400.pdz
│ │ ├── speaker_id_map.txt
│ │ └── speech_stats.npy
│ └── fastspeech2_aishell3_ckpt_1.1.0.zip
└── tools
├── aligner
│ ├── aishell3_model
│ ├── aishell3_model.zip
│ └── simple.lexicon
├── montreal-forced-aligner
│ ├── bin
│ ├── lib
│ └── pretrained_models
└── montreal-forced-aligner_linux.tar.gz
...
```
### Set finetune.yaml
`finetune.yaml` contains some configurations for fine-tuning. You can try various options to get a better result.
Arguments:
- `batch_size`: finetune batch size. Default: -1, which means 64, the same as the pretrained model.
- `learning_rate`: learning rate. Default: 0.0001
- `num_snapshots`: number of saved models. Default: -1, which means 5, the same as the pretrained model.
- `frozen_layers`: layers to freeze. Must be a list. If you don't want to freeze any layer, set it to [] (see the sketch below).
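As a reading aid, here is a minimal Python sketch of how these values override the pretrained `default.yaml`. It mirrors the override logic in `finetune.py`; the paths are assumptions based on the directory layout above, and you do not need to run it yourself.
```python
import yaml
from yacs.config import CfgNode

# load the pretrained training config and the finetune overrides
with open("pretrained_models/fastspeech2_aishell3_ckpt_1.1.0/default.yaml") as f:
    config = CfgNode(yaml.safe_load(f))
with open("finetune.yaml") as f:
    finetune_config = CfgNode(yaml.safe_load(f))

# a non-positive value (-1) keeps the corresponding value of the pretrained configuration
if finetune_config.batch_size > 0:
    config.batch_size = finetune_config.batch_size
if finetune_config.learning_rate > 0:
    config.optimizer.learning_rate = finetune_config.learning_rate
if finetune_config.num_snapshots > 0:
    config.num_snapshots = finetune_config.num_snapshots

# layers listed in frozen_layers are frozen during finetuning, e.g. ["encoder", "duration_predictor"]
frozen_layers = finetune_config.frozen_layers
```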
## Get Started
Run the command below to
1. **source path**.
2. finetune the model.
3. synthesize wavs.
- synthesize waveform from text file.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to run only one stage.
### Model Finetune
Finetune a FastSpeech2 model.
```bash
./run.sh --stage 0 --stop-stage 0
```
`stage 0` of `run.sh` calls `finetune.py`, here's the complete help message.
```text
usage: finetune.py [-h] [--input_dir INPUT_DIR] [--pretrained_model_dir PRETRAINED_MODEL_DIR]
[--mfa_dir MFA_DIR] [--dump_dir DUMP_DIR]
[--output_dir OUTPUT_DIR] [--lang LANG]
[--ngpu NGPU]
optional arguments:
-h, --help show this help message and exit
--input_dir INPUT_DIR
directory containing audio and label file
--pretrained_model_dir PRETRAINED_MODEL_DIR
Path to pretrained model
--mfa_dir MFA_DIR directory to save aligned files
--dump_dir DUMP_DIR
directory to save feature files and metadata
--output_dir OUTPUT_DIR
directory to save finetune model
--lang LANG Choose input audio language, zh or en
--ngpu NGPU if ngpu=0, use cpu
--epoch EPOCH the epoch of finetune
--batch_size BATCH_SIZE
the batch size of finetune, default -1 means same as pretrained model
```
1. `--input_dir` is the directory containing audio and label file.
2. `--pretrained_model_dir` is the directory including the pretrained fastspeech2_aishell3 model.
3. `--mfa_dir` is the directory to save the alignment results from the pretrained MFA aishell3 model.
4. `--dump_dir` is the directory including audio feature and metadata.
5. `--output_dir` is the directory to save finetune model.
6. `--lang` is the language of input audio, zh or en.
7. `--ngpu` is the number of gpus to use; if ngpu == 0, use cpu.
8. `--epoch` is the number of finetuning epochs.
9. `--batch_size` is the finetuning batch size; -1 means the same as the pretrained model.
### Synthesizing
We use [HiFiGAN](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/voc5) as the neural vocoder.
Assume the path to the hifigan model is `./pretrained_models`. Download the pretrained HiFiGAN model from [hifigan_aishell3_ckpt_0.2.0](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/hifigan/hifigan_aishell3_ckpt_0.2.0.zip) and unzip it.
```bash
cd pretrained_models
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/hifigan/hifigan_aishell3_ckpt_0.2.0.zip
unzip hifigan_aishell3_ckpt_0.2.0.zip
cd ../
```
HiFiGAN checkpoint contains files listed below.
```text
hifigan_aishell3_ckpt_0.2.0
├── default.yaml # default config used to train HiFiGAN
├── feats_stats.npy # statistics used to normalize spectrogram when training HiFiGAN
└── snapshot_iter_2500000.pdz # generator parameters of HiFiGAN
```
Modify `ckpt` in `run.sh` to the final model in `exp/default/checkpoints`.
```bash
./run.sh --stage 1 --stop-stage 1
```
`stage 1` of `run.sh` calls `${BIN_DIR}/../synthesize_e2e.py`, which can synthesize waveform from text file.
```text
usage: synthesize_e2e.py [-h]
[--am {speedyspeech_csmsc,speedyspeech_aishell3,fastspeech2_csmsc,fastspeech2_ljspeech,fastspeech2_aishell3,fastspeech2_vctk,tacotron2_csmsc,tacotron2_ljspeech}]
[--am_config AM_CONFIG] [--am_ckpt AM_CKPT]
[--am_stat AM_STAT] [--phones_dict PHONES_DICT]
[--tones_dict TONES_DICT]
[--speaker_dict SPEAKER_DICT] [--spk_id SPK_ID]
[--voc {pwgan_csmsc,pwgan_ljspeech,pwgan_aishell3,pwgan_vctk,mb_melgan_csmsc,style_melgan_csmsc,hifigan_csmsc,hifigan_ljspeech,hifigan_aishell3,hifigan_vctk,wavernn_csmsc}]
[--voc_config VOC_CONFIG] [--voc_ckpt VOC_CKPT]
[--voc_stat VOC_STAT] [--lang LANG]
[--inference_dir INFERENCE_DIR] [--ngpu NGPU]
[--text TEXT] [--output_dir OUTPUT_DIR]
Synthesize with acoustic model & vocoder
optional arguments:
-h, --help show this help message and exit
--am {speedyspeech_csmsc,speedyspeech_aishell3,fastspeech2_csmsc,fastspeech2_ljspeech,fastspeech2_aishell3,fastspeech2_vctk,tacotron2_csmsc,tacotron2_ljspeech}
Choose acoustic model type of tts task.
--am_config AM_CONFIG
Config of acoustic model.
--am_ckpt AM_CKPT Checkpoint file of acoustic model.
--am_stat AM_STAT mean and standard deviation used to normalize
spectrogram when training acoustic model.
--phones_dict PHONES_DICT
phone vocabulary file.
--tones_dict TONES_DICT
tone vocabulary file.
--speaker_dict SPEAKER_DICT
speaker id map file.
--spk_id SPK_ID spk id for multi speaker acoustic model
--voc {pwgan_csmsc,pwgan_ljspeech,pwgan_aishell3,pwgan_vctk,mb_melgan_csmsc,style_melgan_csmsc,hifigan_csmsc,hifigan_ljspeech,hifigan_aishell3,hifigan_vctk,wavernn_csmsc}
Choose vocoder type of tts task.
--voc_config VOC_CONFIG
Config of voc.
--voc_ckpt VOC_CKPT Checkpoint file of voc.
--voc_stat VOC_STAT mean and standard deviation used to normalize
spectrogram when training voc.
--lang LANG Choose model language. zh or en
--inference_dir INFERENCE_DIR
dir to save inference models
--ngpu NGPU if ngpu == 0, use cpu.
--text TEXT text to synthesize, a 'utt_id sentence' pair per line.
--output_dir OUTPUT_DIR
output dir.
```
1. `--am` is acoustic model type with the format {model_name}_{dataset}
2. `--am_config`, `--am_ckpt`, `--am_stat`, `--phones_dict` `--speaker_dict` are arguments for acoustic model, which correspond to the 5 files in the fastspeech2 pretrained model.
3. `--voc` is vocoder type with the format {model_name}_{dataset}
4. `--voc_config`, `--voc_ckpt`, `--voc_stat` are arguments for vocoder, which correspond to the 3 files in the parallel wavegan pretrained model.
5. `--lang` is the model language, which can be `zh` or `en`.
6. `--text` is the text file, which contains sentences to synthesize.
7. `--output_dir` is the directory to save synthesized audio files.
8. `--ngpu` is the number of gpus to use, if ngpu == 0, use cpu.
### Tips
If you want to get better audio quality, you can use more audio data for finetuning.
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
from pathlib import Path
from typing import List
from typing import Union
import yaml
from local.check_oov import get_check_result
from local.extract import extract_feature
from local.label_process import get_single_label
from local.prepare_env import generate_finetune_env
from local.train import train_sp
from paddle import distributed as dist
from yacs.config import CfgNode
from utils.gen_duration_from_textgrid import gen_duration_from_textgrid
DICT_EN = 'tools/aligner/cmudict-0.7b'
DICT_ZH = 'tools/aligner/simple.lexicon'
MODEL_DIR_EN = 'tools/aligner/vctk_model.zip'
MODEL_DIR_ZH = 'tools/aligner/aishell3_model.zip'
MFA_PHONE_EN = 'tools/aligner/vctk_model/meta.yaml'
MFA_PHONE_ZH = 'tools/aligner/aishell3_model/meta.yaml'
MFA_PATH = 'tools/montreal-forced-aligner/bin'
os.environ['PATH'] = MFA_PATH + '/:' + os.environ['PATH']
class TrainArgs():
def __init__(self,
ngpu,
config_file,
dump_dir: Path,
output_dir: Path,
frozen_layers: List[str]):
# config: fastspeech2 config file.
self.config = str(config_file)
self.train_metadata = str(dump_dir / "train/norm/metadata.jsonl")
self.dev_metadata = str(dump_dir / "dev/norm/metadata.jsonl")
# model output dir.
self.output_dir = str(output_dir)
self.ngpu = ngpu
self.phones_dict = str(dump_dir / "phone_id_map.txt")
self.speaker_dict = str(dump_dir / "speaker_id_map.txt")
self.voice_cloning = False
# frozen layers
self.frozen_layers = frozen_layers
def get_mfa_result(
input_dir: Union[str, Path],
mfa_dir: Union[str, Path],
lang: str='en', ):
"""get mfa result
Args:
input_dir (Union[str, Path]): input dir including wav file and label
mfa_dir (Union[str, Path]): mfa result dir
lang (str, optional): input audio language. Defaults to 'en'.
"""
# MFA
if lang == 'en':
DICT = DICT_EN
MODEL_DIR = MODEL_DIR_EN
elif lang == 'zh':
DICT = DICT_ZH
MODEL_DIR = MODEL_DIR_ZH
else:
        raise ValueError("lang must be 'en' or 'zh'")
CMD = 'mfa_align' + ' ' + str(
input_dir) + ' ' + DICT + ' ' + MODEL_DIR + ' ' + str(mfa_dir)
os.system(CMD)
if __name__ == '__main__':
# parse config and args
parser = argparse.ArgumentParser(
description="Preprocess audio and then extract features.")
parser.add_argument(
"--input_dir",
type=str,
default="./input/baker_mini",
help="directory containing audio and label file")
parser.add_argument(
"--pretrained_model_dir",
type=str,
default="./pretrained_models/fastspeech2_aishell3_ckpt_1.1.0",
help="Path to pretrained model")
parser.add_argument(
"--mfa_dir",
type=str,
default="./mfa_result",
help="directory to save aligned files")
parser.add_argument(
"--dump_dir",
type=str,
default="./dump",
help="directory to save feature files and metadata.")
parser.add_argument(
"--output_dir",
type=str,
default="./exp/default/",
help="directory to save finetune model.")
parser.add_argument(
'--lang',
type=str,
default='zh',
choices=['zh', 'en'],
help='Choose input audio language. zh or en')
parser.add_argument(
"--ngpu", type=int, default=2, help="if ngpu=0, use cpu.")
parser.add_argument("--epoch", type=int, default=100, help="finetune epoch")
parser.add_argument(
"--finetune_config",
type=str,
default="./finetune.yaml",
help="Path to finetune config file")
args = parser.parse_args()
fs = 24000
n_shift = 300
input_dir = Path(args.input_dir).expanduser()
mfa_dir = Path(args.mfa_dir).expanduser()
mfa_dir.mkdir(parents=True, exist_ok=True)
dump_dir = Path(args.dump_dir).expanduser()
dump_dir.mkdir(parents=True, exist_ok=True)
output_dir = Path(args.output_dir).expanduser()
output_dir.mkdir(parents=True, exist_ok=True)
pretrained_model_dir = Path(args.pretrained_model_dir).expanduser()
# read config
config_file = pretrained_model_dir / "default.yaml"
with open(config_file) as f:
config = CfgNode(yaml.safe_load(f))
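    # continue training for args.epoch more epochs on top of the pretrained model's schedule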
config.max_epoch = config.max_epoch + args.epoch
with open(args.finetune_config) as f2:
finetune_config = CfgNode(yaml.safe_load(f2))
config.batch_size = finetune_config.batch_size if finetune_config.batch_size > 0 else config.batch_size
config.optimizer.learning_rate = finetune_config.learning_rate if finetune_config.learning_rate > 0 else config.optimizer.learning_rate
config.num_snapshots = finetune_config.num_snapshots if finetune_config.num_snapshots > 0 else config.num_snapshots
frozen_layers = finetune_config.frozen_layers
assert type(frozen_layers) == list, "frozen_layers should be set a list."
if args.lang == 'en':
lexicon_file = DICT_EN
mfa_phone_file = MFA_PHONE_EN
elif args.lang == 'zh':
lexicon_file = DICT_ZH
mfa_phone_file = MFA_PHONE_ZH
else:
        raise ValueError("--lang must be 'en' or 'zh'")
print(f"finetune max_epoch: {config.max_epoch}")
print(f"finetune batch_size: {config.batch_size}")
print(f"finetune learning_rate: {config.optimizer.learning_rate}")
print(f"finetune num_snapshots: {config.num_snapshots}")
print(f"finetune frozen_layers: {frozen_layers}")
am_phone_file = pretrained_model_dir / "phone_id_map.txt"
label_file = input_dir / "labels.txt"
#check phone for mfa and am finetune
oov_words, oov_files, oov_file_words = get_check_result(
label_file, lexicon_file, mfa_phone_file, am_phone_file)
input_dir = get_single_label(label_file, oov_files, input_dir)
# get mfa result
get_mfa_result(input_dir, mfa_dir, args.lang)
    # generate durations.txt
duration_file = "./durations.txt"
gen_duration_from_textgrid(mfa_dir, duration_file, fs, n_shift)
# generate phone and speaker map files
extract_feature(duration_file, config, input_dir, dump_dir,
pretrained_model_dir)
# create finetune env
generate_finetune_env(output_dir, pretrained_model_dir)
# create a new args for training
train_args = TrainArgs(args.ngpu, config_file, dump_dir, output_dir,
frozen_layers)
# finetune models
# dispatch
if args.ngpu > 1:
dist.spawn(train_sp, (train_args, config), nprocs=args.ngpu)
else:
train_sp(train_args, config)
###########################################################
# PARAS SETTING #
###########################################################
# Set to -1 to indicate that the parameter is the same as the pretrained model configuration
batch_size: -1
learning_rate: 0.0001 # learning rate
num_snapshots: -1
# frozen_layers should be a list
# if you don't need to freeze, set frozen_layers to []
frozen_layers: ["encoder", "duration_predictor"]
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from pathlib import Path
from typing import Dict
from typing import List
from typing import Union
def check_phone(label_file: Union[str, Path],
pinyin_phones: Dict[str, str],
mfa_phones: List[str],
am_phones: List[str],
oov_record: str="./oov_info.txt"):
"""Check whether the phoneme corresponding to the audio text content
is in the phoneme list of the pretrained mfa model to ensure that the alignment is normal.
Check whether the phoneme corresponding to the audio text content
    is in the phoneme list of the pretrained am model to ensure that finetuning (normalization) works correctly.
Args:
label_file (Union[str, Path]): label file, format: utt_id|phone seq
pinyin_phones (dict): pinyin to phones map dict
mfa_phones (list): the phone list of pretrained mfa model
        am_phones (list): the phone list of the pretrained am model
Returns:
oov_words (list): oov words
oov_files (list): utt id list that exist oov
oov_file_words (dict): the oov file and oov phone in this file
"""
oov_words = []
oov_files = []
oov_file_words = {}
with open(label_file, "r") as f:
for line in f.readlines():
utt_id = line.split("|")[0]
transcription = line.strip().split("|")[1]
flag = 0
temp_oov_words = []
for word in transcription.split(" "):
if word not in pinyin_phones.keys():
temp_oov_words.append(word)
flag = 1
if word not in oov_words:
oov_words.append(word)
else:
for p in pinyin_phones[word]:
if p not in mfa_phones or p not in am_phones:
temp_oov_words.append(word)
flag = 1
if word not in oov_words:
oov_words.append(word)
if flag == 1:
oov_files.append(utt_id)
oov_file_words[utt_id] = temp_oov_words
if oov_record is not None:
with open(oov_record, "w") as fw:
fw.write("oov_words: " + str(oov_words) + "\n")
fw.write("oov_files: " + str(oov_files) + "\n")
fw.write("oov_file_words: " + str(oov_file_words) + "\n")
return oov_words, oov_files, oov_file_words
def get_pinyin_phones(lexicon_file: Union[str, Path]):
# pinyin to phones
pinyin_phones = {}
with open(lexicon_file, "r") as f2:
for line in f2.readlines():
line_list = line.strip().split(" ")
pinyin = line_list[0]
if line_list[1] == '':
phones = line_list[2:]
else:
phones = line_list[1:]
pinyin_phones[pinyin] = phones
return pinyin_phones
def get_mfa_phone(mfa_phone_file: Union[str, Path]):
# get phones from pretrained mfa model (meta.yaml)
mfa_phones = []
with open(mfa_phone_file, "r") as f:
for line in f.readlines():
if line.startswith("-"):
phone = line.strip().split(" ")[-1]
mfa_phones.append(phone)
return mfa_phones
def get_am_phone(am_phone_file: Union[str, Path]):
# get phones from pretrained am model (phone_id_map.txt)
am_phones = []
with open(am_phone_file, "r") as f:
for line in f.readlines():
phone = line.strip().split(" ")[0]
am_phones.append(phone)
return am_phones
def get_check_result(label_file: Union[str, Path],
lexicon_file: Union[str, Path],
mfa_phone_file: Union[str, Path],
am_phone_file: Union[str, Path]):
pinyin_phones = get_pinyin_phones(lexicon_file)
mfa_phones = get_mfa_phone(mfa_phone_file)
am_phones = get_am_phone(am_phone_file)
oov_words, oov_files, oov_file_words = check_phone(
label_file, pinyin_phones, mfa_phones, am_phones)
return oov_words, oov_files, oov_file_words
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from operator import itemgetter
from pathlib import Path
from typing import Dict
from typing import Union
import jsonlines
import numpy as np
from sklearn.preprocessing import StandardScaler
from tqdm import tqdm
from paddlespeech.t2s.datasets.data_table import DataTable
from paddlespeech.t2s.datasets.get_feats import Energy
from paddlespeech.t2s.datasets.get_feats import LogMelFBank
from paddlespeech.t2s.datasets.get_feats import Pitch
from paddlespeech.t2s.datasets.preprocess_utils import get_phn_dur
from paddlespeech.t2s.datasets.preprocess_utils import merge_silence
from paddlespeech.t2s.exps.fastspeech2.preprocess import process_sentences
def read_stats(stats_file: Union[str, Path]):
scaler = StandardScaler()
scaler.mean_ = np.load(stats_file)[0]
scaler.scale_ = np.load(stats_file)[1]
scaler.n_features_in_ = scaler.mean_.shape[0]
return scaler
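# Note: each *_stats.npy file stores two rows: row 0 is the feature-wise mean
# and row 1 is the feature-wise scale (standard deviation) loaded into the
# StandardScaler above.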
def get_stats(pretrained_model_dir: Path):
speech_stats_file = pretrained_model_dir / "speech_stats.npy"
pitch_stats_file = pretrained_model_dir / "pitch_stats.npy"
energy_stats_file = pretrained_model_dir / "energy_stats.npy"
speech_scaler = read_stats(speech_stats_file)
pitch_scaler = read_stats(pitch_stats_file)
energy_scaler = read_stats(energy_stats_file)
return speech_scaler, pitch_scaler, energy_scaler
def get_map(duration_file: Union[str, Path],
dump_dir: Path,
pretrained_model_dir: Path):
"""get phone map and speaker map, save on dump_dir
Args:
duration_file (str): durantions.txt
dump_dir (Path): dump dir
pretrained_model_dir (Path): pretrained model dir
"""
# copy phone map file from pretrained model path
phones_dict = dump_dir / "phone_id_map.txt"
os.system("cp %s %s" %
(pretrained_model_dir / "phone_id_map.txt", phones_dict))
# create a new speaker map file, replace the previous speakers.
sentences, speaker_set = get_phn_dur(duration_file)
merge_silence(sentences)
speakers = sorted(list(speaker_set))
num = len(speakers)
speaker_dict = dump_dir / "speaker_id_map.txt"
with open(speaker_dict, 'w') as f, open(pretrained_model_dir /
"speaker_id_map.txt", 'r') as fr:
for i, spk in enumerate(speakers):
f.write(spk + ' ' + str(i) + '\n')
for line in fr.readlines():
spk_id = line.strip().split(" ")[-1]
if int(spk_id) >= num:
f.write(line)
vocab_phones = {}
with open(phones_dict, 'rt') as f:
phn_id = [line.strip().split() for line in f.readlines()]
for phn, id in phn_id:
vocab_phones[phn] = int(id)
vocab_speaker = {}
with open(speaker_dict, 'rt') as f:
spk_id = [line.strip().split() for line in f.readlines()]
for spk, id in spk_id:
vocab_speaker[spk] = int(id)
return sentences, vocab_phones, vocab_speaker
def get_extractor(config):
# Extractor
mel_extractor = LogMelFBank(
sr=config.fs,
n_fft=config.n_fft,
hop_length=config.n_shift,
win_length=config.win_length,
window=config.window,
n_mels=config.n_mels,
fmin=config.fmin,
fmax=config.fmax)
pitch_extractor = Pitch(
sr=config.fs,
hop_length=config.n_shift,
f0min=config.f0min,
f0max=config.f0max)
energy_extractor = Energy(
n_fft=config.n_fft,
hop_length=config.n_shift,
win_length=config.win_length,
window=config.window)
return mel_extractor, pitch_extractor, energy_extractor
def normalize(speech_scaler,
pitch_scaler,
energy_scaler,
vocab_phones: Dict,
vocab_speaker: Dict,
raw_dump_dir: Path,
type: str):
dumpdir = raw_dump_dir / type / "norm"
dumpdir = Path(dumpdir).expanduser()
dumpdir.mkdir(parents=True, exist_ok=True)
# get dataset
metadata_file = raw_dump_dir / type / "raw" / "metadata.jsonl"
with jsonlines.open(metadata_file, 'r') as reader:
metadata = list(reader)
dataset = DataTable(
metadata,
converters={
"speech": np.load,
"pitch": np.load,
"energy": np.load,
})
logging.info(f"The number of files = {len(dataset)}.")
# process each file
output_metadata = []
for item in tqdm(dataset):
utt_id = item['utt_id']
speech = item['speech']
pitch = item['pitch']
energy = item['energy']
# normalize
speech = speech_scaler.transform(speech)
speech_dir = dumpdir / "data_speech"
speech_dir.mkdir(parents=True, exist_ok=True)
speech_path = speech_dir / f"{utt_id}_speech.npy"
np.save(speech_path, speech.astype(np.float32), allow_pickle=False)
pitch = pitch_scaler.transform(pitch)
pitch_dir = dumpdir / "data_pitch"
pitch_dir.mkdir(parents=True, exist_ok=True)
pitch_path = pitch_dir / f"{utt_id}_pitch.npy"
np.save(pitch_path, pitch.astype(np.float32), allow_pickle=False)
energy = energy_scaler.transform(energy)
energy_dir = dumpdir / "data_energy"
energy_dir.mkdir(parents=True, exist_ok=True)
energy_path = energy_dir / f"{utt_id}_energy.npy"
np.save(energy_path, energy.astype(np.float32), allow_pickle=False)
phone_ids = [vocab_phones[p] for p in item['phones']]
spk_id = vocab_speaker[item["speaker"]]
record = {
"utt_id": item['utt_id'],
"spk_id": spk_id,
"text": phone_ids,
"text_lengths": item['text_lengths'],
"speech_lengths": item['speech_lengths'],
"durations": item['durations'],
"speech": str(speech_path),
"pitch": str(pitch_path),
"energy": str(energy_path)
}
# add spk_emb for voice cloning
if "spk_emb" in item:
record["spk_emb"] = str(item["spk_emb"])
output_metadata.append(record)
output_metadata.sort(key=itemgetter('utt_id'))
output_metadata_path = Path(dumpdir) / "metadata.jsonl"
with jsonlines.open(output_metadata_path, 'w') as writer:
for item in output_metadata:
writer.write(item)
logging.info(f"metadata dumped into {output_metadata_path}")
def extract_feature(duration_file: str,
config,
input_dir: Path,
dump_dir: Path,
pretrained_model_dir: Path):
sentences, vocab_phones, vocab_speaker = get_map(duration_file, dump_dir,
pretrained_model_dir)
mel_extractor, pitch_extractor, energy_extractor = get_extractor(config)
wav_files = sorted(list((input_dir).rglob("*.wav")))
# split data into 3 sections, train: len(wav_files) - 2, dev: 1, test: 1
num_train = len(wav_files) - 2
num_dev = 1
print(num_train, num_dev)
train_wav_files = wav_files[:num_train]
dev_wav_files = wav_files[num_train:num_train + num_dev]
test_wav_files = wav_files[num_train + num_dev:]
train_dump_dir = dump_dir / "train" / "raw"
train_dump_dir.mkdir(parents=True, exist_ok=True)
dev_dump_dir = dump_dir / "dev" / "raw"
dev_dump_dir.mkdir(parents=True, exist_ok=True)
test_dump_dir = dump_dir / "test" / "raw"
test_dump_dir.mkdir(parents=True, exist_ok=True)
# process for the 3 sections
num_cpu = 4
cut_sil = True
spk_emb_dir = None
write_metadata_method = "w"
speech_scaler, pitch_scaler, energy_scaler = get_stats(pretrained_model_dir)
if train_wav_files:
process_sentences(
config=config,
fps=train_wav_files,
sentences=sentences,
output_dir=train_dump_dir,
mel_extractor=mel_extractor,
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
nprocs=num_cpu,
cut_sil=cut_sil,
spk_emb_dir=spk_emb_dir,
write_metadata_method=write_metadata_method)
# norm
normalize(speech_scaler, pitch_scaler, energy_scaler, vocab_phones,
vocab_speaker, dump_dir, "train")
if dev_wav_files:
process_sentences(
config=config,
fps=dev_wav_files,
sentences=sentences,
output_dir=dev_dump_dir,
mel_extractor=mel_extractor,
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
nprocs=num_cpu,
cut_sil=cut_sil,
spk_emb_dir=spk_emb_dir,
write_metadata_method=write_metadata_method)
# norm
normalize(speech_scaler, pitch_scaler, energy_scaler, vocab_phones,
vocab_speaker, dump_dir, "dev")
if test_wav_files:
process_sentences(
config=config,
fps=test_wav_files,
sentences=sentences,
output_dir=test_dump_dir,
mel_extractor=mel_extractor,
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
nprocs=num_cpu,
cut_sil=cut_sil,
spk_emb_dir=spk_emb_dir,
write_metadata_method=write_metadata_method)
# norm
normalize(speech_scaler, pitch_scaler, energy_scaler, vocab_phones,
vocab_speaker, dump_dir, "test")
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from pathlib import Path
from typing import List
from typing import Union
def change_baker_label(baker_label_file: Union[str, Path],
out_label_file: Union[str, Path]):
"""change baker label file to regular label file
Args:
baker_label_file (Union[str, Path]): Original baker label file
out_label_file (Union[str, Path]): regular label file
"""
with open(baker_label_file) as f:
lines = f.readlines()
with open(out_label_file, "w") as fw:
for i in range(0, len(lines), 2):
utt_id = lines[i].split()[0]
transcription = lines[i + 1].strip()
fw.write(utt_id + "|" + transcription + "\n")
def get_single_label(label_file: Union[str, Path],
oov_files: List[Union[str, Path]],
input_dir: Union[str, Path]):
"""Divide the label file into individual files according to label_file
Args:
label_file (str or Path): label file, format: utt_id|phones id
input_dir (Path): input dir including audios
"""
input_dir = Path(input_dir).expanduser()
new_dir = input_dir / "newdir"
new_dir.mkdir(parents=True, exist_ok=True)
with open(label_file, "r") as f:
for line in f.readlines():
utt_id = line.split("|")[0]
if utt_id not in oov_files:
transcription = line.split("|")[1].strip()
wav_file = str(input_dir) + "/" + utt_id + ".wav"
new_wav_file = str(new_dir) + "/" + utt_id + ".wav"
os.system("cp %s %s" % (wav_file, new_wav_file))
single_file = str(new_dir) + "/" + utt_id + ".txt"
with open(single_file, "w") as fw:
fw.write(transcription)
return new_dir
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from pathlib import Path
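# generate_finetune_env copies a pretrained checkpoint (*.pdz) into
# output_dir/checkpoints and writes a one-line records.jsonl recording its path
# and iteration, so that fine-tuning can resume from the pretrained weights.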
def generate_finetune_env(output_dir: Path, pretrained_model_dir: Path):
output_dir = output_dir / "checkpoints/"
output_dir = output_dir.resolve()
output_dir.mkdir(parents=True, exist_ok=True)
model_path = sorted(list((pretrained_model_dir).rglob("*.pdz")))[0]
model_path = model_path.resolve()
iter = int(str(model_path).split("_")[-1].split(".")[0])
model_file = str(model_path).split("/")[-1]
os.system("cp %s %s" % (model_path, output_dir))
records_file = output_dir / "records.jsonl"
with open(records_file, "w") as f:
line = "\"time\": \"2022-08-06 07:51:53.463650\", \"path\": \"%s\", \"iteration\": %d" % (
str(output_dir / model_file), iter)
f.write("{" + line + "}" + "\n")
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
import shutil
from pathlib import Path
from typing import List
import jsonlines
import numpy as np
import paddle
from paddle import DataParallel
from paddle import distributed as dist
from paddle.io import DataLoader
from paddle.io import DistributedBatchSampler
from paddlespeech.t2s.datasets.am_batch_fn import fastspeech2_multi_spk_batch_fn
from paddlespeech.t2s.datasets.am_batch_fn import fastspeech2_single_spk_batch_fn
from paddlespeech.t2s.datasets.data_table import DataTable
from paddlespeech.t2s.models.fastspeech2 import FastSpeech2
from paddlespeech.t2s.models.fastspeech2 import FastSpeech2Evaluator
from paddlespeech.t2s.models.fastspeech2 import FastSpeech2Updater
from paddlespeech.t2s.training.extensions.snapshot import Snapshot
from paddlespeech.t2s.training.extensions.visualizer import VisualDL
from paddlespeech.t2s.training.optimizer import build_optimizers
from paddlespeech.t2s.training.seeding import seed_everything
from paddlespeech.t2s.training.trainer import Trainer
def freeze_layer(model, layers: List[str]):
"""freeze layers
Args:
layers (List[str]): frozen layers
"""
for layer in layers:
for param in eval("model." + layer + ".parameters()"):
param.trainable = False
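# Usage sketch (layer names are illustrative): freeze_layer(model, ["encoder"])
# marks every parameter under model.encoder as non-trainable, so only the
# remaining layers are updated during fine-tuning.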
def train_sp(args, config):
# decides device type and whether to run in parallel
# setup running environment correctly
if (not paddle.is_compiled_with_cuda()) or args.ngpu == 0:
paddle.set_device("cpu")
else:
paddle.set_device("gpu")
world_size = paddle.distributed.get_world_size()
if world_size > 1:
paddle.distributed.init_parallel_env()
# set the random seed, it is a must for multiprocess training
seed_everything(config.seed)
print(
f"rank: {dist.get_rank()}, pid: {os.getpid()}, parent_pid: {os.getppid()}",
)
fields = [
"text", "text_lengths", "speech", "speech_lengths", "durations",
"pitch", "energy"
]
converters = {"speech": np.load, "pitch": np.load, "energy": np.load}
spk_num = None
if args.speaker_dict is not None:
print("multiple speaker fastspeech2!")
collate_fn = fastspeech2_multi_spk_batch_fn
with open(args.speaker_dict, 'rt') as f:
spk_id = [line.strip().split() for line in f.readlines()]
spk_num = len(spk_id)
fields += ["spk_id"]
elif args.voice_cloning:
print("Training voice cloning!")
collate_fn = fastspeech2_multi_spk_batch_fn
fields += ["spk_emb"]
converters["spk_emb"] = np.load
else:
print("single speaker fastspeech2!")
collate_fn = fastspeech2_single_spk_batch_fn
print("spk_num:", spk_num)
# dataloader has been too verbose
logging.getLogger("DataLoader").disabled = True
# construct dataset for training and validation
with jsonlines.open(args.train_metadata, 'r') as reader:
train_metadata = list(reader)
train_dataset = DataTable(
data=train_metadata,
fields=fields,
converters=converters, )
with jsonlines.open(args.dev_metadata, 'r') as reader:
dev_metadata = list(reader)
dev_dataset = DataTable(
data=dev_metadata,
fields=fields,
converters=converters, )
# collate function and dataloader
train_sampler = DistributedBatchSampler(
train_dataset,
batch_size=config.batch_size,
shuffle=True,
drop_last=True)
print("samplers done!")
train_dataloader = DataLoader(
train_dataset,
batch_sampler=train_sampler,
collate_fn=collate_fn,
num_workers=config.num_workers)
dev_dataloader = DataLoader(
dev_dataset,
shuffle=False,
drop_last=False,
batch_size=config.batch_size,
collate_fn=collate_fn,
num_workers=config.num_workers)
print("dataloaders done!")
with open(args.phones_dict, "r") as f:
phn_id = [line.strip().split() for line in f.readlines()]
vocab_size = len(phn_id)
print("vocab_size:", vocab_size)
odim = config.n_mels
model = FastSpeech2(
idim=vocab_size, odim=odim, spk_num=spk_num, **config["model"])
# freeze layer
if args.frozen_layers != []:
freeze_layer(model, args.frozen_layers)
if world_size > 1:
model = DataParallel(model)
print("model done!")
optimizer = build_optimizers(model, **config["optimizer"])
print("optimizer done!")
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
if dist.get_rank() == 0:
config_name = args.config.split("/")[-1]
# copy conf to output_dir
shutil.copyfile(args.config, output_dir / config_name)
updater = FastSpeech2Updater(
model=model,
optimizer=optimizer,
dataloader=train_dataloader,
output_dir=output_dir,
**config["updater"])
trainer = Trainer(updater, (config.max_epoch, 'epoch'), output_dir)
evaluator = FastSpeech2Evaluator(
model, dev_dataloader, output_dir=output_dir, **config["updater"])
if dist.get_rank() == 0:
trainer.extend(evaluator, trigger=(1, "epoch"))
trainer.extend(VisualDL(output_dir), trigger=(1, "iteration"))
trainer.extend(
Snapshot(max_size=config.num_snapshots), trigger=(1, 'epoch'))
trainer.run()
#!/bin/bash
export MAIN_ROOT=`realpath ${PWD}/../../../../`
export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH}
export LC_ALL=C
export PYTHONDONTWRITEBYTECODE=1
# Use UTF-8 in Python to avoid UnicodeDecodeError when LC_ALL=C
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH}
MODEL=fastspeech2
export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL}
#!/bin/bash
set -e
source path.sh
input_dir=./input/csmsc_mini
pretrained_model_dir=./pretrained_models/fastspeech2_aishell3_ckpt_1.1.0
mfa_dir=./mfa_result
dump_dir=./dump
output_dir=./exp/default
lang=zh
ngpu=1
finetune_config=./finetune.yaml
ckpt=snapshot_iter_96699
gpus=1
CUDA_VISIBLE_DEVICES=${gpus}
stage=0
stop_stage=100
# with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0`
# this can not be mixed with the use of `$1`, `$2` ...
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
# finetune
python3 finetune.py \
--input_dir=${input_dir} \
--pretrained_model_dir=${pretrained_model_dir} \
--mfa_dir=${mfa_dir} \
--dump_dir=${dump_dir} \
--output_dir=${output_dir} \
--lang=${lang} \
--ngpu=${ngpu} \
--epoch=100 \
--finetune_config=${finetune_config}
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
echo "in hifigan syn_e2e"
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=fastspeech2_aishell3 \
--am_config=${pretrained_model_dir}/default.yaml \
--am_ckpt=${output_dir}/checkpoints/${ckpt}.pdz \
--am_stat=${pretrained_model_dir}/speech_stats.npy \
--voc=hifigan_aishell3 \
--voc_config=pretrained_models/hifigan_aishell3_ckpt_0.2.0/default.yaml \
--voc_ckpt=pretrained_models/hifigan_aishell3_ckpt_0.2.0/snapshot_iter_2500000.pdz \
--voc_stat=pretrained_models/hifigan_aishell3_ckpt_0.2.0/feats_stats.npy \
--lang=zh \
--text=${BIN_DIR}/../sentences.txt \
--output_dir=./test_e2e/ \
--phones_dict=${dump_dir}/phone_id_map.txt \
--speaker_dict=${dump_dir}/speaker_id_map.txt \
--spk_id=0
fi
# VCTK # VCTK
* tts0 - Tactron2 * tts0 - Tacotron2
* tts1 - TransformerTTS * tts1 - TransformerTTS
* tts2 - SpeedySpeech * tts2 - SpeedySpeech
* tts3 - FastSpeech2 * tts3 - FastSpeech2
...@@ -9,3 +9,4 @@ ...@@ -9,3 +9,4 @@
* voc1 - Parallel WaveGAN * voc1 - Parallel WaveGAN
* voc2 - MelGAN * voc2 - MelGAN
* voc3 - MultiBand MelGAN * voc3 - MultiBand MelGAN
* ernie_sat - ERNIE-SAT
# ERNIE SAT with VCTK dataset # ERNIE-SAT with VCTK dataset
ERNIE-SAT is a speech-text joint pretraining framework that achieves SOTA results in cross-lingual multi-speaker speech synthesis and cross-lingual speech editing tasks. It can be applied to a series of scenarios such as Speech Editing, personalized Speech Synthesis, and Voice Cloning.
## Model Framework
In ERNIE-SAT, we propose two innovations:
- In the pretraining process, the phonemes corresponding to Chinese and English are used as input to achieve cross-lingual and personalized soft phoneme mapping.
- Joint mask learning of speech and text is used to align speech and text.
<p align="center">
<img src="https://user-images.githubusercontent.com/24568452/186110814-1b9c6618-a0ab-4c0c-bb3d-3d860b0e8cc2.png" />
</p>
## Dataset
### Download and Extract the dataset
Download VCTK-0.92 from its [Official Website](https://datashare.ed.ac.uk/handle/10283/3443) and extract it to `~/datasets`. Then the dataset is in the directory `~/datasets/VCTK-Corpus-0.92`.
### Get MFA Result and Extract
We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for fastspeech2.
You can download it from [vctk_alignment.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/VCTK-Corpus-0.92/vctk_alignment.tar.gz), or train your own MFA model by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) in our repo.
Note: we remove three speakers in VCTK-0.92 (see [reorganize_vctk.py](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/examples/other/mfa/local/reorganize_vctk.py)):
1. `p315`, because there is no text for it.
2. `p280` and `p362`, because there is no *_mic2.flac (which is better than *_mic1.flac) for them.
## Get Started
Assume the path to the dataset is `~/datasets/VCTK-Corpus-0.92`.
Assume the path to the MFA result of VCTK is `./vctk_alignment`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
- synthesize waveform from `metadata.jsonl`.
- synthesize waveform from text file.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage. For example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.
```text
dump
├── dev
│ ├── norm
│ └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│ ├── norm
│ └── raw
└── train
├── norm
├── raw
└── speech_stats.npy
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The raw folder contains speech features of each utterance, while the norm folder contains normalized ones. The statistics used to normalize features are computed from the training set, which is located in `dump/train/*_stats.npy`.
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains the phones, text_lengths, speech_lengths, durations, the path of speech features, the speaker, and the id of each utterance.
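If you want to take a quick look at this metadata, the sketch below loads it with `jsonlines` (a minimal, illustrative example; the exact key names are assumptions based on the description above and may differ slightly between models):
```python
import jsonlines

# Load the training split's metadata written by the preprocessing stage.
with jsonlines.open("dump/train/raw/metadata.jsonl") as reader:
    records = list(reader)

print(f"number of utterances: {len(records)}")
# Inspect a few fields of the first record; "utt_id" and "speech_lengths"
# follow the field description above (assumed key names).
first = records[0]
print(first["utt_id"], first["speech_lengths"])
```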
### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
### Synthesizing
We use [HiFiGAN](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/vctk/voc5) as the neural vocoder.
Download pretrained HiFiGAN model from [hifigan_vctk_ckpt_0.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/hifigan/hifigan_vctk_ckpt_0.2.0.zip) and unzip it.
```bash
unzip hifigan_vctk_ckpt_0.2.0.zip
```
The HiFiGAN checkpoint contains the files listed below.
```text
hifigan_vctk_ckpt_0.2.0
├── default.yaml # default config used to train HiFiGAN
├── feats_stats.npy # statistics used to normalize spectrogram when training HiFiGAN
└── snapshot_iter_2500000.pdz # generator parameters of HiFiGAN
```
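For reference, the `*_stats.npy` files in these checkpoints are assumed to follow the two-row layout used by the `read_stats` helper earlier in this document (row 0 is the feature-wise mean, row 1 is the scale); a minimal sketch to inspect the vocoder statistics:
```python
import numpy as np

# Assumption: feats_stats.npy stores the feature-wise mean in row 0 and the
# feature-wise scale (standard deviation) in row 1, as in read_stats above.
stats = np.load("hifigan_vctk_ckpt_0.2.0/feats_stats.npy")
mean, scale = stats[0], stats[1]
print("feature dim:", mean.shape[0])
print("scale range:", scale.min(), scale.max())
```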
`./local/synthesize.sh` calls `${BIN_DIR}/../synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
## Speech Synthesis and Speech Editing
### Prepare
**prepare aligner**
```bash
mkdir -p tools/aligner
cd tools
# download MFA
wget https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner/releases/download/v1.0.1/montreal-forced-aligner_linux.tar.gz
# extract MFA
tar xvf montreal-forced-aligner_linux.tar.gz
# fix .so of MFA
cd montreal-forced-aligner/lib
ln -snf libpython3.6m.so.1.0 libpython3.6m.so
cd -
# download align models and dicts
cd aligner
wget https://paddlespeech.bj.bcebos.com/MFA/ernie_sat/aishell3_model.zip
wget https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/simple.lexicon
wget https://paddlespeech.bj.bcebos.com/MFA/ernie_sat/vctk_model.zip
wget https://paddlespeech.bj.bcebos.com/MFA/LJSpeech-1.1/cmudict-0.7b
cd ../../
```
**prepare pretrained FastSpeech2 models**
ERNIE-SAT uses FastSpeech2 as the phoneme duration predictor:
```bash
mkdir download
cd download
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_conformer_baker_ckpt_0.5.zip
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_ljspeech_ckpt_0.5.zip
unzip fastspeech2_conformer_baker_ckpt_0.5.zip
unzip fastspeech2_nosil_ljspeech_ckpt_0.5.zip
cd ../
```
**prepare source data**
```bash
mkdir source
cd source
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/SSB03540307.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/SSB03540428.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/LJ050-0278.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/p243_313.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/p299_096.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/this_was_not_the_show_for_me.wav
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/source/README.md
cd ../
```
You can check the text of downloaded wavs in `source/README.md`.
### Speech Synthesis and Speech Editing
```bash
./run.sh --stage 3 --stop-stage 3 --gpus 0
```
`stage 3` of `run.sh` calls `local/synthesize_e2e.sh`; its `stage 0` is **Speech Synthesis** and its `stage 1` is **Speech Editing**.
You can modify `--wav_path`, `--old_str`, and `--new_str` yourself. `--old_str` should be the text corresponding to the audio of `--wav_path`, `--new_str` should be designed according to `--task_name`, and both `--source_lang` and `--target_lang` should be `en` for a model trained with the VCTK dataset.
## Pretrained Model
Pretrained ERNIE-SAT model:
- [erniesat_vctk_ckpt_1.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/ernie_sat/erniesat_vctk_ckpt_1.2.0.zip)
| Model | Step | eval/mlm_loss | eval/loss |
| :-------------: | :------------: | :-----: | :-----: |
| default | 8(gpu) x 199500 | 57.622215 | 57.622215 |
# This configuration was tested on 8 GPUs (A100) with 80GB GPU memory.
# It takes around 2 days to finish the training. You can adjust
# batch_size and num_workers here, and ngpu in local/train.sh, for your machine.
########################################################### ###########################################################
# FEATURE EXTRACTION SETTING # # FEATURE EXTRACTION SETTING #
########################################################### ###########################################################
...@@ -21,8 +24,8 @@ mlm_prob: 0.8 ...@@ -21,8 +24,8 @@ mlm_prob: 0.8
########################################################### ###########################################################
# DATA SETTING # # DATA SETTING #
########################################################### ###########################################################
batch_size: 20 batch_size: 40
num_workers: 2 num_workers: 8
########################################################### ###########################################################
# MODEL SETTING # # MODEL SETTING #
......
...@@ -4,31 +4,11 @@ config_path=$1 ...@@ -4,31 +4,11 @@ config_path=$1
train_output_path=$2 train_output_path=$2
ckpt_name=$3 ckpt_name=$3
stage=1 stage=0
stop_stage=1 stop_stage=0
# use am to predict duration here
# add am_phones_dict, am_tones_dict, etc.; the am can also be constructed in a new way so that this many parameters are no longer needed
# pwgan
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize.py \
--erniesat_config=${config_path} \
--erniesat_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--erniesat_stat=dump/train/speech_stats.npy \
--voc=pwgan_vctk \
--voc_config=pwg_vctk_ckpt_0.1.1/default.yaml \
--voc_ckpt=pwg_vctk_ckpt_0.1.1/snapshot_iter_1500000.pdz \
--voc_stat=pwg_vctk_ckpt_0.1.1/feats_stats.npy \
--test_metadata=dump/test/norm/metadata.jsonl \
--output_dir=${train_output_path}/test \
--phones_dict=dump/phone_id_map.txt
fi
# hifigan # hifigan
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \ FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \ FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize.py \ python3 ${BIN_DIR}/synthesize.py \
......
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
stage=0
stop_stage=1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
echo 'speech synthesize !'
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--task_name=synthesize \
--wav_path=source/p243_313.wav \
--old_str='For that reason cover should not be given' \
--new_str='I love you very much do you love me' \
--source_lang=en \
--target_lang=en \
--erniesat_config=${config_path} \
--phones_dict=dump/phone_id_map.txt \
--erniesat_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--erniesat_stat=dump/train/speech_stats.npy \
--voc=hifigan_vctk \
--voc_config=hifigan_vctk_ckpt_0.2.0/default.yaml \
--voc_ckpt=hifigan_vctk_ckpt_0.2.0/snapshot_iter_2500000.pdz \
--voc_stat=hifigan_vctk_ckpt_0.2.0/feats_stats.npy \
--output_name=exp/pred_gen.wav
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
echo 'speech edit !'
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--task_name=edit \
--wav_path=source/p243_313.wav \
--old_str='For that reason cover should not be given' \
--new_str='For that reason cover is not impossible to be given' \
--source_lang=en \
--target_lang=en \
--erniesat_config=${config_path} \
--phones_dict=dump/phone_id_map.txt \
--erniesat_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--erniesat_stat=dump/train/speech_stats.npy \
--voc=hifigan_vctk \
--voc_config=hifigan_vctk_ckpt_0.2.0/default.yaml \
--voc_ckpt=hifigan_vctk_ckpt_0.2.0/snapshot_iter_2500000.pdz \
--voc_stat=hifigan_vctk_ckpt_0.2.0/feats_stats.npy \
--output_name=exp/pred_edit.wav
fi
...@@ -8,5 +8,5 @@ python3 ${BIN_DIR}/train.py \ ...@@ -8,5 +8,5 @@ python3 ${BIN_DIR}/train.py \
--dev-metadata=dump/dev/norm/metadata.jsonl \ --dev-metadata=dump/dev/norm/metadata.jsonl \
--config=${config_path} \ --config=${config_path} \
--output-dir=${train_output_path} \ --output-dir=${train_output_path} \
--ngpu=2 \ --ngpu=8 \
--phones-dict=dump/phone_id_map.txt --phones-dict=dump/phone_id_map.txt
\ No newline at end of file
...@@ -3,13 +3,13 @@ ...@@ -3,13 +3,13 @@
set -e set -e
source path.sh source path.sh
gpus=0,1 gpus=0,1,2,3,4,5,6,7
stage=0 stage=0
stop_stage=100 stop_stage=100
conf_path=conf/default.yaml conf_path=conf/default.yaml
train_output_path=exp/default train_output_path=exp/default
ckpt_name=snapshot_iter_153.pdz ckpt_name=snapshot_iter_199500.pdz
# with the following command, you can choose the stage range you want to run # with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0` # such as `./run.sh --stage 0 --stop-stage 0`
...@@ -30,3 +30,7 @@ if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then ...@@ -30,3 +30,7 @@ if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
# synthesize, vocoder is pwgan # synthesize, vocoder is pwgan
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1 CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi
...@@ -216,7 +216,7 @@ optional arguments: ...@@ -216,7 +216,7 @@ optional arguments:
## Pretrained Model ## Pretrained Model
Pretrained FastSpeech2 model with no silence in the edge of audios: Pretrained FastSpeech2 model with no silence in the edge of audios:
- [fastspeech2_nosil_vctk_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_vctk_ckpt_0.5.zip) - [fastspeech2_vctk_ckpt_1.2.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_vctk_ckpt_1.2.0.zip)
The static model can be downloaded here: The static model can be downloaded here:
- [fastspeech2_vctk_static_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_vctk_static_1.1.0.zip) - [fastspeech2_vctk_static_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_vctk_static_1.1.0.zip)
...@@ -226,9 +226,11 @@ The ONNX model can be downloaded here: ...@@ -226,9 +226,11 @@ The ONNX model can be downloaded here:
FastSpeech2 checkpoint contains files listed below. FastSpeech2 checkpoint contains files listed below.
```text ```text
fastspeech2_nosil_vctk_ckpt_0.5 fastspeech2_vctk_ckpt_1.2.0
├── default.yaml # default config used to train fastspeech2 ├── default.yaml # default config used to train fastspeech2
├── energy_stats.npy # statistics used to normalize energy when training fastspeech2
├── phone_id_map.txt # phone vocabulary file when training fastspeech2 ├── phone_id_map.txt # phone vocabulary file when training fastspeech2
├── pitch_stats.npy # statistics used to normalize pitch when training fastspeech2
├── snapshot_iter_66200.pdz # model parameters and optimizer states ├── snapshot_iter_66200.pdz # model parameters and optimizer states
├── speaker_id_map.txt # speaker id map file when training a multi-speaker fastspeech2 ├── speaker_id_map.txt # speaker id map file when training a multi-speaker fastspeech2
└── speech_stats.npy # statistics used to normalize spectrogram when training fastspeech2 └── speech_stats.npy # statistics used to normalize spectrogram when training fastspeech2
...@@ -241,9 +243,9 @@ FLAGS_allocator_strategy=naive_best_fit \ ...@@ -241,9 +243,9 @@ FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \ FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \ python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=fastspeech2_vctk \ --am=fastspeech2_vctk \
--am_config=fastspeech2_nosil_vctk_ckpt_0.5/default.yaml \ --am_config=fastspeech2_vctk_ckpt_1.2.0/default.yaml \
--am_ckpt=fastspeech2_nosil_vctk_ckpt_0.5/snapshot_iter_66200.pdz \ --am_ckpt=fastspeech2_vctk_ckpt_1.2.0/snapshot_iter_66200.pdz \
--am_stat=fastspeech2_nosil_vctk_ckpt_0.5/speech_stats.npy \ --am_stat=fastspeech2_vctk_ckpt_1.2.0/speech_stats.npy \
--voc=pwgan_vctk \ --voc=pwgan_vctk \
--voc_config=pwg_vctk_ckpt_0.1.1/default.yaml \ --voc_config=pwg_vctk_ckpt_0.1.1/default.yaml \
--voc_ckpt=pwg_vctk_ckpt_0.1.1/snapshot_iter_1500000.pdz \ --voc_ckpt=pwg_vctk_ckpt_0.1.1/snapshot_iter_1500000.pdz \
...@@ -251,8 +253,8 @@ python3 ${BIN_DIR}/../synthesize_e2e.py \ ...@@ -251,8 +253,8 @@ python3 ${BIN_DIR}/../synthesize_e2e.py \
--lang=en \ --lang=en \
--text=${BIN_DIR}/../sentences_en.txt \ --text=${BIN_DIR}/../sentences_en.txt \
--output_dir=exp/default/test_e2e \ --output_dir=exp/default/test_e2e \
--phones_dict=fastspeech2_nosil_vctk_ckpt_0.5/phone_id_map.txt \ --phones_dict=fastspeech2_vctk_ckpt_1.2.0/phone_id_map.txt \
--speaker_dict=fastspeech2_nosil_vctk_ckpt_0.5/speaker_id_map.txt \ --speaker_dict=fastspeech2_vctk_ckpt_1.2.0/speaker_id_map.txt \
--spk_id=0 \ --spk_id=0 \
--inference_dir=exp/default/inference --inference_dir=exp/default/inference
``` ```
...@@ -44,8 +44,8 @@ fi ...@@ -44,8 +44,8 @@ fi
if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then
# install paddle2onnx # install paddle2onnx
version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}') version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}')
if [[ -z "$version" || ${version} != '0.9.8' ]]; then if [[ -z "$version" || ${version} != '1.0.0' ]]; then
pip install paddle2onnx==0.9.8 pip install paddle2onnx==1.0.0
fi fi
./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_vctk ./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_vctk
# considering the balance between speed and quality, we recommend that you use hifigan as vocoder # considering the balance between speed and quality, we recommend that you use hifigan as vocoder
......
...@@ -148,4 +148,4 @@ source path.sh ...@@ -148,4 +148,4 @@ source path.sh
CUDA_VISIBLE_DEVICES= bash ./local/test.sh ./data sv0_ecapa_tdnn_voxceleb12_ckpt_0_2_1/model/ conf/ecapa_tdnn.yaml CUDA_VISIBLE_DEVICES= bash ./local/test.sh ./data sv0_ecapa_tdnn_voxceleb12_ckpt_0_2_1/model/ conf/ecapa_tdnn.yaml
``` ```
The performance of the released models is shown in [this](./RESULTS.md) The performance of the released models is shown in [this](./RESULT.md)
...@@ -5,3 +5,7 @@ ...@@ -5,3 +5,7 @@
| Model | Number of Params | Release | Config | dim | Test set | Cosine | Cosine + S-Norm | | Model | Number of Params | Release | Config | dim | Test set | Cosine | Cosine + S-Norm |
| --- | --- | --- | --- | --- | --- | --- | ---- | | --- | --- | --- | --- | --- | --- | --- | ---- |
| ECAPA-TDNN | 85M | 0.2.1 | conf/ecapa_tdnn.yaml | 192 | test | 0.8188 | 0.7815| | ECAPA-TDNN | 85M | 0.2.1 | conf/ecapa_tdnn.yaml | 192 | test | 0.8188 | 0.7815|
> [SpeechBrain result](https://github.com/speechbrain/speechbrain/tree/develop/recipes/VoxCeleb/SpeakerRec#speaker-verification-using-ecapa-tdnn-embeddings):
> EER = 0.90% (voxceleb1 + voxceleb2) without s-norm
> EER = 0.80% (voxceleb1 + voxceleb2) with s-norm.
...@@ -34,3 +34,22 @@ Pretrain model from http://mobvoi-speech-public.ufile.ucloud.cn/public/wenet/wen ...@@ -34,3 +34,22 @@ Pretrain model from http://mobvoi-speech-public.ufile.ucloud.cn/public/wenet/wen
| conformer | 32.52 M | conf/conformer.yaml | spec_aug | aishell1 | ctc_greedy_search | - | 0.052534 | | conformer | 32.52 M | conf/conformer.yaml | spec_aug | aishell1 | ctc_greedy_search | - | 0.052534 |
| conformer | 32.52 M | conf/conformer.yaml | spec_aug | aishell1 | ctc_prefix_beam_search | - | 0.052915 | | conformer | 32.52 M | conf/conformer.yaml | spec_aug | aishell1 | ctc_prefix_beam_search | - | 0.052915 |
| conformer | 32.52 M | conf/conformer.yaml | spec_aug | aishell1 | attention_rescoring | - | 0.047904 | | conformer | 32.52 M | conf/conformer.yaml | spec_aug | aishell1 | attention_rescoring | - | 0.047904 |
## Conformer Streaming Pretrained Model
Pretrain model from https://paddlespeech.bj.bcebos.com/s2t/wenetspeech/asr1/asr1_chunk_conformer_wenetspeech_ckpt_1.0.0a.model.tar.gz
| Model | Params | Config | Augmentation| Test set | Decode method | Chunk Size | CER |
| --- | --- | --- | --- | --- | --- | --- | --- |
| conformer | 32.52 M | conf/chunk_conformer.yaml | spec_aug | aishell1 | attention | 16 | 0.056273 |
| conformer | 32.52 M | conf/chunk_conformer.yaml | spec_aug | aishell1 | ctc_greedy_search | 16 | 0.078918 |
| conformer | 32.52 M | conf/chunk_conformer.yaml | spec_aug | aishell1 | ctc_prefix_beam_search | 16 | 0.079080 |
| conformer | 32.52 M | conf/chunk_conformer.yaml | spec_aug | aishell1 | attention_rescoring | 16 | 0.054401 |
| Model | Params | Config | Augmentation| Test set | Decode method | Chunk Size | CER |
| --- | --- | --- | --- | --- | --- | --- | --- |
| conformer | 32.52 M | conf/chunk_conformer.yaml | spec_aug | aishell1 | attention | -1 | 0.050767 |
| conformer | 32.52 M | conf/chunk_conformer.yaml | spec_aug | aishell1 | ctc_greedy_search | -1 | 0.061884 |
| conformer | 32.52 M | conf/chunk_conformer.yaml | spec_aug | aishell1 | ctc_prefix_beam_search | -1 | 0.062056 |
| conformer | 32.52 M | conf/chunk_conformer.yaml | spec_aug | aishell1 | attention_rescoring | -1 | 0.052110 |
...@@ -251,7 +251,7 @@ optional arguments: ...@@ -251,7 +251,7 @@ optional arguments:
## Pretrained Model ## Pretrained Model
Pretrained FastSpeech2 model with no silence in the edge of audios: Pretrained FastSpeech2 model with no silence in the edge of audios:
- [fastspeech2_mix_ckpt_0.2.0.zip](https://paddlespeech.bj.bcebos.com/t2s/chinse_english_mixed/models/fastspeech2_mix_ckpt_0.2.0.zip) - [fastspeech2_mix_ckpt_1.2.0.zip](https://paddlespeech.bj.bcebos.com/t2s/chinse_english_mixed/models/fastspeech2_mix_ckpt_1.2.0.zip)
The static model can be downloaded here: The static model can be downloaded here:
- [fastspeech2_mix_static_0.2.0.zip](https://paddlespeech.bj.bcebos.com/t2s/chinse_english_mixed/models/fastspeech2_mix_static_0.2.0.zip) - [fastspeech2_mix_static_0.2.0.zip](https://paddlespeech.bj.bcebos.com/t2s/chinse_english_mixed/models/fastspeech2_mix_static_0.2.0.zip)
...@@ -262,9 +262,11 @@ The ONNX model can be downloaded here: ...@@ -262,9 +262,11 @@ The ONNX model can be downloaded here:
FastSpeech2 checkpoint contains files listed below. FastSpeech2 checkpoint contains files listed below.
```text ```text
fastspeech2_mix_ckpt_0.2.0 fastspeech2_mix_ckpt_1.2.0
├── default.yaml # default config used to train fastspeech2 ├── default.yaml # default config used to train fastspeech2
├── energy_stats.npy # statistics used to normalize energy when training fastspeech2
├── phone_id_map.txt # phone vocabulary file when training fastspeech2 ├── phone_id_map.txt # phone vocabulary file when training fastspeech2
├── pitch_stats.npy # statistics used to normalize pitch when training fastspeech2
├── snapshot_iter_99200.pdz # model parameters and optimizer states ├── snapshot_iter_99200.pdz # model parameters and optimizer states
├── speaker_id_map.txt # speaker id map file when training a multi-speaker fastspeech2 ├── speaker_id_map.txt # speaker id map file when training a multi-speaker fastspeech2
└── speech_stats.npy # statistics used to normalize spectrogram when training fastspeech2 └── speech_stats.npy # statistics used to normalize spectrogram when training fastspeech2
...@@ -281,9 +283,9 @@ FLAGS_allocator_strategy=naive_best_fit \ ...@@ -281,9 +283,9 @@ FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \ FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \ python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=fastspeech2_mix \ --am=fastspeech2_mix \
--am_config=fastspeech2_mix_ckpt_0.2.0/default.yaml \ --am_config=fastspeech2_mix_ckpt_1.2.0/default.yaml \
--am_ckpt=fastspeech2_mix_ckpt_0.2.0/snapshot_iter_99200.pdz \ --am_ckpt=fastspeech2_mix_ckpt_1.2.0/snapshot_iter_99200.pdz \
--am_stat=fastspeech2_mix_ckpt_0.2.0/speech_stats.npy \ --am_stat=fastspeech2_mix_ckpt_1.2.0/speech_stats.npy \
--voc=pwgan_aishell3 \ --voc=pwgan_aishell3 \
--voc_config=pwg_aishell3_ckpt_0.5/default.yaml \ --voc_config=pwg_aishell3_ckpt_0.5/default.yaml \
--voc_ckpt=pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \ --voc_ckpt=pwg_aishell3_ckpt_0.5/snapshot_iter_1000000.pdz \
...@@ -291,8 +293,8 @@ python3 ${BIN_DIR}/../synthesize_e2e.py \ ...@@ -291,8 +293,8 @@ python3 ${BIN_DIR}/../synthesize_e2e.py \
--lang=mix \ --lang=mix \
--text=${BIN_DIR}/../sentences_mix.txt \ --text=${BIN_DIR}/../sentences_mix.txt \
--output_dir=exp/default/test_e2e \ --output_dir=exp/default/test_e2e \
--phones_dict=fastspeech2_mix_ckpt_0.2.0/phone_id_map.txt \ --phones_dict=fastspeech2_mix_ckpt_1.2.0/phone_id_map.txt \
--speaker_dict=fastspeech2_mix_ckpt_0.2.0/speaker_id_map.txt \ --speaker_dict=fastspeech2_mix_ckpt_1.2.0/speaker_id_map.txt \
--spk_id=174 \ --spk_id=174 \
--inference_dir=exp/default/inference --inference_dir=exp/default/inference
``` ```
...@@ -47,8 +47,8 @@ fi ...@@ -47,8 +47,8 @@ fi
if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then
# install paddle2onnx # install paddle2onnx
version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}') version=$(echo `pip list |grep "paddle2onnx"` |awk -F" " '{print $2}')
if [[ -z "$version" || ${version} != '0.9.8' ]]; then if [[ -z "$version" || ${version} != '1.0.0' ]]; then
pip install paddle2onnx==0.9.8 pip install paddle2onnx==1.0.0
fi fi
./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_mix ./local/paddle2onnx.sh ${train_output_path} inference inference_onnx fastspeech2_mix
# considering the balance between speed and quality, we recommend that you use hifigan as vocoder # considering the balance between speed and quality, we recommend that you use hifigan as vocoder
......
...@@ -13,9 +13,3 @@ ...@@ -13,9 +13,3 @@
# limitations under the License. # limitations under the License.
import _locale import _locale
_locale._getdefaultlocale = (lambda *args: ['en_US', 'utf8']) _locale._getdefaultlocale = (lambda *args: ['en_US', 'utf8'])
from . import audio
# _init_audio_backend must called after audio import
audio.backends.utils._init_audio_backend()
__all__ = ["audio"]
...@@ -16,27 +16,12 @@ from . import _extension ...@@ -16,27 +16,12 @@ from . import _extension
from . import compliance from . import compliance
from . import datasets from . import datasets
from . import features from . import features
from . import text
from . import transform
from . import streamdata
from . import functional from . import functional
from . import io from . import io
from . import metric from . import metric
from . import utils from . import sox_effects
from paddlespeech.audio.backends import get_audio_backend from . import streamdata
from paddlespeech.audio.backends import list_audio_backends from . import text
from paddlespeech.audio.backends import set_audio_backend from . import transform
from paddlespeech.audio.backends import soundfile_backend from .backends import load
from .backends import save
__all__ = [
"io",
"compliance",
"datasets",
"functional",
"features",
"utils",
"list_audio_backends",
"get_audio_backend",
"set_audio_backend",
"soudfile_backend",
]
...@@ -4,67 +4,66 @@ ...@@ -4,67 +4,66 @@
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
# #
# flake8: noqa # flake8: noqa
from .cache import cached_tarfile_samples
from .cache import ( from .cache import cached_tarfile_to_samples
cached_tarfile_samples, from .cache import lru_cleanup
cached_tarfile_to_samples, from .cache import pipe_cleaner
lru_cleanup, from .compat import FluidWrapper
pipe_cleaner, from .compat import WebDataset
) from .compat import WebLoader
from .compat import WebDataset, WebLoader, FluidWrapper from .extradatasets import MockDataset
from .extradatasets import MockDataset, with_epoch, with_length from .extradatasets import with_epoch
from .filters import ( from .extradatasets import with_length
associate, from .filters import associate
batched, from .filters import audio_cmvn
decode, from .filters import audio_compute_fbank
detshuffle, from .filters import audio_data_filter
extract_keys, from .filters import audio_padding
getfirst, from .filters import audio_resample
info, from .filters import audio_spec_aug
map, from .filters import audio_tokenize
map_dict, from .filters import batched
map_tuple, from .filters import decode
pipelinefilter, from .filters import detshuffle
rename, from .filters import extract_keys
rename_keys, from .filters import getfirst
audio_resample, from .filters import info
select, from .filters import map
shuffle, from .filters import map_dict
slice, from .filters import map_tuple
to_tuple, from .filters import pipelinefilter
transform_with, from .filters import placeholder
unbatched, from .filters import rename
xdecode, from .filters import rename_keys
audio_data_filter, from .filters import select
audio_tokenize, from .filters import shuffle
audio_resample, from .filters import slice
audio_compute_fbank, from .filters import sort
audio_spec_aug, from .filters import to_tuple
sort, from .filters import transform_with
audio_padding, from .filters import unbatched
audio_cmvn, from .filters import xdecode
placeholder, from .handlers import ignore_and_continue
) from .handlers import ignore_and_stop
from .handlers import ( from .handlers import reraise_exception
ignore_and_continue, from .handlers import warn_and_continue
ignore_and_stop, from .handlers import warn_and_stop
reraise_exception, from .mix import RandomMix
warn_and_continue, from .mix import RoundRobin
warn_and_stop,
)
from .pipeline import DataPipeline from .pipeline import DataPipeline
from .shardlists import ( from .shardlists import MultiShardSample
MultiShardSample, from .shardlists import non_empty
ResampledShards, from .shardlists import resampled
SimpleShardList, from .shardlists import ResampledShards
non_empty, from .shardlists import shardspec
resampled, from .shardlists import SimpleShardList
shardspec, from .shardlists import single_node_only
single_node_only, from .shardlists import split_by_node
split_by_node, from .shardlists import split_by_worker
split_by_worker, from .tariterators import tarfile_samples
) from .tariterators import tarfile_to_samples
from .tariterators import tarfile_samples, tarfile_to_samples from .utils import PipelineStage
from .utils import PipelineStage, repeatedly from .utils import repeatedly
from .writer import ShardWriter, TarWriter, numpy_dumps from .writer import numpy_dumps
from .mix import RandomMix, RoundRobin from .writer import ShardWriter
from .writer import TarWriter
...@@ -5,18 +5,19 @@ ...@@ -5,18 +5,19 @@
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
# #
"""Automatically decode webdataset samples.""" """Automatically decode webdataset samples."""
import io
import io, json, os, pickle, re, tempfile import json
import os
import pickle
import re
import tempfile
from functools import partial from functools import partial
import numpy as np import numpy as np
"""Extensions passed on to the image decoder.""" """Extensions passed on to the image decoder."""
image_extensions = "jpg jpeg png ppm pgm pbm pnm".split() image_extensions = "jpg jpeg png ppm pgm pbm pnm".split()
################################################################ ################################################################
# handle basic datatypes # handle basic datatypes
################################################################ ################################################################
...@@ -128,7 +129,7 @@ def call_extension_handler(key, data, f, extensions): ...@@ -128,7 +129,7 @@ def call_extension_handler(key, data, f, extensions):
target = target.split(".") target = target.split(".")
if len(target) > len(extension): if len(target) > len(extension):
continue continue
if extension[-len(target) :] == target: if extension[-len(target):] == target:
return f(data) return f(data)
return None return None
...@@ -268,7 +269,6 @@ def imagehandler(imagespec, extensions=image_extensions): ...@@ -268,7 +269,6 @@ def imagehandler(imagespec, extensions=image_extensions):
################################################################ ################################################################
# torch video # torch video
################################################################ ################################################################
''' '''
def torch_video(key, data): def torch_video(key, data):
"""Decode video using the torchvideo library. """Decode video using the torchvideo library.
...@@ -289,7 +289,6 @@ def torch_video(key, data): ...@@ -289,7 +289,6 @@ def torch_video(key, data):
return torchvision.io.read_video(fname, pts_unit="sec") return torchvision.io.read_video(fname, pts_unit="sec")
''' '''
################################################################ ################################################################
# paddlespeech.audio # paddlespeech.audio
################################################################ ################################################################
...@@ -359,7 +358,6 @@ def gzfilter(key, data): ...@@ -359,7 +358,6 @@ def gzfilter(key, data):
# decode entire training samples # decode entire training samples
################################################################ ################################################################
default_pre_handlers = [gzfilter] default_pre_handlers = [gzfilter]
default_post_handlers = [basichandlers] default_post_handlers = [basichandlers]
...@@ -387,7 +385,8 @@ class Decoder: ...@@ -387,7 +385,8 @@ class Decoder:
pre = default_pre_handlers pre = default_pre_handlers
if post is None: if post is None:
post = default_post_handlers post = default_post_handlers
assert all(callable(h) for h in handlers), f"one of {handlers} not callable" assert all(callable(h)
for h in handlers), f"one of {handlers} not callable"
assert all(callable(h) for h in pre), f"one of {pre} not callable" assert all(callable(h) for h in pre), f"one of {pre} not callable"
assert all(callable(h) for h in post), f"one of {post} not callable" assert all(callable(h) for h in post), f"one of {post} not callable"
self.handlers = pre + handlers + post self.handlers = pre + handlers + post
......
...@@ -2,7 +2,10 @@ ...@@ -2,7 +2,10 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
import itertools, os, random, re, sys import os
import random
import re
import sys
from urllib.parse import urlparse from urllib.parse import urlparse
from . import filters from . import filters
...@@ -40,7 +43,7 @@ def lru_cleanup(cache_dir, cache_size, keyfn=os.path.getctime, verbose=False): ...@@ -40,7 +43,7 @@ def lru_cleanup(cache_dir, cache_size, keyfn=os.path.getctime, verbose=False):
os.remove(fname) os.remove(fname)
def download(url, dest, chunk_size=1024 ** 2, verbose=False): def download(url, dest, chunk_size=1024**2, verbose=False):
"""Download a file from `url` to `dest`.""" """Download a file from `url` to `dest`."""
temp = dest + f".temp{os.getpid()}" temp = dest + f".temp{os.getpid()}"
with gopen.gopen(url) as stream: with gopen.gopen(url) as stream:
...@@ -65,12 +68,11 @@ def pipe_cleaner(spec): ...@@ -65,12 +68,11 @@ def pipe_cleaner(spec):
def get_file_cached( def get_file_cached(
spec, spec,
cache_size=-1, cache_size=-1,
cache_dir=None, cache_dir=None,
url_to_name=pipe_cleaner, url_to_name=pipe_cleaner,
verbose=False, verbose=False, ):
):
if cache_size == -1: if cache_size == -1:
cache_size = default_cache_size cache_size = default_cache_size
if cache_dir is None: if cache_dir is None:
...@@ -107,15 +109,14 @@ verbose_cache = int(os.environ.get("WDS_VERBOSE_CACHE", "0")) ...@@ -107,15 +109,14 @@ verbose_cache = int(os.environ.get("WDS_VERBOSE_CACHE", "0"))
def cached_url_opener( def cached_url_opener(
data, data,
handler=reraise_exception, handler=reraise_exception,
cache_size=-1, cache_size=-1,
cache_dir=None, cache_dir=None,
url_to_name=pipe_cleaner, url_to_name=pipe_cleaner,
validator=check_tar_format, validator=check_tar_format,
verbose=False, verbose=False,
always=False, always=False, ):
):
"""Given a stream of url names (packaged in `dict(url=url)`), yield opened streams.""" """Given a stream of url names (packaged in `dict(url=url)`), yield opened streams."""
verbose = verbose or verbose_cache verbose = verbose or verbose_cache
for sample in data: for sample in data:
...@@ -132,8 +133,7 @@ def cached_url_opener( ...@@ -132,8 +133,7 @@ def cached_url_opener(
cache_size=cache_size, cache_size=cache_size,
cache_dir=cache_dir, cache_dir=cache_dir,
url_to_name=url_to_name, url_to_name=url_to_name,
verbose=verbose, verbose=verbose, )
)
if verbose: if verbose:
print("# opening %s" % dest, file=sys.stderr) print("# opening %s" % dest, file=sys.stderr)
assert os.path.exists(dest) assert os.path.exists(dest)
...@@ -143,9 +143,8 @@ def cached_url_opener( ...@@ -143,9 +143,8 @@ def cached_url_opener(
data = f.read(200) data = f.read(200)
os.remove(dest) os.remove(dest)
raise ValueError( raise ValueError(
"%s (%s) is not a tar archive, but a %s, contains %s" "%s (%s) is not a tar archive, but a %s, contains %s" %
% (dest, url, ftype, repr(data)) (dest, url, ftype, repr(data)))
)
try: try:
stream = open(dest, "rb") stream = open(dest, "rb")
sample.update(stream=stream) sample.update(stream=stream)
...@@ -158,7 +157,7 @@ def cached_url_opener( ...@@ -158,7 +157,7 @@ def cached_url_opener(
continue continue
raise exn raise exn
except Exception as exn: except Exception as exn:
exn.args = exn.args + (url,) exn.args = exn.args + (url, )
if handler(exn): if handler(exn):
continue continue
else: else:
...@@ -166,14 +165,13 @@ def cached_url_opener( ...@@ -166,14 +165,13 @@ def cached_url_opener(
def cached_tarfile_samples( def cached_tarfile_samples(
src, src,
handler=reraise_exception, handler=reraise_exception,
cache_size=-1, cache_size=-1,
cache_dir=None, cache_dir=None,
verbose=False, verbose=False,
url_to_name=pipe_cleaner, url_to_name=pipe_cleaner,
always=False, always=False, ):
):
streams = cached_url_opener( streams = cached_url_opener(
src, src,
handler=handler, handler=handler,
...@@ -181,8 +179,7 @@ def cached_tarfile_samples( ...@@ -181,8 +179,7 @@ def cached_tarfile_samples(
cache_dir=cache_dir, cache_dir=cache_dir,
verbose=verbose, verbose=verbose,
url_to_name=url_to_name, url_to_name=url_to_name,
always=always, always=always, )
)
samples = tar_file_and_group_expander(streams, handler=handler) samples = tar_file_and_group_expander(streams, handler=handler)
return samples return samples
......
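The caching logic above boils down to two ideas: download into a pid-suffixed temp file and rename it into place, then evict the oldest files once the cache directory exceeds its budget. A minimal standalone sketch using only the standard library (the function names here are illustrative, not this module's API):

import os
import shutil
import urllib.request

def sketch_download(url, dest):
    # Write to a temp file first so readers never see a partial download.
    temp = dest + f".temp{os.getpid()}"
    with urllib.request.urlopen(url) as src, open(temp, "wb") as dst:
        shutil.copyfileobj(src, dst)
    os.rename(temp, dest)  # atomic on POSIX

def sketch_lru_cleanup(cache_dir, cache_size):
    # Oldest-first by ctime, mirroring keyfn=os.path.getctime above.
    files = sorted((os.path.join(cache_dir, f) for f in os.listdir(cache_dir)),
                   key=os.path.getctime)
    total = sum(os.path.getsize(f) for f in files)
    while files and total > cache_size:
        victim = files.pop(0)
        total -= os.path.getsize(victim)
        os.remove(victim)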
...@@ -2,17 +2,17 @@ ...@@ -2,17 +2,17 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
-from dataclasses import dataclass
-from itertools import islice
-from typing import List
-import braceexpand, yaml
+import yaml
 from . import autodecode
-from . import cache, filters, shardlists, tariterators
+from . import cache
+from . import filters
+from . import shardlists
+from . import tariterators
 from .filters import reraise_exception
+from .paddle_utils import DataLoader
+from .paddle_utils import IterableDataset
 from .pipeline import DataPipeline
-from .paddle_utils import DataLoader, IterableDataset
class FluidInterface: class FluidInterface:
...@@ -26,7 +26,8 @@ class FluidInterface: ...@@ -26,7 +26,8 @@ class FluidInterface:
return self.compose(filters.unbatched()) return self.compose(filters.unbatched())
def listed(self, batchsize, partial=True): def listed(self, batchsize, partial=True):
return self.compose(filters.batched(), batchsize=batchsize, collation_fn=None) return self.compose(
filters.batched(), batchsize=batchsize, collation_fn=None)
def unlisted(self): def unlisted(self):
return self.compose(filters.unlisted()) return self.compose(filters.unlisted())
...@@ -43,9 +44,19 @@ class FluidInterface: ...@@ -43,9 +44,19 @@ class FluidInterface:
def map(self, f, handler=reraise_exception): def map(self, f, handler=reraise_exception):
return self.compose(filters.map(f, handler=handler)) return self.compose(filters.map(f, handler=handler))
-    def decode(self, *args, pre=None, post=None, only=None, partial=False, handler=reraise_exception):
-        handlers = [autodecode.ImageHandler(x) if isinstance(x, str) else x for x in args]
-        decoder = autodecode.Decoder(handlers, pre=pre, post=post, only=only, partial=partial)
+    def decode(self,
+               *args,
+               pre=None,
+               post=None,
+               only=None,
+               partial=False,
+               handler=reraise_exception):
+        handlers = [
+            autodecode.ImageHandler(x) if isinstance(x, str) else x
+            for x in args
+        ]
+        decoder = autodecode.Decoder(
+            handlers, pre=pre, post=post, only=only, partial=partial)
return self.map(decoder, handler=handler) return self.map(decoder, handler=handler)
def map_dict(self, handler=reraise_exception, **kw): def map_dict(self, handler=reraise_exception, **kw):
...@@ -80,12 +91,12 @@ class FluidInterface: ...@@ -80,12 +91,12 @@ class FluidInterface:
def audio_data_filter(self, *args, **kw): def audio_data_filter(self, *args, **kw):
return self.compose(filters.audio_data_filter(*args, **kw)) return self.compose(filters.audio_data_filter(*args, **kw))
def audio_tokenize(self, *args, **kw): def audio_tokenize(self, *args, **kw):
return self.compose(filters.audio_tokenize(*args, **kw)) return self.compose(filters.audio_tokenize(*args, **kw))
def resample(self, *args, **kw): def resample(self, *args, **kw):
return self.compose(filters.resample(*args, **kw)) return self.compose(filters.resample(*args, **kw))
def audio_compute_fbank(self, *args, **kw): def audio_compute_fbank(self, *args, **kw):
return self.compose(filters.audio_compute_fbank(*args, **kw)) return self.compose(filters.audio_compute_fbank(*args, **kw))
...@@ -102,27 +113,28 @@ class FluidInterface: ...@@ -102,27 +113,28 @@ class FluidInterface:
def audio_cmvn(self, cmvn_file): def audio_cmvn(self, cmvn_file):
return self.compose(filters.audio_cmvn(cmvn_file)) return self.compose(filters.audio_cmvn(cmvn_file))
class WebDataset(DataPipeline, FluidInterface): class WebDataset(DataPipeline, FluidInterface):
"""Small fluid-interface wrapper for DataPipeline.""" """Small fluid-interface wrapper for DataPipeline."""
def __init__( def __init__(
self, self,
urls, urls,
handler=reraise_exception, handler=reraise_exception,
resampled=False, resampled=False,
repeat=False, repeat=False,
shardshuffle=None, shardshuffle=None,
cache_size=0, cache_size=0,
cache_dir=None, cache_dir=None,
detshuffle=False, detshuffle=False,
nodesplitter=shardlists.single_node_only, nodesplitter=shardlists.single_node_only,
verbose=False, verbose=False, ):
):
super().__init__() super().__init__()
if isinstance(urls, IterableDataset): if isinstance(urls, IterableDataset):
assert not resampled assert not resampled
self.append(urls) self.append(urls)
elif isinstance(urls, str) and (urls.endswith(".yaml") or urls.endswith(".yml")): elif isinstance(urls, str) and (urls.endswith(".yaml") or
urls.endswith(".yml")):
with (open(urls)) as stream: with (open(urls)) as stream:
spec = yaml.safe_load(stream) spec = yaml.safe_load(stream)
assert "datasets" in spec assert "datasets" in spec
...@@ -152,9 +164,7 @@ class WebDataset(DataPipeline, FluidInterface): ...@@ -152,9 +164,7 @@ class WebDataset(DataPipeline, FluidInterface):
handler=handler, handler=handler,
verbose=verbose, verbose=verbose,
cache_size=cache_size, cache_size=cache_size,
cache_dir=cache_dir, cache_dir=cache_dir, ))
)
)
class FluidWrapper(DataPipeline, FluidInterface): class FluidWrapper(DataPipeline, FluidInterface):
......
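A hedged usage sketch of the fluid interface defined above. The import path, shard pattern, symbol table and parameter values are placeholders rather than anything fixed by this diff, and only methods shown in FluidInterface are chained:

import paddlespeech.audio.streamdata as wds  # assumed import path for this package

dataset = (
    wds.WebDataset("shards/train-{000000..000009}.tar", shardshuffle=True)
    .audio_tokenize(symbol_table={"<blank>": 0, "你": 1, "好": 2})  # toy symbol table
    .audio_data_filter(max_length=10240, min_length=10)
    .audio_compute_fbank(num_mel_bins=80, frame_length=25, frame_shift=10))

for sample in dataset:
    # each sample is a dict(fname=..., feat=..., label=...) after the fbank stage
    break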
...@@ -5,20 +5,10 @@ ...@@ -5,20 +5,10 @@
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
# #
"""Train PyTorch models directly from POSIX tar archive. """Train PyTorch models directly from POSIX tar archive.
Code works locally or over HTTP connections. Code works locally or over HTTP connections.
""" """
-import itertools as itt
-import os
-import random
-import sys
-import braceexpand
 from . import utils
 from .paddle_utils import IterableDataset
 from .utils import PipelineStage
...@@ -63,8 +53,7 @@ class repeatedly(IterableDataset, PipelineStage): ...@@ -63,8 +53,7 @@ class repeatedly(IterableDataset, PipelineStage):
return utils.repeatedly( return utils.repeatedly(
source, source,
nepochs=self.nepochs, nepochs=self.nepochs,
nbatches=self.nbatches, nbatches=self.nbatches, )
)
class with_epoch(IterableDataset): class with_epoch(IterableDataset):
......
...@@ -3,7 +3,6 @@ ...@@ -3,7 +3,6 @@
# This file is part of the WebDataset library. # This file is part of the WebDataset library.
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# #
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
# Modified from wenet(https://github.com/wenet-e2e/wenet) # Modified from wenet(https://github.com/wenet-e2e/wenet)
"""A collection of iterators for data transformations. """A collection of iterators for data transformations.
...@@ -12,28 +11,29 @@ These functions are plain iterator functions. You can find curried versions ...@@ -12,28 +11,29 @@ These functions are plain iterator functions. You can find curried versions
in webdataset.filters, and you can find IterableDataset wrappers in in webdataset.filters, and you can find IterableDataset wrappers in
webdataset.processing. webdataset.processing.
""" """
 import io
+import itertools
+import os
+import random
 import re
-from fnmatch import fnmatch
-import itertools, os, random, sys, time
-from functools import reduce, wraps
-import numpy as np
+import sys
+import time
+from fnmatch import fnmatch
+from functools import reduce
+import paddle
 from . import autodecode
 from . import utils
-from .paddle_utils import PaddleTensor
-from .utils import PipelineStage
 from .. import backends
 from ..compliance import kaldi
-import paddle
 from ..transform.cmvn import GlobalCMVN
-from ..utils.tensor_utils import pad_sequence
-from ..transform.spec_augment import time_warp
-from ..transform.spec_augment import time_mask
 from ..transform.spec_augment import freq_mask
+from ..transform.spec_augment import time_mask
+from ..transform.spec_augment import time_warp
+from ..utils.tensor_utils import pad_sequence
+from .utils import PipelineStage
class FilterFunction(object): class FilterFunction(object):
"""Helper class for currying pipeline stages. """Helper class for currying pipeline stages.
...@@ -159,10 +159,12 @@ def transform_with(sample, transformers): ...@@ -159,10 +159,12 @@ def transform_with(sample, transformers):
result[i] = f(sample[i]) result[i] = f(sample[i])
return result return result
### ###
# Iterators # Iterators
### ###
def _info(data, fmt=None, n=3, every=-1, width=50, stream=sys.stderr, name=""): def _info(data, fmt=None, n=3, every=-1, width=50, stream=sys.stderr, name=""):
"""Print information about the samples that are passing through. """Print information about the samples that are passing through.
...@@ -278,10 +280,16 @@ def _log_keys(data, logfile=None): ...@@ -278,10 +280,16 @@ def _log_keys(data, logfile=None):
log_keys = pipelinefilter(_log_keys) log_keys = pipelinefilter(_log_keys)
+def _minedecode(x):
+    if isinstance(x, str):
+        return autodecode.imagehandler(x)
+    else:
+        return x
 def _decode(data, *args, handler=reraise_exception, **kw):
     """Decode data based on the decoding functions given as arguments."""
-    decoder = lambda x: autodecode.imagehandler(x) if isinstance(x, str) else x
+    decoder = _minedecode
     handlers = [decoder(x) for x in args]
f = autodecode.Decoder(handlers, **kw) f = autodecode.Decoder(handlers, **kw)
...@@ -325,15 +333,24 @@ def _rename(data, handler=reraise_exception, keep=True, **kw): ...@@ -325,15 +333,24 @@ def _rename(data, handler=reraise_exception, keep=True, **kw):
for sample in data: for sample in data:
try: try:
if not keep: if not keep:
yield {k: getfirst(sample, v, missing_is_error=True) for k, v in kw.items()} yield {
k: getfirst(sample, v, missing_is_error=True)
for k, v in kw.items()
}
else: else:
def listify(v): def listify(v):
return v.split(";") if isinstance(v, str) else v return v.split(";") if isinstance(v, str) else v
to_be_replaced = {x for v in kw.values() for x in listify(v)} to_be_replaced = {x for v in kw.values() for x in listify(v)}
result = {k: v for k, v in sample.items() if k not in to_be_replaced} result = {
result.update({k: getfirst(sample, v, missing_is_error=True) for k, v in kw.items()}) k: v
for k, v in sample.items() if k not in to_be_replaced
}
result.update({
k: getfirst(sample, v, missing_is_error=True)
for k, v in kw.items()
})
yield result yield result
except Exception as exn: except Exception as exn:
if handler(exn): if handler(exn):
...@@ -381,7 +398,11 @@ def _map_dict(data, handler=reraise_exception, **kw): ...@@ -381,7 +398,11 @@ def _map_dict(data, handler=reraise_exception, **kw):
map_dict = pipelinefilter(_map_dict) map_dict = pipelinefilter(_map_dict)
def _to_tuple(data, *args, handler=reraise_exception, missing_is_error=True, none_is_error=None): def _to_tuple(data,
*args,
handler=reraise_exception,
missing_is_error=True,
none_is_error=None):
"""Convert dict samples to tuples.""" """Convert dict samples to tuples."""
if none_is_error is None: if none_is_error is None:
none_is_error = missing_is_error none_is_error = missing_is_error
...@@ -390,7 +411,10 @@ def _to_tuple(data, *args, handler=reraise_exception, missing_is_error=True, non ...@@ -390,7 +411,10 @@ def _to_tuple(data, *args, handler=reraise_exception, missing_is_error=True, non
for sample in data: for sample in data:
try: try:
result = tuple([getfirst(sample, f, missing_is_error=missing_is_error) for f in args]) result = tuple([
getfirst(sample, f, missing_is_error=missing_is_error)
for f in args
])
if none_is_error and any(x is None for x in result): if none_is_error and any(x is None for x in result):
raise ValueError(f"to_tuple {args} got {sample.keys()}") raise ValueError(f"to_tuple {args} got {sample.keys()}")
yield result yield result
...@@ -463,19 +487,28 @@ rsample = pipelinefilter(_rsample) ...@@ -463,19 +487,28 @@ rsample = pipelinefilter(_rsample)
slice = pipelinefilter(itertools.islice) slice = pipelinefilter(itertools.islice)
def _extract_keys(source, *patterns, duplicate_is_error=True, ignore_missing=False): def _extract_keys(source,
*patterns,
duplicate_is_error=True,
ignore_missing=False):
for sample in source: for sample in source:
result = [] result = []
for pattern in patterns: for pattern in patterns:
pattern = pattern.split(";") if isinstance(pattern, str) else pattern pattern = pattern.split(";") if isinstance(pattern,
matches = [x for x in sample.keys() if any(fnmatch("." + x, p) for p in pattern)] str) else pattern
matches = [
x for x in sample.keys()
if any(fnmatch("." + x, p) for p in pattern)
]
if len(matches) == 0: if len(matches) == 0:
if ignore_missing: if ignore_missing:
continue continue
else: else:
raise ValueError(f"Cannot find {pattern} in sample keys {sample.keys()}.") raise ValueError(
f"Cannot find {pattern} in sample keys {sample.keys()}.")
if len(matches) > 1 and duplicate_is_error: if len(matches) > 1 and duplicate_is_error:
raise ValueError(f"Multiple sample keys {sample.keys()} match {pattern}.") raise ValueError(
f"Multiple sample keys {sample.keys()} match {pattern}.")
value = sample[matches[0]] value = sample[matches[0]]
result.append(value) result.append(value)
yield tuple(result) yield tuple(result)
...@@ -484,7 +517,12 @@ def _extract_keys(source, *patterns, duplicate_is_error=True, ignore_missing=Fal ...@@ -484,7 +517,12 @@ def _extract_keys(source, *patterns, duplicate_is_error=True, ignore_missing=Fal
extract_keys = pipelinefilter(_extract_keys) extract_keys = pipelinefilter(_extract_keys)
def _rename_keys(source, *args, keep_unselected=False, must_match=True, duplicate_is_error=True, **kw): def _rename_keys(source,
*args,
keep_unselected=False,
must_match=True,
duplicate_is_error=True,
**kw):
renamings = [(pattern, output) for output, pattern in args] renamings = [(pattern, output) for output, pattern in args]
renamings += [(pattern, output) for output, pattern in kw.items()] renamings += [(pattern, output) for output, pattern in kw.items()]
for sample in source: for sample in source:
...@@ -504,11 +542,15 @@ def _rename_keys(source, *args, keep_unselected=False, must_match=True, duplicat ...@@ -504,11 +542,15 @@ def _rename_keys(source, *args, keep_unselected=False, must_match=True, duplicat
continue continue
if new_name in new_sample: if new_name in new_sample:
if duplicate_is_error: if duplicate_is_error:
raise ValueError(f"Duplicate value in sample {sample.keys()} after rename.") raise ValueError(
f"Duplicate value in sample {sample.keys()} after rename."
)
continue continue
new_sample[new_name] = value new_sample[new_name] = value
if must_match and not all(matched.values()): if must_match and not all(matched.values()):
raise ValueError(f"Not all patterns ({matched}) matched sample keys ({sample.keys()}).") raise ValueError(
f"Not all patterns ({matched}) matched sample keys ({sample.keys()})."
)
yield new_sample yield new_sample
...@@ -541,18 +583,18 @@ def find_decoder(decoders, path): ...@@ -541,18 +583,18 @@ def find_decoder(decoders, path):
if fname.startswith("__"): if fname.startswith("__"):
return lambda x: x return lambda x: x
for pattern, fun in decoders[::-1]: for pattern, fun in decoders[::-1]:
if fnmatch(fname.lower(), pattern) or fnmatch("." + fname.lower(), pattern): if fnmatch(fname.lower(), pattern) or fnmatch("." + fname.lower(),
pattern):
return fun return fun
return None return None
def _xdecode( def _xdecode(
source, source,
*args, *args,
must_decode=True, must_decode=True,
defaults=default_decoders, defaults=default_decoders,
**kw, **kw, ):
):
decoders = list(defaults) + list(args) decoders = list(defaults) + list(args)
decoders += [("*." + k, v) for k, v in kw.items()] decoders += [("*." + k, v) for k, v in kw.items()]
for sample in source: for sample in source:
...@@ -575,18 +617,18 @@ def _xdecode( ...@@ -575,18 +617,18 @@ def _xdecode(
new_sample[path] = value new_sample[path] = value
yield new_sample yield new_sample
xdecode = pipelinefilter(_xdecode)
xdecode = pipelinefilter(_xdecode)
def _audio_data_filter(source, def _audio_data_filter(source,
frame_shift=10, frame_shift=10,
max_length=10240, max_length=10240,
min_length=10, min_length=10,
token_max_length=200, token_max_length=200,
token_min_length=1, token_min_length=1,
min_output_input_ratio=0.0005, min_output_input_ratio=0.0005,
max_output_input_ratio=1): max_output_input_ratio=1):
""" Filter sample according to feature and label length """ Filter sample according to feature and label length
Inplace operation. Inplace operation.
...@@ -613,7 +655,8 @@ def _audio_data_filter(source, ...@@ -613,7 +655,8 @@ def _audio_data_filter(source,
assert 'wav' in sample assert 'wav' in sample
assert 'label' in sample assert 'label' in sample
# sample['wav'] is paddle.Tensor, we have 100 frames every second (default) # sample['wav'] is paddle.Tensor, we have 100 frames every second (default)
num_frames = sample['wav'].shape[1] / sample['sample_rate'] * (1000 / frame_shift) num_frames = sample['wav'].shape[1] / sample['sample_rate'] * (
1000 / frame_shift)
if num_frames < min_length: if num_frames < min_length:
continue continue
if num_frames > max_length: if num_frames > max_length:
...@@ -629,13 +672,15 @@ def _audio_data_filter(source, ...@@ -629,13 +672,15 @@ def _audio_data_filter(source,
continue continue
yield sample yield sample
audio_data_filter = pipelinefilter(_audio_data_filter) audio_data_filter = pipelinefilter(_audio_data_filter)
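A worked example of the length computation above: with the default 10 ms frame shift there are 100 frames per second, so a 3.2 s utterance sampled at 16 kHz passes the default min_length/max_length bounds:

num_samples, sample_rate, frame_shift = 51200, 16000, 10   # 3.2 s of 16 kHz audio
num_frames = num_samples / sample_rate * (1000 / frame_shift)
assert num_frames == 320.0          # 10 <= 320 <= 10240, so the sample is kept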
def _audio_tokenize(source, def _audio_tokenize(source,
symbol_table, symbol_table,
bpe_model=None, bpe_model=None,
non_lang_syms=None, non_lang_syms=None,
split_with_space=False): split_with_space=False):
""" Decode text to chars or BPE """ Decode text to chars or BPE
Inplace operation Inplace operation
...@@ -693,8 +738,10 @@ def _audio_tokenize(source, ...@@ -693,8 +738,10 @@ def _audio_tokenize(source,
sample['label'] = label sample['label'] = label
yield sample yield sample
audio_tokenize = pipelinefilter(_audio_tokenize) audio_tokenize = pipelinefilter(_audio_tokenize)
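A minimal sketch of the char-level path handled above (the BPE and non-linguistic-symbol branches are omitted, and the toy symbol table is hypothetical):

symbol_table = {"<blank>": 0, "<unk>": 1, "好": 2, "你": 3}

def char_tokenize(txt, table):
    # Split the transcript into characters and map each to its id,
    # falling back to <unk> for out-of-vocabulary characters.
    tokens = [ch for ch in txt if ch != " "]
    label = [table.get(ch, table["<unk>"]) for ch in tokens]
    return tokens, label

print(char_tokenize("你好", symbol_table))   # (['你', '好'], [3, 2])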
def _audio_resample(source, resample_rate=16000): def _audio_resample(source, resample_rate=16000):
""" Resample data. """ Resample data.
Inplace operation. Inplace operation.
...@@ -713,18 +760,22 @@ def _audio_resample(source, resample_rate=16000): ...@@ -713,18 +760,22 @@ def _audio_resample(source, resample_rate=16000):
waveform = sample['wav'] waveform = sample['wav']
if sample_rate != resample_rate: if sample_rate != resample_rate:
sample['sample_rate'] = resample_rate sample['sample_rate'] = resample_rate
sample['wav'] = paddle.to_tensor(backends.soundfile_backend.resample( sample['wav'] = paddle.to_tensor(
waveform.numpy(), src_sr = sample_rate, target_sr = resample_rate backends.soundfile_backend.resample(
)) waveform.numpy(),
src_sr=sample_rate,
target_sr=resample_rate))
yield sample yield sample
audio_resample = pipelinefilter(_audio_resample) audio_resample = pipelinefilter(_audio_resample)
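A rough shape check for the resampling step above, calling the same soundfile backend (the absolute import path is an assumption based on the relative `from .. import backends` used in this module):

import numpy as np
from paddlespeech.audio.backends import soundfile_backend  # assumed absolute path

wav = np.zeros((1, 4000), dtype=np.float32)                 # 0.5 s at 8 kHz
out = soundfile_backend.resample(wav, src_sr=8000, target_sr=16000)
print(out.shape)   # roughly twice as many samples after upsampling to 16 kHz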
def _audio_compute_fbank(source, def _audio_compute_fbank(source,
num_mel_bins=80, num_mel_bins=80,
frame_length=25, frame_length=25,
frame_shift=10, frame_shift=10,
dither=0.0): dither=0.0):
""" Extract fbank """ Extract fbank
Args: Args:
...@@ -746,30 +797,33 @@ def _audio_compute_fbank(source, ...@@ -746,30 +797,33 @@ def _audio_compute_fbank(source,
waveform = sample['wav'] waveform = sample['wav']
waveform = waveform * (1 << 15) waveform = waveform * (1 << 15)
# Only keep fname, feat, label # Only keep fname, feat, label
-        mat = kaldi.fbank(waveform,
-                          n_mels=num_mel_bins,
-                          frame_length=frame_length,
-                          frame_shift=frame_shift,
-                          dither=dither,
-                          energy_floor=0.0,
-                          sr=sample_rate)
+        mat = kaldi.fbank(
+            waveform,
+            n_mels=num_mel_bins,
+            frame_length=frame_length,
+            frame_shift=frame_shift,
+            dither=dither,
+            energy_floor=0.0,
+            sr=sample_rate)
yield dict(fname=sample['fname'], label=sample['label'], feat=mat) yield dict(fname=sample['fname'], label=sample['label'], feat=mat)
audio_compute_fbank = pipelinefilter(_audio_compute_fbank) audio_compute_fbank = pipelinefilter(_audio_compute_fbank)
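A standalone sketch of the fbank call above: kaldi.fbank takes a (channel, time) tensor and returns a (frames, n_mels) feature matrix, so one second of 16 kHz audio with 25 ms windows and a 10 ms shift gives roughly 98 frames (the absolute import path is an assumption based on `from ..compliance import kaldi` above):

import paddle
from paddlespeech.audio.compliance import kaldi   # assumed absolute path

wav = paddle.randn([1, 16000])        # 1 s of fake 16 kHz audio
wav = wav * (1 << 15)                 # same int16-range scaling used above
feat = kaldi.fbank(
    wav,
    n_mels=80,
    frame_length=25,
    frame_shift=10,
    dither=0.0,
    energy_floor=0.0,
    sr=16000)
print(feat.shape)                     # roughly [98, 80]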
-def _audio_spec_aug(source,
-                    max_w=5,
-                    w_inplace=True,
-                    w_mode="PIL",
-                    max_f=30,
-                    num_f_mask=2,
-                    f_inplace=True,
-                    f_replace_with_zero=False,
-                    max_t=40,
-                    num_t_mask=2,
-                    t_inplace=True,
-                    t_replace_with_zero=False,):
+def _audio_spec_aug(
+        source,
+        max_w=5,
+        w_inplace=True,
+        w_mode="PIL",
+        max_f=30,
+        num_f_mask=2,
+        f_inplace=True,
+        f_replace_with_zero=False,
+        max_t=40,
+        num_t_mask=2,
+        t_inplace=True,
+        t_replace_with_zero=False, ):
""" Do spec augmentation """ Do spec augmentation
Inplace operation Inplace operation
...@@ -793,12 +847,23 @@ def _audio_spec_aug(source, ...@@ -793,12 +847,23 @@ def _audio_spec_aug(source,
for sample in source: for sample in source:
x = sample['feat'] x = sample['feat']
x = x.numpy() x = x.numpy()
-        x = time_warp(x, max_time_warp=max_w, inplace = w_inplace, mode= w_mode)
-        x = freq_mask(x, F = max_f, n_mask = num_f_mask, inplace = f_inplace, replace_with_zero = f_replace_with_zero)
-        x = time_mask(x, T = max_t, n_mask = num_t_mask, inplace = t_inplace, replace_with_zero = t_replace_with_zero)
+        x = time_warp(x, max_time_warp=max_w, inplace=w_inplace, mode=w_mode)
+        x = freq_mask(
+            x,
+            F=max_f,
+            n_mask=num_f_mask,
+            inplace=f_inplace,
+            replace_with_zero=f_replace_with_zero)
+        x = time_mask(
+            x,
+            T=max_t,
+            n_mask=num_t_mask,
+            inplace=t_inplace,
+            replace_with_zero=t_replace_with_zero)
sample['feat'] = paddle.to_tensor(x, dtype=paddle.float32) sample['feat'] = paddle.to_tensor(x, dtype=paddle.float32)
yield sample yield sample
audio_spec_aug = pipelinefilter(_audio_spec_aug) audio_spec_aug = pipelinefilter(_audio_spec_aug)
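A toy stand-in for the time-masking component of SpecAugment used above, kept deliberately simpler than the library's time_mask (no inplace or replace_with_zero options):

import numpy as np

def toy_time_mask(x, max_t=40, n_mask=2, rng=None):
    # Zero out up to `n_mask` random spans of at most `max_t` frames.
    if rng is None:
        rng = np.random.default_rng(0)
    x = x.copy()
    for _ in range(n_mask):
        t = int(rng.integers(0, max_t))
        t0 = int(rng.integers(0, max(1, x.shape[0] - t)))
        x[t0:t0 + t, :] = 0
    return x

feat = np.ones((100, 80), dtype=np.float32)
print((toy_time_mask(feat) == 0).sum())   # number of masked feature values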
...@@ -829,8 +894,10 @@ def _sort(source, sort_size=500): ...@@ -829,8 +894,10 @@ def _sort(source, sort_size=500):
for x in buf: for x in buf:
yield x yield x
sort = pipelinefilter(_sort) sort = pipelinefilter(_sort)
def _batched(source, batch_size=16): def _batched(source, batch_size=16):
""" Static batch the data by `batch_size` """ Static batch the data by `batch_size`
...@@ -850,8 +917,10 @@ def _batched(source, batch_size=16): ...@@ -850,8 +917,10 @@ def _batched(source, batch_size=16):
if len(buf) > 0: if len(buf) > 0:
yield buf yield buf
batched = pipelinefilter(_batched) batched = pipelinefilter(_batched)
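For contrast with the fixed-size batching above, here is one plausible reading of the padding-aware policy described by dynamic_batched (whose body is collapsed in this diff): close a batch once padding everything to the longest utterance would exceed max_frames_in_batch. A sketch of the idea, not the module's exact rule:

lengths = [800, 750, 900, 300, 1200]      # feature frames per utterance
max_frames_in_batch = 2000

batches, batch, longest = [], [], 0
for n in lengths:
    longest = max(longest, n)
    if batch and longest * (len(batch) + 1) > max_frames_in_batch:
        batches.append(batch)             # flush before the batch would overflow
        batch, longest = [], n
    batch.append(n)
batches.append(batch)
print(batches)   # [[800, 750], [900, 300], [1200]] under this toy policy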
def dynamic_batched(source, max_frames_in_batch=12000): def dynamic_batched(source, max_frames_in_batch=12000):
""" Dynamic batch the data until the total frames in batch """ Dynamic batch the data until the total frames in batch
reach `max_frames_in_batch` reach `max_frames_in_batch`
...@@ -892,8 +961,8 @@ def _audio_padding(source): ...@@ -892,8 +961,8 @@ def _audio_padding(source):
""" """
for sample in source: for sample in source:
assert isinstance(sample, list) assert isinstance(sample, list)
-        feats_length = paddle.to_tensor([x['feat'].shape[0] for x in sample],
-                                        dtype="int64")
+        feats_length = paddle.to_tensor(
+            [x['feat'].shape[0] for x in sample], dtype="int64")
         order = paddle.argsort(feats_length, descending=True)
         feats_lengths = paddle.to_tensor(
             [sample[i]['feat'].shape[0] for i in order], dtype="int64")
@@ -902,20 +971,20 @@ def _audio_padding(source):
         sorted_labels = [
             paddle.to_tensor(sample[i]['label'], dtype="int32") for i in order
         ]
-        label_lengths = paddle.to_tensor([x.shape[0] for x in sorted_labels],
-                                         dtype="int64")
-        padded_feats = pad_sequence(sorted_feats,
-                                    batch_first=True,
-                                    padding_value=0)
-        padding_labels = pad_sequence(sorted_labels,
-                                      batch_first=True,
-                                      padding_value=-1)
+        label_lengths = paddle.to_tensor(
+            [x.shape[0] for x in sorted_labels], dtype="int64")
+        padded_feats = pad_sequence(
+            sorted_feats, batch_first=True, padding_value=0)
+        padding_labels = pad_sequence(
+            sorted_labels, batch_first=True, padding_value=-1)
         yield (sorted_keys, padded_feats, feats_lengths, padding_labels,
                label_lengths)
audio_padding = pipelinefilter(_audio_padding) audio_padding = pipelinefilter(_audio_padding)
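A small shape sketch for the padding step above: three label sequences of lengths 5, 3 and 2 padded batch-first with -1 (the absolute import path for pad_sequence is an assumption based on the relative import in this module):

import paddle
from paddlespeech.audio.utils.tensor_utils import pad_sequence  # assumed path

labels = [
    paddle.to_tensor([1, 2, 3, 4, 5]),
    paddle.to_tensor([6, 7, 8]),
    paddle.to_tensor([9, 10]),
]
padded = pad_sequence(labels, batch_first=True, padding_value=-1)
print(padded.shape)        # [3, 5]
print(padded.numpy()[2])   # [ 9 10 -1 -1 -1]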
def _audio_cmvn(source, cmvn_file): def _audio_cmvn(source, cmvn_file):
global_cmvn = GlobalCMVN(cmvn_file) global_cmvn = GlobalCMVN(cmvn_file)
for batch in source: for batch in source:
...@@ -923,13 +992,16 @@ def _audio_cmvn(source, cmvn_file): ...@@ -923,13 +992,16 @@ def _audio_cmvn(source, cmvn_file):
padded_feats = padded_feats.numpy() padded_feats = padded_feats.numpy()
padded_feats = global_cmvn(padded_feats) padded_feats = global_cmvn(padded_feats)
padded_feats = paddle.to_tensor(padded_feats, dtype=paddle.float32) padded_feats = paddle.to_tensor(padded_feats, dtype=paddle.float32)
yield (sorted_keys, padded_feats, feats_lengths, padding_labels, yield (sorted_keys, padded_feats, feats_lengths, padding_labels,
label_lengths) label_lengths)
audio_cmvn = pipelinefilter(_audio_cmvn) audio_cmvn = pipelinefilter(_audio_cmvn)
def _placeholder(source): def _placeholder(source):
for data in source: for data in source:
yield data yield data
placeholder = pipelinefilter(_placeholder) placeholder = pipelinefilter(_placeholder)
...@@ -3,12 +3,12 @@ ...@@ -3,12 +3,12 @@
# This file is part of the WebDataset library. # This file is part of the WebDataset library.
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# #
"""Open URLs by calling subcommands.""" """Open URLs by calling subcommands."""
-import os, sys, re
-from subprocess import PIPE, Popen
+import os
+import re
+import sys
+from subprocess import PIPE
+from subprocess import Popen
from urllib.parse import urlparse from urllib.parse import urlparse
# global used for printing additional node information during verbose output # global used for printing additional node information during verbose output
...@@ -31,14 +31,13 @@ class Pipe: ...@@ -31,14 +31,13 @@ class Pipe:
""" """
def __init__( def __init__(
self, self,
*args, *args,
mode=None, mode=None,
timeout=7200.0, timeout=7200.0,
ignore_errors=False, ignore_errors=False,
ignore_status=[], ignore_status=[],
**kw, **kw, ):
):
"""Create an IO Pipe.""" """Create an IO Pipe."""
self.ignore_errors = ignore_errors self.ignore_errors = ignore_errors
self.ignore_status = [0] + ignore_status self.ignore_status = [0] + ignore_status
...@@ -75,8 +74,7 @@ class Pipe: ...@@ -75,8 +74,7 @@ class Pipe:
if verbose: if verbose:
print( print(
f"pipe exit [{self.status} {os.getpid()}:{self.proc.pid}] {self.args} {info}", f"pipe exit [{self.status} {os.getpid()}:{self.proc.pid}] {self.args} {info}",
file=sys.stderr, file=sys.stderr, )
)
if self.status not in self.ignore_status and not self.ignore_errors: if self.status not in self.ignore_status and not self.ignore_errors:
raise Exception(f"{self.args}: exit {self.status} (read) {info}") raise Exception(f"{self.args}: exit {self.status} (read) {info}")
...@@ -114,9 +112,11 @@ class Pipe: ...@@ -114,9 +112,11 @@ class Pipe:
self.close() self.close()
-def set_options(
-    obj, timeout=None, ignore_errors=None, ignore_status=None, handler=None
-):
+def set_options(obj,
+                timeout=None,
+                ignore_errors=None,
+                ignore_status=None,
+                handler=None):
"""Set options for Pipes. """Set options for Pipes.
This function can be called on any stream. It will set pipe options only This function can be called on any stream. It will set pipe options only
...@@ -168,16 +168,14 @@ def gopen_pipe(url, mode="rb", bufsize=8192): ...@@ -168,16 +168,14 @@ def gopen_pipe(url, mode="rb", bufsize=8192):
mode=mode, mode=mode,
shell=True, shell=True,
bufsize=bufsize, bufsize=bufsize,
ignore_status=[141], ignore_status=[141], ) # skipcq: BAN-B604
) # skipcq: BAN-B604
elif mode[0] == "w": elif mode[0] == "w":
return Pipe( return Pipe(
cmd, cmd,
mode=mode, mode=mode,
shell=True, shell=True,
bufsize=bufsize, bufsize=bufsize,
ignore_status=[141], ignore_status=[141], ) # skipcq: BAN-B604
) # skipcq: BAN-B604
else: else:
raise ValueError(f"{mode}: unknown mode") raise ValueError(f"{mode}: unknown mode")
...@@ -196,8 +194,7 @@ def gopen_curl(url, mode="rb", bufsize=8192): ...@@ -196,8 +194,7 @@ def gopen_curl(url, mode="rb", bufsize=8192):
mode=mode, mode=mode,
shell=True, shell=True,
bufsize=bufsize, bufsize=bufsize,
ignore_status=[141, 23], ignore_status=[141, 23], ) # skipcq: BAN-B604
) # skipcq: BAN-B604
elif mode[0] == "w": elif mode[0] == "w":
cmd = f"curl -s -L -T - '{url}'" cmd = f"curl -s -L -T - '{url}'"
return Pipe( return Pipe(
...@@ -205,8 +202,7 @@ def gopen_curl(url, mode="rb", bufsize=8192): ...@@ -205,8 +202,7 @@ def gopen_curl(url, mode="rb", bufsize=8192):
mode=mode, mode=mode,
shell=True, shell=True,
bufsize=bufsize, bufsize=bufsize,
ignore_status=[141, 26], ignore_status=[141, 26], ) # skipcq: BAN-B604
) # skipcq: BAN-B604
else: else:
raise ValueError(f"{mode}: unknown mode") raise ValueError(f"{mode}: unknown mode")
...@@ -226,15 +222,13 @@ def gopen_htgs(url, mode="rb", bufsize=8192): ...@@ -226,15 +222,13 @@ def gopen_htgs(url, mode="rb", bufsize=8192):
mode=mode, mode=mode,
shell=True, shell=True,
bufsize=bufsize, bufsize=bufsize,
ignore_status=[141, 23], ignore_status=[141, 23], ) # skipcq: BAN-B604
) # skipcq: BAN-B604
elif mode[0] == "w": elif mode[0] == "w":
raise ValueError(f"{mode}: cannot write") raise ValueError(f"{mode}: cannot write")
else: else:
raise ValueError(f"{mode}: unknown mode") raise ValueError(f"{mode}: unknown mode")
def gopen_gsutil(url, mode="rb", bufsize=8192): def gopen_gsutil(url, mode="rb", bufsize=8192):
"""Open a URL with `curl`. """Open a URL with `curl`.
...@@ -249,8 +243,7 @@ def gopen_gsutil(url, mode="rb", bufsize=8192): ...@@ -249,8 +243,7 @@ def gopen_gsutil(url, mode="rb", bufsize=8192):
mode=mode, mode=mode,
shell=True, shell=True,
bufsize=bufsize, bufsize=bufsize,
ignore_status=[141, 23], ignore_status=[141, 23], ) # skipcq: BAN-B604
) # skipcq: BAN-B604
elif mode[0] == "w": elif mode[0] == "w":
cmd = f"gsutil cp - '{url}'" cmd = f"gsutil cp - '{url}'"
return Pipe( return Pipe(
...@@ -258,13 +251,11 @@ def gopen_gsutil(url, mode="rb", bufsize=8192): ...@@ -258,13 +251,11 @@ def gopen_gsutil(url, mode="rb", bufsize=8192):
mode=mode, mode=mode,
shell=True, shell=True,
bufsize=bufsize, bufsize=bufsize,
ignore_status=[141, 26], ignore_status=[141, 26], ) # skipcq: BAN-B604
) # skipcq: BAN-B604
else: else:
raise ValueError(f"{mode}: unknown mode") raise ValueError(f"{mode}: unknown mode")
def gopen_error(url, *args, **kw): def gopen_error(url, *args, **kw):
"""Raise a value error. """Raise a value error.
...@@ -285,8 +276,7 @@ gopen_schemes = dict( ...@@ -285,8 +276,7 @@ gopen_schemes = dict(
ftps=gopen_curl, ftps=gopen_curl,
scp=gopen_curl, scp=gopen_curl,
gs=gopen_gsutil, gs=gopen_gsutil,
htgs=gopen_htgs, htgs=gopen_htgs, )
)
def gopen(url, mode="rb", bufsize=8192, **kw): def gopen(url, mode="rb", bufsize=8192, **kw):
......
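A hedged usage sketch of gopen(): per the scheme table above, plain paths open as local files, pipe: URLs run a shell command through Pipe, and http/https/ftp-style URLs go through curl. The import path below is an assumption:

from paddlespeech.audio.streamdata.gopen import gopen   # assumed module path

with gopen("data/train-000000.tar", "rb") as f:          # local file
    header = f.read(512)

with gopen("pipe:echo hello", "rb") as f:                # subprocess-backed stream
    print(f.read())                                      # b'hello\n'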
...@@ -3,7 +3,6 @@ ...@@ -3,7 +3,6 @@
# This file is part of the WebDataset library. # This file is part of the WebDataset library.
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# #
"""Pluggable exception handlers. """Pluggable exception handlers.
These are functions that take an exception as an argument and then return... These are functions that take an exception as an argument and then return...
...@@ -14,8 +13,8 @@ These are functions that take an exception as an argument and then return... ...@@ -14,8 +13,8 @@ These are functions that take an exception as an argument and then return...
They are used as handler= arguments in much of the library. They are used as handler= arguments in much of the library.
""" """
-import time, warnings
+import time
+import warnings
def reraise_exception(exn): def reraise_exception(exn):
......
...@@ -5,17 +5,12 @@ ...@@ -5,17 +5,12 @@
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
# #
"""Classes for mixing samples from multiple sources.""" """Classes for mixing samples from multiple sources."""
-import itertools, os, random, time, sys
-from functools import reduce, wraps
+import random
 import numpy as np
-from . import autodecode, utils
-from .paddle_utils import PaddleTensor, IterableDataset
-from .utils import PipelineStage
+from .paddle_utils import IterableDataset
def round_robin_shortest(*sources): def round_robin_shortest(*sources):
......
...@@ -5,12 +5,11 @@ ...@@ -5,12 +5,11 @@
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
# #
"""Mock implementations of paddle interfaces when paddle is not available.""" """Mock implementations of paddle interfaces when paddle is not available."""
 try:
-    from paddle.io import DataLoader, IterableDataset
+    from paddle.io import DataLoader
+    from paddle.io import IterableDataset
 except ModuleNotFoundError:
     class IterableDataset:
@@ -22,12 +21,3 @@ except ModuleNotFoundError:
         """Empty implementation of DataLoader when paddle is not available."""
         pass
-try:
-    from paddle import Tensor as PaddleTensor
-except ModuleNotFoundError:
-    class TorchTensor:
-        """Empty implementation of PaddleTensor when paddle is not available."""
-        pass
...@@ -3,15 +3,12 @@ ...@@ -3,15 +3,12 @@
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
#%% #%%
-import copy, os, random, sys, time
-from dataclasses import dataclass
+import copy
+import sys
 from itertools import islice
-from typing import List
-import braceexpand, yaml
-from .handlers import reraise_exception
-from .paddle_utils import DataLoader, IterableDataset
+from .paddle_utils import DataLoader
+from .paddle_utils import IterableDataset
from .utils import PipelineStage from .utils import PipelineStage
...@@ -22,8 +19,7 @@ def add_length_method(obj): ...@@ -22,8 +19,7 @@ def add_length_method(obj):
Combined = type( Combined = type(
obj.__class__.__name__ + "_Length", obj.__class__.__name__ + "_Length",
(obj.__class__, IterableDataset), (obj.__class__, IterableDataset),
{"__len__": length}, {"__len__": length}, )
)
obj.__class__ = Combined obj.__class__ = Combined
return obj return obj
......
...@@ -4,28 +4,30 @@ ...@@ -4,28 +4,30 @@
# This file is part of the WebDataset library. # This file is part of the WebDataset library.
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# #
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
"""Train PyTorch models directly from POSIX tar archive. """Train PyTorch models directly from POSIX tar archive.
Code works locally or over HTTP connections. Code works locally or over HTTP connections.
""" """
-import os, random, sys, time
-from dataclasses import dataclass, field
+import os
+import random
+import sys
+import time
+from dataclasses import dataclass
+from dataclasses import field
 from itertools import islice
 from typing import List
-import braceexpand, yaml
+import braceexpand
+import yaml
 from . import utils
-from ..utils.log import Logger
 from .filters import pipelinefilter
 from .paddle_utils import IterableDataset
-logger = Logger(__name__)
+from ..utils.log import Logger
+logger = Logger(__name__)
def expand_urls(urls): def expand_urls(urls):
if isinstance(urls, str): if isinstance(urls, str):
urllist = urls.split("::") urllist = urls.split("::")
...@@ -64,7 +66,8 @@ class SimpleShardList(IterableDataset): ...@@ -64,7 +66,8 @@ class SimpleShardList(IterableDataset):
def split_by_node(src, group=None): def split_by_node(src, group=None):
rank, world_size, worker, num_workers = utils.paddle_worker_info(group=group) rank, world_size, worker, num_workers = utils.paddle_worker_info(
group=group)
logger.info(f"world_size:{world_size}, rank:{rank}") logger.info(f"world_size:{world_size}, rank:{rank}")
if world_size > 1: if world_size > 1:
for s in islice(src, rank, None, world_size): for s in islice(src, rank, None, world_size):
...@@ -75,9 +78,11 @@ def split_by_node(src, group=None): ...@@ -75,9 +78,11 @@ def split_by_node(src, group=None):
def single_node_only(src, group=None): def single_node_only(src, group=None):
rank, world_size, worker, num_workers = utils.paddle_worker_info(group=group) rank, world_size, worker, num_workers = utils.paddle_worker_info(
group=group)
if world_size > 1: if world_size > 1:
raise ValueError("input pipeline needs to be reconfigured for multinode training") raise ValueError(
"input pipeline needs to be reconfigured for multinode training")
for s in src: for s in src:
yield s yield s
...@@ -104,7 +109,8 @@ def resampled_(src, n=sys.maxsize): ...@@ -104,7 +109,8 @@ def resampled_(src, n=sys.maxsize):
rng = random.Random(seed) rng = random.Random(seed)
print("# resampled loading", file=sys.stderr) print("# resampled loading", file=sys.stderr)
items = list(src) items = list(src)
print(f"# resampled got {len(items)} samples, yielding {n}", file=sys.stderr) print(
f"# resampled got {len(items)} samples, yielding {n}", file=sys.stderr)
for i in range(n): for i in range(n):
yield rng.choice(items) yield rng.choice(items)
...@@ -118,7 +124,9 @@ def non_empty(src): ...@@ -118,7 +124,9 @@ def non_empty(src):
yield s yield s
count += 1 count += 1
if count == 0: if count == 0:
raise ValueError("pipeline stage received no data at all and this was declared as an error") raise ValueError(
"pipeline stage received no data at all and this was declared as an error"
)
@dataclass @dataclass
...@@ -138,10 +146,6 @@ def expand(s): ...@@ -138,10 +146,6 @@ def expand(s):
return os.path.expanduser(os.path.expandvars(s)) return os.path.expanduser(os.path.expandvars(s))
-class MultiShardSample(IterableDataset):
-    def __init__(self, fname):
-        """Construct a shardlist from multiple sources using a YAML spec."""
-        self.epoch = -1
class MultiShardSample(IterableDataset): class MultiShardSample(IterableDataset):
def __init__(self, fname): def __init__(self, fname):
"""Construct a shardlist from multiple sources using a YAML spec.""" """Construct a shardlist from multiple sources using a YAML spec."""
...@@ -156,20 +160,23 @@ class MultiShardSample(IterableDataset): ...@@ -156,20 +160,23 @@ class MultiShardSample(IterableDataset):
else: else:
with open(fname) as stream: with open(fname) as stream:
spec = yaml.safe_load(stream) spec = yaml.safe_load(stream)
assert set(spec.keys()).issubset(set("prefix datasets buckets".split())), list(spec.keys()) assert set(spec.keys()).issubset(
set("prefix datasets buckets".split())), list(spec.keys())
prefix = expand(spec.get("prefix", "")) prefix = expand(spec.get("prefix", ""))
self.sources = [] self.sources = []
for ds in spec["datasets"]: for ds in spec["datasets"]:
assert set(ds.keys()).issubset(set("buckets name shards resample choose".split())), list( assert set(ds.keys()).issubset(
ds.keys() set("buckets name shards resample choose".split())), list(
) ds.keys())
buckets = ds.get("buckets", spec.get("buckets", [])) buckets = ds.get("buckets", spec.get("buckets", []))
if isinstance(buckets, str): if isinstance(buckets, str):
buckets = [buckets] buckets = [buckets]
buckets = [expand(s) for s in buckets] buckets = [expand(s) for s in buckets]
if buckets == []: if buckets == []:
buckets = [""] buckets = [""]
assert len(buckets) == 1, f"{buckets}: FIXME support for multiple buckets unimplemented" assert len(
buckets
) == 1, f"{buckets}: FIXME support for multiple buckets unimplemented"
bucket = buckets[0] bucket = buckets[0]
name = ds.get("name", "@" + bucket) name = ds.get("name", "@" + bucket)
urls = ds["shards"] urls = ds["shards"]
...@@ -177,15 +184,19 @@ class MultiShardSample(IterableDataset): ...@@ -177,15 +184,19 @@ class MultiShardSample(IterableDataset):
urls = [urls] urls = [urls]
# urls = [u for url in urls for u in braceexpand.braceexpand(url)] # urls = [u for url in urls for u in braceexpand.braceexpand(url)]
urls = [ urls = [
prefix + os.path.join(bucket, u) for url in urls for u in braceexpand.braceexpand(expand(url)) prefix + os.path.join(bucket, u)
for url in urls for u in braceexpand.braceexpand(expand(url))
] ]
resample = ds.get("resample", -1) resample = ds.get("resample", -1)
nsample = ds.get("choose", -1) nsample = ds.get("choose", -1)
if nsample > len(urls): if nsample > len(urls):
raise ValueError(f"perepoch {nsample} must be no greater than the number of shards") raise ValueError(
f"perepoch {nsample} must be no greater than the number of shards"
)
if (nsample > 0) and (resample > 0): if (nsample > 0) and (resample > 0):
raise ValueError("specify only one of perepoch or choose") raise ValueError("specify only one of perepoch or choose")
entry = MSSource(name=name, urls=urls, perepoch=nsample, resample=resample) entry = MSSource(
name=name, urls=urls, perepoch=nsample, resample=resample)
self.sources.append(entry) self.sources.append(entry)
print(f"# {name} {len(urls)} {nsample}", file=sys.stderr) print(f"# {name} {len(urls)} {nsample}", file=sys.stderr)
...@@ -203,7 +214,7 @@ class MultiShardSample(IterableDataset): ...@@ -203,7 +214,7 @@ class MultiShardSample(IterableDataset):
# sample without replacement # sample without replacement
l = list(source.urls) l = list(source.urls)
self.rng.shuffle(l) self.rng.shuffle(l)
l = l[: source.perepoch] l = l[:source.perepoch]
else: else:
l = list(source.urls) l = list(source.urls)
result += l result += l
...@@ -227,12 +238,11 @@ class ResampledShards(IterableDataset): ...@@ -227,12 +238,11 @@ class ResampledShards(IterableDataset):
"""An iterable dataset yielding a list of urls.""" """An iterable dataset yielding a list of urls."""
def __init__( def __init__(
self, self,
urls, urls,
nshards=sys.maxsize, nshards=sys.maxsize,
worker_seed=None, worker_seed=None,
deterministic=False, deterministic=False, ):
):
"""Sample shards from the shard list with replacement. """Sample shards from the shard list with replacement.
:param urls: a list of URLs as a Python list or brace notation string :param urls: a list of URLs as a Python list or brace notation string
...@@ -252,7 +262,8 @@ class ResampledShards(IterableDataset): ...@@ -252,7 +262,8 @@ class ResampledShards(IterableDataset):
if self.deterministic: if self.deterministic:
seed = utils.make_seed(self.worker_seed(), self.epoch) seed = utils.make_seed(self.worker_seed(), self.epoch)
else: else:
seed = utils.make_seed(self.worker_seed(), self.epoch, os.getpid(), time.time_ns(), os.urandom(4)) seed = utils.make_seed(self.worker_seed(), self.epoch,
os.getpid(), time.time_ns(), os.urandom(4))
if os.environ.get("WDS_SHOW_SEED", "0") == "1": if os.environ.get("WDS_SHOW_SEED", "0") == "1":
print(f"# ResampledShards seed {seed}") print(f"# ResampledShards seed {seed}")
self.rng = random.Random(seed) self.rng = random.Random(seed)
......
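The asserts in MultiShardSample above imply a spec with top-level prefix/datasets/buckets keys and per-dataset buckets/name/shards/resample/choose keys. A hypothetical YAML spec of that shape (all names, paths and counts are placeholders):

import yaml

spec = yaml.safe_load("""
prefix: /data/shards/
datasets:
  - name: aishell
    shards: aishell-{000000..000009}.tar
    choose: 5        # pick 5 shards per epoch without replacement
  - name: giga
    shards: giga-{000000..000099}.tar
    resample: 10     # draw 10 shards per epoch with replacement
""")
print(spec["datasets"][0]["name"])   # aishell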
...@@ -3,13 +3,12 @@ ...@@ -3,13 +3,12 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
# This file is part of the WebDataset library. # This file is part of the WebDataset library.
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
# Modified from wenet(https://github.com/wenet-e2e/wenet) # Modified from wenet(https://github.com/wenet-e2e/wenet)
"""Low level iteration functions for tar archives.""" """Low level iteration functions for tar archives."""
-import random, re, tarfile
+import random
+import re
+import tarfile
import braceexpand import braceexpand
...@@ -27,6 +26,7 @@ import numpy as np ...@@ -27,6 +26,7 @@ import numpy as np
AUDIO_FORMAT_SETS = set(['flac', 'mp3', 'm4a', 'ogg', 'opus', 'wav', 'wma']) AUDIO_FORMAT_SETS = set(['flac', 'mp3', 'm4a', 'ogg', 'opus', 'wav', 'wma'])
def base_plus_ext(path): def base_plus_ext(path):
"""Split off all file extensions. """Split off all file extensions.
...@@ -47,12 +47,8 @@ def valid_sample(sample): ...@@ -47,12 +47,8 @@ def valid_sample(sample):
:param sample: sample to be checked :param sample: sample to be checked
""" """
return ( return (sample is not None and isinstance(sample, dict) and
sample is not None len(list(sample.keys())) > 0 and not sample.get("__bad__", False))
and isinstance(sample, dict)
and len(list(sample.keys())) > 0
and not sample.get("__bad__", False)
)
# FIXME: UNUSED # FIXME: UNUSED
...@@ -79,16 +75,16 @@ def url_opener(data, handler=reraise_exception, **kw): ...@@ -79,16 +75,16 @@ def url_opener(data, handler=reraise_exception, **kw):
sample.update(stream=stream) sample.update(stream=stream)
yield sample yield sample
except Exception as exn: except Exception as exn:
exn.args = exn.args + (url,) exn.args = exn.args + (url, )
if handler(exn): if handler(exn):
continue continue
else: else:
break break
def tar_file_iterator( def tar_file_iterator(fileobj,
fileobj, skip_meta=r"__[^/]*__($|/)", handler=reraise_exception skip_meta=r"__[^/]*__($|/)",
): handler=reraise_exception):
"""Iterate over tar file, yielding filename, content pairs for the given tar stream. """Iterate over tar file, yielding filename, content pairs for the given tar stream.
:param fileobj: byte stream suitable for tarfile :param fileobj: byte stream suitable for tarfile
...@@ -103,11 +99,8 @@ def tar_file_iterator( ...@@ -103,11 +99,8 @@ def tar_file_iterator(
continue continue
if fname is None: if fname is None:
continue continue
if ( if ("/" not in fname and fname.startswith(meta_prefix) and
"/" not in fname fname.endswith(meta_suffix)):
and fname.startswith(meta_prefix)
and fname.endswith(meta_suffix)
):
# skipping metadata for now # skipping metadata for now
continue continue
if skip_meta is not None and re.match(skip_meta, fname): if skip_meta is not None and re.match(skip_meta, fname):
...@@ -118,8 +111,10 @@ def tar_file_iterator( ...@@ -118,8 +111,10 @@ def tar_file_iterator(
assert pos > 0 assert pos > 0
prefix, postfix = name[:pos], name[pos + 1:] prefix, postfix = name[:pos], name[pos + 1:]
if postfix == 'wav': if postfix == 'wav':
waveform, sample_rate = paddlespeech.audio.load(stream.extractfile(tarinfo), normal=False) waveform, sample_rate = paddlespeech.audio.load(
result = dict(fname=prefix, wav=waveform, sample_rate = sample_rate) stream.extractfile(tarinfo), normal=False)
result = dict(
fname=prefix, wav=waveform, sample_rate=sample_rate)
else: else:
txt = stream.extractfile(tarinfo).read().decode('utf8').strip() txt = stream.extractfile(tarinfo).read().decode('utf8').strip()
result = dict(fname=prefix, txt=txt) result = dict(fname=prefix, txt=txt)
...@@ -128,16 +123,17 @@ def tar_file_iterator( ...@@ -128,16 +123,17 @@ def tar_file_iterator(
stream.members = [] stream.members = []
except Exception as exn: except Exception as exn:
if hasattr(exn, "args") and len(exn.args) > 0: if hasattr(exn, "args") and len(exn.args) > 0:
exn.args = (exn.args[0] + " @ " + str(fileobj),) + exn.args[1:] exn.args = (exn.args[0] + " @ " + str(fileobj), ) + exn.args[1:]
if handler(exn): if handler(exn):
continue continue
else: else:
break break
del stream del stream
def tar_file_and_group_iterator(
fileobj, skip_meta=r"__[^/]*__($|/)", handler=reraise_exception def tar_file_and_group_iterator(fileobj,
): skip_meta=r"__[^/]*__($|/)",
handler=reraise_exception):
""" Expand a stream of open tar files into a stream of tar file contents. """ Expand a stream of open tar files into a stream of tar file contents.
And groups the file with same prefix And groups the file with same prefix
...@@ -167,8 +163,11 @@ def tar_file_and_group_iterator( ...@@ -167,8 +163,11 @@ def tar_file_and_group_iterator(
if postfix == 'txt': if postfix == 'txt':
example['txt'] = file_obj.read().decode('utf8').strip() example['txt'] = file_obj.read().decode('utf8').strip()
elif postfix in AUDIO_FORMAT_SETS: elif postfix in AUDIO_FORMAT_SETS:
waveform, sample_rate = paddlespeech.audio.load(file_obj, normal=False) waveform, sample_rate = paddlespeech.audio.load(
waveform = paddle.to_tensor(np.expand_dims(np.array(waveform),0), dtype=paddle.float32) file_obj, normal=False)
waveform = paddle.to_tensor(
np.expand_dims(np.array(waveform), 0),
dtype=paddle.float32)
example['wav'] = waveform example['wav'] = waveform
example['sample_rate'] = sample_rate example['sample_rate'] = sample_rate
...@@ -176,19 +175,21 @@ def tar_file_and_group_iterator( ...@@ -176,19 +175,21 @@ def tar_file_and_group_iterator(
example[postfix] = file_obj.read() example[postfix] = file_obj.read()
except Exception as exn: except Exception as exn:
if hasattr(exn, "args") and len(exn.args) > 0: if hasattr(exn, "args") and len(exn.args) > 0:
exn.args = (exn.args[0] + " @ " + str(fileobj),) + exn.args[1:] exn.args = (exn.args[0] + " @ " + str(fileobj),
) + exn.args[1:]
if handler(exn): if handler(exn):
continue continue
else: else:
break break
valid = False valid = False
# logging.warning('error to parse {}'.format(name)) # logging.warning('error to parse {}'.format(name))
prev_prefix = prefix prev_prefix = prefix
if prev_prefix is not None: if prev_prefix is not None:
example['fname'] = prev_prefix example['fname'] = prev_prefix
yield example yield example
stream.close() stream.close()
def tar_file_expander(data, handler=reraise_exception): def tar_file_expander(data, handler=reraise_exception):
"""Expand a stream of open tar files into a stream of tar file contents. """Expand a stream of open tar files into a stream of tar file contents.
...@@ -200,9 +201,8 @@ def tar_file_expander(data, handler=reraise_exception): ...@@ -200,9 +201,8 @@ def tar_file_expander(data, handler=reraise_exception):
assert isinstance(source, dict) assert isinstance(source, dict)
assert "stream" in source assert "stream" in source
for sample in tar_file_iterator(source["stream"]): for sample in tar_file_iterator(source["stream"]):
assert ( assert (isinstance(sample, dict) and "data" in sample and
isinstance(sample, dict) and "data" in sample and "fname" in sample "fname" in sample)
)
sample["__url__"] = url sample["__url__"] = url
yield sample yield sample
except Exception as exn: except Exception as exn:
...@@ -213,8 +213,6 @@ def tar_file_expander(data, handler=reraise_exception): ...@@ -213,8 +213,6 @@ def tar_file_expander(data, handler=reraise_exception):
break break
def tar_file_and_group_expander(data, handler=reraise_exception): def tar_file_and_group_expander(data, handler=reraise_exception):
"""Expand a stream of open tar files into a stream of tar file contents. """Expand a stream of open tar files into a stream of tar file contents.
...@@ -226,9 +224,8 @@ def tar_file_and_group_expander(data, handler=reraise_exception): ...@@ -226,9 +224,8 @@ def tar_file_and_group_expander(data, handler=reraise_exception):
assert isinstance(source, dict) assert isinstance(source, dict)
assert "stream" in source assert "stream" in source
for sample in tar_file_and_group_iterator(source["stream"]): for sample in tar_file_and_group_iterator(source["stream"]):
assert ( assert (isinstance(sample, dict) and "wav" in sample and
isinstance(sample, dict) and "wav" in sample and "txt" in sample and "fname" in sample "txt" in sample and "fname" in sample)
)
sample["__url__"] = url sample["__url__"] = url
yield sample yield sample
except Exception as exn: except Exception as exn:
...@@ -239,7 +236,11 @@ def tar_file_and_group_expander(data, handler=reraise_exception): ...@@ -239,7 +236,11 @@ def tar_file_and_group_expander(data, handler=reraise_exception):
break break
def group_by_keys(data, keys=base_plus_ext, lcase=True, suffixes=None, handler=None): def group_by_keys(data,
keys=base_plus_ext,
lcase=True,
suffixes=None,
handler=None):
"""Return function over iterator that groups key, value pairs into samples. """Return function over iterator that groups key, value pairs into samples.
:param keys: function that splits the key into key and extension (base_plus_ext) :param keys: function that splits the key into key and extension (base_plus_ext)
...@@ -254,8 +255,8 @@ def group_by_keys(data, keys=base_plus_ext, lcase=True, suffixes=None, handler=N ...@@ -254,8 +255,8 @@ def group_by_keys(data, keys=base_plus_ext, lcase=True, suffixes=None, handler=N
print( print(
prefix, prefix,
suffix, suffix,
current_sample.keys() if isinstance(current_sample, dict) else None, current_sample.keys()
) if isinstance(current_sample, dict) else None, )
if prefix is None: if prefix is None:
continue continue
if lcase: if lcase:
......
...@@ -4,22 +4,23 @@ ...@@ -4,22 +4,23 @@
# This file is part of the WebDataset library. # This file is part of the WebDataset library.
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# #
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
"""Miscellaneous utility functions.""" """Miscellaneous utility functions."""
import importlib import importlib
import itertools as itt import itertools as itt
import os import os
import re import re
import sys import sys
-from typing import Any, Callable, Iterator, Optional, Union
+from typing import Any
+from typing import Callable
+from typing import Iterator
+from typing import Union
from ..utils.log import Logger from ..utils.log import Logger
logger = Logger(__name__) logger = Logger(__name__)
def make_seed(*args): def make_seed(*args):
seed = 0 seed = 0
for arg in args: for arg in args:
...@@ -37,7 +38,7 @@ def identity(x: Any) -> Any: ...@@ -37,7 +38,7 @@ def identity(x: Any) -> Any:
return x return x
def safe_eval(s: str, expr: str = "{}"): def safe_eval(s: str, expr: str="{}"):
"""Evaluate the given expression more safely.""" """Evaluate the given expression more safely."""
if re.sub("[^A-Za-z0-9_]", "", s) != s: if re.sub("[^A-Za-z0-9_]", "", s) != s:
raise ValueError(f"safe_eval: illegal characters in: '{s}'") raise ValueError(f"safe_eval: illegal characters in: '{s}'")
...@@ -54,9 +55,9 @@ def lookup_sym(sym: str, modules: list): ...@@ -54,9 +55,9 @@ def lookup_sym(sym: str, modules: list):
return None return None
def repeatedly0( def repeatedly0(loader: Iterator,
loader: Iterator, nepochs: int = sys.maxsize, nbatches: int = sys.maxsize nepochs: int=sys.maxsize,
): nbatches: int=sys.maxsize):
"""Repeatedly returns batches from a DataLoader.""" """Repeatedly returns batches from a DataLoader."""
for epoch in range(nepochs): for epoch in range(nepochs):
for sample in itt.islice(loader, nbatches): for sample in itt.islice(loader, nbatches):
...@@ -69,12 +70,11 @@ def guess_batchsize(batch: Union[tuple, list]): ...@@ -69,12 +70,11 @@ def guess_batchsize(batch: Union[tuple, list]):
def repeatedly( def repeatedly(
source: Iterator, source: Iterator,
nepochs: int = None, nepochs: int=None,
nbatches: int = None, nbatches: int=None,
nsamples: int = None, nsamples: int=None,
batchsize: Callable[..., int] = guess_batchsize, batchsize: Callable[..., int]=guess_batchsize, ):
):
"""Repeatedly yield samples from an iterator.""" """Repeatedly yield samples from an iterator."""
epoch = 0 epoch = 0
batch = 0 batch = 0
...@@ -93,6 +93,7 @@ def repeatedly( ...@@ -93,6 +93,7 @@ def repeatedly(
if nepochs is not None and epoch >= nepochs: if nepochs is not None and epoch >= nepochs:
return return
def paddle_worker_info(group=None): def paddle_worker_info(group=None):
"""Return node and worker info for PyTorch and some distributed environments.""" """Return node and worker info for PyTorch and some distributed environments."""
rank = 0 rank = 0
...@@ -116,7 +117,7 @@ def paddle_worker_info(group=None): ...@@ -116,7 +117,7 @@ def paddle_worker_info(group=None):
else: else:
try: try:
from paddle.io import get_worker_info from paddle.io import get_worker_info
worker_info = paddle.io.get_worker_info() worker_info = get_worker_info()
if worker_info is not None: if worker_info is not None:
worker = worker_info.id worker = worker_info.id
num_workers = worker_info.num_workers num_workers = worker_info.num_workers
...@@ -126,6 +127,7 @@ def paddle_worker_info(group=None): ...@@ -126,6 +127,7 @@ def paddle_worker_info(group=None):
return rank, world_size, worker, num_workers return rank, world_size, worker, num_workers
def paddle_worker_seed(group=None): def paddle_worker_seed(group=None):
"""Compute a distinct, deterministic RNG seed for each worker and node.""" """Compute a distinct, deterministic RNG seed for each worker and node."""
rank, world_size, worker, num_workers = paddle_worker_info(group=group) rank, world_size, worker, num_workers = paddle_worker_info(group=group)
......
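Note: the two helpers above report (rank, world_size, worker, num_workers) and derive one deterministic seed per worker. A minimal sketch of how they might be used to seed per-worker RNGs; the import path paddlespeech/audio/streamdata/utils.py is an assumption based on the surrounding hunks.

# A minimal sketch, not part of the diff: seed Python/NumPy differently for
# every (rank, worker) pair using the helpers shown above.
import random

import numpy as np

from paddlespeech.audio.streamdata.utils import paddle_worker_info
from paddlespeech.audio.streamdata.utils import paddle_worker_seed


def seed_worker(base_seed: int = 0):
    rank, world_size, worker, num_workers = paddle_worker_info()
    seed = (base_seed + paddle_worker_seed()) % (2**32 - 1)
    random.seed(seed)   # distinct stream per worker and node
    np.random.seed(seed)
    return rank, world_size, worker, num_workers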
...@@ -5,18 +5,24 @@ ...@@ -5,18 +5,24 @@
# See the LICENSE file for licensing terms (BSD-style). # See the LICENSE file for licensing terms (BSD-style).
# Modified from https://github.com/webdataset/webdataset # Modified from https://github.com/webdataset/webdataset
# #
"""Classes and functions for writing tar files and WebDataset files.""" """Classes and functions for writing tar files and WebDataset files."""
import io
import io, json, pickle, re, tarfile, time import json
from typing import Any, Callable, Optional, Union import pickle
import re
import tarfile
import time
from typing import Any
from typing import Callable
from typing import Optional
from typing import Union
import numpy as np import numpy as np
from . import gopen from . import gopen
def imageencoder(image: Any, format: str = "PNG"): # skipcq: PYL-W0622 def imageencoder(image: Any, format: str="PNG"): # skipcq: PYL-W0622
"""Compress an image using PIL and return it as a string. """Compress an image using PIL and return it as a string.
Can handle float or uint8 images. Can handle float or uint8 images.
...@@ -67,6 +73,7 @@ def bytestr(data: Any): ...@@ -67,6 +73,7 @@ def bytestr(data: Any):
return data.encode("ascii") return data.encode("ascii")
return str(data).encode("ascii") return str(data).encode("ascii")
def paddle_dumps(data: Any): def paddle_dumps(data: Any):
"""Dump data into a bytestring using paddle.dumps. """Dump data into a bytestring using paddle.dumps.
...@@ -82,6 +89,7 @@ def paddle_dumps(data: Any): ...@@ -82,6 +89,7 @@ def paddle_dumps(data: Any):
paddle.save(data, stream) paddle.save(data, stream)
return stream.getvalue() return stream.getvalue()
def numpy_dumps(data: np.ndarray): def numpy_dumps(data: np.ndarray):
"""Dump data into a bytestring using numpy npy format. """Dump data into a bytestring using numpy npy format.
...@@ -139,9 +147,8 @@ def add_handlers(d, keys, value): ...@@ -139,9 +147,8 @@ def add_handlers(d, keys, value):
def make_handlers(): def make_handlers():
"""Create a list of handlers for encoding data.""" """Create a list of handlers for encoding data."""
handlers = {} handlers = {}
add_handlers( add_handlers(handlers, "cls cls2 class count index inx id",
handlers, "cls cls2 class count index inx id", lambda x: str(x).encode("ascii") lambda x: str(x).encode("ascii"))
)
add_handlers(handlers, "txt text transcript", lambda x: x.encode("utf-8")) add_handlers(handlers, "txt text transcript", lambda x: x.encode("utf-8"))
add_handlers(handlers, "html htm", lambda x: x.encode("utf-8")) add_handlers(handlers, "html htm", lambda x: x.encode("utf-8"))
add_handlers(handlers, "pyd pickle", pickle.dumps) add_handlers(handlers, "pyd pickle", pickle.dumps)
...@@ -152,7 +159,8 @@ def make_handlers(): ...@@ -152,7 +159,8 @@ def make_handlers():
add_handlers(handlers, "json jsn", lambda x: json.dumps(x).encode("utf-8")) add_handlers(handlers, "json jsn", lambda x: json.dumps(x).encode("utf-8"))
add_handlers(handlers, "mp msgpack msg", mp_dumps) add_handlers(handlers, "mp msgpack msg", mp_dumps)
add_handlers(handlers, "cbor", cbor_dumps) add_handlers(handlers, "cbor", cbor_dumps)
add_handlers(handlers, "jpg jpeg img image", lambda data: imageencoder(data, "jpg")) add_handlers(handlers, "jpg jpeg img image",
lambda data: imageencoder(data, "jpg"))
add_handlers(handlers, "png", lambda data: imageencoder(data, "png")) add_handlers(handlers, "png", lambda data: imageencoder(data, "png"))
add_handlers(handlers, "pbm", lambda data: imageencoder(data, "pbm")) add_handlers(handlers, "pbm", lambda data: imageencoder(data, "pbm"))
add_handlers(handlers, "pgm", lambda data: imageencoder(data, "pgm")) add_handlers(handlers, "pgm", lambda data: imageencoder(data, "pgm"))
...@@ -192,7 +200,8 @@ def encode_based_on_extension(sample: dict, handlers: dict): ...@@ -192,7 +200,8 @@ def encode_based_on_extension(sample: dict, handlers: dict):
:param handlers: handlers for encoding :param handlers: handlers for encoding
""" """
return { return {
k: encode_based_on_extension1(v, k, handlers) for k, v in list(sample.items()) k: encode_based_on_extension1(v, k, handlers)
for k, v in list(sample.items())
} }
...@@ -258,15 +267,14 @@ class TarWriter: ...@@ -258,15 +267,14 @@ class TarWriter:
""" """
def __init__( def __init__(
self, self,
fileobj, fileobj,
user: str = "bigdata", user: str="bigdata",
group: str = "bigdata", group: str="bigdata",
mode: int = 0o0444, mode: int=0o0444,
compress: Optional[bool] = None, compress: Optional[bool]=None,
encoder: Union[None, bool, Callable] = True, encoder: Union[None, bool, Callable]=True,
keep_meta: bool = False, keep_meta: bool=False, ):
):
"""Create a tar writer. """Create a tar writer.
:param fileobj: stream to write data to :param fileobj: stream to write data to
...@@ -330,8 +338,7 @@ class TarWriter: ...@@ -330,8 +338,7 @@ class TarWriter:
continue continue
if not isinstance(v, (bytes, bytearray, memoryview)): if not isinstance(v, (bytes, bytearray, memoryview)):
raise ValueError( raise ValueError(
f"{k} doesn't map to a bytes after encoding ({type(v)})" f"{k} doesn't map to a bytes after encoding ({type(v)})")
)
key = obj["__key__"] key = obj["__key__"]
for k in sorted(obj.keys()): for k in sorted(obj.keys()):
if k == "__key__": if k == "__key__":
...@@ -349,7 +356,8 @@ class TarWriter: ...@@ -349,7 +356,8 @@ class TarWriter:
ti.uname = self.user ti.uname = self.user
ti.gname = self.group ti.gname = self.group
if not isinstance(v, (bytes, bytearray, memoryview)): if not isinstance(v, (bytes, bytearray, memoryview)):
raise ValueError(f"converter didn't yield bytes: {k}, {type(v)}") raise ValueError(
f"converter didn't yield bytes: {k}, {type(v)}")
stream = io.BytesIO(v) stream = io.BytesIO(v)
self.tarstream.addfile(ti, stream) self.tarstream.addfile(ti, stream)
total += ti.size total += ti.size
...@@ -360,14 +368,13 @@ class ShardWriter: ...@@ -360,14 +368,13 @@ class ShardWriter:
"""Like TarWriter but splits into multiple shards.""" """Like TarWriter but splits into multiple shards."""
def __init__( def __init__(
self, self,
pattern: str, pattern: str,
maxcount: int = 100000, maxcount: int=100000,
maxsize: float = 3e9, maxsize: float=3e9,
post: Optional[Callable] = None, post: Optional[Callable]=None,
start_shard: int = 0, start_shard: int=0,
**kw, **kw, ):
):
"""Create a ShardWriter. """Create a ShardWriter.
:param pattern: output file pattern :param pattern: output file pattern
...@@ -400,8 +407,7 @@ class ShardWriter: ...@@ -400,8 +407,7 @@ class ShardWriter:
self.fname, self.fname,
self.count, self.count,
"%.1f GB" % (self.size / 1e9), "%.1f GB" % (self.size / 1e9),
self.total, self.total, )
)
self.shard += 1 self.shard += 1
stream = open(self.fname, "wb") stream = open(self.fname, "wb")
self.tarstream = TarWriter(stream, **self.kw) self.tarstream = TarWriter(stream, **self.kw)
...@@ -413,11 +419,8 @@ class ShardWriter: ...@@ -413,11 +419,8 @@ class ShardWriter:
:param obj: sample to be written :param obj: sample to be written
""" """
if ( if (self.tarstream is None or self.count >= self.maxcount or
self.tarstream is None self.size >= self.maxsize):
or self.count >= self.maxcount
or self.size >= self.maxsize
):
self.next_stream() self.next_stream()
size = self.tarstream.write(obj) size = self.tarstream.write(obj)
self.count += 1 self.count += 1
......
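Usage note: the TarWriter/ShardWriter changes above are formatting only, so writing sharded tar files still looks roughly like the sketch below, assuming the port keeps webdataset's write()/close() interface and lives at paddlespeech/audio/streamdata/writer.py; file names are placeholders.

# A minimal sketch of writing WebDataset-style shards with ShardWriter.
from paddlespeech.audio.streamdata.writer import ShardWriter

samples = [
    {"__key__": f"utt{i:05d}", "txt": f"transcript {i}", "json": {"id": i}}
    for i in range(10)
]

# Start a new shard after 5 samples or ~3 GB, whichever comes first;
# keys other than "__key__" are encoded by extension ("txt" -> utf-8, "json" -> json.dumps).
writer = ShardWriter("shard-%06d.tar", maxcount=5, maxsize=3e9)
for sample in samples:
    writer.write(sample)
writer.close()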
...@@ -17,6 +17,7 @@ from typing import Union ...@@ -17,6 +17,7 @@ from typing import Union
import sentencepiece as spm import sentencepiece as spm
from ..utils.log import Logger
from .utility import BLANK from .utility import BLANK
from .utility import EOS from .utility import EOS
from .utility import load_dict from .utility import load_dict
...@@ -24,7 +25,6 @@ from .utility import MASKCTC ...@@ -24,7 +25,6 @@ from .utility import MASKCTC
from .utility import SOS from .utility import SOS
from .utility import SPACE from .utility import SPACE
from .utility import UNK from .utility import UNK
from ..utils.log import Logger
logger = Logger(__name__) logger = Logger(__name__)
......
...@@ -12,15 +12,16 @@ ...@@ -12,15 +12,16 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# Modified from espnet(https://github.com/espnet/espnet) # Modified from espnet(https://github.com/espnet/espnet)
import io
import os
import h5py
import librosa import librosa
import numpy import numpy
import numpy as np
import scipy import scipy
import soundfile import soundfile
import io
import os
import h5py
import numpy as np
class SoundHDF5File(): class SoundHDF5File():
"""Collecting sound files to a HDF5 file """Collecting sound files to a HDF5 file
...@@ -109,6 +110,7 @@ class SoundHDF5File(): ...@@ -109,6 +110,7 @@ class SoundHDF5File():
def close(self): def close(self):
self.file.close() self.file.close()
class SpeedPerturbation(): class SpeedPerturbation():
"""SpeedPerturbation """SpeedPerturbation
...@@ -558,4 +560,3 @@ class RIRConvolve(): ...@@ -558,4 +560,3 @@ class RIRConvolve():
[scipy.convolve(x, r, mode="same") for r in rir], axis=-1) [scipy.convolve(x, r, mode="same") for r in rir], axis=-1)
else: else:
return scipy.convolve(x, rir, mode="same") return scipy.convolve(x, rir, mode="same")
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
# Modified from espnet(https://github.com/espnet/espnet) # Modified from espnet(https://github.com/espnet/espnet)
"""Spec Augment module for preprocessing i.e., data augmentation""" """Spec Augment module for preprocessing i.e., data augmentation"""
import random import random
import numpy import numpy
from PIL import Image from PIL import Image
......
...@@ -99,8 +99,9 @@ class ASRExecutor(BaseExecutor): ...@@ -99,8 +99,9 @@ class ASRExecutor(BaseExecutor):
'-y', '-y',
action="store_true", action="store_true",
default=False, default=False,
help='No additional argument required. Setting this flag accepts all prompts from the program by default, which includes resampling the audio.' help='No additional argument required. \
) Setting this flag accepts all prompts from the program by default, \
which includes resampling the audio.')
self.parser.add_argument( self.parser.add_argument(
'--rtf', '--rtf',
action="store_true", action="store_true",
...@@ -340,7 +341,7 @@ class ASRExecutor(BaseExecutor): ...@@ -340,7 +341,7 @@ class ASRExecutor(BaseExecutor):
audio = np.round(audio).astype("int16") audio = np.round(audio).astype("int16")
return audio return audio
def _check(self, audio_file: str, sample_rate: int, force_yes: bool): def _check(self, audio_file: str, sample_rate: int, force_yes: bool=False):
self.sample_rate = sample_rate self.sample_rate = sample_rate
if self.sample_rate != 16000 and self.sample_rate != 8000: if self.sample_rate != 16000 and self.sample_rate != 8000:
logger.error( logger.error(
...@@ -434,8 +435,17 @@ class ASRExecutor(BaseExecutor): ...@@ -434,8 +435,17 @@ class ASRExecutor(BaseExecutor):
for id_, input_ in task_source.items(): for id_, input_ in task_source.items():
try: try:
res = self(input_, model, lang, sample_rate, config, ckpt_path, res = self(
decode_method, force_yes, rtf, device) audio_file=input_,
model=model,
lang=lang,
sample_rate=sample_rate,
config=config,
ckpt_path=ckpt_path,
decode_method=decode_method,
force_yes=force_yes,
rtf=rtf,
device=device)
task_results[id_] = res task_results[id_] = res
except Exception as e: except Exception as e:
has_exceptions = True has_exceptions = True
......
...@@ -125,9 +125,11 @@ class StatsCommand: ...@@ -125,9 +125,11 @@ class StatsCommand:
"Here is the list of {} pretrained models released by PaddleSpeech that can be used by command line and python API" "Here is the list of {} pretrained models released by PaddleSpeech that can be used by command line and python API"
.format(self.task.upper())) .format(self.task.upper()))
self.show_support_models(pretrained_models) self.show_support_models(pretrained_models)
return True
except BaseException: except BaseException:
print("Failed to get the list of {} pretrained models.".format( print("Failed to get the list of {} pretrained models.".format(
self.task.upper())) self.task.upper()))
return False
# Dynamic import when running specific command # Dynamic import when running specific command
......
...@@ -191,7 +191,7 @@ class BaseExecutor(ABC): ...@@ -191,7 +191,7 @@ class BaseExecutor(ABC):
line = line.strip() line = line.strip()
if not line: if not line:
continue continue
k, v = line.split() # space or \t k, v = line.split() # space or \t
job_contents[k] = v job_contents[k] = v
return job_contents return job_contents
......
...@@ -70,6 +70,14 @@ class VectorExecutor(BaseExecutor): ...@@ -70,6 +70,14 @@ class VectorExecutor(BaseExecutor):
type=str, type=str,
default=None, default=None,
help="Checkpoint file of model.") help="Checkpoint file of model.")
self.parser.add_argument(
'--yes',
'-y',
action="store_true",
default=False,
help='No additional argument required. \
Setting this flag accepts all prompts from the program by default, \
which includes resampling the audio.')
self.parser.add_argument( self.parser.add_argument(
'--config', '--config',
type=str, type=str,
...@@ -109,6 +117,7 @@ class VectorExecutor(BaseExecutor): ...@@ -109,6 +117,7 @@ class VectorExecutor(BaseExecutor):
sample_rate = parser_args.sample_rate sample_rate = parser_args.sample_rate
config = parser_args.config config = parser_args.config
ckpt_path = parser_args.ckpt_path ckpt_path = parser_args.ckpt_path
force_yes = parser_args.yes
device = parser_args.device device = parser_args.device
# stage 1: configurate the verbose flag # stage 1: configurate the verbose flag
...@@ -128,8 +137,14 @@ class VectorExecutor(BaseExecutor): ...@@ -128,8 +137,14 @@ class VectorExecutor(BaseExecutor):
# extract the speaker audio embedding # extract the speaker audio embedding
if parser_args.task == "spk": if parser_args.task == "spk":
logger.debug("do vector spk task") logger.debug("do vector spk task")
res = self(input_, model, sample_rate, config, ckpt_path, res = self(
device) audio_file=input_,
model=model,
sample_rate=sample_rate,
config=config,
ckpt_path=ckpt_path,
force_yes=force_yes,
device=device)
task_result[id_] = res task_result[id_] = res
elif parser_args.task == "score": elif parser_args.task == "score":
logger.debug("do vector score task") logger.debug("do vector score task")
...@@ -145,10 +160,22 @@ class VectorExecutor(BaseExecutor): ...@@ -145,10 +160,22 @@ class VectorExecutor(BaseExecutor):
logger.debug( logger.debug(
f"score task, enroll audio: {enroll_audio}, test audio: {test_audio}" f"score task, enroll audio: {enroll_audio}, test audio: {test_audio}"
) )
enroll_embedding = self(enroll_audio, model, sample_rate, enroll_embedding = self(
config, ckpt_path, device) audio_file=enroll_audio,
test_embedding = self(test_audio, model, sample_rate, model=model,
config, ckpt_path, device) sample_rate=sample_rate,
config=config,
ckpt_path=ckpt_path,
force_yes=force_yes,
device=device)
test_embedding = self(
audio_file=test_audio,
model=model,
sample_rate=sample_rate,
config=config,
ckpt_path=ckpt_path,
force_yes=force_yes,
device=device)
# get the score # get the score
res = self.get_embeddings_score(enroll_embedding, res = self.get_embeddings_score(enroll_embedding,
...@@ -222,6 +249,7 @@ class VectorExecutor(BaseExecutor): ...@@ -222,6 +249,7 @@ class VectorExecutor(BaseExecutor):
sample_rate: int=16000, sample_rate: int=16000,
config: os.PathLike=None, config: os.PathLike=None,
ckpt_path: os.PathLike=None, ckpt_path: os.PathLike=None,
force_yes: bool=False,
device=paddle.get_device()): device=paddle.get_device()):
"""Extract the audio embedding """Extract the audio embedding
...@@ -240,7 +268,7 @@ class VectorExecutor(BaseExecutor): ...@@ -240,7 +268,7 @@ class VectorExecutor(BaseExecutor):
""" """
# stage 0: check the audio format # stage 0: check the audio format
audio_file = os.path.abspath(audio_file) audio_file = os.path.abspath(audio_file)
if not self._check(audio_file, sample_rate): if not self._check(audio_file, sample_rate, force_yes):
sys.exit(-1) sys.exit(-1)
# stage 1: set the paddle runtime host device # stage 1: set the paddle runtime host device
...@@ -418,7 +446,7 @@ class VectorExecutor(BaseExecutor): ...@@ -418,7 +446,7 @@ class VectorExecutor(BaseExecutor):
logger.debug("audio extract the feat success") logger.debug("audio extract the feat success")
def _check(self, audio_file: str, sample_rate: int): def _check(self, audio_file: str, sample_rate: int, force_yes: bool=False):
"""Check if the model sample match the audio sample rate """Check if the model sample match the audio sample rate
Args: Args:
...@@ -462,13 +490,34 @@ class VectorExecutor(BaseExecutor): ...@@ -462,13 +490,34 @@ class VectorExecutor(BaseExecutor):
logger.debug(f"The sample rate is {audio_sample_rate}") logger.debug(f"The sample rate is {audio_sample_rate}")
if audio_sample_rate != self.sample_rate: if audio_sample_rate != self.sample_rate:
logger.error("The sample rate of the input file is not {}.\n \ logger.debug("The sample rate of the input file is not {}.\n \
The program will resample the wav file to {}.\n \ The program will resample the wav file to {}.\n \
If the result does not meet your expectations,\n \ If the result does not meet your expectations,\n \
Please input the 16k 16 bit 1 channel wav file. \ Please input the 16k 16 bit 1 channel wav file. \
".format(self.sample_rate, self.sample_rate)) ".format(self.sample_rate, self.sample_rate))
sys.exit(-1) if force_yes is False:
while (True):
logger.debug(
"Whether to change the sample rate and the channel. Y: change the sample. N: exit the prgream."
)
content = input("Input(Y/N):")
if content.strip() == "Y" or content.strip(
) == "y" or content.strip() == "yes" or content.strip(
) == "Yes":
logger.debug(
"change the sampele rate, channel to 16k and 1 channel"
)
break
elif content.strip() == "N" or content.strip(
) == "n" or content.strip() == "no" or content.strip(
) == "No":
logger.debug("Exit the program")
return False
else:
logger.warning("Not regular input, please input again")
self.change_format = True
else: else:
logger.debug("The audio file format is right") logger.debug("The audio file format is right")
self.change_format = False
return True return True
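To show the new force_yes path end to end, a hedged sketch of invoking the vector executor so resampling is accepted without the interactive Y/N prompt; argument names follow the keyword call above, the model name is an example, and the audio path is a placeholder.

# A minimal sketch, assuming the CLI executor is importable as in released PaddleSpeech.
from paddlespeech.cli.vector import VectorExecutor

vector_executor = VectorExecutor()

# force_yes=True skips the interactive prompt in _check() above and lets the
# executor resample, e.g., a 44.1 kHz input down to the model's 16 kHz.
embedding = vector_executor(
    audio_file="enroll.wav",
    model="ecapatdnn_voxceleb12",
    sample_rate=16000,
    force_yes=True)
print(embedding.shape)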
...@@ -1359,9 +1359,15 @@ g2pw_onnx_models = { ...@@ -1359,9 +1359,15 @@ g2pw_onnx_models = {
'G2PWModel': { 'G2PWModel': {
'1.0': { '1.0': {
'url': 'url':
'https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel.tar', 'https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.0.zip',
'md5': 'md5':
'63bc0894af15a5a591e58b2130a2bcac', '7e049a55547da840502cf99e8a64f20e',
},
'1.1': {
'url':
'https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip',
'md5':
'f8b60501770bff92ed6ce90860a610e6',
}, },
}, },
} }
...@@ -114,6 +114,7 @@ if not hasattr(paddle.Tensor, 'new_full'): ...@@ -114,6 +114,7 @@ if not hasattr(paddle.Tensor, 'new_full'):
paddle.Tensor.new_full = new_full paddle.Tensor.new_full = new_full
paddle.static.Variable.new_full = new_full paddle.static.Variable.new_full = new_full
def contiguous(xs: paddle.Tensor) -> paddle.Tensor: def contiguous(xs: paddle.Tensor) -> paddle.Tensor:
return xs return xs
......
...@@ -20,8 +20,8 @@ import paddle ...@@ -20,8 +20,8 @@ import paddle
import soundfile import soundfile
from yacs.config import CfgNode from yacs.config import CfgNode
from paddlespeech.audio.transform.transformation import Transformation
from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer
from paddlespeech.s2t.io.collator import SpeechCollator
from paddlespeech.s2t.models.ds2 import DeepSpeech2Model from paddlespeech.s2t.models.ds2 import DeepSpeech2Model
from paddlespeech.s2t.training.cli import default_argument_parser from paddlespeech.s2t.training.cli import default_argument_parser
from paddlespeech.s2t.utils import mp_tools from paddlespeech.s2t.utils import mp_tools
...@@ -38,24 +38,24 @@ class DeepSpeech2Tester_hub(): ...@@ -38,24 +38,24 @@ class DeepSpeech2Tester_hub():
self.args = args self.args = args
self.config = config self.config = config
self.audio_file = args.audio_file self.audio_file = args.audio_file
self.collate_fn_test = SpeechCollator.from_config(config)
self._text_featurizer = TextFeaturizer(
unit_type=config.unit_type, vocab=None)
def compute_result_transcripts(self, audio, audio_len, vocab_list, cfg): self.preprocess_conf = config.preprocess_config
result_transcripts = self.model.decode( self.preprocess_args = {"train": False}
audio, self.preprocessing = Transformation(self.preprocess_conf)
audio_len,
vocab_list, self.text_feature = TextFeaturizer(
decoding_method=cfg.decoding_method, unit_type=config.unit_type,
lang_model_path=cfg.lang_model_path, vocab=config.vocab_filepath,
beam_alpha=cfg.alpha, spm_model_prefix=config.spm_model_prefix)
beam_beta=cfg.beta, paddle.set_device('gpu' if self.args.ngpu > 0 else 'cpu')
beam_size=cfg.beam_size,
cutoff_prob=cfg.cutoff_prob,
cutoff_top_n=cfg.cutoff_top_n,
num_processes=cfg.num_proc_bsearch)
def compute_result_transcripts(self, audio, audio_len, vocab_list, cfg):
decode_batch_size = cfg.decode_batch_size
self.model.decoder.init_decoder(
decode_batch_size, vocab_list, cfg.decoding_method,
cfg.lang_model_path, cfg.alpha, cfg.beta, cfg.beam_size,
cfg.cutoff_prob, cfg.cutoff_top_n, cfg.num_proc_bsearch)
result_transcripts = self.model.decode(audio, audio_len)
return result_transcripts return result_transcripts
@mp_tools.rank_zero_only @mp_tools.rank_zero_only
...@@ -64,16 +64,23 @@ class DeepSpeech2Tester_hub(): ...@@ -64,16 +64,23 @@ class DeepSpeech2Tester_hub():
self.model.eval() self.model.eval()
cfg = self.config cfg = self.config
audio_file = self.audio_file audio_file = self.audio_file
collate_fn_test = self.collate_fn_test
audio, _ = collate_fn_test.process_utterance( audio, sample_rate = soundfile.read(
audio_file=audio_file, transcript=" ") self.audio_file, dtype="int16", always_2d=True)
audio_len = audio.shape[0]
audio = paddle.to_tensor(audio, dtype='float32') audio = audio[:, 0]
audio_len = paddle.to_tensor(audio_len) logger.info(f"audio shape: {audio.shape}")
audio = paddle.unsqueeze(audio, axis=0)
vocab_list = collate_fn_test.vocab_list # fbank
feat = self.preprocessing(audio, **self.preprocess_args)
logger.info(f"feat shape: {feat.shape}")
audio_len = paddle.to_tensor(feat.shape[0])
audio = paddle.to_tensor(feat, dtype='float32').unsqueeze(axis=0)
result_transcripts = self.compute_result_transcripts( result_transcripts = self.compute_result_transcripts(
audio, audio_len, vocab_list, cfg.decode) audio, audio_len, self.text_feature.vocab_list, cfg.decode)
logger.info("result_transcripts: " + result_transcripts[0]) logger.info("result_transcripts: " + result_transcripts[0])
def run_test(self): def run_test(self):
...@@ -109,11 +116,9 @@ class DeepSpeech2Tester_hub(): ...@@ -109,11 +116,9 @@ class DeepSpeech2Tester_hub():
def setup_model(self): def setup_model(self):
config = self.config.clone() config = self.config.clone()
with UpdateConfig(config): with UpdateConfig(config):
config.input_dim = self.collate_fn_test.feature_size config.input_dim = config.feat_dim
config.output_dim = self.collate_fn_test.vocab_size config.output_dim = self.text_feature.vocab_size
model = DeepSpeech2Model.from_config(config) model = DeepSpeech2Model.from_config(config)
self.model = model self.model = model
def setup_checkpointer(self): def setup_checkpointer(self):
......
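For reference, the hub tester above now builds features with the shared Transformation pipeline instead of SpeechCollator. A stripped-down sketch of that preprocessing step; the YAML path, its contents, and the input wav are assumptions.

# A minimal sketch of the new feature path used by the hub tester.
import paddle
import soundfile

from paddlespeech.audio.transform.transformation import Transformation

preprocessing = Transformation("conf/preprocess.yaml")  # placeholder config path

audio, sample_rate = soundfile.read("input.wav", dtype="int16", always_2d=True)
audio = audio[:, 0]                       # keep the first channel

feat = preprocessing(audio, train=False)  # e.g. fbank features, shape (T, D)
audio_len = paddle.to_tensor(feat.shape[0])
audio = paddle.to_tensor(feat, dtype='float32').unsqueeze(axis=0)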
...@@ -25,8 +25,6 @@ import paddle ...@@ -25,8 +25,6 @@ import paddle
from paddle import distributed as dist from paddle import distributed as dist
from paddlespeech.s2t.frontend.featurizer import TextFeaturizer from paddlespeech.s2t.frontend.featurizer import TextFeaturizer
from paddlespeech.s2t.io.dataloader import BatchDataLoader
from paddlespeech.s2t.io.dataloader import StreamDataLoader
from paddlespeech.s2t.io.dataloader import DataLoaderFactory from paddlespeech.s2t.io.dataloader import DataLoaderFactory
from paddlespeech.s2t.models.u2 import U2Model from paddlespeech.s2t.models.u2 import U2Model
from paddlespeech.s2t.training.optimizer import OptimizerFactory from paddlespeech.s2t.training.optimizer import OptimizerFactory
...@@ -109,7 +107,8 @@ class U2Trainer(Trainer): ...@@ -109,7 +107,8 @@ class U2Trainer(Trainer):
def valid(self): def valid(self):
self.model.eval() self.model.eval()
if not self.use_streamdata: if not self.use_streamdata:
logger.info(f"Valid Total Examples: {len(self.valid_loader.dataset)}") logger.info(
f"Valid Total Examples: {len(self.valid_loader.dataset)}")
valid_losses = defaultdict(list) valid_losses = defaultdict(list)
num_seen_utts = 1 num_seen_utts = 1
total_loss = 0.0 total_loss = 0.0
...@@ -136,7 +135,8 @@ class U2Trainer(Trainer): ...@@ -136,7 +135,8 @@ class U2Trainer(Trainer):
msg += "epoch: {}, ".format(self.epoch) msg += "epoch: {}, ".format(self.epoch)
msg += "step: {}, ".format(self.iteration) msg += "step: {}, ".format(self.iteration)
if not self.use_streamdata: if not self.use_streamdata:
msg += "batch: {}/{}, ".format(i + 1, len(self.valid_loader)) msg += "batch: {}/{}, ".format(i + 1,
len(self.valid_loader))
msg += ', '.join('{}: {:>.6f}'.format(k, v) msg += ', '.join('{}: {:>.6f}'.format(k, v)
for k, v in valid_dump.items()) for k, v in valid_dump.items())
logger.info(msg) logger.info(msg)
...@@ -157,7 +157,8 @@ class U2Trainer(Trainer): ...@@ -157,7 +157,8 @@ class U2Trainer(Trainer):
self.before_train() self.before_train()
if not self.use_streamdata: if not self.use_streamdata:
logger.info(f"Train Total Examples: {len(self.train_loader.dataset)}") logger.info(
f"Train Total Examples: {len(self.train_loader.dataset)}")
while self.epoch < self.config.n_epoch: while self.epoch < self.config.n_epoch:
with Timer("Epoch-Train Time Cost: {}"): with Timer("Epoch-Train Time Cost: {}"):
self.model.train() self.model.train()
...@@ -225,14 +226,18 @@ class U2Trainer(Trainer): ...@@ -225,14 +226,18 @@ class U2Trainer(Trainer):
config = self.config.clone() config = self.config.clone()
self.use_streamdata = config.get("use_stream_data", False) self.use_streamdata = config.get("use_stream_data", False)
if self.train: if self.train:
self.train_loader = DataLoaderFactory.get_dataloader('train', config, self.args) self.train_loader = DataLoaderFactory.get_dataloader(
self.valid_loader = DataLoaderFactory.get_dataloader('valid', config, self.args) 'train', config, self.args)
self.valid_loader = DataLoaderFactory.get_dataloader(
'valid', config, self.args)
logger.info("Setup train/valid Dataloader!") logger.info("Setup train/valid Dataloader!")
else: else:
decode_batch_size = config.get('decode', dict()).get( decode_batch_size = config.get('decode', dict()).get(
'decode_batch_size', 1) 'decode_batch_size', 1)
self.test_loader = DataLoaderFactory.get_dataloader('test', config, self.args) self.test_loader = DataLoaderFactory.get_dataloader('test', config,
self.align_loader = DataLoaderFactory.get_dataloader('align', config, self.args) self.args)
self.align_loader = DataLoaderFactory.get_dataloader(
'align', config, self.args)
logger.info("Setup test/align Dataloader!") logger.info("Setup test/align Dataloader!")
def setup_model(self): def setup_model(self):
......
...@@ -105,7 +105,8 @@ class U2Trainer(Trainer): ...@@ -105,7 +105,8 @@ class U2Trainer(Trainer):
def valid(self): def valid(self):
self.model.eval() self.model.eval()
if not self.use_streamdata: if not self.use_streamdata:
logger.info(f"Valid Total Examples: {len(self.valid_loader.dataset)}") logger.info(
f"Valid Total Examples: {len(self.valid_loader.dataset)}")
valid_losses = defaultdict(list) valid_losses = defaultdict(list)
num_seen_utts = 1 num_seen_utts = 1
total_loss = 0.0 total_loss = 0.0
...@@ -133,7 +134,8 @@ class U2Trainer(Trainer): ...@@ -133,7 +134,8 @@ class U2Trainer(Trainer):
msg += "epoch: {}, ".format(self.epoch) msg += "epoch: {}, ".format(self.epoch)
msg += "step: {}, ".format(self.iteration) msg += "step: {}, ".format(self.iteration)
if not self.use_streamdata: if not self.use_streamdata:
msg += "batch: {}/{}, ".format(i + 1, len(self.valid_loader)) msg += "batch: {}/{}, ".format(i + 1,
len(self.valid_loader))
msg += ', '.join('{}: {:>.6f}'.format(k, v) msg += ', '.join('{}: {:>.6f}'.format(k, v)
for k, v in valid_dump.items()) for k, v in valid_dump.items())
logger.info(msg) logger.info(msg)
...@@ -153,7 +155,8 @@ class U2Trainer(Trainer): ...@@ -153,7 +155,8 @@ class U2Trainer(Trainer):
self.before_train() self.before_train()
if not self.use_streamdata: if not self.use_streamdata:
logger.info(f"Train Total Examples: {len(self.train_loader.dataset)}") logger.info(
f"Train Total Examples: {len(self.train_loader.dataset)}")
while self.epoch < self.config.n_epoch: while self.epoch < self.config.n_epoch:
with Timer("Epoch-Train Time Cost: {}"): with Timer("Epoch-Train Time Cost: {}"):
self.model.train() self.model.train()
...@@ -165,8 +168,8 @@ class U2Trainer(Trainer): ...@@ -165,8 +168,8 @@ class U2Trainer(Trainer):
msg += "epoch: {}, ".format(self.epoch) msg += "epoch: {}, ".format(self.epoch)
msg += "step: {}, ".format(self.iteration) msg += "step: {}, ".format(self.iteration)
if not self.use_streamdata: if not self.use_streamdata:
msg += "batch : {}/{}, ".format(batch_index + 1, msg += "batch : {}/{}, ".format(
len(self.train_loader)) batch_index + 1, len(self.train_loader))
msg += "lr: {:>.8f}, ".format(self.lr_scheduler()) msg += "lr: {:>.8f}, ".format(self.lr_scheduler())
msg += "data time: {:>.3f}s, ".format(dataload_time) msg += "data time: {:>.3f}s, ".format(dataload_time)
self.train_batch(batch_index, batch, msg) self.train_batch(batch_index, batch, msg)
...@@ -204,21 +207,24 @@ class U2Trainer(Trainer): ...@@ -204,21 +207,24 @@ class U2Trainer(Trainer):
self.use_streamdata = config.get("use_stream_data", False) self.use_streamdata = config.get("use_stream_data", False)
if self.train: if self.train:
config = self.config.clone() config = self.config.clone()
self.train_loader = DataLoaderFactory.get_dataloader('train', config, self.args) self.train_loader = DataLoaderFactory.get_dataloader(
'train', config, self.args)
config = self.config.clone() config = self.config.clone()
config['preprocess_config'] = None config['preprocess_config'] = None
self.valid_loader = DataLoaderFactory.get_dataloader('valid', config, self.args) self.valid_loader = DataLoaderFactory.get_dataloader(
'valid', config, self.args)
logger.info("Setup train/valid Dataloader!") logger.info("Setup train/valid Dataloader!")
else: else:
config = self.config.clone() config = self.config.clone()
config['preprocess_config'] = None config['preprocess_config'] = None
self.test_loader = DataLoaderFactory.get_dataloader('test', config, self.args) self.test_loader = DataLoaderFactory.get_dataloader('test', config,
self.args)
config = self.config.clone() config = self.config.clone()
config['preprocess_config'] = None config['preprocess_config'] = None
self.align_loader = DataLoaderFactory.get_dataloader('align', config, self.args) self.align_loader = DataLoaderFactory.get_dataloader(
'align', config, self.args)
logger.info("Setup test/align Dataloader!") logger.info("Setup test/align Dataloader!")
def setup_model(self): def setup_model(self):
config = self.config config = self.config
......
...@@ -121,7 +121,8 @@ class U2STTrainer(Trainer): ...@@ -121,7 +121,8 @@ class U2STTrainer(Trainer):
def valid(self): def valid(self):
self.model.eval() self.model.eval()
if not self.use_streamdata: if not self.use_streamdata:
logger.info(f"Valid Total Examples: {len(self.valid_loader.dataset)}") logger.info(
f"Valid Total Examples: {len(self.valid_loader.dataset)}")
valid_losses = defaultdict(list) valid_losses = defaultdict(list)
num_seen_utts = 1 num_seen_utts = 1
total_loss = 0.0 total_loss = 0.0
...@@ -155,7 +156,8 @@ class U2STTrainer(Trainer): ...@@ -155,7 +156,8 @@ class U2STTrainer(Trainer):
msg += "epoch: {}, ".format(self.epoch) msg += "epoch: {}, ".format(self.epoch)
msg += "step: {}, ".format(self.iteration) msg += "step: {}, ".format(self.iteration)
if not self.use_streamdata: if not self.use_streamdata:
msg += "batch: {}/{}, ".format(i + 1, len(self.valid_loader)) msg += "batch: {}/{}, ".format(i + 1,
len(self.valid_loader))
msg += ', '.join('{}: {:>.6f}'.format(k, v) msg += ', '.join('{}: {:>.6f}'.format(k, v)
for k, v in valid_dump.items()) for k, v in valid_dump.items())
logger.info(msg) logger.info(msg)
...@@ -175,7 +177,8 @@ class U2STTrainer(Trainer): ...@@ -175,7 +177,8 @@ class U2STTrainer(Trainer):
self.before_train() self.before_train()
if not self.use_streamdata: if not self.use_streamdata:
logger.info(f"Train Total Examples: {len(self.train_loader.dataset)}") logger.info(
f"Train Total Examples: {len(self.train_loader.dataset)}")
while self.epoch < self.config.n_epoch: while self.epoch < self.config.n_epoch:
with Timer("Epoch-Train Time Cost: {}"): with Timer("Epoch-Train Time Cost: {}"):
self.model.train() self.model.train()
...@@ -248,14 +251,16 @@ class U2STTrainer(Trainer): ...@@ -248,14 +251,16 @@ class U2STTrainer(Trainer):
config['load_transcript'] = load_transcript config['load_transcript'] = load_transcript
self.use_streamdata = config.get("use_stream_data", False) self.use_streamdata = config.get("use_stream_data", False)
if self.train: if self.train:
self.train_loader = DataLoaderFactory.get_dataloader('train', config, self.args) self.train_loader = DataLoaderFactory.get_dataloader(
self.valid_loader = DataLoaderFactory.get_dataloader('valid', config, self.args) 'train', config, self.args)
self.valid_loader = DataLoaderFactory.get_dataloader(
'valid', config, self.args)
logger.info("Setup train/valid Dataloader!") logger.info("Setup train/valid Dataloader!")
else: else:
self.test_loader = DataLoaderFactory.get_dataloader('test', config, self.args) self.test_loader = DataLoaderFactory.get_dataloader('test', config,
self.args)
logger.info("Setup test Dataloader!") logger.info("Setup test Dataloader!")
def setup_model(self): def setup_model(self):
config = self.config config = self.config
model_conf = config model_conf = config
......
...@@ -22,17 +22,16 @@ import paddle ...@@ -22,17 +22,16 @@ import paddle
from paddle.io import BatchSampler from paddle.io import BatchSampler
from paddle.io import DataLoader from paddle.io import DataLoader
from paddle.io import DistributedBatchSampler from paddle.io import DistributedBatchSampler
from yacs.config import CfgNode
import paddlespeech.audio.streamdata as streamdata
from paddlespeech.audio.text.text_featurizer import TextFeaturizer
from paddlespeech.s2t.io.batchfy import make_batchset from paddlespeech.s2t.io.batchfy import make_batchset
from paddlespeech.s2t.io.converter import CustomConverter from paddlespeech.s2t.io.converter import CustomConverter
from paddlespeech.s2t.io.dataset import TransformDataset from paddlespeech.s2t.io.dataset import TransformDataset
from paddlespeech.s2t.io.reader import LoadInputsAndTargets from paddlespeech.s2t.io.reader import LoadInputsAndTargets
from paddlespeech.s2t.utils.log import Log from paddlespeech.s2t.utils.log import Log
import paddlespeech.audio.streamdata as streamdata
from paddlespeech.audio.text.text_featurizer import TextFeaturizer
from yacs.config import CfgNode
__all__ = ["BatchDataLoader", "StreamDataLoader"] __all__ = ["BatchDataLoader", "StreamDataLoader"]
logger = Log(__name__).getlog() logger = Log(__name__).getlog()
...@@ -61,6 +60,7 @@ def batch_collate(x): ...@@ -61,6 +60,7 @@ def batch_collate(x):
""" """
return x[0] return x[0]
def read_preprocess_cfg(preprocess_conf_file): def read_preprocess_cfg(preprocess_conf_file):
augment_conf = dict() augment_conf = dict()
preprocess_cfg = CfgNode(new_allowed=True) preprocess_cfg = CfgNode(new_allowed=True)
...@@ -82,7 +82,8 @@ def read_preprocess_cfg(preprocess_conf_file): ...@@ -82,7 +82,8 @@ def read_preprocess_cfg(preprocess_conf_file):
augment_conf['num_t_mask'] = process['n_mask'] augment_conf['num_t_mask'] = process['n_mask']
augment_conf['t_inplace'] = process['inplace'] augment_conf['t_inplace'] = process['inplace']
augment_conf['t_replace_with_zero'] = process['replace_with_zero'] augment_conf['t_replace_with_zero'] = process['replace_with_zero']
return augment_conf return augment_conf
class StreamDataLoader(): class StreamDataLoader():
def __init__(self, def __init__(self,
...@@ -95,12 +96,12 @@ class StreamDataLoader(): ...@@ -95,12 +96,12 @@ class StreamDataLoader():
frame_length=25, frame_length=25,
frame_shift=10, frame_shift=10,
dither=0.0, dither=0.0,
minlen_in: float=0.0, minlen_in: float=0.0,
maxlen_in: float=float('inf'), maxlen_in: float=float('inf'),
minlen_out: float=0.0, minlen_out: float=0.0,
maxlen_out: float=float('inf'), maxlen_out: float=float('inf'),
resample_rate: int=16000, resample_rate: int=16000,
shuffle_size: int=10000, shuffle_size: int=10000,
sort_size: int=1000, sort_size: int=1000,
n_iter_processes: int=1, n_iter_processes: int=1,
prefetch_factor: int=2, prefetch_factor: int=2,
...@@ -116,11 +117,11 @@ class StreamDataLoader(): ...@@ -116,11 +117,11 @@ class StreamDataLoader():
text_featurizer = TextFeaturizer(unit_type, vocab_filepath) text_featurizer = TextFeaturizer(unit_type, vocab_filepath)
symbol_table = text_featurizer.vocab_dict symbol_table = text_featurizer.vocab_dict
self.feat_dim = num_mel_bins self.feat_dim = num_mel_bins
self.vocab_size = text_featurizer.vocab_size self.vocab_size = text_featurizer.vocab_size
augment_conf = read_preprocess_cfg(preprocess_conf) augment_conf = read_preprocess_cfg(preprocess_conf)
# The list of shard # The list of shard
shardlist = [] shardlist = []
with open(manifest_file, "r") as f: with open(manifest_file, "r") as f:
...@@ -128,58 +129,68 @@ class StreamDataLoader(): ...@@ -128,58 +129,68 @@ class StreamDataLoader():
shardlist.append(line.strip()) shardlist.append(line.strip())
world_size = 1 world_size = 1
try: try:
world_size = paddle.distributed.get_world_size() world_size = paddle.distributed.get_world_size()
except Exception as e: except Exception as e:
logger.warning(e) logger.warning(e)
logger.warninig("can not get world_size using paddle.distributed.get_world_size(), use world_size=1") logger.warninig(
assert(len(shardlist) >= world_size, "the length of shard list should >= number of gpus/xpus/...") "can not get world_size using paddle.distributed.get_world_size(), use world_size=1"
)
assert len(shardlist) >= world_size, \
"the length of shard list should >= number of gpus/xpus/..."
update_n_iter_processes = int(max(min(len(shardlist)/world_size - 1, self.n_iter_processes), 0)) update_n_iter_processes = int(
max(min(len(shardlist) / world_size - 1, self.n_iter_processes), 0))
logger.info(f"update_n_iter_processes {update_n_iter_processes}") logger.info(f"update_n_iter_processes {update_n_iter_processes}")
if update_n_iter_processes != self.n_iter_processes: if update_n_iter_processes != self.n_iter_processes:
self.n_iter_processes = update_n_iter_processes self.n_iter_processes = update_n_iter_processes
logger.info(f"change nun_workers to {self.n_iter_processes}") logger.info(f"change nun_workers to {self.n_iter_processes}")
if self.dist_sampler: if self.dist_sampler:
base_dataset = streamdata.DataPipeline( base_dataset = streamdata.DataPipeline(
streamdata.SimpleShardList(shardlist), streamdata.SimpleShardList(shardlist), streamdata.split_by_node
streamdata.split_by_node if train_mode else streamdata.placeholder(), if train_mode else streamdata.placeholder(),
streamdata.split_by_worker, streamdata.split_by_worker,
streamdata.tarfile_to_samples(streamdata.reraise_exception) streamdata.tarfile_to_samples(streamdata.reraise_exception))
)
else: else:
base_dataset = streamdata.DataPipeline( base_dataset = streamdata.DataPipeline(
streamdata.SimpleShardList(shardlist), streamdata.SimpleShardList(shardlist),
streamdata.split_by_worker, streamdata.split_by_worker,
streamdata.tarfile_to_samples(streamdata.reraise_exception) streamdata.tarfile_to_samples(streamdata.reraise_exception))
)
self.dataset = base_dataset.append_list( self.dataset = base_dataset.append_list(
streamdata.audio_tokenize(symbol_table), streamdata.audio_tokenize(symbol_table),
streamdata.audio_data_filter(frame_shift=frame_shift, max_length=maxlen_in, min_length=minlen_in, token_max_length=maxlen_out, token_min_length=minlen_out), streamdata.audio_data_filter(
frame_shift=frame_shift,
max_length=maxlen_in,
min_length=minlen_in,
token_max_length=maxlen_out,
token_min_length=minlen_out),
streamdata.audio_resample(resample_rate=resample_rate), streamdata.audio_resample(resample_rate=resample_rate),
streamdata.audio_compute_fbank(num_mel_bins=num_mel_bins, frame_length=frame_length, frame_shift=frame_shift, dither=dither), streamdata.audio_compute_fbank(
streamdata.audio_spec_aug(**augment_conf) if train_mode else streamdata.placeholder(), # num_t_mask=2, num_f_mask=2, max_t=40, max_f=30, max_w=80) num_mel_bins=num_mel_bins,
frame_length=frame_length,
frame_shift=frame_shift,
dither=dither),
streamdata.audio_spec_aug(**augment_conf)
if train_mode else streamdata.placeholder(
), # num_t_mask=2, num_f_mask=2, max_t=40, max_f=30, max_w=80)
streamdata.shuffle(shuffle_size), streamdata.shuffle(shuffle_size),
streamdata.sort(sort_size=sort_size), streamdata.sort(sort_size=sort_size),
streamdata.batched(batch_size), streamdata.batched(batch_size),
streamdata.audio_padding(), streamdata.audio_padding(),
streamdata.audio_cmvn(cmvn_file) streamdata.audio_cmvn(cmvn_file))
)
if paddle.__version__ >= '2.3.2': if paddle.__version__ >= '2.3.2':
self.loader = streamdata.WebLoader( self.loader = streamdata.WebLoader(
self.dataset, self.dataset,
num_workers=self.n_iter_processes, num_workers=self.n_iter_processes,
prefetch_factor = self.prefetch_factor, prefetch_factor=self.prefetch_factor,
batch_size=None batch_size=None)
)
else: else:
self.loader = streamdata.WebLoader( self.loader = streamdata.WebLoader(
self.dataset, self.dataset,
num_workers=self.n_iter_processes, num_workers=self.n_iter_processes,
batch_size=None batch_size=None)
)
def __iter__(self): def __iter__(self):
return self.loader.__iter__() return self.loader.__iter__()
...@@ -188,7 +199,9 @@ class StreamDataLoader(): ...@@ -188,7 +199,9 @@ class StreamDataLoader():
return self.__iter__() return self.__iter__()
def __len__(self): def __len__(self):
logger.info("Stream dataloader does not support calculate the length of the dataset") logger.info(
"Stream dataloader does not support calculate the length of the dataset"
)
return -1 return -1
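Taken together, the pipeline above tokenizes, filters, resamples, computes fbank, optionally applies spec augment, then shuffles, sorts, batches, pads, and applies CMVN. A hedged sketch of constructing the loader directly, using the keyword arguments that DataLoaderFactory passes elsewhere in this patch; all paths are placeholders.

# A minimal sketch, assuming StreamDataLoader keeps the keyword arguments shown in this patch.
from paddlespeech.s2t.io.dataloader import StreamDataLoader

loader = StreamDataLoader(
    manifest_file="data/manifest.train",   # one shard tar path per line
    train_mode=True,
    unit_type="char",
    preprocess_conf="conf/preprocess.yaml",
    batch_size=16,
    num_mel_bins=80,
    resample_rate=16000,
    n_iter_processes=2,
    dist_sampler=False,
    cmvn_file="data/mean_std.json",
    vocab_filepath="data/vocab.txt")

for batch in loader:
    # each batch has already been tokenized, filtered, fbank-extracted,
    # (optionally) spec-augmented, shuffled, sorted, padded and CMVN-normalized
    break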
...@@ -347,7 +360,7 @@ class DataLoaderFactory(): ...@@ -347,7 +360,7 @@ class DataLoaderFactory():
config['train_mode'] = True config['train_mode'] = True
elif mode == 'valid': elif mode == 'valid':
config['manifest'] = config.dev_manifest config['manifest'] = config.dev_manifest
config['train_mode'] = False config['train_mode'] = False
elif mode == 'test' or mode == 'align': elif mode == 'test' or mode == 'align':
config['manifest'] = config.test_manifest config['manifest'] = config.test_manifest
config['train_mode'] = False config['train_mode'] = False
...@@ -358,30 +371,31 @@ class DataLoaderFactory(): ...@@ -358,30 +371,31 @@ class DataLoaderFactory():
config['maxlen_out'] = float('inf') config['maxlen_out'] = float('inf')
config['dist_sampler'] = False config['dist_sampler'] = False
else: else:
raise KeyError("not valid mode type!!, please input one of 'train, valid, test, align'") raise KeyError(
return StreamDataLoader( "not valid mode type!!, please input one of 'train, valid, test, align'"
manifest_file=config.manifest,
train_mode=config.train_mode,
unit_type=config.unit_type,
preprocess_conf=config.preprocess_config,
batch_size=config.batch_size,
num_mel_bins=config.feat_dim,
frame_length=config.window_ms,
frame_shift=config.stride_ms,
dither=config.dither,
minlen_in=config.minlen_in,
maxlen_in=config.maxlen_in,
minlen_out=config.minlen_out,
maxlen_out=config.maxlen_out,
resample_rate=config.resample_rate,
shuffle_size=config.shuffle_size,
sort_size=config.sort_size,
n_iter_processes=config.num_workers,
prefetch_factor=config.prefetch_factor,
dist_sampler=config.dist_sampler,
cmvn_file=config.cmvn_file,
vocab_filepath=config.vocab_filepath,
) )
return StreamDataLoader(
manifest_file=config.manifest,
train_mode=config.train_mode,
unit_type=config.unit_type,
preprocess_conf=config.preprocess_config,
batch_size=config.batch_size,
num_mel_bins=config.feat_dim,
frame_length=config.window_ms,
frame_shift=config.stride_ms,
dither=config.dither,
minlen_in=config.minlen_in,
maxlen_in=config.maxlen_in,
minlen_out=config.minlen_out,
maxlen_out=config.maxlen_out,
resample_rate=config.resample_rate,
shuffle_size=config.shuffle_size,
sort_size=config.sort_size,
n_iter_processes=config.num_workers,
prefetch_factor=config.prefetch_factor,
dist_sampler=config.dist_sampler,
cmvn_file=config.cmvn_file,
vocab_filepath=config.vocab_filepath, )
else: else:
if mode == 'train': if mode == 'train':
config['manifest'] = config.train_manifest config['manifest'] = config.train_manifest
...@@ -411,7 +425,7 @@ class DataLoaderFactory(): ...@@ -411,7 +425,7 @@ class DataLoaderFactory():
config['train_mode'] = False config['train_mode'] = False
config['sortagrad'] = False config['sortagrad'] = False
config['batch_size'] = config.get('decode', dict()).get( config['batch_size'] = config.get('decode', dict()).get(
'decode_batch_size', 1) 'decode_batch_size', 1)
config['maxlen_in'] = float('inf') config['maxlen_in'] = float('inf')
config['maxlen_out'] = float('inf') config['maxlen_out'] = float('inf')
config['minibatches'] = 0 config['minibatches'] = 0
...@@ -427,8 +441,10 @@ class DataLoaderFactory(): ...@@ -427,8 +441,10 @@ class DataLoaderFactory():
config['dist_sampler'] = False config['dist_sampler'] = False
config['shortest_first'] = False config['shortest_first'] = False
else: else:
raise KeyError("not valid mode type!!, please input one of 'train, valid, test, align'") raise KeyError(
"not valid mode type!!, please input one of 'train, valid, test, align'"
)
return BatchDataLoader( return BatchDataLoader(
json_file=config.manifest, json_file=config.manifest,
train_mode=config.train_mode, train_mode=config.train_mode,
...@@ -450,4 +466,3 @@ class DataLoaderFactory(): ...@@ -450,4 +466,3 @@ class DataLoaderFactory():
num_encs=config.num_encs, num_encs=config.num_encs,
dist_sampler=config.dist_sampler, dist_sampler=config.dist_sampler,
shortest_first=config.shortest_first) shortest_first=config.shortest_first)
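In practice the trainers in this patch no longer build loaders by hand; they ask the factory for one per split. A hedged sketch of that usage, where config is the experiment CfgNode and args the parsed CLI namespace (both assumed).

# A minimal sketch of how the trainers above obtain their dataloaders.
from paddlespeech.s2t.io.dataloader import DataLoaderFactory


def build_loaders(config, args, training=True):
    if training:
        # the U2 trainers clone the config per call when they need to drop
        # preprocess_config for the valid split
        train_loader = DataLoaderFactory.get_dataloader('train', config, args)
        valid_loader = DataLoaderFactory.get_dataloader('valid', config, args)
        return train_loader, valid_loader
    test_loader = DataLoaderFactory.get_dataloader('test', config, args)
    align_loader = DataLoaderFactory.get_dataloader('align', config, args)
    return test_loader, align_loader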
...@@ -606,7 +606,7 @@ class U2BaseModel(ASRInterface, nn.Layer): ...@@ -606,7 +606,7 @@ class U2BaseModel(ASRInterface, nn.Layer):
offset: int, offset: int,
required_cache_size: int, required_cache_size: int,
att_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]), att_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]),
cnn_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]), cnn_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0])
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]: ) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]:
""" Export interface for c++ call, give input chunk xs, and return """ Export interface for c++ call, give input chunk xs, and return
output from time 0 to current chunk. output from time 0 to current chunk.
......
...@@ -18,7 +18,6 @@ Unified Streaming and Non-streaming Two-pass End-to-end Model for Speech Recogni ...@@ -18,7 +18,6 @@ Unified Streaming and Non-streaming Two-pass End-to-end Model for Speech Recogni
""" """
import time import time
from typing import Dict from typing import Dict
from typing import List
from typing import Optional from typing import Optional
from typing import Tuple from typing import Tuple
...@@ -26,6 +25,8 @@ import paddle ...@@ -26,6 +25,8 @@ import paddle
from paddle import jit from paddle import jit
from paddle import nn from paddle import nn
from paddlespeech.audio.utils.tensor_utils import add_sos_eos
from paddlespeech.audio.utils.tensor_utils import th_accuracy
from paddlespeech.s2t.frontend.utility import IGNORE_ID from paddlespeech.s2t.frontend.utility import IGNORE_ID
from paddlespeech.s2t.frontend.utility import load_cmvn from paddlespeech.s2t.frontend.utility import load_cmvn
from paddlespeech.s2t.modules.cmvn import GlobalCMVN from paddlespeech.s2t.modules.cmvn import GlobalCMVN
...@@ -38,8 +39,6 @@ from paddlespeech.s2t.modules.mask import subsequent_mask ...@@ -38,8 +39,6 @@ from paddlespeech.s2t.modules.mask import subsequent_mask
from paddlespeech.s2t.utils import checkpoint from paddlespeech.s2t.utils import checkpoint
from paddlespeech.s2t.utils import layer_tools from paddlespeech.s2t.utils import layer_tools
from paddlespeech.s2t.utils.log import Log from paddlespeech.s2t.utils.log import Log
from paddlespeech.audio.utils.tensor_utils import add_sos_eos
from paddlespeech.audio.utils.tensor_utils import th_accuracy
from paddlespeech.s2t.utils.utility import UpdateConfig from paddlespeech.s2t.utils.utility import UpdateConfig
__all__ = ["U2STModel", "U2STInferModel"] __all__ = ["U2STModel", "U2STInferModel"]
...@@ -401,8 +400,8 @@ class U2STBaseModel(nn.Layer): ...@@ -401,8 +400,8 @@ class U2STBaseModel(nn.Layer):
xs: paddle.Tensor, xs: paddle.Tensor,
offset: int, offset: int,
required_cache_size: int, required_cache_size: int,
att_cache: paddle.Tensor = paddle.zeros([0, 0, 0, 0]), att_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]),
cnn_cache: paddle.Tensor = paddle.zeros([0, 0, 0, 0]), cnn_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]),
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]: ) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]:
""" Export interface for c++ call, give input chunk xs, and return """ Export interface for c++ call, give input chunk xs, and return
output from time 0 to current chunk. output from time 0 to current chunk.
...@@ -435,8 +434,8 @@ class U2STBaseModel(nn.Layer): ...@@ -435,8 +434,8 @@ class U2STBaseModel(nn.Layer):
paddle.Tensor: new conformer cnn cache required for next chunk, with paddle.Tensor: new conformer cnn cache required for next chunk, with
same shape as the original cnn_cache. same shape as the original cnn_cache.
""" """
return self.encoder.forward_chunk( return self.encoder.forward_chunk(xs, offset, required_cache_size,
xs, offset, required_cache_size, att_cache, cnn_cache) att_cache, cnn_cache)
# @jit.to_static # @jit.to_static
def ctc_activation(self, xs: paddle.Tensor) -> paddle.Tensor: def ctc_activation(self, xs: paddle.Tensor) -> paddle.Tensor:
......
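The signature changes above are formatting only, but for clarity here is a hedged sketch of how the streaming export interface is typically driven: feed feature chunks left to right and carry the attention/CNN caches forward, mirroring the encoder.forward_chunk call shown above. The chunk size and tensor shapes are illustrative, and `model` stands for any U2/U2ST-style model from this patch.

# A minimal sketch of chunk-wise encoding with cached state.
import paddle


def stream_encode(model, feats: paddle.Tensor, chunk: int = 16):
    """feats: (1, T, D) features; returns the concatenated encoder output."""
    att_cache = paddle.zeros([0, 0, 0, 0])   # empty caches for the first chunk
    cnn_cache = paddle.zeros([0, 0, 0, 0])
    offset, outs = 0, []
    for start in range(0, feats.shape[1], chunk):
        xs = feats[:, start:start + chunk, :]
        ys, att_cache, cnn_cache = model.encoder.forward_chunk(
            xs, offset, -1, att_cache, cnn_cache)  # -1: keep the full cache
        offset += ys.shape[1]
        outs.append(ys)
    return paddle.concat(outs, axis=1)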
...@@ -11,9 +11,10 @@ ...@@ -11,9 +11,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import math
import paddle import paddle
from paddle import nn from paddle import nn
import math
""" """
To align the initializer between paddle and torch, To align the initializer between paddle and torch,
the APIs below set a default initializer with priority higher than the global initializer. the APIs below set a default initializer with priority higher than the global initializer.
...@@ -81,10 +82,18 @@ class Linear(nn.Linear): ...@@ -81,10 +82,18 @@ class Linear(nn.Linear):
name=None): name=None):
if weight_attr is None: if weight_attr is None:
if global_init_type == "kaiming_uniform": if global_init_type == "kaiming_uniform":
weight_attr = paddle.ParamAttr(initializer=nn.initializer.KaimingUniform(fan_in=None, negative_slope=math.sqrt(5), nonlinearity='leaky_relu')) weight_attr = paddle.ParamAttr(
initializer=nn.initializer.KaimingUniform(
fan_in=None,
negative_slope=math.sqrt(5),
nonlinearity='leaky_relu'))
if bias_attr is None: if bias_attr is None:
if global_init_type == "kaiming_uniform": if global_init_type == "kaiming_uniform":
bias_attr = paddle.ParamAttr(initializer=nn.initializer.KaimingUniform(fan_in=None, negative_slope=math.sqrt(5), nonlinearity='leaky_relu')) bias_attr = paddle.ParamAttr(
initializer=nn.initializer.KaimingUniform(
fan_in=None,
negative_slope=math.sqrt(5),
nonlinearity='leaky_relu'))
super(Linear, self).__init__(in_features, out_features, weight_attr, super(Linear, self).__init__(in_features, out_features, weight_attr,
bias_attr, name) bias_attr, name)
...@@ -104,10 +113,18 @@ class Conv1D(nn.Conv1D): ...@@ -104,10 +113,18 @@ class Conv1D(nn.Conv1D):
data_format='NCL'): data_format='NCL'):
if weight_attr is None: if weight_attr is None:
if global_init_type == "kaiming_uniform": if global_init_type == "kaiming_uniform":
weight_attr = paddle.ParamAttr(initializer=nn.initializer.KaimingUniform(fan_in=None, negative_slope=math.sqrt(5), nonlinearity='leaky_relu')) weight_attr = paddle.ParamAttr(
initializer=nn.initializer.KaimingUniform(
fan_in=None,
negative_slope=math.sqrt(5),
nonlinearity='leaky_relu'))
if bias_attr is None: if bias_attr is None:
if global_init_type == "kaiming_uniform": if global_init_type == "kaiming_uniform":
bias_attr = paddle.ParamAttr(initializer=nn.initializer.KaimingUniform(fan_in=None, negative_slope=math.sqrt(5), nonlinearity='leaky_relu')) bias_attr = paddle.ParamAttr(
initializer=nn.initializer.KaimingUniform(
fan_in=None,
negative_slope=math.sqrt(5),
nonlinearity='leaky_relu'))
super(Conv1D, self).__init__( super(Conv1D, self).__init__(
in_channels, out_channels, kernel_size, stride, padding, dilation, in_channels, out_channels, kernel_size, stride, padding, dilation,
groups, padding_mode, weight_attr, bias_attr, data_format) groups, padding_mode, weight_attr, bias_attr, data_format)
...@@ -128,10 +145,18 @@ class Conv2D(nn.Conv2D): ...@@ -128,10 +145,18 @@ class Conv2D(nn.Conv2D):
data_format='NCHW'): data_format='NCHW'):
if weight_attr is None: if weight_attr is None:
if global_init_type == "kaiming_uniform": if global_init_type == "kaiming_uniform":
weight_attr = paddle.ParamAttr(initializer=nn.initializer.KaimingUniform(fan_in=None, negative_slope=math.sqrt(5), nonlinearity='leaky_relu')) weight_attr = paddle.ParamAttr(
initializer=nn.initializer.KaimingUniform(
fan_in=None,
negative_slope=math.sqrt(5),
nonlinearity='leaky_relu'))
if bias_attr is None: if bias_attr is None:
if global_init_type == "kaiming_uniform": if global_init_type == "kaiming_uniform":
bias_attr = paddle.ParamAttr(initializer=nn.initializer.KaimingUniform(fan_in=None, negative_slope=math.sqrt(5), nonlinearity='leaky_relu')) bias_attr = paddle.ParamAttr(
initializer=nn.initializer.KaimingUniform(
fan_in=None,
negative_slope=math.sqrt(5),
nonlinearity='leaky_relu'))
super(Conv2D, self).__init__( super(Conv2D, self).__init__(
in_channels, out_channels, kernel_size, stride, padding, dilation, in_channels, out_channels, kernel_size, stride, padding, dilation,
groups, padding_mode, weight_attr, bias_attr, data_format) groups, padding_mode, weight_attr, bias_attr, data_format)
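The three wrappers above repeat the same ParamAttr construction. A compact sketch of the shared pattern; kaiming_param_attr is a hypothetical helper for illustration, not part of the patch.

# A minimal sketch of the initializer pattern repeated above.
import math

import paddle
from paddle import nn


def kaiming_param_attr() -> paddle.ParamAttr:
    """ParamAttr matching torch's default kaiming_uniform_(a=sqrt(5))."""
    return paddle.ParamAttr(
        initializer=nn.initializer.KaimingUniform(
            fan_in=None,
            negative_slope=math.sqrt(5),
            nonlinearity='leaky_relu'))


# e.g. a plain paddle Linear initialized the same way as the wrappers above
linear = nn.Linear(
    256, 256, weight_attr=kaiming_param_attr(), bias_attr=kaiming_param_attr())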
...@@ -15,7 +15,6 @@ ...@@ -15,7 +15,6 @@
# Modified from wenet(https://github.com/wenet-e2e/wenet) # Modified from wenet(https://github.com/wenet-e2e/wenet)
"""Multi-Head Attention layer definition.""" """Multi-Head Attention layer definition."""
import math import math
from typing import Optional
from typing import Tuple from typing import Tuple
import paddle import paddle
...@@ -83,11 +82,12 @@ class MultiHeadedAttention(nn.Layer): ...@@ -83,11 +82,12 @@ class MultiHeadedAttention(nn.Layer):
return q, k, v return q, k, v
def forward_attention(self, def forward_attention(
value: paddle.Tensor, self,
value: paddle.Tensor,
scores: paddle.Tensor, scores: paddle.Tensor,
mask: paddle.Tensor = paddle.ones([0, 0, 0], dtype=paddle.bool), mask: paddle.Tensor=paddle.ones([0, 0, 0], dtype=paddle.bool)
) -> paddle.Tensor: ) -> paddle.Tensor:
"""Compute attention context vector. """Compute attention context vector.
Args: Args:
value (paddle.Tensor): Transformed value, size value (paddle.Tensor): Transformed value, size
...@@ -108,7 +108,7 @@ class MultiHeadedAttention(nn.Layer): ...@@ -108,7 +108,7 @@ class MultiHeadedAttention(nn.Layer):
# When will `if mask.size(2) > 0` be False? # When will `if mask.size(2) > 0` be False?
# 1. onnx(16/-1, -1/-1, 16/0) # 1. onnx(16/-1, -1/-1, 16/0)
# 2. jit (16/-1, -1/-1, 16/0, 16/4) # 2. jit (16/-1, -1/-1, 16/0, 16/4)
if paddle.shape(mask)[2] > 0: # time2 > 0 if paddle.shape(mask)[2] > 0: # time2 > 0
mask = mask.unsqueeze(1).equal(0) # (batch, 1, *, time2) mask = mask.unsqueeze(1).equal(0) # (batch, 1, *, time2)
# for last chunk, time2 might be larger than scores.size(-1) # for last chunk, time2 might be larger than scores.size(-1)
mask = mask[:, :, :, :paddle.shape(scores)[-1]] mask = mask[:, :, :, :paddle.shape(scores)[-1]]
...@@ -131,9 +131,9 @@ class MultiHeadedAttention(nn.Layer): ...@@ -131,9 +131,9 @@ class MultiHeadedAttention(nn.Layer):
query: paddle.Tensor, query: paddle.Tensor,
key: paddle.Tensor, key: paddle.Tensor,
value: paddle.Tensor, value: paddle.Tensor,
mask: paddle.Tensor = paddle.ones([0,0,0], dtype=paddle.bool), mask: paddle.Tensor=paddle.ones([0, 0, 0], dtype=paddle.bool),
pos_emb: paddle.Tensor = paddle.empty([0]), pos_emb: paddle.Tensor=paddle.empty([0]),
cache: paddle.Tensor = paddle.zeros([0,0,0,0]) cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0])
) -> Tuple[paddle.Tensor, paddle.Tensor]: ) -> Tuple[paddle.Tensor, paddle.Tensor]:
"""Compute scaled dot product attention. """Compute scaled dot product attention.
Args: Args:
...@@ -247,9 +247,9 @@ class RelPositionMultiHeadedAttention(MultiHeadedAttention): ...@@ -247,9 +247,9 @@ class RelPositionMultiHeadedAttention(MultiHeadedAttention):
query: paddle.Tensor, query: paddle.Tensor,
key: paddle.Tensor, key: paddle.Tensor,
value: paddle.Tensor, value: paddle.Tensor,
mask: paddle.Tensor = paddle.ones([0,0,0], dtype=paddle.bool), mask: paddle.Tensor=paddle.ones([0, 0, 0], dtype=paddle.bool),
pos_emb: paddle.Tensor = paddle.empty([0]), pos_emb: paddle.Tensor=paddle.empty([0]),
cache: paddle.Tensor = paddle.zeros([0,0,0,0]) cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0])
) -> Tuple[paddle.Tensor, paddle.Tensor]: ) -> Tuple[paddle.Tensor, paddle.Tensor]:
"""Compute 'Scaled Dot Product Attention' with rel. positional encoding. """Compute 'Scaled Dot Product Attention' with rel. positional encoding.
Args: Args:
......
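The reformatted defaults above (paddle.ones([0, 0, 0], dtype=paddle.bool) for masks, paddle.zeros([0, 0, 0, 0]) for caches) are zero-size sentinel tensors: they keep the signatures plain paddle.Tensor, which is friendlier to jit/onnx export, and the code branches on the runtime shape instead of on None. A small hedged sketch of that pattern, independent of the real MultiHeadedAttention:

import paddle


def apply_optional_mask(
        scores: paddle.Tensor,
        mask: paddle.Tensor=paddle.ones([0, 0, 0], dtype=paddle.bool)
) -> paddle.Tensor:
    """A mask of shape [0, 0, 0] means "no mask was given"."""
    if paddle.shape(mask)[2] > 0:  # time2 > 0 -> a real mask was passed
        # additive mask: padded positions get a large negative bias
        pad = mask.unsqueeze(1).logical_not().astype(scores.dtype) * -1e9
        scores = scores + pad  # broadcasts over the head dimension
    return paddle.nn.functional.softmax(scores, axis=-1)


scores = paddle.randn([2, 4, 10, 10])                   # (B, head, time1, time2)
attn = apply_optional_mask(scores)                      # default: no masking
real_mask = paddle.ones([2, 1, 10], dtype=paddle.bool)  # (B, 1, time2)
attn = apply_optional_mask(scores, real_mask)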
...@@ -14,7 +14,6 @@ ...@@ -14,7 +14,6 @@
# limitations under the License. # limitations under the License.
# Modified from wenet(https://github.com/wenet-e2e/wenet) # Modified from wenet(https://github.com/wenet-e2e/wenet)
"""ConvolutionModule definition.""" """ConvolutionModule definition."""
from typing import Optional
from typing import Tuple from typing import Tuple
import paddle import paddle
...@@ -106,11 +105,12 @@ class ConvolutionModule(nn.Layer): ...@@ -106,11 +105,12 @@ class ConvolutionModule(nn.Layer):
) )
self.activation = activation self.activation = activation
def forward(self, def forward(
x: paddle.Tensor, self,
mask_pad: paddle.Tensor= paddle.ones([0,0,0], dtype=paddle.bool), x: paddle.Tensor,
cache: paddle.Tensor= paddle.zeros([0,0,0]), mask_pad: paddle.Tensor=paddle.ones([0, 0, 0], dtype=paddle.bool),
) -> Tuple[paddle.Tensor, paddle.Tensor]: cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0])
) -> Tuple[paddle.Tensor, paddle.Tensor]:
"""Compute convolution module. """Compute convolution module.
Args: Args:
x (paddle.Tensor): Input tensor (#batch, time, channels). x (paddle.Tensor): Input tensor (#batch, time, channels).
...@@ -127,11 +127,11 @@ class ConvolutionModule(nn.Layer): ...@@ -127,11 +127,11 @@ class ConvolutionModule(nn.Layer):
x = x.transpose([0, 2, 1]) # [B, C, T] x = x.transpose([0, 2, 1]) # [B, C, T]
# mask batch padding # mask batch padding
if paddle.shape(mask_pad)[2] > 0: # time > 0 if paddle.shape(mask_pad)[2] > 0: # time > 0
x = x.masked_fill(mask_pad, 0.0) x = x.masked_fill(mask_pad, 0.0)
if self.lorder > 0: if self.lorder > 0:
if paddle.shape(cache)[2] == 0: # cache_t == 0 if paddle.shape(cache)[2] == 0: # cache_t == 0
x = nn.functional.pad( x = nn.functional.pad(
x, [self.lorder, 0], 'constant', 0.0, data_format='NCL') x, [self.lorder, 0], 'constant', 0.0, data_format='NCL')
else: else:
...@@ -161,7 +161,7 @@ class ConvolutionModule(nn.Layer): ...@@ -161,7 +161,7 @@ class ConvolutionModule(nn.Layer):
x = self.pointwise_conv2(x) x = self.pointwise_conv2(x)
# mask batch padding # mask batch padding
if paddle.shape(mask_pad)[2] > 0: # time > 0 if paddle.shape(mask_pad)[2] > 0: # time > 0
x = x.masked_fill(mask_pad, 0.0) x = x.masked_fill(mask_pad, 0.0)
x = x.transpose([0, 2, 1]) # [B, T, C] x = x.transpose([0, 2, 1]) # [B, T, C]
......
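A similar sentinel is used for cache in ConvolutionModule.forward: a zero-size cache means "first chunk", so the input is zero-padded on the left to keep the depthwise convolution causal, while later chunks prepend the cached frames instead. A hedged, stand-alone sketch of just that padding/cache step (not the full module):

import paddle
from paddle import nn


def causal_left_context(x: paddle.Tensor,
                        lorder: int,
                        cache: paddle.Tensor=paddle.zeros([0, 0, 0])
                        ) -> paddle.Tensor:
    """x: (B, C, T); returns x with `lorder` frames of left context."""
    if paddle.shape(cache)[2] == 0:  # cache_t == 0: first chunk
        x = nn.functional.pad(
            x, [lorder, 0], 'constant', 0.0, data_format='NCL')
    else:  # streaming: reuse the frames cached from the previous chunk
        x = paddle.concat([cache, x], axis=2)
    return x


x = paddle.randn([1, 256, 16])
print(causal_left_context(x, lorder=7).shape)               # [1, 256, 23]
prev = paddle.randn([1, 256, 7])
print(causal_left_context(x, lorder=7, cache=prev).shape)   # [1, 256, 23]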
...@@ -14,8 +14,6 @@ ...@@ -14,8 +14,6 @@
# limitations under the License. # limitations under the License.
# Modified from wenet(https://github.com/wenet-e2e/wenet) # Modified from wenet(https://github.com/wenet-e2e/wenet)
"""Encoder definition.""" """Encoder definition."""
from typing import List
from typing import Optional
from typing import Tuple from typing import Tuple
import paddle import paddle
...@@ -190,9 +188,9 @@ class BaseEncoder(nn.Layer): ...@@ -190,9 +188,9 @@ class BaseEncoder(nn.Layer):
xs: paddle.Tensor, xs: paddle.Tensor,
offset: int, offset: int,
required_cache_size: int, required_cache_size: int,
att_cache: paddle.Tensor = paddle.zeros([0,0,0,0]), att_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]),
cnn_cache: paddle.Tensor = paddle.zeros([0,0,0,0]), cnn_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]),
att_mask: paddle.Tensor = paddle.ones([0,0,0], dtype=paddle.bool), att_mask: paddle.Tensor=paddle.ones([0, 0, 0], dtype=paddle.bool)
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]: ) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]:
""" Forward just one chunk """ Forward just one chunk
Args: Args:
...@@ -227,7 +225,7 @@ class BaseEncoder(nn.Layer): ...@@ -227,7 +225,7 @@ class BaseEncoder(nn.Layer):
xs = self.global_cmvn(xs) xs = self.global_cmvn(xs)
# before embed, xs=(B, T, D1), pos_emb=(B=1, T, D) # before embed, xs=(B, T, D1), pos_emb=(B=1, T, D)
xs, pos_emb, _ = self.embed(xs, tmp_masks, offset=offset) xs, pos_emb, _ = self.embed(xs, tmp_masks, offset=offset)
# after embed, xs=(B=1, chunk_size, hidden-dim) # after embed, xs=(B=1, chunk_size, hidden-dim)
elayers = paddle.shape(att_cache)[0] elayers = paddle.shape(att_cache)[0]
...@@ -252,14 +250,16 @@ class BaseEncoder(nn.Layer): ...@@ -252,14 +250,16 @@ class BaseEncoder(nn.Layer):
# att_cache[i:i+1] = (1, head, cache_t1, d_k*2) # att_cache[i:i+1] = (1, head, cache_t1, d_k*2)
# cnn_cache[i:i+1] = (1, B=1, hidden-dim, cache_t2) # cnn_cache[i:i+1] = (1, B=1, hidden-dim, cache_t2)
xs, _, new_att_cache, new_cnn_cache = layer( xs, _, new_att_cache, new_cnn_cache = layer(
xs, att_mask, pos_emb, xs,
att_cache=att_cache[i:i+1] if elayers > 0 else att_cache, att_mask,
cnn_cache=cnn_cache[i:i+1] if paddle.shape(cnn_cache)[0] > 0 else cnn_cache, pos_emb,
) att_cache=att_cache[i:i + 1] if elayers > 0 else att_cache,
cnn_cache=cnn_cache[i:i + 1]
if paddle.shape(cnn_cache)[0] > 0 else cnn_cache, )
# new_att_cache = (1, head, attention_key_size, d_k*2) # new_att_cache = (1, head, attention_key_size, d_k*2)
# new_cnn_cache = (B=1, hidden-dim, cache_t2) # new_cnn_cache = (B=1, hidden-dim, cache_t2)
r_att_cache.append(new_att_cache[:,:, next_cache_start:, :]) r_att_cache.append(new_att_cache[:, :, next_cache_start:, :])
r_cnn_cache.append(new_cnn_cache.unsqueeze(0)) # add elayer dim r_cnn_cache.append(new_cnn_cache.unsqueeze(0)) # add elayer dim
if self.normalize_before: if self.normalize_before:
xs = self.after_norm(xs) xs = self.after_norm(xs)
...@@ -270,7 +270,6 @@ class BaseEncoder(nn.Layer): ...@@ -270,7 +270,6 @@ class BaseEncoder(nn.Layer):
r_cnn_cache = paddle.concat(r_cnn_cache, axis=0) r_cnn_cache = paddle.concat(r_cnn_cache, axis=0)
return xs, r_att_cache, r_cnn_cache return xs, r_att_cache, r_cnn_cache
def forward_chunk_by_chunk( def forward_chunk_by_chunk(
self, self,
xs: paddle.Tensor, xs: paddle.Tensor,
...@@ -315,8 +314,8 @@ class BaseEncoder(nn.Layer): ...@@ -315,8 +314,8 @@ class BaseEncoder(nn.Layer):
num_frames = xs.shape[1] num_frames = xs.shape[1]
required_cache_size = decoding_chunk_size * num_decoding_left_chunks required_cache_size = decoding_chunk_size * num_decoding_left_chunks
att_cache: paddle.Tensor = paddle.zeros([0,0,0,0]) att_cache: paddle.Tensor = paddle.zeros([0, 0, 0, 0])
cnn_cache: paddle.Tensor = paddle.zeros([0,0,0,0]) cnn_cache: paddle.Tensor = paddle.zeros([0, 0, 0, 0])
outputs = [] outputs = []
offset = 0 offset = 0
...@@ -326,7 +325,7 @@ class BaseEncoder(nn.Layer): ...@@ -326,7 +325,7 @@ class BaseEncoder(nn.Layer):
chunk_xs = xs[:, cur:end, :] chunk_xs = xs[:, cur:end, :]
(y, att_cache, cnn_cache) = self.forward_chunk( (y, att_cache, cnn_cache) = self.forward_chunk(
chunk_xs, offset, required_cache_size, att_cache, cnn_cache) chunk_xs, offset, required_cache_size, att_cache, cnn_cache)
outputs.append(y) outputs.append(y)
offset += y.shape[1] offset += y.shape[1]
......
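For readers following the streaming path: forward_chunk_by_chunk threads the two caches through successive forward_chunk calls exactly as in the hunk above. A hedged sketch of that loop; encoder, stride, decoding_window and required_cache_size are assumed to come from a trained BaseEncoder and its subsampling/chunk configuration, so this is an illustration rather than a drop-in function:

import paddle


def stream_encode(encoder, xs, stride, decoding_window, required_cache_size):
    """Feed (1, num_frames, feat_dim) features chunk by chunk."""
    att_cache = paddle.zeros([0, 0, 0, 0])  # (elayers, head, cache_t1, d_k*2)
    cnn_cache = paddle.zeros([0, 0, 0, 0])  # (elayers, B=1, hidden, cache_t2)
    outputs, offset = [], 0
    num_frames = xs.shape[1]
    for cur in range(0, num_frames - decoding_window + 1, stride):
        end = min(cur + decoding_window, num_frames)
        chunk_xs = xs[:, cur:end, :]
        y, att_cache, cnn_cache = encoder.forward_chunk(
            chunk_xs, offset, required_cache_size, att_cache, cnn_cache)
        outputs.append(y)
        offset += y.shape[1]  # offset is counted in decoding frames
    return paddle.concat(outputs, axis=1)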
...@@ -78,7 +78,7 @@ class TransformerEncoderLayer(nn.Layer): ...@@ -78,7 +78,7 @@ class TransformerEncoderLayer(nn.Layer):
pos_emb: paddle.Tensor, pos_emb: paddle.Tensor,
mask_pad: paddle.Tensor=paddle.ones([0, 0, 0], dtype=paddle.bool), mask_pad: paddle.Tensor=paddle.ones([0, 0, 0], dtype=paddle.bool),
att_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]), att_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]),
cnn_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]), cnn_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0])
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor, paddle.Tensor]: ) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor, paddle.Tensor]:
"""Compute encoded features. """Compute encoded features.
Args: Args:
...@@ -195,7 +195,7 @@ class ConformerEncoderLayer(nn.Layer): ...@@ -195,7 +195,7 @@ class ConformerEncoderLayer(nn.Layer):
pos_emb: paddle.Tensor, pos_emb: paddle.Tensor,
mask_pad: paddle.Tensor=paddle.ones([0, 0, 0], dtype=paddle.bool), mask_pad: paddle.Tensor=paddle.ones([0, 0, 0], dtype=paddle.bool),
att_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]), att_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]),
cnn_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0]), cnn_cache: paddle.Tensor=paddle.zeros([0, 0, 0, 0])
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor, paddle.Tensor]: ) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor, paddle.Tensor]:
"""Compute encoded features. """Compute encoded features.
Args: Args:
......
...@@ -11,7 +11,7 @@ ...@@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import numpy as np
class DefaultInitializerContext(object): class DefaultInitializerContext(object):
""" """
......
...@@ -19,6 +19,10 @@ from pathlib import Path ...@@ -19,6 +19,10 @@ from pathlib import Path
import paddle import paddle
from paddle import distributed as dist from paddle import distributed as dist
world_size = dist.get_world_size()
if world_size > 1:
dist.init_parallel_env()
from visualdl import LogWriter from visualdl import LogWriter
from paddlespeech.s2t.training.reporter import ObsScope from paddlespeech.s2t.training.reporter import ObsScope
...@@ -122,9 +126,6 @@ class Trainer(): ...@@ -122,9 +126,6 @@ class Trainer():
else: else:
raise Exception("invalid device") raise Exception("invalid device")
if self.parallel:
self.init_parallel()
self.checkpoint = Checkpoint( self.checkpoint = Checkpoint(
kbest_n=self.config.checkpoint.kbest_n, kbest_n=self.config.checkpoint.kbest_n,
latest_n=self.config.checkpoint.latest_n) latest_n=self.config.checkpoint.latest_n)
...@@ -173,11 +174,6 @@ class Trainer(): ...@@ -173,11 +174,6 @@ class Trainer():
""" """
return self.args.ngpu > 1 return self.args.ngpu > 1
def init_parallel(self):
"""Init environment for multiprocess training.
"""
dist.init_parallel_env()
@mp_tools.rank_zero_only @mp_tools.rank_zero_only
def save(self, tag=None, infos: dict=None): def save(self, tag=None, infos: dict=None):
"""Save checkpoint (model parameters and optimizer states). """Save checkpoint (model parameters and optimizer states).
......
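The Trainer change above replaces the lazy init_parallel() helper with an import-time guard: the parallel environment is initialized once, as soon as the training module is imported, and only when more than one worker is present. A minimal sketch of that pattern (the surrounding training code is omitted):

import paddle
from paddle import distributed as dist

# initialize the collective environment exactly once, at import time,
# and only when the job actually runs with more than one worker
world_size = dist.get_world_size()
if world_size > 1:
    dist.init_parallel_env()

model = paddle.nn.Linear(8, 8)
if world_size > 1:
    # safe to wrap now; gradients will be synchronized across workers
    model = paddle.DataParallel(model)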
...@@ -18,7 +18,6 @@ from typing import List ...@@ -18,7 +18,6 @@ from typing import List
import uvicorn import uvicorn
from fastapi import FastAPI from fastapi import FastAPI
from starlette.middleware.cors import CORSMiddleware
from prettytable import PrettyTable from prettytable import PrettyTable
from starlette.middleware.cors import CORSMiddleware from starlette.middleware.cors import CORSMiddleware
...@@ -46,6 +45,7 @@ app.add_middleware( ...@@ -46,6 +45,7 @@ app.add_middleware(
allow_methods=["*"], allow_methods=["*"],
allow_headers=["*"]) allow_headers=["*"])
@cli_server_register( @cli_server_register(
name='paddlespeech_server.start', description='Start the service') name='paddlespeech_server.start', description='Start the service')
class ServerExecutor(BaseExecutor): class ServerExecutor(BaseExecutor):
...@@ -177,7 +177,7 @@ class ServerStatsExecutor(): ...@@ -177,7 +177,7 @@ class ServerStatsExecutor():
logger.info( logger.info(
"Here is the table of {} static pretrained models supported in the service.". "Here is the table of {} static pretrained models supported in the service.".
format(self.task.upper())) format(self.task.upper()))
self.show_support_models(pretrained_models) self.show_support_models(static_pretrained_models)
return True return True
......
...@@ -25,6 +25,7 @@ asr_python: ...@@ -25,6 +25,7 @@ asr_python:
cfg_path: # [optional] cfg_path: # [optional]
ckpt_path: # [optional] ckpt_path: # [optional]
decode_method: 'attention_rescoring' decode_method: 'attention_rescoring'
num_decoding_left_chunks: -1
force_yes: True force_yes: True
device: # set 'gpu:id' or 'cpu' device: # set 'gpu:id' or 'cpu'
...@@ -38,6 +39,7 @@ asr_inference: ...@@ -38,6 +39,7 @@ asr_inference:
lang: 'zh' lang: 'zh'
sample_rate: 16000 sample_rate: 16000
cfg_path: cfg_path:
num_decoding_left_chunks: -1
decode_method: decode_method:
force_yes: True force_yes: True
......
...@@ -102,8 +102,10 @@ class OnlineCTCEndpoint: ...@@ -102,8 +102,10 @@ class OnlineCTCEndpoint:
assert self.num_frames_decoded >= self.trailing_silence_frames assert self.num_frames_decoded >= self.trailing_silence_frames
assert self.frame_shift_in_ms > 0 assert self.frame_shift_in_ms > 0
decoding_something = (self.num_frames_decoded > self.trailing_silence_frames) and decoding_something decoding_something = (
self.num_frames_decoded > self.trailing_silence_frames
) and decoding_something
utterance_length = self.num_frames_decoded * self.frame_shift_in_ms utterance_length = self.num_frames_decoded * self.frame_shift_in_ms
trailing_silence = self.trailing_silence_frames * self.frame_shift_in_ms trailing_silence = self.trailing_silence_frames * self.frame_shift_in_ms
......
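The reflowed expression above implements one part of the endpoint rule: an endpoint may only fire if something non-silent was actually decoded before the trailing silence. A hedged, self-contained sketch of such a rule; min_trailing_silence_ms is an invented threshold name, and the real OnlineCTCEndpoint combines several rules rather than this single one:

def endpoint_detected(num_frames_decoded: int,
                      trailing_silence_frames: int,
                      frame_shift_in_ms: int,
                      decoding_something: bool,
                      min_trailing_silence_ms: int=800) -> bool:
    """Fire only if non-blank tokens were decoded and silence is long enough."""
    assert num_frames_decoded >= trailing_silence_frames
    assert frame_shift_in_ms > 0
    decoding_something = (
        num_frames_decoded > trailing_silence_frames) and decoding_something
    trailing_silence = trailing_silence_frames * frame_shift_in_ms
    return decoding_something and trailing_silence >= min_trailing_silence_ms


print(endpoint_detected(200, 90, 10, True))   # True: 900 ms of trailing silence
print(endpoint_detected(200, 30, 10, True))   # False: only 300 ms so far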
...@@ -21,12 +21,12 @@ import paddle ...@@ -21,12 +21,12 @@ import paddle
from numpy import float32 from numpy import float32
from yacs.config import CfgNode from yacs.config import CfgNode
from paddlespeech.audio.transform.transformation import Transformation
from paddlespeech.cli.asr.infer import ASRExecutor from paddlespeech.cli.asr.infer import ASRExecutor
from paddlespeech.cli.log import logger from paddlespeech.cli.log import logger
from paddlespeech.resource import CommonTaskResource from paddlespeech.resource import CommonTaskResource
from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer
from paddlespeech.s2t.modules.ctc import CTCDecoder from paddlespeech.s2t.modules.ctc import CTCDecoder
from paddlespeech.audio.transform.transformation import Transformation
from paddlespeech.s2t.utils.utility import UpdateConfig from paddlespeech.s2t.utils.utility import UpdateConfig
from paddlespeech.server.engine.base_engine import BaseEngine from paddlespeech.server.engine.base_engine import BaseEngine
from paddlespeech.server.utils import onnx_infer from paddlespeech.server.utils import onnx_infer
......
...@@ -21,10 +21,10 @@ import paddle ...@@ -21,10 +21,10 @@ import paddle
from numpy import float32 from numpy import float32
from yacs.config import CfgNode from yacs.config import CfgNode
from paddlespeech.audio.transform.transformation import Transformation
from paddlespeech.cli.asr.infer import ASRExecutor from paddlespeech.cli.asr.infer import ASRExecutor
from paddlespeech.cli.log import logger from paddlespeech.cli.log import logger
from paddlespeech.resource import CommonTaskResource from paddlespeech.resource import CommonTaskResource
from paddlespeech.audio.transform.transformation import Transformation
from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer
from paddlespeech.s2t.modules.ctc import CTCDecoder from paddlespeech.s2t.modules.ctc import CTCDecoder
from paddlespeech.s2t.utils.utility import UpdateConfig from paddlespeech.s2t.utils.utility import UpdateConfig
......
...@@ -21,10 +21,10 @@ import paddle ...@@ -21,10 +21,10 @@ import paddle
from numpy import float32 from numpy import float32
from yacs.config import CfgNode from yacs.config import CfgNode
from paddlespeech.audio.transform.transformation import Transformation
from paddlespeech.cli.asr.infer import ASRExecutor from paddlespeech.cli.asr.infer import ASRExecutor
from paddlespeech.cli.log import logger from paddlespeech.cli.log import logger
from paddlespeech.resource import CommonTaskResource from paddlespeech.resource import CommonTaskResource
from paddlespeech.audio.transform.transformation import Transformation
from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer
from paddlespeech.s2t.modules.ctc import CTCDecoder from paddlespeech.s2t.modules.ctc import CTCDecoder
from paddlespeech.s2t.utils.tensor_utils import add_sos_eos from paddlespeech.s2t.utils.tensor_utils import add_sos_eos
...@@ -130,8 +130,8 @@ class PaddleASRConnectionHanddler: ...@@ -130,8 +130,8 @@ class PaddleASRConnectionHanddler:
## conformer ## conformer
# cache for conformer online # cache for conformer online
self.att_cache = paddle.zeros([0,0,0,0]) self.att_cache = paddle.zeros([0, 0, 0, 0])
self.cnn_cache = paddle.zeros([0,0,0,0]) self.cnn_cache = paddle.zeros([0, 0, 0, 0])
self.encoder_out = None self.encoder_out = None
# conformer decoding state # conformer decoding state
...@@ -474,9 +474,13 @@ class PaddleASRConnectionHanddler: ...@@ -474,9 +474,13 @@ class PaddleASRConnectionHanddler:
# cur chunk # cur chunk
chunk_xs = self.cached_feat[:, cur:end, :] chunk_xs = self.cached_feat[:, cur:end, :]
# forward chunk # forward chunk
(y, self.att_cache, self.cnn_cache) = self.model.encoder.forward_chunk( (y, self.att_cache,
chunk_xs, self.offset, required_cache_size, self.cnn_cache) = self.model.encoder.forward_chunk(
self.att_cache, self.cnn_cache) chunk_xs,
self.offset,
required_cache_size,
att_cache=self.att_cache,
cnn_cache=self.cnn_cache)
outputs.append(y) outputs.append(y)
# update the global offset, in decoding frame unit # update the global offset, in decoding frame unit
......
...@@ -68,9 +68,12 @@ class ASREngine(BaseEngine): ...@@ -68,9 +68,12 @@ class ASREngine(BaseEngine):
return False return False
self.executor._init_from_path( self.executor._init_from_path(
self.config.model, self.config.lang, self.config.sample_rate, model_type=self.config.model,
self.config.cfg_path, self.config.decode_method, lang=self.config.lang,
self.config.ckpt_path) sample_rate=self.config.sample_rate,
cfg_path=self.config.cfg_path,
decode_method=self.config.decode_method,
ckpt_path=self.config.ckpt_path)
logger.info("Initialize ASR server engine successfully on device: %s." % logger.info("Initialize ASR server engine successfully on device: %s." %
(self.device)) (self.device))
......
...@@ -27,8 +27,10 @@ def warm_up(engine_and_type: str, warm_up_time: int=3) -> bool: ...@@ -27,8 +27,10 @@ def warm_up(engine_and_type: str, warm_up_time: int=3) -> bool:
sentence = "您好,欢迎使用语音合成服务。" sentence = "您好,欢迎使用语音合成服务。"
elif tts_engine.lang == 'en': elif tts_engine.lang == 'en':
sentence = "Hello and welcome to the speech synthesis service." sentence = "Hello and welcome to the speech synthesis service."
elif tts_engine.lang == 'mix':
sentence = "您好,欢迎使用TTS多语种服务。"
else: else:
logger.error("tts engine only support lang: zh or en.") logger.error("tts engine only support lang: zh or en or mix.")
sys.exit(-1) sys.exit(-1)
if engine_and_type == "tts_python": if engine_and_type == "tts_python":
......
...@@ -105,7 +105,8 @@ class PaddleVectorConnectionHandler: ...@@ -105,7 +105,8 @@ class PaddleVectorConnectionHandler:
# we can not reuse the cache io.BytesIO(audio) data, # we can not reuse the cache io.BytesIO(audio) data,
# because the soundfile will change the io.BytesIO(audio) to the end # because the soundfile will change the io.BytesIO(audio) to the end
# thus we should convert the base64 string to io.BytesIO when we need the audio data # thus we should convert the base64 string to io.BytesIO when we need the audio data
if not self.executor._check(io.BytesIO(audio), sample_rate): if not self.executor._check(
io.BytesIO(audio), sample_rate, force_yes=True):
logger.debug("check the audio sample rate occurs error") logger.debug("check the audio sample rate occurs error")
return np.array([0.0]) return np.array([0.0])
......
...@@ -11,19 +11,12 @@ ...@@ -11,19 +11,12 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
from typing import Collection
from typing import Dict
from typing import List
from typing import Tuple
import numpy as np import numpy as np
import paddle import paddle
from paddlespeech.t2s.datasets.batch import batch_sequences from paddlespeech.t2s.datasets.batch import batch_sequences
from paddlespeech.t2s.datasets.get_feats import LogMelFBank
from paddlespeech.t2s.modules.nets_utils import get_seg_pos from paddlespeech.t2s.modules.nets_utils import get_seg_pos
from paddlespeech.t2s.modules.nets_utils import make_non_pad_mask from paddlespeech.t2s.modules.nets_utils import make_non_pad_mask
from paddlespeech.t2s.modules.nets_utils import pad_list
from paddlespeech.t2s.modules.nets_utils import phones_masking from paddlespeech.t2s.modules.nets_utils import phones_masking
from paddlespeech.t2s.modules.nets_utils import phones_text_masking from paddlespeech.t2s.modules.nets_utils import phones_text_masking
...@@ -492,180 +485,56 @@ def vits_single_spk_batch_fn(examples): ...@@ -492,180 +485,56 @@ def vits_single_spk_batch_fn(examples):
return batch return batch
# for ERNIE SAT def vits_multi_spk_batch_fn(examples):
class MLMCollateFn: """
"""Functor class of common_collate_fn()""" Returns:
Dict[str, Any]:
- text (Tensor): Text index tensor (B, T_text).
- text_lengths (Tensor): Text length tensor (B,).
- feats (Tensor): Feature tensor (B, T_feats, aux_channels).
- feats_lengths (Tensor): Feature length tensor (B,).
- speech (Tensor): Speech waveform tensor (B, T_wav).
- spk_id (Optional[Tensor]): Speaker index tensor (B,) or (B, 1).
- spk_emb (Optional[Tensor]): Speaker embedding tensor (B, spk_embed_dim).
"""
# fields = ["text", "text_lengths", "feats", "feats_lengths", "speech", "spk_id"/"spk_emb"]
text = [np.array(item["text"], dtype=np.int64) for item in examples]
feats = [np.array(item["feats"], dtype=np.float32) for item in examples]
speech = [np.array(item["wave"], dtype=np.float32) for item in examples]
text_lengths = [
np.array(item["text_lengths"], dtype=np.int64) for item in examples
]
feats_lengths = [
np.array(item["feats_lengths"], dtype=np.int64) for item in examples
]
def __init__( text = batch_sequences(text)
self, feats = batch_sequences(feats)
feats_extract, speech = batch_sequences(speech)
mlm_prob: float=0.8,
mean_phn_span: int=8,
seg_emb: bool=False,
text_masking: bool=False,
attention_window: int=0,
not_sequence: Collection[str]=(), ):
self.mlm_prob = mlm_prob
self.mean_phn_span = mean_phn_span
self.feats_extract = feats_extract
self.not_sequence = set(not_sequence)
self.attention_window = attention_window
self.seg_emb = seg_emb
self.text_masking = text_masking
def __call__(self, data: Collection[Tuple[str, Dict[str, np.ndarray]]] # convert each batch to paddle.Tensor
) -> Tuple[List[str], Dict[str, paddle.Tensor]]: text = paddle.to_tensor(text)
return mlm_collate_fn(
data,
feats_extract=self.feats_extract,
mlm_prob=self.mlm_prob,
mean_phn_span=self.mean_phn_span,
seg_emb=self.seg_emb,
text_masking=self.text_masking,
not_sequence=self.not_sequence)
def mlm_collate_fn(
data: Collection[Tuple[str, Dict[str, np.ndarray]]],
feats_extract=None,
mlm_prob: float=0.8,
mean_phn_span: int=8,
seg_emb: bool=False,
text_masking: bool=False,
pad_value: int=0,
not_sequence: Collection[str]=(),
) -> Tuple[List[str], Dict[str, paddle.Tensor]]:
uttids = [u for u, _ in data]
data = [d for _, d in data]
assert all(set(data[0]) == set(d) for d in data), "dict-keys mismatching"
assert all(not k.endswith("_lens")
for k in data[0]), f"*_lens is reserved: {list(data[0])}"
output = {}
for key in data[0]:
array_list = [d[key] for d in data]
# Assume the first axis is length:
# tensor_list: Batch x (Length, ...)
tensor_list = [paddle.to_tensor(a) for a in array_list]
# tensor: (Batch, Length, ...)
tensor = pad_list(tensor_list, pad_value)
output[key] = tensor
# lens: (Batch,)
if key not in not_sequence:
lens = paddle.to_tensor(
[d[key].shape[0] for d in data], dtype=paddle.int64)
output[key + "_lens"] = lens
feats = feats_extract.get_log_mel_fbank(np.array(output["speech"][0]))
feats = paddle.to_tensor(feats) feats = paddle.to_tensor(feats)
print("feats.shape:", feats.shape) text_lengths = paddle.to_tensor(text_lengths)
feats_lens = paddle.shape(feats)[0] feats_lengths = paddle.to_tensor(feats_lengths)
feats = paddle.unsqueeze(feats, 0)
text = output["text"]
text_lens = output["text_lens"]
align_start = output["align_start"]
align_start_lens = output["align_start_lens"]
align_end = output["align_end"]
max_tlen = max(text_lens)
max_slen = max(feats_lens)
speech_pad = feats[:, :max_slen]
text_pad = text
text_mask = make_non_pad_mask(
text_lens, text_pad, length_dim=1).unsqueeze(-2)
speech_mask = make_non_pad_mask(
feats_lens, speech_pad[:, :, 0], length_dim=1).unsqueeze(-2)
span_bdy = None
if 'span_bdy' in output.keys():
span_bdy = output['span_bdy']
# dual_mask: in the mixed Chinese-English case, both the speech and the text are masked
# ERNIE-SAT masks both when handling the cross-lingual case
if text_masking:
masked_pos, text_masked_pos = phones_text_masking(
xs_pad=speech_pad,
src_mask=speech_mask,
text_pad=text_pad,
text_mask=text_mask,
align_start=align_start,
align_end=align_end,
align_start_lens=align_start_lens,
mlm_prob=mlm_prob,
mean_phn_span=mean_phn_span,
span_bdy=span_bdy)
# for training pure Chinese or pure English -> A3T does not mask phonemes, only the speech is masked
# the main difference between A3T and ERNIE-SAT lies in how the masking is done
else:
masked_pos = phones_masking(
xs_pad=speech_pad,
src_mask=speech_mask,
align_start=align_start,
align_end=align_end,
align_start_lens=align_start_lens,
mlm_prob=mlm_prob,
mean_phn_span=mean_phn_span,
span_bdy=span_bdy)
text_masked_pos = paddle.zeros(paddle.shape(text_pad))
output_dict = {}
speech_seg_pos, text_seg_pos = get_seg_pos(
speech_pad=speech_pad,
text_pad=text_pad,
align_start=align_start,
align_end=align_end,
align_start_lens=align_start_lens,
seg_emb=seg_emb)
output_dict['speech'] = speech_pad
output_dict['text'] = text_pad
output_dict['masked_pos'] = masked_pos
output_dict['text_masked_pos'] = text_masked_pos
output_dict['speech_mask'] = speech_mask
output_dict['text_mask'] = text_mask
output_dict['speech_seg_pos'] = speech_seg_pos
output_dict['text_seg_pos'] = text_seg_pos
output = (uttids, output_dict)
return output
def build_mlm_collate_fn(
sr: int=24000,
n_fft: int=2048,
hop_length: int=300,
win_length: int=None,
n_mels: int=80,
fmin: int=80,
fmax: int=7600,
mlm_prob: float=0.8,
mean_phn_span: int=8,
seg_emb: bool=False,
epoch: int=-1, ):
feats_extract_class = LogMelFBank
feats_extract = feats_extract_class(
sr=sr,
n_fft=n_fft,
hop_length=hop_length,
win_length=win_length,
n_mels=n_mels,
fmin=fmin,
fmax=fmax)
if epoch == -1:
mlm_prob_factor = 1
else:
mlm_prob_factor = 0.8
return MLMCollateFn( batch = {
feats_extract=feats_extract, "text": text,
mlm_prob=mlm_prob * mlm_prob_factor, "text_lengths": text_lengths,
mean_phn_span=mean_phn_span, "feats": feats,
seg_emb=seg_emb) "feats_lengths": feats_lengths,
"speech": speech
}
# spk_emb has a higher priority than spk_id
if "spk_emb" in examples[0]:
spk_emb = [
np.array(item["spk_emb"], dtype=np.float32) for item in examples
]
spk_emb = batch_sequences(spk_emb)
spk_emb = paddle.to_tensor(spk_emb)
batch["spk_emb"] = spk_emb
elif "spk_id" in examples[0]:
spk_id = [np.array(item["spk_id"], dtype=np.int64) for item in examples]
spk_id = paddle.to_tensor(spk_id)
batch["spk_id"] = spk_id
return batch
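To make the expected shapes concrete, a small usage sketch for the collate function above; the example dicts are invented (random ids, 80-bin mel frames, a hop size of 300 samples) and it assumes vits_multi_spk_batch_fn and its PaddleSpeech imports are available in scope:

import numpy as np

examples = [
    {
        "text": np.random.randint(0, 50, size=(7, )),
        "text_lengths": 7,
        "feats": np.random.rand(120, 80).astype(np.float32),
        "feats_lengths": 120,
        "wave": np.random.rand(120 * 300).astype(np.float32),
        "spk_id": 3,
    },
    {
        "text": np.random.randint(0, 50, size=(5, )),
        "text_lengths": 5,
        "feats": np.random.rand(90, 80).astype(np.float32),
        "feats_lengths": 90,
        "wave": np.random.rand(90 * 300).astype(np.float32),
        "spk_id": 1,
    },
]

batch = vits_multi_spk_batch_fn(examples)
# text: [2, 7], feats: [2, 120, 80], speech: [2, 36000], spk_id: [2]
print(batch["text"].shape, batch["feats"].shape, batch["spk_id"].shape)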
import math
import numpy as np
from paddle.io import BatchSampler
class ErnieSATSampler(BatchSampler):
"""Sampler that restricts data loading to a subset of the dataset.
In this case, each process can pass an ErnieSATSampler instance
as a DataLoader batch sampler, and load the subset of the original dataset that
is exclusive to it.
.. note::
Dataset is assumed to be of constant size.
Args:
dataset(paddle.io.Dataset): this can be an instance of `paddle.io.Dataset`
or any other python object that implements
`__len__`, so that BatchSampler can get the number
of samples in the data source.
batch_size(int): number of sample indices in each mini-batch.
num_replicas(int, optional): process number in distributed training.
If :attr:`num_replicas` is None, :attr:`num_replicas` will be
retrieved from :code:`paddle.distributed.ParallelEnv`.
Default None.
rank(int, optional): the rank of the current process among :attr:`num_replicas`
processes. If :attr:`rank` is None, :attr:`rank` is retrieved from
:code:`paddle.distributed.ParallelEnv`. Default None.
shuffle(bool): whether to shuffle the indices before generating
batch indices. Default False.
drop_last(bool): whether to drop the last incomplete batch when the dataset
size is not divisible by the batch size. Default False.
Examples:
.. code-block:: python
import numpy as np
from paddle.io import Dataset, DistributedBatchSampler
# init with dataset
class RandomDataset(Dataset):
def __init__(self, num_samples):
self.num_samples = num_samples
def __getitem__(self, idx):
image = np.random.random([784]).astype('float32')
label = np.random.randint(0, 9, (1, )).astype('int64')
return image, label
def __len__(self):
return self.num_samples
dataset = RandomDataset(100)
sampler = DistributedBatchSampler(dataset, batch_size=64)
for data in sampler:
# do something
break
"""
def __init__(self,
dataset,
batch_size,
num_replicas=None,
rank=None,
shuffle=False,
drop_last=False):
self.dataset = dataset
assert isinstance(batch_size, int) and batch_size > 0, \
"batch_size should be a positive integer"
self.batch_size = batch_size
assert isinstance(shuffle, bool), \
"shuffle should be a boolean value"
self.shuffle = shuffle
assert isinstance(drop_last, bool), \
"drop_last should be a boolean value"
from paddle.distributed import ParallelEnv
if num_replicas is not None:
assert isinstance(num_replicas, int) and num_replicas > 0, \
"num_replicas should be a positive integer"
self.nranks = num_replicas
else:
self.nranks = ParallelEnv().nranks
if rank is not None:
assert isinstance(rank, int) and rank >= 0, \
"rank should be a non-negative integer"
self.local_rank = rank
else:
self.local_rank = ParallelEnv().local_rank
self.drop_last = drop_last
self.epoch = 0
self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.nranks))
self.total_size = self.num_samples * self.nranks
def __iter__(self):
num_samples = len(self.dataset)
indices = np.arange(num_samples).tolist()
indices += indices[:(self.total_size - len(indices))]
assert len(indices) == self.total_size
# subsample
def _get_indices_by_batch_size(indices):
subsampled_indices = []
last_batch_size = self.total_size % (self.batch_size * self.nranks)
assert last_batch_size % self.nranks == 0
last_local_batch_size = last_batch_size // self.nranks
for i in range(self.local_rank * self.batch_size,
len(indices) - last_batch_size,
self.batch_size * self.nranks):
subsampled_indices.extend(indices[i:i + self.batch_size])
indices = indices[len(indices) - last_batch_size:]
subsampled_indices.extend(
indices[self.local_rank * last_local_batch_size:(
self.local_rank + 1) * last_local_batch_size])
return subsampled_indices
if self.nranks > 1:
indices = _get_indices_by_batch_size(indices)
assert len(indices) == self.num_samples
_sample_iter = iter(indices)
batch_indices_list = []
batch_indices = []
for idx in _sample_iter:
batch_indices.append(idx)
if len(batch_indices) == self.batch_size:
batch_indices_list.append(batch_indices)
batch_indices = []
if not self.drop_last and len(batch_indices) > 0:
batch_indices_list.append(batch_indices)
if self.shuffle:
np.random.RandomState(self.epoch).shuffle(batch_indices_list)
self.epoch += 1
for batch_indices in batch_indices_list:
yield batch_indices
def __len__(self):
num_samples = self.num_samples
num_samples += int(not self.drop_last) * (self.batch_size - 1)
return num_samples // self.batch_size
def set_epoch(self, epoch):
"""
Sets the epoch number. When :attr:`shuffle=True`, this number is used
as the seed for shuffling. By default, users may not set this; in that case,
all replicas (workers) use a different random ordering for each epoch.
If the same number is set at each epoch, this sampler will yield the same
ordering at all epochs.
Arguments:
epoch (int): Epoch number.
Examples:
.. code-block:: python
import numpy as np
from paddle.io import Dataset, DistributedBatchSampler
# init with dataset
class RandomDataset(Dataset):
def __init__(self, num_samples):
self.num_samples = num_samples
def __getitem__(self, idx):
image = np.random.random([784]).astype('float32')
label = np.random.randint(0, 9, (1, )).astype('int64')
return image, label
def __len__(self):
return self.num_samples
dataset = RandomDataset(100)
sampler = DistributedBatchSampler(dataset, batch_size=64)
for epoch in range(10):
sampler.set_epoch(epoch)
"""
self.epoch = epoch
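A short usage sketch of the sampler defined above, with a toy dataset invented purely for illustration; in the real training entry point it replaces DistributedBatchSampler, as the train.py hunk further below shows:

import numpy as np
from paddle.io import DataLoader
from paddle.io import Dataset


class ToyDataset(Dataset):
    """Tiny stand-in dataset, just to exercise ErnieSATSampler."""

    def __len__(self):
        return 100

    def __getitem__(self, idx):
        return np.random.random([80]).astype('float32')


dataset = ToyDataset()
sampler = ErnieSATSampler(dataset, batch_size=16, shuffle=True)
loader = DataLoader(dataset, batch_sampler=sampler)
for epoch in range(2):
    sampler.set_epoch(epoch)  # deterministic reshuffle per epoch
    for batch in loader:
        pass  # a real training step would go here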
...@@ -19,9 +19,9 @@ import librosa ...@@ -19,9 +19,9 @@ import librosa
import numpy as np import numpy as np
import pypinyin import pypinyin
from praatio import textgrid from praatio import textgrid
from paddlespeech.t2s.exps.ernie_sat.utils import get_tmp_name
from paddlespeech.t2s.exps.ernie_sat.utils import get_dict
from paddlespeech.t2s.exps.ernie_sat.utils import get_dict
from paddlespeech.t2s.exps.ernie_sat.utils import get_tmp_name
DICT_EN = 'tools/aligner/cmudict-0.7b' DICT_EN = 'tools/aligner/cmudict-0.7b'
DICT_ZH = 'tools/aligner/simple.lexicon' DICT_ZH = 'tools/aligner/simple.lexicon'
...@@ -30,6 +30,7 @@ MODEL_DIR_ZH = 'tools/aligner/aishell3_model.zip' ...@@ -30,6 +30,7 @@ MODEL_DIR_ZH = 'tools/aligner/aishell3_model.zip'
MFA_PATH = 'tools/montreal-forced-aligner/bin' MFA_PATH = 'tools/montreal-forced-aligner/bin'
os.environ['PATH'] = MFA_PATH + '/:' + os.environ['PATH'] os.environ['PATH'] = MFA_PATH + '/:' + os.environ['PATH']
def _get_max_idx(dic): def _get_max_idx(dic):
return sorted([int(key.split('_')[0]) for key in dic.keys()])[-1] return sorted([int(key.split('_')[0]) for key in dic.keys()])[-1]
...@@ -57,7 +58,7 @@ def _readtg(tg_path: str, lang: str='en', fs: int=24000, n_shift: int=300): ...@@ -57,7 +58,7 @@ def _readtg(tg_path: str, lang: str='en', fs: int=24000, n_shift: int=300):
durations[-2] += durations[-1] durations[-2] += durations[-1]
durations = durations[:-1] durations = durations[:-1]
# replace ' and 'sil' with 'sp' # replace '' and 'sil' with 'sp'
phones = ['sp' if (phn == '' or phn == 'sil') else phn for phn in phones] phones = ['sp' if (phn == '' or phn == 'sil') else phn for phn in phones]
if lang == 'en': if lang == 'en':
...@@ -106,11 +107,11 @@ def alignment(wav_path: str, ...@@ -106,11 +107,11 @@ def alignment(wav_path: str,
wav_name = os.path.basename(wav_path) wav_name = os.path.basename(wav_path)
utt = wav_name.split('.')[0] utt = wav_name.split('.')[0]
# prepare data for MFA # prepare data for MFA
tmp_name = get_tmp_name(text=text) tmp_name = get_tmp_name(text=text)
tmpbase = './tmp_dir/' + tmp_name tmpbase = './tmp_dir/' + tmp_name
tmpbase = Path(tmpbase) tmpbase = Path(tmpbase)
tmpbase.mkdir(parents=True, exist_ok=True) tmpbase.mkdir(parents=True, exist_ok=True)
print("tmp_name in alignment:",tmp_name) print("tmp_name in alignment:", tmp_name)
shutil.copyfile(wav_path, tmpbase / wav_name) shutil.copyfile(wav_path, tmpbase / wav_name)
txt_name = utt + '.txt' txt_name = utt + '.txt'
...@@ -194,7 +195,7 @@ def words2phns(text: str, lang='en'): ...@@ -194,7 +195,7 @@ def words2phns(text: str, lang='en'):
wrd = wrd.upper() wrd = wrd.upper()
if (wrd not in ds): if (wrd not in ds):
wrd2phns[str(index) + '_' + wrd] = 'spn' wrd2phns[str(index) + '_' + wrd] = 'spn'
phns.extend('spn') phns.extend(['spn'])
else: else:
wrd2phns[str(index) + '_' + wrd] = word2phns_dict[wrd].split() wrd2phns[str(index) + '_' + wrd] = word2phns_dict[wrd].split()
phns.extend(word2phns_dict[wrd].split()) phns.extend(word2phns_dict[wrd].split())
...@@ -340,7 +341,7 @@ def get_phns_spans(wav_path: str, ...@@ -340,7 +341,7 @@ def get_phns_spans(wav_path: str,
if __name__ == '__main__': if __name__ == '__main__':
text = "For that reason cover should not be given." text = "For that reason cover should not be given."
phn, dur, word2phns = alignment("exp/p243_313.wav", text, lang='en') phn, dur, word2phns = alignment("source/p243_313.wav", text, lang='en')
print(phn, dur) print(phn, dur)
print(word2phns) print(word2phns)
print("---------------------------------") print("---------------------------------")
...@@ -352,7 +353,7 @@ if __name__ == '__main__': ...@@ -352,7 +353,7 @@ if __name__ == '__main__':
style=pypinyin.Style.TONE3, style=pypinyin.Style.TONE3,
tone_sandhi=True) tone_sandhi=True)
text_zh = " ".join(text_zh) text_zh = " ".join(text_zh)
phn, dur, word2phns = alignment("exp/000001.wav", text_zh, lang='zh') phn, dur, word2phns = alignment("source/000001.wav", text_zh, lang='zh')
print(phn, dur) print(phn, dur)
print(word2phns) print(word2phns)
print("---------------------------------") print("---------------------------------")
...@@ -367,7 +368,7 @@ if __name__ == '__main__': ...@@ -367,7 +368,7 @@ if __name__ == '__main__':
print("---------------------------------") print("---------------------------------")
outs = get_phns_spans( outs = get_phns_spans(
wav_path="exp/p243_313.wav", wav_path="source/p243_313.wav",
old_str="For that reason cover should not be given.", old_str="For that reason cover should not be given.",
new_str="for that reason cover is impossible to be given.") new_str="for that reason cover is impossible to be given.")
......
...@@ -118,7 +118,7 @@ def main(): ...@@ -118,7 +118,7 @@ def main():
record["spk_emb"] = str(item["spk_emb"]) record["spk_emb"] = str(item["spk_emb"])
output_metadata.append(record) output_metadata.append(record)
output_metadata.sort(key=itemgetter('utt_id')) output_metadata.sort(key=itemgetter('speech_lengths'))
output_metadata_path = Path(args.dumpdir) / "metadata.jsonl" output_metadata_path = Path(args.dumpdir) / "metadata.jsonl"
with jsonlines.open(output_metadata_path, 'w') as writer: with jsonlines.open(output_metadata_path, 'w') as writer:
for item in output_metadata: for item in output_metadata:
......
...@@ -165,7 +165,7 @@ def process_sentences(config, ...@@ -165,7 +165,7 @@ def process_sentences(config,
if record: if record:
results.append(record) results.append(record)
results.sort(key=itemgetter("utt_id")) results.sort(key=itemgetter("speech_lengths"))
# replace 'w' with 'a' to write from the end of file # replace 'w' with 'a' to write from the end of file
with jsonlines.open(output_dir / "metadata.jsonl", 'a') as writer: with jsonlines.open(output_dir / "metadata.jsonl", 'a') as writer:
for item in results: for item in results:
......
...@@ -11,35 +11,41 @@ ...@@ -11,35 +11,41 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import argparse
import os
from pathlib import Path
from typing import List
import librosa import librosa
import numpy as np import numpy as np
import paddle
import pypinyin
import soundfile as sf import soundfile as sf
import yaml
from pypinyin_dict.phrase_pinyin_data import large_pinyin
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.am_batch_fn import build_erniesat_collate_fn
from paddlespeech.t2s.datasets.get_feats import LogMelFBank
from paddlespeech.t2s.exps.ernie_sat.align import get_phns_spans from paddlespeech.t2s.exps.ernie_sat.align import get_phns_spans
from paddlespeech.t2s.exps.ernie_sat.utils import eval_durs from paddlespeech.t2s.exps.ernie_sat.utils import eval_durs
from paddlespeech.t2s.exps.ernie_sat.utils import get_dur_adj_factor from paddlespeech.t2s.exps.ernie_sat.utils import get_dur_adj_factor
from paddlespeech.t2s.exps.ernie_sat.utils import get_span_bdy from paddlespeech.t2s.exps.ernie_sat.utils import get_span_bdy
from paddlespeech.t2s.datasets.am_batch_fn import build_erniesat_collate_fn
from paddlespeech.t2s.exps.syn_utils import get_frontend
from paddlespeech.t2s.datasets.get_feats import LogMelFBank
from paddlespeech.t2s.exps.syn_utils import norm
from paddlespeech.t2s.exps.ernie_sat.utils import get_tmp_name from paddlespeech.t2s.exps.ernie_sat.utils import get_tmp_name
from paddlespeech.t2s.exps.syn_utils import get_am_inference
from paddlespeech.t2s.exps.syn_utils import get_voc_inference
from paddlespeech.t2s.exps.syn_utils import norm
from paddlespeech.t2s.utils import str2bool
large_pinyin.load()
def _p2id(phonemes: List[str]) -> np.ndarray:
def _p2id(self, phonemes: List[str]) -> np.ndarray:
# replace unk phone with sp # replace unk phone with sp
phonemes = [ phonemes = [phn if phn in vocab_phones else "sp" for phn in phonemes]
phn if phn in vocab_phones else "sp" for phn in phonemes
]
phone_ids = [vocab_phones[item] for item in phonemes] phone_ids = [vocab_phones[item] for item in phonemes]
return np.array(phone_ids, np.int64) return np.array(phone_ids, np.int64)
def prep_feats_with_dur(wav_path: str, def prep_feats_with_dur(wav_path: str,
old_str: str='', old_str: str='',
new_str: str='', new_str: str='',
...@@ -67,12 +73,12 @@ def prep_feats_with_dur(wav_path: str, ...@@ -67,12 +73,12 @@ def prep_feats_with_dur(wav_path: str,
fs=fs, fs=fs,
n_shift=n_shift) n_shift=n_shift)
mfa_start = phns_spans_outs["mfa_start"] mfa_start = phns_spans_outs['mfa_start']
mfa_end = phns_spans_outs["mfa_end"] mfa_end = phns_spans_outs['mfa_end']
old_phns = phns_spans_outs["old_phns"] old_phns = phns_spans_outs['old_phns']
new_phns = phns_spans_outs["new_phns"] new_phns = phns_spans_outs['new_phns']
span_to_repl = phns_spans_outs["span_to_repl"] span_to_repl = phns_spans_outs['span_to_repl']
span_to_add = phns_spans_outs["span_to_add"] span_to_add = phns_spans_outs['span_to_add']
# Chinese phns are not necessarily all in the fastspeech2 dictionary; replace them with sp # Chinese phns are not necessarily all in the fastspeech2 dictionary; replace them with sp
if target_lang in {'en', 'zh'}: if target_lang in {'en', 'zh'}:
...@@ -131,9 +137,6 @@ def prep_feats_with_dur(wav_path: str, ...@@ -131,9 +137,6 @@ def prep_feats_with_dur(wav_path: str,
new_wav = np.concatenate( new_wav = np.concatenate(
[wav_org[:wav_left_idx], blank_wav, wav_org[wav_right_idx:]]) [wav_org[:wav_left_idx], blank_wav, wav_org[wav_right_idx:]])
# the audio is masked as expected
sf.write(str("new_wav.wav"), new_wav, samplerate=fs)
# 4. get old and new mel span to be mask # 4. get old and new mel span to be mask
old_span_bdy = get_span_bdy( old_span_bdy = get_span_bdy(
mfa_start=mfa_start, mfa_end=mfa_end, span_to_repl=span_to_repl) mfa_start=mfa_start, mfa_end=mfa_end, span_to_repl=span_to_repl)
...@@ -152,8 +155,6 @@ def prep_feats_with_dur(wav_path: str, ...@@ -152,8 +155,6 @@ def prep_feats_with_dur(wav_path: str,
return outs return outs
def prep_feats(wav_path: str, def prep_feats(wav_path: str,
old_str: str='', old_str: str='',
new_str: str='', new_str: str='',
...@@ -163,7 +164,7 @@ def prep_feats(wav_path: str, ...@@ -163,7 +164,7 @@ def prep_feats(wav_path: str,
fs: int=24000, fs: int=24000,
n_shift: int=300): n_shift: int=300):
outs = prep_feats_with_dur( with_dur_outs = prep_feats_with_dur(
wav_path=wav_path, wav_path=wav_path,
old_str=old_str, old_str=old_str,
new_str=new_str, new_str=new_str,
...@@ -176,138 +177,246 @@ def prep_feats(wav_path: str, ...@@ -176,138 +177,246 @@ def prep_feats(wav_path: str,
wav_name = os.path.basename(wav_path) wav_name = os.path.basename(wav_path)
utt_id = wav_name.split('.')[0] utt_id = wav_name.split('.')[0]
wav = outs['new_wav'] wav = with_dur_outs['new_wav']
phns = outs['new_phns'] phns = with_dur_outs['new_phns']
mfa_start = outs['new_mfa_start'] mfa_start = with_dur_outs['new_mfa_start']
mfa_end = outs['new_mfa_end'] mfa_end = with_dur_outs['new_mfa_end']
old_span_bdy = outs['old_span_bdy'] old_span_bdy = with_dur_outs['old_span_bdy']
new_span_bdy = outs['new_span_bdy'] new_span_bdy = with_dur_outs['new_span_bdy']
span_bdy = np.array(new_span_bdy) span_bdy = np.array(new_span_bdy)
text = _p2id(phns)
mel = mel_extractor.get_log_mel_fbank(wav) mel = mel_extractor.get_log_mel_fbank(wav)
erniesat_mean, erniesat_std = np.load(erniesat_stat) erniesat_mean, erniesat_std = np.load(erniesat_stat)
normed_mel = norm(mel, erniesat_mean, erniesat_std) normed_mel = norm(mel, erniesat_mean, erniesat_std)
tmp_name = get_tmp_name(text=old_str) tmp_name = get_tmp_name(text=old_str)
tmpbase = './tmp_dir/' + tmp_name tmpbase = './tmp_dir/' + tmp_name
tmpbase = Path(tmpbase) tmpbase = Path(tmpbase)
tmpbase.mkdir(parents=True, exist_ok=True) tmpbase.mkdir(parents=True, exist_ok=True)
print("tmp_name in synthesize_e2e:",tmp_name)
mel_path = tmpbase / 'mel.npy' mel_path = tmpbase / 'mel.npy'
print("mel_path:",mel_path) np.save(mel_path, normed_mel)
np.save(mel_path, logmel)
durations = [e - s for e, s in zip(mfa_end, mfa_start)] durations = [e - s for e, s in zip(mfa_end, mfa_start)]
text = _p2id(phns)
datum={ datum = {
"utt_id": utt_id, "utt_id": utt_id,
"spk_id": 0, "spk_id": 0,
"text": text, "text": text,
"text_lengths": len(text), "text_lengths": len(text),
"speech_lengths": 115, "speech_lengths": len(normed_mel),
"durations": durations, "durations": durations,
"speech": mel_path, "speech": np.load(mel_path),
"align_start": mfa_start, "align_start": mfa_start,
"align_end": mfa_end, "align_end": mfa_end,
"span_bdy": span_bdy "span_bdy": span_bdy
} }
batch = collate_fn([datum]) batch = collate_fn([datum])
print("batch:",batch) outs = dict()
outs['batch'] = batch
return batch, old_span_bdy, new_span_bdy outs['old_span_bdy'] = old_span_bdy
outs['new_span_bdy'] = new_span_bdy
return outs
def decode_with_model(mlm_model: nn.Layer,
collate_fn,
wav_path: str, def get_mlm_output(wav_path: str,
old_str: str='', old_str: str='',
new_str: str='', new_str: str='',
source_lang: str='en', source_lang: str='en',
target_lang: str='en', target_lang: str='en',
use_teacher_forcing: bool=False, duration_adjust: bool=True,
duration_adjust: bool=True, fs: int=24000,
fs: int=24000, n_shift: int=300):
n_shift: int=300,
token_list: List[str]=[]): prep_feats_outs = prep_feats(
batch, old_span_bdy, new_span_bdy = prep_feats(
source_lang=source_lang,
target_lang=target_lang,
wav_path=wav_path, wav_path=wav_path,
old_str=old_str, old_str=old_str,
new_str=new_str, new_str=new_str,
source_lang=source_lang,
target_lang=target_lang,
duration_adjust=duration_adjust, duration_adjust=duration_adjust,
fs=fs, fs=fs,
n_shift=n_shift, n_shift=n_shift)
token_list=token_list)
feats = collate_fn(batch)[1]
if 'text_masked_pos' in feats.keys(): batch = prep_feats_outs['batch']
feats.pop('text_masked_pos') new_span_bdy = prep_feats_outs['new_span_bdy']
old_span_bdy = prep_feats_outs['old_span_bdy']
output = mlm_model.inference( out_mels = erniesat_inference(
text=feats['text'], speech=batch['speech'],
speech=feats['speech'], text=batch['text'],
masked_pos=feats['masked_pos'], masked_pos=batch['masked_pos'],
speech_mask=feats['speech_mask'], speech_mask=batch['speech_mask'],
text_mask=feats['text_mask'], text_mask=batch['text_mask'],
speech_seg_pos=feats['speech_seg_pos'], speech_seg_pos=batch['speech_seg_pos'],
text_seg_pos=feats['text_seg_pos'], text_seg_pos=batch['text_seg_pos'],
span_bdy=new_span_bdy, span_bdy=new_span_bdy)
use_teacher_forcing=use_teacher_forcing)
# concatenate the audio features # concatenate the audio features
output_feat = paddle.concat(x=output, axis=0) output_feat = paddle.concat(x=out_mels, axis=0)
wav_org, _ = librosa.load(wav_path, sr=fs) wav_org, _ = librosa.load(wav_path, sr=fs)
return wav_org, output_feat, old_span_bdy, new_span_bdy, fs, hop_length outs = dict()
outs['wav_org'] = wav_org
outs['output_feat'] = output_feat
outs['old_span_bdy'] = old_span_bdy
outs['new_span_bdy'] = new_span_bdy
return outs
if __name__ == '__main__':
fs = 24000
n_shift = 300
wav_path = "exp/p243_313.wav"
old_str = "For that reason cover should not be given."
# for edit
# new_str = "for that reason cover is impossible to be given."
# for synthesize
append_str = "do you love me i love you so much"
new_str = old_str + append_str
''' def get_wav(wav_path: str,
outs = prep_feats_with_dur( source_lang: str='en',
target_lang: str='en',
old_str: str='',
new_str: str='',
duration_adjust: bool=True,
fs: int=24000,
n_shift: int=300,
task_name: str='synthesize'):
outs = get_mlm_output(
wav_path=wav_path, wav_path=wav_path,
old_str=old_str, old_str=old_str,
new_str=new_str, new_str=new_str,
source_lang=source_lang,
target_lang=target_lang,
duration_adjust=duration_adjust,
fs=fs, fs=fs,
n_shift=n_shift) n_shift=n_shift)
new_wav = outs['new_wav'] wav_org = outs['wav_org']
new_phns = outs['new_phns'] output_feat = outs['output_feat']
new_mfa_start = outs['new_mfa_start']
new_mfa_end = outs['new_mfa_end']
old_span_bdy = outs['old_span_bdy'] old_span_bdy = outs['old_span_bdy']
new_span_bdy = outs['new_span_bdy'] new_span_bdy = outs['new_span_bdy']
print("---------------------------------") masked_feat = output_feat[new_span_bdy[0]:new_span_bdy[1]]
with paddle.no_grad():
alt_wav = voc_inference(masked_feat)
alt_wav = np.squeeze(alt_wav)
old_time_bdy = [n_shift * x for x in old_span_bdy]
if task_name == 'edit':
wav_replaced = np.concatenate(
[wav_org[:old_time_bdy[0]], alt_wav, wav_org[old_time_bdy[1]:]])
else:
wav_replaced = alt_wav
wav_dict = {"origin": wav_org, "output": wav_replaced}
return wav_dict
def parse_args():
# parse args and config
parser = argparse.ArgumentParser(
description="Synthesize with acoustic model & vocoder")
# ernie sat
parser.add_argument(
'--erniesat_config',
type=str,
default=None,
help='Config of acoustic model.')
parser.add_argument(
'--erniesat_ckpt',
type=str,
default=None,
help='Checkpoint file of acoustic model.')
parser.add_argument(
"--erniesat_stat",
type=str,
default=None,
help="mean and standard deviation used to normalize spectrogram when training acoustic model."
)
parser.add_argument(
"--phones_dict", type=str, default=None, help="phone vocabulary file.")
# vocoder
parser.add_argument(
'--voc',
type=str,
default='pwgan_csmsc',
choices=[
'pwgan_aishell3',
'pwgan_vctk',
'hifigan_aishell3',
'hifigan_vctk',
],
help='Choose vocoder type of tts task.')
parser.add_argument(
'--voc_config', type=str, default=None, help='Config of voc.')
parser.add_argument(
'--voc_ckpt', type=str, default=None, help='Checkpoint file of voc.')
parser.add_argument(
"--voc_stat",
type=str,
default=None,
help="mean and standard deviation used to normalize spectrogram when training voc."
)
# other
parser.add_argument(
"--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
# ernie sat related
parser.add_argument(
"--task_name",
type=str,
choices=['edit', 'synthesize'],
help="task name.")
parser.add_argument("--wav_path", type=str, help="path of old wav")
parser.add_argument("--old_str", type=str, help="old string")
parser.add_argument("--new_str", type=str, help="new string")
parser.add_argument(
"--source_lang", type=str, default="en", help="source language")
parser.add_argument(
"--target_lang", type=str, default="en", help="target language")
parser.add_argument(
"--duration_adjust",
type=str2bool,
default=True,
help="whether to adjust duration.")
parser.add_argument("--output_name", type=str, default="output.wav")
args = parser.parse_args()
return args
print("new_wav:", new_wav)
print("new_phns:", new_phns)
print("new_mfa_start:", new_mfa_start)
print("new_mfa_end:", new_mfa_end)
print("old_span_bdy:", old_span_bdy)
print("new_span_bdy:", new_span_bdy)
print("---------------------------------")
'''
erniesat_config = "/home/yuantian01/PaddleSpeech_ERNIE_SAT/PaddleSpeech/examples/vctk/ernie_sat/local/default.yaml" if __name__ == '__main__':
args = parse_args()
if args.ngpu == 0:
paddle.set_device("cpu")
elif args.ngpu > 0:
paddle.set_device("gpu")
else:
print("ngpu should >= 0 !")
with open(erniesat_config) as f: # evaluate(args)
with open(args.erniesat_config) as f:
erniesat_config = CfgNode(yaml.safe_load(f)) erniesat_config = CfgNode(yaml.safe_load(f))
old_str = args.old_str
erniesat_stat = "/home/yuantian01/PaddleSpeech_ERNIE_SAT/PaddleSpeech/examples/vctk/ernie_sat/dump/train/speech_stats.npy" new_str = args.new_str
# convert Chinese characters to pinyin
if args.source_lang == 'zh':
old_str = pypinyin.lazy_pinyin(
old_str,
neutral_tone_with_five=True,
style=pypinyin.Style.TONE3,
tone_sandhi=True)
old_str = ' '.join(old_str)
if args.target_lang == 'zh':
new_str = pypinyin.lazy_pinyin(
new_str,
neutral_tone_with_five=True,
style=pypinyin.Style.TONE3,
tone_sandhi=True)
new_str = ' '.join(new_str)
if args.task_name == 'edit':
new_str = new_str
elif args.task_name == 'synthesize':
new_str = old_str + ' ' + new_str
else:
new_str = old_str + ' ' + new_str
# Extractor # Extractor
mel_extractor = LogMelFBank( mel_extractor = LogMelFBank(
...@@ -319,28 +428,52 @@ if __name__ == '__main__': ...@@ -319,28 +428,52 @@ if __name__ == '__main__':
n_mels=erniesat_config.n_mels, n_mels=erniesat_config.n_mels,
fmin=erniesat_config.fmin, fmin=erniesat_config.fmin,
fmax=erniesat_config.fmax) fmax=erniesat_config.fmax)
collate_fn = build_erniesat_collate_fn( collate_fn = build_erniesat_collate_fn(
mlm_prob=erniesat_config.mlm_prob, mlm_prob=erniesat_config.mlm_prob,
mean_phn_span=erniesat_config.mean_phn_span, mean_phn_span=erniesat_config.mean_phn_span,
seg_emb=erniesat_config.model['enc_input_layer'] == 'sega_mlm', seg_emb=erniesat_config.model['enc_input_layer'] == 'sega_mlm',
text_masking=False) text_masking=False)
phones_dict='/home/yuantian01/PaddleSpeech_ERNIE_SAT/PaddleSpeech/examples/vctk/ernie_sat/dump/phone_id_map.txt'
vocab_phones = {} vocab_phones = {}
with open(phones_dict, 'rt') as f: with open(args.phones_dict, 'rt') as f:
phn_id = [line.strip().split() for line in f.readlines()] phn_id = [line.strip().split() for line in f.readlines()]
for phn, id in phn_id: for phn, id in phn_id:
vocab_phones[phn] = int(id) vocab_phones[phn] = int(id)
prep_feats(wav_path=wav_path, # ernie sat model
old_str=old_str, erniesat_inference = get_am_inference(
new_str=new_str, am='erniesat_dataset',
fs=fs, am_config=erniesat_config,
n_shift=n_shift) am_ckpt=args.erniesat_ckpt,
am_stat=args.erniesat_stat,
phones_dict=args.phones_dict)
with open(args.voc_config) as f:
voc_config = CfgNode(yaml.safe_load(f))
# vocoder
voc_inference = get_voc_inference(
voc=args.voc,
voc_config=voc_config,
voc_ckpt=args.voc_ckpt,
voc_stat=args.voc_stat)
erniesat_stat = args.erniesat_stat
wav_dict = get_wav(
wav_path=args.wav_path,
source_lang=args.source_lang,
target_lang=args.target_lang,
old_str=old_str,
new_str=new_str,
duration_adjust=args.duration_adjust,
fs=erniesat_config.fs,
n_shift=erniesat_config.n_shift,
task_name=args.task_name)
sf.write(
args.output_name, wav_dict['output'], samplerate=erniesat_config.fs)
print(
f"\033[1;32;m Generated audio saved into {args.output_name} ! \033[0m")
...@@ -25,12 +25,12 @@ from paddle import DataParallel ...@@ -25,12 +25,12 @@ from paddle import DataParallel
from paddle import distributed as dist from paddle import distributed as dist
from paddle import nn from paddle import nn
from paddle.io import DataLoader from paddle.io import DataLoader
from paddle.io import DistributedBatchSampler
from paddle.optimizer import Adam from paddle.optimizer import Adam
from yacs.config import CfgNode from yacs.config import CfgNode
from paddlespeech.t2s.datasets.am_batch_fn import build_erniesat_collate_fn from paddlespeech.t2s.datasets.am_batch_fn import build_erniesat_collate_fn
from paddlespeech.t2s.datasets.data_table import DataTable from paddlespeech.t2s.datasets.data_table import DataTable
from paddlespeech.t2s.datasets.sampler import ErnieSATSampler
from paddlespeech.t2s.models.ernie_sat import ErnieSAT from paddlespeech.t2s.models.ernie_sat import ErnieSAT
from paddlespeech.t2s.models.ernie_sat import ErnieSATEvaluator from paddlespeech.t2s.models.ernie_sat import ErnieSATEvaluator
from paddlespeech.t2s.models.ernie_sat import ErnieSATUpdater from paddlespeech.t2s.models.ernie_sat import ErnieSATUpdater
...@@ -86,7 +86,7 @@ def train_sp(args, config): ...@@ -86,7 +86,7 @@ def train_sp(args, config):
seg_emb=config.model['enc_input_layer'] == 'sega_mlm', seg_emb=config.model['enc_input_layer'] == 'sega_mlm',
text_masking=config["model"]["text_masking"]) text_masking=config["model"]["text_masking"])
train_sampler = DistributedBatchSampler( train_sampler = ErnieSATSampler(
train_dataset, train_dataset,
batch_size=config.batch_size, batch_size=config.batch_size,
shuffle=True, shuffle=True,
......
...@@ -11,32 +11,35 @@ ...@@ -11,32 +11,35 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import hashlib
import os
from pathlib import Path from pathlib import Path
from typing import Dict from typing import Dict
from typing import List from typing import List
from typing import Union from typing import Union
import os
import numpy as np import numpy as np
import paddle import paddle
import yaml import yaml
from yacs.config import CfgNode from yacs.config import CfgNode
import hashlib
from paddlespeech.t2s.exps.syn_utils import get_am_inference from paddlespeech.t2s.exps.syn_utils import get_am_inference
from paddlespeech.t2s.exps.syn_utils import get_voc_inference from paddlespeech.t2s.exps.syn_utils import get_voc_inference
def _get_user(): def _get_user():
return os.path.expanduser('~').split('/')[-1] return os.path.expanduser('~').split('/')[-1]
def str2md5(string): def str2md5(string):
md5_val = hashlib.md5(string.encode('utf8')).hexdigest() md5_val = hashlib.md5(string.encode('utf8')).hexdigest()
return md5_val return md5_val
def get_tmp_name(text:str):
def get_tmp_name(text: str):
return _get_user() + '_' + str(os.getpid()) + '_' + str2md5(text) return _get_user() + '_' + str(os.getpid()) + '_' + str2md5(text)
def get_dict(dictfile: str): def get_dict(dictfile: str):
word2phns_dict = {} word2phns_dict = {}
with open(dictfile, 'r') as fid: with open(dictfile, 'r') as fid:
......
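As an illustration (not part of the diff), get_tmp_name() above builds a temporary name from the login name, the current PID and an MD5 digest of the text, so concurrent users and processes do not collide. A hedged sketch of what the two helpers return, with the functions above in scope:

print(str2md5("hello world"))      # 5eb63bbbe01eeed093cb22bb8f5acdc3
print(get_tmp_name("hello world")) # e.g. "alice_12345_5eb63bbbe01eeed093cb22bb8f5acdc3"
                                   # (the user name and PID depend on the environment)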
import argparse
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import numpy as np
import tqdm
from paddlespeech.cli.vector import VectorExecutor
def _process_utterance(ifpath: Path,
input_dir: Path,
output_dir: Path,
vec_executor):
rel_path = ifpath.relative_to(input_dir)
ofpath = (output_dir / rel_path).with_suffix(".npy")
ofpath.parent.mkdir(parents=True, exist_ok=True)
embed = vec_executor(audio_file=ifpath, force_yes=True)
np.save(ofpath, embed)
return ofpath
def main(args):
# input output preparation
input_dir = Path(args.input).expanduser()
ifpaths = list(input_dir.rglob(args.pattern))
print(f"{len(ifpaths)} utterances in total")
output_dir = Path(args.output).expanduser()
output_dir.mkdir(parents=True, exist_ok=True)
vec_executor = VectorExecutor()
nprocs = args.num_cpu
# warm up
vec_executor(audio_file=ifpaths[0], force_yes=True)
if nprocs == 1:
results = []
for ifpath in tqdm.tqdm(ifpaths, total=len(ifpaths)):
_process_utterance(
ifpath=ifpath,
input_dir=input_dir,
output_dir=output_dir,
vec_executor=vec_executor)
else:
with ThreadPoolExecutor(nprocs) as pool:
with tqdm.tqdm(total=len(ifpaths)) as progress:
for ifpath in ifpaths:
future = pool.submit(_process_utterance, ifpath, input_dir,
output_dir, vec_executor)
future.add_done_callback(lambda p: progress.update())
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="compute utterance embed.")
parser.add_argument(
"--input", type=str, help="path of the audio_file folder.")
parser.add_argument(
"--pattern",
type=str,
default="*.wav",
help="pattern to filter audio files.")
parser.add_argument(
"--output",
metavar="OUTPUT_DIR",
help="path to save spk embedding results.")
parser.add_argument(
"--num-cpu", type=int, default=1, help="number of process.")
args = parser.parse_args()
main(args)
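A hedged sketch of consuming the embeddings this script writes: each input wav gets a sibling .npy under the output directory. The path below is hypothetical, and the 192-dimensional shape assumes VectorExecutor's default ECAPA-TDNN speaker model.

import numpy as np

emb = np.load("dump/embed/p225/p225_001.npy")   # hypothetical output path
print(emb.shape)                                # expected (192,) for the default model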
...@@ -82,6 +82,10 @@ def denorm(data, mean, std): ...@@ -82,6 +82,10 @@ def denorm(data, mean, std):
return data * std + mean return data * std + mean
def norm(data, mean, std):
return (data - mean) / std
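A quick sanity check, for illustration only, that the norm()/denorm() helpers above are inverses of each other:

import numpy as np

x = np.array([1.0, 2.0, 3.0])
mean, std = x.mean(), x.std()
assert np.allclose(denorm(norm(x, mean, std), mean, std), x)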
def get_chunks(data, block_size: int, pad_size: int): def get_chunks(data, block_size: int, pad_size: int):
data_len = data.shape[1] data_len = data.shape[1]
chunks = [] chunks = []
...@@ -294,8 +298,8 @@ def am_to_static(am_inference, ...@@ -294,8 +298,8 @@ def am_to_static(am_inference,
am_name = am[:am.rindex('_')] am_name = am[:am.rindex('_')]
am_dataset = am[am.rindex('_') + 1:] am_dataset = am[am.rindex('_') + 1:]
if am_name == 'fastspeech2': if am_name == 'fastspeech2':
if am_dataset in {"aishell3", "vctk", "mix" if am_dataset in {"aishell3", "vctk",
} and speaker_dict is not None: "mix"} and speaker_dict is not None:
am_inference = jit.to_static( am_inference = jit.to_static(
am_inference, am_inference,
input_spec=[ input_spec=[
...@@ -307,8 +311,8 @@ def am_to_static(am_inference, ...@@ -307,8 +311,8 @@ def am_to_static(am_inference,
am_inference, input_spec=[InputSpec([-1], dtype=paddle.int64)]) am_inference, input_spec=[InputSpec([-1], dtype=paddle.int64)])
elif am_name == 'speedyspeech': elif am_name == 'speedyspeech':
if am_dataset in {"aishell3", "vctk", "mix" if am_dataset in {"aishell3", "vctk",
} and speaker_dict is not None: "mix"} and speaker_dict is not None:
am_inference = jit.to_static( am_inference = jit.to_static(
am_inference, am_inference,
input_spec=[ input_spec=[
......
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
...@@ -15,6 +15,7 @@ import argparse ...@@ -15,6 +15,7 @@ import argparse
from pathlib import Path from pathlib import Path
import jsonlines import jsonlines
import numpy as np
import paddle import paddle
import soundfile as sf import soundfile as sf
import yaml import yaml
...@@ -23,6 +24,7 @@ from yacs.config import CfgNode ...@@ -23,6 +24,7 @@ from yacs.config import CfgNode
from paddlespeech.t2s.datasets.data_table import DataTable from paddlespeech.t2s.datasets.data_table import DataTable
from paddlespeech.t2s.models.vits import VITS from paddlespeech.t2s.models.vits import VITS
from paddlespeech.t2s.utils import str2bool
def evaluate(args): def evaluate(args):
...@@ -40,8 +42,26 @@ def evaluate(args): ...@@ -40,8 +42,26 @@ def evaluate(args):
print(config) print(config)
fields = ["utt_id", "text"] fields = ["utt_id", "text"]
converters = {}
spk_num = None
if args.speaker_dict is not None:
print("multiple speaker vits!")
with open(args.speaker_dict, 'rt') as f:
spk_id = [line.strip().split() for line in f.readlines()]
spk_num = len(spk_id)
fields += ["spk_id"]
elif args.voice_cloning:
print("Evaluating voice cloning!")
fields += ["spk_emb"]
else:
print("single speaker vits!")
print("spk_num:", spk_num)
test_dataset = DataTable(data=test_metadata, fields=fields) test_dataset = DataTable(
data=test_metadata,
fields=fields,
converters=converters, )
with open(args.phones_dict, "r") as f: with open(args.phones_dict, "r") as f:
phn_id = [line.strip().split() for line in f.readlines()] phn_id = [line.strip().split() for line in f.readlines()]
...@@ -49,6 +69,7 @@ def evaluate(args): ...@@ -49,6 +69,7 @@ def evaluate(args):
print("vocab_size:", vocab_size) print("vocab_size:", vocab_size)
odim = config.n_fft // 2 + 1 odim = config.n_fft // 2 + 1
config["model"]["generator_params"]["spks"] = spk_num
vits = VITS(idim=vocab_size, odim=odim, **config["model"]) vits = VITS(idim=vocab_size, odim=odim, **config["model"])
vits.set_state_dict(paddle.load(args.ckpt)["main_params"]) vits.set_state_dict(paddle.load(args.ckpt)["main_params"])
...@@ -65,7 +86,15 @@ def evaluate(args): ...@@ -65,7 +86,15 @@ def evaluate(args):
phone_ids = paddle.to_tensor(datum["text"]) phone_ids = paddle.to_tensor(datum["text"])
with timer() as t: with timer() as t:
with paddle.no_grad(): with paddle.no_grad():
out = vits.inference(text=phone_ids) spk_emb = None
spk_id = None
# multi speaker
if args.voice_cloning and "spk_emb" in datum:
spk_emb = paddle.to_tensor(np.load(datum["spk_emb"]))
elif "spk_id" in datum:
spk_id = paddle.to_tensor(datum["spk_id"])
out = vits.inference(
text=phone_ids, sids=spk_id, spembs=spk_emb)
wav = out["wav"] wav = out["wav"]
wav = wav.numpy() wav = wav.numpy()
N += wav.size N += wav.size
...@@ -90,6 +119,13 @@ def parse_args(): ...@@ -90,6 +119,13 @@ def parse_args():
'--ckpt', type=str, default=None, help='Checkpoint file of VITS.') '--ckpt', type=str, default=None, help='Checkpoint file of VITS.')
parser.add_argument( parser.add_argument(
"--phones_dict", type=str, default=None, help="phone vocabulary file.") "--phones_dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--speaker_dict", type=str, default=None, help="speaker id map file.")
parser.add_argument(
"--voice-cloning",
type=str2bool,
default=False,
help="whether training voice cloning model.")
# other # other
parser.add_argument( parser.add_argument(
"--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.") "--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
......
...@@ -42,12 +42,23 @@ def evaluate(args): ...@@ -42,12 +42,23 @@ def evaluate(args):
# frontend # frontend
frontend = get_frontend(lang=args.lang, phones_dict=args.phones_dict) frontend = get_frontend(lang=args.lang, phones_dict=args.phones_dict)
spk_num = None
if args.speaker_dict is not None:
print("multiple speaker vits!")
with open(args.speaker_dict, 'rt') as f:
spk_id = [line.strip().split() for line in f.readlines()]
spk_num = len(spk_id)
else:
print("single speaker vits!")
print("spk_num:", spk_num)
with open(args.phones_dict, "r") as f: with open(args.phones_dict, "r") as f:
phn_id = [line.strip().split() for line in f.readlines()] phn_id = [line.strip().split() for line in f.readlines()]
vocab_size = len(phn_id) vocab_size = len(phn_id)
print("vocab_size:", vocab_size) print("vocab_size:", vocab_size)
odim = config.n_fft // 2 + 1 odim = config.n_fft // 2 + 1
config["model"]["generator_params"]["spks"] = spk_num
vits = VITS(idim=vocab_size, odim=odim, **config["model"]) vits = VITS(idim=vocab_size, odim=odim, **config["model"])
vits.set_state_dict(paddle.load(args.ckpt)["main_params"]) vits.set_state_dict(paddle.load(args.ckpt)["main_params"])
...@@ -78,7 +89,10 @@ def evaluate(args): ...@@ -78,7 +89,10 @@ def evaluate(args):
flags = 0 flags = 0
for i in range(len(phone_ids)): for i in range(len(phone_ids)):
part_phone_ids = phone_ids[i] part_phone_ids = phone_ids[i]
out = vits.inference(text=part_phone_ids) spk_id = None
if spk_num is not None:
spk_id = paddle.to_tensor(args.spk_id)
out = vits.inference(text=part_phone_ids, sids=spk_id)
wav = out["wav"] wav = out["wav"]
if flags == 0: if flags == 0:
wav_all = wav wav_all = wav
...@@ -109,6 +123,13 @@ def parse_args(): ...@@ -109,6 +123,13 @@ def parse_args():
'--ckpt', type=str, default=None, help='Checkpoint file of VITS.') '--ckpt', type=str, default=None, help='Checkpoint file of VITS.')
parser.add_argument( parser.add_argument(
"--phones_dict", type=str, default=None, help="phone vocabulary file.") "--phones_dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--speaker_dict", type=str, default=None, help="speaker id map file.")
parser.add_argument(
'--spk_id',
type=int,
default=0,
help='spk id for multi speaker acoustic model')
# other # other
parser.add_argument( parser.add_argument(
'--lang', '--lang',
......
...@@ -28,6 +28,7 @@ from paddle.io import DistributedBatchSampler ...@@ -28,6 +28,7 @@ from paddle.io import DistributedBatchSampler
from paddle.optimizer import Adam from paddle.optimizer import Adam
from yacs.config import CfgNode from yacs.config import CfgNode
from paddlespeech.t2s.datasets.am_batch_fn import vits_multi_spk_batch_fn
from paddlespeech.t2s.datasets.am_batch_fn import vits_single_spk_batch_fn from paddlespeech.t2s.datasets.am_batch_fn import vits_single_spk_batch_fn
from paddlespeech.t2s.datasets.data_table import DataTable from paddlespeech.t2s.datasets.data_table import DataTable
from paddlespeech.t2s.models.vits import VITS from paddlespeech.t2s.models.vits import VITS
...@@ -43,6 +44,7 @@ from paddlespeech.t2s.training.extensions.visualizer import VisualDL ...@@ -43,6 +44,7 @@ from paddlespeech.t2s.training.extensions.visualizer import VisualDL
from paddlespeech.t2s.training.optimizer import scheduler_classes from paddlespeech.t2s.training.optimizer import scheduler_classes
from paddlespeech.t2s.training.seeding import seed_everything from paddlespeech.t2s.training.seeding import seed_everything
from paddlespeech.t2s.training.trainer import Trainer from paddlespeech.t2s.training.trainer import Trainer
from paddlespeech.t2s.utils import str2bool
def train_sp(args, config): def train_sp(args, config):
...@@ -72,6 +74,23 @@ def train_sp(args, config): ...@@ -72,6 +74,23 @@ def train_sp(args, config):
"wave": np.load, "wave": np.load,
"feats": np.load, "feats": np.load,
} }
spk_num = None
if args.speaker_dict is not None:
print("multiple speaker vits!")
collate_fn = vits_multi_spk_batch_fn
with open(args.speaker_dict, 'rt') as f:
spk_id = [line.strip().split() for line in f.readlines()]
spk_num = len(spk_id)
fields += ["spk_id"]
elif args.voice_cloning:
print("Training voice cloning!")
collate_fn = vits_multi_spk_batch_fn
fields += ["spk_emb"]
converters["spk_emb"] = np.load
else:
print("single speaker vits!")
collate_fn = vits_single_spk_batch_fn
print("spk_num:", spk_num)
# construct dataset for training and validation # construct dataset for training and validation
with jsonlines.open(args.train_metadata, 'r') as reader: with jsonlines.open(args.train_metadata, 'r') as reader:
...@@ -100,18 +119,16 @@ def train_sp(args, config): ...@@ -100,18 +119,16 @@ def train_sp(args, config):
drop_last=False) drop_last=False)
print("samplers done!") print("samplers done!")
train_batch_fn = vits_single_spk_batch_fn
train_dataloader = DataLoader( train_dataloader = DataLoader(
train_dataset, train_dataset,
batch_sampler=train_sampler, batch_sampler=train_sampler,
collate_fn=train_batch_fn, collate_fn=collate_fn,
num_workers=config.num_workers) num_workers=config.num_workers)
dev_dataloader = DataLoader( dev_dataloader = DataLoader(
dev_dataset, dev_dataset,
batch_sampler=dev_sampler, batch_sampler=dev_sampler,
collate_fn=train_batch_fn, collate_fn=collate_fn,
num_workers=config.num_workers) num_workers=config.num_workers)
print("dataloaders done!") print("dataloaders done!")
...@@ -121,6 +138,7 @@ def train_sp(args, config): ...@@ -121,6 +138,7 @@ def train_sp(args, config):
print("vocab_size:", vocab_size) print("vocab_size:", vocab_size)
odim = config.n_fft // 2 + 1 odim = config.n_fft // 2 + 1
config["model"]["generator_params"]["spks"] = spk_num
model = VITS(idim=vocab_size, odim=odim, **config["model"]) model = VITS(idim=vocab_size, odim=odim, **config["model"])
gen_parameters = model.generator.parameters() gen_parameters = model.generator.parameters()
dis_parameters = model.discriminator.parameters() dis_parameters = model.discriminator.parameters()
...@@ -240,6 +258,17 @@ def main(): ...@@ -240,6 +258,17 @@ def main():
"--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.") "--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
parser.add_argument( parser.add_argument(
"--phones-dict", type=str, default=None, help="phone vocabulary file.") "--phones-dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--speaker-dict",
type=str,
default=None,
help="speaker id map file for multiple speaker model.")
parser.add_argument(
"--voice-cloning",
type=str2bool,
default=False,
help="whether training voice cloning model.")
args = parser.parse_args() args = parser.parse_args()
......
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
from pathlib import Path
import librosa
import numpy as np
import paddle
import soundfile as sf
import yaml
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.get_feats import LinearSpectrogram
from paddlespeech.t2s.exps.syn_utils import get_frontend
from paddlespeech.t2s.models.vits import VITS
from paddlespeech.t2s.utils import str2bool
from paddlespeech.vector.exps.ge2e.audio_processor import SpeakerVerificationPreprocessor
from paddlespeech.vector.models.lstm_speaker_encoder import LSTMSpeakerEncoder
def voice_cloning(args):
# Init body.
with open(args.config) as f:
config = CfgNode(yaml.safe_load(f))
print("========Args========")
print(yaml.safe_dump(vars(args)))
print("========Config========")
print(config)
# speaker encoder
spec_extractor = LinearSpectrogram(
n_fft=config.n_fft,
hop_length=config.n_shift,
win_length=config.win_length,
window=config.window)
p = SpeakerVerificationPreprocessor(
sampling_rate=16000,
audio_norm_target_dBFS=-30,
vad_window_length=30,
vad_moving_average_width=8,
vad_max_silence_length=6,
mel_window_length=25,
mel_window_step=10,
n_mels=40,
partial_n_frames=160,
min_pad_coverage=0.75,
partial_overlap_ratio=0.5)
print("Audio Processor Done!")
speaker_encoder = LSTMSpeakerEncoder(
n_mels=40, num_layers=3, hidden_size=256, output_size=256)
speaker_encoder.set_state_dict(paddle.load(args.ge2e_params_path))
speaker_encoder.eval()
print("GE2E Done!")
frontend = get_frontend(lang=args.lang, phones_dict=args.phones_dict)
print("frontend done!")
with open(args.phones_dict, "r") as f:
phn_id = [line.strip().split() for line in f.readlines()]
vocab_size = len(phn_id)
print("vocab_size:", vocab_size)
odim = config.n_fft // 2 + 1
vits = VITS(idim=vocab_size, odim=odim, **config["model"])
vits.set_state_dict(paddle.load(args.ckpt)["main_params"])
vits.eval()
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
input_dir = Path(args.input_dir)
if args.audio_path == "":
args.audio_path = None
if args.audio_path is None:
sentence = args.text
merge_sentences = True
add_blank = args.add_blank
if args.lang == 'zh':
input_ids = frontend.get_input_ids(
sentence, merge_sentences=merge_sentences, add_blank=add_blank)
elif args.lang == 'en':
input_ids = frontend.get_input_ids(
sentence, merge_sentences=merge_sentences)
phone_ids = input_ids["phone_ids"][0]
else:
wav, _ = librosa.load(str(args.audio_path), sr=config.fs)
feats = paddle.to_tensor(spec_extractor.get_linear_spectrogram(wav))
mel_sequences = p.extract_mel_partials(
p.preprocess_wav(args.audio_path))
with paddle.no_grad():
spk_emb_src = speaker_encoder.embed_utterance(
paddle.to_tensor(mel_sequences))
for name in os.listdir(input_dir):
utt_id = name.split(".")[0]
ref_audio_path = input_dir / name
mel_sequences = p.extract_mel_partials(p.preprocess_wav(ref_audio_path))
# print("mel_sequences: ", mel_sequences.shape)
with paddle.no_grad():
spk_emb = speaker_encoder.embed_utterance(
paddle.to_tensor(mel_sequences))
# print("spk_emb shape: ", spk_emb.shape)
with paddle.no_grad():
if args.audio_path is None:
out = vits.inference(text=phone_ids, spembs=spk_emb)
else:
out = vits.voice_conversion(
feats=feats, spembs_src=spk_emb_src, spembs_tgt=spk_emb)
wav = out["wav"]
sf.write(
str(output_dir / (utt_id + ".wav")),
wav.numpy(),
samplerate=config.fs)
print(f"{utt_id} done!")
        # Randomly generate numbers in 0 ~ 0.2; 256 is the dim of spk_emb
random_spk_emb = np.random.rand(256) * 0.2
random_spk_emb = paddle.to_tensor(random_spk_emb, dtype='float32')
utt_id = "random_spk_emb"
with paddle.no_grad():
if args.audio_path is None:
out = vits.inference(text=phone_ids, spembs=random_spk_emb)
else:
out = vits.voice_conversion(
feats=feats, spembs_src=spk_emb_src, spembs_tgt=random_spk_emb)
wav = out["wav"]
sf.write(
str(output_dir / (utt_id + ".wav")), wav.numpy(), samplerate=config.fs)
print(f"{utt_id} done!")
def parse_args():
# parse args and config
parser = argparse.ArgumentParser(description="")
parser.add_argument(
'--config', type=str, default=None, help='Config of VITS.')
parser.add_argument(
'--ckpt', type=str, default=None, help='Checkpoint file of VITS.')
parser.add_argument(
"--phones_dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--text",
type=str,
default="每当你觉得,想要批评什么人的时候,你切要记着,这个世界上的人,并非都具备你禀有的条件。",
help="text to synthesize, a line")
parser.add_argument(
'--lang',
type=str,
default='zh',
help='Choose model language. zh or en')
parser.add_argument(
"--audio-path",
type=str,
default=None,
help="audio as content to synthesize")
parser.add_argument(
"--ge2e_params_path", type=str, help="ge2e params path.")
parser.add_argument(
"--ngpu", type=int, default=1, help="if ngpu=0, use cpu.")
parser.add_argument(
"--input-dir",
type=str,
help="input dir of *.wav, the sample rate will be resample to 16k.")
parser.add_argument("--output-dir", type=str, help="output dir.")
parser.add_argument(
"--add-blank",
type=str2bool,
default=True,
help="whether to add blank between phones")
args = parser.parse_args()
return args
def main():
args = parse_args()
if args.ngpu == 0:
paddle.set_device("cpu")
elif args.ngpu > 0:
paddle.set_device("gpu")
else:
print("ngpu should >= 0 !")
voice_cloning(args)
if __name__ == "__main__":
main()
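For completeness, a hedged sketch of driving the new script's voice_cloning() entry point directly from Python; every path below is hypothetical, and argparse turns the dashed flags above into underscored attribute names.

import argparse
import paddle

paddle.set_device("cpu")
args = argparse.Namespace(
    config="default.yaml",                # hypothetical VITS config
    ckpt="snapshot_iter_150000.pdz",      # hypothetical checkpoint
    phones_dict="phone_id_map.txt",
    text="每当你觉得,想要批评什么人的时候,你切要记着,这个世界上的人,并非都具备你禀有的条件。",
    lang="zh",
    audio_path=None,                      # None -> synthesize args.text
    ge2e_params_path="ge2e_ckpt.pdparams",
    ngpu=0,
    input_dir="ref_audio",                # reference *.wav files, one speaker each
    output_dir="vc_output",
    add_blank=True)
voice_cloning(args)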
...@@ -21,13 +21,28 @@ import soundfile as sf ...@@ -21,13 +21,28 @@ import soundfile as sf
import yaml import yaml
from yacs.config import CfgNode from yacs.config import CfgNode
from paddlespeech.cli.vector import VectorExecutor
from paddlespeech.t2s.exps.syn_utils import get_am_inference from paddlespeech.t2s.exps.syn_utils import get_am_inference
from paddlespeech.t2s.exps.syn_utils import get_voc_inference from paddlespeech.t2s.exps.syn_utils import get_voc_inference
from paddlespeech.t2s.frontend.zh_frontend import Frontend from paddlespeech.t2s.frontend.zh_frontend import Frontend
from paddlespeech.t2s.utils import str2bool
from paddlespeech.vector.exps.ge2e.audio_processor import SpeakerVerificationPreprocessor from paddlespeech.vector.exps.ge2e.audio_processor import SpeakerVerificationPreprocessor
from paddlespeech.vector.models.lstm_speaker_encoder import LSTMSpeakerEncoder from paddlespeech.vector.models.lstm_speaker_encoder import LSTMSpeakerEncoder
def gen_random_embed(use_ecapa: bool=False):
if use_ecapa:
        # Randomly generate numbers in -25 ~ 25; 192 is the dim of spk_emb
random_spk_emb = (-1 + 2 * np.random.rand(192)) * 25
# GE2E
else:
        # Randomly generate numbers in 0 ~ 0.2; 256 is the dim of spk_emb
random_spk_emb = np.random.rand(256) * 0.2
random_spk_emb = paddle.to_tensor(random_spk_emb, dtype='float32')
return random_spk_emb
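A minimal usage sketch (not from the diff) for the gen_random_embed() helper above; the sizes follow the comments in the function, 192 for ECAPA-TDNN and 256 for GE2E.

emb_ecapa = gen_random_embed(use_ecapa=True)
emb_ge2e = gen_random_embed(use_ecapa=False)
print(emb_ecapa.shape, emb_ge2e.shape)   # [192] [256]
print(emb_ecapa.dtype)                   # paddle.float32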
def voice_cloning(args): def voice_cloning(args):
# Init body. # Init body.
with open(args.am_config) as f: with open(args.am_config) as f:
...@@ -41,30 +56,47 @@ def voice_cloning(args): ...@@ -41,30 +56,47 @@ def voice_cloning(args):
print(am_config) print(am_config)
print(voc_config) print(voc_config)
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
input_dir = Path(args.input_dir)
# speaker encoder # speaker encoder
p = SpeakerVerificationPreprocessor( if args.use_ecapa:
sampling_rate=16000, vec_executor = VectorExecutor()
audio_norm_target_dBFS=-30, # warm up
vad_window_length=30, vec_executor(
vad_moving_average_width=8, audio_file=input_dir / os.listdir(input_dir)[0], force_yes=True)
vad_max_silence_length=6, print("ECAPA-TDNN Done!")
mel_window_length=25, # use GE2E
mel_window_step=10, else:
n_mels=40, p = SpeakerVerificationPreprocessor(
partial_n_frames=160, sampling_rate=16000,
min_pad_coverage=0.75, audio_norm_target_dBFS=-30,
partial_overlap_ratio=0.5) vad_window_length=30,
print("Audio Processor Done!") vad_moving_average_width=8,
vad_max_silence_length=6,
speaker_encoder = LSTMSpeakerEncoder( mel_window_length=25,
n_mels=40, num_layers=3, hidden_size=256, output_size=256) mel_window_step=10,
speaker_encoder.set_state_dict(paddle.load(args.ge2e_params_path)) n_mels=40,
speaker_encoder.eval() partial_n_frames=160,
print("GE2E Done!") min_pad_coverage=0.75,
partial_overlap_ratio=0.5)
print("Audio Processor Done!")
speaker_encoder = LSTMSpeakerEncoder(
n_mels=40, num_layers=3, hidden_size=256, output_size=256)
speaker_encoder.set_state_dict(paddle.load(args.ge2e_params_path))
speaker_encoder.eval()
print("GE2E Done!")
frontend = Frontend(phone_vocab_path=args.phones_dict) frontend = Frontend(phone_vocab_path=args.phones_dict)
print("frontend done!") print("frontend done!")
sentence = args.text
input_ids = frontend.get_input_ids(sentence, merge_sentences=True)
phone_ids = input_ids["phone_ids"][0]
# acoustic model # acoustic model
am_inference = get_am_inference( am_inference = get_am_inference(
am=args.am, am=args.am,
...@@ -80,26 +112,19 @@ def voice_cloning(args): ...@@ -80,26 +112,19 @@ def voice_cloning(args):
voc_ckpt=args.voc_ckpt, voc_ckpt=args.voc_ckpt,
voc_stat=args.voc_stat) voc_stat=args.voc_stat)
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
input_dir = Path(args.input_dir)
sentence = args.text
input_ids = frontend.get_input_ids(sentence, merge_sentences=True)
phone_ids = input_ids["phone_ids"][0]
for name in os.listdir(input_dir): for name in os.listdir(input_dir):
utt_id = name.split(".")[0] utt_id = name.split(".")[0]
ref_audio_path = input_dir / name ref_audio_path = input_dir / name
mel_sequences = p.extract_mel_partials(p.preprocess_wav(ref_audio_path)) if args.use_ecapa:
# print("mel_sequences: ", mel_sequences.shape) spk_emb = vec_executor(audio_file=ref_audio_path, force_yes=True)
with paddle.no_grad(): spk_emb = paddle.to_tensor(spk_emb)
spk_emb = speaker_encoder.embed_utterance( # GE2E
paddle.to_tensor(mel_sequences)) else:
# print("spk_emb shape: ", spk_emb.shape) mel_sequences = p.extract_mel_partials(
p.preprocess_wav(ref_audio_path))
with paddle.no_grad():
spk_emb = speaker_encoder.embed_utterance(
paddle.to_tensor(mel_sequences))
with paddle.no_grad(): with paddle.no_grad():
wav = voc_inference(am_inference(phone_ids, spk_emb=spk_emb)) wav = voc_inference(am_inference(phone_ids, spk_emb=spk_emb))
...@@ -108,16 +133,17 @@ def voice_cloning(args): ...@@ -108,16 +133,17 @@ def voice_cloning(args):
wav.numpy(), wav.numpy(),
samplerate=am_config.fs) samplerate=am_config.fs)
print(f"{utt_id} done!") print(f"{utt_id} done!")
# Randomly generate numbers of 0 ~ 0.2, 256 is the dim of spk_emb
random_spk_emb = np.random.rand(256) * 0.2 # generate 5 random_spk_emb
random_spk_emb = paddle.to_tensor(random_spk_emb, dtype='float32') for i in range(5):
utt_id = "random_spk_emb" random_spk_emb = gen_random_embed(args.use_ecapa)
with paddle.no_grad(): utt_id = "random_spk_emb"
wav = voc_inference(am_inference(phone_ids, spk_emb=random_spk_emb)) with paddle.no_grad():
sf.write( wav = voc_inference(am_inference(phone_ids, spk_emb=random_spk_emb))
str(output_dir / (utt_id + ".wav")), sf.write(
wav.numpy(), str(output_dir / (utt_id + "_" + str(i) + ".wav")),
samplerate=am_config.fs) wav.numpy(),
samplerate=am_config.fs)
print(f"{utt_id} done!") print(f"{utt_id} done!")
...@@ -171,13 +197,15 @@ def parse_args(): ...@@ -171,13 +197,15 @@ def parse_args():
type=str, type=str,
default="每当你觉得,想要批评什么人的时候,你切要记着,这个世界上的人,并非都具备你禀有的条件。", default="每当你觉得,想要批评什么人的时候,你切要记着,这个世界上的人,并非都具备你禀有的条件。",
help="text to synthesize, a line") help="text to synthesize, a line")
parser.add_argument( parser.add_argument(
"--ge2e_params_path", type=str, help="ge2e params path.") "--ge2e_params_path", type=str, help="ge2e params path.")
parser.add_argument(
"--use_ecapa",
type=str2bool,
default=False,
help="whether to use ECAPA-TDNN as speaker encoder.")
parser.add_argument( parser.add_argument(
"--ngpu", type=int, default=1, help="if ngpu=0, use cpu.") "--ngpu", type=int, default=1, help="if ngpu=0, use cpu.")
parser.add_argument( parser.add_argument(
"--input-dir", "--input-dir",
type=str, type=str,
......
from paddlespeech.t2s.frontend.g2pw.onnx_api import G2PWOnnxConverter from .onnx_api import G2PWOnnxConverter
...@@ -15,6 +15,10 @@ ...@@ -15,6 +15,10 @@
Credits Credits
This code is modified from https://github.com/GitYCC/g2pW This code is modified from https://github.com/GitYCC/g2pW
""" """
from typing import Dict
from typing import List
from typing import Tuple
import numpy as np import numpy as np
from paddlespeech.t2s.frontend.g2pw.utils import tokenize_and_map from paddlespeech.t2s.frontend.g2pw.utils import tokenize_and_map
...@@ -23,22 +27,17 @@ ANCHOR_CHAR = '▁' ...@@ -23,22 +27,17 @@ ANCHOR_CHAR = '▁'
def prepare_onnx_input(tokenizer, def prepare_onnx_input(tokenizer,
labels, labels: List[str],
char2phonemes, char2phonemes: Dict[str, List[int]],
chars, chars: List[str],
texts, texts: List[str],
query_ids, query_ids: List[int],
phonemes=None, use_mask: bool=False,
pos_tags=None, window_size: int=None,
use_mask=False, max_len: int=512) -> Dict[str, np.array]:
use_char_phoneme=False,
use_pos=False,
window_size=None,
max_len=512):
if window_size is not None: if window_size is not None:
truncated_texts, truncated_query_ids = _truncate_texts(window_size, truncated_texts, truncated_query_ids = _truncate_texts(
texts, query_ids) window_size=window_size, texts=texts, query_ids=query_ids)
input_ids = [] input_ids = []
token_type_ids = [] token_type_ids = []
attention_masks = [] attention_masks = []
...@@ -51,13 +50,19 @@ def prepare_onnx_input(tokenizer, ...@@ -51,13 +50,19 @@ def prepare_onnx_input(tokenizer,
query_id = (truncated_query_ids if window_size else query_ids)[idx] query_id = (truncated_query_ids if window_size else query_ids)[idx]
try: try:
tokens, text2token, token2text = tokenize_and_map(tokenizer, text) tokens, text2token, token2text = tokenize_and_map(
tokenizer=tokenizer, text=text)
except Exception: except Exception:
print(f'warning: text "{text}" is invalid') print(f'warning: text "{text}" is invalid')
return {} return {}
text, query_id, tokens, text2token, token2text = _truncate( text, query_id, tokens, text2token, token2text = _truncate(
max_len, text, query_id, tokens, text2token, token2text) max_len=max_len,
text=text,
query_id=query_id,
tokens=tokens,
text2token=text2token,
token2text=token2text)
processed_tokens = ['[CLS]'] + tokens + ['[SEP]'] processed_tokens = ['[CLS]'] + tokens + ['[SEP]']
...@@ -81,17 +86,18 @@ def prepare_onnx_input(tokenizer, ...@@ -81,17 +86,18 @@ def prepare_onnx_input(tokenizer,
position_ids.append(position_id) position_ids.append(position_id)
outputs = { outputs = {
'input_ids': np.array(input_ids), 'input_ids': np.array(input_ids).astype(np.int64),
'token_type_ids': np.array(token_type_ids), 'token_type_ids': np.array(token_type_ids).astype(np.int64),
'attention_masks': np.array(attention_masks), 'attention_masks': np.array(attention_masks).astype(np.int64),
'phoneme_masks': np.array(phoneme_masks).astype(np.float32), 'phoneme_masks': np.array(phoneme_masks).astype(np.float32),
'char_ids': np.array(char_ids), 'char_ids': np.array(char_ids).astype(np.int64),
'position_ids': np.array(position_ids), 'position_ids': np.array(position_ids).astype(np.int64),
} }
return outputs return outputs
def _truncate_texts(window_size, texts, query_ids): def _truncate_texts(window_size: int, texts: List[str],
query_ids: List[int]) -> Tuple[List[str], List[int]]:
truncated_texts = [] truncated_texts = []
truncated_query_ids = [] truncated_query_ids = []
for text, query_id in zip(texts, query_ids): for text, query_id in zip(texts, query_ids):
...@@ -105,7 +111,12 @@ def _truncate_texts(window_size, texts, query_ids): ...@@ -105,7 +111,12 @@ def _truncate_texts(window_size, texts, query_ids):
return truncated_texts, truncated_query_ids return truncated_texts, truncated_query_ids
def _truncate(max_len, text, query_id, tokens, text2token, token2text): def _truncate(max_len: int,
text: str,
query_id: int,
tokens: List[str],
text2token: List[int],
token2text: List[Tuple[int]]):
truncate_len = max_len - 2 truncate_len = max_len - 2
if len(tokens) <= truncate_len: if len(tokens) <= truncate_len:
return (text, query_id, tokens, text2token, token2text) return (text, query_id, tokens, text2token, token2text)
...@@ -132,18 +143,8 @@ def _truncate(max_len, text, query_id, tokens, text2token, token2text): ...@@ -132,18 +143,8 @@ def _truncate(max_len, text, query_id, tokens, text2token, token2text):
], [(s - start, e - start) for s, e in token2text[token_start:token_end]]) ], [(s - start, e - start) for s, e in token2text[token_start:token_end]])
def prepare_data(sent_path, lb_path=None): def get_phoneme_labels(polyphonic_chars: List[List[str]]
raw_texts = open(sent_path).read().rstrip().split('\n') ) -> Tuple[List[str], Dict[str, List[int]]]:
query_ids = [raw.index(ANCHOR_CHAR) for raw in raw_texts]
texts = [raw.replace(ANCHOR_CHAR, '') for raw in raw_texts]
if lb_path is None:
return texts, query_ids
else:
phonemes = open(lb_path).read().rstrip().split('\n')
return texts, query_ids, phonemes
def get_phoneme_labels(polyphonic_chars):
labels = sorted(list(set([phoneme for char, phoneme in polyphonic_chars]))) labels = sorted(list(set([phoneme for char, phoneme in polyphonic_chars])))
char2phonemes = {} char2phonemes = {}
for char, phoneme in polyphonic_chars: for char, phoneme in polyphonic_chars:
...@@ -153,7 +154,8 @@ def get_phoneme_labels(polyphonic_chars): ...@@ -153,7 +154,8 @@ def get_phoneme_labels(polyphonic_chars):
return labels, char2phonemes return labels, char2phonemes
def get_char_phoneme_labels(polyphonic_chars): def get_char_phoneme_labels(polyphonic_chars: List[List[str]]
) -> Tuple[List[str], Dict[str, List[int]]]:
labels = sorted( labels = sorted(
list(set([f'{char} {phoneme}' for char, phoneme in polyphonic_chars]))) list(set([f'{char} {phoneme}' for char, phoneme in polyphonic_chars])))
char2phonemes = {} char2phonemes = {}
......
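An illustration, hedged because part of the loop body is collapsed in this diff, of what get_phoneme_labels() returns for a toy polyphonic character table: labels is the sorted set of candidate phonemes, and char2phonemes maps each character to the indices of its candidates inside labels (this matches the upstream g2pW implementation the module is adapted from).

polyphonic_chars = [['行', 'xing2'], ['行', 'hang2'], ['长', 'chang2'], ['长', 'zhang3']]
labels, char2phonemes = get_phoneme_labels(polyphonic_chars=polyphonic_chars)
print(labels)          # ['chang2', 'hang2', 'xing2', 'zhang3']
print(char2phonemes)   # {'行': [2, 1], '长': [0, 3]}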
...@@ -17,6 +17,10 @@ Credits ...@@ -17,6 +17,10 @@ Credits
""" """
import json import json
import os import os
from typing import Any
from typing import Dict
from typing import List
from typing import Tuple
import numpy as np import numpy as np
import onnxruntime import onnxruntime
...@@ -31,10 +35,14 @@ from paddlespeech.t2s.frontend.g2pw.dataset import get_char_phoneme_labels ...@@ -31,10 +35,14 @@ from paddlespeech.t2s.frontend.g2pw.dataset import get_char_phoneme_labels
from paddlespeech.t2s.frontend.g2pw.dataset import get_phoneme_labels from paddlespeech.t2s.frontend.g2pw.dataset import get_phoneme_labels
from paddlespeech.t2s.frontend.g2pw.dataset import prepare_onnx_input from paddlespeech.t2s.frontend.g2pw.dataset import prepare_onnx_input
from paddlespeech.t2s.frontend.g2pw.utils import load_config from paddlespeech.t2s.frontend.g2pw.utils import load_config
from paddlespeech.t2s.frontend.zh_normalization.char_convert import tranditional_to_simplified
from paddlespeech.utils.env import MODEL_HOME from paddlespeech.utils.env import MODEL_HOME
model_version = '1.1'
def predict(session, onnx_input, labels):
def predict(session, onnx_input: Dict[str, Any],
labels: List[str]) -> Tuple[List[str], List[float]]:
all_preds = [] all_preds = []
all_confidences = [] all_confidences = []
probs = session.run([], { probs = session.run([], {
...@@ -58,56 +66,75 @@ def predict(session, onnx_input, labels): ...@@ -58,56 +66,75 @@ def predict(session, onnx_input, labels):
class G2PWOnnxConverter: class G2PWOnnxConverter:
def __init__(self, def __init__(self,
model_dir=MODEL_HOME, model_dir: os.PathLike=MODEL_HOME,
style='bopomofo', style: str='bopomofo',
model_source=None, model_source: str=None,
enable_non_tradional_chinese=False): enable_non_tradional_chinese: bool=False):
if not os.path.exists(os.path.join(model_dir, 'G2PWModel/g2pW.onnx')): uncompress_path = download_and_decompress(
uncompress_path = download_and_decompress( g2pw_onnx_models['G2PWModel'][model_version], model_dir)
g2pw_onnx_models['G2PWModel']['1.0'], model_dir)
sess_options = onnxruntime.SessionOptions() sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_options.execution_mode = onnxruntime.ExecutionMode.ORT_SEQUENTIAL sess_options.execution_mode = onnxruntime.ExecutionMode.ORT_SEQUENTIAL
sess_options.intra_op_num_threads = 2 sess_options.intra_op_num_threads = 2
self.session_g2pW = onnxruntime.InferenceSession( self.session_g2pW = onnxruntime.InferenceSession(
os.path.join(model_dir, 'G2PWModel/g2pW.onnx'), os.path.join(uncompress_path, 'g2pW.onnx'),
sess_options=sess_options) sess_options=sess_options)
self.config = load_config( self.config = load_config(
os.path.join(model_dir, 'G2PWModel/config.py'), use_default=True) config_path=os.path.join(uncompress_path, 'config.py'),
use_default=True)
self.model_source = model_source if model_source else self.config.model_source self.model_source = model_source if model_source else self.config.model_source
self.enable_opencc = enable_non_tradional_chinese self.enable_opencc = enable_non_tradional_chinese
self.tokenizer = BertTokenizer.from_pretrained(self.config.model_source) self.tokenizer = BertTokenizer.from_pretrained(self.config.model_source)
polyphonic_chars_path = os.path.join(model_dir, polyphonic_chars_path = os.path.join(uncompress_path,
'G2PWModel/POLYPHONIC_CHARS.txt') 'POLYPHONIC_CHARS.txt')
monophonic_chars_path = os.path.join(model_dir, monophonic_chars_path = os.path.join(uncompress_path,
'G2PWModel/MONOPHONIC_CHARS.txt') 'MONOPHONIC_CHARS.txt')
self.polyphonic_chars = [ self.polyphonic_chars = [
line.split('\t') line.split('\t')
for line in open(polyphonic_chars_path, encoding='utf-8').read() for line in open(polyphonic_chars_path, encoding='utf-8').read()
.strip().split('\n') .strip().split('\n')
] ]
self.non_polyphonic = {
'一', '不', '和', '咋', '嗲', '剖', '差', '攢', '倒', '難', '奔', '勁', '拗',
'肖', '瘙', '誒', '泊'
}
self.non_monophonic = {'似', '攢'}
self.monophonic_chars = [ self.monophonic_chars = [
line.split('\t') line.split('\t')
for line in open(monophonic_chars_path, encoding='utf-8').read() for line in open(monophonic_chars_path, encoding='utf-8').read()
.strip().split('\n') .strip().split('\n')
] ]
self.labels, self.char2phonemes = get_char_phoneme_labels( self.labels, self.char2phonemes = get_char_phoneme_labels(
self.polyphonic_chars polyphonic_chars=self.polyphonic_chars
) if self.config.use_char_phoneme else get_phoneme_labels( ) if self.config.use_char_phoneme else get_phoneme_labels(
self.polyphonic_chars) polyphonic_chars=self.polyphonic_chars)
self.chars = sorted(list(self.char2phonemes.keys())) self.chars = sorted(list(self.char2phonemes.keys()))
self.polyphonic_chars_new = set(self.chars)
for char in self.non_polyphonic:
if char in self.polyphonic_chars_new:
self.polyphonic_chars_new.remove(char)
self.monophonic_chars_dict = {
char: phoneme
for char, phoneme in self.monophonic_chars
}
for char in self.non_monophonic:
if char in self.monophonic_chars_dict:
self.monophonic_chars_dict.pop(char)
self.pos_tags = [ self.pos_tags = [
'UNK', 'A', 'C', 'D', 'I', 'N', 'P', 'T', 'V', 'DE', 'SHI' 'UNK', 'A', 'C', 'D', 'I', 'N', 'P', 'T', 'V', 'DE', 'SHI'
] ]
with open( with open(
os.path.join(model_dir, os.path.join(uncompress_path,
'G2PWModel/bopomofo_to_pinyin_wo_tune_dict.json'), 'bopomofo_to_pinyin_wo_tune_dict.json'),
'r', 'r',
encoding='utf-8') as fr: encoding='utf-8') as fr:
self.bopomofo_convert_dict = json.load(fr) self.bopomofo_convert_dict = json.load(fr)
...@@ -117,7 +144,7 @@ class G2PWOnnxConverter: ...@@ -117,7 +144,7 @@ class G2PWOnnxConverter:
}[style] }[style]
with open( with open(
os.path.join(model_dir, 'G2PWModel/char_bopomofo_dict.json'), os.path.join(uncompress_path, 'char_bopomofo_dict.json'),
'r', 'r',
encoding='utf-8') as fr: encoding='utf-8') as fr:
self.char_bopomofo_dict = json.load(fr) self.char_bopomofo_dict = json.load(fr)
...@@ -125,7 +152,7 @@ class G2PWOnnxConverter: ...@@ -125,7 +152,7 @@ class G2PWOnnxConverter:
if self.enable_opencc: if self.enable_opencc:
self.cc = OpenCC('s2tw') self.cc = OpenCC('s2tw')
def _convert_bopomofo_to_pinyin(self, bopomofo): def _convert_bopomofo_to_pinyin(self, bopomofo: str) -> str:
tone = bopomofo[-1] tone = bopomofo[-1]
assert tone in '12345' assert tone in '12345'
component = self.bopomofo_convert_dict.get(bopomofo[:-1]) component = self.bopomofo_convert_dict.get(bopomofo[:-1])
...@@ -135,7 +162,7 @@ class G2PWOnnxConverter: ...@@ -135,7 +162,7 @@ class G2PWOnnxConverter:
print(f'Warning: "{bopomofo}" cannot convert to pinyin') print(f'Warning: "{bopomofo}" cannot convert to pinyin')
return None return None
def __call__(self, sentences): def __call__(self, sentences: List[str]) -> List[List[str]]:
if isinstance(sentences, str): if isinstance(sentences, str):
sentences = [sentences] sentences = [sentences]
...@@ -148,23 +175,25 @@ class G2PWOnnxConverter: ...@@ -148,23 +175,25 @@ class G2PWOnnxConverter:
sentences = translated_sentences sentences = translated_sentences
texts, query_ids, sent_ids, partial_results = self._prepare_data( texts, query_ids, sent_ids, partial_results = self._prepare_data(
sentences) sentences=sentences)
if len(texts) == 0: if len(texts) == 0:
# sentences no polyphonic words # sentences no polyphonic words
return partial_results return partial_results
onnx_input = prepare_onnx_input( onnx_input = prepare_onnx_input(
self.tokenizer, tokenizer=self.tokenizer,
self.labels, labels=self.labels,
self.char2phonemes, char2phonemes=self.char2phonemes,
self.chars, chars=self.chars,
texts, texts=texts,
query_ids, query_ids=query_ids,
use_mask=self.config.use_mask, use_mask=self.config.use_mask,
use_char_phoneme=self.config.use_char_phoneme,
window_size=None) window_size=None)
preds, confidences = predict(self.session_g2pW, onnx_input, self.labels) preds, confidences = predict(
session=self.session_g2pW,
onnx_input=onnx_input,
labels=self.labels)
if self.config.use_char_phoneme: if self.config.use_char_phoneme:
preds = [pred.split(' ')[1] for pred in preds] preds = [pred.split(' ')[1] for pred in preds]
...@@ -174,26 +203,28 @@ class G2PWOnnxConverter: ...@@ -174,26 +203,28 @@ class G2PWOnnxConverter:
return results return results
def _prepare_data(self, sentences): def _prepare_data(
polyphonic_chars = set(self.chars) self, sentences: List[str]
monophonic_chars_dict = { ) -> Tuple[List[str], List[int], List[int], List[List[str]]]:
char: phoneme
for char, phoneme in self.monophonic_chars
}
texts, query_ids, sent_ids, partial_results = [], [], [], [] texts, query_ids, sent_ids, partial_results = [], [], [], []
for sent_id, sent in enumerate(sentences): for sent_id, sent in enumerate(sentences):
            pypinyin_result = pinyin(sent, style=Style.TONE3)            # pypinyin works better for Simplified Chinese than for Traditional Chinese
sent_s = tranditional_to_simplified(sent)
pypinyin_result = pinyin(sent_s, style=Style.TONE3)
partial_result = [None] * len(sent) partial_result = [None] * len(sent)
for i, char in enumerate(sent): for i, char in enumerate(sent):
if char in polyphonic_chars: if char in self.polyphonic_chars_new:
texts.append(sent) texts.append(sent)
query_ids.append(i) query_ids.append(i)
sent_ids.append(sent_id) sent_ids.append(sent_id)
elif char in monophonic_chars_dict: elif char in self.monophonic_chars_dict:
partial_result[i] = self.style_convert_func( partial_result[i] = self.style_convert_func(
monophonic_chars_dict[char]) self.monophonic_chars_dict[char])
elif char in self.char_bopomofo_dict: elif char in self.char_bopomofo_dict:
partial_result[i] = pypinyin_result[i][0] partial_result[i] = pypinyin_result[i][0]
# partial_result[i] = self.style_convert_func(self.char_bopomofo_dict[char][0]) # partial_result[i] = self.style_convert_func(self.char_bopomofo_dict[char][0])
else:
partial_result[i] = pypinyin_result[i][0]
partial_results.append(partial_result) partial_results.append(partial_result)
return texts, query_ids, sent_ids, partial_results return texts, query_ids, sent_ids, partial_results
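A hedged usage sketch for the converter above, mirroring how the PaddleSpeech zh_frontend drives it. The G2PWModel files are downloaded on first use, and the exact pinyin output may vary with the model version.

conv = G2PWOnnxConverter(style='pinyin', enable_non_tradional_chinese=True)
print(conv(['我爱长城']))   # e.g. [['wo3', 'ai4', 'chang2', 'cheng2']]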
...@@ -15,10 +15,11 @@ ...@@ -15,10 +15,11 @@
Credits Credits
This code is modified from https://github.com/GitYCC/g2pW This code is modified from https://github.com/GitYCC/g2pW
""" """
import os
import re import re
def wordize_and_map(text): def wordize_and_map(text: str):
words = [] words = []
index_map_from_text_to_word = [] index_map_from_text_to_word = []
index_map_from_word_to_text = [] index_map_from_word_to_text = []
...@@ -54,8 +55,8 @@ def wordize_and_map(text): ...@@ -54,8 +55,8 @@ def wordize_and_map(text):
return words, index_map_from_text_to_word, index_map_from_word_to_text return words, index_map_from_text_to_word, index_map_from_word_to_text
def tokenize_and_map(tokenizer, text): def tokenize_and_map(tokenizer, text: str):
words, text2word, word2text = wordize_and_map(text) words, text2word, word2text = wordize_and_map(text=text)
tokens = [] tokens = []
index_map_from_token_to_text = [] index_map_from_token_to_text = []
...@@ -82,7 +83,7 @@ def tokenize_and_map(tokenizer, text): ...@@ -82,7 +83,7 @@ def tokenize_and_map(tokenizer, text):
return tokens, index_map_from_text_to_token, index_map_from_token_to_text return tokens, index_map_from_text_to_token, index_map_from_token_to_text
def _load_config(config_path): def _load_config(config_path: os.PathLike):
import importlib.util import importlib.util
spec = importlib.util.spec_from_file_location('__init__', config_path) spec = importlib.util.spec_from_file_location('__init__', config_path)
config = importlib.util.module_from_spec(spec) config = importlib.util.module_from_spec(spec)
...@@ -130,7 +131,7 @@ default_config_dict = { ...@@ -130,7 +131,7 @@ default_config_dict = {
} }
def load_config(config_path, use_default=False): def load_config(config_path: os.PathLike, use_default: bool=False):
config = _load_config(config_path) config = _load_config(config_path)
if use_default: if use_default:
for attr, val in default_config_dict.items(): for attr, val in default_config_dict.items():
......
...@@ -60,8 +60,103 @@ class MixFrontend(): ...@@ -60,8 +60,103 @@ class MixFrontend():
else: else:
return False return False
def is_end(self, before_char, after_char) -> bool:
flag = 0
for char in (before_char, after_char):
if self.is_alphabet(char) or char == " ":
flag += 1
if flag == 2:
return True
else:
return False
def _replace(self, text: str) -> str:
new_text = ""
# get "." indexs
point = "."
point_indexs = []
index = -1
for i in range(text.count(point)):
index = text.find(".", index + 1, len(text))
point_indexs.append(index)
# replace "." -> "。" when English sentence ending
if len(point_indexs) == 0:
new_text = text
elif len(point_indexs) == 1:
point_index = point_indexs[0]
if point_index == 0 or point_index == len(text) - 1:
new_text = text
else:
if not self.is_end(text[point_index - 1], text[point_index +
1]):
new_text = text
else:
new_text = text[:point_index] + "。" + text[point_index + 1:]
elif len(point_indexs) == 2:
first_index = point_indexs[0]
end_index = point_indexs[1]
# first
if first_index != 0:
if not self.is_end(text[first_index - 1], text[first_index +
1]):
new_text += (text[:first_index] + ".")
else:
new_text += (text[:first_index] + "。")
else:
new_text += "."
# last
if end_index != len(text) - 1:
if not self.is_end(text[end_index - 1], text[end_index + 1]):
new_text += text[point_indexs[-2] + 1:]
else:
new_text += (text[point_indexs[-2] + 1:end_index] + "。" +
text[end_index + 1:])
else:
new_text += "."
else:
first_index = point_indexs[0]
end_index = point_indexs[-1]
# first
if first_index != 0:
if not self.is_end(text[first_index - 1], text[first_index +
1]):
new_text += (text[:first_index] + ".")
else:
new_text += (text[:first_index] + "。")
else:
new_text += "."
# middle
for j in range(1, len(point_indexs) - 1):
point_index = point_indexs[j]
if not self.is_end(text[point_index - 1], text[point_index +
1]):
new_text += (
text[point_indexs[j - 1] + 1:point_index] + ".")
else:
new_text += (
text[point_indexs[j - 1] + 1:point_index] + "。")
# last
if end_index != len(text) - 1:
if not self.is_end(text[end_index - 1], text[end_index + 1]):
new_text += text[point_indexs[-2] + 1:]
else:
new_text += (text[point_indexs[-2] + 1:end_index] + "。" +
text[end_index + 1:])
else:
new_text += "."
return new_text
def _split(self, text: str) -> List[str]: def _split(self, text: str) -> List[str]:
text = re.sub(r'[《》【】<=>{}()()#&@“”^_|…\\]', '', text) text = re.sub(r'[《》【】<=>{}()()#&@“”^_|…\\]', '', text)
        # replace the English full stop "." with "。" so the sentence splitter below can cut there
text = self._replace(text)
text = self.SENTENCE_SPLITOR.sub(r'\1\n', text) text = self.SENTENCE_SPLITOR.sub(r'\1\n', text)
text = text.strip() text = text.strip()
sentences = [sentence.strip() for sentence in re.split(r'\n+', text)] sentences = [sentence.strip() for sentence in re.split(r'\n+', text)]
...@@ -77,9 +172,11 @@ class MixFrontend(): ...@@ -77,9 +172,11 @@ class MixFrontend():
temp_seg = "" temp_seg = ""
temp_lang = "" temp_lang = ""
# Determine the type of each character. type: blank, chinese, alphabet, number, unk. # Determine the type of each character. type: blank, chinese, alphabet, number, unk and point.
for ch in text: for ch in text:
if self.is_chinese(ch): if ch == ".":
types.append("point")
elif self.is_chinese(ch):
types.append("zh") types.append("zh")
elif self.is_alphabet(ch): elif self.is_alphabet(ch):
types.append("en") types.append("en")
...@@ -96,21 +193,26 @@ class MixFrontend(): ...@@ -96,21 +193,26 @@ class MixFrontend():
# find the first char of the seg # find the first char of the seg
if flag == 0: if flag == 0:
if types[i] != "unk" and types[i] != "blank": # 首个字符是中文,英文或者数字
if types[i] == "zh" or types[i] == "en" or types[i] == "num":
temp_seg += text[i] temp_seg += text[i]
temp_lang = types[i] temp_lang = types[i]
flag = 1 flag = 1
else: else:
if types[i] == temp_lang or types[i] == "num": # 数字和小数点均与前面的字符合并,类型属于前面一个字符的类型
if types[i] == temp_lang or types[i] == "num" or types[
i] == "point":
temp_seg += text[i] temp_seg += text[i]
elif temp_lang == "num" and types[i] != "unk": # 数字与后面的任意字符都拼接
elif temp_lang == "num":
temp_seg += text[i] temp_seg += text[i]
if types[i] == "zh" or types[i] == "en": if types[i] == "zh" or types[i] == "en":
temp_lang = types[i] temp_lang = types[i]
elif temp_lang == "en" and types[i] == "blank": # 如果是空格则与前面字符拼接
elif types[i] == "blank":
temp_seg += text[i] temp_seg += text[i]
elif types[i] == "unk": elif types[i] == "unk":
...@@ -119,7 +221,7 @@ class MixFrontend(): ...@@ -119,7 +221,7 @@ class MixFrontend():
else: else:
segments.append((temp_seg, temp_lang)) segments.append((temp_seg, temp_lang))
if types[i] != "unk" and types[i] != "blank": if types[i] == "zh" or types[i] == "en":
temp_seg = text[i] temp_seg = text[i]
temp_lang = types[i] temp_lang = types[i]
flag = 1 flag = 1
...@@ -134,7 +236,7 @@ class MixFrontend(): ...@@ -134,7 +236,7 @@ class MixFrontend():
def get_input_ids(self, def get_input_ids(self,
sentence: str, sentence: str,
merge_sentences: bool=True, merge_sentences: bool=False,
get_tone_ids: bool=False, get_tone_ids: bool=False,
add_sp: bool=True, add_sp: bool=True,
to_tensor: bool=True) -> Dict[str, List[paddle.Tensor]]: to_tensor: bool=True) -> Dict[str, List[paddle.Tensor]]:
...@@ -142,28 +244,29 @@ class MixFrontend(): ...@@ -142,28 +244,29 @@ class MixFrontend():
sentences = self._split(sentence) sentences = self._split(sentence)
phones_list = [] phones_list = []
result = {} result = {}
for text in sentences: for text in sentences:
phones_seg = [] phones_seg = []
segments = self._distinguish(text) segments = self._distinguish(text)
for seg in segments: for seg in segments:
content = seg[0] content = seg[0]
lang = seg[1] lang = seg[1]
if lang == "zh": if content != '':
input_ids = self.zh_frontend.get_input_ids( if lang == "en":
content, input_ids = self.en_frontend.get_input_ids(
merge_sentences=True, content, merge_sentences=True, to_tensor=to_tensor)
get_tone_ids=get_tone_ids, else:
to_tensor=to_tensor) input_ids = self.zh_frontend.get_input_ids(
content,
elif lang == "en": merge_sentences=True,
input_ids = self.en_frontend.get_input_ids( get_tone_ids=get_tone_ids,
content, merge_sentences=True, to_tensor=to_tensor) to_tensor=to_tensor)
phones_seg.append(input_ids["phone_ids"][0]) phones_seg.append(input_ids["phone_ids"][0])
if add_sp: if add_sp:
phones_seg.append(self.sp_id_tensor) phones_seg.append(self.sp_id_tensor)
if phones_seg == []:
phones_seg.append(self.sp_id_tensor)
phones = paddle.concat(phones_seg) phones = paddle.concat(phones_seg)
phones_list.append(phones) phones_list.append(phones)
......
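An illustrative, hand-traced example of the new '.' handling above, derived from reading _replace()/is_end() rather than from running them, and assuming a constructed MixFrontend instance mf: a full stop with a letter before it and a space after it is treated as an English sentence ending and becomes '。', while the decimal point in "3.5" is left alone.

# hand-traced expectation; mf is a hypothetical MixFrontend instance
print(mf._replace("I like it. 今天温度是3.5度"))
# -> "I like it。 今天温度是3.5度"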
...@@ -23,4 +23,27 @@ polyphonic: ...@@ -23,4 +23,27 @@ polyphonic:
鸭绿江: ['ya1','lu4','jiang1'] 鸭绿江: ['ya1','lu4','jiang1']
撒切尔: ['sa4','qie4','er3'] 撒切尔: ['sa4','qie4','er3']
比比皆是: ['bi3','bi3','jie1','shi4'] 比比皆是: ['bi3','bi3','jie1','shi4']
身无长物: ['shen1','wu2','chang2','wu4'] 身无长物: ['shen1','wu2','chang2','wu4']
\ No newline at end of file 手里: ['shou2','li3']
关卡: ['guan1','qia3']
怀揣: ['huai2','chuai1']
挑剔: ['tiao1','ti4']
供称: ['gong4','cheng1']
作坊: ['zuo1', 'fang5']
中医: ['zhong1','yi1']
嚷嚷: ['rang1','rang5']
商厦: ['shang1','sha4']
大厦: ['da4','sha4']
刹车: ['sha1','che1']
嘚瑟: ['de4','se5']
朝鲜: ['chao2','xian3']
阿房宫: ['e1','pang2','gong1']
阿胶: ['e1','jiao1']
咖喱: ['ga1','li5']
时分: ['shi2','fen1']
蚌埠: ['beng4','bu4']
驯服: ['xun4','fu2']
幸免于难: ['xing4','mian3','yu2','nan4']
恶行: ['e4','xing2']
: ['ai4']
...@@ -30,7 +30,7 @@ class ToneSandhi(): ...@@ -30,7 +30,7 @@ class ToneSandhi():
'蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻',
'舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂',
'胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆',
'老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', '戏弄', '将军', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂',
'精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿',
'窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台',
'码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算',
...@@ -41,30 +41,31 @@ class ToneSandhi(): ...@@ -41,30 +41,31 @@ class ToneSandhi():
'棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事',
'木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾',
'收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼',
'抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打扮', '打听', '打发', '扎实', '扁担',
'扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', '戒指', '懒得', '意识', '意思', '悟性', '怪物', '思量', '怎么', '念头', '念叨', '别人',
'念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', '干事',
'干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', '屁股',
'屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', '实在',
'实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', '官司', '学问', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', '姑娘', '姐夫',
'姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', '大意', '大夫',
'大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', '嘱咐', '嘟囔',
'嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', '咳嗽', '和尚',
'咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', '叫唤', '口袋',
'叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', '功夫', '力气',
'功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', '凑合', '凉快',
'凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', '佩服', '作坊',
'佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', '交情', '云彩',
'交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', '不由', '下水',
'不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', '父亲', '母亲', '咕噜',
'父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', '幸福', '熟悉', '计划',
'幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', '凤凰', '拖沓', '寒碜',
'凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', '扫把', '惦记'
'扫把', '惦记'
} }
self.must_not_neural_tone_words = { self.must_not_neural_tone_words = {
"男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎", '男子', '女子', '分子', '原子', '量子', '莲子', '石子', '瓜子', '电子', '人人', '虎虎',
"幺幺" '幺幺', '干嘛', '学子', '哈哈', '数数', '袅袅', '局地', '以下', '娃哈哈', '花花草草', '留得',
'耕地', '想想', '熙熙', '攘攘', '卵子', '死死', '冉冉', '恳恳', '佼佼', '吵吵', '打打',
'考考', '整整', '莘莘'
} }
self.punc = ":,;。?!“”‘’':,;.?!" self.punc = ":,;。?!“”‘’':,;.?!"
@@ -75,27 +76,24 @@ class ToneSandhi():
    # finals: ['ia1', 'i3']
    def _neural_sandhi(self, word: str, pos: str,
                       finals: List[str]) -> List[str]:
+       if word in self.must_not_neural_tone_words:
+           return finals
        # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺
        for j, item in enumerate(word):
-           if j - 1 >= 0 and item == word[j - 1] and pos[0] in {
-                   "n", "v", "a"
-           } and word not in self.must_not_neural_tone_words:
+           if j - 1 >= 0 and item == word[j - 1] and pos[0] in {"n", "v", "a"}:
                finals[j] = finals[j][:-1] + "5"
        ge_idx = word.find("个")
-       if len(word) >= 1 and word[-1] in "吧呢哈啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶":
+       if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒滴哩哟喽啰耶喔诶":
            finals[-1] = finals[-1][:-1] + "5"
        elif len(word) >= 1 and word[-1] in "的地得":
            finals[-1] = finals[-1][:-1] + "5"
        # e.g. 走了, 看着, 去过
        elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
            finals[-1] = finals[-1][:-1] + "5"
-       elif len(word) > 1 and word[-1] in "们子" and pos in {
-               "r", "n"
-       } and word not in self.must_not_neural_tone_words:
+       elif len(word) > 1 and word[-1] in "们子" and pos in {"r", "n"}:
            finals[-1] = finals[-1][:-1] + "5"
-       # e.g. 桌上, 地下, 家里
+       # e.g. 桌上, 地下
        elif len(word) > 1 and word[-1] in "上下" and pos in {"s", "l", "f"}:
            finals[-1] = finals[-1][:-1] + "5"
        # e.g. 上来, 下去
        elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
...@@ -147,7 +145,7 @@ class ToneSandhi(): ...@@ -147,7 +145,7 @@ class ToneSandhi():
for i, char in enumerate(word): for i, char in enumerate(word):
if char == "一" and i + 1 < len(word): if char == "一" and i + 1 < len(word):
# "一" before tone4 should be yi2, e.g. 一段 # "一" before tone4 should be yi2, e.g. 一段
if finals[i + 1][-1] == "4": if finals[i + 1][-1] in {'4', '5'}:
finals[i] = finals[i][:-1] + "2" finals[i] = finals[i][:-1] + "2"
# "一" before non-tone4 should be yi4, e.g. 一天 # "一" before non-tone4 should be yi4, e.g. 一天
else: else:
...@@ -170,6 +168,7 @@ class ToneSandhi(): ...@@ -170,6 +168,7 @@ class ToneSandhi():
return new_word_list return new_word_list
def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: def _three_sandhi(self, word: str, finals: List[str]) -> List[str]:
if len(word) == 2 and self._all_tone_three(finals): if len(word) == 2 and self._all_tone_three(finals):
finals[0] = finals[0][:-1] + "2" finals[0] = finals[0][:-1] + "2"
elif len(word) == 3: elif len(word) == 3:
...@@ -239,7 +238,12 @@ class ToneSandhi(): ...@@ -239,7 +238,12 @@ class ToneSandhi():
for i, (word, pos) in enumerate(seg): for i, (word, pos) in enumerate(seg):
if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][ if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][
0] == seg[i + 1][0] and seg[i - 1][1] == "v": 0] == seg[i + 1][0] and seg[i - 1][1] == "v":
-               new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
+               if i - 1 < len(new_seg):
+                   new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
+               else:
+                   new_seg.append([word, pos])
+                   new_seg.append([seg[i + 1][0], pos])
else: else:
if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][ if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][
0] == word and pos == "v": 0] == word and pos == "v":
...@@ -342,6 +346,7 @@ class ToneSandhi(): ...@@ -342,6 +346,7 @@ class ToneSandhi():
def modified_tone(self, word: str, pos: str, def modified_tone(self, word: str, pos: str,
finals: List[str]) -> List[str]: finals: List[str]) -> List[str]:
finals = self._bu_sandhi(word, finals) finals = self._bu_sandhi(word, finals)
finals = self._yi_sandhi(word, finals) finals = self._yi_sandhi(word, finals)
finals = self._neural_sandhi(word, pos, finals) finals = self._neural_sandhi(word, pos, finals)
......
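The _yi_sandhi tweak above extends the rule so that "一" is read yi2 not only before a tone-4 syllable but also before a neutral tone (5). A toy rendering of that rule, simplified to operate on the finals directly rather than on the word string as the real code does:

# Toy simplification of the "一" sandhi rule changed above.
def yi_sandhi_toy(finals):
    finals = list(finals)
    for i, f in enumerate(finals):
        if f == "yi1" and i + 1 < len(finals):
            # yi2 before tone 4/5, yi4 otherwise
            finals[i] = "yi2" if finals[i + 1][-1] in {"4", "5"} else "yi4"
    return finals

print(yi_sandhi_toy(["yi1", "duan4"]))   # ['yi2', 'duan4']  (一段)
print(yi_sandhi_toy(["yi1", "tian1"]))   # ['yi4', 'tian1']  (一天)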
...@@ -84,6 +84,24 @@ class Frontend(): ...@@ -84,6 +84,24 @@ class Frontend():
self.tone_modifier = ToneSandhi() self.tone_modifier = ToneSandhi()
self.text_normalizer = TextNormalizer() self.text_normalizer = TextNormalizer()
self.punc = ":,;。?!“”‘’':,;.?!" self.punc = ":,;。?!“”‘’':,;.?!"
self.phrases_dict = {
'开户行': [['ka1i'], ['hu4'], ['hang2']],
'发卡行': [['fa4'], ['ka3'], ['hang2']],
'放款行': [['fa4ng'], ['kua3n'], ['hang2']],
'茧行': [['jia3n'], ['hang2']],
'行号': [['hang2'], ['ha4o']],
'各地': [['ge4'], ['di4']],
'借还款': [['jie4'], ['hua2n'], ['kua3n']],
'时间为': [['shi2'], ['jia1n'], ['we2i']],
'为准': [['we2i'], ['zhu3n']],
'色差': [['se4'], ['cha1']],
'嗲': [['dia3']],
'呗': [['bei5']],
'不': [['bu4']],
'咗': [['zuo5']],
'嘞': [['lei5']],
'掺和': [['chan1'], ['huo5']]
}
# g2p_model can be pypinyin and g2pM and g2pW # g2p_model can be pypinyin and g2pM and g2pW
self.g2p_model = g2p_model self.g2p_model = g2p_model
if self.g2p_model == "g2pM": if self.g2p_model == "g2pM":
...@@ -91,6 +109,8 @@ class Frontend(): ...@@ -91,6 +109,8 @@ class Frontend():
self.pinyin2phone = generate_lexicon( self.pinyin2phone = generate_lexicon(
with_tone=True, with_erhua=False) with_tone=True, with_erhua=False)
elif self.g2p_model == "g2pW": elif self.g2p_model == "g2pW":
# use pypinyin as backup for non polyphonic characters in g2pW
self._init_pypinyin()
self.corrector = Polyphonic() self.corrector = Polyphonic()
self.g2pM_model = G2pM() self.g2pM_model = G2pM()
self.g2pW_model = G2PWOnnxConverter( self.g2pW_model = G2PWOnnxConverter(
...@@ -99,8 +119,10 @@ class Frontend(): ...@@ -99,8 +119,10 @@ class Frontend():
with_tone=True, with_erhua=False) with_tone=True, with_erhua=False)
else: else:
-           self.__init__pypinyin()
+           self._init_pypinyin()
-       self.must_erhua = {"小院儿", "胡同儿", "范儿", "老汉儿", "撒欢儿", "寻老礼儿", "妥妥儿"}
+       self.must_erhua = {
+           "小院儿", "胡同儿", "范儿", "老汉儿", "撒欢儿", "寻老礼儿", "妥妥儿", "媳妇儿"
+       }
self.not_erhua = { self.not_erhua = {
"虐儿", "为儿", "护儿", "瞒儿", "救儿", "替儿", "有儿", "一儿", "我儿", "俺儿", "妻儿", "虐儿", "为儿", "护儿", "瞒儿", "救儿", "替儿", "有儿", "一儿", "我儿", "俺儿", "妻儿",
"拐儿", "聋儿", "乞儿", "患儿", "幼儿", "孤儿", "婴儿", "婴幼儿", "连体儿", "脑瘫儿", "拐儿", "聋儿", "乞儿", "患儿", "幼儿", "孤儿", "婴儿", "婴幼儿", "连体儿", "脑瘫儿",
...@@ -108,6 +130,7 @@ class Frontend(): ...@@ -108,6 +130,7 @@ class Frontend():
"孙儿", "侄孙儿", "女儿", "男儿", "红孩儿", "花儿", "虫儿", "马儿", "鸟儿", "猪儿", "猫儿", "孙儿", "侄孙儿", "女儿", "男儿", "红孩儿", "花儿", "虫儿", "马儿", "鸟儿", "猪儿", "猫儿",
"狗儿" "狗儿"
} }
self.vocab_phones = {} self.vocab_phones = {}
self.vocab_tones = {} self.vocab_tones = {}
if phone_vocab_path: if phone_vocab_path:
...@@ -121,20 +144,9 @@ class Frontend(): ...@@ -121,20 +144,9 @@ class Frontend():
for tone, id in tone_id: for tone, id in tone_id:
self.vocab_tones[tone] = int(id) self.vocab_tones[tone] = int(id)
-   def __init__pypinyin(self):
+   def _init_pypinyin(self):
        large_pinyin.load()
+       load_phrases_dict(self.phrases_dict)
-       load_phrases_dict({u'开户行': [[u'ka1i'], [u'hu4'], [u'hang2']]})
-       load_phrases_dict({u'发卡行': [[u'fa4'], [u'ka3'], [u'hang2']]})
-       load_phrases_dict({u'放款行': [[u'fa4ng'], [u'kua3n'], [u'hang2']]})
-       load_phrases_dict({u'茧行': [[u'jia3n'], [u'hang2']]})
-       load_phrases_dict({u'行号': [[u'hang2'], [u'ha4o']]})
-       load_phrases_dict({u'各地': [[u'ge4'], [u'di4']]})
-       load_phrases_dict({u'借还款': [[u'jie4'], [u'hua2n'], [u'kua3n']]})
-       load_phrases_dict({u'时间为': [[u'shi2'], [u'jia1n'], [u'we2i']]})
-       load_phrases_dict({u'为准': [[u'we2i'], [u'zhu3n']]})
-       load_phrases_dict({u'色差': [[u'se4'], [u'cha1']]})
# 调整字的拼音顺序 # 调整字的拼音顺序
load_single_dict({ord(u'地'): u'de,di4'}) load_single_dict({ord(u'地'): u'de,di4'})
...@@ -258,7 +270,6 @@ class Frontend(): ...@@ -258,7 +270,6 @@ class Frontend():
phones.append('sp') phones.append('sp')
if v and v not in self.punc: if v and v not in self.punc:
phones.append(v) phones.append(v)
phones_list.append(phones) phones_list.append(phones)
if merge_sentences: if merge_sentences:
merge_list = sum(phones_list, []) merge_list = sum(phones_list, [])
...@@ -275,6 +286,10 @@ class Frontend(): ...@@ -275,6 +286,10 @@ class Frontend():
finals: List[str], finals: List[str],
word: str, word: str,
pos: str) -> List[List[str]]: pos: str) -> List[List[str]]:
# fix er1
for i, phn in enumerate(finals):
if i == len(finals) - 1 and word[i] == "儿" and phn == 'er1':
finals[i] = 'er2'
if word not in self.must_erhua and (word in self.not_erhua or if word not in self.must_erhua and (word in self.not_erhua or
pos in {"a", "j", "nr"}): pos in {"a", "j", "nr"}):
return initials, finals return initials, finals
......
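_init_pypinyin now feeds the consolidated self.phrases_dict into pypinyin with a single load_phrases_dict call. A standalone sketch of the same mechanism (assumes pypinyin is installed; the phrase values keep the TONE2-style strings used above):

from pypinyin import Style, lazy_pinyin, load_phrases_dict, load_single_dict

# Same data shape as self.phrases_dict: {phrase: [[pinyin], [pinyin], ...]}
load_phrases_dict({'开户行': [['ka1i'], ['hu4'], ['hang2']]})
# Reorder the candidate readings of 地 so the neutral-tone "de" is tried first.
load_single_dict({ord('地'): 'de,di4'})

print(lazy_pinyin('开户行', style=Style.TONE2))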
...@@ -28,7 +28,7 @@ UNITS = OrderedDict({ ...@@ -28,7 +28,7 @@ UNITS = OrderedDict({
8: '亿', 8: '亿',
}) })
COM_QUANTIFIERS = '(所|朵|匹|张|座|回|场|尾|条|个|首|阙|阵|网|炮|顶|丘|棵|只|支|袭|辆|挑|担|颗|壳|窠|曲|墙|群|腔|砣|座|客|贯|扎|捆|刀|令|打|手|罗|坡|山|岭|江|溪|钟|队|单|双|对|出|口|头|脚|板|跳|枝|件|贴|针|线|管|名|位|身|堂|课|本|页|家|户|层|丝|毫|厘|分|钱|两|斤|担|铢|石|钧|锱|忽|(千|毫|微)克|毫|厘|(公)分|分|寸|尺|丈|里|寻|常|铺|程|(千|分|厘|毫|微)米|米|撮|勺|合|升|斗|石|盘|碗|碟|叠|桶|笼|盆|盒|杯|钟|斛|锅|簋|篮|盘|桶|罐|瓶|壶|卮|盏|箩|箱|煲|啖|袋|钵|年|月|日|季|刻|时|周|天|秒|分|小时|旬|纪|岁|世|更|夜|春|夏|秋|冬|代|伏|辈|丸|泡|粒|颗|幢|堆|条|根|支|道|面|片|张|颗|块|元|(亿|千万|百万|万|千|百)|(亿|千万|百万|万|千|百|美|)元|(亿|千万|百万|万|千|百|)块|角|毛|分)' COM_QUANTIFIERS = '(封|艘|把|目|套|段|人|所|朵|匹|张|座|回|场|尾|条|个|首|阙|阵|网|炮|顶|丘|棵|只|支|袭|辆|挑|担|颗|壳|窠|曲|墙|群|腔|砣|座|客|贯|扎|捆|刀|令|打|手|罗|坡|山|岭|江|溪|钟|队|单|双|对|出|口|头|脚|板|跳|枝|件|贴|针|线|管|名|位|身|堂|课|本|页|家|户|层|丝|毫|厘|分|钱|两|斤|担|铢|石|钧|锱|忽|(千|毫|微)克|毫|厘|(公)分|分|寸|尺|丈|里|寻|常|铺|程|(千|分|厘|毫|微)米|米|撮|勺|合|升|斗|石|盘|碗|碟|叠|桶|笼|盆|盒|杯|钟|斛|锅|簋|篮|盘|桶|罐|瓶|壶|卮|盏|箩|箱|煲|啖|袋|钵|年|月|日|季|刻|时|周|天|秒|分|小时|旬|纪|岁|世|更|夜|春|夏|秋|冬|代|伏|辈|丸|泡|粒|颗|幢|堆|条|根|支|道|面|片|张|颗|块|元|(亿|千万|百万|万|千|百)|(亿|千万|百万|万|千|百|美|)元|(亿|千万|百万|万|千|百|十|)吨|(亿|千万|百万|万|千|百|)块|角|毛|分)'
# 分数表达式 # 分数表达式
RE_FRAC = re.compile(r'(-?)(\d+)/(\d+)') RE_FRAC = re.compile(r'(-?)(\d+)/(\d+)')
......
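For context, RE_FRAC defined above is used by the text normalizer to verbalize fractions. A quick self-contained demo of the pattern; the replace_frac helper here is illustrative, not the one shipped in num.py:

import re

RE_FRAC = re.compile(r'(-?)(\d+)/(\d+)')

def replace_frac(match):
    sign, numerator, denominator = match.groups()
    return ('负' if sign else '') + f'{denominator}分之{numerator}'

print(RE_FRAC.sub(replace_frac, '进度完成了3/4'))   # 进度完成了4分之3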
...@@ -13,4 +13,3 @@ ...@@ -13,4 +13,3 @@
# limitations under the License. # limitations under the License.
from .ernie_sat import * from .ernie_sat import *
from .ernie_sat_updater import * from .ernie_sat_updater import *
from .mlm import *
...@@ -389,7 +389,7 @@ class MLM(nn.Layer): ...@@ -389,7 +389,7 @@ class MLM(nn.Layer):
speech_seg_pos: paddle.Tensor, speech_seg_pos: paddle.Tensor,
text_seg_pos: paddle.Tensor, text_seg_pos: paddle.Tensor,
span_bdy: List[int], span_bdy: List[int],
-                 use_teacher_forcing: bool=False, ) -> List[paddle.Tensor]:
+                 use_teacher_forcing: bool=True, ) -> List[paddle.Tensor]:
''' '''
Args: Args:
speech (paddle.Tensor): input speech (1, Tmax, D). speech (paddle.Tensor): input speech (1, Tmax, D).
...@@ -657,7 +657,7 @@ class ErnieSAT(nn.Layer): ...@@ -657,7 +657,7 @@ class ErnieSAT(nn.Layer):
speech_seg_pos: paddle.Tensor, speech_seg_pos: paddle.Tensor,
text_seg_pos: paddle.Tensor, text_seg_pos: paddle.Tensor,
span_bdy: List[int], span_bdy: List[int],
-                 use_teacher_forcing: bool=False, ) -> Dict[str, paddle.Tensor]:
+                 use_teacher_forcing: bool=True, ) -> Dict[str, paddle.Tensor]:
return self.model.inference( return self.model.inference(
speech=speech, speech=speech,
text=text, text=text,
......
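Switching use_teacher_forcing to True by default means MLM.inference runs a full forward pass and splices the predicted frames back into the masked span. A toy version of that splice with stand-in tensors only:

import paddle

# Stand-ins: 10 frames of 2-dim "speech" and a fake model output of the same shape.
speech = paddle.arange(20, dtype='float32').reshape([10, 2])
zs = paddle.zeros([1, 10, 2])
span_bdy = [3, 6]   # frames 3..5 were masked

outs = [speech[:span_bdy[0]],            # untouched prefix
        zs[0][span_bdy[0]:span_bdy[1]],  # model prediction for the masked span
        speech[span_bdy[1]:]]            # untouched suffix
print([tuple(o.shape) for o in outs])    # [(3, 2), (3, 2), (4, 2)]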
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from typing import Dict
from typing import List
from typing import Optional
import paddle
import yaml
from paddle import nn
from yacs.config import CfgNode
from paddlespeech.t2s.modules.activation import get_activation
from paddlespeech.t2s.modules.conformer.convolution import ConvolutionModule
from paddlespeech.t2s.modules.conformer.encoder_layer import EncoderLayer
from paddlespeech.t2s.modules.layer_norm import LayerNorm
from paddlespeech.t2s.modules.masked_fill import masked_fill
from paddlespeech.t2s.modules.nets_utils import initialize
from paddlespeech.t2s.modules.tacotron2.decoder import Postnet
from paddlespeech.t2s.modules.transformer.attention import LegacyRelPositionMultiHeadedAttention
from paddlespeech.t2s.modules.transformer.attention import MultiHeadedAttention
from paddlespeech.t2s.modules.transformer.attention import RelPositionMultiHeadedAttention
from paddlespeech.t2s.modules.transformer.embedding import LegacyRelPositionalEncoding
from paddlespeech.t2s.modules.transformer.embedding import PositionalEncoding
from paddlespeech.t2s.modules.transformer.embedding import RelPositionalEncoding
from paddlespeech.t2s.modules.transformer.embedding import ScaledPositionalEncoding
from paddlespeech.t2s.modules.transformer.multi_layer_conv import Conv1dLinear
from paddlespeech.t2s.modules.transformer.multi_layer_conv import MultiLayeredConv1d
from paddlespeech.t2s.modules.transformer.positionwise_feed_forward import PositionwiseFeedForward
from paddlespeech.t2s.modules.transformer.repeat import repeat
from paddlespeech.t2s.modules.transformer.subsampling import Conv2dSubsampling
# MLM -> Mask Language Model
class mySequential(nn.Sequential):
def forward(self, *inputs):
for module in self._sub_layers.values():
if type(inputs) == tuple:
inputs = module(*inputs)
else:
inputs = module(inputs)
return inputs
class MaskInputLayer(nn.Layer):
def __init__(self, out_features: int) -> None:
super().__init__()
self.mask_feature = paddle.create_parameter(
shape=(1, 1, out_features),
dtype=paddle.float32,
default_initializer=paddle.nn.initializer.Assign(
paddle.normal(shape=(1, 1, out_features))))
def forward(self, input: paddle.Tensor,
masked_pos: paddle.Tensor=None) -> paddle.Tensor:
masked_pos = paddle.expand_as(paddle.unsqueeze(masked_pos, -1), input)
masked_input = masked_fill(input, masked_pos, 0) + masked_fill(
paddle.expand_as(self.mask_feature, input), ~masked_pos, 0)
return masked_input
class MLMEncoder(nn.Layer):
"""Conformer encoder module.
Args:
idim (int): Input dimension.
attention_dim (int): Dimension of attention.
attention_heads (int): The number of heads of multi head attention.
linear_units (int): The number of units of position-wise feed forward.
num_blocks (int): The number of decoder blocks.
dropout_rate (float): Dropout rate.
positional_dropout_rate (float): Dropout rate after adding positional encoding.
attention_dropout_rate (float): Dropout rate in attention.
input_layer (Union[str, paddle.nn.Layer]): Input layer type.
normalize_before (bool): Whether to use layer_norm before the first block.
concat_after (bool): Whether to concat attention layer's input and output.
if True, additional linear will be applied.
i.e. x -> x + linear(concat(x, att(x)))
if False, no additional linear will be applied. i.e. x -> x + att(x)
positionwise_layer_type (str): "linear", "conv1d", or "conv1d-linear".
positionwise_conv_kernel_size (int): Kernel size of positionwise conv1d layer.
macaron_style (bool): Whether to use macaron style for positionwise layer.
pos_enc_layer_type (str): Encoder positional encoding layer type.
selfattention_layer_type (str): Encoder attention layer type.
activation_type (str): Encoder activation function type.
use_cnn_module (bool): Whether to use convolution module.
zero_triu (bool): Whether to zero the upper triangular part of attention matrix.
cnn_module_kernel (int): Kernerl size of convolution module.
padding_idx (int): Padding idx for input_layer=embed.
stochastic_depth_rate (float): Maximum probability to skip the encoder layer.
"""
def __init__(self,
idim: int,
vocab_size: int=0,
pre_speech_layer: int=0,
attention_dim: int=256,
attention_heads: int=4,
linear_units: int=2048,
num_blocks: int=6,
dropout_rate: float=0.1,
positional_dropout_rate: float=0.1,
attention_dropout_rate: float=0.0,
input_layer: str="conv2d",
normalize_before: bool=True,
concat_after: bool=False,
positionwise_layer_type: str="linear",
positionwise_conv_kernel_size: int=1,
macaron_style: bool=False,
pos_enc_layer_type: str="abs_pos",
selfattention_layer_type: str="selfattn",
activation_type: str="swish",
use_cnn_module: bool=False,
zero_triu: bool=False,
cnn_module_kernel: int=31,
padding_idx: int=-1,
stochastic_depth_rate: float=0.0,
text_masking: bool=False):
"""Construct an Encoder object."""
super().__init__()
self._output_size = attention_dim
self.text_masking = text_masking
if self.text_masking:
self.text_masking_layer = MaskInputLayer(attention_dim)
activation = get_activation(activation_type)
if pos_enc_layer_type == "abs_pos":
pos_enc_class = PositionalEncoding
elif pos_enc_layer_type == "scaled_abs_pos":
pos_enc_class = ScaledPositionalEncoding
elif pos_enc_layer_type == "rel_pos":
assert selfattention_layer_type == "rel_selfattn"
pos_enc_class = RelPositionalEncoding
elif pos_enc_layer_type == "legacy_rel_pos":
pos_enc_class = LegacyRelPositionalEncoding
assert selfattention_layer_type == "legacy_rel_selfattn"
else:
raise ValueError("unknown pos_enc_layer: " + pos_enc_layer_type)
self.conv_subsampling_factor = 1
if input_layer == "linear":
self.embed = nn.Sequential(
nn.Linear(idim, attention_dim),
nn.LayerNorm(attention_dim),
nn.Dropout(dropout_rate),
nn.ReLU(),
pos_enc_class(attention_dim, positional_dropout_rate), )
elif input_layer == "conv2d":
self.embed = Conv2dSubsampling(
idim,
attention_dim,
dropout_rate,
pos_enc_class(attention_dim, positional_dropout_rate), )
self.conv_subsampling_factor = 4
elif input_layer == "embed":
self.embed = nn.Sequential(
nn.Embedding(idim, attention_dim, padding_idx=padding_idx),
pos_enc_class(attention_dim, positional_dropout_rate), )
elif input_layer == "mlm":
self.segment_emb = None
self.speech_embed = mySequential(
MaskInputLayer(idim),
nn.Linear(idim, attention_dim),
nn.LayerNorm(attention_dim),
nn.ReLU(),
pos_enc_class(attention_dim, positional_dropout_rate))
self.text_embed = nn.Sequential(
nn.Embedding(
vocab_size, attention_dim, padding_idx=padding_idx),
pos_enc_class(attention_dim, positional_dropout_rate), )
elif input_layer == "sega_mlm":
self.segment_emb = nn.Embedding(
500, attention_dim, padding_idx=padding_idx)
self.speech_embed = mySequential(
MaskInputLayer(idim),
nn.Linear(idim, attention_dim),
nn.LayerNorm(attention_dim),
nn.ReLU(),
pos_enc_class(attention_dim, positional_dropout_rate))
self.text_embed = nn.Sequential(
nn.Embedding(
vocab_size, attention_dim, padding_idx=padding_idx),
pos_enc_class(attention_dim, positional_dropout_rate), )
elif isinstance(input_layer, nn.Layer):
self.embed = nn.Sequential(
input_layer,
pos_enc_class(attention_dim, positional_dropout_rate), )
elif input_layer is None:
self.embed = nn.Sequential(
pos_enc_class(attention_dim, positional_dropout_rate))
else:
raise ValueError("unknown input_layer: " + input_layer)
self.normalize_before = normalize_before
# self-attention module definition
if selfattention_layer_type == "selfattn":
encoder_selfattn_layer = MultiHeadedAttention
encoder_selfattn_layer_args = (attention_heads, attention_dim,
attention_dropout_rate, )
elif selfattention_layer_type == "legacy_rel_selfattn":
assert pos_enc_layer_type == "legacy_rel_pos"
encoder_selfattn_layer = LegacyRelPositionMultiHeadedAttention
encoder_selfattn_layer_args = (attention_heads, attention_dim,
attention_dropout_rate, )
elif selfattention_layer_type == "rel_selfattn":
assert pos_enc_layer_type == "rel_pos"
encoder_selfattn_layer = RelPositionMultiHeadedAttention
encoder_selfattn_layer_args = (attention_heads, attention_dim,
attention_dropout_rate, zero_triu, )
else:
raise ValueError("unknown encoder_attn_layer: " +
selfattention_layer_type)
# feed-forward module definition
if positionwise_layer_type == "linear":
positionwise_layer = PositionwiseFeedForward
positionwise_layer_args = (attention_dim, linear_units,
dropout_rate, activation, )
elif positionwise_layer_type == "conv1d":
positionwise_layer = MultiLayeredConv1d
positionwise_layer_args = (attention_dim, linear_units,
positionwise_conv_kernel_size,
dropout_rate, )
elif positionwise_layer_type == "conv1d-linear":
positionwise_layer = Conv1dLinear
positionwise_layer_args = (attention_dim, linear_units,
positionwise_conv_kernel_size,
dropout_rate, )
else:
raise NotImplementedError("Support only linear or conv1d.")
# convolution module definition
convolution_layer = ConvolutionModule
convolution_layer_args = (attention_dim, cnn_module_kernel, activation)
self.encoders = repeat(
num_blocks,
lambda lnum: EncoderLayer(
attention_dim,
encoder_selfattn_layer(*encoder_selfattn_layer_args),
positionwise_layer(*positionwise_layer_args),
positionwise_layer(*positionwise_layer_args) if macaron_style else None,
convolution_layer(*convolution_layer_args) if use_cnn_module else None,
dropout_rate,
normalize_before,
concat_after,
stochastic_depth_rate * float(1 + lnum) / num_blocks, ), )
self.pre_speech_layer = pre_speech_layer
self.pre_speech_encoders = repeat(
self.pre_speech_layer,
lambda lnum: EncoderLayer(
attention_dim,
encoder_selfattn_layer(*encoder_selfattn_layer_args),
positionwise_layer(*positionwise_layer_args),
positionwise_layer(*positionwise_layer_args) if macaron_style else None,
convolution_layer(*convolution_layer_args) if use_cnn_module else None,
dropout_rate,
normalize_before,
concat_after,
stochastic_depth_rate * float(1 + lnum) / self.pre_speech_layer, ),
)
if self.normalize_before:
self.after_norm = LayerNorm(attention_dim)
def forward(self,
speech: paddle.Tensor,
text: paddle.Tensor,
masked_pos: paddle.Tensor,
speech_mask: paddle.Tensor=None,
text_mask: paddle.Tensor=None,
speech_seg_pos: paddle.Tensor=None,
text_seg_pos: paddle.Tensor=None):
"""Encode input sequence.
"""
if masked_pos is not None:
speech = self.speech_embed(speech, masked_pos)
else:
speech = self.speech_embed(speech)
if text is not None:
text = self.text_embed(text)
if speech_seg_pos is not None and text_seg_pos is not None and self.segment_emb:
speech_seg_emb = self.segment_emb(speech_seg_pos)
text_seg_emb = self.segment_emb(text_seg_pos)
text = (text[0] + text_seg_emb, text[1])
speech = (speech[0] + speech_seg_emb, speech[1])
if self.pre_speech_encoders:
speech, _ = self.pre_speech_encoders(speech, speech_mask)
if text is not None:
xs = paddle.concat([speech[0], text[0]], axis=1)
xs_pos_emb = paddle.concat([speech[1], text[1]], axis=1)
masks = paddle.concat([speech_mask, text_mask], axis=-1)
else:
xs = speech[0]
xs_pos_emb = speech[1]
masks = speech_mask
xs, masks = self.encoders((xs, xs_pos_emb), masks)
if isinstance(xs, tuple):
xs = xs[0]
if self.normalize_before:
xs = self.after_norm(xs)
return xs, masks
class MLMDecoder(MLMEncoder):
def forward(self, xs: paddle.Tensor, masks: paddle.Tensor):
"""Encode input sequence.
Args:
xs (paddle.Tensor): Input tensor (#batch, time, idim).
masks (paddle.Tensor): Mask tensor (#batch, time).
Returns:
paddle.Tensor: Output tensor (#batch, time, attention_dim).
paddle.Tensor: Mask tensor (#batch, time).
"""
xs = self.embed(xs)
xs, masks = self.encoders(xs, masks)
if isinstance(xs, tuple):
xs = xs[0]
if self.normalize_before:
xs = self.after_norm(xs)
return xs, masks
# encoder and decoder is nn.Layer, not str
class MLM(nn.Layer):
def __init__(self,
odim: int,
encoder: nn.Layer,
decoder: Optional[nn.Layer],
postnet_layers: int=0,
postnet_chans: int=0,
postnet_filts: int=0,
text_masking: bool=False):
super().__init__()
self.odim = odim
self.encoder = encoder
self.decoder = decoder
self.vocab_size = encoder.text_embed[0]._num_embeddings
if self.decoder is None or not (hasattr(self.decoder,
'output_layer') and
self.decoder.output_layer is not None):
self.sfc = nn.Linear(self.encoder._output_size, odim)
else:
self.sfc = None
if text_masking:
self.text_sfc = nn.Linear(
self.encoder.text_embed[0]._embedding_dim,
self.vocab_size,
weight_attr=self.encoder.text_embed[0]._weight_attr)
else:
self.text_sfc = None
self.postnet = (None if postnet_layers == 0 else Postnet(
idim=self.encoder._output_size,
odim=odim,
n_layers=postnet_layers,
n_chans=postnet_chans,
n_filts=postnet_filts,
use_batch_norm=True,
dropout_rate=0.5, ))
def inference(
self,
speech: paddle.Tensor,
text: paddle.Tensor,
masked_pos: paddle.Tensor,
speech_mask: paddle.Tensor,
text_mask: paddle.Tensor,
speech_seg_pos: paddle.Tensor,
text_seg_pos: paddle.Tensor,
span_bdy: List[int],
use_teacher_forcing: bool=False, ) -> Dict[str, paddle.Tensor]:
'''
Args:
speech (paddle.Tensor): input speech (1, Tmax, D).
text (paddle.Tensor): input text (1, Tmax2).
masked_pos (paddle.Tensor): masked position of input speech (1, Tmax)
speech_mask (paddle.Tensor): mask of speech (1, 1, Tmax).
text_mask (paddle.Tensor): mask of text (1, 1, Tmax2).
speech_seg_pos (paddle.Tensor): n-th phone of each mel, 0<=n<=Tmax2 (1, Tmax).
text_seg_pos (paddle.Tensor): n-th phone of each phone, 0<=n<=Tmax2 (1, Tmax2).
span_bdy (List[int]): masked mel boundary of input speech (2,)
use_teacher_forcing (bool): whether to use teacher forcing
Returns:
List[Tensor]:
eg:
[Tensor(shape=[1, 181, 80]), Tensor(shape=[80, 80]), Tensor(shape=[1, 67, 80])]
'''
z_cache = None
if use_teacher_forcing:
before_outs, zs, *_ = self.forward(
speech=speech,
text=text,
masked_pos=masked_pos,
speech_mask=speech_mask,
text_mask=text_mask,
speech_seg_pos=speech_seg_pos,
text_seg_pos=text_seg_pos)
if zs is None:
zs = before_outs
speech = speech.squeeze(0)
outs = [speech[:span_bdy[0]]]
outs += [zs[0][span_bdy[0]:span_bdy[1]]]
outs += [speech[span_bdy[1]:]]
return outs
return None
class MLMEncAsDecoder(MLM):
def forward(self,
speech: paddle.Tensor,
text: paddle.Tensor,
masked_pos: paddle.Tensor,
speech_mask: paddle.Tensor,
text_mask: paddle.Tensor,
speech_seg_pos: paddle.Tensor,
text_seg_pos: paddle.Tensor):
# feats: (Batch, Length, Dim)
# -> encoder_out: (Batch, Length2, Dim2)
encoder_out, h_masks = self.encoder(
speech=speech,
text=text,
masked_pos=masked_pos,
speech_mask=speech_mask,
text_mask=text_mask,
speech_seg_pos=speech_seg_pos,
text_seg_pos=text_seg_pos)
if self.decoder is not None:
zs, _ = self.decoder(encoder_out, h_masks)
else:
zs = encoder_out
speech_hidden_states = zs[:, :paddle.shape(speech)[1], :]
if self.sfc is not None:
before_outs = paddle.reshape(
self.sfc(speech_hidden_states),
(paddle.shape(speech_hidden_states)[0], -1, self.odim))
else:
before_outs = speech_hidden_states
if self.postnet is not None:
after_outs = before_outs + paddle.transpose(
self.postnet(paddle.transpose(before_outs, [0, 2, 1])),
[0, 2, 1])
else:
after_outs = None
return before_outs, after_outs, None
class MLMDualMaksing(MLM):
def forward(self,
speech: paddle.Tensor,
text: paddle.Tensor,
masked_pos: paddle.Tensor,
speech_mask: paddle.Tensor,
text_mask: paddle.Tensor,
speech_seg_pos: paddle.Tensor,
text_seg_pos: paddle.Tensor):
# feats: (Batch, Length, Dim)
# -> encoder_out: (Batch, Length2, Dim2)
encoder_out, h_masks = self.encoder(
speech=speech,
text=text,
masked_pos=masked_pos,
speech_mask=speech_mask,
text_mask=text_mask,
speech_seg_pos=speech_seg_pos,
text_seg_pos=text_seg_pos)
if self.decoder is not None:
zs, _ = self.decoder(encoder_out, h_masks)
else:
zs = encoder_out
speech_hidden_states = zs[:, :paddle.shape(speech)[1], :]
if self.text_sfc:
text_hiddent_states = zs[:, paddle.shape(speech)[1]:, :]
text_outs = paddle.reshape(
self.text_sfc(text_hiddent_states),
(paddle.shape(text_hiddent_states)[0], -1, self.vocab_size))
if self.sfc is not None:
before_outs = paddle.reshape(
self.sfc(speech_hidden_states),
(paddle.shape(speech_hidden_states)[0], -1, self.odim))
else:
before_outs = speech_hidden_states
if self.postnet is not None:
after_outs = before_outs + paddle.transpose(
self.postnet(paddle.transpose(before_outs, [0, 2, 1])),
[0, 2, 1])
else:
after_outs = None
return before_outs, after_outs, text_outs
def build_model_from_file(config_file, model_file):
state_dict = paddle.load(model_file)
model_class = MLMDualMaksing if 'conformer_combine_vctk_aishell3_dual_masking' in config_file \
else MLMEncAsDecoder
# 构建模型
with open(config_file) as f:
conf = CfgNode(yaml.safe_load(f))
model = build_model(conf, model_class)
model.set_state_dict(state_dict)
return model, conf
# select encoder and decoder here
def build_model(args: argparse.Namespace, model_class=MLMEncAsDecoder) -> MLM:
if isinstance(args.token_list, str):
with open(args.token_list, encoding="utf-8") as f:
token_list = [line.rstrip() for line in f]
# Overwriting token_list to keep it as "portable".
args.token_list = list(token_list)
elif isinstance(args.token_list, (tuple, list)):
token_list = list(args.token_list)
else:
raise RuntimeError("token_list must be str or list")
vocab_size = len(token_list)
odim = 80
# Encoder
encoder_class = MLMEncoder
if 'text_masking' in args.model_conf.keys() and args.model_conf[
'text_masking']:
args.encoder_conf['text_masking'] = True
else:
args.encoder_conf['text_masking'] = False
encoder = encoder_class(
args.input_size, vocab_size=vocab_size, **args.encoder_conf)
# Decoder
if args.decoder != 'no_decoder':
decoder_class = MLMDecoder
decoder = decoder_class(
idim=0,
input_layer=None,
**args.decoder_conf, )
else:
decoder = None
# Build model
model = model_class(
odim=odim,
encoder=encoder,
decoder=decoder,
**args.model_conf, )
# Initialize
if args.init is not None:
initialize(model, args.init)
return model
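A hedged usage sketch for the new MLM module above. The file names and the import path are placeholders, not shipped artifacts; point the import at wherever mlm.py lives in your checkout:

import paddle
from mlm import build_model_from_file  # placeholder import path

# A config named conformer_combine_vctk_aishell3_dual_masking selects MLMDualMaksing,
# anything else selects MLMEncAsDecoder; the yaml is parsed into a CfgNode.
model, conf = build_model_from_file(
    config_file='conformer_combine_vctk_aishell3_dual_masking.yaml',  # placeholder
    model_file='ernie_sat.pdparams')                                  # placeholder
model.eval()
print(type(model).__name__)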
...@@ -522,6 +522,82 @@ class VITSGenerator(nn.Layer): ...@@ -522,6 +522,82 @@ class VITSGenerator(nn.Layer):
return wav.squeeze(1), attn.squeeze(1), dur.squeeze(1) return wav.squeeze(1), attn.squeeze(1), dur.squeeze(1)
def voice_conversion(
self,
feats: paddle.Tensor=None,
feats_lengths: paddle.Tensor=None,
sids_src: Optional[paddle.Tensor]=None,
sids_tgt: Optional[paddle.Tensor]=None,
spembs_src: Optional[paddle.Tensor]=None,
spembs_tgt: Optional[paddle.Tensor]=None,
lids: Optional[paddle.Tensor]=None, ) -> paddle.Tensor:
"""Run voice conversion.
Args:
feats (Tensor): Feature tensor (B, aux_channels, T_feats,).
feats_lengths (Tensor): Feature length tensor (B,).
sids_src (Optional[Tensor]): Speaker index tensor of source feature (B,) or (B, 1).
sids_tgt (Optional[Tensor]): Speaker index tensor of target feature (B,) or (B, 1).
spembs_src (Optional[Tensor]): Speaker embedding tensor of source feature (B, spk_embed_dim).
spembs_tgt (Optional[Tensor]): Speaker embedding tensor of target feature (B, spk_embed_dim).
lids (Optional[Tensor]): Language index tensor (B,) or (B, 1).
Returns:
Tensor: Generated waveform tensor (B, T_wav).
"""
# encoder
g_src = None
g_tgt = None
if self.spks is not None:
# (B, global_channels, 1)
g_src = self.global_emb(
paddle.reshape(sids_src, [-1])).unsqueeze(-1)
g_tgt = self.global_emb(
paddle.reshape(sids_tgt, [-1])).unsqueeze(-1)
if self.spk_embed_dim is not None:
# (B, global_channels, 1)
g_src_ = self.spemb_proj(
F.normalize(spembs_src.unsqueeze(0))).unsqueeze(-1)
if g_src is None:
g_src = g_src_
else:
g_src = g_src + g_src_
# (B, global_channels, 1)
g_tgt_ = self.spemb_proj(
F.normalize(spembs_tgt.unsqueeze(0))).unsqueeze(-1)
if g_tgt is None:
g_tgt = g_tgt_
else:
g_tgt = g_tgt + g_tgt_
if self.langs is not None:
# (B, global_channels, 1)
g_ = self.lang_emb(paddle.reshape(lids, [-1])).unsqueeze(-1)
if g_src is None:
g_src = g_
else:
g_src = g_src + g_
if g_tgt is None:
g_tgt = g_
else:
g_tgt = g_tgt + g_
# forward posterior encoder
z, m_q, logs_q, y_mask = self.posterior_encoder(
feats, feats_lengths, g=g_src)
# forward flow
# (B, H, T_feats)
z_p = self.flow(z, y_mask, g=g_src)
# decoder
z_hat = self.flow(z_p, y_mask, g=g_tgt, inverse=True)
wav = self.decoder(z_hat * y_mask, g=g_tgt)
return wav.squeeze(1)
def _generate_path(self, dur: paddle.Tensor, def _generate_path(self, dur: paddle.Tensor,
mask: paddle.Tensor) -> paddle.Tensor: mask: paddle.Tensor) -> paddle.Tensor:
"""Generate path a.k.a. monotonic attention. """Generate path a.k.a. monotonic attention.
......
...@@ -381,7 +381,7 @@ class VITS(nn.Layer): ...@@ -381,7 +381,7 @@ class VITS(nn.Layer):
if use_teacher_forcing: if use_teacher_forcing:
assert feats is not None assert feats is not None
feats = feats[None].transpose([0, 2, 1]) feats = feats[None].transpose([0, 2, 1])
-           feats_lengths = paddle.to_tensor([paddle.shape(feats)[2]])
+           feats_lengths = paddle.to_tensor(paddle.shape(feats)[2])
wav, att_w, dur = self.generator.inference( wav, att_w, dur = self.generator.inference(
text=text, text=text,
text_lengths=text_lengths, text_lengths=text_lengths,
...@@ -406,3 +406,43 @@ class VITS(nn.Layer): ...@@ -406,3 +406,43 @@ class VITS(nn.Layer):
max_len=max_len, ) max_len=max_len, )
return dict( return dict(
wav=paddle.reshape(wav, [-1]), att_w=att_w[0], duration=dur[0]) wav=paddle.reshape(wav, [-1]), att_w=att_w[0], duration=dur[0])
def voice_conversion(
self,
feats: paddle.Tensor,
sids_src: Optional[paddle.Tensor]=None,
sids_tgt: Optional[paddle.Tensor]=None,
spembs_src: Optional[paddle.Tensor]=None,
spembs_tgt: Optional[paddle.Tensor]=None,
lids: Optional[paddle.Tensor]=None, ) -> paddle.Tensor:
"""Run voice conversion.
Args:
feats (Tensor): Feature tensor (T_feats, aux_channels).
sids_src (Optional[Tensor]): Speaker index tensor of source feature (1,).
sids_tgt (Optional[Tensor]): Speaker index tensor of target feature (1,).
spembs_src (Optional[Tensor]): Speaker embedding tensor of source feature (spk_embed_dim,).
spembs_tgt (Optional[Tensor]): Speaker embedding tensor of target feature (spk_embed_dim,).
lids (Optional[Tensor]): Language index tensor (1,).
Returns:
Dict[str, Tensor]:
* wav (Tensor): Generated waveform tensor (T_wav,).
"""
assert feats is not None
feats = feats[None].transpose([0, 2, 1])
feats_lengths = paddle.to_tensor(paddle.shape(feats)[2])
sids_none = sids_src is None and sids_tgt is None
spembs_none = spembs_src is None and spembs_tgt is None
assert not sids_none or not spembs_none
wav = self.generator.voice_conversion(
feats,
feats_lengths,
sids_src,
sids_tgt,
spembs_src,
spembs_tgt,
lids, )
return dict(wav=paddle.reshape(wav, [-1]))
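A hedged sketch of calling the new VITS.voice_conversion. The vits object below is assumed to be an already-built multi-speaker VITS with weights loaded (construction elided); shapes follow the docstring above:

import paddle

# Assumed: `vits` is a trained multi-speaker VITS instance.
feats = paddle.randn([200, 80])        # (T_feats, aux_channels) source mel
out = vits.voice_conversion(
    feats,
    sids_src=paddle.to_tensor([3]),    # source speaker id
    sids_tgt=paddle.to_tensor([7]))    # target speaker id
wav = out["wav"]                       # (T_wav,) converted waveform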
...@@ -111,6 +111,8 @@ class VITSUpdater(StandardUpdater): ...@@ -111,6 +111,8 @@ class VITSUpdater(StandardUpdater):
text_lengths=batch["text_lengths"], text_lengths=batch["text_lengths"],
feats=batch["feats"], feats=batch["feats"],
feats_lengths=batch["feats_lengths"], feats_lengths=batch["feats_lengths"],
sids=batch.get("spk_id", None),
spembs=batch.get("spk_emb", None),
forward_generator=turn == "generator") forward_generator=turn == "generator")
# Generator # Generator
if turn == "generator": if turn == "generator":
...@@ -268,6 +270,8 @@ class VITSEvaluator(StandardEvaluator): ...@@ -268,6 +270,8 @@ class VITSEvaluator(StandardEvaluator):
text_lengths=batch["text_lengths"], text_lengths=batch["text_lengths"],
feats=batch["feats"], feats=batch["feats"],
feats_lengths=batch["feats_lengths"], feats_lengths=batch["feats_lengths"],
sids=batch.get("spk_id", None),
spembs=batch.get("spk_emb", None),
forward_generator=turn == "generator") forward_generator=turn == "generator")
# Generator # Generator
if turn == "generator": if turn == "generator":
......
...@@ -24,6 +24,7 @@ from paddle.nn import Layer ...@@ -24,6 +24,7 @@ from paddle.nn import Layer
from paddle.optimizer import Optimizer from paddle.optimizer import Optimizer
from timer import timer from timer import timer
from paddlespeech.t2s.datasets.sampler import ErnieSATSampler
from paddlespeech.t2s.training.reporter import report from paddlespeech.t2s.training.reporter import report
from paddlespeech.t2s.training.updater import UpdaterBase from paddlespeech.t2s.training.updater import UpdaterBase
from paddlespeech.t2s.training.updater import UpdaterState from paddlespeech.t2s.training.updater import UpdaterState
...@@ -165,7 +166,8 @@ class StandardUpdater(UpdaterBase): ...@@ -165,7 +166,8 @@ class StandardUpdater(UpdaterBase):
# NOTE: all batch sampler for distributed training should # NOTE: all batch sampler for distributed training should
# subclass DistributedBatchSampler and implement `set_epoch` method # subclass DistributedBatchSampler and implement `set_epoch` method
batch_sampler = self.dataloader.batch_sampler batch_sampler = self.dataloader.batch_sampler
-           if isinstance(batch_sampler, DistributedBatchSampler):
+           if isinstance(batch_sampler, DistributedBatchSampler) \
+                   or isinstance(batch_sampler, ErnieSATSampler):
batch_sampler.set_epoch(self.state.epoch) batch_sampler.set_epoch(self.state.epoch)
self.train_iterator = iter(self.dataloader) self.train_iterator = iter(self.dataloader)
......
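The updater change above just widens the set_epoch call to the new ErnieSATSampler. A tiny sketch of why the call matters: any sampler exposing set_epoch should be re-seeded each epoch so shuffling differs across epochs in distributed training.

class ToySampler:
    """Stand-in for DistributedBatchSampler / ErnieSATSampler."""

    def set_epoch(self, epoch):
        self.seed = 1000 + epoch   # re-seed shuffling per epoch
        print("shuffle seed ->", self.seed)

batch_sampler = ToySampler()
for epoch in range(3):
    # duck-typed variant of the isinstance checks in StandardUpdater
    if hasattr(batch_sampler, "set_epoch"):
        batch_sampler.set_epoch(epoch)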
...@@ -57,7 +57,7 @@ base = [ ...@@ -57,7 +57,7 @@ base = [
"Pillow>=9.0.0", "Pillow>=9.0.0",
"praatio==5.0.0", "praatio==5.0.0",
"protobuf>=3.1.0, <=3.20.0", "protobuf>=3.1.0, <=3.20.0",
"pypinyin", "pypinyin<=0.44.0",
"pypinyin-dict", "pypinyin-dict",
"python-dateutil", "python-dateutil",
"pyworld==0.2.12", "pyworld==0.2.12",
...@@ -83,12 +83,7 @@ base = [ ...@@ -83,12 +83,7 @@ base = [
"Ninja", "Ninja",
] ]
-server = [
-    "fastapi",
-    "uvicorn",
-    "pattern_singleton",
-    "websockets"
-]
+server = ["fastapi", "uvicorn", "pattern_singleton", "websockets"]
requirements = { requirements = {
"install": "install":
......
...@@ -490,14 +490,10 @@ class SymbolicShapeInference: ...@@ -490,14 +490,10 @@ class SymbolicShapeInference:
def _onnx_infer_single_node(self, node): def _onnx_infer_single_node(self, node):
# skip onnx shape inference for some ops, as they are handled in _infer_* # skip onnx shape inference for some ops, as they are handled in _infer_*
skip_infer = node.op_type in [ skip_infer = node.op_type in [
'If', 'Loop', 'Scan', 'SplitToSequence', 'ZipMap', \ 'If', 'Loop', 'Scan', 'SplitToSequence', 'ZipMap', 'Attention',
# contrib ops 'BiasGelu', 'EmbedLayerNormalization', 'FastGelu', 'Gelu',
'Attention', 'BiasGelu', \ 'LayerNormalization', 'LongformerAttention',
'EmbedLayerNormalization', \ 'SkipLayerNormalization', 'PythonOp'
'FastGelu', 'Gelu', 'LayerNormalization', \
'LongformerAttention', \
'SkipLayerNormalization', \
'PythonOp'
] ]
if not skip_infer: if not skip_infer:
...@@ -510,8 +506,8 @@ class SymbolicShapeInference: ...@@ -510,8 +506,8 @@ class SymbolicShapeInference:
if (get_opset(self.out_mp_) >= 9) and node.op_type in ['Unsqueeze']: if (get_opset(self.out_mp_) >= 9) and node.op_type in ['Unsqueeze']:
initializers = [ initializers = [
self.initializers_[name] for name in node.input self.initializers_[name] for name in node.input
if (name in self.initializers_ and if (name in self.initializers_ and name not in
name not in self.graph_inputs_) self.graph_inputs_)
] ]
# run single node inference with self.known_vi_ shapes # run single node inference with self.known_vi_ shapes
...@@ -597,8 +593,8 @@ class SymbolicShapeInference: ...@@ -597,8 +593,8 @@ class SymbolicShapeInference:
for o in symbolic_shape_inference.out_mp_.graph.output for o in symbolic_shape_inference.out_mp_.graph.output
] ]
subgraph_new_symbolic_dims = set([ subgraph_new_symbolic_dims = set([
d for s in subgraph_shapes if s for d in s d for s in subgraph_shapes
if type(d) == str and not d in self.symbolic_dims_ if s for d in s if type(d) == str and not d in self.symbolic_dims_
]) ])
new_dims = {} new_dims = {}
for d in subgraph_new_symbolic_dims: for d in subgraph_new_symbolic_dims:
...@@ -725,8 +721,9 @@ class SymbolicShapeInference: ...@@ -725,8 +721,9 @@ class SymbolicShapeInference:
for d, s in zip(sympy_shape[-rank:], strides) for d, s in zip(sympy_shape[-rank:], strides)
] ]
total_pads = [ total_pads = [
max(0, (k - s) if r == 0 else (k - r)) for k, s, r in max(0, (k - s) if r == 0 else (k - r))
zip(effective_kernel_shape, strides, residual) for k, s, r in zip(effective_kernel_shape, strides,
residual)
] ]
except TypeError: # sympy may throw TypeError: cannot determine truth value of Relational except TypeError: # sympy may throw TypeError: cannot determine truth value of Relational
total_pads = [ total_pads = [
...@@ -1272,8 +1269,9 @@ class SymbolicShapeInference: ...@@ -1272,8 +1269,9 @@ class SymbolicShapeInference:
if pads is not None: if pads is not None:
assert len(pads) == 2 * rank assert len(pads) == 2 * rank
new_sympy_shape = [ new_sympy_shape = [
d + pad_up + pad_down for d, pad_up, pad_down in d + pad_up + pad_down
zip(sympy_shape, pads[:rank], pads[rank:]) for d, pad_up, pad_down in zip(sympy_shape, pads[:rank], pads[
rank:])
] ]
self._update_computed_dims(new_sympy_shape) self._update_computed_dims(new_sympy_shape)
else: else:
...@@ -1586,8 +1584,8 @@ class SymbolicShapeInference: ...@@ -1586,8 +1584,8 @@ class SymbolicShapeInference:
scales = list(scales) scales = list(scales)
new_sympy_shape = [ new_sympy_shape = [
sympy.simplify(sympy.floor(d * (end - start) * scale)) sympy.simplify(sympy.floor(d * (end - start) * scale))
for d, start, end, scale in for d, start, end, scale in zip(input_sympy_shape,
zip(input_sympy_shape, roi_start, roi_end, scales) roi_start, roi_end, scales)
] ]
self._update_computed_dims(new_sympy_shape) self._update_computed_dims(new_sympy_shape)
else: else:
...@@ -2200,8 +2198,9 @@ class SymbolicShapeInference: ...@@ -2200,8 +2198,9 @@ class SymbolicShapeInference:
# topological sort nodes, note there might be dead nodes so we check if all graph outputs are reached to terminate # topological sort nodes, note there might be dead nodes so we check if all graph outputs are reached to terminate
sorted_nodes = [] sorted_nodes = []
sorted_known_vi = set([ sorted_known_vi = set([
i.name for i in list(self.out_mp_.graph.input) + i.name
list(self.out_mp_.graph.initializer) for i in list(self.out_mp_.graph.input) + list(
self.out_mp_.graph.initializer)
]) ])
if any([o.name in sorted_known_vi for o in self.out_mp_.graph.output]): if any([o.name in sorted_known_vi for o in self.out_mp_.graph.output]):
# Loop/Scan will have some graph output in graph inputs, so don't do topological sort # Loop/Scan will have some graph output in graph inputs, so don't do topological sort
......
...@@ -43,6 +43,7 @@ function _train(){ ...@@ -43,6 +43,7 @@ function _train(){
log_parse_file="mylog/workerlog.0" ;; log_parse_file="mylog/workerlog.0" ;;
*) echo "choose run_mode(sp or mp)"; exit 1; *) echo "choose run_mode(sp or mp)"; exit 1;
esac esac
bash tests/test_tipc/barrier.sh
# 以下不用修改 # 以下不用修改
timeout 15m ${train_cmd} > ${log_file} 2>&1 timeout 15m ${train_cmd} > ${log_file} 2>&1
if [ $? -ne 0 ];then if [ $? -ne 0 ];then
......
set -ex
NNODES=${PADDLE_TRAINERS_NUM:-"1"}
PYTHON=${PYTHON:-"python"}
TIMEOUT=${1:-"10m"}
if [[ "$NNODES" -gt 1 ]]; then
while ! timeout "$TIMEOUT" "$PYTHON" -m paddle.distributed.launch run_check; do
echo "Retry barrier ......"
done
fi
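barrier.sh added above simply retries `$PYTHON -m paddle.distributed.launch run_check` until all nodes respond. A rough Python equivalent of the same loop; the command and 10m timeout are taken from the script, and this is a sketch rather than a drop-in replacement:

import os
import subprocess

nnodes = int(os.environ.get("PADDLE_TRAINERS_NUM", "1"))
if nnodes > 1:
    while True:
        try:
            subprocess.run(
                ["python", "-m", "paddle.distributed.launch", "run_check"],
                timeout=600, check=True)   # 10 minutes, like TIMEOUT=10m
            break
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            print("Retry barrier ......")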
...@@ -154,6 +154,7 @@ else ...@@ -154,6 +154,7 @@ else
device_num_list=($device_num) device_num_list=($device_num)
fi fi
PYTHON="${python}" bash test_tipc/barrier.sh
IFS="|" IFS="|"
for batch_size in ${batch_size_list[*]}; do for batch_size in ${batch_size_list[*]}; do
for precision in ${fp_items_list[*]}; do for precision in ${fp_items_list[*]}; do
......
...@@ -15,6 +15,7 @@ dataline=$(cat ${FILENAME}) ...@@ -15,6 +15,7 @@ dataline=$(cat ${FILENAME})
# parser params # parser params
IFS=$'\n' IFS=$'\n'
lines=(${dataline}) lines=(${dataline})
python=python
# The training params # The training params
model_name=$(func_parser_value "${lines[1]}") model_name=$(func_parser_value "${lines[1]}")
...@@ -22,7 +23,7 @@ model_name=$(func_parser_value "${lines[1]}") ...@@ -22,7 +23,7 @@ model_name=$(func_parser_value "${lines[1]}")
echo "model_name:"${model_name} echo "model_name:"${model_name}
trainer_list=$(func_parser_value "${lines[14]}") trainer_list=$(func_parser_value "${lines[14]}")
-if [ ${MODE} = "benchmark_train" ];then
+if [[ ${MODE} = "benchmark_train" ]];then
curPath=$(readlink -f "$(dirname "$0")") curPath=$(readlink -f "$(dirname "$0")")
echo "curPath:"${curPath} # /PaddleSpeech/tests/test_tipc echo "curPath:"${curPath} # /PaddleSpeech/tests/test_tipc
cd ${curPath}/../.. cd ${curPath}/../..
...@@ -36,10 +37,10 @@ if [ ${MODE} = "benchmark_train" ];then ...@@ -36,10 +37,10 @@ if [ ${MODE} = "benchmark_train" ];then
pip install jsonlines pip install jsonlines
pip list pip list
cd - cd -
if [ ${model_name} == "conformer" ]; then if [[ ${model_name} == "conformer" ]]; then
# set the URL for aishell_tiny dataset # set the URL for aishell_tiny dataset
conformer_aishell_URL=${conformer_aishell_URL:-"None"} conformer_aishell_URL=${conformer_aishell_URL:-"None"}
-       if [ ${conformer_aishell_URL} == 'None' ];then
+       if [[ ${conformer_aishell_URL} == 'None' ]];then
echo "please contact author to get the URL.\n" echo "please contact author to get the URL.\n"
exit exit
else else
...@@ -66,9 +67,9 @@ if [ ${MODE} = "benchmark_train" ];then ...@@ -66,9 +67,9 @@ if [ ${MODE} = "benchmark_train" ];then
sed -i "s#data/#test_tipc/conformer/benchmark_train/data/#g" ${curPath}/conformer/benchmark_train/conf/preprocess.yaml sed -i "s#data/#test_tipc/conformer/benchmark_train/data/#g" ${curPath}/conformer/benchmark_train/conf/preprocess.yaml
fi fi
if [ ${model_name} == "pwgan" ]; then if [[ ${model_name} == "pwgan" ]]; then
# 下载 csmsc 数据集并解压缩 # 下载 csmsc 数据集并解压缩
-       wget -nc https://weixinxcxdb.oss-cn-beijing.aliyuncs.com/gwYinPinKu/BZNSYP.rar
+       wget -nc https://paddle-wheel.bj.bcebos.com/benchmark/BZNSYP.rar
mkdir -p BZNSYP mkdir -p BZNSYP
unrar x BZNSYP.rar BZNSYP unrar x BZNSYP.rar BZNSYP
wget -nc https://paddlespeech.bj.bcebos.com/Parakeet/benchmark/durations.txt wget -nc https://paddlespeech.bj.bcebos.com/Parakeet/benchmark/durations.txt
...@@ -80,9 +81,14 @@ if [ ${MODE} = "benchmark_train" ];then ...@@ -80,9 +81,14 @@ if [ ${MODE} = "benchmark_train" ];then
python ../paddlespeech/t2s/exps/gan_vocoder/normalize.py --metadata=dump/test/raw/metadata.jsonl --dumpdir=dump/test/norm --stats=dump/train/feats_stats.npy python ../paddlespeech/t2s/exps/gan_vocoder/normalize.py --metadata=dump/test/raw/metadata.jsonl --dumpdir=dump/test/norm --stats=dump/train/feats_stats.npy
fi fi
if [ ${model_name} == "mdtc" ]; then echo "barrier start"
PYTHON="${python}" bash test_tipc/barrier.sh
echo "barrier end"
if [[ ${model_name} == "mdtc" ]]; then
# 下载 Snips 数据集并解压缩 # 下载 Snips 数据集并解压缩
wget -nc https://paddlespeech.bj.bcebos.com/datasets/hey_snips_kws_4.0.tar.gz.1 https://paddlespeech.bj.bcebos.com/datasets/hey_snips_https://paddlespeech.bj.bcebos.com/datasets/hey_snips_kws_4.0.tar.gz.2 wget https://paddlespeech.bj.bcebos.com/datasets/hey_snips_kws_4.0.tar.gz.1
wget https://paddlespeech.bj.bcebos.com/datasets/hey_snips_kws_4.0.tar.gz.2
cat hey_snips_kws_4.0.tar.gz.* > hey_snips_kws_4.0.tar.gz cat hey_snips_kws_4.0.tar.gz.* > hey_snips_kws_4.0.tar.gz
rm hey_snips_kws_4.0.tar.gz.* rm hey_snips_kws_4.0.tar.gz.*
tar -xzf hey_snips_kws_4.0.tar.gz tar -xzf hey_snips_kws_4.0.tar.gz
......
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "automotive-trailer",
"metadata": {},
"outputs": [],
"source": [
"from selenium import webdriver\n",
"chromeOptions = webdriver.ChromeOptions()\n",
"driver = webdriver.Chrome('./chromedriver', chrome_options=chromeOptions)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "physical-croatia",
"metadata": {},
"outputs": [],
"source": [
"driver.get(\"https://github.com/PaddlePaddle/PaddleSpeech/graphs/contributors\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "seventh-latitude",
"metadata": {},
"outputs": [],
"source": [
"<h3 class=\"border-bottom p-2 lh-condensed\">\n",
" <a data-hovercard-type=\"user\" data-hovercard-url=\"/users/zh794390558/hovercard\" href=\"/zh794390558\"\n",
" class=\"d-inline-block mr-2 float-left\">\n",
" <img src=\"https://avatars.githubusercontent.com/u/3038472?s=60&amp;v=4\" class=\"avatar avatar-user\"\n",
" alt=\"zh794390558\" width=\"38\" height=\"38\">\n",
" </a>\n",
" <span class=\"f5 text-normal color-fg-muted float-right\">#1</span>\n",
" <a data-hovercard-type=\"user\" data-hovercard-url=\"/users/zh794390558/hovercard\" class=\"text-normal\"\n",
" href=\"/zh794390558\">zh794390558</a>\n",
" <span class=\"f6 d-block color-fg-muted\">\n",
" <span class=\"cmeta\">\n",
" <div>\n",
" <a href=\"https://github.com/PaddlePaddle/PaddleSpeech/commits?author=zh794390558\"\n",
" class=\"Link--secondary text-normal\">655 commits</a>\n",
" &nbsp;&nbsp;\n",
" <span class=\"color-fg-success text-normal\">3,671,956 ++</span>\n",
" &nbsp;&nbsp;\n",
" <span class=\"color-fg-danger text-normal\">1,966,288 --</span>\n",
" </div>\n",
" </span>\n",
" </span>\n",
"</h3>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "modified-argument",
"metadata": {},
"outputs": [],
"source": [
"from selenium.webdriver.common.by import By"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "demonstrated-aging",
"metadata": {},
"outputs": [],
"source": [
"elements = driver.find_elements(By.CLASS_NAME, 'lh-condensed')\n",
"for element in elements:\n",
" zhuye = element.find_elements(By.CLASS_NAME, 'd-inline-block')[0].get_attribute(\"href\")\n",
" img = element.find_elements(By.CLASS_NAME, 'avatar')[0].get_attribute(\"src\")\n",
" mkdown = f\"\"\"<a href=\"{zhuye}\"><img src=\"{img}\" width=75 height=75></a>\"\"\"\n",
" print(mkdown)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "general-torture",
"metadata": {},
"outputs": [],
"source": [
"element.find_elements(By.CLASS_NAME, 'd-inline-block')[0].get_attribute(\"href\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "downtown-institute",
"metadata": {},
"outputs": [],
"source": [
"element.find_elements(By.CLASS_NAME, 'avatar')[0].get_attribute(\"src\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "worthy-planet",
"metadata": {},
"outputs": [],
"source": [
"len(elements)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 5
}