Commit e3298c79 authored by: Hui Zhang

Merge branch 'develop' into u2_export

---
name: "\U0001F41B S2T Bug Report"
about: Create a report to help us improve
title: "[S2T]XXXX"
labels: Bug, S2T
assignees: zh794390558
---
@@ -27,7 +27,7 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu]
- GCC/G++ Version [e.g. 8.3]
- Python Version [e.g. 3.7]
---
name: "\U0001F41B TTS Bug Report"
about: Create a report to help us improve
title: "[TTS]XXXX"
labels: Bug, T2S
assignees: yt605155624
---
For support and discussions, please use our [GitHub Discussions](https://github.com/PaddlePaddle/DeepSpeech/discussions).
If you've found a bug, please create an issue with the following information:
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu]
- GCC/G++ Version [e.g. 8.3]
- Python Version [e.g. 3.7]
- PaddlePaddle Version [e.g. 2.0.0]
- Model Version [e.g. 2.0.0]
- GPU/DRIVER Information [e.g. Tesla V100-SXM2-32GB/440.64.00]
- CUDA/CUDNN Version [e.g. cuda-10.2]
- MKL Version
- TensorRT Version
**Additional context**
Add any other context about the problem here.
---
name: "\U0001F680 Feature Request"
about: As a user, I want to request a New Feature on the product.
title: ''
labels: feature request
assignees: D-DanielYang, iftaken
---
## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the feature you'd like:**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered:**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
---
name: "\U0001F9E9 Others"
about: Report any other non-support related issues.
title: ''
labels: ''
assignees: ''
---
## Others
<!--
You can report any issues that are not applicable to the previous types of templates, including but not limited to: enhancement suggestions, feedback on the use of the framework, version compatibility issues, unclear error information, etc.
-->
---
name: "\U0001F914 Ask a Question"
about: I want to ask a question.
title: ''
labels: Question
assignees: ''
---
## General Question
<!--
Before asking a question, make sure you have:
- Baidu/Google your question.
- Searched open and closed [GitHub issues](https://github.com/PaddlePaddle/PaddleSpeech/issues?q=is%3Aissue)
- Read the documentation:
- [Readme](https://github.com/PaddlePaddle/PaddleSpeech)
- [Doc](https://paddlespeech.readthedocs.io/)
-->
# Changelog
Date: 2022-3-22, Author: yt605155624.
Add features to: CLI:
- Support aishell3_hifigan and vctk_hifigan
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1587
Date: 2022-3-09, Author: yt605155624.
Add features to: T2S:
- Add ljspeech hifigan egs.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1549
Date: 2022-3-08, Author: yt605155624.
Add features to: T2S:
- Add aishell3 hifigan egs.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1545
Date: 2022-3-08, Author: yt605155624.
Add features to: T2S:
- Add vctk hifigan egs.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1544
Date: 2022-1-29, Author: yt605155624.
Add features to: T2S:
- Update aishell3 vc0 with new Tacotron2.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1419
Date: 2022-1-29, Author: yt605155624.
Add features to: T2S:
- Add ljspeech Tacotron2.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1416
Date: 2022-1-24, Author: yt605155624.
Add features to: T2S:
- Add csmsc WaveRNN.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1379
Date: 2022-1-19, Author: yt605155624.
Add features to: T2S:
- Add csmsc Tacotron2.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1314
Date: 2022-1-10, Author: Jackwaterveg.
Add features to: CLI:
- Support English (librispeech/asr1/transformer).
- Support choosing `decode_method` for conformer and transformer models.
- Refactor the config, using the unified config.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1297
***
Date: 2022-1-17, Author: Jackwaterveg.
Add features to: CLI:
- Support deepspeech2 online/offline model(aishell).
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1356
***
Date: 2022-1-24, Author: Jackwaterveg.
Add features to: ctc_decoders:
- Support online ctc prefix-beam search decoder.
- Unified ctc online decoder and ctc offline decoder.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/821
***
include paddlespeech/t2s/exps/*.txt
include paddlespeech/t2s/frontend/*.yaml
\ No newline at end of file
@@ -226,6 +226,12 @@ recall and elapsed time statistics are shown in the following figure:
The Milvus-based retrieval framework takes about 2.9 ms to retrieve at a 90% recall rate, and feature extraction takes about 500 ms (the test audio is about 5 seconds long), so a single audio query takes about 503 ms in total, which meets most application scenarios.
* computing the embedding takes 500 ms
* retrieval with cosine similarity takes 2.9 ms
* total takes 503 ms
> the test audio is 5 sec
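The latency budget above is a simple sum; a trivial check (the per-stage numbers are the measured values quoted in the text, not new measurements):

```python
# Latency budget for one query, using the figures quoted above.
embedding_ms = 500    # feature extraction for a ~5 s test clip
retrieval_ms = 2.9    # Milvus cosine search at 90% recall
total_ms = embedding_ms + retrieval_ms
print(round(total_ms))  # → 503
```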
### 6. Pretrained Models
Here is a list of pretrained models released by PaddleSpeech:
@@ -26,8 +26,9 @@ def get_audios(path):
    """
    supported_formats = [".wav", ".mp3", ".ogg", ".flac", ".m4a"]
    return [
        item
        for sublist in [[os.path.join(dir, file) for file in files]
                        for dir, _, files in list(os.walk(path))]
        for item in sublist if os.path.splitext(item)[1] in supported_formats
    ]
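The nested comprehension above is compact but hard to read; an equivalent flat-loop sketch with the same behavior (the helper name `get_audios_flat` is ours, for illustration):

```python
import os


def get_audios_flat(path):
    """Collect supported audio files under `path`.

    A flatter, behaviorally equivalent version of get_audios above.
    """
    supported_formats = [".wav", ".mp3", ".ogg", ".flac", ".m4a"]
    results = []
    for dirpath, _, files in os.walk(path):
        for file in files:
            item = os.path.join(dirpath, file)
            if os.path.splitext(item)[1] in supported_formats:
                results.append(item)
    return results
```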
([简体中文](./README_cn.md)|English)
# Metaverse
## Introduction
Metaverse is a new Internet application and social form that integrates virtual reality, produced by combining a variety of new technologies.
(简体中文|[English](./README.md))
# Metaverse
## Introduction
Metaverse is a new Internet application and social form that integrates virtual reality, produced by combining a variety of new technologies.
This demo makes a celebrity in a picture "speak". By combining the `TTS` module of `PaddleSpeech` with `PaddleGAN`, we integrate the installation and the specific modules into a single shell script.
## Usage
You can use the `TTS` module of `PaddleSpeech` together with `PaddleGAN` to make your favorite person say the specified content and build your own virtual human.
Run `run.sh` to complete all the essential steps, including installation.
```bash
./run.sh
```
In `run.sh`, `source path.sh` is executed first to set the environment variables.
If you want to try your own sentences, replace the sentences in `sentences.txt`.
If you want to try your own image, replace `download/Lamarr.png` in the shell script.
The results are shown in our [notebook](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/tutorial/tts/tts_tutorial.ipynb).
@@ -19,6 +19,7 @@ The input of this cli demo should be a WAV file(`.wav`), and the sample rate mus
Here are sample files for this demo that can be downloaded:
```bash
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/85236145389.wav
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/123456789.wav
```
### 3. Usage
@@ -19,6 +19,7 @@
```bash
# The content of this audio is the digit string 85236145389
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/85236145389.wav
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/123456789.wav
```
### 3. Usage
- Command line (recommended)
@@ -7,7 +7,7 @@ host: 0.0.0.0
port: 8090
# The task format in the engine_list is: <speech task>_<engine type>
# task choices = ['asr_python', 'asr_inference', 'tts_python', 'tts_inference', 'cls_python', 'cls_inference', 'text_python', 'vector_python']
protocol: 'http'
engine_list: ['asr_python', 'tts_python', 'cls_python', 'text_python', 'vector_python']
@@ -28,7 +28,6 @@ asr_python:
    force_yes: True
    device: # set 'gpu:id' or 'cpu'

################### speech task: asr; engine_type: inference #######################
asr_inference:
    # model_type choices=['deepspeech2offline_aishell']
@@ -50,10 +49,11 @@ asr_inference:
################################### TTS #########################################
################### speech task: tts; engine_type: python #######################
tts_python:
    # am (acoustic model) choices=['speedyspeech_csmsc', 'fastspeech2_csmsc',
    #                              'fastspeech2_ljspeech', 'fastspeech2_aishell3',
    #                              'fastspeech2_vctk', 'fastspeech2_mix',
    #                              'tacotron2_csmsc', 'tacotron2_ljspeech']
    am: 'fastspeech2_csmsc'
    am_config:
    am_ckpt:
@@ -61,11 +61,13 @@ tts_python:
    phones_dict:
    tones_dict:
    speaker_dict:
    spk_id: 0

    # voc (vocoder) choices=['pwgan_csmsc', 'pwgan_ljspeech', 'pwgan_aishell3',
    #                        'pwgan_vctk', 'mb_melgan_csmsc', 'style_melgan_csmsc',
    #                        'hifigan_csmsc', 'hifigan_ljspeech', 'hifigan_aishell3',
    #                        'hifigan_vctk', 'wavernn_csmsc']
    voc: 'mb_melgan_csmsc'
    voc_config:
    voc_ckpt:
    voc_stat:
@@ -85,7 +87,7 @@ tts_inference:
    phones_dict:
    tones_dict:
    speaker_dict:
    spk_id: 0

    am_predictor_conf:
        device: # set 'gpu:id' or 'cpu'
@@ -94,7 +96,7 @@ tts_inference:
        summary: True # False -> do not show predictor config

    # voc (vocoder) choices=['pwgan_csmsc', 'mb_melgan_csmsc', 'hifigan_csmsc']
    voc: 'mb_melgan_csmsc'
    voc_model: # the pdmodel file of your vocoder static model (XX.pdmodel)
    voc_params: # the pdiparams file of your vocoder static model (XX.pdiparams)
    voc_sample_rate: 24000
@@ -401,4 +401,4 @@ curl -X 'GET' \
    "code": 0,
    "result": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
    "message": "ok"
```
\ No newline at end of file
@@ -3,48 +3,48 @@
# 2. Receive the recorded audio and return the recognition result
# 3. Receive the ASR result and return the NLP dialogue result
# 4. Receive the NLP dialogue result and return the TTS audio
import argparse
import base64
import datetime
import json
import os
from typing import List

import aiofiles
import librosa
import soundfile as sf
import uvicorn
from fastapi import FastAPI
from fastapi import File
from fastapi import Form
from fastapi import UploadFile
from fastapi import WebSocket
from fastapi import WebSocketDisconnect
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from src.AudioManeger import AudioMannger
from src.robot import Robot
from src.SpeechBase.vpr import VPR
from src.util import *
from src.WebsocketManeger import ConnectionManager
from starlette.middleware.cors import CORSMiddleware
from starlette.requests import Request
from starlette.responses import FileResponse
from starlette.websockets import WebSocketState as WebSocketState

from paddlespeech.server.engine.asr.online.python.asr_engine import PaddleASRConnectionHanddler
from paddlespeech.server.utils.audio_process import float2pcm
# Parse the CLI configuration
parser = argparse.ArgumentParser(prog='PaddleSpeechDemo', add_help=True)
parser.add_argument(
    "--port",
    action="store",
    type=int,
    help="port of the app",
    default=8010,
    required=False)
args = parser.parse_args()
port = args.port
@@ -60,39 +60,41 @@ ie_model_path = "source/model"
UPLOAD_PATH = "source/vpr"
WAV_PATH = "source/wav"

base_sources = [UPLOAD_PATH, WAV_PATH]
for path in base_sources:
    os.makedirs(path, exist_ok=True)

# Initialization
app = FastAPI()
chatbot = Robot(
    asr_config, tts_config, asr_init_path, ie_model_path=ie_model_path)
manager = ConnectionManager()
aumanager = AudioMannger(chatbot)
aumanager.init()
vpr = VPR(db_path, dim=192, top_k=5)
# Service request schemas
class NlpBase(BaseModel):
    chat: str


class TtsBase(BaseModel):
    text: str


class Audios:
    def __init__(self) -> None:
        self.audios = b""


audios = Audios()
######################################################################
########################### ASR service ##############################
#####################################################################
# Receive a file and return the ASR result
# Upload the file
@app.post("/asr/offline")
@@ -101,7 +103,8 @@ async def speech2textOffline(files: List[UploadFile]):
    asr_res = ""
    for file in files[:1]:
        # Generate a timestamped file name
        now_name = "asr_offline_" + datetime.datetime.strftime(
            datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
        out_file_path = os.path.join(WAV_PATH, now_name)
        async with aiofiles.open(out_file_path, 'wb') as out_file:
            content = await file.read()  # async read
@@ -110,10 +113,9 @@ async def speech2textOffline(files: List[UploadFile]):
        # Return the ASR result
        asr_res = chatbot.speech2text(out_file_path)
        return SuccessRequest(result=asr_res)
    return ErrorRequest(message="上传文件为空")
# Receive a file and force-convert the wav to 16 kHz, int16
@app.post("/asr/offlinefile")
async def speech2textOfflineFile(files: List[UploadFile]):
@@ -121,7 +123,8 @@ async def speech2textOfflineFile(files: List[UploadFile]):
    asr_res = ""
    for file in files[:1]:
        # Generate a timestamped file name
        now_name = "asr_offline_" + datetime.datetime.strftime(
            datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
        out_file_path = os.path.join(WAV_PATH, now_name)
        async with aiofiles.open(out_file_path, 'wb') as out_file:
            content = await file.read()  # async read
@@ -132,22 +135,18 @@ async def speech2textOfflineFile(files: List[UploadFile]):
        wav = float2pcm(wav)  # float32 to int16
        wav_bytes = wav.tobytes()  # to bytes
        wav_base64 = base64.b64encode(wav_bytes).decode('utf8')

        # Rewrite the converted audio to a new file
        now_name = now_name[:-4] + "_16k" + ".wav"
        out_file_path = os.path.join(WAV_PATH, now_name)
        sf.write(out_file_path, wav, 16000)

        # Return the ASR result
        asr_res = chatbot.speech2text(out_file_path)
        response_res = {"asr_result": asr_res, "wav_base64": wav_base64}
        return SuccessRequest(result=response_res)

    return ErrorRequest(message="上传文件为空")
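The handler above relies on the imported `float2pcm` helper to turn float32 audio into int16 PCM before base64-encoding it. As a sketch, a minimal standalone equivalent looks like this (the exact clipping and scaling used by the real paddlespeech helper are assumptions):

```python
import base64

import numpy as np


def float_to_pcm16(wav: np.ndarray) -> np.ndarray:
    """Convert float32 audio in [-1, 1] to int16 PCM.

    Stand-in for the float2pcm helper used above; the scaling factor
    32767 and the clipping step are our assumptions.
    """
    wav = np.clip(wav, -1.0, 1.0)
    return (wav * 32767).astype(np.int16)


wav = np.array([0.0, 0.5, -1.0], dtype=np.float32)
pcm = float_to_pcm16(wav)
# Same transport step as the handler: raw bytes, then base64 text.
wav_base64 = base64.b64encode(pcm.tobytes()).decode('utf8')
print(pcm.tolist())  # → [0, 16383, -32767]
```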
# Streaming-receive test
@@ -161,15 +160,17 @@ async def speech2textOnlineRecive(files: List[UploadFile]):
    print(f"audios长度变化: {len(audios.audios)}")
    return SuccessRequest(message="接收成功")
# Measure the ambient noise level
@app.post("/asr/collectEnv")
async def collectEnv(files: List[UploadFile]):
    for file in files[:1]:
        content = await file.read()  # async read
        # Initialize; the first 44 bytes of a wav file are the header
        aumanager.compute_env_volume(content[44:])
        vad_ = aumanager.vad_threshold
        return SuccessRequest(result=vad_, message="采集环境噪音成功")
# Stop recording
@app.get("/asr/stopRecord")
@@ -179,6 +180,7 @@ async def stopRecord():
    print("Online录音暂停")
    return SuccessRequest(message="停止成功")


# Resume recording
@app.get("/asr/resumeRecord")
async def resumeRecord():
@@ -187,7 +189,7 @@ async def resumeRecord():
    return SuccessRequest(message="Online录音恢复")
# ASR used by the chat flow
@app.websocket("/ws/asr/offlineStream")
async def websocket_endpoint(websocket: WebSocket):
    await manager.connect(websocket)
@@ -210,9 +212,9 @@ async def websocket_endpoint(websocket: WebSocket):
    # print(f"用户-{user}-离开")


# Streaming (online) ASR
@app.websocket('/ws/asr/onlineStream')
async def websocket_endpoint_online(websocket: WebSocket):
    """PaddleSpeech Online ASR Server api

    Args:
@@ -298,12 +300,14 @@ async def websocket_endpoint(websocket: WebSocket):
    except WebSocketDisconnect:
        pass
######################################################################
########################### NLP service ##############################
#####################################################################
@app.post("/nlp/chat")
async def chatOffline(nlp_base: NlpBase):
    chat = nlp_base.chat
    if not chat:
        return ErrorRequest(message="传入文本为空")
@@ -311,8 +315,9 @@ async def chatOffline(nlp_base: NlpBase):
    res = chatbot.chat(chat)
    return SuccessRequest(result=res)
@app.post("/nlp/ie")
async def ieOffline(nlp_base: NlpBase):
    nlp_text = nlp_base.chat
    if not nlp_text:
        return ErrorRequest(message="传入文本为空")
@@ -320,17 +325,20 @@ async def ieOffline(nlp_base: NlpBase):
    res = chatbot.ie(nlp_text)
    return SuccessRequest(result=res)
######################################################################
########################### TTS service ##############################
#####################################################################
@app.post("/tts/offline")
async def text2speechOffline(tts_base: TtsBase):
    text = tts_base.text
    if not text:
        return ErrorRequest(message="文本为空")
    else:
        now_name = "tts_" + datetime.datetime.strftime(
            datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
        out_file_path = os.path.join(WAV_PATH, now_name)
        # Save to a file, then transmit it as base64
        chatbot.text2speech(text, outpath=out_file_path)
@@ -339,12 +347,14 @@ async def text2speechOffline(tts_base: TtsBase):
        base_str = base64.b64encode(data_bin)
        return SuccessRequest(result=base_str)
# HTTP streaming TTS
@app.post("/tts/online")
async def stream_tts(request_body: TtsBase):
    text = request_body.text
    return StreamingResponse(chatbot.text2speechStreamBytes(text=text))


# WebSocket streaming TTS
@app.websocket("/ws/tts/online")
async def stream_ttsWS(websocket: WebSocket):
@@ -356,17 +366,11 @@ async def stream_ttsWS(websocket: WebSocket):
            if text:
                for sub_wav in chatbot.text2speechStream(text=text):
                    # print("发送sub wav: ", len(sub_wav))
                    res = {"wav": sub_wav, "done": False}
                    await websocket.send_json(res)
                # End of transmission
                res = {"wav": sub_wav, "done": True}
                await websocket.send_json(res)
                # manager.disconnect(websocket)
@@ -396,8 +400,9 @@ async def vpr_enroll(table_name: str=None,
        return {'status': False, 'msg': "spk_id can not be None"}
    # Save the upload data to server.
    content = await audio.read()
    now_name = "vpr_enroll_" + datetime.datetime.strftime(
        datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
    audio_path = os.path.join(UPLOAD_PATH, now_name)
    with open(audio_path, "wb+") as f:
        f.write(content)
@@ -413,20 +418,19 @@ async def vpr_recog(request: Request,
                    audio: UploadFile=File(...)):
    # Voice print recognition online
    # try:
    # Save the upload data to server.
    content = await audio.read()
    now_name = "vpr_query_" + datetime.datetime.strftime(
        datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
    query_audio_path = os.path.join(UPLOAD_PATH, now_name)
    with open(query_audio_path, "wb+") as f:
        f.write(content)
    spk_ids, paths, scores = vpr.do_search_vpr(query_audio_path)

    res = dict(zip(spk_ids, zip(paths, scores)))
    # Sort results by distance metric, closest distances first
    res = sorted(res.items(), key=lambda item: item[1][1], reverse=True)
    return res
@app.post('/vpr/del')
@@ -460,17 +464,18 @@ async def vpr_database64(vprId: int):
            return {'status': False, 'msg': "vpr_id can not be None"}
        audio_path = vpr.do_get_wav(vprId)
        # Return base64
        # Convert the file to a 16 kHz, 16-bit wav file
        wav, sr = librosa.load(audio_path, sr=16000)
        wav = float2pcm(wav)  # float32 to int16
        wav_bytes = wav.tobytes()  # to bytes
        wav_base64 = base64.b64encode(wav_bytes).decode('utf8')
        return SuccessRequest(result=wav_base64)
    except Exception as e:
        return {'status': False, 'msg': e}, 400
@app.get('/vpr/data')
async def vpr_data(vprId: int):
    # Get the audio file from path by spk_id in MySQL
@@ -482,11 +487,6 @@ async def vpr_data(vprId: int):
    except Exception as e:
        return {'status': False, 'msg': e}, 400

if __name__ == '__main__':
    uvicorn.run(app=app, host='0.0.0.0', port=port)
aiofiles
faiss-cpu
fastapi
librosa
numpy
paddlenlp
paddlepaddle
paddlespeech
pydantic
python-multipart
scikit_learn
SoundFile
starlette
uvicorn
import datetime
import os
import wave

import numpy as np

from .util import randName
class AudioMannger:
    def __init__(self,
                 robot,
                 frame_length=160,
                 frame=10,
                 data_width=2,
                 vad_default=300):
        # binary pcm stream
        self.audios = b''
        self.asr_result = ""
...@@ -20,8 +24,9 @@ class AudioMannger: ...@@ -20,8 +24,9 @@ class AudioMannger:
os.makedirs(self.file_dir, exist_ok=True) os.makedirs(self.file_dir, exist_ok=True)
self.vad_deafult = vad_default self.vad_deafult = vad_default
self.vad_threshold = vad_default self.vad_threshold = vad_default
self.vad_threshold_path = os.path.join(self.file_dir, "vad_threshold.npy") self.vad_threshold_path = os.path.join(self.file_dir,
"vad_threshold.npy")
# 10ms 一帧 # 10ms 一帧
self.frame_length = frame_length self.frame_length = frame_length
# 10帧,检测一次 vad # 10帧,检测一次 vad
...@@ -30,67 +35,64 @@ class AudioMannger: ...@@ -30,67 +35,64 @@ class AudioMannger:
self.data_width = data_width self.data_width = data_width
# window # window
self.window_length = frame_length * frame * data_width self.window_length = frame_length * frame * data_width
# 是否开始录音 # 是否开始录音
self.on_asr = False self.on_asr = False
self.silence_cnt = 0 self.silence_cnt = 0
self.max_silence_cnt = 4 self.max_silence_cnt = 4
self.is_pause = False # 录音暂停与恢复 self.is_pause = False # 录音暂停与恢复
    def init(self):
        if os.path.exists(self.vad_threshold_path):
            # the averaged-loudness file exists
            self.vad_threshold = np.load(self.vad_threshold_path)

    def clear_audio(self):
        # clear the accumulated pcm chunks
        self.audios = b''

    def clear_asr(self):
        # clear the asr result
        self.asr_result = ""
    def compute_chunk_volume(self, start_index, pcm_bins):
        # average energy over one window
        pcm_bin = pcm_bins[start_index:start_index + self.window_length]
        # to numpy
        pcm_np = np.frombuffer(pcm_bin, np.int16)
        # normalize and compute loudness
        x = pcm_np.astype(np.float32)
        x = np.abs(x)
        return np.mean(x)
    def is_speech(self, start_index, pcm_bins):
        # check that start_index is still inside the buffer
        if start_index > len(pcm_bins):
            return False
        # check whether the window starting here is silence
        energy = self.compute_chunk_volume(
            start_index=start_index, pcm_bins=pcm_bins)
        # print(energy)
        return energy > self.vad_threshold
    def compute_env_volume(self, pcm_bins):
        max_energy = 0
        start = 0
        while start < len(pcm_bins):
            energy = self.compute_chunk_volume(
                start_index=start, pcm_bins=pcm_bins)
            if energy > max_energy:
                max_energy = energy
            start += self.window_length
        self.vad_threshold = max_energy + 100 if max_energy > self.vad_deafult else self.vad_deafult

        # persist the threshold to a file
        np.save(self.vad_threshold_path, self.vad_threshold)
        print(f"vad threshold: {self.vad_threshold}")
        print(f"environment sample saved to: {os.path.realpath(self.vad_threshold_path)}")
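The VAD here is a plain energy detector: mean absolute int16 amplitude per window, compared against a threshold calibrated from an environment recording (loudest window plus a margin, floored at a default). A standalone numpy sketch of that decision rule — function names are illustrative, not the methods of the class above:

```python
import numpy as np


def window_energy(pcm_bytes, start, window_length):
    # mean absolute amplitude of one int16 window, as in compute_chunk_volume
    chunk = np.frombuffer(pcm_bytes[start:start + window_length], np.int16)
    return np.abs(chunk.astype(np.float32)).mean()


def calibrate_threshold(pcm_bytes, window_length, default=300):
    # like compute_env_volume: max window energy + margin, floored at default
    energies = [
        window_energy(pcm_bytes, s, window_length)
        for s in range(0, len(pcm_bytes), window_length)
    ]
    max_energy = max(energies)
    return max_energy + 100 if max_energy > default else default


silence = np.zeros(1600, dtype=np.int16).tobytes()
speech = np.full(1600, 1000, dtype=np.int16).tobytes()
threshold = calibrate_threshold(silence, window_length=320)
```

Note that `window_length` is in bytes (two bytes per int16 sample), matching how the class slices its byte buffer.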
    def stream_asr(self, pcm_bin):
        # run endpoint detection over pcm_bin first
        start = 0
@@ -99,7 +101,7 @@ class AudioMannger:
                self.on_asr = True
                self.silence_cnt = 0
                print("recording")
                self.audios += pcm_bin[start:start + self.window_length]
            else:
                if self.on_asr:
                    self.silence_cnt += 1
@@ -110,41 +112,42 @@ class AudioMannger:
                        print("recording stopped")
                        # save audios as wav and send it to ASR
                        if len(self.audios) > 2 * 16000:
                            file_path = os.path.join(
                                self.file_dir,
                                "asr_" + datetime.datetime.strftime(
                                    datetime.datetime.now(),
                                    '%Y%m%d%H%M%S') + randName() + ".wav")
                            self.save_audio(file_path=file_path)
                            self.asr_result = self.robot.speech2text(file_path)
                            self.clear_audio()
                            return self.asr_result
                    else:
                        # silence while recording; keep receiving
                        print("recording: silence")
                        self.audios += pcm_bin[start:start + self.window_length]
            start += self.window_length
        return ""
    def save_audio(self, file_path):
        print("saving audio")
        wf = wave.open(file_path, 'wb')  # create the output wav file
        wf.setnchannels(1)  # mono
        wf.setsampwidth(2)  # 16-bit samples
        wf.setframerate(16000)  # 16 kHz sample rate
        # write the buffered data into the file
        wf.writeframes(self.audios)
        # close the file when done
        wf.close()
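`save_audio` writes the accumulated buffer as mono, 16-bit, 16 kHz wav using only the stdlib `wave` module; reading the file back recovers exactly those parameters and the same bytes. A self-contained roundtrip (the temp path is illustrative):

```python
import os
import tempfile
import wave

pcm = b'\x00\x01' * 1600  # 1600 int16 samples of pretend audio

path = os.path.join(tempfile.mkdtemp(), "asr.wav")
wf = wave.open(path, 'wb')
wf.setnchannels(1)      # mono
wf.setsampwidth(2)      # 16-bit
wf.setframerate(16000)  # 16 kHz
wf.writeframes(pcm)
wf.close()

rf = wave.open(path, 'rb')
params = rf.getparams()
frames = rf.readframes(rf.getnframes())
rf.close()
```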
    def end(self):
        # save audios as wav and send it to ASR
        file_path = os.path.join(self.file_dir, "asr.wav")
        self.save_audio(file_path=file_path)
        return self.robot.speech2text(file_path)

    def stop(self):
        self.is_pause = True
        self.audios = b''

    def resume(self):
        self.is_pause = False
import numpy as np

from paddlespeech.server.engine.asr.online.python.asr_engine import ASREngine
from paddlespeech.server.engine.asr.online.python.asr_engine import PaddleASRConnectionHanddler
from paddlespeech.server.utils.config import get_config
def readWave(samples):
    x_len = len(samples)
@@ -31,20 +28,23 @@ def readWave(samples):
class ASR:
    def __init__(
            self,
            config_path, ) -> None:
        self.config = get_config(config_path)['asr_online']
        self.engine = ASREngine()
        self.engine.init(self.config)
        self.connection_handler = PaddleASRConnectionHanddler(self.engine)

    def offlineASR(self, samples, sample_rate=16000):
        x_chunk, x_chunk_lens = self.engine.preprocess(
            samples=samples, sample_rate=sample_rate)
        self.engine.run(x_chunk, x_chunk_lens)
        result = self.engine.postprocess()
        self.engine.reset()
        return result
    def onlineASR(self, samples: bytes=None, is_finished=False):
        if not is_finished:
            # streaming: feed the next chunk
            self.connection_handler.extract_feat(samples)
@@ -58,5 +58,3 @@ class ASR:
            asr_results = self.connection_handler.get_result()
            self.connection_handler.reset()
            return asr_results
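`onlineASR` is fed fixed-size byte chunks and a final `is_finished` call; on the client side this is just slicing the recording into windows and flagging the last one. A stdlib sketch of that chunking loop (the ASR engine itself is not involved here):

```python
def iter_chunks(pcm_bytes, chunk_size):
    # yield (chunk, is_finished) pairs the way a streaming client would
    for start in range(0, len(pcm_bytes), chunk_size):
        chunk = pcm_bytes[start:start + chunk_size]
        yield chunk, start + chunk_size >= len(pcm_bytes)


audio = bytes(range(10))
pairs = list(iter_chunks(audio, 4))
```

The last chunk may be shorter than `chunk_size`; the `is_finished` flag, not the chunk length, tells the engine to flush its state.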
from paddlenlp import Taskflow


class NLP:
    def __init__(self, ie_model_path=None):
        schema = ["时间", "出发地", "目的地", "费用"]
        if ie_model_path:
            self.ie_model = Taskflow(
                "information_extraction",
                schema=schema,
                task_path=ie_model_path)
        else:
            self.ie_model = Taskflow("information_extraction", schema=schema)

        self.dialogue_model = Taskflow("dialogue")

    def chat(self, text):
        result = self.dialogue_model([text])
        return result[0]

    def ie(self, text):
        result = self.ie_model(text)
        return result
import base64
import os
import sqlite3

import numpy as np


def dict_factory(cursor, row):
    d = {}
    for idx, col in enumerate(cursor.description):
        d[col[0]] = row[idx]
    return d
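`dict_factory` turns each sqlite3 row into a plain dict keyed by column name, which is what lets the rest of the class write `res['username']` instead of indexing tuples. A minimal in-memory demonstration of wiring it up via `row_factory`:

```python
import sqlite3


def dict_factory(cursor, row):
    # map each column name from cursor.description to its value
    d = {}
    for idx, col in enumerate(cursor.description):
        d[col[0]] = row[idx]
    return d


conn = sqlite3.connect(':memory:')
conn.row_factory = dict_factory
cur = conn.cursor()
cur.execute("CREATE TABLE vprtable (id INTEGER PRIMARY KEY, username TEXT)")
cur.execute("INSERT INTO vprtable (username) VALUES (?)", ("alice", ))
conn.commit()
rows = cur.execute("SELECT * FROM vprtable").fetchall()
```

Cursors created after `row_factory` is set inherit it, so every `fetchall()` in the class returns a list of dicts.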
class DataBase(object):
    def __init__(self, db_path: str):
        db_path = os.path.realpath(db_path)

        if os.path.exists(db_path):
@@ -21,12 +22,12 @@ class DataBase(object):
            db_path_dir = os.path.dirname(db_path)
            os.makedirs(db_path_dir, exist_ok=True)
            self.db_path = db_path

        self.conn = sqlite3.connect(self.db_path)
        self.conn.row_factory = dict_factory
        self.cursor = self.conn.cursor()
        self.init_database()
    def init_database(self):
        """
        Initialize the database; create the table if it does not exist.
        """
@@ -41,20 +42,21 @@ class DataBase(object):
        """
        self.cursor.execute(sql)
        self.conn.commit()

    def execute_base(self, sql, data_dict):
        self.cursor.execute(sql, data_dict)
        self.conn.commit()
    def insert_one(self, username, vector_base64: str, wav_path):
        if not os.path.exists(wav_path):
            return None, "wav does not exist"
        else:
            sql = """
                insert into
                vprtable (username, vector, wavpath)
                values (?, ?, ?)
            """
            try:
                self.cursor.execute(sql, (username, vector_base64, wav_path))
                self.conn.commit()
@@ -63,25 +65,27 @@ class DataBase(object):
            except Exception as e:
                print(e)
                return None, e
    def select_all(self):
        sql = """
            SELECT * from vprtable
        """
        result = self.cursor.execute(sql).fetchall()
        return result

    def select_by_id(self, vpr_id):
        # parameterized query instead of an f-string, to avoid SQL injection
        sql = """
            SELECT * from vprtable WHERE `id` = ?
        """
        result = self.cursor.execute(sql, (vpr_id, )).fetchall()
        return result

    def select_by_username(self, username):
        sql = """
            SELECT * from vprtable WHERE `username` = ?
        """
        result = self.cursor.execute(sql, (username, )).fetchall()
        return result
@@ -89,28 +93,30 @@ class DataBase(object):
        sql = """
            DELETE from vprtable WHERE `username` = ?
        """
        self.cursor.execute(sql, (username, ))
        self.conn.commit()
    def drop_all(self):
        sql = """
            DELETE from vprtable
        """
        self.cursor.execute(sql)
        self.conn.commit()

    def drop_table(self):
        sql = """
            DROP TABLE vprtable
        """
        self.cursor.execute(sql)
        self.conn.commit()
    def encode_vector(self, vector: np.ndarray):
        return base64.b64encode(vector).decode('utf8')

    def decode_vector(self, vector_base64, dtype=np.float32):
        b = base64.b64decode(vector_base64)
        vc = np.frombuffer(b, dtype=dtype)
        return vc
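Vectors are stored in sqlite as base64-encoded raw float32 bytes; decoding must use the same dtype or the values come back garbled. A roundtrip sketch mirroring `encode_vector` / `decode_vector` as standalone functions:

```python
import base64

import numpy as np


def encode_vector(vector):
    # raw bytes of the array, base64-encoded for storage as TEXT
    return base64.b64encode(vector).decode('utf8')


def decode_vector(vector_base64, dtype=np.float32):
    # inverse: base64 -> raw bytes -> array of the original dtype
    b = base64.b64decode(vector_base64)
    return np.frombuffer(b, dtype=dtype)


vec = np.array([0.1, 0.2, 0.3], dtype=np.float32)
restored = decode_vector(encode_vector(vec))
```

`base64.b64encode` accepts the ndarray directly because a contiguous array exposes the buffer protocol; only the raw bytes are stored, so the shape and dtype must be known at decode time.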
@@ -5,18 +5,19 @@
# 2. load the model
# 3. end-to-end inference
# 4. streaming inference
import base64
import logging
import math

import numpy as np

from paddlespeech.server.engine.tts.online.onnx.tts_engine import TTSEngine
from paddlespeech.server.utils.audio_process import float2pcm
from paddlespeech.server.utils.config import get_config
from paddlespeech.server.utils.util import denorm
from paddlespeech.server.utils.util import get_chunks
from paddlespeech.t2s.frontend.zh_frontend import Frontend
class TTS:
    def __init__(self, config_path):
@@ -26,12 +27,12 @@ class TTS:
        self.engine.init(self.config)
        self.executor = self.engine.executor
        #self.engine.warm_up()

        # frontend initialization
        self.frontend = Frontend(
            phone_vocab_path=self.engine.executor.phones_dict,
            tone_vocab_path=None)
    def depadding(self, data, chunk_num, chunk_id, block, pad, upsample):
        """
        Streaming inference removes the result of pad inference
@@ -48,39 +49,37 @@ class TTS:
            data = data[front_pad * upsample:(front_pad + block) * upsample]

        return data
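`depadding` drops the pad context each streaming chunk was given, so concatenating the depadded chunks reconstructs the un-overlapped signal. The sketch below is plain Python inferred from the middle-chunk slice shown above; treat the single-chunk, first-chunk, and last-chunk branches as assumptions rather than the exact upstream implementation:

```python
def depadding(data, chunk_num, chunk_id, block, pad, upsample):
    # front_pad: how much pad context this chunk actually received in front
    front_pad = min(chunk_id * block, pad)
    if chunk_num == 1:  # no chunking happened
        return data
    elif chunk_id == 0:  # first chunk: only trailing pad to drop
        return data[:block * upsample]
    elif chunk_id == chunk_num - 1:  # last chunk: only leading pad to drop
        return data[front_pad * upsample:]
    else:  # middle chunk: drop both sides
        return data[front_pad * upsample:(front_pad + block) * upsample]


signal = list(range(6))
# overlapping chunks with block=2, pad=1, as get_chunks would produce
chunks = [signal[0:3], signal[1:5], signal[3:6]]
out = []
for i, c in enumerate(chunks):
    out += depadding(c, len(chunks), i, block=2, pad=1, upsample=1)
```

With `upsample > 1` (the vocoder case) every index is simply scaled, since each mel frame maps to a fixed number of output samples.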
    def offlineTTS(self, text):
        get_tone_ids = False
        merge_sentences = False

        input_ids = self.frontend.get_input_ids(
            text, merge_sentences=merge_sentences, get_tone_ids=get_tone_ids)
        phone_ids = input_ids["phone_ids"]
        wav_list = []
        for i in range(len(phone_ids)):
            orig_hs = self.engine.executor.am_encoder_infer_sess.run(
                None, input_feed={'text': phone_ids[i].numpy()})
            hs = orig_hs[0]
            am_decoder_output = self.engine.executor.am_decoder_sess.run(
                None, input_feed={'xs': hs})
            am_postnet_output = self.engine.executor.am_postnet_sess.run(
                None,
                input_feed={
                    'xs': np.transpose(am_decoder_output[0], (0, 2, 1))
                })
            am_output_data = am_decoder_output + np.transpose(
                am_postnet_output[0], (0, 2, 1))
            normalized_mel = am_output_data[0][0]
            mel = denorm(normalized_mel, self.engine.executor.am_mu,
                         self.engine.executor.am_std)
            wav = self.engine.executor.voc_sess.run(
                output_names=None, input_feed={'logmel': mel})[0]
            wav_list.append(wav)
        wavs = np.concatenate(wav_list)
        return wavs
    def streamTTS(self, text):
        get_tone_ids = False
@@ -88,9 +87,7 @@ class TTS:
        # front
        input_ids = self.frontend.get_input_ids(
            text, merge_sentences=merge_sentences, get_tone_ids=get_tone_ids)
        phone_ids = input_ids["phone_ids"]

        for i in range(len(phone_ids)):
@@ -105,14 +102,15 @@ class TTS:
                mel = mel[0]

                # voc streaming
                mel_chunks = get_chunks(mel, self.config.voc_block,
                                        self.config.voc_pad, "voc")
                voc_chunk_num = len(mel_chunks)
                for i, mel_chunk in enumerate(mel_chunks):
                    sub_wav = self.executor.voc_sess.run(
                        output_names=None, input_feed={'logmel': mel_chunk})
                    sub_wav = self.depadding(
                        sub_wav[0], voc_chunk_num, i, self.config.voc_block,
                        self.config.voc_pad, self.config.voc_upsample)

                    yield self.after_process(sub_wav)
@@ -130,7 +128,8 @@ class TTS:
                end = min(self.config.voc_block + self.config.voc_pad, mel_len)

                # streaming am
                hss = get_chunks(orig_hs, self.config.am_block,
                                 self.config.am_pad, "am")
                am_chunk_num = len(hss)
                for i, hs in enumerate(hss):
                    am_decoder_output = self.executor.am_decoder_sess.run(
@@ -147,7 +146,8 @@ class TTS:
                    sub_mel = denorm(normalized_mel, self.executor.am_mu,
                                     self.executor.am_std)
                    sub_mel = self.depadding(sub_mel, am_chunk_num, i,
                                             self.config.am_block,
                                             self.config.am_pad, 1)

                    if i == 0:
                        mel_streaming = sub_mel
@@ -165,23 +165,22 @@ class TTS:
                            output_names=None, input_feed={'logmel': voc_chunk})
                        sub_wav = self.depadding(
                            sub_wav[0], voc_chunk_num, voc_chunk_id,
                            self.config.voc_block, self.config.voc_pad,
                            self.config.voc_upsample)
                        yield self.after_process(sub_wav)

                        voc_chunk_id += 1
                        start = max(0, voc_chunk_id * self.config.voc_block -
                                    self.config.voc_pad)
                        end = min((voc_chunk_id + 1) * self.config.voc_block +
                                  self.config.voc_pad, mel_len)

            else:
                logging.error(
                    "Only support fastspeech2_csmsc or fastspeech2_cnndecoder_csmsc on streaming tts."
                )
    def streamTTSBytes(self, text):
        for wav in self.engine.executor.infer(
                text=text,
@@ -191,19 +190,14 @@ class TTS:
            wav = float2pcm(wav)  # float32 to int16
            wav_bytes = wav.tobytes()  # to bytes
            yield wav_bytes

    def after_process(self, wav):
        # for tvm
        wav = float2pcm(wav)  # float32 to int16
        wav_bytes = wav.tobytes()  # to bytes
        wav_base64 = base64.b64encode(wav_bytes).decode('utf8')  # to base64
        return wav_base64

    def streamTTS_TVM(self, text):
        # optimize with TVM
        pass
# This vpr demo does not use MySQL or Milvus; it is only for the docker demo.
import logging

import faiss
import numpy as np

from .sql_helper import DataBase
from .vpr_encode import get_audio_embedding
class VPR:
    def __init__(self, db_path, dim, top_k) -> None:
        # initialization
@@ -14,15 +16,15 @@ class VPR:
        self.top_k = top_k
        self.dtype = np.float32
        self.vpr_idx = 0

        # db init
        self.db = DataBase(db_path)

        # faiss init
        index_ip = faiss.IndexFlatIP(dim)
        self.index_ip = faiss.IndexIDMap(index_ip)
        self.init()
    def init(self):
        # demo init: register the vectors stored in the database into faiss
        sql_dbs = self.db.select_all()
@@ -34,12 +36,13 @@ class VPR:
                if len(vc.shape) == 1:
                    vc = np.expand_dims(vc, axis=0)
                # build the index
                self.index_ip.add_with_ids(vc, np.array(
                    (idx, )).astype('int64'))
        logging.info("faiss index built")
    def faiss_enroll(self, idx, vc):
        self.index_ip.add_with_ids(vc, np.array((idx, )).astype('int64'))

    def vpr_enroll(self, username, wav_path):
        # enroll a voiceprint
        emb = get_audio_embedding(wav_path)
@@ -53,21 +56,22 @@ class VPR:
        else:
            last_idx, mess = None, "get embedding failed"
        return last_idx
    def vpr_recog(self, wav_path):
        # voiceprint recognition
        emb_search = get_audio_embedding(wav_path)

        if emb_search is not None:
            emb_search = np.expand_dims(emb_search, axis=0)
            D, I = self.index_ip.search(emb_search, self.top_k)
            D = D.tolist()[0]
            I = I.tolist()[0]
            return [(round(D[i] * 100, 2), I[i]) for i in range(len(D))
                    if I[i] != -1]
        else:
            logging.error("recognition failed")
            return None
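`faiss.IndexFlatIP` ranks stored vectors by inner product; with L2-normalized embeddings that is cosine similarity, which is what lets `vpr_recog` report `D[i] * 100` as a percentage-style score. A numpy stand-in for the search call, useful for understanding the ranking without faiss installed (the function name is illustrative):

```python
import numpy as np


def ip_search(index_vectors, query, top_k):
    # inner-product score against every enrolled vector, like IndexFlatIP.search
    scores = index_vectors @ query
    order = np.argsort(-scores)[:top_k]
    return scores[order], order


db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]], dtype=np.float32)
query = np.array([1.0, 0.0], dtype=np.float32)
D, I = ip_search(db, query, top_k=2)
```

Unlike faiss, this sketch has no ID mapping and never returns `-1` for missing neighbors, which is why the real code filters `I[i] != -1`.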
    def do_search_vpr(self, wav_path):
        spk_ids, paths, scores = [], [], []
        recog_result = self.vpr_recog(wav_path)
@@ -78,41 +82,39 @@ class VPR:
            scores.append(score)
            paths.append("")
        return spk_ids, paths, scores
    def vpr_del(self, username):
        # delete a voiceprint by username:
        # look up the user's ids, then remove the matching vectors
        res = self.db.select_by_username(username)
        for r in res:
            idx = r['id']
            self.index_ip.remove_ids(np.array((idx, )).astype('int64'))

        self.db.drop_by_username(username)
    def vpr_list(self):
        # list all enrolled records
        return self.db.select_all()

    def do_list(self):
        spk_ids, vpr_ids = [], []
        for res in self.db.select_all():
            spk_ids.append(res['username'])
            vpr_ids.append(res['id'])
        return spk_ids, vpr_ids

    def do_get_wav(self, vpr_idx):
        res = self.db.select_by_id(vpr_idx)
        return res[0]['wavpath']
    def vpr_data(self, idx):
        # fetch the record with the given id
        res = self.db.select_by_id(idx)
        return res

    def vpr_droptable(self):
        # drop the table
        self.db.drop_table()
        # clear faiss
        self.index_ip.reset()
import logging

import numpy as np

from paddlespeech.cli.vector import VectorExecutor

vector_executor = VectorExecutor()
def get_audio_embedding(path):
    """
    Use vpr_inference to generate embedding of audio
    """
@@ -16,5 +19,3 @@ def get_audio_embedding(path):
    except Exception as e:
        logging.error(f"Error with embedding:{e}")
        return None
@@ -2,6 +2,7 @@ from typing import List
from fastapi import WebSocket


class ConnectionManager:
    def __init__(self):
        # active WebSocket connections
@@ -28,4 +29,4 @@ class ConnectionManager:
            await connection.send_text(message)


manager = ConnectionManager()
import os

import soundfile as sf

from paddlespeech.cli.asr.infer import ASRExecutor
from src.SpeechBase.asr import ASR
from src.SpeechBase.nlp import NLP
from src.SpeechBase.tts import TTS
class Robot:
    def __init__(self,
                 asr_config,
                 tts_config,
                 asr_init_path,
                 ie_model_path=None) -> None:
        self.nlp = NLP(ie_model_path=ie_model_path)
        self.asr = ASR(config_path=asr_config)
        self.tts = TTS(config_path=tts_config)
        self.tts_sample_rate = 24000
        self.asr_sample_rate = 16000

        # the streaming model is less accurate than the end-to-end one,
        # so the two are kept separate here
        self.asr_model = ASRExecutor()
        self.asr_name = "conformer_wenetspeech"
        self.warm_up_asrmodel(asr_init_path)
    def warm_up_asrmodel(self, asr_init_path):
        if not os.path.exists(asr_init_path):
            path_dir = os.path.dirname(asr_init_path)
            if not os.path.exists(path_dir):
                os.makedirs(path_dir, exist_ok=True)
            # generate the initial audio with TTS at a 24000 Hz sample rate
            text = "生成初始音频"
            self.text2speech(text, asr_init_path)

        # initialize the asr model
        self.asr_model(
            asr_init_path,
            model=self.asr_name,
            lang='zh',
            sample_rate=16000,
            force_yes=True)
    def speech2text(self, audio_file):
        self.asr_model.preprocess(self.asr_name, audio_file)
        self.asr_model.infer(self.asr_name)
        res = self.asr_model.postprocess()
        return res

    def text2speech(self, text, outpath):
        wav = self.tts.offlineTTS(text)
        sf.write(outpath, wav, samplerate=self.tts_sample_rate)
        res = wav
        return res

    def text2speechStream(self, text):
        for sub_wav_base64 in self.tts.streamTTS(text=text):
            yield sub_wav_base64

    def text2speechStreamBytes(self, text):
        for wav_bytes in self.tts.streamTTSBytes(text=text):
            yield wav_bytes
@@ -66,5 +70,3 @@ class Robot:
    def ie(self, text):
        result = self.nlp.ie(text)
        return result
import random


def randName(n=5):
    return "".join(random.sample('zyxwvutsrqponmlkjihgfedcba', n))


def SuccessRequest(result=None, message="ok"):
    return {"code": 0, "result": result, "message": message}


def ErrorRequest(result=None, message="error"):
    return {"code": -1, "result": result, "message": message}
([简体中文](./README_cn.md)|English)
# Story Talker
## Introduction
Storybooks are an important part of early childhood education, but parents often lack the time to read them aloud. Very young children may not yet recognize the Chinese characters in a storybook, or may simply want to "listen" rather than "read".
...
(简体中文|[English](./README.md))
# Story Talker
## Introduction
Storybooks are an important part of early childhood education, but parents often lack the time to read them aloud. Very young children may not yet recognize the Chinese characters in a storybook, or may simply want to "listen" rather than "read".
You can extract the text of a storybook with `PaddleOCR` and have it read aloud by the `TTS` module of `PaddleSpeech`.
## Usage
Run the following command line to get started:
```bash
./run.sh
```
The results are shown in the [notebook](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/tutorial/tts/tts_tutorial.ipynb).
...@@ -28,6 +28,7 @@ asr_online:
    sample_rate: 16000
    cfg_path:
    decode_method:
    num_decoding_left_chunks: -1
    force_yes: True
    device: 'cpu'  # cpu or gpu:id
    decode_method: "attention_rescoring"
...
...@@ -34,7 +34,7 @@ if __name__ == '__main__':
    n = 0
    for m in rtfs:
        # not accurate, may have duplicate log
        n += 1
        T += m['T']
        P += m['P']
...
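The loop above accumulates total audio duration `T` and total processing time `P` over parsed log records before dividing to get an overall real-time factor. A self-contained sketch of that aggregation, with made-up records in place of the real parsed logs:

```python
# hypothetical parsed log records: T = audio duration (s), P = processing time (s)
rtfs = [{"T": 10.0, "P": 2.5}, {"T": 8.0, "P": 1.6}]

T = 0.0
P = 0.0
n = 0
for m in rtfs:
    n += 1
    T += m["T"]
    P += m["P"]

rtf = P / T  # overall real-time factor; < 1.0 means faster than real time
```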
...@@ -29,7 +29,7 @@ tts_online:
    phones_dict:
    tones_dict:
    speaker_dict:
    # voc (vocoder) choices=['mb_melgan_csmsc', 'hifigan_csmsc']
    # Both mb_melgan_csmsc and hifigan_csmsc support streaming voc inference
...@@ -70,7 +70,6 @@ tts_online-onnx:
    phones_dict:
    tones_dict:
    speaker_dict:
    am_sample_rate: 24000
    am_sess_conf:
        device: "cpu"  # set 'gpu:id' or 'cpu'
...@@ -79,7 +78,7 @@ tts_online-onnx:
    # voc (vocoder) choices=['mb_melgan_csmsc_onnx', 'hifigan_csmsc_onnx']
    # Both mb_melgan_csmsc_onnx and hifigan_csmsc_onnx support streaming voc inference
    voc: 'mb_melgan_csmsc_onnx'
    voc_ckpt:
    voc_sample_rate: 24000
    voc_sess_conf:
...@@ -100,4 +99,4 @@ tts_online-onnx:
    voc_pad: 14
    # voc_upsample should be same as n_shift on voc config.
    voc_upsample: 300
...@@ -29,7 +29,7 @@ tts_online:
    phones_dict:
    tones_dict:
    speaker_dict:
    # voc (vocoder) choices=['mb_melgan_csmsc', 'hifigan_csmsc']
    # Both mb_melgan_csmsc and hifigan_csmsc support streaming voc inference
...@@ -70,7 +70,6 @@ tts_online-onnx:
    phones_dict:
    tones_dict:
    speaker_dict:
    am_sample_rate: 24000
    am_sess_conf:
        device: "cpu"  # set 'gpu:id' or 'cpu'
...@@ -79,7 +78,7 @@ tts_online-onnx:
    # voc (vocoder) choices=['mb_melgan_csmsc_onnx', 'hifigan_csmsc_onnx']
    # Both mb_melgan_csmsc_onnx and hifigan_csmsc_onnx support streaming voc inference
    voc: 'mb_melgan_csmsc_onnx'
    voc_ckpt:
    voc_sample_rate: 24000
    voc_sess_conf:
...@@ -100,4 +99,4 @@ tts_online-onnx:
    voc_pad: 14
    # voc_upsample should be same as n_shift on voc config.
    voc_upsample: 300
([简体中文](./README_cn.md)|English)
# Style FastSpeech2
## Introduction
[FastSpeech2](https://arxiv.org/abs/2006.04558) is a classical acoustic model for text-to-speech synthesis that introduces controllable speech inputs, including `phoneme duration`, `energy` and `pitch`.
...
(简体中文|[English](./README.md))
# Style FastSpeech2
## Introduction
[FastSpeech2](https://arxiv.org/abs/2006.04558) is a classical acoustic model for text-to-speech synthesis that introduces controllable speech inputs, including `phoneme duration`, `energy` and `pitch`.
At prediction time, you can modify these variables to get some interesting results.
For example:
1. The `duration` in `FastSpeech2` can control the speed of the audio while keeping the `pitch` unchanged. (In some speech tools, increasing the speed also raises the pitch, and vice versa.)
2. When we set the `pitch` of a sentence to its mean value and set the `tones` of all phonemes to `1`, we get a `robot-style` timbre.
3. When we raise the `pitch` of an adult female voice (by a fixed ratio), we get a `child-style` timbre.
Different phonemes in a sentence can have different scales for `duration` and `pitch`. You can set different scale ratios to emphasize or weaken the pronunciation of specific phonemes.
## Usage
Run the following command line to get started:
```bash
./run.sh
```
`run.sh` first runs `source path.sh` to set the environment variables.
If you want to try your own sentences, replace the sentences in `sentences.txt`.
For more details, see `style_syn.py`.
Audio samples are available at [style-control-in-fastspeech2](https://paddlespeech.readthedocs.io/en/latest/tts/demo.html#style-control-in-fastspeech2).
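The three effects described above can be illustrated schematically. The arrays, scale factors, and variable names below are hypothetical and do not reflect the actual `style_syn.py` API:

```python
# hypothetical per-phoneme predictions from the acoustic model
durations = [3, 5, 4, 6]                 # frames per phoneme
pitch = [210.0, 198.0, 225.0, 240.0]     # F0 per phoneme (Hz)

# 1. scale durations to slow speech down while leaving pitch untouched
duration_scale = 1.2
slow = [round(d * duration_scale) for d in durations]

# 2. flatten pitch to the sentence mean -> "robot-style" timbre
mean_f0 = sum(pitch) / len(pitch)
robot = [mean_f0] * len(pitch)

# 3. raise pitch by a fixed ratio -> "child-style" timbre
pitch_scale = 1.5
child = [f0 * pitch_scale for f0 in pitch]
```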
...@@ -16,8 +16,8 @@ You can choose one way from easy, medium and hard to install paddlespeech.
The input of this demo should be a text of the specific language that can be passed via argument.
### 3. Usage
- Command Line (Recommended)
The default acoustic model is `Fastspeech2`, the default vocoder is `HiFiGAN`, and the default inference method is dygraph inference.
- Chinese
```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!"
```
...@@ -45,7 +45,33 @@ The input of this demo should be a text of the specific language that can be passed via argument.
You can change `spk_id` here.
```bash
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "hello, boys" --lang en --spk_id 0
```
- Chinese-English mixed, multi-speaker
You can change `spk_id` here.
```bash
# The `am` must be `fastspeech2_mix`!
# The `lang` must be `mix`!
# For now, the `voc` must be one trained on a Chinese dataset!
# spk_id 174 is csmsc, spk_id 175 is ljspeech
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_aishell3 --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174_aishell3.wav
paddlespeech tts --am fastspeech2_mix --voc pwgan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175_pwgan.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175.wav
```
- Use ONNXRuntime inference:
```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!" --output default.wav --use_onnx True
paddlespeech tts --am speedyspeech_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output ss.wav --use_onnx True
paddlespeech tts --voc mb_melgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output mb.wav --use_onnx True
paddlespeech tts --voc pwgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc pwgan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc hifigan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc pwgan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc hifigan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc hifigan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_hifigan.wav --use_onnx True
```
Usage:
```bash
...@@ -68,6 +94,8 @@ The input of this demo should be a text of the specific language that can be passed via argument.
```
- `lang`: Language of tts task. Default: `zh`.
- `device`: Choose device to execute model inference. Default: default device of paddlepaddle in current environment.
- `output`: Output wave filepath. Default: `output.wav`.
- `use_onnx`: whether to use ONNXRuntime inference.
- `fs`: sample rate for ONNX models when model files are specified manually.
Output:
```bash
...@@ -75,54 +103,76 @@ The input of this demo should be a text of the specific language that can be passed via argument.
```
- Python API
- Dygraph inference:
```python
import paddle
from paddlespeech.cli.tts import TTSExecutor

tts_executor = TTSExecutor()
wav_file = tts_executor(
    text='今天的天气不错啊',
    output='output.wav',
    am='fastspeech2_csmsc',
    am_config=None,
    am_ckpt=None,
    am_stat=None,
    spk_id=0,
    phones_dict=None,
    tones_dict=None,
    speaker_dict=None,
    voc='pwgan_csmsc',
    voc_config=None,
    voc_ckpt=None,
    voc_stat=None,
    lang='zh',
    device=paddle.get_device())
print('Wave file has been generated: {}'.format(wav_file))
```
- ONNXRuntime inference:
```python
from paddlespeech.cli.tts import TTSExecutor
tts_executor = TTSExecutor()
wav_file = tts_executor(
text='对数据集进行预处理',
output='output.wav',
am='fastspeech2_csmsc',
voc='hifigan_csmsc',
lang='zh',
use_onnx=True,
cpu_threads=2)
```
Output:
```bash
Wave file has been generated: output.wav
```
### 4. Pretrained Models
Here is a list of pretrained models released by PaddleSpeech that can be used by command and python API:
- Acoustic model
| Model | Language |
| :--- | :---: |
| speedyspeech_csmsc | zh |
| fastspeech2_csmsc | zh |
| fastspeech2_ljspeech | en |
| fastspeech2_aishell3 | zh |
| fastspeech2_vctk | en |
| fastspeech2_cnndecoder_csmsc | zh |
| fastspeech2_mix | mix |
| tacotron2_csmsc | zh |
| tacotron2_ljspeech | en |
- Vocoder
| Model | Language |
| :--- | :---: |
| pwgan_csmsc | zh |
| pwgan_ljspeech | en |
| pwgan_aishell3 | zh |
| pwgan_vctk | en |
| mb_melgan_csmsc | zh |
| style_melgan_csmsc | zh |
| hifigan_csmsc | zh |
| hifigan_ljspeech | en |
| hifigan_aishell3 | zh |
| hifigan_vctk | en |
| wavernn_csmsc | zh |
(简体中文|[English](./README.md))
# Text-to-Speech
## Introduction
Text-to-speech is a natural language modeling process that converts text into speech for audio presentation.
This demo is an implementation of generating audio from a given text, which can be done with a single `PaddleSpeech` command or a few lines of Python code.
## Usage
### 1. Installation
See the [installation documentation](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install_cn.md).
You can choose one of three installation methods: easy, medium, or hard.
### 2. Prepare the input
The input of this demo is text in a specific language, passed via argument.
### 3. Usage
- Command line (recommended)
The default acoustic model is `Fastspeech2`, the default vocoder is `HiFiGAN`, and the default inference method is dygraph inference.
- Chinese
```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!"
```
...@@ -34,7 +31,7 @@
```
- Chinese, multi-speaker
You can change the `spk_id` here.
```bash
paddlespeech tts --am fastspeech2_aishell3 --voc pwgan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0
```
...@@ -45,10 +42,36 @@
```
- English, multi-speaker
You can change the `spk_id` here.
```bash
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "hello, boys" --lang en --spk_id 0
```
- Chinese-English mixed, multi-speaker
You can change the `spk_id` here.
```bash
# The `am` must be `fastspeech2_mix`!
# The `lang` must be `mix`!
# For now, the `voc` must be one trained on a Chinese dataset!
# spk_id 174 is csmsc, spk_id 175 is ljspeech
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_aishell3 --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174_aishell3.wav
paddlespeech tts --am fastspeech2_mix --voc pwgan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175_pwgan.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175.wav
```
- Use ONNXRuntime inference:
```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!" --output default.wav --use_onnx True
paddlespeech tts --am speedyspeech_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output ss.wav --use_onnx True
paddlespeech tts --voc mb_melgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output mb.wav --use_onnx True
paddlespeech tts --voc pwgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc pwgan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc hifigan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc pwgan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc hifigan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc hifigan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_hifigan.wav --use_onnx True
```
Usage:
```bash
...@@ -71,6 +94,8 @@
```
- `lang`: language of the TTS task. Default: `zh`.
- `device`: device to run inference on. Default: the default device of paddlepaddle in the current environment.
- `output`: path of the output audio. Default: `output.wav`.
- `use_onnx`: whether to use ONNXRuntime for inference.
- `fs`: sample rate when using specific ONNX model files.
Output:
```bash
...@@ -78,31 +103,44 @@
```
- Python API
- Dygraph inference:
```python
import paddle
from paddlespeech.cli.tts import TTSExecutor

tts_executor = TTSExecutor()
wav_file = tts_executor(
    text='今天的天气不错啊',
    output='output.wav',
    am='fastspeech2_csmsc',
    am_config=None,
    am_ckpt=None,
    am_stat=None,
    spk_id=0,
    phones_dict=None,
    tones_dict=None,
    speaker_dict=None,
    voc='pwgan_csmsc',
    voc_config=None,
    voc_ckpt=None,
    voc_stat=None,
    lang='zh',
    device=paddle.get_device())
print('Wave file has been generated: {}'.format(wav_file))
```
- ONNXRuntime inference:
```python
from paddlespeech.cli.tts import TTSExecutor
tts_executor = TTSExecutor()
wav_file = tts_executor(
text='对数据集进行预处理',
output='output.wav',
am='fastspeech2_csmsc',
voc='hifigan_csmsc',
lang='zh',
use_onnx=True,
cpu_threads=2)
```
Output:
```bash
Wave file has been generated: output.wav
```
...@@ -112,19 +150,29 @@
Here is a list of pretrained models released by PaddleSpeech that can be used by the command line and Python API:
- Acoustic model
| Model | Language |
| :--- | :---: |
| speedyspeech_csmsc | zh |
| fastspeech2_csmsc | zh |
| fastspeech2_ljspeech | en |
| fastspeech2_aishell3 | zh |
| fastspeech2_vctk | en |
| fastspeech2_cnndecoder_csmsc | zh |
| fastspeech2_mix | mix |
| tacotron2_csmsc | zh |
| tacotron2_ljspeech | en |
- Vocoder
| Model | Language |
| :--- | :---: |
| pwgan_csmsc | zh |
| pwgan_ljspeech | en |
| pwgan_aishell3 | zh |
| pwgan_vctk | en |
| mb_melgan_csmsc | zh |
| style_melgan_csmsc | zh |
| hifigan_csmsc | zh |
| hifigan_ljspeech | en |
| hifigan_aishell3 | zh |
| hifigan_vctk | en |
| wavernn_csmsc | zh |
braceexpand
colorlog
editdistance
fastapi
g2p_en
g2pM
h5py
...@@ -14,39 +9,45 @@ inflect
inflect
jieba
jsonlines
kaldiio
keyboard
librosa==0.8.1
loguru
matplotlib
myst-parser
nara_wpe
numpydoc
onnxruntime==1.10.0
opencc
paddlenlp
paddlepaddle>=2.2.2
paddlespeech_feat
pandas
pathos == 0.2.8
pattern_singleton
Pillow>=9.0.0
praatio==5.0.0
prettytable
pypinyin<=0.44.0
pypinyin-dict
python-dateutil
pyworld==0.2.12
recommonmark>=0.5.0
resampy==0.2.2
sacrebleu
scipy
sentencepiece~=0.1.96
soundfile~=0.10
sphinx
sphinx-autobuild
sphinx-markdown-tables
sphinx_rtd_theme
textgrid
timer
tqdm
typeguard
uvicorn
visualdl
webrtcvad
websockets
yacs~=0.1.8
zhon
...@@ -20,4 +20,7 @@ Subpackages
   paddlespeech.audio.io
   paddlespeech.audio.metric
   paddlespeech.audio.sox_effects
   paddlespeech.audio.streamdata
   paddlespeech.audio.text
   paddlespeech.audio.transform
   paddlespeech.audio.utils
paddlespeech.audio.streamdata.autodecode module
===============================================

.. automodule:: paddlespeech.audio.streamdata.autodecode
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.cache module
==========================================

.. automodule:: paddlespeech.audio.streamdata.cache
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.compat module
===========================================

.. automodule:: paddlespeech.audio.streamdata.compat
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.extradatasets module
==================================================

.. automodule:: paddlespeech.audio.streamdata.extradatasets
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.filters module
============================================

.. automodule:: paddlespeech.audio.streamdata.filters
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.gopen module
==========================================

.. automodule:: paddlespeech.audio.streamdata.gopen
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.handlers module
=============================================

.. automodule:: paddlespeech.audio.streamdata.handlers
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.mix module
========================================

.. automodule:: paddlespeech.audio.streamdata.mix
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.paddle\_utils module
==================================================

.. automodule:: paddlespeech.audio.streamdata.paddle_utils
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.pipeline module
=============================================

.. automodule:: paddlespeech.audio.streamdata.pipeline
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.audio.streamdata package
=====================================

.. automodule:: paddlespeech.audio.streamdata
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

.. toctree::
   :maxdepth: 4

   paddlespeech.audio.streamdata.autodecode
   paddlespeech.audio.streamdata.cache
   paddlespeech.audio.streamdata.compat
   paddlespeech.audio.streamdata.extradatasets
   paddlespeech.audio.streamdata.filters
   paddlespeech.audio.streamdata.gopen
   paddlespeech.audio.streamdata.handlers
   paddlespeech.audio.streamdata.mix
   paddlespeech.audio.streamdata.paddle_utils
   paddlespeech.audio.streamdata.pipeline
   paddlespeech.audio.streamdata.shardlists
   paddlespeech.audio.streamdata.tariterators
   paddlespeech.audio.streamdata.utils
   paddlespeech.audio.streamdata.writer
paddlespeech.audio.streamdata.shardlists module
===============================================

.. automodule:: paddlespeech.audio.streamdata.shardlists
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.tariterators module
=================================================

.. automodule:: paddlespeech.audio.streamdata.tariterators
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.utils module
==========================================

.. automodule:: paddlespeech.audio.streamdata.utils
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.streamdata.writer module
===========================================

.. automodule:: paddlespeech.audio.streamdata.writer
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.audio.text package
===============================

.. automodule:: paddlespeech.audio.text
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

.. toctree::
   :maxdepth: 4

   paddlespeech.audio.text.text_featurizer
   paddlespeech.audio.text.utility

paddlespeech.audio.text.text\_featurizer module
===============================================

.. automodule:: paddlespeech.audio.text.text_featurizer
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.text.utility module
======================================

.. automodule:: paddlespeech.audio.text.utility
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.audio.transform.add\_deltas module
===============================================

.. automodule:: paddlespeech.audio.transform.add_deltas
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.transform.channel\_selector module
=====================================================

.. automodule:: paddlespeech.audio.transform.channel_selector
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.transform.cmvn module
========================================

.. automodule:: paddlespeech.audio.transform.cmvn
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.transform.functional module
==============================================

.. automodule:: paddlespeech.audio.transform.functional
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.transform.perturb module
===========================================

.. automodule:: paddlespeech.audio.transform.perturb
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.audio.transform package
====================================

.. automodule:: paddlespeech.audio.transform
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

.. toctree::
   :maxdepth: 4

   paddlespeech.audio.transform.add_deltas
   paddlespeech.audio.transform.channel_selector
   paddlespeech.audio.transform.cmvn
   paddlespeech.audio.transform.functional
   paddlespeech.audio.transform.perturb
   paddlespeech.audio.transform.spec_augment
   paddlespeech.audio.transform.spectrogram
   paddlespeech.audio.transform.transform_interface
   paddlespeech.audio.transform.transformation
   paddlespeech.audio.transform.wpe
paddlespeech.audio.transform.spec\_augment module
=================================================

.. automodule:: paddlespeech.audio.transform.spec_augment
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.transform.spectrogram module
===============================================

.. automodule:: paddlespeech.audio.transform.spectrogram
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.transform.transform\_interface module
========================================================

.. automodule:: paddlespeech.audio.transform.transform_interface
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.transform.transformation module
==================================================

.. automodule:: paddlespeech.audio.transform.transformation
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.transform.wpe module
=======================================

.. automodule:: paddlespeech.audio.transform.wpe
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.audio.utils.check\_kwargs module
=============================================

.. automodule:: paddlespeech.audio.utils.check_kwargs
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.audio.utils.dynamic\_import module
===============================================

.. automodule:: paddlespeech.audio.utils.dynamic_import
   :members:
   :undoc-members:
   :show-inheritance:
...@@ -12,8 +12,11 @@ Submodules
.. toctree::
   :maxdepth: 4

   paddlespeech.audio.utils.check_kwargs
   paddlespeech.audio.utils.download
   paddlespeech.audio.utils.dynamic_import
   paddlespeech.audio.utils.error
   paddlespeech.audio.utils.log
   paddlespeech.audio.utils.numeric
   paddlespeech.audio.utils.tensor_utils
   paddlespeech.audio.utils.time
paddlespeech.audio.utils.tensor\_utils module
=============================================

.. automodule:: paddlespeech.audio.utils.tensor_utils
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.kws.exps.mdtc.collate module
=========================================

.. automodule:: paddlespeech.kws.exps.mdtc.collate
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.kws.exps.mdtc.compute\_det module
==============================================

.. automodule:: paddlespeech.kws.exps.mdtc.compute_det
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.kws.exps.mdtc.plot\_det\_curve module
==================================================

.. automodule:: paddlespeech.kws.exps.mdtc.plot_det_curve
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.kws.exps.mdtc package
==================================

.. automodule:: paddlespeech.kws.exps.mdtc
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

.. toctree::
   :maxdepth: 4

   paddlespeech.kws.exps.mdtc.collate
   paddlespeech.kws.exps.mdtc.compute_det
   paddlespeech.kws.exps.mdtc.plot_det_curve
   paddlespeech.kws.exps.mdtc.score
   paddlespeech.kws.exps.mdtc.train
paddlespeech.kws.exps.mdtc.score module
=======================================

.. automodule:: paddlespeech.kws.exps.mdtc.score
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.kws.exps.mdtc.train module
=======================================

.. automodule:: paddlespeech.kws.exps.mdtc.train
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.kws.exps package
=============================

.. automodule:: paddlespeech.kws.exps
   :members:
   :undoc-members:
   :show-inheritance:

Subpackages
-----------

.. toctree::
   :maxdepth: 4

   paddlespeech.kws.exps.mdtc
...@@ -12,4 +12,5 @@ Subpackages
.. toctree::
   :maxdepth: 4

   paddlespeech.kws.exps
   paddlespeech.kws.models
paddlespeech.resource.model\_alias module
=========================================

.. automodule:: paddlespeech.resource.model_alias
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.resource.pretrained\_models module
===============================================

.. automodule:: paddlespeech.resource.pretrained_models
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.resource.resource module
=====================================

.. automodule:: paddlespeech.resource.resource
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.resource package
=============================

.. automodule:: paddlespeech.resource
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

.. toctree::
   :maxdepth: 4

   paddlespeech.resource.model_alias
   paddlespeech.resource.pretrained_models
   paddlespeech.resource.resource
   paddlespeech.cli
   paddlespeech.cls
   paddlespeech.kws
   paddlespeech.resource
   paddlespeech.s2t
   paddlespeech.server
   paddlespeech.t2s
   paddlespeech.text
   paddlespeech.utils
   paddlespeech.vector
   paddlespeech.s2t.models
   paddlespeech.s2t.modules
   paddlespeech.s2t.training
   paddlespeech.s2t.utils
   paddlespeech.server.utils.config
   paddlespeech.server.utils.errors
   paddlespeech.server.utils.exception
   paddlespeech.server.utils.onnx_infer
   paddlespeech.server.utils.paddle_predictor
   paddlespeech.server.utils.util
   paddlespeech.t2s.datasets.get_feats
   paddlespeech.t2s.datasets.ljspeech
   paddlespeech.t2s.datasets.preprocess_utils
   paddlespeech.t2s.datasets.sampler
   paddlespeech.t2s.datasets.vocoder_batch_fn
paddlespeech.t2s.datasets.sampler module
========================================

.. automodule:: paddlespeech.t2s.datasets.sampler
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.t2s.exps.ernie\_sat.align module
=============================================

.. automodule:: paddlespeech.t2s.exps.ernie_sat.align
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.t2s.exps.ernie\_sat.normalize module
=================================================

.. automodule:: paddlespeech.t2s.exps.ernie_sat.normalize
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.t2s.exps.ernie\_sat.preprocess module
==================================================

.. automodule:: paddlespeech.t2s.exps.ernie_sat.preprocess
   :members:
   :undoc-members:
   :show-inheritance:
paddlespeech.t2s.exps.ernie\_sat package
========================================

.. automodule:: paddlespeech.t2s.exps.ernie_sat
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

.. toctree::
   :maxdepth: 4

   paddlespeech.t2s.exps.ernie_sat.align
   paddlespeech.t2s.exps.ernie_sat.normalize
   paddlespeech.t2s.exps.ernie_sat.preprocess
   paddlespeech.t2s.exps.ernie_sat.synthesize
   paddlespeech.t2s.exps.ernie_sat.synthesize_e2e
   paddlespeech.t2s.exps.ernie_sat.train
   paddlespeech.t2s.exps.ernie_sat.utils
paddlespeech.t2s.exps.ernie\_sat.synthesize module
==================================================

.. automodule:: paddlespeech.t2s.exps.ernie_sat.synthesize
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.t2s.exps.ernie\_sat.synthesize\_e2e module
=======================================================

.. automodule:: paddlespeech.t2s.exps.ernie_sat.synthesize_e2e
   :members:
   :undoc-members:
   :show-inheritance:

paddlespeech.t2s.exps.ernie\_sat.train module
=============================================

.. automodule:: paddlespeech.t2s.exps.ernie_sat.train
   :members:
   :undoc-members:
   :show-inheritance: