提交 e3298c79 编写于 作者: H Hui Zhang

Merge branch 'develop' into u2_export

---
name: "\U0001F41B S2T Bug Report"
about: Create a report to help us improve
title: "[S2T]XXXX"
labels: Bug, S2T
assignees: zh794390558
---
......@@ -27,7 +27,7 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu]
- GCC/G++ Version [e.g. 8.3]
- Python Version [e.g. 3.7]
......
---
name: "\U0001F41B TTS Bug Report"
about: Create a report to help us improve
title: "[TTS]XXXX"
labels: Bug, T2S
assignees: yt605155624
---
For support and discussions, please use our [GitHub Discussions](https://github.com/PaddlePaddle/DeepSpeech/discussions).
If you've found a bug then please create an issue with the following information:
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu]
- GCC/G++ Version [e.g. 8.3]
- Python Version [e.g. 3.7]
- PaddlePaddle Version [e.g. 2.0.0]
- Model Version [e.g. 2.0.0]
- GPU/DRIVER Information [e.g. Tesla V100-SXM2-32GB/440.64.00]
- CUDA/CUDNN Version [e.g. cuda-10.2]
- MKL Version
- TensorRT Version
**Additional context**
Add any other context about the problem here.
---
name: "\U0001F680 Feature Request"
about: As a user, I want to request a new feature for the product.
title: ''
labels: feature request
assignees: D-DanielYang, iftaken
---
## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the feature you'd like:**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered:**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
---
name: "\U0001F9E9 Others"
about: Report any other non-support related issues.
title: ''
labels: ''
assignees: ''
---
## Others
<!--
You can report any issues that are not applicable to the previous types of templates, including but not limited to: enhancement suggestions, feedback on the use of the framework, version compatibility issues, unclear error information, etc.
-->
---
name: "\U0001F914 Ask a Question"
about: I want to ask a question.
title: ''
labels: Question
assignees: ''
---
## General Question
<!--
Before asking a question, make sure you have:
- Searched your question on Baidu/Google.
- Searched open and closed [GitHub issues](https://github.com/PaddlePaddle/PaddleSpeech/issues?q=is%3Aissue).
- Read the documentation:
- [Readme](https://github.com/PaddlePaddle/PaddleSpeech)
- [Doc](https://paddlespeech.readthedocs.io/)
-->
# Changelog
Date: 2022-3-22, Author: yt605155624.
Add features to: CLI:
- Support aishell3_hifigan, vctk_hifigan
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1587
Date: 2022-3-09, Author: yt605155624.
Add features to: T2S:
- Add ljspeech hifigan egs.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1549
Date: 2022-3-08, Author: yt605155624.
Add features to: T2S:
- Add aishell3 hifigan egs.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1545
Date: 2022-3-08, Author: yt605155624.
Add features to: T2S:
- Add vctk hifigan egs.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1544
Date: 2022-1-29, Author: yt605155624.
Add features to: T2S:
- Update aishell3 vc0 with new Tacotron2.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1419
Date: 2022-1-29, Author: yt605155624.
Add features to: T2S:
- Add ljspeech Tacotron2.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1416
Date: 2022-1-24, Author: yt605155624.
Add features to: T2S:
- Add csmsc WaveRNN.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1379
Date: 2022-1-19, Author: yt605155624.
Add features to: T2S:
- Add csmsc Tacotron2.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1314
Date: 2022-1-10, Author: Jackwaterveg.
Add features to: CLI:
- Support English (librispeech/asr1/transformer).
- Support choosing `decode_method` for conformer and transformer models.
- Refactor the config, using the unified config.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1297
***
Date: 2022-1-17, Author: Jackwaterveg.
Add features to: CLI:
- Support deepspeech2 online/offline model(aishell).
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/1356
***
Date: 2022-1-24, Author: Jackwaterveg.
Add features to: ctc_decoders:
- Support online ctc prefix-beam search decoder.
- Unified ctc online decoder and ctc offline decoder.
- PRLink: https://github.com/PaddlePaddle/PaddleSpeech/pull/821
***
include paddlespeech/t2s/exps/*.txt
include paddlespeech/t2s/frontend/*.yaml
......@@ -226,6 +226,12 @@ recall and elapsed time statistics are shown in the following figure:
The Milvus-based retrieval framework takes about 2.9 ms to retrieve at a 90% recall rate, and feature extraction takes about 500 ms (for a test audio of about 5 seconds), so a single audio query takes about 503 ms in total, which meets the needs of most application scenarios.
* computing the embedding takes ~500 ms
* retrieval with cosine similarity takes ~2.9 ms
* total: ~503 ms
> the test audio is about 5 seconds long
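
Under the hood, the retrieval step is a normalized inner-product (cosine) search over the stored embeddings. The following is a minimal timing sketch with `faiss` (the same index family used by the VPR demo later in this dump); the 192-dim embeddings and database size are illustrative assumptions:
```python
import time

import faiss
import numpy as np

dim, n_vectors = 192, 100000  # illustrative sizes
db = np.random.rand(n_vectors, dim).astype("float32")
query = np.random.rand(1, dim).astype("float32")

# L2-normalize so that inner product == cosine similarity
faiss.normalize_L2(db)
faiss.normalize_L2(query)

index = faiss.IndexFlatIP(dim)
index.add(db)

t0 = time.time()
scores, ids = index.search(query, 5)  # top-5 nearest neighbors
print(f"retrieval took {(time.time() - t0) * 1000:.2f} ms", scores, ids)
```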
### 6. Pretrained Models
Here is a list of pretrained models released by PaddleSpeech:
......
......@@ -26,8 +26,9 @@ def get_audios(path):
"""
supported_formats = [".wav", ".mp3", ".ogg", ".flac", ".m4a"]
    return [
        item
        for sublist in [[os.path.join(dir, file) for file in files]
                        for dir, _, files in list(os.walk(path))]
        for item in sublist if os.path.splitext(item)[1] in supported_formats
    ]
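
For example (a usage sketch; the directory path is an illustrative assumption):
```python
# collect every supported audio file under the dataset root
audio_paths = get_audios("./datasets/audio")
print(len(audio_paths), audio_paths[:3])
```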
......
([简体中文](./README_cn.md)|English)
# Metaverse
## Introduction
Metaverse is a new type of Internet application and social form that integrates virtual reality with a variety of new technologies.
......
(简体中文|[English](./README.md))
# Metaverse
## Introduction
Metaverse is a new type of Internet application and social form that integrates virtual reality with a variety of new technologies.
This demo makes a celebrity in a picture "speak". Combining the `TTS` module of `PaddleSpeech` with `PaddleGAN`, we integrate the installation and the specific modules into a single shell script.
## Usage
You can use the `TTS` module of `PaddleSpeech` and `PaddleGAN` to make your favorite person say anything you like and build your own virtual human.
Run `run.sh` to complete all the essential steps, including installation.
```bash
./run.sh
```
`run.sh` first executes `source path.sh` to set up the environment variables.
If you want to try your own sentences, replace the sentences in `sentences.txt`.
If you want to try other images, replace `download/Lamarr.png` in the shell script.
The results are shown in our [notebook](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/tutorial/tts/tts_tutorial.ipynb).
......@@ -19,6 +19,7 @@ The input of this cli demo should be a WAV file(`.wav`), and the sample rate mus
Here are sample files for this demo that can be downloaded:
```bash
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/85236145389.wav
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/123456789.wav
```
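
Besides the command line, the embeddings can also be compared directly in Python. A minimal sketch (assuming the two files above have been downloaded to the current directory; the default vector model produces 192-dim speaker embeddings):
```python
import numpy as np

from paddlespeech.cli.vector import VectorExecutor

vector_executor = VectorExecutor()
# extract a speaker embedding for each wav
emb1 = vector_executor(audio_file="85236145389.wav")
emb2 = vector_executor(audio_file="123456789.wav")

# cosine similarity between the two embeddings
score = np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2))
print(f"cosine similarity: {score:.4f}")
```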
### 3. Usage
......
......@@ -19,6 +19,7 @@
```bash
# the content of this audio is the digit string 85236145389
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/85236145389.wav
wget -c https://paddlespeech.bj.bcebos.com/vector/audio/123456789.wav
```
### 3. Usage
- Command Line (recommended)
......
......@@ -7,7 +7,7 @@ host: 0.0.0.0
port: 8090
# The task format in the engine_list is: <speech task>_<engine type>
# task choices = ['asr_python', 'asr_inference', 'tts_python', 'tts_inference', 'cls_python', 'cls_inference', 'text_python', 'vector_python']
protocol: 'http'
engine_list: ['asr_python', 'tts_python', 'cls_python', 'text_python', 'vector_python']
......@@ -28,7 +28,6 @@ asr_python:
force_yes: True
device: # set 'gpu:id' or 'cpu'
################### speech task: asr; engine_type: inference #######################
asr_inference:
# model_type choices=['deepspeech2offline_aishell']
......@@ -50,10 +49,11 @@ asr_inference:
################################### TTS #########################################
################### speech task: tts; engine_type: python #######################
tts_python:
    # am (acoustic model) choices=['speedyspeech_csmsc', 'fastspeech2_csmsc',
    #                              'fastspeech2_ljspeech', 'fastspeech2_aishell3',
    #                              'fastspeech2_vctk', 'fastspeech2_mix',
    #                              'tacotron2_csmsc', 'tacotron2_ljspeech']
am: 'fastspeech2_csmsc'
am_config:
am_ckpt:
......@@ -61,11 +61,13 @@ tts_python:
phones_dict:
tones_dict:
speaker_dict:
spk_id: 0
    # voc (vocoder) choices=['pwgan_csmsc', 'pwgan_ljspeech', 'pwgan_aishell3',
    #                        'pwgan_vctk', 'mb_melgan_csmsc', 'style_melgan_csmsc',
    #                        'hifigan_csmsc', 'hifigan_ljspeech', 'hifigan_aishell3',
    #                        'hifigan_vctk', 'wavernn_csmsc']
    voc: 'mb_melgan_csmsc'
voc_config:
voc_ckpt:
voc_stat:
......@@ -85,7 +87,7 @@ tts_inference:
phones_dict:
tones_dict:
speaker_dict:
spk_id: 0
am_predictor_conf:
device: # set 'gpu:id' or 'cpu'
......@@ -94,7 +96,7 @@ tts_inference:
summary: True # False -> do not show predictor config
# voc (vocoder) choices=['pwgan_csmsc', 'mb_melgan_csmsc','hifigan_csmsc']
    voc: 'mb_melgan_csmsc'
voc_model: # the pdmodel file of your vocoder static model (XX.pdmodel)
voc_params: # the pdiparams file of your vocoder static model (XX.pdiparams)
voc_sample_rate: 24000
......
......@@ -401,4 +401,4 @@ curl -X 'GET' \
"code": 0,
"result":"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
"message": "ok"
```
......@@ -3,48 +3,48 @@
# 2. Receive recorded audio and return the recognition result
# 3. Receive the ASR result and return the NLP dialogue result
# 4. Receive the NLP dialogue result and return TTS audio
import argparse
import base64
import datetime
import json
import os
from typing import List

import aiofiles
import librosa
import numpy as np
import soundfile as sf
import uvicorn
from fastapi import FastAPI
from fastapi import File
from fastapi import Form
from fastapi import UploadFile
from fastapi import WebSocket
from fastapi import WebSocketDisconnect
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from src.AudioManeger import AudioMannger
from src.robot import Robot
from src.SpeechBase.vpr import VPR
from src.util import *
from src.WebsocketManeger import ConnectionManager
from starlette.middleware.cors import CORSMiddleware
from starlette.requests import Request
from starlette.responses import FileResponse
from starlette.websockets import WebSocketState as WebSocketState

from paddlespeech.server.engine.asr.online.python.asr_engine import PaddleASRConnectionHanddler
from paddlespeech.server.utils.audio_process import float2pcm
# parse the configuration
parser = argparse.ArgumentParser(prog='PaddleSpeechDemo', add_help=True)
parser.add_argument(
    "--port",
    action="store",
    type=int,
    help="port of the app",
    default=8010,
    required=False)
args = parser.parse_args()
port = args.port
......@@ -60,39 +60,41 @@ ie_model_path = "source/model"
UPLOAD_PATH = "source/vpr"
WAV_PATH = "source/wav"
base_sources = [UPLOAD_PATH, WAV_PATH]
for path in base_sources:
os.makedirs(path, exist_ok=True)
# initialization
app = FastAPI()
chatbot = Robot(
    asr_config, tts_config, asr_init_path, ie_model_path=ie_model_path)
manager = ConnectionManager()
aumanager = AudioMannger(chatbot)
aumanager.init()
vpr = VPR(db_path, dim=192, top_k=5)
# service request/response schemas
class NlpBase(BaseModel):
    chat: str


class TtsBase(BaseModel):
    text: str


class Audios:
    def __init__(self) -> None:
        self.audios = b""


audios = Audios()
######################################################################
############################ ASR Service ############################
######################################################################
# receive a file and return the ASR result
# file upload endpoint
@app.post("/asr/offline")
......@@ -101,7 +103,8 @@ async def speech2textOffline(files: List[UploadFile]):
asr_res = ""
for file in files[:1]:
        # generate a timestamped filename
        now_name = "asr_offline_" + datetime.datetime.strftime(
            datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name)
async with aiofiles.open(out_file_path, 'wb') as out_file:
content = await file.read() # async read
......@@ -110,10 +113,9 @@ async def speech2textOffline(files: List[UploadFile]):
        # return the ASR result
        asr_res = chatbot.speech2text(out_file_path)
        return SuccessRequest(result=asr_res)

    return ErrorRequest(message="Uploaded file is empty")
# receive a file and force-convert the wav to 16 kHz, int16
@app.post("/asr/offlinefile")
async def speech2textOfflineFile(files: List[UploadFile]):
......@@ -121,7 +123,8 @@ async def speech2textOfflineFile(files: List[UploadFile]):
asr_res = ""
for file in files[:1]:
        # generate a timestamped filename
        now_name = "asr_offline_" + datetime.datetime.strftime(
            datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name)
async with aiofiles.open(out_file_path, 'wb') as out_file:
content = await file.read() # async read
......@@ -132,22 +135,18 @@ async def speech2textOfflineFile(files: List[UploadFile]):
wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes
wav_base64 = base64.b64encode(wav_bytes).decode('utf8')
            # rewrite the converted file
            now_name = now_name[:-4] + "_16k" + ".wav"
            out_file_path = os.path.join(WAV_PATH, now_name)
            sf.write(out_file_path, wav, 16000)
            # return the ASR result
            asr_res = chatbot.speech2text(out_file_path)
            response_res = {"asr_result": asr_res, "wav_base64": wav_base64}
            return SuccessRequest(result=response_res)

    return ErrorRequest(message="Uploaded file is empty")
# streaming reception test
......@@ -161,15 +160,17 @@ async def speech2textOnlineRecive(files: List[UploadFile]):
print(f"audios长度变化: {len(audios.audios)}")
return SuccessRequest(message="接收成功")
# sample the ambient noise level
@app.post("/asr/collectEnv")
async def collectEnv(files: List[UploadFile]):
    for file in files[:1]:
        content = await file.read()  # async read
        # the first 44 bytes of a wav file are the header
        aumanager.compute_env_volume(content[44:])
        vad_ = aumanager.vad_threshold
        return SuccessRequest(result=vad_, message="Ambient noise collected successfully")
# stop recording
@app.get("/asr/stopRecord")
......@@ -179,6 +180,7 @@ async def stopRecord():
print("Online录音暂停")
return SuccessRequest(message="停止成功")
# resume recording
@app.get("/asr/resumeRecord")
async def resumeRecord():
......@@ -187,7 +189,7 @@ async def resumeRecord():
return SuccessRequest(message="Online录音恢复")
# ASR used for chat
@app.websocket("/ws/asr/offlineStream")
async def websocket_endpoint(websocket: WebSocket):
await manager.connect(websocket)
......@@ -210,9 +212,9 @@ async def websocket_endpoint(websocket: WebSocket):
# print(f"用户-{user}-离开")
# Online识别的ASR
# 流式识别的 ASR
@app.websocket('/ws/asr/onlineStream')
async def websocket_endpoint_online(websocket: WebSocket):
"""PaddleSpeech Online ASR Server api
Args:
......@@ -298,12 +300,14 @@ async def websocket_endpoint(websocket: WebSocket):
except WebSocketDisconnect:
pass
######################################################################
############################ NLP Service ############################
######################################################################
@app.post("/nlp/chat")
async def chatOffline(nlp_base: NlpBase):
chat = nlp_base.chat
if not chat:
return ErrorRequest(message="传入文本为空")
......@@ -311,8 +315,9 @@ async def chatOffline(nlp_base:NlpBase):
res = chatbot.chat(chat)
return SuccessRequest(result=res)
@app.post("/nlp/ie")
async def ieOffline(nlp_base: NlpBase):
nlp_text = nlp_base.chat
if not nlp_text:
        return ErrorRequest(message="Input text is empty")
......@@ -320,17 +325,20 @@ async def ieOffline(nlp_base:NlpBase):
res = chatbot.ie(nlp_text)
return SuccessRequest(result=res)
######################################################################
############################ TTS Service ############################
######################################################################
@app.post("/tts/offline")
async def text2speechOffline(tts_base: TtsBase):
text = tts_base.text
if not text:
return ErrorRequest(message="文本为空")
else:
now_name = "tts_"+ datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
now_name = "tts_" + datetime.datetime.strftime(
datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name)
        # save to a file, then transmit it as base64
chatbot.text2speech(text, outpath=out_file_path)
......@@ -339,12 +347,14 @@ async def text2speechOffline(tts_base:TtsBase):
base_str = base64.b64encode(data_bin)
return SuccessRequest(result=base_str)
# HTTP streaming TTS
@app.post("/tts/online")
async def stream_tts(request_body: TtsBase):
text = request_body.text
return StreamingResponse(chatbot.text2speechStreamBytes(text=text))
# WebSocket streaming TTS
@app.websocket("/ws/tts/online")
async def stream_ttsWS(websocket: WebSocket):
......@@ -356,17 +366,11 @@ async def stream_ttsWS(websocket: WebSocket):
if text:
for sub_wav in chatbot.text2speechStream(text=text):
# print("发送sub wav: ", len(sub_wav))
res = {
"wav": sub_wav,
"done": False
}
res = {"wav": sub_wav, "done": False}
await websocket.send_json(res)
                # end of stream
                res = {"wav": sub_wav, "done": True}
await websocket.send_json(res)
# manager.disconnect(websocket)
......@@ -396,8 +400,9 @@ async def vpr_enroll(table_name: str=None,
return {'status': False, 'msg': "spk_id can not be None"}
# Save the upload data to server.
content = await audio.read()
now_name = "vpr_enroll_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
audio_path = os.path.join(UPLOAD_PATH, now_name)
now_name = "vpr_enroll_" + datetime.datetime.strftime(
datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
audio_path = os.path.join(UPLOAD_PATH, now_name)
with open(audio_path, "wb+") as f:
f.write(content)
......@@ -413,20 +418,19 @@ async def vpr_recog(request: Request,
audio: UploadFile=File(...)):
# Voice print recognition online
# try:
    # Save the upload data to server.
content = await audio.read()
now_name = "vpr_query_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
query_audio_path = os.path.join(UPLOAD_PATH, now_name)
now_name = "vpr_query_" + datetime.datetime.strftime(
datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
query_audio_path = os.path.join(UPLOAD_PATH, now_name)
with open(query_audio_path, "wb+") as f:
f.write(content)
f.write(content)
spk_ids, paths, scores = vpr.do_search_vpr(query_audio_path)
res = dict(zip(spk_ids, zip(paths, scores)))
# Sort results by distance metric, closest distances first
res = sorted(res.items(), key=lambda item: item[1][1], reverse=True)
return res
# except Exception as e:
# return {'status': False, 'msg': e}, 400
@app.post('/vpr/del')
......@@ -460,17 +464,18 @@ async def vpr_database64(vprId: int):
return {'status': False, 'msg': "vpr_id can not be None"}
audio_path = vpr.do_get_wav(vprId)
        # convert the file to 16 kHz, 16-bit wav and return it as base64
wav, sr = librosa.load(audio_path, sr=16000)
wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes
wav_base64 = base64.b64encode(wav_bytes).decode('utf8')
return SuccessRequest(result=wav_base64)
except Exception as e:
return {'status': False, 'msg': e}, 400
@app.get('/vpr/data')
async def vpr_data(vprId: int):
# Get the audio file from path by spk_id in MySQL
......@@ -482,11 +487,6 @@ async def vpr_data(vprId: int):
except Exception as e:
return {'status': False, 'msg': e}, 400
if __name__ == '__main__':
uvicorn.run(app=app, host='0.0.0.0', port=port)
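
For reference, the following is a minimal client sketch for the offline HTTP endpoints above (assumptions: the server is running locally on the default port 8010 and a 16 kHz `test.wav` exists in the current directory):
```python
import base64

import requests

BASE = "http://127.0.0.1:8010"

# offline ASR: upload a wav file and read back the transcript
with open("test.wav", "rb") as f:
    resp = requests.post(f"{BASE}/asr/offline", files={"files": f})
print(resp.json())  # {"code": 0, "result": "...", "message": "ok"}

# offline TTS: send text, decode the base64 wav returned in "result"
resp = requests.post(f"{BASE}/tts/offline", json={"text": "你好"})
wav_bytes = base64.b64decode(resp.json()["result"])
with open("reply.wav", "wb") as f:
    f.write(wav_bytes)
```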
aiofiles
faiss-cpu
fastapi
librosa
numpy
paddlenlp
paddlepaddle
paddlespeech
pydantic
python-multipart
scikit_learn
SoundFile
starlette
uvicorn
import datetime
import os
import random
import wave

import numpy as np

from .util import randName
class AudioMannger:
    def __init__(self,
                 robot,
                 frame_length=160,
                 frame=10,
                 data_width=2,
                 vad_default=300):
        # binary pcm stream
self.audios = b''
self.asr_result = ""
......@@ -20,8 +24,9 @@ class AudioMannger:
os.makedirs(self.file_dir, exist_ok=True)
        self.vad_default = vad_default
        self.vad_threshold = vad_default
        self.vad_threshold_path = os.path.join(self.file_dir,
                                               "vad_threshold.npy")
        # one frame every 10 ms
        self.frame_length = frame_length
        # run VAD once every 10 frames
......@@ -30,67 +35,64 @@ class AudioMannger:
self.data_width = data_width
# window
self.window_length = frame_length * frame * data_width
        # whether recording has started
        self.on_asr = False
        self.silence_cnt = 0
        self.max_silence_cnt = 4
        self.is_pause = False  # recording paused / resumed
def init(self):
if os.path.exists(self.vad_threshold_path):
            # the saved average-loudness file exists
self.vad_threshold = np.load(self.vad_threshold_path)
def clear_audio(self):
        # clear the accumulated pcm segments
self.audios = b''
def clear_asr(self):
self.asr_result = ""
    def compute_chunk_volume(self, start_index, pcm_bins):
        # compute the mean energy over one window
        pcm_bin = pcm_bins[start_index:start_index + self.window_length]
        # convert to numpy
        pcm_np = np.frombuffer(pcm_bin, np.int16)
        # normalize and compute the loudness
        x = pcm_np.astype(np.float32)
        x = np.abs(x)
        return np.mean(x)
def is_speech(self, start_index, pcm_bins):
        # check whether the index is out of range
if start_index > len(pcm_bins):
return False
        # check whether the frame starting here is silence
        energy = self.compute_chunk_volume(
            start_index=start_index, pcm_bins=pcm_bins)
# print(energy)
if energy > self.vad_threshold:
return True
else:
return False
def compute_env_volume(self, pcm_bins):
max_energy = 0
start = 0
while start < len(pcm_bins):
            energy = self.compute_chunk_volume(
                start_index=start, pcm_bins=pcm_bins)
if energy > max_energy:
max_energy = energy
start += self.window_length
        self.vad_threshold = max_energy + 100 if max_energy > self.vad_default else self.vad_default
        # save the threshold to a file
        np.save(self.vad_threshold_path, self.vad_threshold)
        print(f"VAD threshold: {self.vad_threshold}")
        print(f"environment sample saved to: {os.path.realpath(self.vad_threshold_path)}")
def stream_asr(self, pcm_bin):
        # first run endpoint detection on pcm_bin
start = 0
......@@ -99,7 +101,7 @@ class AudioMannger:
self.on_asr = True
self.silence_cnt = 0
print("录音中")
self.audios += pcm_bin[ start : start + self.window_length]
self.audios += pcm_bin[start:start + self.window_length]
else:
if self.on_asr:
self.silence_cnt += 1
......@@ -110,41 +112,42 @@ class AudioMannger:
print("录音停止")
# audios 保存为 wav, 送入 ASR
if len(self.audios) > 2 * 16000:
file_path = os.path.join(self.file_dir, "asr_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav")
file_path = os.path.join(
self.file_dir,
"asr_" + datetime.datetime.strftime(
datetime.datetime.now(),
'%Y%m%d%H%M%S') + randName() + ".wav")
self.save_audio(file_path=file_path)
self.asr_result = self.robot.speech2text(file_path)
self.clear_audio()
return self.asr_result
return self.asr_result
else:
                # keep receiving
                print("recording: silence")
                self.audios += pcm_bin[start:start + self.window_length]
start += self.window_length
return ""
    def save_audio(self, file_path):
        print("saving audio")
        wf = wave.open(file_path, 'wb')  # create the output wav file
        wf.setnchannels(1)  # mono
        wf.setsampwidth(2)  # 16-bit samples
        wf.setframerate(16000)  # 16 kHz sample rate
        # write the buffered data into the file
        wf.writeframes(self.audios)
        # close the file when done
        wf.close()
def end(self):
        # save audios as a wav file and feed it to ASR
file_path = os.path.join(self.file_dir, "asr.wav")
self.save_audio(file_path=file_path)
return self.robot.speech2text(file_path)
def stop(self):
self.is_pause = True
self.audios = b''
def resume(self):
self.is_pause = False
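
For intuition about the windowing above: with `frame_length=160` samples (10 ms at 16 kHz), `frame=10`, and `data_width=2` bytes, one VAD window is 160 * 10 * 2 = 3200 bytes, i.e. 100 ms of int16 pcm. A standalone sketch of the same energy measure on synthetic data:
```python
import numpy as np

frame_length, frame, data_width = 160, 10, 2
window_length = frame_length * frame * data_width  # 3200 bytes = 100 ms at 16 kHz

# 100 ms of synthetic int16 pcm: quiet noise vs. a loud tone
quiet = (np.random.randn(1600) * 10).astype(np.int16).tobytes()
loud = (np.sin(np.linspace(0, 100, 1600)) * 20000).astype(np.int16).tobytes()

def mean_energy(pcm_bin: bytes) -> float:
    # same measure as AudioMannger.compute_chunk_volume
    x = np.abs(np.frombuffer(pcm_bin, np.int16).astype(np.float32))
    return float(np.mean(x))

print(mean_energy(quiet), mean_energy(loud))  # loud >> quiet (compare to vad_threshold)
```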
import librosa
import numpy as np
import paddle
import soundfile

from paddlespeech.server.engine.asr.online.python.asr_engine import ASREngine
from paddlespeech.server.engine.asr.online.python.asr_engine import PaddleASRConnectionHanddler
from paddlespeech.server.utils.config import get_config
def readWave(samples):
x_len = len(samples)
......@@ -31,20 +28,23 @@ def readWave(samples):
class ASR:
    def __init__(
            self,
            config_path, ) -> None:
self.config = get_config(config_path)['asr_online']
self.engine = ASREngine()
self.engine.init(self.config)
self.connection_handler = PaddleASRConnectionHanddler(self.engine)
def offlineASR(self, samples, sample_rate=16000):
        x_chunk, x_chunk_lens = self.engine.preprocess(
            samples=samples, sample_rate=sample_rate)
self.engine.run(x_chunk, x_chunk_lens)
result = self.engine.postprocess()
self.engine.reset()
return result
    def onlineASR(self, samples: bytes=None, is_finished=False):
if not is_finished:
            # streaming: keep extracting features
self.connection_handler.extract_feat(samples)
......@@ -58,5 +58,3 @@ class ASR:
asr_results = self.connection_handler.get_result()
self.connection_handler.reset()
return asr_results
from paddlenlp import Taskflow
class NLP:
def __init__(self, ie_model_path=None):
schema = ["时间", "出发地", "目的地", "费用"]
if ie_model_path:
self.ie_model = Taskflow("information_extraction",
schema=schema, task_path=ie_model_path)
self.ie_model = Taskflow(
"information_extraction",
schema=schema,
task_path=ie_model_path)
else:
self.ie_model = Taskflow("information_extraction",
schema=schema)
self.ie_model = Taskflow("information_extraction", schema=schema)
self.dialogue_model = Taskflow("dialogue")
def chat(self, text):
result = self.dialogue_model([text])
return result[0]
def ie(self, text):
result = self.ie_model(text)
return result
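
A minimal usage sketch of the class above (the example sentences are illustrative; with `ie_model_path=None`, Taskflow downloads its default models on first use):
```python
nlp = NLP()
# dialogue reply
print(nlp.chat("你好"))
# information extraction against the schema ["时间", "出发地", "目的地", "费用"]
print(nlp.ie("明天从北京到上海的机票多少钱"))
```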
import base64
import os
import sqlite3

import numpy as np
def dict_factory(cursor, row):
    d = {}
    for idx, col in enumerate(cursor.description):
        d[col[0]] = row[idx]
    return d
class DataBase(object):
    def __init__(self, db_path: str):
db_path = os.path.realpath(db_path)
if os.path.exists(db_path):
......@@ -21,12 +22,12 @@ class DataBase(object):
db_path_dir = os.path.dirname(db_path)
os.makedirs(db_path_dir, exist_ok=True)
self.db_path = db_path
self.conn = sqlite3.connect(self.db_path)
self.conn.row_factory = dict_factory
self.cursor = self.conn.cursor()
self.init_database()
def init_database(self):
"""
        Initialize the database; create the table if it does not exist.
......@@ -41,20 +42,21 @@ class DataBase(object):
"""
self.cursor.execute(sql)
self.conn.commit()
def execute_base(self, sql, data_dict):
self.cursor.execute(sql, data_dict)
self.conn.commit()
    def insert_one(self, username, vector_base64: str, wav_path):
if not os.path.exists(wav_path):
return None, "wav not exists"
else:
sql = f"""
sql = """
insert into
vprtable (username, vector, wavpath)
values (?, ?, ?)
"""
try:
self.cursor.execute(sql, (username, vector_base64, wav_path))
self.conn.commit()
......@@ -63,25 +65,27 @@ class DataBase(object):
except Exception as e:
print(e)
return None, e
def select_all(self):
sql = """
SELECT * from vprtable
"""
result = self.cursor.execute(sql).fetchall()
return result
    def select_by_id(self, vpr_id):
        # parameterized query to avoid SQL injection
        sql = """
        SELECT * from vprtable WHERE `id` = ?
        """
        result = self.cursor.execute(sql, (vpr_id, )).fetchall()
        return result
    def select_by_username(self, username):
        # parameterized query to avoid SQL injection
        sql = """
        SELECT * from vprtable WHERE `username` = ?
        """
        result = self.cursor.execute(sql, (username, )).fetchall()
        return result
......@@ -89,28 +93,30 @@ class DataBase(object):
sql = f"""
DELETE from vprtable WHERE `username`='{username}'
"""
self.cursor.execute(sql)
self.conn.commit()
def drop_all(self):
sql = f"""
sql = """
DELETE from vprtable
"""
self.cursor.execute(sql)
self.conn.commit()
def drop_table(self):
sql = f"""
sql = """
DROP TABLE vprtable
"""
self.cursor.execute(sql)
self.conn.commit()
    def encode_vector(self, vector: np.ndarray):
return base64.b64encode(vector).decode('utf8')
def decode_vector(self, vector_base64, dtype=np.float32):
b = base64.b64decode(vector_base64)
vc = np.frombuffer(b, dtype=dtype)
return vc
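
A minimal usage sketch (paths and values are illustrative assumptions; note that `insert_one` checks that the wav file exists first):
```python
import numpy as np

db = DataBase("./source/db/vpr.sqlite")
emb = np.random.rand(192).astype(np.float32)  # stand-in for a real embedding
vec_b64 = db.encode_vector(emb)
# insert_one returns a (row_id, message) pair
last_id, msg = db.insert_one("alice", vec_b64, "./source/vpr/alice.wav")
print(db.select_by_username("alice"))
```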
......@@ -5,18 +5,19 @@
# 2. Load the model
# 3. End-to-end inference
# 4. Streaming inference
import base64
import logging
import math

import numpy as np

from paddlespeech.server.engine.tts.online.onnx.tts_engine import TTSEngine
from paddlespeech.server.utils.audio_process import float2pcm
from paddlespeech.server.utils.config import get_config
from paddlespeech.server.utils.onnx_infer import get_sess
from paddlespeech.server.utils.util import denorm
from paddlespeech.server.utils.util import get_chunks
from paddlespeech.t2s.frontend.zh_frontend import Frontend
class TTS:
def __init__(self, config_path):
......@@ -26,12 +27,12 @@ class TTS:
self.engine.init(self.config)
self.executor = self.engine.executor
        #self.engine.warm_up()

        # initialize the frontend
        self.frontend = Frontend(
            phone_vocab_path=self.engine.executor.phones_dict,
            tone_vocab_path=None)
def depadding(self, data, chunk_num, chunk_id, block, pad, upsample):
"""
Streaming inference removes the result of pad inference
......@@ -48,39 +49,37 @@ class TTS:
data = data[front_pad * upsample:(front_pad + block) * upsample]
return data
def offlineTTS(self, text):
get_tone_ids = False
merge_sentences = False
input_ids = self.frontend.get_input_ids(
            text, merge_sentences=merge_sentences, get_tone_ids=get_tone_ids)
phone_ids = input_ids["phone_ids"]
wav_list = []
for i in range(len(phone_ids)):
orig_hs = self.engine.executor.am_encoder_infer_sess.run(
                None, input_feed={'text': phone_ids[i].numpy()})
hs = orig_hs[0]
am_decoder_output = self.engine.executor.am_decoder_sess.run(
                None, input_feed={'xs': hs})
am_postnet_output = self.engine.executor.am_postnet_sess.run(
                None,
                input_feed={
                    'xs': np.transpose(am_decoder_output[0], (0, 2, 1))
                })
am_output_data = am_decoder_output + np.transpose(
am_postnet_output[0], (0, 2, 1))
normalized_mel = am_output_data[0][0]
            mel = denorm(normalized_mel, self.engine.executor.am_mu,
                         self.engine.executor.am_std)
wav = self.engine.executor.voc_sess.run(
                output_names=None, input_feed={'logmel': mel})[0]
wav_list.append(wav)
wavs = np.concatenate(wav_list)
return wavs
def streamTTS(self, text):
get_tone_ids = False
......@@ -88,9 +87,7 @@ class TTS:
# front
input_ids = self.frontend.get_input_ids(
            text, merge_sentences=merge_sentences, get_tone_ids=get_tone_ids)
phone_ids = input_ids["phone_ids"]
for i in range(len(phone_ids)):
......@@ -105,14 +102,15 @@ class TTS:
mel = mel[0]
# voc streaming
            mel_chunks = get_chunks(mel, self.config.voc_block,
                                    self.config.voc_pad, "voc")
voc_chunk_num = len(mel_chunks)
for i, mel_chunk in enumerate(mel_chunks):
sub_wav = self.executor.voc_sess.run(
output_names=None, input_feed={'logmel': mel_chunk})
                sub_wav = self.depadding(
                    sub_wav[0], voc_chunk_num, i, self.config.voc_block,
                    self.config.voc_pad, self.config.voc_upsample)
yield self.after_process(sub_wav)
......@@ -130,7 +128,8 @@ class TTS:
end = min(self.config.voc_block + self.config.voc_pad, mel_len)
# streaming am
            hss = get_chunks(orig_hs, self.config.am_block,
                             self.config.am_pad, "am")
am_chunk_num = len(hss)
for i, hs in enumerate(hss):
am_decoder_output = self.executor.am_decoder_sess.run(
......@@ -147,7 +146,8 @@ class TTS:
sub_mel = denorm(normalized_mel, self.executor.am_mu,
self.executor.am_std)
sub_mel = self.depadding(sub_mel, am_chunk_num, i,
                                         self.config.am_block,
                                         self.config.am_pad, 1)
if i == 0:
mel_streaming = sub_mel
......@@ -165,23 +165,22 @@ class TTS:
output_names=None, input_feed={'logmel': voc_chunk})
sub_wav = self.depadding(
sub_wav[0], voc_chunk_num, voc_chunk_id,
                        self.config.voc_block, self.config.voc_pad,
                        self.config.voc_upsample)
yield self.after_process(sub_wav)
voc_chunk_id += 1
                    start = max(0, voc_chunk_id * self.config.voc_block -
                                self.config.voc_pad)
                    end = min((voc_chunk_id + 1) * self.config.voc_block +
                              self.config.voc_pad, mel_len)
else:
            logging.error(
                "Only support fastspeech2_csmsc or fastspeech2_cnndecoder_csmsc on streaming tts."
            )
def streamTTSBytes(self, text):
for wav in self.engine.executor.infer(
text=text,
......@@ -191,19 +190,14 @@ class TTS:
wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes
yield wav_bytes
def after_process(self, wav):
# for tvm
wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes
wav_base64 = base64.b64encode(wav_bytes).decode('utf8') # to base64
return wav_base64
    def streamTTS_TVM(self, text):
        # optimize with TVM
        pass
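
To see what `depadding` removes, here is a simplified standalone sketch of the idea on a toy array (the `block=4`, `pad=2`, `upsample=1` values are illustrative): each streaming chunk is inferred with extra pad frames on both sides, and only the frames the chunk actually owns are kept.
```python
import numpy as np

def depad(data, chunk_num, chunk_id, block, pad, upsample):
    # simplified sketch of TTS.depadding: keep only the frames this chunk owns
    front_pad = min(chunk_id * block, pad)
    if chunk_id == 0:                  # first chunk: no front pad was added
        return data[:block * upsample]
    if chunk_id == chunk_num - 1:      # last chunk: keep everything after the front pad
        return data[front_pad * upsample:]
    return data[front_pad * upsample:(front_pad + block) * upsample]

chunk = np.arange(8)  # pad(2) + block(4) + pad(2) frames of a middle chunk
print(depad(chunk, chunk_num=3, chunk_id=1, block=4, pad=2, upsample=1))
# -> [2 3 4 5]
```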
# The VPR demo does not use MySQL or Milvus; it is only for the docker demo
import logging

import faiss
import numpy as np

from .sql_helper import DataBase
from .vpr_encode import get_audio_embedding
class VPR:
def __init__(self, db_path, dim, top_k) -> None:
        # initialization
......@@ -14,15 +16,15 @@ class VPR:
self.top_k = top_k
self.dtype = np.float32
self.vpr_idx = 0
        # initialize the database
        self.db = DataBase(db_path)

        # initialize the faiss index
index_ip = faiss.IndexFlatIP(dim)
self.index_ip = faiss.IndexIDMap(index_ip)
self.init()
def init(self):
        # demo initialization: register the vectors stored in the database into faiss
sql_dbs = self.db.select_all()
......@@ -34,12 +36,13 @@ class VPR:
if len(vc.shape) == 1:
vc = np.expand_dims(vc, axis=0)
                # build the index
                self.index_ip.add_with_ids(vc, np.array(
                    (idx, )).astype('int64'))
        logging.info("faiss index built")
def faiss_enroll(self, idx, vc):
        self.index_ip.add_with_ids(vc, np.array((idx, )).astype('int64'))
def vpr_enroll(self, username, wav_path):
        # enroll a voiceprint
emb = get_audio_embedding(wav_path)
......@@ -53,21 +56,22 @@ class VPR:
else:
            last_idx, mess = None, None
return last_idx
def vpr_recog(self, wav_path):
        # recognize a voiceprint
emb_search = get_audio_embedding(wav_path)
if emb_search is not None:
emb_search = np.expand_dims(emb_search, axis=0)
D, I = self.index_ip.search(emb_search, self.top_k)
D = D.tolist()[0]
            I = I.tolist()[0]
            return [(round(D[i] * 100, 2), I[i]) for i in range(len(D))
                    if I[i] != -1]
else:
logging.error("识别失败")
return None
def do_search_vpr(self, wav_path):
spk_ids, paths, scores = [], [], []
recog_result = self.vpr_recog(wav_path)
......@@ -78,41 +82,39 @@ class VPR:
scores.append(score)
paths.append("")
return spk_ids, paths, scores
def vpr_del(self, username):
        # delete a voiceprint by username:
        # look up the user's ids and remove the corresponding vectors
res = self.db.select_by_username(username)
for r in res:
idx = r['id']
            self.index_ip.remove_ids(np.array((idx, )).astype('int64'))
self.db.drop_by_username(username)
def vpr_list(self):
        # list all records
return self.db.select_all()
def do_list(self):
spk_ids, vpr_ids = [], []
for res in self.db.select_all():
spk_ids.append(res['username'])
vpr_ids.append(res['id'])
        return spk_ids, vpr_ids
def do_get_wav(self, vpr_idx):
        res = self.db.select_by_id(vpr_idx)
        return res[0]['wavpath']
def vpr_data(self, idx):
        # get the record for the given id
res = self.db.select_by_id(idx)
return res
def vpr_droptable(self):
        # drop the table
        self.db.drop_table()
        # clear the faiss index
        self.index_ip.reset()
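
A minimal end-to-end sketch of the class above (paths are illustrative assumptions):
```python
vpr = VPR("./source/db/vpr.sqlite", dim=192, top_k=5)
# enroll a speaker from a 16 kHz wav
vpr.vpr_enroll("alice", "./source/vpr/alice.wav")
# recognize: returns speaker ids with scores (higher cosine score = closer match)
spk_ids, paths, scores = vpr.do_search_vpr("./source/vpr/query.wav")
print(list(zip(spk_ids, scores)))
```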
import logging

import numpy as np

from paddlespeech.cli.vector import VectorExecutor
vector_executor = VectorExecutor()
def get_audio_embedding(path):
"""
Use vpr_inference to generate embedding of audio
......@@ -16,5 +19,3 @@ def get_audio_embedding(path):
except Exception as e:
logging.error(f"Error with embedding:{e}")
return None
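
A minimal usage sketch (the wav path is an illustrative assumption):
```python
emb = get_audio_embedding("./source/vpr/test.wav")
if emb is not None:
    print(emb.shape)  # a 192-dim float32 vector with the default model
```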
......@@ -2,6 +2,7 @@ from typing import List
from fastapi import WebSocket
class ConnectionManager:
def __init__(self):
        # store the active ws connection objects
......@@ -28,4 +29,4 @@ class ConnectionManager:
await connection.send_text(message)
manager = ConnectionManager()
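
The diff only shows fragments of this class; for reference, a typical FastAPI connection manager looks like the following sketch (everything beyond the methods visible above is an assumption):
```python
from typing import List

from fastapi import WebSocket


class ConnectionManager:
    def __init__(self):
        # store the active ws connection objects
        self.active_connections: List[WebSocket] = []

    async def connect(self, websocket: WebSocket):
        await websocket.accept()
        self.active_connections.append(websocket)

    def disconnect(self, websocket: WebSocket):
        self.active_connections.remove(websocket)

    async def broadcast(self, message: str):
        for connection in self.active_connections:
            await connection.send_text(message)
```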
import os

import librosa
import soundfile as sf
from src.SpeechBase.asr import ASR
from src.SpeechBase.nlp import NLP
from src.SpeechBase.tts import TTS

from paddlespeech.cli.asr.infer import ASRExecutor
class Robot:
    def __init__(self,
                 asr_config,
                 tts_config,
                 asr_init_path,
                 ie_model_path=None) -> None:
self.nlp = NLP(ie_model_path=ie_model_path)
self.asr = ASR(config_path=asr_config)
self.tts = TTS(config_path=tts_config)
self.tts_sample_rate = 24000
self.asr_sample_rate = 16000
        # streaming recognition is less accurate than the end-to-end model,
        # so the streaming model and the end-to-end model are kept separate here
self.asr_model = ASRExecutor()
self.asr_name = "conformer_wenetspeech"
self.warm_up_asrmodel(asr_init_path)
    def warm_up_asrmodel(self, asr_init_path):
if not os.path.exists(asr_init_path):
path_dir = os.path.dirname(asr_init_path)
if not os.path.exists(path_dir):
os.makedirs(path_dir, exist_ok=True)
            # generate an initial audio clip with TTS (24000 Hz sample rate)
            text = "生成初始音频"
            self.text2speech(text, asr_init_path)
        # initialize the asr model
        self.asr_model(
            asr_init_path,
            model=self.asr_name,
            lang='zh',
            sample_rate=16000,
            force_yes=True)
def speech2text(self, audio_file):
self.asr_model.preprocess(self.asr_name, audio_file)
self.asr_model.infer(self.asr_name)
res = self.asr_model.postprocess()
return res
def text2speech(self, text, outpath):
wav = self.tts.offlineTTS(text)
        sf.write(outpath, wav, samplerate=self.tts_sample_rate)
res = wav
return res
def text2speechStream(self, text):
for sub_wav_base64 in self.tts.streamTTS(text=text):
yield sub_wav_base64
def text2speechStreamBytes(self, text):
for wav_bytes in self.tts.streamTTSBytes(text=text):
yield wav_bytes
......@@ -66,5 +70,3 @@ class Robot:
def ie(self, text):
result = self.nlp.ie(text)
return result
import random
def randName(n=5):
return "".join(random.sample('zyxwvutsrqponmlkjihgfedcba',n))
return "".join(random.sample('zyxwvutsrqponmlkjihgfedcba', n))
def SuccessRequest(result=None, message="ok"):
    return {"code": 0, "result": result, "message": message}
def ErrorRequest(result=None, message="error"):
    return {"code": -1, "result": result, "message": message}
([简体中文](./README_cn.md)|English)
# Story Talker
## Introduction
Storybooks are important for early childhood education, but parents often don't have enough time to read them to their children. Very young children may not understand the Chinese characters in storybooks, and sometimes children just want to "listen" rather than "read".
......
(简体中文|[English](./README.md))
# Story Talker
## Introduction
Storybooks are important for early childhood education, but parents often don't have enough time to read them to their children. Very young children may not understand the Chinese characters in storybooks, and sometimes children just want to "listen" rather than "read".
You can use `PaddleOCR` to extract the text of a storybook and have it read aloud by the `TTS` module of `PaddleSpeech`.
## Usage
Run the following command line to get started:
```
./run.sh
```
The results are shown in the [notebook](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/tutorial/tts/tts_tutorial.ipynb).
......@@ -28,6 +28,7 @@ asr_online:
sample_rate: 16000
cfg_path:
decode_method: "attention_rescoring"
num_decoding_left_chunks: -1
force_yes: True
device: 'cpu' # cpu or gpu:id
......
......@@ -34,7 +34,7 @@ if __name__ == '__main__':
n = 0
for m in rtfs:
# not accurate, may have duplicate log
        n += 1
T += m['T']
P += m['P']
......
......@@ -29,7 +29,7 @@ tts_online:
phones_dict:
tones_dict:
speaker_dict:
spk_id: 0
# voc (vocoder) choices=['mb_melgan_csmsc, hifigan_csmsc']
# Both mb_melgan_csmsc and hifigan_csmsc support streaming voc inference
......@@ -70,7 +70,6 @@ tts_online-onnx:
phones_dict:
tones_dict:
speaker_dict:
am_sample_rate: 24000
am_sess_conf:
device: "cpu" # set 'gpu:id' or 'cpu'
......@@ -79,7 +78,7 @@ tts_online-onnx:
# voc (vocoder) choices=['mb_melgan_csmsc_onnx, hifigan_csmsc_onnx']
# Both mb_melgan_csmsc_onnx and hifigan_csmsc_onnx support streaming voc inference
voc: 'mb_melgan_csmsc_onnx'
voc_ckpt:
voc_sample_rate: 24000
voc_sess_conf:
......@@ -100,4 +99,4 @@ tts_online-onnx:
voc_pad: 14
# voc_upsample should be same as n_shift on voc config.
voc_upsample: 300
......@@ -29,7 +29,7 @@ tts_online:
phones_dict:
tones_dict:
speaker_dict:
spk_id: 0
# voc (vocoder) choices=['mb_melgan_csmsc, hifigan_csmsc']
# Both mb_melgan_csmsc and hifigan_csmsc support streaming voc inference
......@@ -70,7 +70,6 @@ tts_online-onnx:
phones_dict:
tones_dict:
speaker_dict:
am_sample_rate: 24000
am_sess_conf:
device: "cpu" # set 'gpu:id' or 'cpu'
......@@ -79,7 +78,7 @@ tts_online-onnx:
# voc (vocoder) choices=['mb_melgan_csmsc_onnx, hifigan_csmsc_onnx']
# Both mb_melgan_csmsc_onnx and hifigan_csmsc_onnx support streaming voc inference
voc: 'mb_melgan_csmsc_onnx'
voc_ckpt:
voc_sample_rate: 24000
voc_sess_conf:
......@@ -100,4 +99,4 @@ tts_online-onnx:
voc_pad: 14
# voc_upsample should be same as n_shift on voc config.
voc_upsample: 300
([简体中文](./README_cn.md)|English)
# Style FastSpeech2
## Introduction
[FastSpeech2](https://arxiv.org/abs/2006.04558) is a classical acoustic model for Text-to-Speech synthesis, which introduces controllable speech inputs, including `phoneme duration`, `energy`, and `pitch`.
......
(简体中文|[English](./README.md))
# Style FastSpeech2
## Introduction
[FastSpeech2](https://arxiv.org/abs/2006.04558) is a classical acoustic model for text-to-speech synthesis, which introduces controllable speech inputs, including `phoneme duration`, `energy`, and `pitch`.
At prediction time, you can change these variables to get some interesting results.
For example:
1. The `duration` in `FastSpeech2` can control the speed of the audio while keeping the `pitch` unchanged. (In some speech tools, increasing the speed also raises the pitch, and vice versa.)
2. When we set the `pitch` of a sentence to its mean and the `tones` of the phonemes to `1`, we get a `robot-style` timbre.
3. When we raise the `pitch` of an adult female voice (by a fixed ratio), we get a `child-style` timbre.
The `duration` and `pitch` of different phonemes in a sentence can have different scales. You can set different scales to emphasize or weaken the pronunciation of certain phonemes.
## Usage
Run the following command line to get started:
```
./run.sh
```
`run.sh` first executes `source path.sh` to set up the environment variables.
If you want to try your own sentences, replace the sentences in `sentences.txt`.
For more details, see `style_syn.py`.
Audio samples are available at [style-control-in-fastspeech2](https://paddlespeech.readthedocs.io/en/latest/tts/demo.html#style-control-in-fastspeech2).
......@@ -16,8 +16,8 @@ You can choose one way from easy, medium and hard to install paddlespeech.
The input of this demo should be text in a specific language, passed via an argument.
### 3. Usage
- Command Line (Recommended)
  The default acoustic model is `Fastspeech2`, the default vocoder is `HiFiGAN`, and the default inference method is dygraph inference.
  - Chinese
```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!"
```
......@@ -45,7 +45,33 @@ The input of this demo should be a text of the specific language that can be pas
You can change `spk_id` here.
```bash
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "hello, boys" --lang en --spk_id 0
  ```
- Chinese-English mixed, multi-speaker
You can change `spk_id` here.
```bash
# The `am` must be `fastspeech2_mix`!
# The `lang` must be `mix`!
# The `voc` must be a vocoder trained on a Chinese dataset for now!
# spk_id 174 is csmsc, spk_id 175 is ljspeech
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_aishell3 --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174_aishell3.wav
paddlespeech tts --am fastspeech2_mix --voc pwgan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175_pwgan.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175.wav
```
- Use ONNXRuntime infer:
```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!" --output default.wav --use_onnx True
paddlespeech tts --am speedyspeech_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output ss.wav --use_onnx True
paddlespeech tts --voc mb_melgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output mb.wav --use_onnx True
paddlespeech tts --voc pwgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc pwgan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc hifigan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc pwgan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc hifigan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc hifigan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_hifigan.wav --use_onnx True
```
Usage:
```bash
......@@ -68,6 +94,8 @@ The input of this demo should be a text of the specific language that can be pas
- `lang`: Language of tts task. Default: `zh`.
- `device`: Choose device to execute model inference. Default: default device of paddlepaddle in current environment.
- `output`: Output wave filepath. Default: `output.wav`.
- `use_onnx`: whether to use ONNXRuntime inference.
- `fs`: sample rate for ONNX models when using specified model files.
Output:
```bash
......@@ -75,54 +103,76 @@ The input of this demo should be a text of the specific language that can be pas
```
- Python API
- Dygraph infer:
```python
import paddle
from paddlespeech.cli.tts import TTSExecutor
tts_executor = TTSExecutor()
wav_file = tts_executor(
text='今天的天气不错啊',
output='output.wav',
am='fastspeech2_csmsc',
am_config=None,
am_ckpt=None,
am_stat=None,
spk_id=0,
phones_dict=None,
tones_dict=None,
speaker_dict=None,
voc='pwgan_csmsc',
voc_config=None,
voc_ckpt=None,
voc_stat=None,
lang='zh',
device=paddle.get_device())
print('Wave file has been generated: {}'.format(wav_file))
```
- ONNXRuntime infer:
```python
from paddlespeech.cli.tts import TTSExecutor
tts_executor = TTSExecutor()
wav_file = tts_executor(
text='对数据集进行预处理',
output='output.wav',
am='fastspeech2_csmsc',
voc='hifigan_csmsc',
lang='zh',
use_onnx=True,
cpu_threads=2)
```
Output:
```bash
Wave file has been generated: output.wav
```
### 4. Pretrained Models
Here is a list of pretrained models released by PaddleSpeech that can be used by command and python API:
- Acoustic model
| Model | Language |
| :--- | :---: |
| speedyspeech_csmsc | zh |
| fastspeech2_csmsc | zh |
| fastspeech2_ljspeech | en |
| fastspeech2_aishell3 | zh |
| fastspeech2_vctk | en |
| fastspeech2_cnndecoder_csmsc | zh |
| fastspeech2_mix | mix |
| tacotron2_csmsc | zh |
| tacotron2_ljspeech | en |
- Vocoder
| Model | Language |
| :--- | :---: |
| pwgan_csmsc | zh |
| pwgan_ljspeech | en |
| pwgan_aishell3 | zh |
| pwgan_vctk | en |
| mb_melgan_csmsc | zh |
| style_melgan_csmsc | zh |
| hifigan_csmsc | zh |
| hifigan_ljspeech | en |
| hifigan_aishell3 | zh |
| hifigan_vctk | en |
| wavernn_csmsc | zh |
(简体中文|[English](./README.md))
# Text-to-Speech
## Introduction
Text-to-speech (TTS) is a natural language modeling process that converts text into speech for audio presentation.
This demo generates audio from given text; it can be done with a single `PaddleSpeech` command or a few lines of Python.
## Usage
### 1. Installation
See the [installation documentation](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install_cn.md).
You can choose one of three installation methods: easy, medium, or hard.
### 2. Prepare the Input
The input of this demo is text in a specific language, passed via an argument.
### 3. Usage
- Command Line (recommended)
  The default acoustic model is `Fastspeech2`, the default vocoder is `HiFiGAN`, and the default inference method is dygraph inference.
  - Chinese
```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!"
```
......@@ -34,7 +31,7 @@
```
- Chinese, multi-speaker
  You can change the `spk_id` here.
```bash
paddlespeech tts --am fastspeech2_aishell3 --voc pwgan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0
```
......@@ -45,10 +42,36 @@
```
- English, multi-speaker
  You can change the `spk_id` here.
```bash
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "hello, boys" --lang en --spk_id 0
```
- Chinese-English mixed, multi-speaker
  You can change the `spk_id` here.
```bash
# The `am` must be `fastspeech2_mix`!
# The `lang` must be `mix`!
# The `voc` must be a vocoder trained on a Chinese dataset for now!
# spk_id 174 is csmsc, spk_id 175 is ljspeech
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_aishell3 --lang mix --input "热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!" --spk_id 174 --output mix_spk174_aishell3.wav
paddlespeech tts --am fastspeech2_mix --voc pwgan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175_pwgan.wav
paddlespeech tts --am fastspeech2_mix --voc hifigan_csmsc --lang mix --input "我们的声学模型使用了 Fast Speech Two, 声码器使用了 Parallel Wave GAN and Hifi GAN." --spk_id 175 --output mix_spk175.wav
```
- Inference with ONNXRuntime:
```bash
paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!" --output default.wav --use_onnx True
paddlespeech tts --am speedyspeech_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output ss.wav --use_onnx True
paddlespeech tts --voc mb_melgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output mb.wav --use_onnx True
paddlespeech tts --voc pwgan_csmsc --input "你好,欢迎使用百度飞桨深度学习框架!" --output pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc pwgan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_aishell3 --voc hifigan_aishell3 --input "你好,欢迎使用百度飞桨深度学习框架!" --spk_id 0 --output aishell3_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc pwgan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_ljspeech --voc hifigan_ljspeech --lang en --input "Life was like a box of chocolates, you never know what you're gonna get." --output lj_fs2_hifigan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc pwgan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_pwgan.wav --use_onnx True
paddlespeech tts --am fastspeech2_vctk --voc hifigan_vctk --input "Life was like a box of chocolates, you never know what you're gonna get." --lang en --spk_id 0 --output vctk_fs2_hifigan.wav --use_onnx True
```
Usage:
```bash
......@@ -71,6 +94,8 @@
- `lang`: language of the TTS task. Default: `zh`.
- `device`: device to run inference on. Default: the default device of paddlepaddle in the current environment.
- `output`: output wave file path. Default: `output.wav`.
- `use_onnx`: whether to use ONNXRuntime for inference.
- `fs`: sample rate when using specific ONNX model files.
Output:
```bash
......@@ -78,31 +103,44 @@
```
- Python API
- Dygraph inference:
```python
import paddle
from paddlespeech.cli.tts import TTSExecutor
tts_executor = TTSExecutor()
wav_file = tts_executor(
text='今天的天气不错啊',
output='output.wav',
am='fastspeech2_csmsc',
am_config=None,
am_ckpt=None,
am_stat=None,
spk_id=0,
phones_dict=None,
tones_dict=None,
speaker_dict=None,
voc='pwgan_csmsc',
voc_config=None,
voc_ckpt=None,
voc_stat=None,
lang='zh',
device=paddle.get_device())
print('Wave file has been generated: {}'.format(wav_file))
```
- ONNXRuntime inference:
```python
from paddlespeech.cli.tts import TTSExecutor
tts_executor = TTSExecutor()
wav_file = tts_executor(
text='对数据集进行预处理',
output='output.wav',
am='fastspeech2_csmsc',
voc='hifigan_csmsc',
lang='zh',
use_onnx=True,
cpu_threads=2)
```
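Here `use_onnx=True` routes inference through ONNXRuntime, and `cpu_threads` limits how many CPU threads the ONNXRuntime session may use.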
Output:
```bash
Wave file has been generated: output.wav
......@@ -112,19 +150,29 @@
```
The following is the list of pretrained models provided by PaddleSpeech that can be used by the command line and the Python API:
- Acoustic models
| Model | Language |
| :--- | :---: |
| speedyspeech_csmsc | zh |
| fastspeech2_csmsc | zh |
| fastspeech2_ljspeech | en |
| fastspeech2_aishell3 | zh |
| fastspeech2_vctk | en |
| fastspeech2_cnndecoder_csmsc | zh |
| fastspeech2_mix | mix |
| tacotron2_csmsc | zh |
| tacotron2_ljspeech | en |
- Vocoders
| Model | Language |
| :--- | :---: |
| pwgan_csmsc | zh |
| pwgan_ljspeech | en |
| pwgan_aishell3 | zh |
| pwgan_vctk | en |
| mb_melgan_csmsc | zh |
| style_melgan_csmsc | zh |
| hifigan_csmsc | zh |
| hifigan_ljspeech | en |
| hifigan_aishell3 | zh |
| hifigan_vctk | en |
| wavernn_csmsc | zh |
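The acoustic model and the vocoder should match in language (and, for multi-speaker models such as `fastspeech2_aishell3` or `fastspeech2_vctk`, a `spk_id` is also required). A minimal sketch of pairing one entry from each table with the `TTSExecutor` API shown above; the input text and output path are illustrative:
```python
from paddlespeech.cli.tts import TTSExecutor

# Pick an acoustic model and a vocoder trained on matching data:
# fastspeech2_ljspeech (en) pairs with hifigan_ljspeech (en).
tts_executor = TTSExecutor()
wav_file = tts_executor(
    text="Life was like a box of chocolates.",  # illustrative input
    output='lj_pairing.wav',                    # illustrative path
    am='fastspeech2_ljspeech',
    voc='hifigan_ljspeech',
    lang='en')
print('Wave file has been generated: {}'.format(wav_file))
```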
myst-parser
numpydoc
recommonmark>=0.5.0
sphinx
sphinx-autobuild
sphinx-markdown-tables
sphinx_rtd_theme
paddlepaddle>=2.2.2
braceexpand
colorlog
editdistance
fastapi
g2p_en
g2pM
h5py
......@@ -14,39 +9,45 @@ inflect
jieba
jsonlines
kaldiio
keyboard
librosa==0.8.1
loguru
matplotlib
myst-parser
nara_wpe
onnxruntime
pandas
numpydoc
onnxruntime==1.10.0
opencc
paddlenlp
paddlepaddle>=2.2.2
paddlespeech_feat
pandas
pathos == 0.2.8
pattern_singleton
Pillow>=9.0.0
praatio==5.0.0
pypinyin
prettytable
pypinyin<=0.44.0
pypinyin-dict
python-dateutil
pyworld==0.2.12
recommonmark>=0.5.0
resampy==0.2.2
sacrebleu
scipy
sentencepiece~=0.1.96
soundfile~=0.10
sphinx
sphinx-autobuild
sphinx-markdown-tables
sphinx_rtd_theme
textgrid
timer
tqdm
typeguard
uvicorn
visualdl
webrtcvad
websockets
yacs~=0.1.8
prettytable
zhon
colorlog
pathos == 0.2.8
fastapi
websockets
keyboard
uvicorn
pattern_singleton
braceexpand
\ No newline at end of file
......@@ -20,4 +20,7 @@ Subpackages
paddlespeech.audio.io
paddlespeech.audio.metric
paddlespeech.audio.sox_effects
paddlespeech.audio.streamdata
paddlespeech.audio.text
paddlespeech.audio.transform
paddlespeech.audio.utils
paddlespeech.audio.streamdata.autodecode module
===============================================
.. automodule:: paddlespeech.audio.streamdata.autodecode
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.cache module
==========================================
.. automodule:: paddlespeech.audio.streamdata.cache
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.compat module
===========================================
.. automodule:: paddlespeech.audio.streamdata.compat
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.extradatasets module
==================================================
.. automodule:: paddlespeech.audio.streamdata.extradatasets
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.filters module
============================================
.. automodule:: paddlespeech.audio.streamdata.filters
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.gopen module
==========================================
.. automodule:: paddlespeech.audio.streamdata.gopen
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.handlers module
=============================================
.. automodule:: paddlespeech.audio.streamdata.handlers
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.mix module
========================================
.. automodule:: paddlespeech.audio.streamdata.mix
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.paddle\_utils module
==================================================
.. automodule:: paddlespeech.audio.streamdata.paddle_utils
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.pipeline module
=============================================
.. automodule:: paddlespeech.audio.streamdata.pipeline
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata package
=====================================
.. automodule:: paddlespeech.audio.streamdata
:members:
:undoc-members:
:show-inheritance:
Submodules
----------
.. toctree::
:maxdepth: 4
paddlespeech.audio.streamdata.autodecode
paddlespeech.audio.streamdata.cache
paddlespeech.audio.streamdata.compat
paddlespeech.audio.streamdata.extradatasets
paddlespeech.audio.streamdata.filters
paddlespeech.audio.streamdata.gopen
paddlespeech.audio.streamdata.handlers
paddlespeech.audio.streamdata.mix
paddlespeech.audio.streamdata.paddle_utils
paddlespeech.audio.streamdata.pipeline
paddlespeech.audio.streamdata.shardlists
paddlespeech.audio.streamdata.tariterators
paddlespeech.audio.streamdata.utils
paddlespeech.audio.streamdata.writer
paddlespeech.audio.streamdata.shardlists module
===============================================
.. automodule:: paddlespeech.audio.streamdata.shardlists
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.tariterators module
=================================================
.. automodule:: paddlespeech.audio.streamdata.tariterators
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.utils module
==========================================
.. automodule:: paddlespeech.audio.streamdata.utils
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.streamdata.writer module
===========================================
.. automodule:: paddlespeech.audio.streamdata.writer
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.text package
===============================
.. automodule:: paddlespeech.audio.text
:members:
:undoc-members:
:show-inheritance:
Submodules
----------
.. toctree::
:maxdepth: 4
paddlespeech.audio.text.text_featurizer
paddlespeech.audio.text.utility
paddlespeech.audio.text.text\_featurizer module
===============================================
.. automodule:: paddlespeech.audio.text.text_featurizer
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.text.utility module
======================================
.. automodule:: paddlespeech.audio.text.utility
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.transform.add\_deltas module
===============================================
.. automodule:: paddlespeech.audio.transform.add_deltas
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.transform.channel\_selector module
=====================================================
.. automodule:: paddlespeech.audio.transform.channel_selector
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.transform.cmvn module
========================================
.. automodule:: paddlespeech.audio.transform.cmvn
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.transform.functional module
==============================================
.. automodule:: paddlespeech.audio.transform.functional
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.transform.perturb module
===========================================
.. automodule:: paddlespeech.audio.transform.perturb
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.transform package
====================================
.. automodule:: paddlespeech.audio.transform
:members:
:undoc-members:
:show-inheritance:
Submodules
----------
.. toctree::
:maxdepth: 4
paddlespeech.audio.transform.add_deltas
paddlespeech.audio.transform.channel_selector
paddlespeech.audio.transform.cmvn
paddlespeech.audio.transform.functional
paddlespeech.audio.transform.perturb
paddlespeech.audio.transform.spec_augment
paddlespeech.audio.transform.spectrogram
paddlespeech.audio.transform.transform_interface
paddlespeech.audio.transform.transformation
paddlespeech.audio.transform.wpe
paddlespeech.audio.transform.spec\_augment module
=================================================
.. automodule:: paddlespeech.audio.transform.spec_augment
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.transform.spectrogram module
===============================================
.. automodule:: paddlespeech.audio.transform.spectrogram
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.transform.transform\_interface module
========================================================
.. automodule:: paddlespeech.audio.transform.transform_interface
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.transform.transformation module
==================================================
.. automodule:: paddlespeech.audio.transform.transformation
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.transform.wpe module
=======================================
.. automodule:: paddlespeech.audio.transform.wpe
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.utils.check\_kwargs module
=============================================
.. automodule:: paddlespeech.audio.utils.check_kwargs
:members:
:undoc-members:
:show-inheritance:
paddlespeech.audio.utils.dynamic\_import module
===============================================
.. automodule:: paddlespeech.audio.utils.dynamic_import
:members:
:undoc-members:
:show-inheritance:
......@@ -12,8 +12,11 @@ Submodules
.. toctree::
:maxdepth: 4
paddlespeech.audio.utils.check_kwargs
paddlespeech.audio.utils.download
paddlespeech.audio.utils.dynamic_import
paddlespeech.audio.utils.error
paddlespeech.audio.utils.log
paddlespeech.audio.utils.numeric
paddlespeech.audio.utils.tensor_utils
paddlespeech.audio.utils.time
paddlespeech.audio.utils.tensor\_utils module
=============================================
.. automodule:: paddlespeech.audio.utils.tensor_utils
:members:
:undoc-members:
:show-inheritance:
paddlespeech.kws.exps.mdtc.collate module
=========================================
.. automodule:: paddlespeech.kws.exps.mdtc.collate
:members:
:undoc-members:
:show-inheritance:
paddlespeech.kws.exps.mdtc.compute\_det module
==============================================
.. automodule:: paddlespeech.kws.exps.mdtc.compute_det
:members:
:undoc-members:
:show-inheritance:
paddlespeech.kws.exps.mdtc.plot\_det\_curve module
==================================================
.. automodule:: paddlespeech.kws.exps.mdtc.plot_det_curve
:members:
:undoc-members:
:show-inheritance:
paddlespeech.kws.exps.mdtc package
==================================
.. automodule:: paddlespeech.kws.exps.mdtc
:members:
:undoc-members:
:show-inheritance:
Submodules
----------
.. toctree::
:maxdepth: 4
paddlespeech.kws.exps.mdtc.collate
paddlespeech.kws.exps.mdtc.compute_det
paddlespeech.kws.exps.mdtc.plot_det_curve
paddlespeech.kws.exps.mdtc.score
paddlespeech.kws.exps.mdtc.train
paddlespeech.kws.exps.mdtc.score module
=======================================
.. automodule:: paddlespeech.kws.exps.mdtc.score
:members:
:undoc-members:
:show-inheritance:
paddlespeech.kws.exps.mdtc.train module
=======================================
.. automodule:: paddlespeech.kws.exps.mdtc.train
:members:
:undoc-members:
:show-inheritance:
paddlespeech.kws.exps package
=============================
.. automodule:: paddlespeech.kws.exps
:members:
:undoc-members:
:show-inheritance:
Subpackages
-----------
.. toctree::
:maxdepth: 4
paddlespeech.kws.exps.mdtc
......@@ -12,4 +12,5 @@ Subpackages
.. toctree::
:maxdepth: 4
paddlespeech.kws.exps
paddlespeech.kws.models
paddlespeech.resource.model\_alias module
=========================================
.. automodule:: paddlespeech.resource.model_alias
:members:
:undoc-members:
:show-inheritance:
paddlespeech.resource.pretrained\_models module
===============================================
.. automodule:: paddlespeech.resource.pretrained_models
:members:
:undoc-members:
:show-inheritance:
paddlespeech.resource.resource module
=====================================
.. automodule:: paddlespeech.resource.resource
:members:
:undoc-members:
:show-inheritance:
paddlespeech.resource package
=============================
.. automodule:: paddlespeech.resource
:members:
:undoc-members:
:show-inheritance:
Submodules
----------
.. toctree::
:maxdepth: 4
paddlespeech.resource.model_alias
paddlespeech.resource.pretrained_models
paddlespeech.resource.resource
......@@ -16,8 +16,10 @@ Subpackages
paddlespeech.cli
paddlespeech.cls
paddlespeech.kws
paddlespeech.resource
paddlespeech.s2t
paddlespeech.server
paddlespeech.t2s
paddlespeech.text
paddlespeech.utils
paddlespeech.vector
......@@ -19,5 +19,4 @@ Subpackages
paddlespeech.s2t.models
paddlespeech.s2t.modules
paddlespeech.s2t.training
paddlespeech.s2t.transform
paddlespeech.s2t.utils
......@@ -18,7 +18,6 @@ Submodules
paddlespeech.server.utils.config
paddlespeech.server.utils.errors
paddlespeech.server.utils.exception
paddlespeech.server.utils.log
paddlespeech.server.utils.onnx_infer
paddlespeech.server.utils.paddle_predictor
paddlespeech.server.utils.util
......
......@@ -19,4 +19,5 @@ Submodules
paddlespeech.t2s.datasets.get_feats
paddlespeech.t2s.datasets.ljspeech
paddlespeech.t2s.datasets.preprocess_utils
paddlespeech.t2s.datasets.sampler
paddlespeech.t2s.datasets.vocoder_batch_fn
paddlespeech.t2s.datasets.sampler module
========================================
.. automodule:: paddlespeech.t2s.datasets.sampler
:members:
:undoc-members:
:show-inheritance:
paddlespeech.t2s.exps.ernie\_sat.align module
=============================================
.. automodule:: paddlespeech.t2s.exps.ernie_sat.align
:members:
:undoc-members:
:show-inheritance:
paddlespeech.t2s.exps.ernie\_sat.normalize module
=================================================
.. automodule:: paddlespeech.t2s.exps.ernie_sat.normalize
:members:
:undoc-members:
:show-inheritance:
paddlespeech.t2s.exps.ernie\_sat.preprocess module
==================================================
.. automodule:: paddlespeech.t2s.exps.ernie_sat.preprocess
:members:
:undoc-members:
:show-inheritance:
paddlespeech.t2s.exps.ernie\_sat package
========================================
.. automodule:: paddlespeech.t2s.exps.ernie_sat
:members:
:undoc-members:
:show-inheritance:
Submodules
----------
.. toctree::
:maxdepth: 4
paddlespeech.t2s.exps.ernie_sat.align
paddlespeech.t2s.exps.ernie_sat.normalize
paddlespeech.t2s.exps.ernie_sat.preprocess
paddlespeech.t2s.exps.ernie_sat.synthesize
paddlespeech.t2s.exps.ernie_sat.synthesize_e2e
paddlespeech.t2s.exps.ernie_sat.train
paddlespeech.t2s.exps.ernie_sat.utils
paddlespeech.t2s.exps.ernie\_sat.synthesize module
==================================================
.. automodule:: paddlespeech.t2s.exps.ernie_sat.synthesize
:members:
:undoc-members:
:show-inheritance:
paddlespeech.t2s.exps.ernie\_sat.synthesize\_e2e module
=======================================================
.. automodule:: paddlespeech.t2s.exps.ernie_sat.synthesize_e2e
:members:
:undoc-members:
:show-inheritance:
paddlespeech.t2s.exps.ernie\_sat.train module
=============================================
.. automodule:: paddlespeech.t2s.exps.ernie_sat.train
:members:
:undoc-members:
:show-inheritance: