Commit 05d41523 authored by huangyuxin

Merge branch 'develop' into webdataset

([简体中文](./README_cn.md)|English)
<p align="center">
<img src="./docs/images/PaddleSpeech_logo.png" />
@@ -494,6 +495,14 @@ PaddleSpeech supports a series of most popular models. They are summarized in [r
<a href = "./examples/aishell3/vc1">ge2e-fastspeech2-aishell3</a>
</td>
</tr>
<tr>
<td rowspan="3">End-to-End</td>
<td>VITS</td>
<td >CSMSC</td>
<td>
<a href = "./examples/csmsc/vits">VITS-csmsc</a>
</td>
</tr>
</tbody>
</table>
...
(简体中文|[English](./README.md))
<p align="center">
<img src="./docs/images/PaddleSpeech_logo.png" />
@@ -481,6 +482,15 @@ PaddleSpeech's **speech synthesis** mainly consists of three modules: text frontend, acoustic model, and vocoder
<a href = "./examples/aishell3/vc1">ge2e-fastspeech2-aishell3</a>
</td>
</tr>
<tr>
<td rowspan="3">End-to-End</td>
<td>VITS</td>
<td >CSMSC</td>
<td>
<a href = "./examples/csmsc/vits">VITS-csmsc</a>
</td>
</tr>
</tbody>
</table>
...
# [Aidatatang_200zh](http://openslr.elda.org/62/)
Aidatatang_200zh is a free Chinese Mandarin speech corpus provided by Beijing DataTang Technology Co., Ltd. under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License.
The contents and the corresponding descriptions of the corpus include:
...
# [Aishell1](http://openslr.elda.org/33/)
This Open Source Mandarin Speech Corpus, AISHELL-ASR0009-OS1, is 178 hours long. It is part of AISHELL-ASR0009, whose utterances cover 11 domains, including smart home, autonomous driving, and industrial production. All recordings were made in a quiet indoor environment using three devices simultaneously: a high-fidelity microphone (44.1 kHz, 16-bit), an Android phone (16 kHz, 16-bit), and an iOS phone (16 kHz, 16-bit). The high-fidelity audio was re-sampled to 16 kHz to build AISHELL-ASR0009-OS1. 400 speakers from different accent areas in China participated in the recording. The manual transcription accuracy is above 95%, achieved through professional speech annotation and strict quality inspection. The corpus is divided into training, development, and test sets. (The database is free for academic research; commercial use requires permission.)
@@ -31,7 +31,7 @@ from utils.utility import unpack
DATA_HOME = os.path.expanduser('~/.cache/paddle/dataset/speech')
URL_ROOT = 'http://openslr.elda.org/resources/33'
# URL_ROOT = 'https://openslr.magicdatatech.com/resources/33'
DATA_URL = URL_ROOT + '/data_aishell.tgz'
MD5_DATA = '2f494334227864a8a8fec932999db9d8'
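For orientation, here is a minimal sketch of how these constants are consumed by the dataset preparation scripts. It assumes the `download(url, md5sum, target_dir)` and `unpack(filepath, target_dir)` helpers imported from `utils.utility` keep those signatures; the `prepare_dataset` wrapper itself is illustrative, not the verbatim script:

```python
import os

from utils.utility import download, unpack

DATA_HOME = os.path.expanduser('~/.cache/paddle/dataset/speech')
URL_ROOT = 'http://openslr.elda.org/resources/33'
DATA_URL = URL_ROOT + '/data_aishell.tgz'
MD5_DATA = '2f494334227864a8a8fec932999db9d8'


def prepare_dataset(url=DATA_URL, md5sum=MD5_DATA, target_dir=DATA_HOME):
    """Fetch the archive, verify its MD5 checksum, and unpack it."""
    target_dir = os.path.join(target_dir, 'Aishell')
    # Assumed behavior: download() returns the local archive path and
    # skips the network fetch when a checksum-matching file already exists.
    filepath = download(url, md5sum, target_dir)
    unpack(filepath, target_dir)


if __name__ == '__main__':
    prepare_dataset()
```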
...
@@ -31,7 +31,7 @@ import soundfile
from utils.utility import download
from utils.utility import unpack
URL_ROOT = "http://openslr.elda.org/resources/12"
#URL_ROOT = "https://openslr.magicdatatech.com/resources/12"
URL_TEST_CLEAN = URL_ROOT + "/test-clean.tar.gz"
URL_TEST_OTHER = URL_ROOT + "/test-other.tar.gz"
...
# [MagicData](http://openslr.elda.org/68/)
MAGICDATA Mandarin Chinese Read Speech Corpus was developed by MAGIC DATA Technology Co., Ltd. and freely published for non-commercial use.
The contents and the corresponding descriptions of the corpus include:
...
@@ -30,7 +30,7 @@ import soundfile
from utils.utility import download
from utils.utility import unpack
URL_ROOT = "http://openslr.elda.org/resources/31"
URL_TRAIN_CLEAN = URL_ROOT + "/train-clean-5.tar.gz"
URL_DEV_CLEAN = URL_ROOT + "/dev-clean-2.tar.gz"
...
@@ -34,7 +34,7 @@ from utils.utility import unpack
DATA_HOME = os.path.expanduser('~/.cache/paddle/dataset/speech')
URL_ROOT = 'https://openslr.elda.org/resources/17'
DATA_URL = URL_ROOT + '/musan.tar.gz'
MD5_DATA = '0c472d4fc0c5141eca47ad1ffeb2a7df'
...
# [Primewords](http://openslr.elda.org/47/)
This free Chinese Mandarin speech corpus is released by Shanghai Primewords Information Technology Co., Ltd.
The corpus was recorded on smartphones by 296 native Chinese speakers. The transcription accuracy is greater than 98% at a 95% confidence level. It is free for academic use.
...
@@ -34,7 +34,7 @@ from utils.utility import unzip
DATA_HOME = os.path.expanduser('~/.cache/paddle/dataset/speech')
URL_ROOT = '--no-check-certificate https://us.openslr.org/resources/28/rirs_noises.zip'
DATA_URL = URL_ROOT + '/rirs_noises.zip'
MD5_DATA = 'e6f48e257286e05de56413b4779d8ffb'
...
# [FreeST](http://openslr.elda.org/38/)
# [THCHS30](http://openslr.elda.org/18/)
This is the *data part* of the `THCHS30 2015` acoustic data & scripts dataset.
...
@@ -32,7 +32,7 @@ from utils.utility import unpack
DATA_HOME = os.path.expanduser('~/.cache/paddle/dataset/speech')
URL_ROOT = 'http://openslr.elda.org/resources/18'
# URL_ROOT = 'https://openslr.magicdatatech.com/resources/18'
DATA_URL = URL_ROOT + '/data_thchs30.tgz'
TEST_NOISE_URL = URL_ROOT + '/test-noise.tgz'
...
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright 2021 Mobvoi Inc. All Rights Reserved.
# Author: zhendong.peng@mobvoi.com (Zhendong Peng)
import argparse

from flask import Flask
from flask import render_template

# Command-line option for the port the web demo listens on.
parser = argparse.ArgumentParser(description='web demo server')
parser.add_argument('--port', default=19999, type=int, help='port id')
args = parser.parse_args()

app = Flask(__name__)


# Serve the demo page (templates/index.html) at the site root.
@app.route('/')
def index():
    return render_template('index.html')


if __name__ == '__main__':
    # Listen on all interfaces so the page is reachable from other machines.
    app.run(host='0.0.0.0', port=args.port, debug=True)
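To try the page locally, run `python app.py` (optionally passing `--port`) and open http://127.0.0.1:19999 in a browser; Flask serves `templates/index.html` at the root route.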
Source diff not rendered because the file is too large.
# paddlespeech serving web demo

![demo](./paddle_web_demo.png)

step1: Start the streaming speech recognition server.

```
# start the streaming speech recognition service
cd PaddleSpeech/demos/streaming_asr_server
paddlespeech_server start --config_file conf/ws_conformer_wenetspeech_application_faster.yaml
```

step2: Open `index.html` under the `web` directory in Chrome.

step3: Click `连接` (Connect) and verify that the WebSocket connection succeeds.

step4: Click start recording (allow recording when the browser prompts).
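If you want to exercise the same streaming server without the browser page, the following minimal Python client is one way to do it. This is a sketch, not the project's official client: the port (8090), the `/paddlespeech/asr/streaming` path, and the start/end JSON handshake are assumptions based on the default streaming_asr_server configuration; only the `websocket-client` library calls themselves are standard.

```python
# Hypothetical minimal streaming client (assumptions noted above).
import json
import wave

from websocket import create_connection  # pip install websocket-client

# Assumed endpoint of the server started in step1.
WS_URL = 'ws://127.0.0.1:8090/paddlespeech/asr/streaming'

ws = create_connection(WS_URL)
# Assumed handshake: announce the stream before sending audio.
ws.send(json.dumps({'name': 'test.wav', 'signal': 'start', 'nbest': 1}))

with wave.open('test.wav', 'rb') as f:  # expects 16 kHz, 16-bit mono PCM
    chunk = f.readframes(1600)  # roughly 0.1 s of audio per message
    while chunk:
        ws.send_binary(chunk)
        print(ws.recv())  # partial result; timing depends on the server
        chunk = f.readframes(1600)

# Assumed handshake: the "end" signal flushes the final result.
ws.send(json.dumps({'name': 'test.wav', 'signal': 'end'}))
print(ws.recv())
ws.close()
```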
/*!
* Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome
* License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License)
*/@font-face{font-family:'FontAwesome';src:url('../fonts/fontawesome-webfont.eot?v=4.7.0');src:url('../fonts/fontawesome-webfont.eot?#iefix&v=4.7.0') format('embedded-opentype'),url('../fonts/fontawesome-webfont.woff2?v=4.7.0') format('woff2'),url('../fonts/fontawesome-webfont.woff?v=4.7.0') format('woff'),url('../fonts/fontawesome-webfont.ttf?v=4.7.0') format('truetype'),url('../fonts/fontawesome-webfont.svg?v=4.7.0#fontawesomeregular') format('svg');font-weight:normal;font-style:normal}.fa{display:inline-block;font:normal normal normal 14px/1 FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.fa-lg{font-size:1.33333333em;line-height:.75em;vertical-align:-15%}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-fw{width:1.28571429em;text-align:center}.fa-ul{padding-left:0;margin-left:2.14285714em;list-style-type:none}.fa-ul>li{position:relative}.fa-li{position:absolute;left:-2.14285714em;width:2.14285714em;top:.14285714em;text-align:center}.fa-li.fa-lg{left:-1.85714286em}.fa-border{padding:.2em .25em .15em;border:solid .08em #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa.fa-pull-left{margin-right:.3em}.fa.fa-pull-right{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left{margin-right:.3em}.fa.pull-right{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s infinite linear;animation:fa-spin 2s infinite linear}.fa-pulse{-webkit-animation:fa-spin 1s infinite steps(8);animation:fa-spin 1s infinite steps(8)}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}100%{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}100%{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scale(-1, 1);-ms-transform:scale(-1, 1);transform:scale(-1, 1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scale(1, -1);-ms-transform:scale(1, -1);transform:scale(1, -1)}:root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270,:root .fa-flip-horizontal,:root 
.fa-flip-vertical{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:"\f000"}.fa-music:before{content:"\f001"}.fa-search:before{content:"\f002"}.fa-envelope-o:before{content:"\f003"}.fa-heart:before{content:"\f004"}.fa-star:before{content:"\f005"}.fa-star-o:before{content:"\f006"}.fa-user:before{content:"\f007"}.fa-film:before{content:"\f008"}.fa-th-large:before{content:"\f009"}.fa-th:before{content:"\f00a"}.fa-th-list:before{content:"\f00b"}.fa-check:before{content:"\f00c"}.fa-remove:before,.fa-close:before,.fa-times:before{content:"\f00d"}.fa-search-plus:before{content:"\f00e"}.fa-search-minus:before{content:"\f010"}.fa-power-off:before{content:"\f011"}.fa-signal:before{content:"\f012"}.fa-gear:before,.fa-cog:before{content:"\f013"}.fa-trash-o:before{content:"\f014"}.fa-home:before{content:"\f015"}.fa-file-o:before{content:"\f016"}.fa-clock-o:before{content:"\f017"}.fa-road:before{content:"\f018"}.fa-download:before{content:"\f019"}.fa-arrow-circle-o-down:before{content:"\f01a"}.fa-arrow-circle-o-up:before{content:"\f01b"}.fa-inbox:before{content:"\f01c"}.fa-play-circle-o:before{content:"\f01d"}.fa-rotate-right:before,.fa-repeat:before{content:"\f01e"}.fa-refresh:before{content:"\f021"}.fa-list-alt:before{content:"\f022"}.fa-lock:before{content:"\f023"}.fa-flag:before{content:"\f024"}.fa-headphones:before{content:"\f025"}.fa-volume-off:before{content:"\f026"}.fa-volume-down:before{content:"\f027"}.fa-volume-up:before{content:"\f028"}.fa-qrcode:before{content:"\f029"}.fa-barcode:before{content:"\f02a"}.fa-tag:before{content:"\f02b"}.fa-tags:before{content:"\f02c"}.fa-book:before{content:"\f02d"}.fa-bookmark:before{content:"\f02e"}.fa-print:before{content:"\f02f"}.fa-camera:before{content:"\f030"}.fa-font:before{content:"\f031"}.fa-bold:before{content:"\f032"}.fa-italic:before{content:"\f033"}.fa-text-height:before{content:"\f034"}.fa-text-width:before{content:"\f035"}.fa-align-left:before{content:"\f036"}.fa-align-center:before{content:"\f037"}.fa-align-right:before{content:"\f038"}.fa-align-justify:before{content:"\f039"}.fa-list:before{content:"\f03a"}.fa-dedent:before,.fa-outdent:before{content:"\f03b"}.fa-indent:before{content:"\f03c"}.fa-video-camera:before{content:"\f03d"}.fa-photo:before,.fa-image:before,.fa-picture-o:before{content:"\f03e"}.fa-pencil:before{content:"\f040"}.fa-map-marker:before{content:"\f041"}.fa-adjust:before{content:"\f042"}.fa-tint:before{content:"\f043"}.fa-edit:before,.fa-pencil-square-o:before{content:"\f044"}.fa-share-square-o:before{content:"\f045"}.fa-check-square-o:before{content:"\f046"}.fa-arrows:before{content:"\f047"}.fa-step-backward:before{content:"\f048"}.fa-fast-backward:before{content:"\f049"}.fa-backward:before{content:"\f04a"}.fa-play:before{content:"\f04b"}.fa-pause:before{content:"\f04c"}.fa-stop:before{content:"\f04d"}.fa-forward:before{content:"\f04e"}.fa-fast-forward:before{content:"\f050"}.fa-step-forward:before{content:"\f051"}.fa-eject:before{content:"\f052"}.fa-chevron-left:before{content:"\f053"}.fa-chevron-right:before{content:"\f054"}.fa-plus-circle:before{content:"\f055"}.fa-minus-circle:before{content:"\f056"}.fa-times-circle:before{content:"\f057"}.fa-check-circle:before{content:"\f058"}.fa-question-circle:before{content:"\f059"}.fa-info-circle:before{content:"\f05a"}.fa-cross
hairs:before{content:"\f05b"}.fa-times-circle-o:before{content:"\f05c"}.fa-check-circle-o:before{content:"\f05d"}.fa-ban:before{content:"\f05e"}.fa-arrow-left:before{content:"\f060"}.fa-arrow-right:before{content:"\f061"}.fa-arrow-up:before{content:"\f062"}.fa-arrow-down:before{content:"\f063"}.fa-mail-forward:before,.fa-share:before{content:"\f064"}.fa-expand:before{content:"\f065"}.fa-compress:before{content:"\f066"}.fa-plus:before{content:"\f067"}.fa-minus:before{content:"\f068"}.fa-asterisk:before{content:"\f069"}.fa-exclamation-circle:before{content:"\f06a"}.fa-gift:before{content:"\f06b"}.fa-leaf:before{content:"\f06c"}.fa-fire:before{content:"\f06d"}.fa-eye:before{content:"\f06e"}.fa-eye-slash:before{content:"\f070"}.fa-warning:before,.fa-exclamation-triangle:before{content:"\f071"}.fa-plane:before{content:"\f072"}.fa-calendar:before{content:"\f073"}.fa-random:before{content:"\f074"}.fa-comment:before{content:"\f075"}.fa-magnet:before{content:"\f076"}.fa-chevron-up:before{content:"\f077"}.fa-chevron-down:before{content:"\f078"}.fa-retweet:before{content:"\f079"}.fa-shopping-cart:before{content:"\f07a"}.fa-folder:before{content:"\f07b"}.fa-folder-open:before{content:"\f07c"}.fa-arrows-v:before{content:"\f07d"}.fa-arrows-h:before{content:"\f07e"}.fa-bar-chart-o:before,.fa-bar-chart:before{content:"\f080"}.fa-twitter-square:before{content:"\f081"}.fa-facebook-square:before{content:"\f082"}.fa-camera-retro:before{content:"\f083"}.fa-key:before{content:"\f084"}.fa-gears:before,.fa-cogs:before{content:"\f085"}.fa-comments:before{content:"\f086"}.fa-thumbs-o-up:before{content:"\f087"}.fa-thumbs-o-down:before{content:"\f088"}.fa-star-half:before{content:"\f089"}.fa-heart-o:before{content:"\f08a"}.fa-sign-out:before{content:"\f08b"}.fa-linkedin-square:before{content:"\f08c"}.fa-thumb-tack:before{content:"\f08d"}.fa-external-link:before{content:"\f08e"}.fa-sign-in:before{content:"\f090"}.fa-trophy:before{content:"\f091"}.fa-github-square:before{content:"\f092"}.fa-upload:before{content:"\f093"}.fa-lemon-o:before{content:"\f094"}.fa-phone:before{content:"\f095"}.fa-square-o:before{content:"\f096"}.fa-bookmark-o:before{content:"\f097"}.fa-phone-square:before{content:"\f098"}.fa-twitter:before{content:"\f099"}.fa-facebook-f:before,.fa-facebook:before{content:"\f09a"}.fa-github:before{content:"\f09b"}.fa-unlock:before{content:"\f09c"}.fa-credit-card:before{content:"\f09d"}.fa-feed:before,.fa-rss:before{content:"\f09e"}.fa-hdd-o:before{content:"\f0a0"}.fa-bullhorn:before{content:"\f0a1"}.fa-bell:before{content:"\f0f3"}.fa-certificate:before{content:"\f0a3"}.fa-hand-o-right:before{content:"\f0a4"}.fa-hand-o-left:before{content:"\f0a5"}.fa-hand-o-up:before{content:"\f0a6"}.fa-hand-o-down:before{content:"\f0a7"}.fa-arrow-circle-left:before{content:"\f0a8"}.fa-arrow-circle-right:before{content:"\f0a9"}.fa-arrow-circle-up:before{content:"\f0aa"}.fa-arrow-circle-down:before{content:"\f0ab"}.fa-globe:before{content:"\f0ac"}.fa-wrench:before{content:"\f0ad"}.fa-tasks:before{content:"\f0ae"}.fa-filter:before{content:"\f0b0"}.fa-briefcase:before{content:"\f0b1"}.fa-arrows-alt:before{content:"\f0b2"}.fa-group:before,.fa-users:before{content:"\f0c0"}.fa-chain:before,.fa-link:before{content:"\f0c1"}.fa-cloud:before{content:"\f0c2"}.fa-flask:before{content:"\f0c3"}.fa-cut:before,.fa-scissors:before{content:"\f0c4"}.fa-copy:before,.fa-files-o:before{content:"\f0c5"}.fa-paperclip:before{content:"\f0c6"}.fa-save:before,.fa-floppy-o:before{content:"\f0c7"}.fa-square:before{content:"\f0c8"}.fa-navicon:before,.fa-reor
der:before,.fa-bars:before{content:"\f0c9"}.fa-list-ul:before{content:"\f0ca"}.fa-list-ol:before{content:"\f0cb"}.fa-strikethrough:before{content:"\f0cc"}.fa-underline:before{content:"\f0cd"}.fa-table:before{content:"\f0ce"}.fa-magic:before{content:"\f0d0"}.fa-truck:before{content:"\f0d1"}.fa-pinterest:before{content:"\f0d2"}.fa-pinterest-square:before{content:"\f0d3"}.fa-google-plus-square:before{content:"\f0d4"}.fa-google-plus:before{content:"\f0d5"}.fa-money:before{content:"\f0d6"}.fa-caret-down:before{content:"\f0d7"}.fa-caret-up:before{content:"\f0d8"}.fa-caret-left:before{content:"\f0d9"}.fa-caret-right:before{content:"\f0da"}.fa-columns:before{content:"\f0db"}.fa-unsorted:before,.fa-sort:before{content:"\f0dc"}.fa-sort-down:before,.fa-sort-desc:before{content:"\f0dd"}.fa-sort-up:before,.fa-sort-asc:before{content:"\f0de"}.fa-envelope:before{content:"\f0e0"}.fa-linkedin:before{content:"\f0e1"}.fa-rotate-left:before,.fa-undo:before{content:"\f0e2"}.fa-legal:before,.fa-gavel:before{content:"\f0e3"}.fa-dashboard:before,.fa-tachometer:before{content:"\f0e4"}.fa-comment-o:before{content:"\f0e5"}.fa-comments-o:before{content:"\f0e6"}.fa-flash:before,.fa-bolt:before{content:"\f0e7"}.fa-sitemap:before{content:"\f0e8"}.fa-umbrella:before{content:"\f0e9"}.fa-paste:before,.fa-clipboard:before{content:"\f0ea"}.fa-lightbulb-o:before{content:"\f0eb"}.fa-exchange:before{content:"\f0ec"}.fa-cloud-download:before{content:"\f0ed"}.fa-cloud-upload:before{content:"\f0ee"}.fa-user-md:before{content:"\f0f0"}.fa-stethoscope:before{content:"\f0f1"}.fa-suitcase:before{content:"\f0f2"}.fa-bell-o:before{content:"\f0a2"}.fa-coffee:before{content:"\f0f4"}.fa-cutlery:before{content:"\f0f5"}.fa-file-text-o:before{content:"\f0f6"}.fa-building-o:before{content:"\f0f7"}.fa-hospital-o:before{content:"\f0f8"}.fa-ambulance:before{content:"\f0f9"}.fa-medkit:before{content:"\f0fa"}.fa-fighter-jet:before{content:"\f0fb"}.fa-beer:before{content:"\f0fc"}.fa-h-square:before{content:"\f0fd"}.fa-plus-square:before{content:"\f0fe"}.fa-angle-double-left:before{content:"\f100"}.fa-angle-double-right:before{content:"\f101"}.fa-angle-double-up:before{content:"\f102"}.fa-angle-double-down:before{content:"\f103"}.fa-angle-left:before{content:"\f104"}.fa-angle-right:before{content:"\f105"}.fa-angle-up:before{content:"\f106"}.fa-angle-down:before{content:"\f107"}.fa-desktop:before{content:"\f108"}.fa-laptop:before{content:"\f109"}.fa-tablet:before{content:"\f10a"}.fa-mobile-phone:before,.fa-mobile:before{content:"\f10b"}.fa-circle-o:before{content:"\f10c"}.fa-quote-left:before{content:"\f10d"}.fa-quote-right:before{content:"\f10e"}.fa-spinner:before{content:"\f110"}.fa-circle:before{content:"\f111"}.fa-mail-reply:before,.fa-reply:before{content:"\f112"}.fa-github-alt:before{content:"\f113"}.fa-folder-o:before{content:"\f114"}.fa-folder-open-o:before{content:"\f115"}.fa-smile-o:before{content:"\f118"}.fa-frown-o:before{content:"\f119"}.fa-meh-o:before{content:"\f11a"}.fa-gamepad:before{content:"\f11b"}.fa-keyboard-o:before{content:"\f11c"}.fa-flag-o:before{content:"\f11d"}.fa-flag-checkered:before{content:"\f11e"}.fa-terminal:before{content:"\f120"}.fa-code:before{content:"\f121"}.fa-mail-reply-all:before,.fa-reply-all:before{content:"\f122"}.fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:"\f123"}.fa-location-arrow:before{content:"\f124"}.fa-crop:before{content:"\f125"}.fa-code-fork:before{content:"\f126"}.fa-unlink:before,.fa-chain-broken:before{content:"\f127"}.fa-question:before{content:"\f128"}.fa-i
nfo:before{content:"\f129"}.fa-exclamation:before{content:"\f12a"}.fa-superscript:before{content:"\f12b"}.fa-subscript:before{content:"\f12c"}.fa-eraser:before{content:"\f12d"}.fa-puzzle-piece:before{content:"\f12e"}.fa-microphone:before{content:"\f130"}.fa-microphone-slash:before{content:"\f131"}.fa-shield:before{content:"\f132"}.fa-calendar-o:before{content:"\f133"}.fa-fire-extinguisher:before{content:"\f134"}.fa-rocket:before{content:"\f135"}.fa-maxcdn:before{content:"\f136"}.fa-chevron-circle-left:before{content:"\f137"}.fa-chevron-circle-right:before{content:"\f138"}.fa-chevron-circle-up:before{content:"\f139"}.fa-chevron-circle-down:before{content:"\f13a"}.fa-html5:before{content:"\f13b"}.fa-css3:before{content:"\f13c"}.fa-anchor:before{content:"\f13d"}.fa-unlock-alt:before{content:"\f13e"}.fa-bullseye:before{content:"\f140"}.fa-ellipsis-h:before{content:"\f141"}.fa-ellipsis-v:before{content:"\f142"}.fa-rss-square:before{content:"\f143"}.fa-play-circle:before{content:"\f144"}.fa-ticket:before{content:"\f145"}.fa-minus-square:before{content:"\f146"}.fa-minus-square-o:before{content:"\f147"}.fa-level-up:before{content:"\f148"}.fa-level-down:before{content:"\f149"}.fa-check-square:before{content:"\f14a"}.fa-pencil-square:before{content:"\f14b"}.fa-external-link-square:before{content:"\f14c"}.fa-share-square:before{content:"\f14d"}.fa-compass:before{content:"\f14e"}.fa-toggle-down:before,.fa-caret-square-o-down:before{content:"\f150"}.fa-toggle-up:before,.fa-caret-square-o-up:before{content:"\f151"}.fa-toggle-right:before,.fa-caret-square-o-right:before{content:"\f152"}.fa-euro:before,.fa-eur:before{content:"\f153"}.fa-gbp:before{content:"\f154"}.fa-dollar:before,.fa-usd:before{content:"\f155"}.fa-rupee:before,.fa-inr:before{content:"\f156"}.fa-cny:before,.fa-rmb:before,.fa-yen:before,.fa-jpy:before{content:"\f157"}.fa-ruble:before,.fa-rouble:before,.fa-rub:before{content:"\f158"}.fa-won:before,.fa-krw:before{content:"\f159"}.fa-bitcoin:before,.fa-btc:before{content:"\f15a"}.fa-file:before{content:"\f15b"}.fa-file-text:before{content:"\f15c"}.fa-sort-alpha-asc:before{content:"\f15d"}.fa-sort-alpha-desc:before{content:"\f15e"}.fa-sort-amount-asc:before{content:"\f160"}.fa-sort-amount-desc:before{content:"\f161"}.fa-sort-numeric-asc:before{content:"\f162"}.fa-sort-numeric-desc:before{content:"\f163"}.fa-thumbs-up:before{content:"\f164"}.fa-thumbs-down:before{content:"\f165"}.fa-youtube-square:before{content:"\f166"}.fa-youtube:before{content:"\f167"}.fa-xing:before{content:"\f168"}.fa-xing-square:before{content:"\f169"}.fa-youtube-play:before{content:"\f16a"}.fa-dropbox:before{content:"\f16b"}.fa-stack-overflow:before{content:"\f16c"}.fa-instagram:before{content:"\f16d"}.fa-flickr:before{content:"\f16e"}.fa-adn:before{content:"\f170"}.fa-bitbucket:before{content:"\f171"}.fa-bitbucket-square:before{content:"\f172"}.fa-tumblr:before{content:"\f173"}.fa-tumblr-square:before{content:"\f174"}.fa-long-arrow-down:before{content:"\f175"}.fa-long-arrow-up:before{content:"\f176"}.fa-long-arrow-left:before{content:"\f177"}.fa-long-arrow-right:before{content:"\f178"}.fa-apple:before{content:"\f179"}.fa-windows:before{content:"\f17a"}.fa-android:before{content:"\f17b"}.fa-linux:before{content:"\f17c"}.fa-dribbble:before{content:"\f17d"}.fa-skype:before{content:"\f17e"}.fa-foursquare:before{content:"\f180"}.fa-trello:before{content:"\f181"}.fa-female:before{content:"\f182"}.fa-male:before{content:"\f183"}.fa-gittip:before,.fa-gratipay:before{content:"\f184"}.fa-sun-o:before{content:"\f185"}.fa-moon-o:bef
ore{content:"\f186"}.fa-archive:before{content:"\f187"}.fa-bug:before{content:"\f188"}.fa-vk:before{content:"\f189"}.fa-weibo:before{content:"\f18a"}.fa-renren:before{content:"\f18b"}.fa-pagelines:before{content:"\f18c"}.fa-stack-exchange:before{content:"\f18d"}.fa-arrow-circle-o-right:before{content:"\f18e"}.fa-arrow-circle-o-left:before{content:"\f190"}.fa-toggle-left:before,.fa-caret-square-o-left:before{content:"\f191"}.fa-dot-circle-o:before{content:"\f192"}.fa-wheelchair:before{content:"\f193"}.fa-vimeo-square:before{content:"\f194"}.fa-turkish-lira:before,.fa-try:before{content:"\f195"}.fa-plus-square-o:before{content:"\f196"}.fa-space-shuttle:before{content:"\f197"}.fa-slack:before{content:"\f198"}.fa-envelope-square:before{content:"\f199"}.fa-wordpress:before{content:"\f19a"}.fa-openid:before{content:"\f19b"}.fa-institution:before,.fa-bank:before,.fa-university:before{content:"\f19c"}.fa-mortar-board:before,.fa-graduation-cap:before{content:"\f19d"}.fa-yahoo:before{content:"\f19e"}.fa-google:before{content:"\f1a0"}.fa-reddit:before{content:"\f1a1"}.fa-reddit-square:before{content:"\f1a2"}.fa-stumbleupon-circle:before{content:"\f1a3"}.fa-stumbleupon:before{content:"\f1a4"}.fa-delicious:before{content:"\f1a5"}.fa-digg:before{content:"\f1a6"}.fa-pied-piper-pp:before{content:"\f1a7"}.fa-pied-piper-alt:before{content:"\f1a8"}.fa-drupal:before{content:"\f1a9"}.fa-joomla:before{content:"\f1aa"}.fa-language:before{content:"\f1ab"}.fa-fax:before{content:"\f1ac"}.fa-building:before{content:"\f1ad"}.fa-child:before{content:"\f1ae"}.fa-paw:before{content:"\f1b0"}.fa-spoon:before{content:"\f1b1"}.fa-cube:before{content:"\f1b2"}.fa-cubes:before{content:"\f1b3"}.fa-behance:before{content:"\f1b4"}.fa-behance-square:before{content:"\f1b5"}.fa-steam:before{content:"\f1b6"}.fa-steam-square:before{content:"\f1b7"}.fa-recycle:before{content:"\f1b8"}.fa-automobile:before,.fa-car:before{content:"\f1b9"}.fa-cab:before,.fa-taxi:before{content:"\f1ba"}.fa-tree:before{content:"\f1bb"}.fa-spotify:before{content:"\f1bc"}.fa-deviantart:before{content:"\f1bd"}.fa-soundcloud:before{content:"\f1be"}.fa-database:before{content:"\f1c0"}.fa-file-pdf-o:before{content:"\f1c1"}.fa-file-word-o:before{content:"\f1c2"}.fa-file-excel-o:before{content:"\f1c3"}.fa-file-powerpoint-o:before{content:"\f1c4"}.fa-file-photo-o:before,.fa-file-picture-o:before,.fa-file-image-o:before{content:"\f1c5"}.fa-file-zip-o:before,.fa-file-archive-o:before{content:"\f1c6"}.fa-file-sound-o:before,.fa-file-audio-o:before{content:"\f1c7"}.fa-file-movie-o:before,.fa-file-video-o:before{content:"\f1c8"}.fa-file-code-o:before{content:"\f1c9"}.fa-vine:before{content:"\f1ca"}.fa-codepen:before{content:"\f1cb"}.fa-jsfiddle:before{content:"\f1cc"}.fa-life-bouy:before,.fa-life-buoy:before,.fa-life-saver:before,.fa-support:before,.fa-life-ring:before{content:"\f1cd"}.fa-circle-o-notch:before{content:"\f1ce"}.fa-ra:before,.fa-resistance:before,.fa-rebel:before{content:"\f1d0"}.fa-ge:before,.fa-empire:before{content:"\f1d1"}.fa-git-square:before{content:"\f1d2"}.fa-git:before{content:"\f1d3"}.fa-y-combinator-square:before,.fa-yc-square:before,.fa-hacker-news:before{content:"\f1d4"}.fa-tencent-weibo:before{content:"\f1d5"}.fa-qq:before{content:"\f1d6"}.fa-wechat:before,.fa-weixin:before{content:"\f1d7"}.fa-send:before,.fa-paper-plane:before{content:"\f1d8"}.fa-send-o:before,.fa-paper-plane-o:before{content:"\f1d9"}.fa-history:before{content:"\f1da"}.fa-circle-thin:before{content:"\f1db"}.fa-header:before{content:"\f1dc"}.fa-paragraph:before{content:"\f1dd"}
.fa-sliders:before{content:"\f1de"}.fa-share-alt:before{content:"\f1e0"}.fa-share-alt-square:before{content:"\f1e1"}.fa-bomb:before{content:"\f1e2"}.fa-soccer-ball-o:before,.fa-futbol-o:before{content:"\f1e3"}.fa-tty:before{content:"\f1e4"}.fa-binoculars:before{content:"\f1e5"}.fa-plug:before{content:"\f1e6"}.fa-slideshare:before{content:"\f1e7"}.fa-twitch:before{content:"\f1e8"}.fa-yelp:before{content:"\f1e9"}.fa-newspaper-o:before{content:"\f1ea"}.fa-wifi:before{content:"\f1eb"}.fa-calculator:before{content:"\f1ec"}.fa-paypal:before{content:"\f1ed"}.fa-google-wallet:before{content:"\f1ee"}.fa-cc-visa:before{content:"\f1f0"}.fa-cc-mastercard:before{content:"\f1f1"}.fa-cc-discover:before{content:"\f1f2"}.fa-cc-amex:before{content:"\f1f3"}.fa-cc-paypal:before{content:"\f1f4"}.fa-cc-stripe:before{content:"\f1f5"}.fa-bell-slash:before{content:"\f1f6"}.fa-bell-slash-o:before{content:"\f1f7"}.fa-trash:before{content:"\f1f8"}.fa-copyright:before{content:"\f1f9"}.fa-at:before{content:"\f1fa"}.fa-eyedropper:before{content:"\f1fb"}.fa-paint-brush:before{content:"\f1fc"}.fa-birthday-cake:before{content:"\f1fd"}.fa-area-chart:before{content:"\f1fe"}.fa-pie-chart:before{content:"\f200"}.fa-line-chart:before{content:"\f201"}.fa-lastfm:before{content:"\f202"}.fa-lastfm-square:before{content:"\f203"}.fa-toggle-off:before{content:"\f204"}.fa-toggle-on:before{content:"\f205"}.fa-bicycle:before{content:"\f206"}.fa-bus:before{content:"\f207"}.fa-ioxhost:before{content:"\f208"}.fa-angellist:before{content:"\f209"}.fa-cc:before{content:"\f20a"}.fa-shekel:before,.fa-sheqel:before,.fa-ils:before{content:"\f20b"}.fa-meanpath:before{content:"\f20c"}.fa-buysellads:before{content:"\f20d"}.fa-connectdevelop:before{content:"\f20e"}.fa-dashcube:before{content:"\f210"}.fa-forumbee:before{content:"\f211"}.fa-leanpub:before{content:"\f212"}.fa-sellsy:before{content:"\f213"}.fa-shirtsinbulk:before{content:"\f214"}.fa-simplybuilt:before{content:"\f215"}.fa-skyatlas:before{content:"\f216"}.fa-cart-plus:before{content:"\f217"}.fa-cart-arrow-down:before{content:"\f218"}.fa-diamond:before{content:"\f219"}.fa-ship:before{content:"\f21a"}.fa-user-secret:before{content:"\f21b"}.fa-motorcycle:before{content:"\f21c"}.fa-street-view:before{content:"\f21d"}.fa-heartbeat:before{content:"\f21e"}.fa-venus:before{content:"\f221"}.fa-mars:before{content:"\f222"}.fa-mercury:before{content:"\f223"}.fa-intersex:before,.fa-transgender:before{content:"\f224"}.fa-transgender-alt:before{content:"\f225"}.fa-venus-double:before{content:"\f226"}.fa-mars-double:before{content:"\f227"}.fa-venus-mars:before{content:"\f228"}.fa-mars-stroke:before{content:"\f229"}.fa-mars-stroke-v:before{content:"\f22a"}.fa-mars-stroke-h:before{content:"\f22b"}.fa-neuter:before{content:"\f22c"}.fa-genderless:before{content:"\f22d"}.fa-facebook-official:before{content:"\f230"}.fa-pinterest-p:before{content:"\f231"}.fa-whatsapp:before{content:"\f232"}.fa-server:before{content:"\f233"}.fa-user-plus:before{content:"\f234"}.fa-user-times:before{content:"\f235"}.fa-hotel:before,.fa-bed:before{content:"\f236"}.fa-viacoin:before{content:"\f237"}.fa-train:before{content:"\f238"}.fa-subway:before{content:"\f239"}.fa-medium:before{content:"\f23a"}.fa-yc:before,.fa-y-combinator:before{content:"\f23b"}.fa-optin-monster:before{content:"\f23c"}.fa-opencart:before{content:"\f23d"}.fa-expeditedssl:before{content:"\f23e"}.fa-battery-4:before,.fa-battery:before,.fa-battery-full:before{content:"\f240"}.fa-battery-3:before,.fa-battery-three-quarters:before{content:"\f241"}.fa-battery-2:before
,.fa-battery-half:before{content:"\f242"}.fa-battery-1:before,.fa-battery-quarter:before{content:"\f243"}.fa-battery-0:before,.fa-battery-empty:before{content:"\f244"}.fa-mouse-pointer:before{content:"\f245"}.fa-i-cursor:before{content:"\f246"}.fa-object-group:before{content:"\f247"}.fa-object-ungroup:before{content:"\f248"}.fa-sticky-note:before{content:"\f249"}.fa-sticky-note-o:before{content:"\f24a"}.fa-cc-jcb:before{content:"\f24b"}.fa-cc-diners-club:before{content:"\f24c"}.fa-clone:before{content:"\f24d"}.fa-balance-scale:before{content:"\f24e"}.fa-hourglass-o:before{content:"\f250"}.fa-hourglass-1:before,.fa-hourglass-start:before{content:"\f251"}.fa-hourglass-2:before,.fa-hourglass-half:before{content:"\f252"}.fa-hourglass-3:before,.fa-hourglass-end:before{content:"\f253"}.fa-hourglass:before{content:"\f254"}.fa-hand-grab-o:before,.fa-hand-rock-o:before{content:"\f255"}.fa-hand-stop-o:before,.fa-hand-paper-o:before{content:"\f256"}.fa-hand-scissors-o:before{content:"\f257"}.fa-hand-lizard-o:before{content:"\f258"}.fa-hand-spock-o:before{content:"\f259"}.fa-hand-pointer-o:before{content:"\f25a"}.fa-hand-peace-o:before{content:"\f25b"}.fa-trademark:before{content:"\f25c"}.fa-registered:before{content:"\f25d"}.fa-creative-commons:before{content:"\f25e"}.fa-gg:before{content:"\f260"}.fa-gg-circle:before{content:"\f261"}.fa-tripadvisor:before{content:"\f262"}.fa-odnoklassniki:before{content:"\f263"}.fa-odnoklassniki-square:before{content:"\f264"}.fa-get-pocket:before{content:"\f265"}.fa-wikipedia-w:before{content:"\f266"}.fa-safari:before{content:"\f267"}.fa-chrome:before{content:"\f268"}.fa-firefox:before{content:"\f269"}.fa-opera:before{content:"\f26a"}.fa-internet-explorer:before{content:"\f26b"}.fa-tv:before,.fa-television:before{content:"\f26c"}.fa-contao:before{content:"\f26d"}.fa-500px:before{content:"\f26e"}.fa-amazon:before{content:"\f270"}.fa-calendar-plus-o:before{content:"\f271"}.fa-calendar-minus-o:before{content:"\f272"}.fa-calendar-times-o:before{content:"\f273"}.fa-calendar-check-o:before{content:"\f274"}.fa-industry:before{content:"\f275"}.fa-map-pin:before{content:"\f276"}.fa-map-signs:before{content:"\f277"}.fa-map-o:before{content:"\f278"}.fa-map:before{content:"\f279"}.fa-commenting:before{content:"\f27a"}.fa-commenting-o:before{content:"\f27b"}.fa-houzz:before{content:"\f27c"}.fa-vimeo:before{content:"\f27d"}.fa-black-tie:before{content:"\f27e"}.fa-fonticons:before{content:"\f280"}.fa-reddit-alien:before{content:"\f281"}.fa-edge:before{content:"\f282"}.fa-credit-card-alt:before{content:"\f283"}.fa-codiepie:before{content:"\f284"}.fa-modx:before{content:"\f285"}.fa-fort-awesome:before{content:"\f286"}.fa-usb:before{content:"\f287"}.fa-product-hunt:before{content:"\f288"}.fa-mixcloud:before{content:"\f289"}.fa-scribd:before{content:"\f28a"}.fa-pause-circle:before{content:"\f28b"}.fa-pause-circle-o:before{content:"\f28c"}.fa-stop-circle:before{content:"\f28d"}.fa-stop-circle-o:before{content:"\f28e"}.fa-shopping-bag:before{content:"\f290"}.fa-shopping-basket:before{content:"\f291"}.fa-hashtag:before{content:"\f292"}.fa-bluetooth:before{content:"\f293"}.fa-bluetooth-b:before{content:"\f294"}.fa-percent:before{content:"\f295"}.fa-gitlab:before{content:"\f296"}.fa-wpbeginner:before{content:"\f297"}.fa-wpforms:before{content:"\f298"}.fa-envira:before{content:"\f299"}.fa-universal-access:before{content:"\f29a"}.fa-wheelchair-alt:before{content:"\f29b"}.fa-question-circle-o:before{content:"\f29c"}.fa-blind:before{content:"\f29d"}.fa-audio-description:before{content:"\f29e"}.f
a-volume-control-phone:before{content:"\f2a0"}.fa-braille:before{content:"\f2a1"}.fa-assistive-listening-systems:before{content:"\f2a2"}.fa-asl-interpreting:before,.fa-american-sign-language-interpreting:before{content:"\f2a3"}.fa-deafness:before,.fa-hard-of-hearing:before,.fa-deaf:before{content:"\f2a4"}.fa-glide:before{content:"\f2a5"}.fa-glide-g:before{content:"\f2a6"}.fa-signing:before,.fa-sign-language:before{content:"\f2a7"}.fa-low-vision:before{content:"\f2a8"}.fa-viadeo:before{content:"\f2a9"}.fa-viadeo-square:before{content:"\f2aa"}.fa-snapchat:before{content:"\f2ab"}.fa-snapchat-ghost:before{content:"\f2ac"}.fa-snapchat-square:before{content:"\f2ad"}.fa-pied-piper:before{content:"\f2ae"}.fa-first-order:before{content:"\f2b0"}.fa-yoast:before{content:"\f2b1"}.fa-themeisle:before{content:"\f2b2"}.fa-google-plus-circle:before,.fa-google-plus-official:before{content:"\f2b3"}.fa-fa:before,.fa-font-awesome:before{content:"\f2b4"}.fa-handshake-o:before{content:"\f2b5"}.fa-envelope-open:before{content:"\f2b6"}.fa-envelope-open-o:before{content:"\f2b7"}.fa-linode:before{content:"\f2b8"}.fa-address-book:before{content:"\f2b9"}.fa-address-book-o:before{content:"\f2ba"}.fa-vcard:before,.fa-address-card:before{content:"\f2bb"}.fa-vcard-o:before,.fa-address-card-o:before{content:"\f2bc"}.fa-user-circle:before{content:"\f2bd"}.fa-user-circle-o:before{content:"\f2be"}.fa-user-o:before{content:"\f2c0"}.fa-id-badge:before{content:"\f2c1"}.fa-drivers-license:before,.fa-id-card:before{content:"\f2c2"}.fa-drivers-license-o:before,.fa-id-card-o:before{content:"\f2c3"}.fa-quora:before{content:"\f2c4"}.fa-free-code-camp:before{content:"\f2c5"}.fa-telegram:before{content:"\f2c6"}.fa-thermometer-4:before,.fa-thermometer:before,.fa-thermometer-full:before{content:"\f2c7"}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:"\f2c8"}.fa-thermometer-2:before,.fa-thermometer-half:before{content:"\f2c9"}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:"\f2ca"}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:"\f2cb"}.fa-shower:before{content:"\f2cc"}.fa-bathtub:before,.fa-s15:before,.fa-bath:before{content:"\f2cd"}.fa-podcast:before{content:"\f2ce"}.fa-window-maximize:before{content:"\f2d0"}.fa-window-minimize:before{content:"\f2d1"}.fa-window-restore:before{content:"\f2d2"}.fa-times-rectangle:before,.fa-window-close:before{content:"\f2d3"}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:"\f2d4"}.fa-bandcamp:before{content:"\f2d5"}.fa-grav:before{content:"\f2d6"}.fa-etsy:before{content:"\f2d7"}.fa-imdb:before{content:"\f2d8"}.fa-ravelry:before{content:"\f2d9"}.fa-eercast:before{content:"\f2da"}.fa-microchip:before{content:"\f2db"}.fa-snowflake-o:before{content:"\f2dc"}.fa-superpowers:before{content:"\f2dd"}.fa-wpexplorer:before{content:"\f2de"}.fa-meetup:before{content:"\f2e0"}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0, 0, 0, 0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}
/*
* @Author: baipengxia
* @Date: 2021-03-12 11:44:28
* @Last Modified by: baipengxia
* @Last Modified time: 2021-03-12 15:14:24
*/
/** COMMON RESET **/
* {
-webkit-tap-highlight-color: rgba(0, 0, 0, 0);
}
body,
h1,
h2,
h3,
h4,
h5,
h6,
hr,
p,
dl,
dt,
dd,
ul,
ol,
li,
fieldset,
legend,
button,
input,
textarea,
th,
td {
margin: 0;
padding: 0;
color: #000;
}
body {
font-size: 14px;
}
html, body {
min-width: 1200px;
}
button,
input,
select,
textarea {
font-size: 14px;
}
h1 {
font-size: 18px;
}
h2 {
font-size: 14px;
}
h3 {
font-size: 14px;
}
ul,
ol,
li {
list-style: none;
}
a {
text-decoration: none;
}
a:hover {
text-decoration: none;
}
fieldset,
img {
border: none;
}
table {
border-collapse: collapse;
border-spacing: 0;
}
i {
font-style: normal;
}
label {
position: inherit;
}
.clearfix:after {
content: ".";
display: block;
height: 0;
clear: both;
visibility: hidden;
}
.clearfix {
zoom: 1;
display: block;
}
html,
body {
font-family: Tahoma, Arial, 'microsoft yahei', 'Roboto', 'Droid Sans', 'Helvetica Neue', 'Droid Sans Fallback', 'Heiti SC', 'Hiragino Sans GB', 'Simsun', sans-serif;
}
.audio-banner {
width: 100%;
overflow: auto;
padding: 0;
background: url('../image/voice-dictation.svg');
background-size: cover;
}
.weaper {
width: 1200px;
height: 155px;
margin: 72px auto;
}
.text-content {
width: 670px;
height: 100%;
float: left;
}
.text-content .title {
font-size: 34px;
font-family: 'PingFangSC-Medium';
font-weight: 500;
color: rgba(255, 255, 255, 1);
line-height: 48px;
}
.text-content .con {
font-size: 16px;
font-family: PingFangSC-Light;
font-weight: 300;
color: rgba(255, 255, 255, 1);
line-height: 30px;
}
.img-con {
width: 416px;
height: 100%;
float: right;
}
.img-con img {
width: 100%;
height: 100%;
}
.con-container {
margin-top: 34px;
}
.audio-advantage {
background: #f8f9fa;
}
.asr-advantage {
width: 1200px;
margin: 0 auto;
}
.asr-advantage h2 {
text-align: center;
font-size: 22px;
padding: 30px 0 0 0;
}
.asr-advantage > ul > li {
box-sizing: border-box;
padding: 0 16px;
width: 33%;
text-align: center;
margin-bottom: 35px;
}
.asr-advantage > ul > li .icons{
margin-top: 10px;
margin-bottom: 20px;
width: 42px;
height: 42px;
}
.service-item-content {
margin-top: 35px;
display: flex;
justify-content: center;
flex-wrap: wrap;
}
.service-item-content img {
width: 160px;
vertical-align: bottom;
}
.service-item-content > li {
box-sizing: border-box;
padding: 0 16px;
width: 33%;
text-align: center;
margin-bottom: 35px;
}
.service-item-content > li .service-item-content-title {
line-height: 1.5;
font-weight: 700;
margin-top: 10px;
}
.service-item-content > li .service-item-content-desc {
margin-top: 5px;
line-height: 1.8;
color: #657384;
}
.audio-scene-con {
width: 100%;
padding-bottom: 84px;
background: #fff;
}
.audio-scene {
overflow: auto;
width: 1200px;
background: #fff;
text-align: center;
padding: 0;
margin: 0 auto;
}
.audio-scene h2 {
padding: 30px 0 0 0;
font-size: 22px;
text-align: center;
}
.audio-experience {
width: 100%;
height: 538px;
background: #fff;
padding: 0;
margin: 0;
overflow: auto;
}
.asr-box {
width: 1200px;
height: 394px;
margin: 64px auto;
}
.asr-box h2 {
font-size: 22px;
text-align: center;
margin-bottom: 64px;
}
.voice-container {
position: relative;
width: 1200px;
height: 308px;
background: rgba(255, 255, 255, 1);
border-radius: 8px;
border: 1px solid rgba(225, 225, 225, 1);
}
.voice-container .voice {
height: 236px;
width: 100%;
border-radius: 8px;
}
.voice-container .voice textarea {
height: 100%;
width: 100%;
border: none;
outline: none;
border-radius: 8px;
padding: 25px;
font-size: 14px;
box-sizing: border-box;
resize: none;
}
.voice-input {
width: 100%;
height: 72px;
box-sizing: border-box;
padding-left: 35px;
background: rgba(242, 244, 245, 1);
border-radius: 8px;
line-height: 72px;
}
.voice-input .el-select {
width: 492px;
}
.start-voice {
display: inline-block;
margin-left: 10px;
}
.start-voice .time {
margin-right: 25px;
}
.asr-advantage > ul > li {
margin-bottom: 77px;
}
#msg {
width: 100%;
line-height: 40px;
font-size: 14px;
margin-left: 330px;
}
#captcha {
margin-left: 350px !important;
display: inline-block;
position: relative;
}
.black {
position: fixed;
width: 100%;
height: 100%;
z-index: 5;
background: rgba(0, 0, 0, 0.5);
top: 0;
left: 0;
}
.container {
position: fixed;
z-index: 6;
top: 25%;
left: 10%;
}
.audio-scene-con {
width: 100%;
padding-bottom: 84px;
background: #fff;
}
#sound {
color: #fff;
cursor: pointer;
background: #147ede;
padding: 10px;
margin-top: 30px;
margin-left: 135px;
width: 176px;
height: 30px !important;
text-align: center;
line-height: 30px !important;
border-radius: 10px;
}
.con-ten {
position: absolute;
width: 100%;
height: 100%;
z-index: 5;
background: #fff;
opacity: 0.5;
top: 0;
left: 0;
}
.websocket-url {
width: 320px;
height: 20px;
border: 1px solid #dcdfe6;
line-height: 20px;
padding: 10px;
border-radius: 4px;
}
.voice-btn {
color: #fff;
background-color: #409eff;
font-weight: 500;
padding: 12px 20px;
font-size: 14px;
border-radius: 4px;
border: 0;
cursor: pointer;
}
.voice-btn.end {
display: none;
}
.result-text {
background: #fff;
padding: 20px;
}
.voice-footer {
border-top: 1px solid #dddede;
background: #f7f9fa;
text-align: center;
margin-bottom: 8px;
color: #333;
font-size: 12px;
padding: 20px 0;
}
/** line animate **/
.time-box {
display: none;
margin-left: 10px;
width: 300px;
}
.total-time {
font-size: 14px;
color: #545454;
}
.voice-btn.end.show,
.time-box.show {
display: inline;
}
.start-taste-line {
margin-right: 20px;
display: inline-block;
}
.start-taste-line hr {
background-color: #187cff;
width: 3px;
height: 8px;
margin: 0 3px;
display: inline-block;
border: none;
}
.hr {
animation: note 0.2s ease-in-out;
animation-iteration-count: infinite;
animation-direction: alternate;
}
.hr-one {
animation-delay: -0.9s;
}
.hr-two {
animation-delay: -0.8s;
}
.hr-three {
animation-delay: -0.7s;
}
.hr-four {
animation-delay: -0.6s;
}
.hr-five {
animation-delay: -0.5s;
}
.hr-six {
animation-delay: -0.4s;
}
.hr-seven {
animation-delay: -0.3s;
}
.hr-eight {
animation-delay: -0.2s;
}
.hr-nine {
animation-delay: -0.1s;
}
@keyframes note {
from {
transform: scaleY(1);
}
to {
transform: scaleY(4);
}
}
\ No newline at end of file
Source diffs for two files not rendered because they are too large.
SoundRecognizer = {
rec: null,
wave: null,
SampleRate: 16000,
testBitRate: 16,
isCloseRecorder: false,
SendInterval: 300,
realTimeSendTryType: 'pcm',
realTimeSendTryEncBusy: 0,
realTimeSendTryTime: 0,
realTimeSendTryNumber: 0,
transferUploadNumberMax: 0,
realTimeSendTryChunk: null,
soundType: "pcm",
init: function (config) {
this.soundType = config.soundType || 'pcm';
this.SampleRate = config.sampleRate || 16000;
this.recwaveElm = config.recwaveElm || '';
this.TransferUpload = config.translerCallBack || this.TransferProcess;
this.initRecorder();
},
RealTimeSendTryReset: function (type) {
this.realTimeSendTryType = type;
this.realTimeSendTryTime = 0;
},
RealTimeSendTry: function (rec, isClose) {
var that = this;
var t1 = Date.now(), endT = 0, recImpl = Recorder.prototype;
if (this.realTimeSendTryTime == 0) {
this.realTimeSendTryTime = t1;
this.realTimeSendTryEncBusy = 0;
this.realTimeSendTryNumber = 0;
this.transferUploadNumberMax = 0;
this.realTimeSendTryChunk = null;
}
if (!isClose && t1 - this.realTimeSendTryTime < this.SendInterval) {
return;// throttle: only transmit once the buffer spans the configured send interval
}
this.realTimeSendTryTime = t1;
var number = ++this.realTimeSendTryNumber;
//reuse Recorder.SampleData for continuous processing; sample-rate conversion happens as a side effect
var chunk = Recorder.SampleData(rec.buffers, rec.srcSampleRate, this.SampleRate, this.realTimeSendTryChunk, { frameType: isClose ? "" : this.realTimeSendTryType });
//free buffers that are already processed so long recordings don't exhaust memory; stop() must not be called at the end because the data has been cleared
for (var i = this.realTimeSendTryChunk ? this.realTimeSendTryChunk.index : 0; i < chunk.index; i++) {
rec.buffers[i] = null;
}
this.realTimeSendTryChunk = chunk;
//no new data, or the final chunk is too small for mock transcoding
if (chunk.data.length == 0 || isClose && chunk.data.length < 2000) {
this.TransferUpload(number, null, 0, null, isClose);
return;
}
//back-pressure handling for the real-time encoding queue
if (!isClose) {
if (this.realTimeSendTryEncBusy >= 2) {
console.log("encoding queue blocked, dropped one frame", 1);
return;
}
}
this.realTimeSendTryEncBusy++;
//transcode to mp3/wav in real time via mock()
var encStartTime = Date.now();
var recMock = Recorder({
type: this.realTimeSendTryType
, sampleRate: this.SampleRate // sample rate
, bitRate: this.testBitRate // bit rate
});
recMock.mock(chunk.data, chunk.sampleRate);
recMock.stop(function (blob, duration) {
that.realTimeSendTryEncBusy && (that.realTimeSendTryEncBusy--);
blob.encTime = Date.now() - encStartTime;
//once transcoded, push it into the upload pipeline
that.TransferUpload(number, blob, duration, recMock, isClose);
}, function (msg) {
that.realTimeSendTryEncBusy && (that.realTimeSendTryEncBusy--);
//transcoding error (not expected in practice)
console.log("unexpected error: " + msg, 1);
});
},
recordClose: function () {
var that = this;
try {
this.rec.close(function () {
that.isCloseRecorder = true; // `this` inside the callback would not be SoundRecognizer
});
this.RealTimeSendTry(this.rec, true);// final flush
} catch (ex) {
// recordClose();
}
},
recordEnd: function () {
var that = this;
try {
this.rec.stop(function (blob, time) {
that.recordClose();
}, function (s) {
that.recordClose();
});
} catch (ex) {
}
},
initRecorder: function () {
var that = this;
var rec = Recorder({
type: that.soundType
, bitRate: that.testBitRate
, sampleRate: that.SampleRate
, onProcess: function (buffers, level, time, sampleRate) {
that.wave.input(buffers[buffers.length - 1], level, sampleRate);
that.RealTimeSendTry(rec, false);// push into real-time processing; buffers and bufferSampleRate are unused here because they are identical to rec.buffers
}
});
rec.open(function () {
that.wave = Recorder.FrequencyHistogramView({
elem: that.recwaveElm, lineCount: 90
, position: 0
, minHeight: 1
, stripeEnable: false
});
rec.start();
that.isCloseRecorder = false;
that.RealTimeSendTryReset(that.soundType);// reset
});
this.rec = rec;
},
TransferProcess: function (number, blobOrNull, duration, blobRec, isClose) {
}
}
\ No newline at end of file
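Stripped of the Recorder-specific calls, RealTimeSendTry above is interval-gated buffering with a bounded encoder queue: chunks accumulate, a flush happens at most once per SendInterval (300 ms), processed buffers are released so long recordings fit in memory, and a frame is dropped when the encoder backlog reaches two. A generic Python illustration of that control flow, not part of the demo:

```python
# Generic illustration of the interval-gated send loop (not demo code).
import time

SEND_INTERVAL = 0.3  # seconds; mirrors SendInterval = 300 ms above
MAX_BUSY = 2         # mirrors the realTimeSendTryEncBusy >= 2 drop rule

buffers = []         # raw chunks pushed by the recorder callback
last_send = 0.0
enc_busy = 0


def upload(payload, is_close):
    # Stand-in for the TransferUpload callback in the JS above.
    print(f"send {len(payload)} bytes, close={is_close}")


def on_audio(chunk, is_close=False):
    """Recorder callback: accumulate audio, flush at most every SEND_INTERVAL."""
    global last_send, enc_busy
    buffers.append(chunk)
    now = time.monotonic()
    if not is_close and now - last_send < SEND_INTERVAL:
        return  # throttle: wait until the buffer spans the send interval
    last_send = now
    payload = b"".join(buffers)
    buffers.clear()  # release processed data so long recordings fit in memory
    if not is_close and enc_busy >= MAX_BUSY:
        print("encoder backlog, frame dropped")
        return
    enc_busy += 1
    try:
        upload(payload, is_close)
    finally:
        enc_busy -= 1


if __name__ == '__main__':
    for _ in range(10):
        on_audio(b'\x00' * 3200)  # ~0.1 s of 16 kHz 16-bit mono silence
        time.sleep(0.05)
    on_audio(b'', is_close=True)  # final flush
```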
/*! jQuery v3.2.1 | (c) JS Foundation and other contributors | jquery.org/license */
!function(a,b){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=a.document?b(a,!0):function(a){if(!a.document)throw new Error("jQuery requires a window with a document");return b(a)}:b(a)}("undefined"!=typeof window?window:this,function(a,b){"use strict";var c=[],d=a.document,e=Object.getPrototypeOf,f=c.slice,g=c.concat,h=c.push,i=c.indexOf,j={},k=j.toString,l=j.hasOwnProperty,m=l.toString,n=m.call(Object),o={};function p(a,b){b=b||d;var c=b.createElement("script");c.text=a,b.head.appendChild(c).parentNode.removeChild(c)}var q="3.2.1",r=function(a,b){return new r.fn.init(a,b)},s=/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g,t=/^-ms-/,u=/-([a-z])/g,v=function(a,b){return b.toUpperCase()};r.fn=r.prototype={jquery:q,constructor:r,length:0,toArray:function(){return f.call(this)},get:function(a){return null==a?f.call(this):a<0?this[a+this.length]:this[a]},pushStack:function(a){var b=r.merge(this.constructor(),a);return b.prevObject=this,b},each:function(a){return r.each(this,a)},map:function(a){return this.pushStack(r.map(this,function(b,c){return a.call(b,c,b)}))},slice:function(){return this.pushStack(f.apply(this,arguments))},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},eq:function(a){var b=this.length,c=+a+(a<0?b:0);return this.pushStack(c>=0&&c<b?[this[c]]:[])},end:function(){return this.prevObject||this.constructor()},push:h,sort:c.sort,splice:c.splice},r.extend=r.fn.extend=function(){var a,b,c,d,e,f,g=arguments[0]||{},h=1,i=arguments.length,j=!1;for("boolean"==typeof g&&(j=g,g=arguments[h]||{},h++),"object"==typeof g||r.isFunction(g)||(g={}),h===i&&(g=this,h--);h<i;h++)if(null!=(a=arguments[h]))for(b in a)c=g[b],d=a[b],g!==d&&(j&&d&&(r.isPlainObject(d)||(e=Array.isArray(d)))?(e?(e=!1,f=c&&Array.isArray(c)?c:[]):f=c&&r.isPlainObject(c)?c:{},g[b]=r.extend(j,f,d)):void 0!==d&&(g[b]=d));return g},r.extend({expando:"jQuery"+(q+Math.random()).replace(/\D/g,""),isReady:!0,error:function(a){throw new Error(a)},noop:function(){},isFunction:function(a){return"function"===r.type(a)},isWindow:function(a){return null!=a&&a===a.window},isNumeric:function(a){var b=r.type(a);return("number"===b||"string"===b)&&!isNaN(a-parseFloat(a))},isPlainObject:function(a){var b,c;return!(!a||"[object Object]"!==k.call(a))&&(!(b=e(a))||(c=l.call(b,"constructor")&&b.constructor,"function"==typeof c&&m.call(c)===n))},isEmptyObject:function(a){var b;for(b in a)return!1;return!0},type:function(a){return null==a?a+"":"object"==typeof a||"function"==typeof a?j[k.call(a)]||"object":typeof a},globalEval:function(a){p(a)},camelCase:function(a){return a.replace(t,"ms-").replace(u,v)},each:function(a,b){var c,d=0;if(w(a)){for(c=a.length;d<c;d++)if(b.call(a[d],d,a[d])===!1)break}else for(d in a)if(b.call(a[d],d,a[d])===!1)break;return a},trim:function(a){return null==a?"":(a+"").replace(s,"")},makeArray:function(a,b){var c=b||[];return null!=a&&(w(Object(a))?r.merge(c,"string"==typeof a?[a]:a):h.call(c,a)),c},inArray:function(a,b,c){return null==b?-1:i.call(b,a,c)},merge:function(a,b){for(var c=+b.length,d=0,e=a.length;d<c;d++)a[e++]=b[d];return a.length=e,a},grep:function(a,b,c){for(var d,e=[],f=0,g=a.length,h=!c;f<g;f++)d=!b(a[f],f),d!==h&&e.push(a[f]);return e},map:function(a,b,c){var d,e,f=0,h=[];if(w(a))for(d=a.length;f<d;f++)e=b(a[f],f,c),null!=e&&h.push(e);else for(f in a)e=b(a[f],f,c),null!=e&&h.push(e);return g.apply([],h)},guid:1,proxy:function(a,b){var c,d,e;if("string"==typeof b&&(c=a[b],b=a,a=c),r.isFunction(a))return d=f.call(arguments,2),e=function(){return 
a.apply(b||this,d.concat(f.call(arguments)))},e.guid=a.guid=a.guid||r.guid++,e},now:Date.now,support:o}),"function"==typeof Symbol&&(r.fn[Symbol.iterator]=c[Symbol.iterator]),r.each("Boolean Number String Function Array Date RegExp Object Error Symbol".split(" "),function(a,b){j["[object "+b+"]"]=b.toLowerCase()});function w(a){var b=!!a&&"length"in a&&a.length,c=r.type(a);return"function"!==c&&!r.isWindow(a)&&("array"===c||0===b||"number"==typeof b&&b>0&&b-1 in a)}var x=function(a){var b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u="sizzle"+1*new Date,v=a.document,w=0,x=0,y=ha(),z=ha(),A=ha(),B=function(a,b){return a===b&&(l=!0),0},C={}.hasOwnProperty,D=[],E=D.pop,F=D.push,G=D.push,H=D.slice,I=function(a,b){for(var c=0,d=a.length;c<d;c++)if(a[c]===b)return c;return-1},J="checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped",K="[\\x20\\t\\r\\n\\f]",L="(?:\\\\.|[\\w-]|[^\0-\\xa0])+",M="\\["+K+"*("+L+")(?:"+K+"*([*^$|!~]?=)"+K+"*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|("+L+"))|)"+K+"*\\]",N=":("+L+")(?:\\((('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|((?:\\\\.|[^\\\\()[\\]]|"+M+")*)|.*)\\)|)",O=new RegExp(K+"+","g"),P=new RegExp("^"+K+"+|((?:^|[^\\\\])(?:\\\\.)*)"+K+"+$","g"),Q=new RegExp("^"+K+"*,"+K+"*"),R=new RegExp("^"+K+"*([>+~]|"+K+")"+K+"*"),S=new RegExp("="+K+"*([^\\]'\"]*?)"+K+"*\\]","g"),T=new RegExp(N),U=new RegExp("^"+L+"$"),V={ID:new RegExp("^#("+L+")"),CLASS:new RegExp("^\\.("+L+")"),TAG:new RegExp("^("+L+"|[*])"),ATTR:new RegExp("^"+M),PSEUDO:new RegExp("^"+N),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+K+"*(even|odd|(([+-]|)(\\d*)n|)"+K+"*(?:([+-]|)"+K+"*(\\d+)|))"+K+"*\\)|)","i"),bool:new RegExp("^(?:"+J+")$","i"),needsContext:new RegExp("^"+K+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+K+"*((?:-\\d)?\\d*)"+K+"*\\)|)(?=[^-]|$)","i")},W=/^(?:input|select|textarea|button)$/i,X=/^h\d$/i,Y=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,$=/[+~]/,_=new RegExp("\\\\([\\da-f]{1,6}"+K+"?|("+K+")|.)","ig"),aa=function(a,b,c){var d="0x"+b-65536;return d!==d||c?b:d<0?String.fromCharCode(d+65536):String.fromCharCode(d>>10|55296,1023&d|56320)},ba=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ca=function(a,b){return b?"\0"===a?"\ufffd":a.slice(0,-1)+"\\"+a.charCodeAt(a.length-1).toString(16)+" ":"\\"+a},da=function(){m()},ea=ta(function(a){return a.disabled===!0&&("form"in a||"label"in a)},{dir:"parentNode",next:"legend"});try{G.apply(D=H.call(v.childNodes),v.childNodes),D[v.childNodes.length].nodeType}catch(fa){G={apply:D.length?function(a,b){F.apply(a,H.call(b))}:function(a,b){var c=a.length,d=0;while(a[c++]=b[d++]);a.length=c-1}}}function ga(a,b,d,e){var f,h,j,k,l,o,r,s=b&&b.ownerDocument,w=b?b.nodeType:9;if(d=d||[],"string"!=typeof a||!a||1!==w&&9!==w&&11!==w)return d;if(!e&&((b?b.ownerDocument||b:v)!==n&&m(b),b=b||n,p)){if(11!==w&&(l=Z.exec(a)))if(f=l[1]){if(9===w){if(!(j=b.getElementById(f)))return d;if(j.id===f)return d.push(j),d}else if(s&&(j=s.getElementById(f))&&t(b,j)&&j.id===f)return d.push(j),d}else{if(l[2])return G.apply(d,b.getElementsByTagName(a)),d;if((f=l[3])&&c.getElementsByClassName&&b.getElementsByClassName)return G.apply(d,b.getElementsByClassName(f)),d}if(c.qsa&&!A[a+" "]&&(!q||!q.test(a))){if(1!==w)s=b,r=a;else if("object"!==b.nodeName.toLowerCase()){(k=b.getAttribute("id"))?k=k.replace(ba,ca):b.setAttribute("id",k=u),o=g(a),h=o.length;while(h--)o[h]="#"+k+" 
"+sa(o[h]);r=o.join(","),s=$.test(a)&&qa(b.parentNode)||b}if(r)try{return G.apply(d,s.querySelectorAll(r)),d}catch(x){}finally{k===u&&b.removeAttribute("id")}}}return i(a.replace(P,"$1"),b,d,e)}function ha(){var a=[];function b(c,e){return a.push(c+" ")>d.cacheLength&&delete b[a.shift()],b[c+" "]=e}return b}function ia(a){return a[u]=!0,a}function ja(a){var b=n.createElement("fieldset");try{return!!a(b)}catch(c){return!1}finally{b.parentNode&&b.parentNode.removeChild(b),b=null}}function ka(a,b){var c=a.split("|"),e=c.length;while(e--)d.attrHandle[c[e]]=b}function la(a,b){var c=b&&a,d=c&&1===a.nodeType&&1===b.nodeType&&a.sourceIndex-b.sourceIndex;if(d)return d;if(c)while(c=c.nextSibling)if(c===b)return-1;return a?1:-1}function ma(a){return function(b){var c=b.nodeName.toLowerCase();return"input"===c&&b.type===a}}function na(a){return function(b){var c=b.nodeName.toLowerCase();return("input"===c||"button"===c)&&b.type===a}}function oa(a){return function(b){return"form"in b?b.parentNode&&b.disabled===!1?"label"in b?"label"in b.parentNode?b.parentNode.disabled===a:b.disabled===a:b.isDisabled===a||b.isDisabled!==!a&&ea(b)===a:b.disabled===a:"label"in b&&b.disabled===a}}function pa(a){return ia(function(b){return b=+b,ia(function(c,d){var e,f=a([],c.length,b),g=f.length;while(g--)c[e=f[g]]&&(c[e]=!(d[e]=c[e]))})})}function qa(a){return a&&"undefined"!=typeof a.getElementsByTagName&&a}c=ga.support={},f=ga.isXML=function(a){var b=a&&(a.ownerDocument||a).documentElement;return!!b&&"HTML"!==b.nodeName},m=ga.setDocument=function(a){var b,e,g=a?a.ownerDocument||a:v;return g!==n&&9===g.nodeType&&g.documentElement?(n=g,o=n.documentElement,p=!f(n),v!==n&&(e=n.defaultView)&&e.top!==e&&(e.addEventListener?e.addEventListener("unload",da,!1):e.attachEvent&&e.attachEvent("onunload",da)),c.attributes=ja(function(a){return a.className="i",!a.getAttribute("className")}),c.getElementsByTagName=ja(function(a){return a.appendChild(n.createComment("")),!a.getElementsByTagName("*").length}),c.getElementsByClassName=Y.test(n.getElementsByClassName),c.getById=ja(function(a){return o.appendChild(a).id=u,!n.getElementsByName||!n.getElementsByName(u).length}),c.getById?(d.filter.ID=function(a){var b=a.replace(_,aa);return function(a){return a.getAttribute("id")===b}},d.find.ID=function(a,b){if("undefined"!=typeof b.getElementById&&p){var c=b.getElementById(a);return c?[c]:[]}}):(d.filter.ID=function(a){var b=a.replace(_,aa);return function(a){var c="undefined"!=typeof a.getAttributeNode&&a.getAttributeNode("id");return c&&c.value===b}},d.find.ID=function(a,b){if("undefined"!=typeof b.getElementById&&p){var c,d,e,f=b.getElementById(a);if(f){if(c=f.getAttributeNode("id"),c&&c.value===a)return[f];e=b.getElementsByName(a),d=0;while(f=e[d++])if(c=f.getAttributeNode("id"),c&&c.value===a)return[f]}return[]}}),d.find.TAG=c.getElementsByTagName?function(a,b){return"undefined"!=typeof b.getElementsByTagName?b.getElementsByTagName(a):c.qsa?b.querySelectorAll(a):void 0}:function(a,b){var c,d=[],e=0,f=b.getElementsByTagName(a);if("*"===a){while(c=f[e++])1===c.nodeType&&d.push(c);return d}return f},d.find.CLASS=c.getElementsByClassName&&function(a,b){if("undefined"!=typeof b.getElementsByClassName&&p)return b.getElementsByClassName(a)},r=[],q=[],(c.qsa=Y.test(n.querySelectorAll))&&(ja(function(a){o.appendChild(a).innerHTML="<a id='"+u+"'></a><select id='"+u+"-\r\\' msallowcapture=''><option 
selected=''></option></select>",a.querySelectorAll("[msallowcapture^='']").length&&q.push("[*^$]="+K+"*(?:''|\"\")"),a.querySelectorAll("[selected]").length||q.push("\\["+K+"*(?:value|"+J+")"),a.querySelectorAll("[id~="+u+"-]").length||q.push("~="),a.querySelectorAll(":checked").length||q.push(":checked"),a.querySelectorAll("a#"+u+"+*").length||q.push(".#.+[+~]")}),ja(function(a){a.innerHTML="<a href='' disabled='disabled'></a><select disabled='disabled'><option/></select>";var b=n.createElement("input");b.setAttribute("type","hidden"),a.appendChild(b).setAttribute("name","D"),a.querySelectorAll("[name=d]").length&&q.push("name"+K+"*[*^$|!~]?="),2!==a.querySelectorAll(":enabled").length&&q.push(":enabled",":disabled"),o.appendChild(a).disabled=!0,2!==a.querySelectorAll(":disabled").length&&q.push(":enabled",":disabled"),a.querySelectorAll("*,:x"),q.push(",.*:")})),(c.matchesSelector=Y.test(s=o.matches||o.webkitMatchesSelector||o.mozMatchesSelector||o.oMatchesSelector||o.msMatchesSelector))&&ja(function(a){c.disconnectedMatch=s.call(a,"*"),s.call(a,"[s!='']:x"),r.push("!=",N)}),q=q.length&&new RegExp(q.join("|")),r=r.length&&new RegExp(r.join("|")),b=Y.test(o.compareDocumentPosition),t=b||Y.test(o.contains)?function(a,b){var c=9===a.nodeType?a.documentElement:a,d=b&&b.parentNode;return a===d||!(!d||1!==d.nodeType||!(c.contains?c.contains(d):a.compareDocumentPosition&&16&a.compareDocumentPosition(d)))}:function(a,b){if(b)while(b=b.parentNode)if(b===a)return!0;return!1},B=b?function(a,b){if(a===b)return l=!0,0;var d=!a.compareDocumentPosition-!b.compareDocumentPosition;return d?d:(d=(a.ownerDocument||a)===(b.ownerDocument||b)?a.compareDocumentPosition(b):1,1&d||!c.sortDetached&&b.compareDocumentPosition(a)===d?a===n||a.ownerDocument===v&&t(v,a)?-1:b===n||b.ownerDocument===v&&t(v,b)?1:k?I(k,a)-I(k,b):0:4&d?-1:1)}:function(a,b){if(a===b)return l=!0,0;var c,d=0,e=a.parentNode,f=b.parentNode,g=[a],h=[b];if(!e||!f)return a===n?-1:b===n?1:e?-1:f?1:k?I(k,a)-I(k,b):0;if(e===f)return la(a,b);c=a;while(c=c.parentNode)g.unshift(c);c=b;while(c=c.parentNode)h.unshift(c);while(g[d]===h[d])d++;return d?la(g[d],h[d]):g[d]===v?-1:h[d]===v?1:0},n):n},ga.matches=function(a,b){return ga(a,null,null,b)},ga.matchesSelector=function(a,b){if((a.ownerDocument||a)!==n&&m(a),b=b.replace(S,"='$1']"),c.matchesSelector&&p&&!A[b+" "]&&(!r||!r.test(b))&&(!q||!q.test(b)))try{var d=s.call(a,b);if(d||c.disconnectedMatch||a.document&&11!==a.document.nodeType)return d}catch(e){}return ga(b,n,null,[a]).length>0},ga.contains=function(a,b){return(a.ownerDocument||a)!==n&&m(a),t(a,b)},ga.attr=function(a,b){(a.ownerDocument||a)!==n&&m(a);var e=d.attrHandle[b.toLowerCase()],f=e&&C.call(d.attrHandle,b.toLowerCase())?e(a,b,!p):void 0;return void 0!==f?f:c.attributes||!p?a.getAttribute(b):(f=a.getAttributeNode(b))&&f.specified?f.value:null},ga.escape=function(a){return(a+"").replace(ba,ca)},ga.error=function(a){throw new Error("Syntax error, unrecognized expression: "+a)},ga.uniqueSort=function(a){var b,d=[],e=0,f=0;if(l=!c.detectDuplicates,k=!c.sortStable&&a.slice(0),a.sort(B),l){while(b=a[f++])b===a[f]&&(e=d.push(f));while(e--)a.splice(d[e],1)}return k=null,a},e=ga.getText=function(a){var b,c="",d=0,f=a.nodeType;if(f){if(1===f||9===f||11===f){if("string"==typeof a.textContent)return a.textContent;for(a=a.firstChild;a;a=a.nextSibling)c+=e(a)}else if(3===f||4===f)return a.nodeValue}else while(b=a[d++])c+=e(b);return c},d=ga.selectors={cacheLength:50,createPseudo:ia,match:V,attrHandle:{},find:{},relative:{">":{dir:"parentNode",first:!0}," 
":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(a){return a[1]=a[1].replace(_,aa),a[3]=(a[3]||a[4]||a[5]||"").replace(_,aa),"~="===a[2]&&(a[3]=" "+a[3]+" "),a.slice(0,4)},CHILD:function(a){return a[1]=a[1].toLowerCase(),"nth"===a[1].slice(0,3)?(a[3]||ga.error(a[0]),a[4]=+(a[4]?a[5]+(a[6]||1):2*("even"===a[3]||"odd"===a[3])),a[5]=+(a[7]+a[8]||"odd"===a[3])):a[3]&&ga.error(a[0]),a},PSEUDO:function(a){var b,c=!a[6]&&a[2];return V.CHILD.test(a[0])?null:(a[3]?a[2]=a[4]||a[5]||"":c&&T.test(c)&&(b=g(c,!0))&&(b=c.indexOf(")",c.length-b)-c.length)&&(a[0]=a[0].slice(0,b),a[2]=c.slice(0,b)),a.slice(0,3))}},filter:{TAG:function(a){var b=a.replace(_,aa).toLowerCase();return"*"===a?function(){return!0}:function(a){return a.nodeName&&a.nodeName.toLowerCase()===b}},CLASS:function(a){var b=y[a+" "];return b||(b=new RegExp("(^|"+K+")"+a+"("+K+"|$)"))&&y(a,function(a){return b.test("string"==typeof a.className&&a.className||"undefined"!=typeof a.getAttribute&&a.getAttribute("class")||"")})},ATTR:function(a,b,c){return function(d){var e=ga.attr(d,a);return null==e?"!="===b:!b||(e+="","="===b?e===c:"!="===b?e!==c:"^="===b?c&&0===e.indexOf(c):"*="===b?c&&e.indexOf(c)>-1:"$="===b?c&&e.slice(-c.length)===c:"~="===b?(" "+e.replace(O," ")+" ").indexOf(c)>-1:"|="===b&&(e===c||e.slice(0,c.length+1)===c+"-"))}},CHILD:function(a,b,c,d,e){var f="nth"!==a.slice(0,3),g="last"!==a.slice(-4),h="of-type"===b;return 1===d&&0===e?function(a){return!!a.parentNode}:function(b,c,i){var j,k,l,m,n,o,p=f!==g?"nextSibling":"previousSibling",q=b.parentNode,r=h&&b.nodeName.toLowerCase(),s=!i&&!h,t=!1;if(q){if(f){while(p){m=b;while(m=m[p])if(h?m.nodeName.toLowerCase()===r:1===m.nodeType)return!1;o=p="only"===a&&!o&&"nextSibling"}return!0}if(o=[g?q.firstChild:q.lastChild],g&&s){m=q,l=m[u]||(m[u]={}),k=l[m.uniqueID]||(l[m.uniqueID]={}),j=k[a]||[],n=j[0]===w&&j[1],t=n&&j[2],m=n&&q.childNodes[n];while(m=++n&&m&&m[p]||(t=n=0)||o.pop())if(1===m.nodeType&&++t&&m===b){k[a]=[w,n,t];break}}else if(s&&(m=b,l=m[u]||(m[u]={}),k=l[m.uniqueID]||(l[m.uniqueID]={}),j=k[a]||[],n=j[0]===w&&j[1],t=n),t===!1)while(m=++n&&m&&m[p]||(t=n=0)||o.pop())if((h?m.nodeName.toLowerCase()===r:1===m.nodeType)&&++t&&(s&&(l=m[u]||(m[u]={}),k=l[m.uniqueID]||(l[m.uniqueID]={}),k[a]=[w,t]),m===b))break;return t-=e,t===d||t%d===0&&t/d>=0}}},PSEUDO:function(a,b){var c,e=d.pseudos[a]||d.setFilters[a.toLowerCase()]||ga.error("unsupported pseudo: "+a);return e[u]?e(b):e.length>1?(c=[a,a,"",b],d.setFilters.hasOwnProperty(a.toLowerCase())?ia(function(a,c){var d,f=e(a,b),g=f.length;while(g--)d=I(a,f[g]),a[d]=!(c[d]=f[g])}):function(a){return e(a,0,c)}):e}},pseudos:{not:ia(function(a){var b=[],c=[],d=h(a.replace(P,"$1"));return d[u]?ia(function(a,b,c,e){var f,g=d(a,null,e,[]),h=a.length;while(h--)(f=g[h])&&(a[h]=!(b[h]=f))}):function(a,e,f){return b[0]=a,d(b,null,f,c),b[0]=null,!c.pop()}}),has:ia(function(a){return function(b){return ga(a,b).length>0}}),contains:ia(function(a){return a=a.replace(_,aa),function(b){return(b.textContent||b.innerText||e(b)).indexOf(a)>-1}}),lang:ia(function(a){return U.test(a||"")||ga.error("unsupported lang: "+a),a=a.replace(_,aa).toLowerCase(),function(b){var c;do if(c=p?b.lang:b.getAttribute("xml:lang")||b.getAttribute("lang"))return c=c.toLowerCase(),c===a||0===c.indexOf(a+"-");while((b=b.parentNode)&&1===b.nodeType);return!1}}),target:function(b){var c=a.location&&a.location.hash;return c&&c.slice(1)===b.id},root:function(a){return a===o},focus:function(a){return 
a===n.activeElement&&(!n.hasFocus||n.hasFocus())&&!!(a.type||a.href||~a.tabIndex)},enabled:oa(!1),disabled:oa(!0),checked:function(a){var b=a.nodeName.toLowerCase();return"input"===b&&!!a.checked||"option"===b&&!!a.selected},selected:function(a){return a.parentNode&&a.parentNode.selectedIndex,a.selected===!0},empty:function(a){for(a=a.firstChild;a;a=a.nextSibling)if(a.nodeType<6)return!1;return!0},parent:function(a){return!d.pseudos.empty(a)},header:function(a){return X.test(a.nodeName)},input:function(a){return W.test(a.nodeName)},button:function(a){var b=a.nodeName.toLowerCase();return"input"===b&&"button"===a.type||"button"===b},text:function(a){var b;return"input"===a.nodeName.toLowerCase()&&"text"===a.type&&(null==(b=a.getAttribute("type"))||"text"===b.toLowerCase())},first:pa(function(){return[0]}),last:pa(function(a,b){return[b-1]}),eq:pa(function(a,b,c){return[c<0?c+b:c]}),even:pa(function(a,b){for(var c=0;c<b;c+=2)a.push(c);return a}),odd:pa(function(a,b){for(var c=1;c<b;c+=2)a.push(c);return a}),lt:pa(function(a,b,c){for(var d=c<0?c+b:c;--d>=0;)a.push(d);return a}),gt:pa(function(a,b,c){for(var d=c<0?c+b:c;++d<b;)a.push(d);return a})}},d.pseudos.nth=d.pseudos.eq;for(b in{radio:!0,checkbox:!0,file:!0,password:!0,image:!0})d.pseudos[b]=ma(b);for(b in{submit:!0,reset:!0})d.pseudos[b]=na(b);function ra(){}ra.prototype=d.filters=d.pseudos,d.setFilters=new ra,g=ga.tokenize=function(a,b){var c,e,f,g,h,i,j,k=z[a+" "];if(k)return b?0:k.slice(0);h=a,i=[],j=d.preFilter;while(h){c&&!(e=Q.exec(h))||(e&&(h=h.slice(e[0].length)||h),i.push(f=[])),c=!1,(e=R.exec(h))&&(c=e.shift(),f.push({value:c,type:e[0].replace(P," ")}),h=h.slice(c.length));for(g in d.filter)!(e=V[g].exec(h))||j[g]&&!(e=j[g](e))||(c=e.shift(),f.push({value:c,type:g,matches:e}),h=h.slice(c.length));if(!c)break}return b?h.length:h?ga.error(a):z(a,i).slice(0)};function sa(a){for(var b=0,c=a.length,d="";b<c;b++)d+=a[b].value;return d}function ta(a,b,c){var d=b.dir,e=b.next,f=e||d,g=c&&"parentNode"===f,h=x++;return b.first?function(b,c,e){while(b=b[d])if(1===b.nodeType||g)return a(b,c,e);return!1}:function(b,c,i){var j,k,l,m=[w,h];if(i){while(b=b[d])if((1===b.nodeType||g)&&a(b,c,i))return!0}else while(b=b[d])if(1===b.nodeType||g)if(l=b[u]||(b[u]={}),k=l[b.uniqueID]||(l[b.uniqueID]={}),e&&e===b.nodeName.toLowerCase())b=b[d]||b;else{if((j=k[f])&&j[0]===w&&j[1]===h)return m[2]=j[2];if(k[f]=m,m[2]=a(b,c,i))return!0}return!1}}function ua(a){return a.length>1?function(b,c,d){var e=a.length;while(e--)if(!a[e](b,c,d))return!1;return!0}:a[0]}function va(a,b,c){for(var d=0,e=b.length;d<e;d++)ga(a,b[d],c);return c}function wa(a,b,c,d,e){for(var f,g=[],h=0,i=a.length,j=null!=b;h<i;h++)(f=a[h])&&(c&&!c(f,d,e)||(g.push(f),j&&b.push(h)));return g}function xa(a,b,c,d,e,f){return d&&!d[u]&&(d=xa(d)),e&&!e[u]&&(e=xa(e,f)),ia(function(f,g,h,i){var j,k,l,m=[],n=[],o=g.length,p=f||va(b||"*",h.nodeType?[h]:h,[]),q=!a||!f&&b?p:wa(p,m,a,h,i),r=c?e||(f?a:o||d)?[]:g:q;if(c&&c(q,r,h,i),d){j=wa(r,n),d(j,[],h,i),k=j.length;while(k--)(l=j[k])&&(r[n[k]]=!(q[n[k]]=l))}if(f){if(e||a){if(e){j=[],k=r.length;while(k--)(l=r[k])&&j.push(q[k]=l);e(null,r=[],j,i)}k=r.length;while(k--)(l=r[k])&&(j=e?I(f,l):m[k])>-1&&(f[j]=!(g[j]=l))}}else r=wa(r===g?r.splice(o,r.length):r),e?e(null,g,r,i):G.apply(g,r)})}function ya(a){for(var b,c,e,f=a.length,g=d.relative[a[0].type],h=g||d.relative[" "],i=g?1:0,k=ta(function(a){return a===b},h,!0),l=ta(function(a){return I(b,a)>-1},h,!0),m=[function(a,c,d){var e=!g&&(d||c!==j)||((b=c).nodeType?k(a,c,d):l(a,c,d));return 
b=null,e}];i<f;i++)if(c=d.relative[a[i].type])m=[ta(ua(m),c)];else{if(c=d.filter[a[i].type].apply(null,a[i].matches),c[u]){for(e=++i;e<f;e++)if(d.relative[a[e].type])break;return xa(i>1&&ua(m),i>1&&sa(a.slice(0,i-1).concat({value:" "===a[i-2].type?"*":""})).replace(P,"$1"),c,i<e&&ya(a.slice(i,e)),e<f&&ya(a=a.slice(e)),e<f&&sa(a))}m.push(c)}return ua(m)}function za(a,b){var c=b.length>0,e=a.length>0,f=function(f,g,h,i,k){var l,o,q,r=0,s="0",t=f&&[],u=[],v=j,x=f||e&&d.find.TAG("*",k),y=w+=null==v?1:Math.random()||.1,z=x.length;for(k&&(j=g===n||g||k);s!==z&&null!=(l=x[s]);s++){if(e&&l){o=0,g||l.ownerDocument===n||(m(l),h=!p);while(q=a[o++])if(q(l,g||n,h)){i.push(l);break}k&&(w=y)}c&&((l=!q&&l)&&r--,f&&t.push(l))}if(r+=s,c&&s!==r){o=0;while(q=b[o++])q(t,u,g,h);if(f){if(r>0)while(s--)t[s]||u[s]||(u[s]=E.call(i));u=wa(u)}G.apply(i,u),k&&!f&&u.length>0&&r+b.length>1&&ga.uniqueSort(i)}return k&&(w=y,j=v),t};return c?ia(f):f}return h=ga.compile=function(a,b){var c,d=[],e=[],f=A[a+" "];if(!f){b||(b=g(a)),c=b.length;while(c--)f=ya(b[c]),f[u]?d.push(f):e.push(f);f=A(a,za(e,d)),f.selector=a}return f},i=ga.select=function(a,b,c,e){var f,i,j,k,l,m="function"==typeof a&&a,n=!e&&g(a=m.selector||a);if(c=c||[],1===n.length){if(i=n[0]=n[0].slice(0),i.length>2&&"ID"===(j=i[0]).type&&9===b.nodeType&&p&&d.relative[i[1].type]){if(b=(d.find.ID(j.matches[0].replace(_,aa),b)||[])[0],!b)return c;m&&(b=b.parentNode),a=a.slice(i.shift().value.length)}f=V.needsContext.test(a)?0:i.length;while(f--){if(j=i[f],d.relative[k=j.type])break;if((l=d.find[k])&&(e=l(j.matches[0].replace(_,aa),$.test(i[0].type)&&qa(b.parentNode)||b))){if(i.splice(f,1),a=e.length&&sa(i),!a)return G.apply(c,e),c;break}}}return(m||h(a,n))(e,b,!p,c,!b||$.test(a)&&qa(b.parentNode)||b),c},c.sortStable=u.split("").sort(B).join("")===u,c.detectDuplicates=!!l,m(),c.sortDetached=ja(function(a){return 1&a.compareDocumentPosition(n.createElement("fieldset"))}),ja(function(a){return a.innerHTML="<a href='#'></a>","#"===a.firstChild.getAttribute("href")})||ka("type|href|height|width",function(a,b,c){if(!c)return a.getAttribute(b,"type"===b.toLowerCase()?1:2)}),c.attributes&&ja(function(a){return a.innerHTML="<input/>",a.firstChild.setAttribute("value",""),""===a.firstChild.getAttribute("value")})||ka("value",function(a,b,c){if(!c&&"input"===a.nodeName.toLowerCase())return a.defaultValue}),ja(function(a){return null==a.getAttribute("disabled")})||ka(J,function(a,b,c){var d;if(!c)return a[b]===!0?b.toLowerCase():(d=a.getAttributeNode(b))&&d.specified?d.value:null}),ga}(a);r.find=x,r.expr=x.selectors,r.expr[":"]=r.expr.pseudos,r.uniqueSort=r.unique=x.uniqueSort,r.text=x.getText,r.isXMLDoc=x.isXML,r.contains=x.contains,r.escapeSelector=x.escape;var y=function(a,b,c){var d=[],e=void 0!==c;while((a=a[b])&&9!==a.nodeType)if(1===a.nodeType){if(e&&r(a).is(c))break;d.push(a)}return d},z=function(a,b){for(var c=[];a;a=a.nextSibling)1===a.nodeType&&a!==b&&c.push(a);return c},A=r.expr.match.needsContext;function B(a,b){return a.nodeName&&a.nodeName.toLowerCase()===b.toLowerCase()}var C=/^<([a-z][^\/\0>:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i,D=/^.[^:#\[\.,]*$/;function E(a,b,c){return r.isFunction(b)?r.grep(a,function(a,d){return!!b.call(a,d,a)!==c}):b.nodeType?r.grep(a,function(a){return a===b!==c}):"string"!=typeof b?r.grep(a,function(a){return i.call(b,a)>-1!==c}):D.test(b)?r.filter(b,a,c):(b=r.filter(b,a),r.grep(a,function(a){return i.call(b,a)>-1!==c&&1===a.nodeType}))}r.filter=function(a,b,c){var d=b[0];return 
c&&(a=":not("+a+")"),1===b.length&&1===d.nodeType?r.find.matchesSelector(d,a)?[d]:[]:r.find.matches(a,r.grep(b,function(a){return 1===a.nodeType}))},r.fn.extend({find:function(a){var b,c,d=this.length,e=this;if("string"!=typeof a)return this.pushStack(r(a).filter(function(){for(b=0;b<d;b++)if(r.contains(e[b],this))return!0}));for(c=this.pushStack([]),b=0;b<d;b++)r.find(a,e[b],c);return d>1?r.uniqueSort(c):c},filter:function(a){return this.pushStack(E(this,a||[],!1))},not:function(a){return this.pushStack(E(this,a||[],!0))},is:function(a){return!!E(this,"string"==typeof a&&A.test(a)?r(a):a||[],!1).length}});var F,G=/^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]+))$/,H=r.fn.init=function(a,b,c){var e,f;if(!a)return this;if(c=c||F,"string"==typeof a){if(e="<"===a[0]&&">"===a[a.length-1]&&a.length>=3?[null,a,null]:G.exec(a),!e||!e[1]&&b)return!b||b.jquery?(b||c).find(a):this.constructor(b).find(a);if(e[1]){if(b=b instanceof r?b[0]:b,r.merge(this,r.parseHTML(e[1],b&&b.nodeType?b.ownerDocument||b:d,!0)),C.test(e[1])&&r.isPlainObject(b))for(e in b)r.isFunction(this[e])?this[e](b[e]):this.attr(e,b[e]);return this}return f=d.getElementById(e[2]),f&&(this[0]=f,this.length=1),this}return a.nodeType?(this[0]=a,this.length=1,this):r.isFunction(a)?void 0!==c.ready?c.ready(a):a(r):r.makeArray(a,this)};H.prototype=r.fn,F=r(d);var I=/^(?:parents|prev(?:Until|All))/,J={children:!0,contents:!0,next:!0,prev:!0};r.fn.extend({has:function(a){var b=r(a,this),c=b.length;return this.filter(function(){for(var a=0;a<c;a++)if(r.contains(this,b[a]))return!0})},closest:function(a,b){var c,d=0,e=this.length,f=[],g="string"!=typeof a&&r(a);if(!A.test(a))for(;d<e;d++)for(c=this[d];c&&c!==b;c=c.parentNode)if(c.nodeType<11&&(g?g.index(c)>-1:1===c.nodeType&&r.find.matchesSelector(c,a))){f.push(c);break}return this.pushStack(f.length>1?r.uniqueSort(f):f)},index:function(a){return a?"string"==typeof a?i.call(r(a),this[0]):i.call(this,a.jquery?a[0]:a):this[0]&&this[0].parentNode?this.first().prevAll().length:-1},add:function(a,b){return this.pushStack(r.uniqueSort(r.merge(this.get(),r(a,b))))},addBack:function(a){return this.add(null==a?this.prevObject:this.prevObject.filter(a))}});function K(a,b){while((a=a[b])&&1!==a.nodeType);return a}r.each({parent:function(a){var b=a.parentNode;return b&&11!==b.nodeType?b:null},parents:function(a){return y(a,"parentNode")},parentsUntil:function(a,b,c){return y(a,"parentNode",c)},next:function(a){return K(a,"nextSibling")},prev:function(a){return K(a,"previousSibling")},nextAll:function(a){return y(a,"nextSibling")},prevAll:function(a){return y(a,"previousSibling")},nextUntil:function(a,b,c){return y(a,"nextSibling",c)},prevUntil:function(a,b,c){return y(a,"previousSibling",c)},siblings:function(a){return z((a.parentNode||{}).firstChild,a)},children:function(a){return z(a.firstChild)},contents:function(a){return B(a,"iframe")?a.contentDocument:(B(a,"template")&&(a=a.content||a),r.merge([],a.childNodes))}},function(a,b){r.fn[a]=function(c,d){var e=r.map(this,b,c);return"Until"!==a.slice(-5)&&(d=c),d&&"string"==typeof d&&(e=r.filter(d,e)),this.length>1&&(J[a]||r.uniqueSort(e),I.test(a)&&e.reverse()),this.pushStack(e)}});var L=/[^\x20\t\r\n\f]+/g;function M(a){var b={};return r.each(a.match(L)||[],function(a,c){b[c]=!0}),b}r.Callbacks=function(a){a="string"==typeof a?M(a):r.extend({},a);var 
b,c,d,e,f=[],g=[],h=-1,i=function(){for(e=e||a.once,d=b=!0;g.length;h=-1){c=g.shift();while(++h<f.length)f[h].apply(c[0],c[1])===!1&&a.stopOnFalse&&(h=f.length,c=!1)}a.memory||(c=!1),b=!1,e&&(f=c?[]:"")},j={add:function(){return f&&(c&&!b&&(h=f.length-1,g.push(c)),function d(b){r.each(b,function(b,c){r.isFunction(c)?a.unique&&j.has(c)||f.push(c):c&&c.length&&"string"!==r.type(c)&&d(c)})}(arguments),c&&!b&&i()),this},remove:function(){return r.each(arguments,function(a,b){var c;while((c=r.inArray(b,f,c))>-1)f.splice(c,1),c<=h&&h--}),this},has:function(a){return a?r.inArray(a,f)>-1:f.length>0},empty:function(){return f&&(f=[]),this},disable:function(){return e=g=[],f=c="",this},disabled:function(){return!f},lock:function(){return e=g=[],c||b||(f=c=""),this},locked:function(){return!!e},fireWith:function(a,c){return e||(c=c||[],c=[a,c.slice?c.slice():c],g.push(c),b||i()),this},fire:function(){return j.fireWith(this,arguments),this},fired:function(){return!!d}};return j};function N(a){return a}function O(a){throw a}function P(a,b,c,d){var e;try{a&&r.isFunction(e=a.promise)?e.call(a).done(b).fail(c):a&&r.isFunction(e=a.then)?e.call(a,b,c):b.apply(void 0,[a].slice(d))}catch(a){c.apply(void 0,[a])}}r.extend({Deferred:function(b){var c=[["notify","progress",r.Callbacks("memory"),r.Callbacks("memory"),2],["resolve","done",r.Callbacks("once memory"),r.Callbacks("once memory"),0,"resolved"],["reject","fail",r.Callbacks("once memory"),r.Callbacks("once memory"),1,"rejected"]],d="pending",e={state:function(){return d},always:function(){return f.done(arguments).fail(arguments),this},"catch":function(a){return e.then(null,a)},pipe:function(){var a=arguments;return r.Deferred(function(b){r.each(c,function(c,d){var e=r.isFunction(a[d[4]])&&a[d[4]];f[d[1]](function(){var a=e&&e.apply(this,arguments);a&&r.isFunction(a.promise)?a.promise().progress(b.notify).done(b.resolve).fail(b.reject):b[d[0]+"With"](this,e?[a]:arguments)})}),a=null}).promise()},then:function(b,d,e){var f=0;function g(b,c,d,e){return function(){var h=this,i=arguments,j=function(){var a,j;if(!(b<f)){if(a=d.apply(h,i),a===c.promise())throw new TypeError("Thenable self-resolution");j=a&&("object"==typeof a||"function"==typeof a)&&a.then,r.isFunction(j)?e?j.call(a,g(f,c,N,e),g(f,c,O,e)):(f++,j.call(a,g(f,c,N,e),g(f,c,O,e),g(f,c,N,c.notifyWith))):(d!==N&&(h=void 0,i=[a]),(e||c.resolveWith)(h,i))}},k=e?j:function(){try{j()}catch(a){r.Deferred.exceptionHook&&r.Deferred.exceptionHook(a,k.stackTrace),b+1>=f&&(d!==O&&(h=void 0,i=[a]),c.rejectWith(h,i))}};b?k():(r.Deferred.getStackHook&&(k.stackTrace=r.Deferred.getStackHook()),a.setTimeout(k))}}return r.Deferred(function(a){c[0][3].add(g(0,a,r.isFunction(e)?e:N,a.notifyWith)),c[1][3].add(g(0,a,r.isFunction(b)?b:N)),c[2][3].add(g(0,a,r.isFunction(d)?d:O))}).promise()},promise:function(a){return null!=a?r.extend(a,e):e}},f={};return r.each(c,function(a,b){var g=b[2],h=b[5];e[b[1]]=g.add,h&&g.add(function(){d=h},c[3-a][2].disable,c[0][2].lock),g.add(b[3].fire),f[b[0]]=function(){return f[b[0]+"With"](this===f?void 0:this,arguments),this},f[b[0]+"With"]=g.fireWith}),e.promise(f),b&&b.call(f,f),f},when:function(a){var b=arguments.length,c=b,d=Array(c),e=f.call(arguments),g=r.Deferred(),h=function(a){return function(c){d[a]=this,e[a]=arguments.length>1?f.call(arguments):c,--b||g.resolveWith(d,e)}};if(b<=1&&(P(a,g.done(h(c)).resolve,g.reject,!b),"pending"===g.state()||r.isFunction(e[c]&&e[c].then)))return g.then();while(c--)P(e[c],h(c),g.reject);return g.promise()}});var 
Q=/^(Eval|Internal|Range|Reference|Syntax|Type|URI)Error$/;r.Deferred.exceptionHook=function(b,c){a.console&&a.console.warn&&b&&Q.test(b.name)&&a.console.warn("jQuery.Deferred exception: "+b.message,b.stack,c)},r.readyException=function(b){a.setTimeout(function(){throw b})};var R=r.Deferred();r.fn.ready=function(a){return R.then(a)["catch"](function(a){r.readyException(a)}),this},r.extend({isReady:!1,readyWait:1,ready:function(a){(a===!0?--r.readyWait:r.isReady)||(r.isReady=!0,a!==!0&&--r.readyWait>0||R.resolveWith(d,[r]))}}),r.ready.then=R.then;function S(){d.removeEventListener("DOMContentLoaded",S),
a.removeEventListener("load",S),r.ready()}"complete"===d.readyState||"loading"!==d.readyState&&!d.documentElement.doScroll?a.setTimeout(r.ready):(d.addEventListener("DOMContentLoaded",S),a.addEventListener("load",S));var T=function(a,b,c,d,e,f,g){var h=0,i=a.length,j=null==c;if("object"===r.type(c)){e=!0;for(h in c)T(a,b,h,c[h],!0,f,g)}else if(void 0!==d&&(e=!0,r.isFunction(d)||(g=!0),j&&(g?(b.call(a,d),b=null):(j=b,b=function(a,b,c){return j.call(r(a),c)})),b))for(;h<i;h++)b(a[h],c,g?d:d.call(a[h],h,b(a[h],c)));return e?a:j?b.call(a):i?b(a[0],c):f},U=function(a){return 1===a.nodeType||9===a.nodeType||!+a.nodeType};function V(){this.expando=r.expando+V.uid++}V.uid=1,V.prototype={cache:function(a){var b=a[this.expando];return b||(b={},U(a)&&(a.nodeType?a[this.expando]=b:Object.defineProperty(a,this.expando,{value:b,configurable:!0}))),b},set:function(a,b,c){var d,e=this.cache(a);if("string"==typeof b)e[r.camelCase(b)]=c;else for(d in b)e[r.camelCase(d)]=b[d];return e},get:function(a,b){return void 0===b?this.cache(a):a[this.expando]&&a[this.expando][r.camelCase(b)]},access:function(a,b,c){return void 0===b||b&&"string"==typeof b&&void 0===c?this.get(a,b):(this.set(a,b,c),void 0!==c?c:b)},remove:function(a,b){var c,d=a[this.expando];if(void 0!==d){if(void 0!==b){Array.isArray(b)?b=b.map(r.camelCase):(b=r.camelCase(b),b=b in d?[b]:b.match(L)||[]),c=b.length;while(c--)delete d[b[c]]}(void 0===b||r.isEmptyObject(d))&&(a.nodeType?a[this.expando]=void 0:delete a[this.expando])}},hasData:function(a){var b=a[this.expando];return void 0!==b&&!r.isEmptyObject(b)}};var W=new V,X=new V,Y=/^(?:\{[\w\W]*\}|\[[\w\W]*\])$/,Z=/[A-Z]/g;function $(a){return"true"===a||"false"!==a&&("null"===a?null:a===+a+""?+a:Y.test(a)?JSON.parse(a):a)}function _(a,b,c){var d;if(void 0===c&&1===a.nodeType)if(d="data-"+b.replace(Z,"-$&").toLowerCase(),c=a.getAttribute(d),"string"==typeof c){try{c=$(c)}catch(e){}X.set(a,b,c)}else c=void 0;return c}r.extend({hasData:function(a){return X.hasData(a)||W.hasData(a)},data:function(a,b,c){return X.access(a,b,c)},removeData:function(a,b){X.remove(a,b)},_data:function(a,b,c){return W.access(a,b,c)},_removeData:function(a,b){W.remove(a,b)}}),r.fn.extend({data:function(a,b){var c,d,e,f=this[0],g=f&&f.attributes;if(void 0===a){if(this.length&&(e=X.get(f),1===f.nodeType&&!W.get(f,"hasDataAttrs"))){c=g.length;while(c--)g[c]&&(d=g[c].name,0===d.indexOf("data-")&&(d=r.camelCase(d.slice(5)),_(f,d,e[d])));W.set(f,"hasDataAttrs",!0)}return e}return"object"==typeof a?this.each(function(){X.set(this,a)}):T(this,function(b){var c;if(f&&void 0===b){if(c=X.get(f,a),void 0!==c)return c;if(c=_(f,a),void 0!==c)return c}else this.each(function(){X.set(this,a,b)})},null,b,arguments.length>1,null,!0)},removeData:function(a){return this.each(function(){X.remove(this,a)})}}),r.extend({queue:function(a,b,c){var d;if(a)return b=(b||"fx")+"queue",d=W.get(a,b),c&&(!d||Array.isArray(c)?d=W.access(a,b,r.makeArray(c)):d.push(c)),d||[]},dequeue:function(a,b){b=b||"fx";var c=r.queue(a,b),d=c.length,e=c.shift(),f=r._queueHooks(a,b),g=function(){r.dequeue(a,b)};"inprogress"===e&&(e=c.shift(),d--),e&&("fx"===b&&c.unshift("inprogress"),delete f.stop,e.call(a,g,f)),!d&&f&&f.empty.fire()},_queueHooks:function(a,b){var c=b+"queueHooks";return W.get(a,c)||W.access(a,c,{empty:r.Callbacks("once memory").add(function(){W.remove(a,[b+"queue",c])})})}}),r.fn.extend({queue:function(a,b){var c=2;return"string"!=typeof a&&(b=a,a="fx",c--),arguments.length<c?r.queue(this[0],a):void 0===b?this:this.each(function(){var 
c=r.queue(this,a,b);r._queueHooks(this,a),"fx"===a&&"inprogress"!==c[0]&&r.dequeue(this,a)})},dequeue:function(a){return this.each(function(){r.dequeue(this,a)})},clearQueue:function(a){return this.queue(a||"fx",[])},promise:function(a,b){var c,d=1,e=r.Deferred(),f=this,g=this.length,h=function(){--d||e.resolveWith(f,[f])};"string"!=typeof a&&(b=a,a=void 0),a=a||"fx";while(g--)c=W.get(f[g],a+"queueHooks"),c&&c.empty&&(d++,c.empty.add(h));return h(),e.promise(b)}});var aa=/[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/.source,ba=new RegExp("^(?:([+-])=|)("+aa+")([a-z%]*)$","i"),ca=["Top","Right","Bottom","Left"],da=function(a,b){return a=b||a,"none"===a.style.display||""===a.style.display&&r.contains(a.ownerDocument,a)&&"none"===r.css(a,"display")},ea=function(a,b,c,d){var e,f,g={};for(f in b)g[f]=a.style[f],a.style[f]=b[f];e=c.apply(a,d||[]);for(f in b)a.style[f]=g[f];return e};function fa(a,b,c,d){var e,f=1,g=20,h=d?function(){return d.cur()}:function(){return r.css(a,b,"")},i=h(),j=c&&c[3]||(r.cssNumber[b]?"":"px"),k=(r.cssNumber[b]||"px"!==j&&+i)&&ba.exec(r.css(a,b));if(k&&k[3]!==j){j=j||k[3],c=c||[],k=+i||1;do f=f||".5",k/=f,r.style(a,b,k+j);while(f!==(f=h()/i)&&1!==f&&--g)}return c&&(k=+k||+i||0,e=c[1]?k+(c[1]+1)*c[2]:+c[2],d&&(d.unit=j,d.start=k,d.end=e)),e}var ga={};function ha(a){var b,c=a.ownerDocument,d=a.nodeName,e=ga[d];return e?e:(b=c.body.appendChild(c.createElement(d)),e=r.css(b,"display"),b.parentNode.removeChild(b),"none"===e&&(e="block"),ga[d]=e,e)}function ia(a,b){for(var c,d,e=[],f=0,g=a.length;f<g;f++)d=a[f],d.style&&(c=d.style.display,b?("none"===c&&(e[f]=W.get(d,"display")||null,e[f]||(d.style.display="")),""===d.style.display&&da(d)&&(e[f]=ha(d))):"none"!==c&&(e[f]="none",W.set(d,"display",c)));for(f=0;f<g;f++)null!=e[f]&&(a[f].style.display=e[f]);return a}r.fn.extend({show:function(){return ia(this,!0)},hide:function(){return ia(this)},toggle:function(a){return"boolean"==typeof a?a?this.show():this.hide():this.each(function(){da(this)?r(this).show():r(this).hide()})}});var ja=/^(?:checkbox|radio)$/i,ka=/<([a-z][^\/\0>\x20\t\r\n\f]+)/i,la=/^$|\/(?:java|ecma)script/i,ma={option:[1,"<select multiple='multiple'>","</select>"],thead:[1,"<table>","</table>"],col:[2,"<table><colgroup>","</colgroup></table>"],tr:[2,"<table><tbody>","</tbody></table>"],td:[3,"<table><tbody><tr>","</tr></tbody></table>"],_default:[0,"",""]};ma.optgroup=ma.option,ma.tbody=ma.tfoot=ma.colgroup=ma.caption=ma.thead,ma.th=ma.td;function na(a,b){var c;return c="undefined"!=typeof a.getElementsByTagName?a.getElementsByTagName(b||"*"):"undefined"!=typeof a.querySelectorAll?a.querySelectorAll(b||"*"):[],void 0===b||b&&B(a,b)?r.merge([a],c):c}function oa(a,b){for(var c=0,d=a.length;c<d;c++)W.set(a[c],"globalEval",!b||W.get(b[c],"globalEval"))}var pa=/<|&#?\w+;/;function qa(a,b,c,d,e){for(var f,g,h,i,j,k,l=b.createDocumentFragment(),m=[],n=0,o=a.length;n<o;n++)if(f=a[n],f||0===f)if("object"===r.type(f))r.merge(m,f.nodeType?[f]:f);else if(pa.test(f)){g=g||l.appendChild(b.createElement("div")),h=(ka.exec(f)||["",""])[1].toLowerCase(),i=ma[h]||ma._default,g.innerHTML=i[1]+r.htmlPrefilter(f)+i[2],k=i[0];while(k--)g=g.lastChild;r.merge(m,g.childNodes),g=l.firstChild,g.textContent=""}else m.push(b.createTextNode(f));l.textContent="",n=0;while(f=m[n++])if(d&&r.inArray(f,d)>-1)e&&e.push(f);else if(j=r.contains(f.ownerDocument,f),g=na(l.appendChild(f),"script"),j&&oa(g),c){k=0;while(f=g[k++])la.test(f.type||"")&&c.push(f)}return l}!function(){var 
a=d.createDocumentFragment(),b=a.appendChild(d.createElement("div")),c=d.createElement("input");c.setAttribute("type","radio"),c.setAttribute("checked","checked"),c.setAttribute("name","t"),b.appendChild(c),o.checkClone=b.cloneNode(!0).cloneNode(!0).lastChild.checked,b.innerHTML="<textarea>x</textarea>",o.noCloneChecked=!!b.cloneNode(!0).lastChild.defaultValue}();var ra=d.documentElement,sa=/^key/,ta=/^(?:mouse|pointer|contextmenu|drag|drop)|click/,ua=/^([^.]*)(?:\.(.+)|)/;function va(){return!0}function wa(){return!1}function xa(){try{return d.activeElement}catch(a){}}function ya(a,b,c,d,e,f){var g,h;if("object"==typeof b){"string"!=typeof c&&(d=d||c,c=void 0);for(h in b)ya(a,h,c,d,b[h],f);return a}if(null==d&&null==e?(e=c,d=c=void 0):null==e&&("string"==typeof c?(e=d,d=void 0):(e=d,d=c,c=void 0)),e===!1)e=wa;else if(!e)return a;return 1===f&&(g=e,e=function(a){return r().off(a),g.apply(this,arguments)},e.guid=g.guid||(g.guid=r.guid++)),a.each(function(){r.event.add(this,b,e,d,c)})}r.event={global:{},add:function(a,b,c,d,e){var f,g,h,i,j,k,l,m,n,o,p,q=W.get(a);if(q){c.handler&&(f=c,c=f.handler,e=f.selector),e&&r.find.matchesSelector(ra,e),c.guid||(c.guid=r.guid++),(i=q.events)||(i=q.events={}),(g=q.handle)||(g=q.handle=function(b){return"undefined"!=typeof r&&r.event.triggered!==b.type?r.event.dispatch.apply(a,arguments):void 0}),b=(b||"").match(L)||[""],j=b.length;while(j--)h=ua.exec(b[j])||[],n=p=h[1],o=(h[2]||"").split(".").sort(),n&&(l=r.event.special[n]||{},n=(e?l.delegateType:l.bindType)||n,l=r.event.special[n]||{},k=r.extend({type:n,origType:p,data:d,handler:c,guid:c.guid,selector:e,needsContext:e&&r.expr.match.needsContext.test(e),namespace:o.join(".")},f),(m=i[n])||(m=i[n]=[],m.delegateCount=0,l.setup&&l.setup.call(a,d,o,g)!==!1||a.addEventListener&&a.addEventListener(n,g)),l.add&&(l.add.call(a,k),k.handler.guid||(k.handler.guid=c.guid)),e?m.splice(m.delegateCount++,0,k):m.push(k),r.event.global[n]=!0)}},remove:function(a,b,c,d,e){var f,g,h,i,j,k,l,m,n,o,p,q=W.hasData(a)&&W.get(a);if(q&&(i=q.events)){b=(b||"").match(L)||[""],j=b.length;while(j--)if(h=ua.exec(b[j])||[],n=p=h[1],o=(h[2]||"").split(".").sort(),n){l=r.event.special[n]||{},n=(d?l.delegateType:l.bindType)||n,m=i[n]||[],h=h[2]&&new RegExp("(^|\\.)"+o.join("\\.(?:.*\\.|)")+"(\\.|$)"),g=f=m.length;while(f--)k=m[f],!e&&p!==k.origType||c&&c.guid!==k.guid||h&&!h.test(k.namespace)||d&&d!==k.selector&&("**"!==d||!k.selector)||(m.splice(f,1),k.selector&&m.delegateCount--,l.remove&&l.remove.call(a,k));g&&!m.length&&(l.teardown&&l.teardown.call(a,o,q.handle)!==!1||r.removeEvent(a,n,q.handle),delete i[n])}else for(n in i)r.event.remove(a,n+b[j],c,d,!0);r.isEmptyObject(i)&&W.remove(a,"handle events")}},dispatch:function(a){var b=r.event.fix(a),c,d,e,f,g,h,i=new Array(arguments.length),j=(W.get(this,"events")||{})[b.type]||[],k=r.event.special[b.type]||{};for(i[0]=b,c=1;c<arguments.length;c++)i[c]=arguments[c];if(b.delegateTarget=this,!k.preDispatch||k.preDispatch.call(this,b)!==!1){h=r.event.handlers.call(this,b,j),c=0;while((f=h[c++])&&!b.isPropagationStopped()){b.currentTarget=f.elem,d=0;while((g=f.handlers[d++])&&!b.isImmediatePropagationStopped())b.rnamespace&&!b.rnamespace.test(g.namespace)||(b.handleObj=g,b.data=g.data,e=((r.event.special[g.origType]||{}).handle||g.handler).apply(f.elem,i),void 0!==e&&(b.result=e)===!1&&(b.preventDefault(),b.stopPropagation()))}return k.postDispatch&&k.postDispatch.call(this,b),b.result}},handlers:function(a,b){var 
c,d,e,f,g,h=[],i=b.delegateCount,j=a.target;if(i&&j.nodeType&&!("click"===a.type&&a.button>=1))for(;j!==this;j=j.parentNode||this)if(1===j.nodeType&&("click"!==a.type||j.disabled!==!0)){for(f=[],g={},c=0;c<i;c++)d=b[c],e=d.selector+" ",void 0===g[e]&&(g[e]=d.needsContext?r(e,this).index(j)>-1:r.find(e,this,null,[j]).length),g[e]&&f.push(d);f.length&&h.push({elem:j,handlers:f})}return j=this,i<b.length&&h.push({elem:j,handlers:b.slice(i)}),h},addProp:function(a,b){Object.defineProperty(r.Event.prototype,a,{enumerable:!0,configurable:!0,get:r.isFunction(b)?function(){if(this.originalEvent)return b(this.originalEvent)}:function(){if(this.originalEvent)return this.originalEvent[a]},set:function(b){Object.defineProperty(this,a,{enumerable:!0,configurable:!0,writable:!0,value:b})}})},fix:function(a){return a[r.expando]?a:new r.Event(a)},special:{load:{noBubble:!0},focus:{trigger:function(){if(this!==xa()&&this.focus)return this.focus(),!1},delegateType:"focusin"},blur:{trigger:function(){if(this===xa()&&this.blur)return this.blur(),!1},delegateType:"focusout"},click:{trigger:function(){if("checkbox"===this.type&&this.click&&B(this,"input"))return this.click(),!1},_default:function(a){return B(a.target,"a")}},beforeunload:{postDispatch:function(a){void 0!==a.result&&a.originalEvent&&(a.originalEvent.returnValue=a.result)}}}},r.removeEvent=function(a,b,c){a.removeEventListener&&a.removeEventListener(b,c)},r.Event=function(a,b){return this instanceof r.Event?(a&&a.type?(this.originalEvent=a,this.type=a.type,this.isDefaultPrevented=a.defaultPrevented||void 0===a.defaultPrevented&&a.returnValue===!1?va:wa,this.target=a.target&&3===a.target.nodeType?a.target.parentNode:a.target,this.currentTarget=a.currentTarget,this.relatedTarget=a.relatedTarget):this.type=a,b&&r.extend(this,b),this.timeStamp=a&&a.timeStamp||r.now(),void(this[r.expando]=!0)):new r.Event(a,b)},r.Event.prototype={constructor:r.Event,isDefaultPrevented:wa,isPropagationStopped:wa,isImmediatePropagationStopped:wa,isSimulated:!1,preventDefault:function(){var a=this.originalEvent;this.isDefaultPrevented=va,a&&!this.isSimulated&&a.preventDefault()},stopPropagation:function(){var a=this.originalEvent;this.isPropagationStopped=va,a&&!this.isSimulated&&a.stopPropagation()},stopImmediatePropagation:function(){var a=this.originalEvent;this.isImmediatePropagationStopped=va,a&&!this.isSimulated&&a.stopImmediatePropagation(),this.stopPropagation()}},r.each({altKey:!0,bubbles:!0,cancelable:!0,changedTouches:!0,ctrlKey:!0,detail:!0,eventPhase:!0,metaKey:!0,pageX:!0,pageY:!0,shiftKey:!0,view:!0,"char":!0,charCode:!0,key:!0,keyCode:!0,button:!0,buttons:!0,clientX:!0,clientY:!0,offsetX:!0,offsetY:!0,pointerId:!0,pointerType:!0,screenX:!0,screenY:!0,targetTouches:!0,toElement:!0,touches:!0,which:function(a){var b=a.button;return null==a.which&&sa.test(a.type)?null!=a.charCode?a.charCode:a.keyCode:!a.which&&void 0!==b&&ta.test(a.type)?1&b?1:2&b?3:4&b?2:0:a.which}},r.event.addProp),r.each({mouseenter:"mouseover",mouseleave:"mouseout",pointerenter:"pointerover",pointerleave:"pointerout"},function(a,b){r.event.special[a]={delegateType:b,bindType:b,handle:function(a){var c,d=this,e=a.relatedTarget,f=a.handleObj;return e&&(e===d||r.contains(d,e))||(a.type=f.origType,c=f.handler.apply(this,arguments),a.type=b),c}}}),r.fn.extend({on:function(a,b,c,d){return ya(this,a,b,c,d)},one:function(a,b,c,d){return ya(this,a,b,c,d,1)},off:function(a,b,c){var d,e;if(a&&a.preventDefault&&a.handleObj)return 
d=a.handleObj,r(a.delegateTarget).off(d.namespace?d.origType+"."+d.namespace:d.origType,d.selector,d.handler),this;if("object"==typeof a){for(e in a)this.off(e,b,a[e]);return this}return b!==!1&&"function"!=typeof b||(c=b,b=void 0),c===!1&&(c=wa),this.each(function(){r.event.remove(this,a,c,b)})}});var za=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([a-z][^\/\0>\x20\t\r\n\f]*)[^>]*)\/>/gi,Aa=/<script|<style|<link/i,Ba=/checked\s*(?:[^=]|=\s*.checked.)/i,Ca=/^true\/(.*)/,Da=/^\s*<!(?:\[CDATA\[|--)|(?:\]\]|--)>\s*$/g;function Ea(a,b){return B(a,"table")&&B(11!==b.nodeType?b:b.firstChild,"tr")?r(">tbody",a)[0]||a:a}function Fa(a){return a.type=(null!==a.getAttribute("type"))+"/"+a.type,a}function Ga(a){var b=Ca.exec(a.type);return b?a.type=b[1]:a.removeAttribute("type"),a}function Ha(a,b){var c,d,e,f,g,h,i,j;if(1===b.nodeType){if(W.hasData(a)&&(f=W.access(a),g=W.set(b,f),j=f.events)){delete g.handle,g.events={};for(e in j)for(c=0,d=j[e].length;c<d;c++)r.event.add(b,e,j[e][c])}X.hasData(a)&&(h=X.access(a),i=r.extend({},h),X.set(b,i))}}function Ia(a,b){var c=b.nodeName.toLowerCase();"input"===c&&ja.test(a.type)?b.checked=a.checked:"input"!==c&&"textarea"!==c||(b.defaultValue=a.defaultValue)}function Ja(a,b,c,d){b=g.apply([],b);var e,f,h,i,j,k,l=0,m=a.length,n=m-1,q=b[0],s=r.isFunction(q);if(s||m>1&&"string"==typeof q&&!o.checkClone&&Ba.test(q))return a.each(function(e){var f=a.eq(e);s&&(b[0]=q.call(this,e,f.html())),Ja(f,b,c,d)});if(m&&(e=qa(b,a[0].ownerDocument,!1,a,d),f=e.firstChild,1===e.childNodes.length&&(e=f),f||d)){for(h=r.map(na(e,"script"),Fa),i=h.length;l<m;l++)j=e,l!==n&&(j=r.clone(j,!0,!0),i&&r.merge(h,na(j,"script"))),c.call(a[l],j,l);if(i)for(k=h[h.length-1].ownerDocument,r.map(h,Ga),l=0;l<i;l++)j=h[l],la.test(j.type||"")&&!W.access(j,"globalEval")&&r.contains(k,j)&&(j.src?r._evalUrl&&r._evalUrl(j.src):p(j.textContent.replace(Da,""),k))}return a}function Ka(a,b,c){for(var d,e=b?r.filter(b,a):a,f=0;null!=(d=e[f]);f++)c||1!==d.nodeType||r.cleanData(na(d)),d.parentNode&&(c&&r.contains(d.ownerDocument,d)&&oa(na(d,"script")),d.parentNode.removeChild(d));return a}r.extend({htmlPrefilter:function(a){return a.replace(za,"<$1></$2>")},clone:function(a,b,c){var d,e,f,g,h=a.cloneNode(!0),i=r.contains(a.ownerDocument,a);if(!(o.noCloneChecked||1!==a.nodeType&&11!==a.nodeType||r.isXMLDoc(a)))for(g=na(h),f=na(a),d=0,e=f.length;d<e;d++)Ia(f[d],g[d]);if(b)if(c)for(f=f||na(a),g=g||na(h),d=0,e=f.length;d<e;d++)Ha(f[d],g[d]);else Ha(a,h);return g=na(h,"script"),g.length>0&&oa(g,!i&&na(a,"script")),h},cleanData:function(a){for(var b,c,d,e=r.event.special,f=0;void 0!==(c=a[f]);f++)if(U(c)){if(b=c[W.expando]){if(b.events)for(d in b.events)e[d]?r.event.remove(c,d):r.removeEvent(c,d,b.handle);c[W.expando]=void 0}c[X.expando]&&(c[X.expando]=void 0)}}}),r.fn.extend({detach:function(a){return Ka(this,a,!0)},remove:function(a){return Ka(this,a)},text:function(a){return T(this,function(a){return void 0===a?r.text(this):this.empty().each(function(){1!==this.nodeType&&11!==this.nodeType&&9!==this.nodeType||(this.textContent=a)})},null,a,arguments.length)},append:function(){return Ja(this,arguments,function(a){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var b=Ea(this,a);b.appendChild(a)}})},prepend:function(){return Ja(this,arguments,function(a){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var b=Ea(this,a);b.insertBefore(a,b.firstChild)}})},before:function(){return 
Ja(this,arguments,function(a){this.parentNode&&this.parentNode.insertBefore(a,this)})},after:function(){return Ja(this,arguments,function(a){this.parentNode&&this.parentNode.insertBefore(a,this.nextSibling)})},empty:function(){for(var a,b=0;null!=(a=this[b]);b++)1===a.nodeType&&(r.cleanData(na(a,!1)),a.textContent="");return this},clone:function(a,b){return a=null!=a&&a,b=null==b?a:b,this.map(function(){return r.clone(this,a,b)})},html:function(a){return T(this,function(a){var b=this[0]||{},c=0,d=this.length;if(void 0===a&&1===b.nodeType)return b.innerHTML;if("string"==typeof a&&!Aa.test(a)&&!ma[(ka.exec(a)||["",""])[1].toLowerCase()]){a=r.htmlPrefilter(a);try{for(;c<d;c++)b=this[c]||{},1===b.nodeType&&(r.cleanData(na(b,!1)),b.innerHTML=a);b=0}catch(e){}}b&&this.empty().append(a)},null,a,arguments.length)},replaceWith:function(){var a=[];return Ja(this,arguments,function(b){var c=this.parentNode;r.inArray(this,a)<0&&(r.cleanData(na(this)),c&&c.replaceChild(b,this))},a)}}),r.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){r.fn[a]=function(a){for(var c,d=[],e=r(a),f=e.length-1,g=0;g<=f;g++)c=g===f?this:this.clone(!0),r(e[g])[b](c),h.apply(d,c.get());return this.pushStack(d)}});var La=/^margin/,Ma=new RegExp("^("+aa+")(?!px)[a-z%]+$","i"),Na=function(b){var c=b.ownerDocument.defaultView;return c&&c.opener||(c=a),c.getComputedStyle(b)};!function(){function b(){if(i){i.style.cssText="box-sizing:border-box;position:relative;display:block;margin:auto;border:1px;padding:1px;top:1%;width:50%",i.innerHTML="",ra.appendChild(h);var b=a.getComputedStyle(i);c="1%"!==b.top,g="2px"===b.marginLeft,e="4px"===b.width,i.style.marginRight="50%",f="4px"===b.marginRight,ra.removeChild(h),i=null}}var c,e,f,g,h=d.createElement("div"),i=d.createElement("div");i.style&&(i.style.backgroundClip="content-box",i.cloneNode(!0).style.backgroundClip="",o.clearCloneStyle="content-box"===i.style.backgroundClip,h.style.cssText="border:0;width:8px;height:0;top:0;left:-9999px;padding:0;margin-top:1px;position:absolute",h.appendChild(i),r.extend(o,{pixelPosition:function(){return b(),c},boxSizingReliable:function(){return b(),e},pixelMarginRight:function(){return b(),f},reliableMarginLeft:function(){return b(),g}}))}();function Oa(a,b,c){var d,e,f,g,h=a.style;return c=c||Na(a),c&&(g=c.getPropertyValue(b)||c[b],""!==g||r.contains(a.ownerDocument,a)||(g=r.style(a,b)),!o.pixelMarginRight()&&Ma.test(g)&&La.test(b)&&(d=h.width,e=h.minWidth,f=h.maxWidth,h.minWidth=h.maxWidth=h.width=g,g=c.width,h.width=d,h.minWidth=e,h.maxWidth=f)),void 0!==g?g+"":g}function Pa(a,b){return{get:function(){return a()?void delete this.get:(this.get=b).apply(this,arguments)}}}var Qa=/^(none|table(?!-c[ea]).+)/,Ra=/^--/,Sa={position:"absolute",visibility:"hidden",display:"block"},Ta={letterSpacing:"0",fontWeight:"400"},Ua=["Webkit","Moz","ms"],Va=d.createElement("div").style;function Wa(a){if(a in Va)return a;var b=a[0].toUpperCase()+a.slice(1),c=Ua.length;while(c--)if(a=Ua[c]+b,a in Va)return a}function Xa(a){var b=r.cssProps[a];return b||(b=r.cssProps[a]=Wa(a)||a),b}function Ya(a,b,c){var d=ba.exec(b);return d?Math.max(0,d[2]-(c||0))+(d[3]||"px"):b}function Za(a,b,c,d,e){var 
f,g=0;for(f=c===(d?"border":"content")?4:"width"===b?1:0;f<4;f+=2)"margin"===c&&(g+=r.css(a,c+ca[f],!0,e)),d?("content"===c&&(g-=r.css(a,"padding"+ca[f],!0,e)),"margin"!==c&&(g-=r.css(a,"border"+ca[f]+"Width",!0,e))):(g+=r.css(a,"padding"+ca[f],!0,e),"padding"!==c&&(g+=r.css(a,"border"+ca[f]+"Width",!0,e)));return g}function $a(a,b,c){var d,e=Na(a),f=Oa(a,b,e),g="border-box"===r.css(a,"boxSizing",!1,e);return Ma.test(f)?f:(d=g&&(o.boxSizingReliable()||f===a.style[b]),"auto"===f&&(f=a["offset"+b[0].toUpperCase()+b.slice(1)]),f=parseFloat(f)||0,f+Za(a,b,c||(g?"border":"content"),d,e)+"px")}r.extend({cssHooks:{opacity:{get:function(a,b){if(b){var c=Oa(a,"opacity");return""===c?"1":c}}}},cssNumber:{animationIterationCount:!0,columnCount:!0,fillOpacity:!0,flexGrow:!0,flexShrink:!0,fontWeight:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,widows:!0,zIndex:!0,zoom:!0},cssProps:{"float":"cssFloat"},style:function(a,b,c,d){if(a&&3!==a.nodeType&&8!==a.nodeType&&a.style){var e,f,g,h=r.camelCase(b),i=Ra.test(b),j=a.style;return i||(b=Xa(h)),g=r.cssHooks[b]||r.cssHooks[h],void 0===c?g&&"get"in g&&void 0!==(e=g.get(a,!1,d))?e:j[b]:(f=typeof c,"string"===f&&(e=ba.exec(c))&&e[1]&&(c=fa(a,b,e),f="number"),null!=c&&c===c&&("number"===f&&(c+=e&&e[3]||(r.cssNumber[h]?"":"px")),o.clearCloneStyle||""!==c||0!==b.indexOf("background")||(j[b]="inherit"),g&&"set"in g&&void 0===(c=g.set(a,c,d))||(i?j.setProperty(b,c):j[b]=c)),void 0)}},css:function(a,b,c,d){var e,f,g,h=r.camelCase(b),i=Ra.test(b);return i||(b=Xa(h)),g=r.cssHooks[b]||r.cssHooks[h],g&&"get"in g&&(e=g.get(a,!0,c)),void 0===e&&(e=Oa(a,b,d)),"normal"===e&&b in Ta&&(e=Ta[b]),""===c||c?(f=parseFloat(e),c===!0||isFinite(f)?f||0:e):e}}),r.each(["height","width"],function(a,b){r.cssHooks[b]={get:function(a,c,d){if(c)return!Qa.test(r.css(a,"display"))||a.getClientRects().length&&a.getBoundingClientRect().width?$a(a,b,d):ea(a,Sa,function(){return $a(a,b,d)})},set:function(a,c,d){var e,f=d&&Na(a),g=d&&Za(a,b,d,"border-box"===r.css(a,"boxSizing",!1,f),f);return g&&(e=ba.exec(c))&&"px"!==(e[3]||"px")&&(a.style[b]=c,c=r.css(a,b)),Ya(a,c,g)}}}),r.cssHooks.marginLeft=Pa(o.reliableMarginLeft,function(a,b){if(b)return(parseFloat(Oa(a,"marginLeft"))||a.getBoundingClientRect().left-ea(a,{marginLeft:0},function(){return a.getBoundingClientRect().left}))+"px"}),r.each({margin:"",padding:"",border:"Width"},function(a,b){r.cssHooks[a+b]={expand:function(c){for(var d=0,e={},f="string"==typeof c?c.split(" "):[c];d<4;d++)e[a+ca[d]+b]=f[d]||f[d-2]||f[0];return e}},La.test(a)||(r.cssHooks[a+b].set=Ya)}),r.fn.extend({css:function(a,b){return T(this,function(a,b,c){var d,e,f={},g=0;if(Array.isArray(b)){for(d=Na(a),e=b.length;g<e;g++)f[b[g]]=r.css(a,b[g],!1,d);return f}return void 0!==c?r.style(a,b,c):r.css(a,b)},a,b,arguments.length>1)}});function _a(a,b,c,d,e){return new _a.prototype.init(a,b,c,d,e)}r.Tween=_a,_a.prototype={constructor:_a,init:function(a,b,c,d,e,f){this.elem=a,this.prop=c,this.easing=e||r.easing._default,this.options=b,this.start=this.now=this.cur(),this.end=d,this.unit=f||(r.cssNumber[c]?"":"px")},cur:function(){var a=_a.propHooks[this.prop];return a&&a.get?a.get(this):_a.propHooks._default.get(this)},run:function(a){var b,c=_a.propHooks[this.prop];return 
this.options.duration?this.pos=b=r.easing[this.easing](a,this.options.duration*a,0,1,this.options.duration):this.pos=b=a,this.now=(this.end-this.start)*b+this.start,this.options.step&&this.options.step.call(this.elem,this.now,this),c&&c.set?c.set(this):_a.propHooks._default.set(this),this}},_a.prototype.init.prototype=_a.prototype,_a.propHooks={_default:{get:function(a){var b;return 1!==a.elem.nodeType||null!=a.elem[a.prop]&&null==a.elem.style[a.prop]?a.elem[a.prop]:(b=r.css(a.elem,a.prop,""),b&&"auto"!==b?b:0)},set:function(a){r.fx.step[a.prop]?r.fx.step[a.prop](a):1!==a.elem.nodeType||null==a.elem.style[r.cssProps[a.prop]]&&!r.cssHooks[a.prop]?a.elem[a.prop]=a.now:r.style(a.elem,a.prop,a.now+a.unit)}}},_a.propHooks.scrollTop=_a.propHooks.scrollLeft={set:function(a){a.elem.nodeType&&a.elem.parentNode&&(a.elem[a.prop]=a.now)}},r.easing={linear:function(a){return a},swing:function(a){return.5-Math.cos(a*Math.PI)/2},_default:"swing"},r.fx=_a.prototype.init,r.fx.step={};var ab,bb,cb=/^(?:toggle|show|hide)$/,db=/queueHooks$/;function eb(){bb&&(d.hidden===!1&&a.requestAnimationFrame?a.requestAnimationFrame(eb):a.setTimeout(eb,r.fx.interval),r.fx.tick())}function fb(){return a.setTimeout(function(){ab=void 0}),ab=r.now()}function gb(a,b){var c,d=0,e={height:a};for(b=b?1:0;d<4;d+=2-b)c=ca[d],e["margin"+c]=e["padding"+c]=a;return b&&(e.opacity=e.width=a),e}function hb(a,b,c){for(var d,e=(kb.tweeners[b]||[]).concat(kb.tweeners["*"]),f=0,g=e.length;f<g;f++)if(d=e[f].call(c,b,a))return d}function ib(a,b,c){var d,e,f,g,h,i,j,k,l="width"in b||"height"in b,m=this,n={},o=a.style,p=a.nodeType&&da(a),q=W.get(a,"fxshow");c.queue||(g=r._queueHooks(a,"fx"),null==g.unqueued&&(g.unqueued=0,h=g.empty.fire,g.empty.fire=function(){g.unqueued||h()}),g.unqueued++,m.always(function(){m.always(function(){g.unqueued--,r.queue(a,"fx").length||g.empty.fire()})}));for(d in b)if(e=b[d],cb.test(e)){if(delete b[d],f=f||"toggle"===e,e===(p?"hide":"show")){if("show"!==e||!q||void 0===q[d])continue;p=!0}n[d]=q&&q[d]||r.style(a,d)}if(i=!r.isEmptyObject(b),i||!r.isEmptyObject(n)){l&&1===a.nodeType&&(c.overflow=[o.overflow,o.overflowX,o.overflowY],j=q&&q.display,null==j&&(j=W.get(a,"display")),k=r.css(a,"display"),"none"===k&&(j?k=j:(ia([a],!0),j=a.style.display||j,k=r.css(a,"display"),ia([a]))),("inline"===k||"inline-block"===k&&null!=j)&&"none"===r.css(a,"float")&&(i||(m.done(function(){o.display=j}),null==j&&(k=o.display,j="none"===k?"":k)),o.display="inline-block")),c.overflow&&(o.overflow="hidden",m.always(function(){o.overflow=c.overflow[0],o.overflowX=c.overflow[1],o.overflowY=c.overflow[2]})),i=!1;for(d in n)i||(q?"hidden"in q&&(p=q.hidden):q=W.access(a,"fxshow",{display:j}),f&&(q.hidden=!p),p&&ia([a],!0),m.done(function(){p||ia([a]),W.remove(a,"fxshow");for(d in n)r.style(a,d,n[d])})),i=hb(p?q[d]:0,d,m),d in q||(q[d]=i.start,p&&(i.end=i.start,i.start=0))}}function jb(a,b){var c,d,e,f,g;for(c in a)if(d=r.camelCase(c),e=b[d],f=a[c],Array.isArray(f)&&(e=f[1],f=a[c]=f[0]),c!==d&&(a[d]=f,delete a[c]),g=r.cssHooks[d],g&&"expand"in g){f=g.expand(f),delete a[d];for(c in f)c in a||(a[c]=f[c],b[c]=e)}else b[d]=e}function kb(a,b,c){var d,e,f=0,g=kb.prefilters.length,h=r.Deferred().always(function(){delete i.elem}),i=function(){if(e)return!1;for(var b=ab||fb(),c=Math.max(0,j.startTime+j.duration-b),d=c/j.duration||0,f=1-d,g=0,i=j.tweens.length;g<i;g++)j.tweens[g].run(f);return 
h.notifyWith(a,[j,f,c]),f<1&&i?c:(i||h.notifyWith(a,[j,1,0]),h.resolveWith(a,[j]),!1)},j=h.promise({elem:a,props:r.extend({},b),opts:r.extend(!0,{specialEasing:{},easing:r.easing._default},c),originalProperties:b,originalOptions:c,startTime:ab||fb(),duration:c.duration,tweens:[],createTween:function(b,c){var d=r.Tween(a,j.opts,b,c,j.opts.specialEasing[b]||j.opts.easing);return j.tweens.push(d),d},stop:function(b){var c=0,d=b?j.tweens.length:0;if(e)return this;for(e=!0;c<d;c++)j.tweens[c].run(1);return b?(h.notifyWith(a,[j,1,0]),h.resolveWith(a,[j,b])):h.rejectWith(a,[j,b]),this}}),k=j.props;for(jb(k,j.opts.specialEasing);f<g;f++)if(d=kb.prefilters[f].call(j,a,k,j.opts))return r.isFunction(d.stop)&&(r._queueHooks(j.elem,j.opts.queue).stop=r.proxy(d.stop,d)),d;return r.map(k,hb,j),r.isFunction(j.opts.start)&&j.opts.start.call(a,j),j.progress(j.opts.progress).done(j.opts.done,j.opts.complete).fail(j.opts.fail).always(j.opts.always),r.fx.timer(r.extend(i,{elem:a,anim:j,queue:j.opts.queue})),j}r.Animation=r.extend(kb,{tweeners:{"*":[function(a,b){var c=this.createTween(a,b);return fa(c.elem,a,ba.exec(b),c),c}]},tweener:function(a,b){r.isFunction(a)?(b=a,a=["*"]):a=a.match(L);for(var c,d=0,e=a.length;d<e;d++)c=a[d],kb.tweeners[c]=kb.tweeners[c]||[],kb.tweeners[c].unshift(b)},prefilters:[ib],prefilter:function(a,b){b?kb.prefilters.unshift(a):kb.prefilters.push(a)}}),r.speed=function(a,b,c){var d=a&&"object"==typeof a?r.extend({},a):{complete:c||!c&&b||r.isFunction(a)&&a,duration:a,easing:c&&b||b&&!r.isFunction(b)&&b};return r.fx.off?d.duration=0:"number"!=typeof d.duration&&(d.duration in r.fx.speeds?d.duration=r.fx.speeds[d.duration]:d.duration=r.fx.speeds._default),null!=d.queue&&d.queue!==!0||(d.queue="fx"),d.old=d.complete,d.complete=function(){r.isFunction(d.old)&&d.old.call(this),d.queue&&r.dequeue(this,d.queue)},d},r.fn.extend({fadeTo:function(a,b,c,d){return this.filter(da).css("opacity",0).show().end().animate({opacity:b},a,c,d)},animate:function(a,b,c,d){var e=r.isEmptyObject(a),f=r.speed(b,c,d),g=function(){var b=kb(this,r.extend({},a),f);(e||W.get(this,"finish"))&&b.stop(!0)};return g.finish=g,e||f.queue===!1?this.each(g):this.queue(f.queue,g)},stop:function(a,b,c){var d=function(a){var b=a.stop;delete a.stop,b(c)};return"string"!=typeof a&&(c=b,b=a,a=void 0),b&&a!==!1&&this.queue(a||"fx",[]),this.each(function(){var b=!0,e=null!=a&&a+"queueHooks",f=r.timers,g=W.get(this);if(e)g[e]&&g[e].stop&&d(g[e]);else for(e in g)g[e]&&g[e].stop&&db.test(e)&&d(g[e]);for(e=f.length;e--;)f[e].elem!==this||null!=a&&f[e].queue!==a||(f[e].anim.stop(c),b=!1,f.splice(e,1));!b&&c||r.dequeue(this,a)})},finish:function(a){return a!==!1&&(a=a||"fx"),this.each(function(){var b,c=W.get(this),d=c[a+"queue"],e=c[a+"queueHooks"],f=r.timers,g=d?d.length:0;for(c.finish=!0,r.queue(this,a,[]),e&&e.stop&&e.stop.call(this,!0),b=f.length;b--;)f[b].elem===this&&f[b].queue===a&&(f[b].anim.stop(!0),f.splice(b,1));for(b=0;b<g;b++)d[b]&&d[b].finish&&d[b].finish.call(this);delete c.finish})}}),r.each(["toggle","show","hide"],function(a,b){var c=r.fn[b];r.fn[b]=function(a,d,e){return null==a||"boolean"==typeof a?c.apply(this,arguments):this.animate(gb(b,!0),a,d,e)}}),r.each({slideDown:gb("show"),slideUp:gb("hide"),slideToggle:gb("toggle"),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"},fadeToggle:{opacity:"toggle"}},function(a,b){r.fn[a]=function(a,c,d){return this.animate(b,a,c,d)}}),r.timers=[],r.fx.tick=function(){var 
a,b=0,c=r.timers;for(ab=r.now();b<c.length;b++)a=c[b],a()||c[b]!==a||c.splice(b--,1);c.length||r.fx.stop(),ab=void 0},r.fx.timer=function(a){r.timers.push(a),r.fx.start()},r.fx.interval=13,r.fx.start=function(){bb||(bb=!0,eb())},r.fx.stop=function(){bb=null},r.fx.speeds={slow:600,fast:200,_default:400},r.fn.delay=function(b,c){return b=r.fx?r.fx.speeds[b]||b:b,c=c||"fx",this.queue(c,function(c,d){var e=a.setTimeout(c,b);d.stop=function(){a.clearTimeout(e)}})},function(){var a=d.createElement("input"),b=d.createElement("select"),c=b.appendChild(d.createElement("option"));a.type="checkbox",o.checkOn=""!==a.value,o.optSelected=c.selected,a=d.createElement("input"),a.value="t",a.type="radio",o.radioValue="t"===a.value}();var lb,mb=r.expr.attrHandle;r.fn.extend({attr:function(a,b){return T(this,r.attr,a,b,arguments.length>1)},removeAttr:function(a){return this.each(function(){r.removeAttr(this,a)})}}),r.extend({attr:function(a,b,c){var d,e,f=a.nodeType;if(3!==f&&8!==f&&2!==f)return"undefined"==typeof a.getAttribute?r.prop(a,b,c):(1===f&&r.isXMLDoc(a)||(e=r.attrHooks[b.toLowerCase()]||(r.expr.match.bool.test(b)?lb:void 0)),void 0!==c?null===c?void r.removeAttr(a,b):e&&"set"in e&&void 0!==(d=e.set(a,c,b))?d:(a.setAttribute(b,c+""),c):e&&"get"in e&&null!==(d=e.get(a,b))?d:(d=r.find.attr(a,b),
null==d?void 0:d))},attrHooks:{type:{set:function(a,b){if(!o.radioValue&&"radio"===b&&B(a,"input")){var c=a.value;return a.setAttribute("type",b),c&&(a.value=c),b}}}},removeAttr:function(a,b){var c,d=0,e=b&&b.match(L);if(e&&1===a.nodeType)while(c=e[d++])a.removeAttribute(c)}}),lb={set:function(a,b,c){return b===!1?r.removeAttr(a,c):a.setAttribute(c,c),c}},r.each(r.expr.match.bool.source.match(/\w+/g),function(a,b){var c=mb[b]||r.find.attr;mb[b]=function(a,b,d){var e,f,g=b.toLowerCase();return d||(f=mb[g],mb[g]=e,e=null!=c(a,b,d)?g:null,mb[g]=f),e}});var nb=/^(?:input|select|textarea|button)$/i,ob=/^(?:a|area)$/i;r.fn.extend({prop:function(a,b){return T(this,r.prop,a,b,arguments.length>1)},removeProp:function(a){return this.each(function(){delete this[r.propFix[a]||a]})}}),r.extend({prop:function(a,b,c){var d,e,f=a.nodeType;if(3!==f&&8!==f&&2!==f)return 1===f&&r.isXMLDoc(a)||(b=r.propFix[b]||b,e=r.propHooks[b]),void 0!==c?e&&"set"in e&&void 0!==(d=e.set(a,c,b))?d:a[b]=c:e&&"get"in e&&null!==(d=e.get(a,b))?d:a[b]},propHooks:{tabIndex:{get:function(a){var b=r.find.attr(a,"tabindex");return b?parseInt(b,10):nb.test(a.nodeName)||ob.test(a.nodeName)&&a.href?0:-1}}},propFix:{"for":"htmlFor","class":"className"}}),o.optSelected||(r.propHooks.selected={get:function(a){var b=a.parentNode;return b&&b.parentNode&&b.parentNode.selectedIndex,null},set:function(a){var b=a.parentNode;b&&(b.selectedIndex,b.parentNode&&b.parentNode.selectedIndex)}}),r.each(["tabIndex","readOnly","maxLength","cellSpacing","cellPadding","rowSpan","colSpan","useMap","frameBorder","contentEditable"],function(){r.propFix[this.toLowerCase()]=this});function pb(a){var b=a.match(L)||[];return b.join(" ")}function qb(a){return a.getAttribute&&a.getAttribute("class")||""}r.fn.extend({addClass:function(a){var b,c,d,e,f,g,h,i=0;if(r.isFunction(a))return this.each(function(b){r(this).addClass(a.call(this,b,qb(this)))});if("string"==typeof a&&a){b=a.match(L)||[];while(c=this[i++])if(e=qb(c),d=1===c.nodeType&&" "+pb(e)+" "){g=0;while(f=b[g++])d.indexOf(" "+f+" ")<0&&(d+=f+" ");h=pb(d),e!==h&&c.setAttribute("class",h)}}return this},removeClass:function(a){var b,c,d,e,f,g,h,i=0;if(r.isFunction(a))return this.each(function(b){r(this).removeClass(a.call(this,b,qb(this)))});if(!arguments.length)return this.attr("class","");if("string"==typeof a&&a){b=a.match(L)||[];while(c=this[i++])if(e=qb(c),d=1===c.nodeType&&" "+pb(e)+" "){g=0;while(f=b[g++])while(d.indexOf(" "+f+" ")>-1)d=d.replace(" "+f+" "," ");h=pb(d),e!==h&&c.setAttribute("class",h)}}return this},toggleClass:function(a,b){var c=typeof a;return"boolean"==typeof b&&"string"===c?b?this.addClass(a):this.removeClass(a):r.isFunction(a)?this.each(function(c){r(this).toggleClass(a.call(this,c,qb(this),b),b)}):this.each(function(){var b,d,e,f;if("string"===c){d=0,e=r(this),f=a.match(L)||[];while(b=f[d++])e.hasClass(b)?e.removeClass(b):e.addClass(b)}else void 0!==a&&"boolean"!==c||(b=qb(this),b&&W.set(this,"__className__",b),this.setAttribute&&this.setAttribute("class",b||a===!1?"":W.get(this,"__className__")||""))})},hasClass:function(a){var b,c,d=0;b=" "+a+" ";while(c=this[d++])if(1===c.nodeType&&(" "+pb(qb(c))+" ").indexOf(b)>-1)return!0;return!1}});var rb=/\r/g;r.fn.extend({val:function(a){var b,c,d,e=this[0];{if(arguments.length)return d=r.isFunction(a),this.each(function(c){var e;1===this.nodeType&&(e=d?a.call(this,c,r(this).val()):a,null==e?e="":"number"==typeof e?e+="":Array.isArray(e)&&(e=r.map(e,function(a){
return null==a?"":a+""})),b=r.valHooks[this.type]||r.valHooks[this.nodeName.toLowerCase()],b&&"set"in b&&void 0!==b.set(this,e,"value")||(this.value=e))});if(e)return b=r.valHooks[e.type]||r.valHooks[e.nodeName.toLowerCase()],b&&"get"in b&&void 0!==(c=b.get(e,"value"))?c:(c=e.value,"string"==typeof c?c.replace(rb,""):null==c?"":c)}}}),r.extend({valHooks:{option:{get:function(a){var b=r.find.attr(a,"value");return null!=b?b:pb(r.text(a))}},select:{get:function(a){var b,c,d,e=a.options,f=a.selectedIndex,g="select-one"===a.type,h=g?null:[],i=g?f+1:e.length;for(d=f<0?i:g?f:0;d<i;d++)if(c=e[d],(c.selected||d===f)&&!c.disabled&&(!c.parentNode.disabled||!B(c.parentNode,"optgroup"))){if(b=r(c).val(),g)return b;h.push(b)}return h},set:function(a,b){var c,d,e=a.options,f=r.makeArray(b),g=e.length;while(g--)d=e[g],(d.selected=r.inArray(r.valHooks.option.get(d),f)>-1)&&(c=!0);return c||(a.selectedIndex=-1),f}}}}),r.each(["radio","checkbox"],function(){r.valHooks[this]={set:function(a,b){if(Array.isArray(b))return a.checked=r.inArray(r(a).val(),b)>-1}},o.checkOn||(r.valHooks[this].get=function(a){return null===a.getAttribute("value")?"on":a.value})});var sb=/^(?:focusinfocus|focusoutblur)$/;r.extend(r.event,{trigger:function(b,c,e,f){var g,h,i,j,k,m,n,o=[e||d],p=l.call(b,"type")?b.type:b,q=l.call(b,"namespace")?b.namespace.split("."):[];if(h=i=e=e||d,3!==e.nodeType&&8!==e.nodeType&&!sb.test(p+r.event.triggered)&&(p.indexOf(".")>-1&&(q=p.split("."),p=q.shift(),q.sort()),k=p.indexOf(":")<0&&"on"+p,b=b[r.expando]?b:new r.Event(p,"object"==typeof b&&b),b.isTrigger=f?2:3,b.namespace=q.join("."),b.rnamespace=b.namespace?new RegExp("(^|\\.)"+q.join("\\.(?:.*\\.|)")+"(\\.|$)"):null,b.result=void 0,b.target||(b.target=e),c=null==c?[b]:r.makeArray(c,[b]),n=r.event.special[p]||{},f||!n.trigger||n.trigger.apply(e,c)!==!1)){if(!f&&!n.noBubble&&!r.isWindow(e)){for(j=n.delegateType||p,sb.test(j+p)||(h=h.parentNode);h;h=h.parentNode)o.push(h),i=h;i===(e.ownerDocument||d)&&o.push(i.defaultView||i.parentWindow||a)}g=0;while((h=o[g++])&&!b.isPropagationStopped())b.type=g>1?j:n.bindType||p,m=(W.get(h,"events")||{})[b.type]&&W.get(h,"handle"),m&&m.apply(h,c),m=k&&h[k],m&&m.apply&&U(h)&&(b.result=m.apply(h,c),b.result===!1&&b.preventDefault());return b.type=p,f||b.isDefaultPrevented()||n._default&&n._default.apply(o.pop(),c)!==!1||!U(e)||k&&r.isFunction(e[p])&&!r.isWindow(e)&&(i=e[k],i&&(e[k]=null),r.event.triggered=p,e[p](),r.event.triggered=void 0,i&&(e[k]=i)),b.result}},simulate:function(a,b,c){var d=r.extend(new r.Event,c,{type:a,isSimulated:!0});r.event.trigger(d,null,b)}}),r.fn.extend({trigger:function(a,b){return this.each(function(){r.event.trigger(a,b,this)})},triggerHandler:function(a,b){var c=this[0];if(c)return r.event.trigger(a,b,c,!0)}}),r.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(a,b){r.fn[b]=function(a,c){return arguments.length>0?this.on(b,null,a,c):this.trigger(b)}}),r.fn.extend({hover:function(a,b){return this.mouseenter(a).mouseleave(b||a)}}),o.focusin="onfocusin"in a,o.focusin||r.each({focus:"focusin",blur:"focusout"},function(a,b){var c=function(a){r.event.simulate(b,a.target,r.event.fix(a))};r.event.special[b]={setup:function(){var d=this.ownerDocument||this,e=W.access(d,b);e||d.addEventListener(a,c,!0),W.access(d,b,(e||0)+1)},teardown:function(){var
d=this.ownerDocument||this,e=W.access(d,b)-1;e?W.access(d,b,e):(d.removeEventListener(a,c,!0),W.remove(d,b))}}});var tb=a.location,ub=r.now(),vb=/\?/;r.parseXML=function(b){var c;if(!b||"string"!=typeof b)return null;try{c=(new a.DOMParser).parseFromString(b,"text/xml")}catch(d){c=void 0}return c&&!c.getElementsByTagName("parsererror").length||r.error("Invalid XML: "+b),c};var wb=/\[\]$/,xb=/\r?\n/g,yb=/^(?:submit|button|image|reset|file)$/i,zb=/^(?:input|select|textarea|keygen)/i;function Ab(a,b,c,d){var e;if(Array.isArray(b))r.each(b,function(b,e){c||wb.test(a)?d(a,e):Ab(a+"["+("object"==typeof e&&null!=e?b:"")+"]",e,c,d)});else if(c||"object"!==r.type(b))d(a,b);else for(e in b)Ab(a+"["+e+"]",b[e],c,d)}r.param=function(a,b){var c,d=[],e=function(a,b){var c=r.isFunction(b)?b():b;d[d.length]=encodeURIComponent(a)+"="+encodeURIComponent(null==c?"":c)};if(Array.isArray(a)||a.jquery&&!r.isPlainObject(a))r.each(a,function(){e(this.name,this.value)});else for(c in a)Ab(c,a[c],b,e);return d.join("&")},r.fn.extend({serialize:function(){return r.param(this.serializeArray())},serializeArray:function(){return this.map(function(){var a=r.prop(this,"elements");return a?r.makeArray(a):this}).filter(function(){var a=this.type;return this.name&&!r(this).is(":disabled")&&zb.test(this.nodeName)&&!yb.test(a)&&(this.checked||!ja.test(a))}).map(function(a,b){var c=r(this).val();return null==c?null:Array.isArray(c)?r.map(c,function(a){return{name:b.name,value:a.replace(xb,"\r\n")}}):{name:b.name,value:c.replace(xb,"\r\n")}}).get()}});var Bb=/%20/g,Cb=/#.*$/,Db=/([?&])_=[^&]*/,Eb=/^(.*?):[ \t]*([^\r\n]*)$/gm,Fb=/^(?:about|app|app-storage|.+-extension|file|res|widget):$/,Gb=/^(?:GET|HEAD)$/,Hb=/^\/\//,Ib={},Jb={},Kb="*/".concat("*"),Lb=d.createElement("a");Lb.href=tb.href;function Mb(a){return function(b,c){"string"!=typeof b&&(c=b,b="*");var d,e=0,f=b.toLowerCase().match(L)||[];if(r.isFunction(c))while(d=f[e++])"+"===d[0]?(d=d.slice(1)||"*",(a[d]=a[d]||[]).unshift(c)):(a[d]=a[d]||[]).push(c)}}function Nb(a,b,c,d){var e={},f=a===Jb;function g(h){var i;return e[h]=!0,r.each(a[h]||[],function(a,h){var j=h(b,c,d);return"string"!=typeof j||f||e[j]?f?!(i=j):void 0:(b.dataTypes.unshift(j),g(j),!1)}),i}return g(b.dataTypes[0])||!e["*"]&&g("*")}function Ob(a,b){var c,d,e=r.ajaxSettings.flatOptions||{};for(c in b)void 0!==b[c]&&((e[c]?a:d||(d={}))[c]=b[c]);return d&&r.extend(!0,a,d),a}function Pb(a,b,c){var d,e,f,g,h=a.contents,i=a.dataTypes;while("*"===i[0])i.shift(),void 0===d&&(d=a.mimeType||b.getResponseHeader("Content-Type"));if(d)for(e in h)if(h[e]&&h[e].test(d)){i.unshift(e);break}if(i[0]in c)f=i[0];else{for(e in c){if(!i[0]||a.converters[e+" "+i[0]]){f=e;break}g||(g=e)}f=f||g}if(f)return f!==i[0]&&i.unshift(f),c[f]}function Qb(a,b,c,d){var e,f,g,h,i,j={},k=a.dataTypes.slice();if(k[1])for(g in a.converters)j[g.toLowerCase()]=a.converters[g];f=k.shift();while(f)if(a.responseFields[f]&&(c[a.responseFields[f]]=b),!i&&d&&a.dataFilter&&(b=a.dataFilter(b,a.dataType)),i=f,f=k.shift())if("*"===f)f=i;else if("*"!==i&&i!==f){if(g=j[i+" "+f]||j["* "+f],!g)for(e in j)if(h=e.split(" "),h[1]===f&&(g=j[i+" "+h[0]]||j["* "+h[0]])){g===!0?g=j[e]:j[e]!==!0&&(f=h[0],k.unshift(h[1]));break}if(g!==!0)if(g&&a["throws"])b=g(b);else try{b=g(b)}catch(l){
return{state:"parsererror",error:g?l:"No conversion from "+i+" to "+f}}}return{state:"success",data:b}}r.extend({active:0,lastModified:{},etag:{},ajaxSettings:{url:tb.href,type:"GET",isLocal:Fb.test(tb.protocol),global:!0,processData:!0,async:!0,contentType:"application/x-www-form-urlencoded; charset=UTF-8",accepts:{"*":Kb,text:"text/plain",html:"text/html",xml:"application/xml, text/xml",json:"application/json, text/javascript"},contents:{xml:/\bxml\b/,html:/\bhtml/,json:/\bjson\b/},responseFields:{xml:"responseXML",text:"responseText",json:"responseJSON"},converters:{"* text":String,"text html":!0,"text json":JSON.parse,"text xml":r.parseXML},flatOptions:{url:!0,context:!0}},ajaxSetup:function(a,b){return b?Ob(Ob(a,r.ajaxSettings),b):Ob(r.ajaxSettings,a)},ajaxPrefilter:Mb(Ib),ajaxTransport:Mb(Jb),ajax:function(b,c){"object"==typeof b&&(c=b,b=void 0),c=c||{};var e,f,g,h,i,j,k,l,m,n,o=r.ajaxSetup({},c),p=o.context||o,q=o.context&&(p.nodeType||p.jquery)?r(p):r.event,s=r.Deferred(),t=r.Callbacks("once memory"),u=o.statusCode||{},v={},w={},x="canceled",y={readyState:0,getResponseHeader:function(a){var b;if(k){if(!h){h={};while(b=Eb.exec(g))h[b[1].toLowerCase()]=b[2]}b=h[a.toLowerCase()]}return null==b?null:b},getAllResponseHeaders:function(){return k?g:null},setRequestHeader:function(a,b){return null==k&&(a=w[a.toLowerCase()]=w[a.toLowerCase()]||a,v[a]=b),this},overrideMimeType:function(a){return null==k&&(o.mimeType=a),this},statusCode:function(a){var b;if(a)if(k)y.always(a[y.status]);else for(b in a)u[b]=[u[b],a[b]];return this},abort:function(a){var b=a||x;return e&&e.abort(b),A(0,b),this}};if(s.promise(y),o.url=((b||o.url||tb.href)+"").replace(Hb,tb.protocol+"//"),o.type=c.method||c.type||o.method||o.type,o.dataTypes=(o.dataType||"*").toLowerCase().match(L)||[""],null==o.crossDomain){j=d.createElement("a");try{j.href=o.url,j.href=j.href,o.crossDomain=Lb.protocol+"//"+Lb.host!=j.protocol+"//"+j.host}catch(z){o.crossDomain=!0}}if(o.data&&o.processData&&"string"!=typeof o.data&&(o.data=r.param(o.data,o.traditional)),Nb(Ib,o,c,y),k)return y;l=r.event&&o.global,l&&0===r.active++&&r.event.trigger("ajaxStart"),o.type=o.type.toUpperCase(),o.hasContent=!Gb.test(o.type),f=o.url.replace(Cb,""),o.hasContent?o.data&&o.processData&&0===(o.contentType||"").indexOf("application/x-www-form-urlencoded")&&(o.data=o.data.replace(Bb,"+")):(n=o.url.slice(f.length),o.data&&(f+=(vb.test(f)?"&":"?")+o.data,delete o.data),o.cache===!1&&(f=f.replace(Db,"$1"),n=(vb.test(f)?"&":"?")+"_="+ub++ +n),o.url=f+n),o.ifModified&&(r.lastModified[f]&&y.setRequestHeader("If-Modified-Since",r.lastModified[f]),r.etag[f]&&y.setRequestHeader("If-None-Match",r.etag[f])),(o.data&&o.hasContent&&o.contentType!==!1||c.contentType)&&y.setRequestHeader("Content-Type",o.contentType),y.setRequestHeader("Accept",o.dataTypes[0]&&o.accepts[o.dataTypes[0]]?o.accepts[o.dataTypes[0]]+("*"!==o.dataTypes[0]?", "+Kb+"; q=0.01":""):o.accepts["*"]);for(m in o.headers)y.setRequestHeader(m,o.headers[m]);if(o.beforeSend&&(o.beforeSend.call(p,y,o)===!1||k))return y.abort();if(x="abort",t.add(o.complete),y.done(o.success),y.fail(o.error),e=Nb(Jb,o,c,y)){if(y.readyState=1,l&&q.trigger("ajaxSend",[y,o]),k)return y;o.async&&o.timeout>0&&(i=a.setTimeout(function(){y.abort("timeout")},o.timeout));try{k=!1,e.send(v,A)}catch(z){if(k)throw z;A(-1,z)}}else A(-1,"No Transport");function A(b,c,d,h){var j,m,n,v,w,x=c;k||(k=!0,i&&a.clearTimeout(i),e=void
0,g=h||"",y.readyState=b>0?4:0,j=b>=200&&b<300||304===b,d&&(v=Pb(o,y,d)),v=Qb(o,v,y,j),j?(o.ifModified&&(w=y.getResponseHeader("Last-Modified"),w&&(r.lastModified[f]=w),w=y.getResponseHeader("etag"),w&&(r.etag[f]=w)),204===b||"HEAD"===o.type?x="nocontent":304===b?x="notmodified":(x=v.state,m=v.data,n=v.error,j=!n)):(n=x,!b&&x||(x="error",b<0&&(b=0))),y.status=b,y.statusText=(c||x)+"",j?s.resolveWith(p,[m,x,y]):s.rejectWith(p,[y,x,n]),y.statusCode(u),u=void 0,l&&q.trigger(j?"ajaxSuccess":"ajaxError",[y,o,j?m:n]),t.fireWith(p,[y,x]),l&&(q.trigger("ajaxComplete",[y,o]),--r.active||r.event.trigger("ajaxStop")))}return y},getJSON:function(a,b,c){return r.get(a,b,c,"json")},getScript:function(a,b){return r.get(a,void 0,b,"script")}}),r.each(["get","post"],function(a,b){r[b]=function(a,c,d,e){return r.isFunction(c)&&(e=e||d,d=c,c=void 0),r.ajax(r.extend({url:a,type:b,dataType:e,data:c,success:d},r.isPlainObject(a)&&a))}}),r._evalUrl=function(a){return r.ajax({url:a,type:"GET",dataType:"script",cache:!0,async:!1,global:!1,"throws":!0})},r.fn.extend({wrapAll:function(a){var b;return this[0]&&(r.isFunction(a)&&(a=a.call(this[0])),b=r(a,this[0].ownerDocument).eq(0).clone(!0),this[0].parentNode&&b.insertBefore(this[0]),b.map(function(){var a=this;while(a.firstElementChild)a=a.firstElementChild;return a}).append(this)),this},wrapInner:function(a){return r.isFunction(a)?this.each(function(b){r(this).wrapInner(a.call(this,b))}):this.each(function(){var b=r(this),c=b.contents();c.length?c.wrapAll(a):b.append(a)})},wrap:function(a){var b=r.isFunction(a);return this.each(function(c){r(this).wrapAll(b?a.call(this,c):a)})},unwrap:function(a){return this.parent(a).not("body").each(function(){r(this).replaceWith(this.childNodes)}),this}}),r.expr.pseudos.hidden=function(a){return!r.expr.pseudos.visible(a)},r.expr.pseudos.visible=function(a){return!!(a.offsetWidth||a.offsetHeight||a.getClientRects().length)},r.ajaxSettings.xhr=function(){try{return new a.XMLHttpRequest}catch(b){}};var Rb={0:200,1223:204},Sb=r.ajaxSettings.xhr();o.cors=!!Sb&&"withCredentials"in Sb,o.ajax=Sb=!!Sb,r.ajaxTransport(function(b){var c,d;if(o.cors||Sb&&!b.crossDomain)return{send:function(e,f){var g,h=b.xhr();if(h.open(b.type,b.url,b.async,b.username,b.password),b.xhrFields)for(g in b.xhrFields)h[g]=b.xhrFields[g];b.mimeType&&h.overrideMimeType&&h.overrideMimeType(b.mimeType),b.crossDomain||e["X-Requested-With"]||(e["X-Requested-With"]="XMLHttpRequest");for(g in e)h.setRequestHeader(g,e[g]);c=function(a){return function(){c&&(c=d=h.onload=h.onerror=h.onabort=h.onreadystatechange=null,"abort"===a?h.abort():"error"===a?"number"!=typeof h.status?f(0,"error"):f(h.status,h.statusText):f(Rb[h.status]||h.status,h.statusText,"text"!==(h.responseType||"text")||"string"!=typeof h.responseText?{binary:h.response}:{text:h.responseText},h.getAllResponseHeaders()))}},h.onload=c(),d=h.onerror=c("error"),void 0!==h.onabort?h.onabort=d:h.onreadystatechange=function(){4===h.readyState&&a.setTimeout(function(){c&&d()})},c=c("abort");try{h.send(b.hasContent&&b.data||null)}catch(i){if(c)throw i}},abort:function(){c&&c()}}}),r.ajaxPrefilter(function(a){a.crossDomain&&(a.contents.script=!1)}),r.ajaxSetup({accepts:{script:"text/javascript, application/javascript, application/ecmascript, application/x-ecmascript"},contents:{script:/\b(?:java|ecma)script\b/},converters:{"text script":function(a){return r.globalEval(a),a}}}),r.ajaxPrefilter("script",function(a){void 
0===a.cache&&(a.cache=!1),a.crossDomain&&(a.type="GET")}),r.ajaxTransport("script",function(a){if(a.crossDomain){var b,c;return{send:function(e,f){b=r("<script>").prop({charset:a.scriptCharset,src:a.url}).on("load error",c=function(a){b.remove(),c=null,a&&f("error"===a.type?404:200,a.type)}),d.head.appendChild(b[0])},abort:function(){c&&c()}}}});var Tb=[],Ub=/(=)\?(?=&|$)|\?\?/;r.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var a=Tb.pop()||r.expando+"_"+ub++;return this[a]=!0,a}}),r.ajaxPrefilter("json jsonp",function(b,c,d){var e,f,g,h=b.jsonp!==!1&&(Ub.test(b.url)?"url":"string"==typeof b.data&&0===(b.contentType||"").indexOf("application/x-www-form-urlencoded")&&Ub.test(b.data)&&"data");if(h||"jsonp"===b.dataTypes[0])return e=b.jsonpCallback=r.isFunction(b.jsonpCallback)?b.jsonpCallback():b.jsonpCallback,h?b[h]=b[h].replace(Ub,"$1"+e):b.jsonp!==!1&&(b.url+=(vb.test(b.url)?"&":"?")+b.jsonp+"="+e),b.converters["script json"]=function(){return g||r.error(e+" was not called"),g[0]},b.dataTypes[0]="json",f=a[e],a[e]=function(){g=arguments},d.always(function(){void 0===f?r(a).removeProp(e):a[e]=f,b[e]&&(b.jsonpCallback=c.jsonpCallback,Tb.push(e)),g&&r.isFunction(f)&&f(g[0]),g=f=void 0}),"script"}),o.createHTMLDocument=function(){var a=d.implementation.createHTMLDocument("").body;return a.innerHTML="<form></form><form></form>",2===a.childNodes.length}(),r.parseHTML=function(a,b,c){if("string"!=typeof a)return[];"boolean"==typeof b&&(c=b,b=!1);var e,f,g;return b||(o.createHTMLDocument?(b=d.implementation.createHTMLDocument(""),e=b.createElement("base"),e.href=d.location.href,b.head.appendChild(e)):b=d),f=C.exec(a),g=!c&&[],f?[b.createElement(f[1])]:(f=qa([a],b,g),g&&g.length&&r(g).remove(),r.merge([],f.childNodes))},r.fn.load=function(a,b,c){var d,e,f,g=this,h=a.indexOf(" ");return h>-1&&(d=pb(a.slice(h)),a=a.slice(0,h)),r.isFunction(b)?(c=b,b=void 0):b&&"object"==typeof b&&(e="POST"),g.length>0&&r.ajax({url:a,type:e||"GET",dataType:"html",data:b}).done(function(a){f=arguments,g.html(d?r("<div>").append(r.parseHTML(a)).find(d):a)}).always(c&&function(a,b){g.each(function(){c.apply(this,f||[a.responseText,b,a])})}),this},r.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(a,b){r.fn[b]=function(a){return this.on(b,a)}}),r.expr.pseudos.animated=function(a){return r.grep(r.timers,function(b){return a===b.elem}).length},r.offset={setOffset:function(a,b,c){var d,e,f,g,h,i,j,k=r.css(a,"position"),l=r(a),m={};"static"===k&&(a.style.position="relative"),h=l.offset(),f=r.css(a,"top"),i=r.css(a,"left"),j=("absolute"===k||"fixed"===k)&&(f+i).indexOf("auto")>-1,j?(d=l.position(),g=d.top,e=d.left):(g=parseFloat(f)||0,e=parseFloat(i)||0),r.isFunction(b)&&(b=b.call(a,c,r.extend({},h))),null!=b.top&&(m.top=b.top-h.top+g),null!=b.left&&(m.left=b.left-h.left+e),"using"in b?b.using.call(a,m):l.css(m)}},r.fn.extend({offset:function(a){if(arguments.length)return void 0===a?this:this.each(function(b){r.offset.setOffset(this,a,b)});var b,c,d,e,f=this[0];if(f)return f.getClientRects().length?(d=f.getBoundingClientRect(),b=f.ownerDocument,c=b.documentElement,e=b.defaultView,{top:d.top+e.pageYOffset-c.clientTop,left:d.left+e.pageXOffset-c.clientLeft}):{top:0,left:0}},position:function(){if(this[0]){var 
a,b,c=this[0],d={top:0,left:0};return"fixed"===r.css(c,"position")?b=c.getBoundingClientRect():(a=this.offsetParent(),b=this.offset(),B(a[0],"html")||(d=a.offset()),d={top:d.top+r.css(a[0],"borderTopWidth",!0),left:d.left+r.css(a[0],"borderLeftWidth",!0)}),{top:b.top-d.top-r.css(c,"marginTop",!0),left:b.left-d.left-r.css(c,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var a=this.offsetParent;while(a&&"static"===r.css(a,"position"))a=a.offsetParent;return a||ra})}}),r.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(a,b){var c="pageYOffset"===b;r.fn[a]=function(d){return T(this,function(a,d,e){var f;return r.isWindow(a)?f=a:9===a.nodeType&&(f=a.defaultView),void 0===e?f?f[b]:a[d]:void(f?f.scrollTo(c?f.pageXOffset:e,c?e:f.pageYOffset):a[d]=e)},a,d,arguments.length)}}),r.each(["top","left"],function(a,b){r.cssHooks[b]=Pa(o.pixelPosition,function(a,c){if(c)return c=Oa(a,b),Ma.test(c)?r(a).position()[b]+"px":c})}),r.each({Height:"height",Width:"width"},function(a,b){r.each({padding:"inner"+a,content:b,"":"outer"+a},function(c,d){r.fn[d]=function(e,f){var g=arguments.length&&(c||"boolean"!=typeof e),h=c||(e===!0||f===!0?"margin":"border");return T(this,function(b,c,e){var f;return r.isWindow(b)?0===d.indexOf("outer")?b["inner"+a]:b.document.documentElement["client"+a]:9===b.nodeType?(f=b.documentElement,Math.max(b.body["scroll"+a],f["scroll"+a],b.body["offset"+a],f["offset"+a],f["client"+a])):void 0===e?r.css(b,c,h):r.style(b,c,e,h)},b,g?e:void 0,g)}})}),r.fn.extend({bind:function(a,b,c){return this.on(a,null,b,c)},unbind:function(a,b){return this.off(a,null,b)},delegate:function(a,b,c,d){return this.on(b,a,c,d)},undelegate:function(a,b,c){return 1===arguments.length?this.off(a,"**"):this.off(b,a||"**",c)}}),r.holdReady=function(a){a?r.readyWait++:r.ready(!0)},r.isArray=Array.isArray,r.parseJSON=JSON.parse,r.nodeName=B,"function"==typeof define&&define.amd&&define("jquery",[],function(){return r});var Vb=a.jQuery,Wb=a.$;return r.noConflict=function(b){return a.$===r&&(a.$=Wb),b&&a.jQuery===r&&(a.jQuery=Vb),r},b||(a.jQuery=a.$=r),r});
/*
Audio recording
https://github.com/xiangyuecn/Recorder
src: engine/mp3.js,engine/mp3-engine.js
*/
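/*
The minified engine below encodes PCM to MP3 with lamejs: Recorder.prototype.mp3 encodes
inline in 57600-sample chunks, while mp3_start/mp3_encode/mp3_complete run the same lamejs
encoder inside a Web Worker built from a Blob URL. mp3ReadMeta parses MP3 frame headers so
wk_mp3TrimFix can drop the extra padding frames the encoder prepends, by comparing the
decoded duration against the PCM duration. A minimal usage sketch, assuming the Recorder
core API (the type/sampleRate/bitRate options and open/start/stop) documented in the
project linked above; that API is not defined in this engine file:

  var rec = Recorder({type: "mp3", sampleRate: 16000, bitRate: 16});
  rec.open(function() { rec.start(); },            // microphone granted: start recording
           function(msg) { console.error(msg); }); // open failed (no permission/device)
  // ...later: stop() drives the engine below and yields an audio/mp3 Blob
  rec.stop(function(blob, duration) { console.log(blob, duration + "ms"); },
           function(msg) { console.error(msg); });
*/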
!function(){"use strict";var i;Recorder.prototype.enc_mp3={stable:!0,testmsg:"采样率范围48000, 44100, 32000, 24000, 22050, 16000, 12000, 11025, 8000"},Recorder.prototype.mp3=function(a,s,e){var t=this,n=t.set,r=a.length,i=t.mp3_start(n);if(i)return t.mp3_encode(i,a),void t.mp3_complete(i,s,e,1);var _=new Recorder.lamejs.Mp3Encoder(1,n.sampleRate,n.bitRate),o=[],l=0,f=0,c=function(){if(l<r){0<(e=_.encodeBuffer(a.subarray(l,l+57600))).length&&(f+=e.buffer.byteLength,o.push(e.buffer)),l+=57600,setTimeout(c)}else{var e;0<(e=_.flush()).length&&(f+=e.buffer.byteLength,o.push(e.buffer));var t=h.fn(o,f,r,n.sampleRate);u(t,n),s(new Blob(o,{type:"audio/mp3"}))}};c()},Recorder.BindDestroy("mp3Worker",function(){console.log("mp3Worker Destroy"),i&&i.terminate(),i=null}),Recorder.prototype.mp3_envCheck=function(e,t){var a="";return t.takeoffEncodeChunk&&(e.canProcess?s()||(a="当前浏览器版本太低,无法实时处理"):a=e.envName+"环境不支持实时处理"),a},Recorder.prototype.mp3_start=function(e){return s(e)};var _={id:0},s=function(e){var t=i;try{if(!t){var a=");wk_lame();var wk_ctxs={};self.onmessage="+function(e){var t=e.data,a=wk_ctxs[t.id];if("init"==t.action)wk_ctxs[t.id]={sampleRate:t.sampleRate,bitRate:t.bitRate,takeoff:t.takeoff,mp3Size:0,pcmSize:0,encArr:[],encObj:new wk_lame.Mp3Encoder(1,t.sampleRate,t.bitRate)};else if(!a)return;switch(t.action){case"stop":a.encObj=null,delete wk_ctxs[t.id];break;case"encode":a.pcmSize+=t.pcm.length,0<(s=a.encObj.encodeBuffer(t.pcm)).length&&(a.takeoff?self.postMessage({action:"takeoff",id:t.id,chunk:s}):(a.mp3Size+=s.buffer.byteLength,a.encArr.push(s.buffer)));break;case"complete":var s;0<(s=a.encObj.flush()).length&&(a.takeoff?self.postMessage({action:"takeoff",id:t.id,chunk:s}):(a.mp3Size+=s.buffer.byteLength,a.encArr.push(s.buffer)));var n=wk_mp3TrimFix.fn(a.encArr,a.mp3Size,a.pcmSize,a.sampleRate);self.postMessage({action:t.action,id:t.id,blob:new Blob(a.encArr,{type:"audio/mp3"}),meta:n})}};a+=";var wk_mp3TrimFix={rm:"+h.rm+",fn:"+h.fn+"}";var s=Recorder.lamejs.toString(),n=(window.URL||webkitURL).createObjectURL(new Blob(["var wk_lame=(",s,a],{type:"text/javascript"}));t=new Worker(n),setTimeout(function(){(window.URL||webkitURL).revokeObjectURL(n)},1e4),t.onmessage=function(e){var t=e.data,a=_[t.id];a&&("takeoff"==t.action?a.set.takeoffEncodeChunk(new Uint8Array(t.chunk.buffer)):(a.call&&a.call(t),a.call=null))}}var r={worker:t,set:e,takeoffQueue:[]};return e?(r.id=++_.id,_[r.id]=r,t.postMessage({action:"init",id:r.id,sampleRate:e.sampleRate,bitRate:e.bitRate,takeoff:!!e.takeoffEncodeChunk,x:new Int16Array(5)})):t.postMessage({x:new Int16Array(5)}),i=t,r}catch(e){return t&&t.terminate(),console.error(e),null}};Recorder.prototype.mp3_stop=function(e){if(e&&e.worker){e.worker.postMessage({action:"stop",id:e.id}),e.worker=null,delete _[e.id];var t=-1;for(var a in _)t++;t&&console.warn("mp3 worker剩"+t+"个在串行等待")}},Recorder.prototype.mp3_encode=function(e,t){e&&e.worker&&e.worker.postMessage({action:"encode",id:e.id,pcm:t})},Recorder.prototype.mp3_complete=function(t,a,e,s){var n=this;t&&t.worker?(t.call=function(e){u(e.meta,t.set),a(e.blob),s&&n.mp3_stop(t)},t.worker.postMessage({action:"complete",id:t.id})):e("mp3编码器未打开")},Recorder.mp3ReadMeta=function(e,t){var a="object"==typeof window?window.parseInt:self.parseInt,s=new Uint8Array(e[0]||[]);if(s.length<4)return null;var n=function(e,t){return("0000000"+((t||s)[e]||0).toString(2)).substr(-8)},r=n(0)+n(1),i=n(2)+n(3);if(!/^1{11}/.test(r))return null;var 
_={"00":2.5,10:2,11:1}[r.substr(11,2)],o={"01":3}[r.substr(13,2)],l={1:[44100,48e3,32e3],2:[22050,24e3,16e3],2.5:[11025,12e3,8e3]}[_];l&&(l=l[a(i.substr(4,2),2)]);var f=[[0,8,16,24,32,40,48,56,64,80,96,112,128,144,160],[0,32,40,48,56,64,80,96,112,128,160,192,224,256,320]][1==_?1:0][a(i.substr(0,4),2)];if(!(_&&o&&f&&l))return null;for(var c=Math.round(8*t/f),h=1==o?384:2==o?1152:1==_?1152:576,u=h/l*1e3,b=Math.floor(h*f/8/l*1e3),m=0,p=0,v=0;v<e.length;v++){var d=e[v];if(b+3<=(p+=d.byteLength)){var g=new Uint8Array(d);m="1"==n(d.byteLength-(p-(b+3)+1),g).charAt(6);break}}return m&&b++,{version:_,layer:o,sampleRate:l,bitRate:f,duration:c,size:t,hasPadding:m,frameSize:b,frameDurationFloat:u}};var h={rm:Recorder.mp3ReadMeta,fn:function(e,t,a,s){var n=this.rm(e,t);if(!n)return{err:"mp3非预定格式"};var r=Math.round(a/s*1e3),i=Math.floor((n.duration-r)/n.frameDurationFloat);if(0<i){var _=i*n.frameSize-(n.hasPadding?1:0);t-=_;for(var o=0,l=[],f=0;f<e.length;f++){var c=e[f];if(_<=0)break;_>=c.byteLength?(_-=c.byteLength,l.push(c),e.splice(f,1),f--):(e[f]=c.slice(_),o=c,_=0)}if(!this.rm(e,t)){o&&(e[0]=o);for(f=0;f<l.length;f++)e.splice(f,0,l[f]);n.err="fix后数据错误,已还原,错误原因不明"}var h=n.trimFix={};h.remove=i,h.removeDuration=Math.round(i*n.frameDurationFloat),h.duration=Math.round(8*t/n.bitRate)}return n}},u=function(e,t){var a="MP3信息 ";(e.sampleRate&&e.sampleRate!=t.sampleRate||e.bitRate&&e.bitRate!=t.bitRate)&&(console.warn(a+"和设置的不匹配set:"+t.bitRate+"kbps "+t.sampleRate+"hz,已更新set:"+e.bitRate+"kbps "+e.sampleRate+"hz",t),t.sampleRate=e.sampleRate,t.bitRate=e.bitRate);var s=e.trimFix;s?(a+="Fix移除"+s.remove+""+s.removeDuration+"ms -> "+s.duration+"ms",2<s.remove&&(e.err=(e.err?e.err+", ":"")+"移除帧数过多")):a+=(e.duration||"-")+"ms",e.err?console.error(a,e.err,e):console.log(a,e)}}(),function(){"use strict";function t(){var A=function(e){return Math.log(e)/Math.log(10)};function B(e){return new Int8Array(e)}function n(e){return new Int16Array(e)}function Be(e){return new Int32Array(e)}function Ae(e){return new Float32Array(e)}function s(e){return new Float64Array(e)}function ke(e){if(1==e.length)return Ae(e[0]);var t=e[0];e=e.slice(1);for(var a=[],s=0;s<t;s++)a.push(ke(e));return a}function X(e){if(1==e.length)return Be(e[0]);var t=e[0];e=e.slice(1);for(var a=[],s=0;s<t;s++)a.push(X(e));return a}function m(e){if(1==e.length)return n(e[0]);var t=e[0];e=e.slice(1);for(var a=[],s=0;s<t;s++)a.push(m(e));return a}function O(e){if(1==e.length)return new Array(e[0]);var t=e[0];e=e.slice(1);for(var a=[],s=0;s<t;s++)a.push(O(e));return a}var Te={fill:function(e,t,a,s){if(2==arguments.length)for(var n=0;n<e.length;n++)e[n]=t;else for(n=t;n<a;n++)e[n]=s}},$={arraycopy:function(e,t,a,s,n){for(var r=t+n;t<r;)a[s++]=e[t++]}},ee={};function xe(e){this.ordinal=e}ee.SQRT2=1.4142135623730951,ee.FAST_LOG10=function(e){return A(e)},ee.FAST_LOG10_X=function(e,t){return A(e)*t},xe.short_block_allowed=new xe(0),xe.short_block_coupled=new xe(1),xe.short_block_dispensed=new xe(2),xe.short_block_forced=new xe(3);var K={};function ye(e){this.ordinal=e}K.MAX_VALUE=3.4028235e38,ye.vbr_off=new ye(0),ye.vbr_mt=new ye(1),ye.vbr_rh=new ye(2),ye.vbr_abr=new ye(3),ye.vbr_mtrh=new ye(4),ye.vbr_default=ye.vbr_mtrh;function Ee(e){var t=e;this.ordinal=function(){return t}}function k(){var M=null;function v(e){this.bits=0|e}this.qupvt=null,this.setModules=function(e){this.qupvt=e,M=e};var n=[[0,0],[0,0],[0,0],[0,0],[0,0],[0,1],[1,1],[1,1],[1,2],[2,2],[2,3],[2,3],[3,4],[3,4],[3,4],[4,5],[4,5],[4,6],[5,6],[5,6],[5,7],[6,7],[6,7]];function w(e,t,a,s,n,r){var 
i=.5946/t;for(e>>=1;0!=e--;)n[r++]=i>a[s++]?0:1,n[r++]=i>a[s++]?0:1}function R(e,t,a,s,n,r){var i=(e>>=1)%2;for(e>>=1;0!=e--;){var _,o,l,f,c,h,u,b;_=a[s++]*t,o=a[s++]*t,c=0|_,l=a[s++]*t,h=0|o,f=a[s++]*t,u=0|l,_+=M.adj43[c],b=0|f,o+=M.adj43[h],n[r++]=0|_,l+=M.adj43[u],n[r++]=0|o,f+=M.adj43[b],n[r++]=0|l,n[r++]=0|f}0!=i&&(c=0|(_=a[s++]*t),h=0|(o=a[s++]*t),_+=M.adj43[c],o+=M.adj43[h],n[r++]=0|_,n[r++]=0|o)}var _=[1,2,5,7,7,10,10,13,13,13,13,13,13,13,13];function d(e,t,a,s){var n=function(e,t,a){var s=0,n=0;do{var r=e[t++],i=e[t++];s<r&&(s=r),n<i&&(n=i)}while(t<a);return s<n&&(s=n),s}(e,t,a);switch(n){case 0:return n;case 1:return function(e,t,a,s){var n=0,r=C.ht[1].hlen;do{var i=2*e[t+0]+e[t+1];t+=2,n+=r[i]}while(t<a);return s.bits+=n,1}(e,t,a,s);case 2:case 3:return function(e,t,a,s,n){var r,i,_=0,o=C.ht[s].xlen;i=2==s?C.table23:C.table56;do{var l=e[t+0]*o+e[t+1];t+=2,_+=i[l]}while(t<a);return(r=65535&_)<(_>>=16)&&(_=r,s++),n.bits+=_,s}(e,t,a,_[n-1],s);case 4:case 5:case 6:case 7:case 8:case 9:case 10:case 11:case 12:case 13:case 14:case 15:return function(e,t,a,s,n){var r=0,i=0,_=0,o=C.ht[s].xlen,l=C.ht[s].hlen,f=C.ht[s+1].hlen,c=C.ht[s+2].hlen;do{var h=e[t+0]*o+e[t+1];t+=2,r+=l[h],i+=f[h],_+=c[h]}while(t<a);var u=s;return i<r&&(r=i,u++),_<r&&(r=_,u=s+2),n.bits+=r,u}(e,t,a,_[n-1],s);default:if(y.IXMAX_VAL<n)return s.bits=y.LARGE_BITS,-1;var r,i;for(n-=15,r=24;r<32&&!(C.ht[r].linmax>=n);r++);for(i=r-8;i<24&&!(C.ht[i].linmax>=n);i++);return function(e,t,a,s,n,r){var i,_=65536*C.ht[s].xlen+C.ht[n].xlen,o=0;do{var l=e[t++],f=e[t++];0!=l&&(14<l&&(l=15,o+=_),l*=16),0!=f&&(14<f&&(f=15,o+=_),l+=f),o+=C.largetbl[l]}while(t<a);return(i=65535&o)<(o>>=16)&&(o=i,s=n),r.bits+=o,s}(e,t,a,i,r,s)}}function u(e,t,a,s,n,r,i,_){for(var o=t.big_values,l=2;l<Pe.SBMAX_l+1;l++){var f=e.scalefac_band.l[l];if(o<=f)break;var c=n[l-2]+t.count1bits;if(a.part2_3_length<=c)break;var h=new v(c),u=d(s,f,o,h);c=h.bits,a.part2_3_length<=c||(a.assign(t),a.part2_3_length=c,a.region0_count=r[l-2],a.region1_count=l-2-r[l-2],a.table_select[0]=i[l-2],a.table_select[1]=_[l-2],a.table_select[2]=u)}}this.noquant_count_bits=function(e,t,a){var s=t.l3_enc,n=Math.min(576,t.max_nonzero_coeff+2>>1<<1);for(null!=a&&(a.sfb_count1=0);1<n&&0==(s[n-1]|s[n-2]);n-=2);t.count1=n;for(var r=0,i=0;3<n;n-=4){var _;if(1<(2147483647&(s[n-1]|s[n-2]|s[n-3]|s[n-4])))break;_=2*(2*(2*s[n-4]+s[n-3])+s[n-2])+s[n-1],r+=C.t32l[_],i+=C.t33l[_]}var o=r;if(t.count1table_select=0,i<r&&(o=i,t.count1table_select=1),t.count1bits=o,0==(t.big_values=n))return o;if(t.block_type==Pe.SHORT_TYPE)(r=3*e.scalefac_band.s[3])>t.big_values&&(r=t.big_values),i=t.big_values;else if(t.block_type==Pe.NORM_TYPE){if(r=t.region0_count=e.bv_scf[n-2],i=t.region1_count=e.bv_scf[n-1],i=e.scalefac_band.l[r+i+2],r=e.scalefac_band.l[r+1],i<n){var l=new v(o);t.table_select[2]=d(s,i,n,l),o=l.bits}}else t.region0_count=7,t.region1_count=Pe.SBMAX_l-1-7-1,(i=n)<(r=e.scalefac_band.l[8])&&(r=i);if(r=Math.min(r,n),i=Math.min(i,n),0<r){l=new v(o);t.table_select[0]=d(s,0,r,l),o=l.bits}if(r<i){l=new v(o);t.table_select[1]=d(s,r,i,l),o=l.bits}if(2==e.use_best_huffman&&(t.part2_3_length=o,best_huffman_divide(e,t),o=t.part2_3_length),null!=a&&t.block_type==Pe.NORM_TYPE){for(var f=0;e.scalefac_band.l[f]<t.big_values;)f++;a.sfb_count1=f}return o},this.count_bits=function(e,t,a,s){var n=a.l3_enc,r=y.IXMAX_VAL/M.IPOW20(a.global_gain);if(a.xrpow_max>r)return y.LARGE_BITS;if(function(e,t,a,s,n){var 
r,i,_,o=0,l=0,f=0,c=0,h=t,u=0,b=h,m=0,p=e,v=0;for(_=null!=n&&s.global_gain==n.global_gain,i=s.block_type==Pe.SHORT_TYPE?38:21,r=0;r<=i;r++){var d=-1;if((_||s.block_type==Pe.NORM_TYPE)&&(d=s.global_gain-(s.scalefac[r]+(0!=s.preflag?M.pretab[r]:0)<<s.scalefac_scale+1)-8*s.subblock_gain[s.window[r]]),_&&n.step[r]==d)0!=l&&(R(l,a,p,v,b,m),l=0),0!=f&&(w(f,a,p,v,b,m),f=0);else{var g,S=s.width[r];if(o+s.width[r]>s.max_nonzero_coeff&&(g=s.max_nonzero_coeff-o+1,Te.fill(t,s.max_nonzero_coeff,576,0),(S=g)<0&&(S=0),r=i+1),0==l&&0==f&&(b=h,m=u,p=e,v=c),null!=n&&0<n.sfb_count1&&r>=n.sfb_count1&&0<n.step[r]&&d>=n.step[r]?(0!=l&&(R(l,a,p,v,b,m),l=0,b=h,m=u,p=e,v=c),f+=S):(0!=f&&(w(f,a,p,v,b,m),f=0,b=h,m=u,p=e,v=c),l+=S),S<=0){0!=f&&(w(f,a,p,v,b,m),f=0),0!=l&&(R(l,a,p,v,b,m),l=0);break}}r<=i&&(u+=s.width[r],c+=s.width[r],o+=s.width[r])}0!=l&&(R(l,a,p,v,b,m),l=0),0!=f&&(w(f,a,p,v,b,m),f=0)}(t,n,M.IPOW20(a.global_gain),a,s),0!=(2&e.substep_shaping))for(var i=0,_=a.global_gain+a.scalefac_scale,o=.634521682242439/M.IPOW20(_),l=0;l<a.sfbmax;l++){var f,c=a.width[l];if(0==e.pseudohalf[l])i+=c;else for(f=i,i+=c;f<i;++f)n[f]=t[f]>=o?n[f]:0}return this.noquant_count_bits(e,a,s)},this.best_huffman_divide=function(e,t){var a=new x,s=t.l3_enc,n=Be(23),r=Be(23),i=Be(23),_=Be(23);if(t.block_type!=Pe.SHORT_TYPE||1!=e.mode_gr){a.assign(t),t.block_type==Pe.NORM_TYPE&&(!function(e,t,a,s,n,r,i){for(var _=t.big_values,o=0;o<=22;o++)s[o]=y.LARGE_BITS;for(o=0;o<16;o++){var l=e.scalefac_band.l[o+1];if(_<=l)break;var f=0,c=new v(f),h=d(a,0,l,c);f=c.bits;for(var u=0;u<8;u++){var b=e.scalefac_band.l[o+u+2];if(_<=b)break;var m=f,p=d(a,l,b,c=new v(m));m=c.bits,s[o+u]>m&&(s[o+u]=m,r[(n[o+u]=o)+u]=h,i[o+u]=p)}}}(e,t,s,n,r,i,_),u(e,a,t,s,n,r,i,_));var o=a.big_values;if(!(0==o||1<(s[o-2]|s[o-1])||576<(o=t.count1+2))){a.assign(t),a.count1=o;for(var l=0,f=0;o>a.big_values;o-=4){var c=2*(2*(2*s[o-4]+s[o-3])+s[o-2])+s[o-1];l+=C.t32l[c],f+=C.t33l[c]}if(a.big_values=o,a.count1table_select=0,f<l&&(l=f,a.count1table_select=1),a.count1bits=l,a.block_type==Pe.NORM_TYPE)u(e,a,t,s,n,r,i,_);else{if(a.part2_3_length=l,o<(l=e.scalefac_band.l[8])&&(l=o),0<l){var h=new v(a.part2_3_length);a.table_select[0]=d(s,0,l,h),a.part2_3_length=h.bits}if(l<o){h=new v(a.part2_3_length);a.table_select[1]=d(s,l,o,h),a.part2_3_length=h.bits}t.part2_3_length>a.part2_3_length&&t.assign(a)}}}};var h=[1,1,1,1,8,2,2,2,4,4,4,8,8,8,16,16],b=[1,2,4,8,1,2,4,8,2,4,8,2,4,8,4,8],m=[0,0,0,0,3,1,1,1,2,2,2,3,3,3,4,4],p=[0,1,2,3,0,1,2,3,1,2,3,1,2,3,2,3];k.slen1_tab=m,k.slen2_tab=p,this.best_scalefac_store=function(e,t,a,s){var n,r,i,_,o=s.tt[t][a],l=0;for(n=i=0;n<o.sfbmax;n++){var f=o.width[n];for(i+=f,_=-f;_<0&&0==o.l3_enc[_+i];_++);0==_&&(o.scalefac[n]=l=-2)}if(0==o.scalefac_scale&&0==o.preflag){var c=0;for(n=0;n<o.sfbmax;n++)0<o.scalefac[n]&&(c|=o.scalefac[n]);if(0==(1&c)&&0!=c){for(n=0;n<o.sfbmax;n++)0<o.scalefac[n]&&(o.scalefac[n]>>=1);o.scalefac_scale=l=1}}if(0==o.preflag&&o.block_type!=Pe.SHORT_TYPE&&2==e.mode_gr){for(n=11;n<Pe.SBPSY_l&&!(o.scalefac[n]<M.pretab[n]&&-2!=o.scalefac[n]);n++);if(n==Pe.SBPSY_l){for(n=11;n<Pe.SBPSY_l;n++)0<o.scalefac[n]&&(o.scalefac[n]-=M.pretab[n]);o.preflag=l=1}}for(r=0;r<4;r++)s.scfsi[a][r]=0;for(2==e.mode_gr&&1==t&&s.tt[0][a].block_type!=Pe.SHORT_TYPE&&s.tt[1][a].block_type!=Pe.SHORT_TYPE&&(!function(e,t){for(var 
a,s=t.tt[1][e],n=t.tt[0][e],r=0;r<C.scfsi_band.length-1;r++){for(a=C.scfsi_band[r];a<C.scfsi_band[r+1]&&!(n.scalefac[a]!=s.scalefac[a]&&0<=s.scalefac[a]);a++);if(a==C.scfsi_band[r+1]){for(a=C.scfsi_band[r];a<C.scfsi_band[r+1];a++)s.scalefac[a]=-1;t.scfsi[e][r]=1}}var i=0,_=0;for(a=0;a<11;a++)-1!=s.scalefac[a]&&(_++,i<s.scalefac[a]&&(i=s.scalefac[a]));for(var o=0,l=0;a<Pe.SBPSY_l;a++)-1!=s.scalefac[a]&&(l++,o<s.scalefac[a]&&(o=s.scalefac[a]));for(r=0;r<16;r++)if(i<h[r]&&o<b[r]){var f=m[r]*_+p[r]*l;s.part2_length>f&&(s.part2_length=f,s.scalefac_compress=r)}}(a,s),l=0),n=0;n<o.sfbmax;n++)-2==o.scalefac[n]&&(o.scalefac[n]=0);0!=l&&(2==e.mode_gr?this.scale_bitcount(o):this.scale_bitcount_lsf(e,o))};var o=[0,18,36,54,54,36,54,72,54,72,90,72,90,108,108,126],l=[0,18,36,54,51,35,53,71,52,70,88,69,87,105,104,122],f=[0,10,20,30,33,21,31,41,32,42,52,43,53,63,64,74];this.scale_bitcount=function(e){var t,a,s,n=0,r=0,i=e.scalefac;if(e.block_type==Pe.SHORT_TYPE)s=o,0!=e.mixed_block_flag&&(s=l);else if(s=f,0==e.preflag){for(a=11;a<Pe.SBPSY_l&&!(i[a]<M.pretab[a]);a++);if(a==Pe.SBPSY_l)for(e.preflag=1,a=11;a<Pe.SBPSY_l;a++)i[a]-=M.pretab[a]}for(a=0;a<e.sfbdivide;a++)n<i[a]&&(n=i[a]);for(;a<e.sfbmax;a++)r<i[a]&&(r=i[a]);for(e.part2_length=y.LARGE_BITS,t=0;t<16;t++)n<h[t]&&r<b[t]&&e.part2_length>s[t]&&(e.part2_length=s[t],e.scalefac_compress=t);return e.part2_length==y.LARGE_BITS};var g=[[15,15,7,7],[15,15,7,0],[7,3,0,0],[15,31,31,0],[7,7,7,0],[3,3,0,0]];this.scale_bitcount_lsf=function(e,t){var a,s,n,r,i,_,o,l,f=Be(4),c=t.scalefac;for(a=0!=t.preflag?2:0,o=0;o<4;o++)f[o]=0;if(t.block_type==Pe.SHORT_TYPE){s=1;var h=M.nr_of_sfb_block[a][s];for(n=l=0;n<4;n++)for(r=h[n]/3,o=0;o<r;o++,l++)for(i=0;i<3;i++)c[3*l+i]>f[n]&&(f[n]=c[3*l+i])}else{s=0;h=M.nr_of_sfb_block[a][s];for(n=l=0;n<4;n++)for(r=h[n],o=0;o<r;o++,l++)c[l]>f[n]&&(f[n]=c[l])}for(_=!1,n=0;n<4;n++)f[n]>g[a][n]&&(_=!0);if(!_){var u,b,m,p;for(t.sfb_partition_table=M.nr_of_sfb_block[a][s],n=0;n<4;n++)t.slen[n]=S[f[n]];switch(u=t.slen[0],b=t.slen[1],m=t.slen[2],p=t.slen[3],a){case 0:t.scalefac_compress=(5*u+b<<4)+(m<<2)+p;break;case 1:t.scalefac_compress=400+(5*u+b<<2)+m;break;case 2:t.scalefac_compress=500+3*u+b;break;default:$.err.printf("intensity stereo not implemented yet\n")}}if(!_)for(n=t.part2_length=0;n<4;n++)t.part2_length+=t.slen[n]*t.sfb_partition_table[n];return _};var S=[0,1,2,2,3,3,3,3,4,4,4,4,4,4,4,4];this.huffman_init=function(e){for(var t=2;t<=576;t+=2){for(var a,s=0;e.scalefac_band.l[++s]<t;);for(a=n[s][0];e.scalefac_band.l[a+1]>t;)a--;for(a<0&&(a=n[s][0]),e.bv_scf[t-2]=a,a=n[s][1];e.scalefac_band.l[a+e.bv_scf[t-2]+2]>t;)a--;a<0&&(a=n[s][1]),e.bv_scf[t-1]=a}}}function q(){}function M(){this.setModules=function(e,t,a){e,t,a};var 
_=[0,49345,49537,320,49921,960,640,49729,50689,1728,1920,51009,1280,50625,50305,1088,52225,3264,3456,52545,3840,53185,52865,3648,2560,51905,52097,2880,51457,2496,2176,51265,55297,6336,6528,55617,6912,56257,55937,6720,7680,57025,57217,8e3,56577,7616,7296,56385,5120,54465,54657,5440,55041,6080,5760,54849,53761,4800,4992,54081,4352,53697,53377,4160,61441,12480,12672,61761,13056,62401,62081,12864,13824,63169,63361,14144,62721,13760,13440,62529,15360,64705,64897,15680,65281,16320,16e3,65089,64001,15040,15232,64321,14592,63937,63617,14400,10240,59585,59777,10560,60161,11200,10880,59969,60929,11968,12160,61249,11520,60865,60545,11328,58369,9408,9600,58689,9984,59329,59009,9792,8704,58049,58241,9024,57601,8640,8320,57409,40961,24768,24960,41281,25344,41921,41601,25152,26112,42689,42881,26432,42241,26048,25728,42049,27648,44225,44417,27968,44801,28608,28288,44609,43521,27328,27520,43841,26880,43457,43137,26688,30720,47297,47489,31040,47873,31680,31360,47681,48641,32448,32640,48961,32e3,48577,48257,31808,46081,29888,30080,46401,30464,47041,46721,30272,29184,45761,45953,29504,45313,29120,28800,45121,20480,37057,37249,20800,37633,21440,21120,37441,38401,22208,22400,38721,21760,38337,38017,21568,39937,23744,23936,40257,24320,40897,40577,24128,23040,39617,39809,23360,39169,22976,22656,38977,34817,18624,18816,35137,19200,35777,35457,19008,19968,36545,36737,20288,36097,19904,19584,35905,17408,33985,34177,17728,34561,18368,18048,34369,33281,17088,17280,33601,16640,33217,32897,16448];this.updateMusicCRC=function(e,t,a,s){for(var n=0;n<s;++n)e[0]=(r=t[a+n],i=(i=e[0])>>8^_[255&(i^r)]);var r,i}}function j(){var o=this,s=32773,c=null,h=null,r=null,u=null;this.setModules=function(e,t,a,s){c=e,h=t,r=a,u=s};var b=null,l=0,m=0,p=0;function v(e,t,a){for(;0<a;){var s;0==p&&(p=8,m++,e.header[e.w_ptr].write_timing==l&&(n=e,$.arraycopy(n.header[n.w_ptr].buf,0,b,m,n.sideinfo_len),m+=n.sideinfo_len,l+=8*n.sideinfo_len,n.w_ptr=n.w_ptr+1&Z.MAX_HEADER_BUF-1),b[m]=0),a-=s=Math.min(a,p),p-=s,b[m]|=t>>a<<p,l+=s}var n}function i(e,t,a){for(;0<a;){var s;0==p&&(p=8,b[++m]=0),a-=s=Math.min(a,p),p-=s,b[m]|=t>>a<<p,l+=s}}function _(e,t){var a,s=e.internal_flags;if(8<=t&&(v(s,76,8),t-=8),8<=t&&(v(s,65,8),t-=8),8<=t&&(v(s,77,8),t-=8),8<=t&&(v(s,69,8),t-=8),32<=t){var n=r.getLameShortVersion();if(32<=t)for(a=0;a<n.length&&8<=t;++a)t-=8,v(s,n.charCodeAt(a),8)}for(;1<=t;t-=1)v(s,s.ancillary_flag,1),s.ancillary_flag^=e.disable_reservoir?0:1}function f(e,t,a){for(var s=e.header[e.h_ptr].ptr;0<a;){var n=Math.min(a,8-(7&s));a-=n,e.header[e.h_ptr].buf[s>>3]|=t>>a<<8-(7&s)-n,s+=n}e.header[e.h_ptr].ptr=s}function n(e,t){e<<=8;for(var a=0;a<8;a++)0!=(65536&((t<<=1)^(e<<=1)))&&(t^=s);return t}function d(e,t){var a,s=C.ht[t.count1table_select+32],n=0,r=t.big_values,i=t.big_values;for(a=(t.count1-t.big_values)/4;0<a;--a){var _=0,o=0;0!=t.l3_enc[r+0]&&(o+=8,t.xr[i+0]<0&&_++),0!=t.l3_enc[r+1]&&(o+=4,_*=2,t.xr[i+1]<0&&_++),0!=t.l3_enc[r+2]&&(o+=2,_*=2,t.xr[i+2]<0&&_++),0!=t.l3_enc[r+3]&&(o++,_*=2,t.xr[i+3]<0&&_++),r+=4,i+=4,v(e,_+s.table[o],s.hlen[o]),n+=s.hlen[o]}return n}function g(e,t,a,s,n){var r=C.ht[t],i=0;if(0==t)return i;for(var _=a;_<s;_+=2){var o=0,l=0,f=r.xlen,c=r.xlen,h=0,u=n.l3_enc[_],b=n.l3_enc[_+1];if(0!=u&&(n.xr[_]<0&&h++,o--),15<t){if(14<u)h|=u-15<<1,l=f,u=15;if(14<b)h<<=f,h|=b-15,l+=f,b=15;c=16}0!=b&&(h<<=1,n.xr[_+1]<0&&h++,o--),u=u*c+b,l-=o,o+=r.hlen[u],v(e,r.table[u],o),v(e,h,l),i+=o+l}return i}function S(e,t){var a=3*e.scalefac_band.s[3];a>t.big_values&&(a=t.big_values);var s=g(e,t.table_select[0],0,a,t);
return s+=g(e,t.table_select[1],a,t.big_values,t)}function M(e,t){var a,s,n,r;a=t.big_values;var i=t.region0_count+1;return n=e.scalefac_band.l[i],i+=t.region1_count+1,a<n&&(n=a),a<(r=e.scalefac_band.l[i])&&(r=a),s=g(e,t.table_select[0],0,n,t),s+=g(e,t.table_select[1],n,r,t),s+=g(e,t.table_select[2],r,a,t)}function w(){this.total=0}function R(e,t){var a,s,n,r,i,_=e.internal_flags;return i=_.w_ptr,-1==(r=_.h_ptr-1)&&(r=Z.MAX_HEADER_BUF-1),a=_.header[r].write_timing-l,0<=(t.total=a)&&(s=1+r-i,r<i&&(s=1+r-i+Z.MAX_HEADER_BUF),a-=8*s*_.sideinfo_len),a+=n=o.getframebits(e),t.total+=n,t.total%8!=0?t.total=1+t.total/8:t.total=t.total/8,t.total+=m+1,a<0&&$.err.println("strange error flushing buffer ... \n"),a}this.getframebits=function(e){var t,a=e.internal_flags;return t=0!=a.bitrate_index?C.bitrate_table[e.version][a.bitrate_index]:e.brate,8*(0|72e3*(e.version+1)*t/e.out_samplerate+a.padding)},this.CRC_writeheader=function(e,t){var a=65535;a=n(255&t[2],a),a=n(255&t[3],a);for(var s=6;s<e.sideinfo_len;s++)a=n(255&t[s],a);t[4]=byte(a>>8),t[5]=byte(255&a)},this.flush_bitstream=function(e){var t,a,s=e.internal_flags,n=s.h_ptr-1;if(-1==n&&(n=Z.MAX_HEADER_BUF-1),t=s.l3_side,!((a=R(e,new w))<0)){if(_(e,a),s.ResvSize=0,t.main_data_begin=0,s.findReplayGain){var r=c.GetTitleGain(s.rgdata);s.RadioGain=0|Math.floor(10*r+.5)}s.findPeakSample&&(s.noclipGainChange=0|Math.ceil(20*A(s.PeakSample/32767)*10),0<s.noclipGainChange&&(EQ(e.scale,1)||EQ(e.scale,0))?s.noclipScale=Math.floor(32767/s.PeakSample*100)/100:s.noclipScale=-1)}},this.add_dummy_byte=function(e,t,a){for(var s,n=e.internal_flags;0<a--;)for(i(0,t,8),s=0;s<Z.MAX_HEADER_BUF;++s)n.header[s].write_timing+=8},this.format_bitstream=function(e){var t,a=e.internal_flags;t=a.l3_side;var s=this.getframebits(e);_(e,t.resvDrain_pre),function(e,t){var a,s,n,r=e.internal_flags;if(a=r.l3_side,r.header[r.h_ptr].ptr=0,Te.fill(r.header[r.h_ptr].buf,0,r.sideinfo_len,0),e.out_samplerate<16e3?f(r,4094,12):f(r,4095,12),f(r,e.version,1),f(r,1,2),f(r,e.error_protection?0:1,1),f(r,r.bitrate_index,4),f(r,r.samplerate_index,2),f(r,r.padding,1),f(r,e.extension,1),f(r,e.mode.ordinal(),2),f(r,r.mode_ext,2),f(r,e.copyright,1),f(r,e.original,1),f(r,e.emphasis,2),e.error_protection&&f(r,0,16),1==e.version){for(f(r,a.main_data_begin,9),2==r.channels_out?f(r,a.private_bits,3):f(r,a.private_bits,5),n=0;n<r.channels_out;n++){var i;for(i=0;i<4;i++)f(r,a.scfsi[n][i],1)}for(s=0;s<2;s++)for(n=0;n<r.channels_out;n++)f(r,(_=a.tt[s][n]).part2_3_length+_.part2_length,12),f(r,_.big_values/2,9),f(r,_.global_gain,8),f(r,_.scalefac_compress,4),_.block_type!=Pe.NORM_TYPE?(f(r,1,1),f(r,_.block_type,2),f(r,_.mixed_block_flag,1),14==_.table_select[0]&&(_.table_select[0]=16),f(r,_.table_select[0],5),14==_.table_select[1]&&(_.table_select[1]=16),f(r,_.table_select[1],5),f(r,_.subblock_gain[0],3),f(r,_.subblock_gain[1],3),f(r,_.subblock_gain[2],3)):(f(r,0,1),14==_.table_select[0]&&(_.table_select[0]=16),f(r,_.table_select[0],5),14==_.table_select[1]&&(_.table_select[1]=16),f(r,_.table_select[1],5),14==_.table_select[2]&&(_.table_select[2]=16),f(r,_.table_select[2],5),f(r,_.region0_count,4),f(r,_.region1_count,3)),f(r,_.preflag,1),f(r,_.scalefac_scale,1),f(r,_.count1table_select,1)}else for(f(r,a.main_data_begin,8),f(r,a.private_bits,r.channels_out),n=s=0;n<r.channels_out;n++){var
_;f(r,(_=a.tt[s][n]).part2_3_length+_.part2_length,12),f(r,_.big_values/2,9),f(r,_.global_gain,8),f(r,_.scalefac_compress,9),_.block_type!=Pe.NORM_TYPE?(f(r,1,1),f(r,_.block_type,2),f(r,_.mixed_block_flag,1),14==_.table_select[0]&&(_.table_select[0]=16),f(r,_.table_select[0],5),14==_.table_select[1]&&(_.table_select[1]=16),f(r,_.table_select[1],5),f(r,_.subblock_gain[0],3),f(r,_.subblock_gain[1],3),f(r,_.subblock_gain[2],3)):(f(r,0,1),14==_.table_select[0]&&(_.table_select[0]=16),f(r,_.table_select[0],5),14==_.table_select[1]&&(_.table_select[1]=16),f(r,_.table_select[1],5),14==_.table_select[2]&&(_.table_select[2]=16),f(r,_.table_select[2],5),f(r,_.region0_count,4),f(r,_.region1_count,3)),f(r,_.scalefac_scale,1),f(r,_.count1table_select,1)}e.error_protection&&CRC_writeheader(r,r.header[r.h_ptr].buf);var o=r.h_ptr;r.h_ptr=o+1&Z.MAX_HEADER_BUF-1,r.header[r.h_ptr].write_timing=r.header[o].write_timing+t,r.h_ptr==r.w_ptr&&$.err.println("Error: MAX_HEADER_BUF too small in bitstream.c \n")}(e,s);var n=8*a.sideinfo_len;if(n+=function(e){var t,a,s,n,r=0,i=e.internal_flags,_=i.l3_side;if(1==e.version)for(t=0;t<2;t++)for(a=0;a<i.channels_out;a++){var o=_.tt[t][a],l=k.slen1_tab[o.scalefac_compress],f=k.slen2_tab[o.scalefac_compress];for(s=n=0;s<o.sfbdivide;s++)-1!=o.scalefac[s]&&(v(i,o.scalefac[s],l),n+=l);for(;s<o.sfbmax;s++)-1!=o.scalefac[s]&&(v(i,o.scalefac[s],f),n+=f);o.block_type==Pe.SHORT_TYPE?n+=S(i,o):n+=M(i,o),r+=n+=d(i,o)}else for(a=t=0;a<i.channels_out;a++){var c,h,u=0;if(h=s=n=0,(o=_.tt[t][a]).block_type==Pe.SHORT_TYPE){for(;h<4;h++){var b=o.sfb_partition_table[h]/3,m=o.slen[h];for(c=0;c<b;c++,s++)v(i,Math.max(o.scalefac[3*s+0],0),m),v(i,Math.max(o.scalefac[3*s+1],0),m),v(i,Math.max(o.scalefac[3*s+2],0),m),u+=3*m}n+=S(i,o)}else{for(;h<4;h++)for(b=o.sfb_partition_table[h],m=o.slen[h],c=0;c<b;c++,s++)v(i,Math.max(o.scalefac[s],0),m),u+=m;n+=M(i,o)}r+=u+(n+=d(i,o))}return r}(e),_(e,t.resvDrain_post),n+=t.resvDrain_post,t.main_data_begin+=(s-n)/8,R(e,new w)!=a.ResvSize&&$.err.println("Internal buffer inconsistency. flushbits <> ResvSize"),8*t.main_data_begin!=a.ResvSize&&($.err.printf("bit reservoir error: \nl3_side.main_data_begin: %d \nResvoir size: %d \nresv drain (post) %d \nresv drain (pre) %d \nheader and sideinfo: %d \ndata bits: %d \ntotal bits: %d (remainder: %d) \nbitsperframe: %d \n",8*t.main_data_begin,a.ResvSize,t.resvDrain_post,t.resvDrain_pre,8*a.sideinfo_len,n-t.resvDrain_post-8*a.sideinfo_len,n,n%8,s),$.err.println(
"This is a fatal error. It has several possible causes:"),$.err.println("90%% LAME compiled with buggy version of gcc using advanced optimizations"),$.err.println(" 9%% Your system is overclocked"),$.err.println(" 1%% bug in LAME encoding library"),a.ResvSize=8*t.main_data_begin),1e9<l){var r;for(r=0;r<Z.MAX_HEADER_BUF;++r)a.header[r].write_timing-=l;l=0}return 0},this.copy_buffer=function(e,t,a,s,n){var r=m+1;if(r<=0)return 0;if(0!=s&&s<r)return-1;if($.arraycopy(b,0,t,a,r),m=-1,(p=0)!=n){var i=Be(1);if(i[0]=e.nMusicCRC,u.updateMusicCRC(i,t,a,r),e.nMusicCRC=i[0],0<r&&(e.VBR_seek_table.nBytesWritten+=r),e.decode_on_the_fly)for(var _,o=ke([2,1152]),l=r,f=-1;0!=f;)if(f=h.hip_decode1_unclipped(e.hip,t,a,l,o[0],o[1]),l=0,-1==f&&(f=0),0<f){if(e.findPeakSample){for(_=0;_<f;_++)o[0][_]>e.PeakSample?e.PeakSample=o[0][_]:-o[0][_]>e.PeakSample&&(e.PeakSample=-o[0][_]);if(1<e.channels_out)for(_=0;_<f;_++)o[1][_]>e.PeakSample?e.PeakSample=o[1][_]:-o[1][_]>e.PeakSample&&(e.PeakSample=-o[1][_])}if(e.findReplayGain&&c.AnalyzeSamples(e.rgdata,o[0],0,o[1],0,f,e.channels_out)==q.GAIN_ANALYSIS_ERROR)return-6}}return r},this.init_bit_stream_w=function(e){b=B(Q.LAME_MAXMP3BUFFER),e.h_ptr=e.w_ptr=0,e.header[e.h_ptr].write_timing=0,m=-1,l=p=0}}function e(e,t,a,s){this.xlen=e,this.linmax=t,this.table=a,this.hlen=s}Ee.STEREO=new Ee(0),Ee.JOINT_STEREO=new Ee(1),Ee.DUAL_CHANNEL=new Ee(2),Ee.MONO=new Ee(3),Ee.NOT_SET=new Ee(4),q.STEPS_per_dB=100,q.MAX_dB=120,q.GAIN_NOT_ENOUGH_SAMPLES=-24601,q.GAIN_ANALYSIS_ERROR=0,q.GAIN_ANALYSIS_OK=1,q.INIT_GAIN_ANALYSIS_ERROR=0,q.INIT_GAIN_ANALYSIS_OK=1,q.MAX_ORDER=q.YULE_ORDER=10,q.MAX_SAMPLES_PER_WINDOW=(q.MAX_SAMP_FREQ=48e3)*(q.RMS_WINDOW_TIME_NUMERATOR=1)/(q.RMS_WINDOW_TIME_DENOMINATOR=20)+1,M.NUMTOCENTRIES=100,M.MAXFRAMESIZE=2880,j.EQ=function(e,t){return Math.abs(e)>Math.abs(t)?Math.abs(e-t)<=1e-6*Math.abs(e):Math.abs(e-t)<=1e-6*Math.abs(t)},j.NEQ=function(e,t){return!j.EQ(e,t)};var C={};function F(e){this.bits=e}function T(){this.over_noise=0,this.tot_noise=0,this.max_noise=0,this.over_count=0,this.over_SSD=0,this.bits=0}function r(e,t,a,s){this.l=Be(1+Pe.SBMAX_l),this.s=Be(1+Pe.SBMAX_s),this.psfb21=Be(1+Pe.PSFB21),this.psfb12=Be(1+Pe.PSFB12);var n=this.l,r=this.s;4==arguments.length&&(this.arrL=e,this.arrS=t,this.arr21=a,this.arr12=s,$.arraycopy(this.arrL,0,n,0,Math.min(this.arrL.length,this.l.length)),$.arraycopy(this.arrS,0,r,0,Math.min(this.arrS.length,this.s.length)),$.arraycopy(this.arr21,0,this.psfb21,0,Math.min(this.arr21.length,this.psfb21.length)),$.arraycopy(this.arr12,0,this.psfb12,0,Math.min(this.arr12.length,this.psfb12.length)))}function y(){var l=null,b=null,s=null;this.setModules=function(e,t,a){l=e,b=t,s=a},this.IPOW20=function(e){return u[e]};var x=2.220446049250313e-16,f=y.IXMAX_VAL+2,c=y.Q_MAX,h=y.Q_MAX2,n=100;this.nr_of_sfb_block=[[[6,5,5,5],[9,9,9,9],[6,9,9,9]],[[6,5,7,3],[9,9,12,6],[6,9,12,6]],[[11,10,0,0],[18,18,0,0],[15,18,0,0]],[[7,7,7,0],[12,12,12,0],[6,15,12,0]],[[6,6,6,3],[12,9,9,6],[6,12,9,6]],[[8,8,5,0],[15,12,9,0],[6,18,9,0]]];var w=[0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,2,2,3,3,3,2,0];this.pretab=w,this.sfBandIndex=[new r([0,6,12,18,24,30,36,44,54,66,80,96,116,140,168,200,238,284,336,396,464,522,576],[0,4,8,12,18,24,32,42,56,74,100,132,174,192],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]),new r([0,6,12,18,24,30,36,44,54,66,80,96,114,136,162,194,232,278,332,394,464,540,576],[0,4,8,12,18,26,36,48,62,80,104,136,180,192],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]),new
r([0,6,12,18,24,30,36,44,54,66,80,96,116,140,168,200,238,284,336,396,464,522,576],[0,4,8,12,18,26,36,48,62,80,104,134,174,192],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]),new r([0,4,8,12,16,20,24,30,36,44,52,62,74,90,110,134,162,196,238,288,342,418,576],[0,4,8,12,16,22,30,40,52,66,84,106,136,192],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]),new r([0,4,8,12,16,20,24,30,36,42,50,60,72,88,106,128,156,190,230,276,330,384,576],[0,4,8,12,16,22,28,38,50,64,80,100,126,192],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]),new r([0,4,8,12,16,20,24,30,36,44,54,66,82,102,126,156,194,240,296,364,448,550,576],[0,4,8,12,16,22,30,42,58,78,104,138,180,192],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]),new r([0,6,12,18,24,30,36,44,54,66,80,96,116,140,168,200,238,284,336,396,464,522,576],[0,4,8,12,18,26,36,48,62,80,104,134,174,192],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]),new r([0,6,12,18,24,30,36,44,54,66,80,96,116,140,168,200,238,284,336,396,464,522,576],[0,4,8,12,18,26,36,48,62,80,104,134,174,192],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]),new r([0,12,24,36,48,60,72,88,108,132,160,192,232,280,336,400,476,566,568,570,572,574,576],[0,8,16,24,36,52,72,96,124,160,162,164,166,192],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0])];var R=Ae(c+h+1),u=Ae(c),m=Ae(f),p=Ae(f);function v(e,t){var a=s.ATHformula(t,e);return a-=n,a=Math.pow(10,a/10+e.ATHlower)}function B(e){this.s=e}this.adj43=p,this.iteration_init=function(e){var t,a=e.internal_flags,s=a.l3_side;if(0==a.iteration_init_init){for(a.iteration_init_init=1,s.main_data_begin=0,function(e){for(var t=e.internal_flags.ATH.l,a=e.internal_flags.ATH.psfb21,s=e.internal_flags.ATH.s,n=e.internal_flags.ATH.psfb12,r=e.internal_flags,i=e.out_samplerate,_=0;_<Pe.SBMAX_l;_++){var o=r.scalefac_band.l[_],l=r.scalefac_band.l[_+1];t[_]=K.MAX_VALUE;for(var f=o;f<l;f++){var c=v(e,f*i/1152);t[_]=Math.min(t[_],c)}}for(_=0;_<Pe.PSFB21;_++)for(o=r.scalefac_band.psfb21[_],l=r.scalefac_band.psfb21[_+1],a[_]=K.MAX_VALUE,f=o;f<l;f++)c=v(e,f*i/1152),a[_]=Math.min(a[_],c);for(_=0;_<Pe.SBMAX_s;_++){for(o=r.scalefac_band.s[_],l=r.scalefac_band.s[_+1],s[_]=K.MAX_VALUE,f=o;f<l;f++)c=v(e,f*i/384),s[_]=Math.min(s[_],c);s[_]*=r.scalefac_band.s[_+1]-r.scalefac_band.s[_]}for(_=0;_<Pe.PSFB12;_++){for(o=r.scalefac_band.psfb12[_],l=r.scalefac_band.psfb12[_+1],n[_]=K.MAX_VALUE,f=o;f<l;f++)c=v(e,f*i/384),n[_]=Math.min(n[_],c);n[_]*=r.scalefac_band.s[13]-r.scalefac_band.s[12]}if(e.noATH){for(_=0;_<Pe.SBMAX_l;_++)t[_]=1e-20;for(_=0;_<Pe.PSFB21;_++)a[_]=1e-20;for(_=0;_<Pe.SBMAX_s;_++)s[_]=1e-20;for(_=0;_<Pe.PSFB12;_++)n[_]=1e-20}r.ATH.floor=10*A(v(e,-1))}(e),m[0]=0,t=1;t<f;t++)m[t]=Math.pow(t,4/3);for(t=0;t<f-1;t++)p[t]=t+1-Math.pow(.5*(m[t]+m[t+1]),.75);for(p[t]=.5,t=0;t<c;t++)u[t]=Math.pow(2,-.1875*(t-210));for(t=0;t<=c+h;t++)R[t]=Math.pow(2,.25*(t-210-h));var n,r,i,_;for(l.huffman_init(a),32<=(t=e.exp_nspsytune>>2&63)&&(t-=64),n=Math.pow(10,t/4/10),32<=(t=e.exp_nspsytune>>8&63)&&(t-=64),r=Math.pow(10,t/4/10),32<=(t=e.exp_nspsytune>>14&63)&&(t-=64),i=Math.pow(10,t/4/10),32<=(t=e.exp_nspsytune>>20&63)&&(t-=64),_=i*Math.pow(10,t/4/10),t=0;t<Pe.SBMAX_l;t++){o=t<=6?n:t<=13?r:t<=20?i:_,a.nsPsy.longfact[t]=o}for(t=0;t<Pe.SBMAX_s;t++){var o;o=t<=5?n:t<=10?r:t<=11?i:_,a.nsPsy.shortfact[t]=o}}},this.on_pe=function(e,t,a,s,n,r){var i,_,o=e.internal_flags,l=0,f=Be(2),c=new 
F(l),h=b.ResvMaxBits(e,s,c,r),u=(l=c.bits)+h;for(Z.MAX_BITS_PER_GRANULE<u&&(u=Z.MAX_BITS_PER_GRANULE),_=i=0;_<o.channels_out;++_)a[_]=Math.min(Z.MAX_BITS_PER_CHANNEL,l/o.channels_out),f[_]=0|a[_]*t[n][_]/700-a[_],f[_]>3*s/4&&(f[_]=3*s/4),f[_]<0&&(f[_]=0),f[_]+a[_]>Z.MAX_BITS_PER_CHANNEL&&(f[_]=Math.max(0,Z.MAX_BITS_PER_CHANNEL-a[_])),i+=f[_];if(h<i)for(_=0;_<o.channels_out;++_)f[_]=h*f[_]/i;for(_=0;_<o.channels_out;++_)a[_]+=f[_],h-=f[_];for(_=i=0;_<o.channels_out;++_)i+=a[_];if(Z.MAX_BITS_PER_GRANULE<i){for(_=0;_<o.channels_out;++_)a[_]*=Z.MAX_BITS_PER_GRANULE,a[_]/=i,a[_]}return u},this.reduce_side=function(e,t,a,s){var n=.33*(.5-t)/.5;n<0&&(n=0),.5<n&&(n=.5);var r=0|.5*n*(e[0]+e[1]);r>Z.MAX_BITS_PER_CHANNEL-e[0]&&(r=Z.MAX_BITS_PER_CHANNEL-e[0]),r<0&&(r=0),125<=e[1]&&(125<e[1]-r?(e[0]<a&&(e[0]+=r),e[1]-=r):(e[0]+=e[1]-125,e[1]=125)),s<(r=e[0]+e[1])&&(e[0]=s*e[0]/r,e[1]=s*e[1]/r)},this.athAdjust=function(e,t,a){var s=90.30873362,n=ee.FAST_LOG10_X(t,10),r=e*e,i=0;return n-=a,1e-20<r&&(i=1+ee.FAST_LOG10_X(r,10/s)),i<0&&(i=0),n*=i,n+=a+s-94.82444863,Math.pow(10,.1*n)},this.calc_xmin=function(e,t,a,s){var n,r=0,i=e.internal_flags,_=0,o=0,l=i.ATH,f=a.xr,c=e.VBR==ye.vbr_mtrh?1:0,h=i.masking_lower;for(e.VBR!=ye.vbr_mtrh&&e.VBR!=ye.vbr_mt||(h=1),n=0;n<a.psy_lmax;n++){S=(g=e.VBR==ye.vbr_rh||e.VBR==ye.vbr_mtrh?athAdjust(l.adjust,l.l[n],l.floor):l.adjust*l.l[n])/(p=a.width[n]),M=x,A=p>>1,B=0;do{B+=k=f[_]*f[_],M+=k<S?k:S,B+=T=f[++_]*f[_],M+=T<S?T:S,_++}while(0<--A);if(g<B&&o++,n==Pe.SBPSY_l)M<(R=g*i.nsPsy.longfact[n])&&(M=R);if(0!=c&&(g=M),!e.ATHonly)if(0<(w=t.en.l[n]))R=B*t.thm.l[n]*h/w,0!=c&&(R*=i.nsPsy.longfact[n]),g<R&&(g=R);s[r++]=0!=c?g:g*i.nsPsy.longfact[n]}var u=575;if(a.block_type!=Pe.SHORT_TYPE)for(var b=576;0!=b--&&j.EQ(f[b],0);)u=b;a.max_nonzero_coeff=u;for(var m=a.sfb_smin;n<a.psymax;m++,n+=3){var p,v,d;for(d=e.VBR==ye.vbr_rh||e.VBR==ye.vbr_mtrh?athAdjust(l.adjust,l.s[m],l.floor):l.adjust*l.s[m],p=a.width[n],v=0;v<3;v++){var g,S,M,w,R,B=0,A=p>>1;S=d/p,M=x;do{var k,T;B+=k=f[_]*f[_],M+=k<S?k:S,B+=T=f[++_]*f[_],M+=T<S?T:S,_++}while(0<--A);if(d<B&&o++,m==Pe.SBPSY_s)M<(R=d*i.nsPsy.shortfact[m])&&(M=R);if(g=0!=c?M:d,!e.ATHonly&&!e.ATHshort)if(0<(w=t.en.s[m][v]))R=B*t.thm.s[m][v]*h/w,0!=c&&(R*=i.nsPsy.shortfact[m]),g<R&&(g=R);s[r++]=0!=c?g:g*i.nsPsy.shortfact[m]}e.useTemporal&&(s[r-3]>s[r-3+1]&&(s[r-3+1]+=(s[r-3]-s[r-3+1])*i.decay),s[r-3+1]>s[r-3+2]&&(s[r-3+2]+=(s[r-3+1]-s[r-3+2])*i.decay))}return o},this.calc_noise_core=function(e,t,a,s){var n=0,r=t.s,i=e.l3_enc;if(r>e.count1)for(;0!=a--;){o=e.xr[r],r++,n+=o*o,o=e.xr[r],r++,n+=o*o}else if(r>e.big_values){var _=Ae(2);for(_[0]=0,_[1]=s;0!=a--;){o=Math.abs(e.xr[r])-_[i[r]],r++,n+=o*o,o=Math.abs(e.xr[r])-_[i[r]],r++,n+=o*o}}else for(;0!=a--;){var o;o=Math.abs(e.xr[r])-m[i[r]]*s,r++,n+=o*o,o=Math.abs(e.xr[r])-m[i[r]]*s,r++,n+=o*o}return t.s=r,n},this.calc_noise=function(e,t,a,s,n){var r,i,_=0,o=0,l=0,f=0,c=0,h=-20,u=0,b=e.scalefac,m=0;for(r=s.over_SSD=0;r<e.psymax;r++){var p,v=e.global_gain-(b[m++]+(0!=e.preflag?w[r]:0)<<e.scalefac_scale+1)-8*e.subblock_gain[e.window[r]],d=0;if(null!=n&&n.step[r]==v)d=n.noise[r],u+=e.width[r],a[_++]=d/t[o++],d=n.noise_log[r];else{var g,S=R[v+y.Q_MAX2];if(i=e.width[r]>>1,u+e.width[r]>e.max_nonzero_coeff)i=0<(g=e.max_nonzero_coeff-u+1)?g>>1:0;var M=new 
B(u);d=this.calc_noise_core(e,M,i,S),u=M.s,null!=n&&(n.step[r]=v,n.noise[r]=d),d=a[_++]=d/t[o++],d=ee.FAST_LOG10(Math.max(d,1e-20)),null!=n&&(n.noise_log[r]=d)}if(null!=n&&(n.global_gain=e.global_gain),c+=d,0<d)p=Math.max(0|10*d+.5,1),s.over_SSD+=p*p,l++,f+=d;h=Math.max(h,d)}return s.over_count=l,s.tot_noise=c,s.over_noise=f,s.max_noise=h,l},this.set_pinfo=function(e,t,a,s,n){var r,i,_,o,l,f=e.internal_flags,c=0==t.scalefac_scale?.5:1,h=t.scalefac,u=Ae(z.SFBMAX),b=Ae(z.SFBMAX),m=new T;calc_xmin(e,a,t,u),calc_noise(t,u,b,m,null);var p=0;for(i=t.sfb_lmax,t.block_type!=Pe.SHORT_TYPE&&0==t.mixed_block_flag&&(i=22),r=0;r<i;r++){var v=f.scalefac_band.l[r],d=(g=f.scalefac_band.l[r+1])-v;for(o=0;p<g;p++)o+=t.xr[p]*t.xr[p];o/=d,l=1e15,f.pinfo.en[s][n][r]=l*o,f.pinfo.xfsf[s][n][r]=l*u[r]*b[r]/d,0<a.en.l[r]&&!e.ATHonly?o/=a.en.l[r]:o=0,f.pinfo.thr[s][n][r]=l*Math.max(o*a.thm.l[r],f.ATH.l[r]),(f.pinfo.LAMEsfb[s][n][r]=0)!=t.preflag&&11<=r&&(f.pinfo.LAMEsfb[s][n][r]=-c*w[r]),r<Pe.SBPSY_l&&(f.pinfo.LAMEsfb[s][n][r]-=c*h[r])}if(t.block_type==Pe.SHORT_TYPE)for(i=r,r=t.sfb_smin;r<Pe.SBMAX_s;r++){v=f.scalefac_band.s[r],d=(g=f.scalefac_band.s[r+1])-v;for(var g,S=0;S<3;S++){for(o=0,_=v;_<g;_++)o+=t.xr[p]*t.xr[p],p++;o=Math.max(o/d,1e-20),l=1e15,f.pinfo.en_s[s][n][3*r+S]=l*o,f.pinfo.xfsf_s[s][n][3*r+S]=l*u[i]*b[i]/d,0<a.en.s[r][S]?o/=a.en.s[r][S]:o=0,(e.ATHonly||e.ATHshort)&&(o=0),f.pinfo.thr_s[s][n][3*r+S]=l*Math.max(o*a.thm.s[r][S],f.ATH.s[r]),f.pinfo.LAMEsfb_s[s][n][3*r+S]=-2*t.subblock_gain[S],r<Pe.SBPSY_s&&(f.pinfo.LAMEsfb_s[s][n][3*r+S]-=c*h[i]),i++}}f.pinfo.LAMEqss[s][n]=t.global_gain,f.pinfo.LAMEmainbits[s][n]=t.part2_3_length+t.part2_length,f.pinfo.LAMEsfbits[s][n]=t.part2_length,f.pinfo.over[s][n]=m.over_count,f.pinfo.max_noise[s][n]=10*m.max_noise,f.pinfo.over_noise[s][n]=10*m.over_noise,f.pinfo.tot_noise[s][n]=10*m.tot_noise,f.pinfo.over_SSD[s][n]=m.over_SSD}}function x(){this.xr=Ae(576),this.l3_enc=Be(576),this.scalefac=Be(z.SFBMAX),this.xrpow_max=0,this.part2_3_length=0,this.big_values=0,this.count1=0,this.global_gain=0,this.scalefac_compress=0,this.block_type=0,this.mixed_block_flag=0,this.table_select=Be(3),this.subblock_gain=Be(4),this.region0_count=0,this.region1_count=0,this.preflag=0,this.scalefac_scale=0,this.count1table_select=0,this.part2_length=0,this.sfb_lmax=0,this.sfb_smin=0,this.psy_lmax=0,this.sfbmax=0,this.psymax=0,this.sfbdivide=0,this.width=Be(z.SFBMAX),this.window=Be(z.SFBMAX),this.count1bits=0,this.sfb_partition_table=null,this.slen=Be(4),this.max_nonzero_coeff=0;var a=this;function s(e){return new Int32Array(e)}this.assign=function(e){var t;a.xr=(t=e.xr,new 
Float32Array(t)),a.l3_enc=s(e.l3_enc),a.scalefac=s(e.scalefac),a.xrpow_max=e.xrpow_max,a.part2_3_length=e.part2_3_length,a.big_values=e.big_values,a.count1=e.count1,a.global_gain=e.global_gain,a.scalefac_compress=e.scalefac_compress,a.block_type=e.block_type,a.mixed_block_flag=e.mixed_block_flag,a.table_select=s(e.table_select),a.subblock_gain=s(e.subblock_gain),a.region0_count=e.region0_count,a.region1_count=e.region1_count,a.preflag=e.preflag,a.scalefac_scale=e.scalefac_scale,a.count1table_select=e.count1table_select,a.part2_length=e.part2_length,a.sfb_lmax=e.sfb_lmax,a.sfb_smin=e.sfb_smin,a.psy_lmax=e.psy_lmax,a.sfbmax=e.sfbmax,a.psymax=e.psymax,a.sfbdivide=e.sfbdivide,a.width=s(e.width),a.window=s(e.window),a.count1bits=e.count1bits,a.sfb_partition_table=e.sfb_partition_table.slice(0),a.slen=s(e.slen),a.max_nonzero_coeff=e.max_nonzero_coeff}}C.t1HB=[1,1,1,0],C.t2HB=[1,2,1,3,1,1,3,2,0],C.t3HB=[3,2,1,1,1,1,3,2,0],C.t5HB=[1,2,6,5,3,1,4,4,7,5,7,1,6,1,1,0],C.t6HB=[7,3,5,1,6,2,3,2,5,4,4,1,3,3,2,0],C.t7HB=[1,2,10,19,16,10,3,3,7,10,5,3,11,4,13,17,8,4,12,11,18,15,11,2,7,6,9,14,3,1,6,4,5,3,2,0],C.t8HB=[3,4,6,18,12,5,5,1,2,16,9,3,7,3,5,14,7,3,19,17,15,13,10,4,13,5,8,11,5,1,12,4,4,1,1,0],C.t9HB=[7,5,9,14,15,7,6,4,5,5,6,7,7,6,8,8,8,5,15,6,9,10,5,1,11,7,9,6,4,1,14,4,6,2,6,0],C.t10HB=[1,2,10,23,35,30,12,17,3,3,8,12,18,21,12,7,11,9,15,21,32,40,19,6,14,13,22,34,46,23,18,7,20,19,33,47,27,22,9,3,31,22,41,26,21,20,5,3,14,13,10,11,16,6,5,1,9,8,7,8,4,4,2,0],C.t11HB=[3,4,10,24,34,33,21,15,5,3,4,10,32,17,11,10,11,7,13,18,30,31,20,5,25,11,19,59,27,18,12,5,35,33,31,58,30,16,7,5,28,26,32,19,17,15,8,14,14,12,9,13,14,9,4,1,11,4,6,6,6,3,2,0],C.t12HB=[9,6,16,33,41,39,38,26,7,5,6,9,23,16,26,11,17,7,11,14,21,30,10,7,17,10,15,12,18,28,14,5,32,13,22,19,18,16,9,5,40,17,31,29,17,13,4,2,27,12,11,15,10,7,4,1,27,12,8,12,6,3,1,0],C.t13HB=[1,5,14,21,34,51,46,71,42,52,68,52,67,44,43,19,3,4,12,19,31,26,44,33,31,24,32,24,31,35,22,14,15,13,23,36,59,49,77,65,29,40,30,40,27,33,42,16,22,20,37,61,56,79,73,64,43,76,56,37,26,31,25,14,35,16,60,57,97,75,114,91,54,73,55,41,48,53,23,24,58,27,50,96,76,70,93,84,77,58,79,29,74,49,41,17,47,45,78,74,115,94,90,79,69,83,71,50,59,38,36,15,72,34,56,95,92,85,91,90,86,73,77,65,51,44,43,42,43,20,30,44,55,78,72,87,78,61,46,54,37,30,20,16,53,25,41,37,44,59,54,81,66,76,57,54,37,18,39,11,35,33,31,57,42,82,72,80,47,58,55,21,22,26,38,22,53,25,23,38,70,60,51,36,55,26,34,23,27,14,9,7,34,32,28,39,49,75,30,52,48,40,52,28,18,17,9,5,45,21,34,64,56,50,49,45,31,19,12,15,10,7,6,3,48,23,20,39,36,35,53,21,16,23,13,10,6,1,4,2,16,15,17,27,25,20,29,11,17,12,16,8,1,1,0,1],C.t15HB=[7,12,18,53,47,76,124,108,89,123,108,119,107,81,122,63,13,5,16,27,46,36,61,51,42,70,52,83,65,41,59,36,19,17,15,24,41,34,59,48,40,64,50,78,62,80,56,33,29,28,25,43,39,63,55,93,76,59,93,72,54,75,50,29,52,22,42,40,67,57,95,79,72,57,89,69,49,66,46,27,77,37,35,66,58,52,91,74,62,48,79,63,90,62,40,38,125,32,60,56,50,92,78,65,55,87,71,51,73,51,70,30,109,53,49,94,88,75,66,122,91,73,56,42,64,44,21,25,90,43,41,77,73,63,56,92,77,66,47,67,48,53,36,20,71,34,67,60,58,49,88,76,67,106,71,54,38,39,23,15,109,53,51,47,90,82,58,57,48,72,57,41,23,27,62,9,86,42,40,37,70,64,52,43,70,55,42,25,29,18,11,11,118,68,30,55,50,46,74,65,49,39,24,16,22,13,14,7,91,44,39,38,34,63,52,45,31,52,28,19,14,8,9,3,123,60,58,53,47,43,32,22,37,24,17,12,15,10,2,1,71,37,34,30,28,20,17,26,21,16,10,6,8,6,2,0],C.t16HB=[1,5,14,44,74,63,110,93,172,149,138,242,225,195,376,17,3,4,12,20,35,62,53,47,83,75,68,119,201,107,207,9,15,13,23,38,67,58,103,90,161,72,127,117,110,209,206,16,45,21,39,69,64,114,99,87,
158,140,252,212,199,387,365,26,75,36,68,65,115,101,179,164,155,264,246,226,395,382,362,9,66,30,59,56,102,185,173,265,142,253,232,400,388,378,445,16,111,54,52,100,184,178,160,133,257,244,228,217,385,366,715,10,98,48,91,88,165,157,148,261,248,407,397,372,380,889,884,8,85,84,81,159,156,143,260,249,427,401,392,383,727,713,708,7,154,76,73,141,131,256,245,426,406,394,384,735,359,710,352,11,139,129,67,125,247,233,229,219,393,743,737,720,885,882,439,4,243,120,118,115,227,223,396,746,742,736,721,712,706,223,436,6,202,224,222,218,216,389,386,381,364,888,443,707,440,437,1728,4,747,211,210,208,370,379,734,723,714,1735,883,877,876,3459,865,2,377,369,102,187,726,722,358,711,709,866,1734,871,3458,870,434,0,12,10,7,11,10,17,11,9,13,12,10,7,5,3,1,3],C.t24HB=[15,13,46,80,146,262,248,434,426,669,653,649,621,517,1032,88,14,12,21,38,71,130,122,216,209,198,327,345,319,297,279,42,47,22,41,74,68,128,120,221,207,194,182,340,315,295,541,18,81,39,75,70,134,125,116,220,204,190,178,325,311,293,271,16,147,72,69,135,127,118,112,210,200,188,352,323,306,285,540,14,263,66,129,126,119,114,214,202,192,180,341,317,301,281,262,12,249,123,121,117,113,215,206,195,185,347,330,308,291,272,520,10,435,115,111,109,211,203,196,187,353,332,313,298,283,531,381,17,427,212,208,205,201,193,186,177,169,320,303,286,268,514,377,16,335,199,197,191,189,181,174,333,321,305,289,275,521,379,371,11,668,184,183,179,175,344,331,314,304,290,277,530,383,373,366,10,652,346,171,168,164,318,309,299,287,276,263,513,375,368,362,6,648,322,316,312,307,302,292,284,269,261,512,376,370,364,359,4,620,300,296,294,288,282,273,266,515,380,374,369,365,361,357,2,1033,280,278,274,267,264,259,382,378,372,367,363,360,358,356,0,43,20,19,17,15,13,11,9,7,6,4,7,5,3,1,3],C.t32HB=[1,10,8,20,12,20,16,32,14,12,24,0,28,16,24,16],C.t33HB=[15,28,26,48,22,40,36,64,14,24,20,32,12,16,8,0],C.t1l=[1,4,3,5],C.t2l=[1,4,7,4,5,7,6,7,8],C.t3l=[2,3,7,4,4,7,6,7,8],C.t5l=[1,4,7,8,4,5,8,9,7,8,9,10,8,8,9,10],C.t6l=[3,4,6,8,4,4,6,7,5,6,7,8,7,7,8,9],C.t7l=[1,4,7,9,9,10,4,6,8,9,9,10,7,7,9,10,10,11,8,9,10,11,11,11,8,9,10,11,11,12,9,10,11,12,12,12],C.t8l=[2,4,7,9,9,10,4,4,6,10,10,10,7,6,8,10,10,11,9,10,10,11,11,12,9,9,10,11,12,12,10,10,11,11,13,13],C.t9l=[3,4,6,7,9,10,4,5,6,7,8,10,5,6,7,8,9,10,7,7,8,9,9,10,8,8,9,9,10,11,9,9,10,10,11,11],C.t10l=[1,4,7,9,10,10,10,11,4,6,8,9,10,11,10,10,7,8,9,10,11,12,11,11,8,9,10,11,12,12,11,12,9,10,11,12,12,12,12,12,10,11,12,12,13,13,12,13,9,10,11,12,12,12,13,13,10,10,11,12,12,13,13,13],C.t11l=[2,4,6,8,9,10,9,10,4,5,6,8,10,10,9,10,6,7,8,9,10,11,10,10,8,8,9,11,10,12,10,11,9,10,10,11,11,12,11,12,9,10,11,12,12,13,12,13,9,9,9,10,11,12,12,12,9,9,10,11,12,12,12,12],C.t12l=[4,4,6,8,9,10,10,10,4,5,6,7,9,9,10,10,6,6,7,8,9,10,9,10,7,7,8,8,9,10,10,10,8,8,9,9,10,10,10,11,9,9,10,10,10,11,10,11,9,9,9,10,10,11,11,12,10,10,10,11,11,11,11,12],C.t13l=[1,5,7,8,9,10,10,11,10,11,12,12,13,13,14,14,4,6,8,9,10,10,11,11,11,11,12,12,13,14,14,14,7,8,9,10,11,11,12,12,11,12,12,13,13,14,15,15,8,9,10,11,11,12,12,12,12,13,13,13,13,14,15,15,9,9,11,11,12,12,13,13,12,13,13,14,14,15,15,16,10,10,11,12,12,12,13,13,13,13,14,13,15,15,16,16,10,11,12,12,13,13,13,13,13,14,14,14,15,15,16,16,11,11,12,13,13,13,14,14,14,14,15,15,15,16,18,18,10,10,11,12,12,13,13,14,14,14,14,15,15,16,17,17,11,11,12,12,13,13,13,15,14,15,15,16,16,16,18,17,11,12,12,13,13,14,14,15,14,15,16,15,16,17,18,19,12,12,12,13,14,14,14,14,15,15,15,16,17,17,17,18,12,13,13,14,14,15,14,15,16,16,17,17,17,18,18,18,13,13,14,15,15,15,16,16,16,16,16,17,18,17,18,18,14,14,14,15,15,15,17,16,16,19,17,17,17,19,18,18,13,14,15,16,16,16,17,16,17,17,18,18,21,20,21,1
8],C.t15l=[3,5,6,8,8,9,10,10,10,11,11,12,12,12,13,14,5,5,7,8,9,9,10,10,10,11,11,12,12,12,13,13,6,7,7,8,9,9,10,10,10,11,11,12,12,13,13,13,7,8,8,9,9,10,10,11,11,11,12,12,12,13,13,13,8,8,9,9,10,10,11,11,11,11,12,12,12,13,13,13,9,9,9,10,10,10,11,11,11,11,12,12,13,13,13,14,10,9,10,10,10,11,11,11,11,12,12,12,13,13,14,14,10,10,10,11,11,11,11,12,12,12,12,12,13,13,13,14,10,10,10,11,11,11,11,12,12,12,12,13,13,14,14,14,10,10,11,11,11,11,12,12,12,13,13,13,13,14,14,14,11,11,11,11,12,12,12,12,12,13,13,13,13,14,15,14,11,11,11,11,12,12,12,12,13,13,13,13,14,14,14,15,12,12,11,12,12,12,13,13,13,13,13,13,14,14,15,15,12,12,12,12,12,13,13,13,13,14,14,14,14,14,15,15,13,13,13,13,13,13,13,13,14,14,14,14,15,15,14,15,13,13,13,13,13,13,13,14,14,14,14,14,15,15,15,15],C.t16_5l=[1,5,7,9,10,10,11,11,12,12,12,13,13,13,14,11,4,6,8,9,10,11,11,11,12,12,12,13,14,13,14,11,7,8,9,10,11,11,12,12,13,12,13,13,13,14,14,12,9,9,10,11,11,12,12,12,13,13,14,14,14,15,15,13,10,10,11,11,12,12,13,13,13,14,14,14,15,15,15,12,10,10,11,11,12,13,13,14,13,14,14,15,15,15,16,13,11,11,11,12,13,13,13,13,14,14,14,14,15,15,16,13,11,11,12,12,13,13,13,14,14,15,15,15,15,17,17,13,11,12,12,13,13,13,14,14,15,15,15,15,16,16,16,13,12,12,12,13,13,14,14,15,15,15,15,16,15,16,15,14,12,13,12,13,14,14,14,14,15,16,16,16,17,17,16,13,13,13,13,13,14,14,15,16,16,16,16,16,16,15,16,14,13,14,14,14,14,15,15,15,15,17,16,16,16,16,18,14,15,14,14,14,15,15,16,16,16,18,17,17,17,19,17,14,14,15,13,14,16,16,15,16,16,17,18,17,19,17,16,14,11,11,11,12,12,13,13,13,14,14,14,14,14,14,14,12],C.t16l=[1,5,7,9,10,10,11,11,12,12,12,13,13,13,14,10,4,6,8,9,10,11,11,11,12,12,12,13,14,13,14,10,7,8,9,10,11,11,12,12,13,12,13,13,13,14,14,11,9,9,10,11,11,12,12,12,13,13,14,14,14,15,15,12,10,10,11,11,12,12,13,13,13,14,14,14,15,15,15,11,10,10,11,11,12,13,13,14,13,14,14,15,15,15,16,12,11,11,11,12,13,13,13,13,14,14,14,14,15,15,16,12,11,11,12,12,13,13,13,14,14,15,15,15,15,17,17,12,11,12,12,13,13,13,14,14,15,15,15,15,16,16,16,12,12,12,12,13,13,14,14,15,15,15,15,16,15,16,15,13,12,13,12,13,14,14,14,14,15,16,16,16,17,17,16,12,13,13,13,13,14,14,15,16,16,16,16,16,16,15,16,13,13,14,14,14,14,15,15,15,15,17,16,16,16,16,18,13,15,14,14,14,15,15,16,16,16,18,17,17,17,19,17,13,14,15,13,14,16,16,15,16,16,17,18,17,19,17,16,13,10,10,10,11,11,12,12,12,13,13,13,13,13,13,13,10],C.t24l=[4,5,7,8,9,10,10,11,11,12,12,12,12,12,13,10,5,6,7,8,9,10,10,11,11,11,12,12,12,12,12,10,7,7,8,9,9,10,10,11,11,11,11,12,12,12,13,9,8,8,9,9,10,10,10,11,11,11,11,12,12,12,12,9,9,9,9,10,10,10,10,11,11,11,12,12,12,12,13,9,10,9,10,10,10,10,11,11,11,11,12,12,12,12,12,9,10,10,10,10,10,11,11,11,11,12,12,12,12,12,13,9,11,10,10,10,11,11,11,11,12,12,12,12,12,13,13,10,11,11,11,11,11,11,11,11,11,12,12,12,12,13,13,10,11,11,11,11,11,11,11,12,12,12,12,12,13,13,13,10,12,11,11,11,11,12,12,12,12,12,12,13,13,13,13,10,12,12,11,11,11,12,12,12,12,12,12,13,13,13,13,10,12,12,12,12,12,12,12,12,12,12,13,13,13,13,13,10,12,12,12,12,12,12,12,12,13,13,13,13,13,13,13,10,13,12,12,12,12,12,12,13,13,13,13,13,13,13,13,10,9,9,9,9,9,9,9,9,9,9,9,10,10,10,10,6],C.t32l=[1,5,5,7,5,8,7,9,5,7,7,9,7,9,9,10],C.t33l=[4,5,5,6,5,6,6,7,5,6,6,7,6,7,7,8],C.ht=[new e(0,0,null,null),new e(2,0,C.t1HB,C.t1l),new e(3,0,C.t2HB,C.t2l),new e(3,0,C.t3HB,C.t3l),new e(0,0,null,null),new e(4,0,C.t5HB,C.t5l),new e(4,0,C.t6HB,C.t6l),new e(6,0,C.t7HB,C.t7l),new e(6,0,C.t8HB,C.t8l),new e(6,0,C.t9HB,C.t9l),new e(8,0,C.t10HB,C.t10l),new e(8,0,C.t11HB,C.t11l),new e(8,0,C.t12HB,C.t12l),new e(16,0,C.t13HB,C.t13l),new e(0,0,null,C.t16_5l),new e(16,0,C.t15HB,C.t15l),new e(1,1,C.t16HB,C.t16l),new e(2,3,C.t16HB,C.t16l),new 
e(3,7,C.t16HB,C.t16l),new e(4,15,C.t16HB,C.t16l),new e(6,63,C.t16HB,C.t16l),new e(8,255,C.t16HB,C.t16l),new e(10,1023,C.t16HB,C.t16l),new e(13,8191,C.t16HB,C.t16l),new e(4,15,C.t24HB,C.t24l),new e(5,31,C.t24HB,C.t24l),new e(6,63,C.t24HB,C.t24l),new e(7,127,C.t24HB,C.t24l),new e(8,255,C.t24HB,C.t24l),new e(9,511,C.t24HB,C.t24l),new e(11,2047,C.t24HB,C.t24l),new e(13,8191,C.t24HB,C.t24l),new e(0,0,C.t32HB,C.t32l),new e(0,0,C.t33HB,C.t33l)],C.largetbl=[65540,327685,458759,589832,655369,655370,720906,720907,786443,786444,786444,851980,851980,851980,917517,655370,262149,393222,524295,589832,655369,720906,720906,720907,786443,786443,786444,851980,917516,851980,917516,655370,458759,524295,589832,655369,720905,720906,786442,786443,851979,786443,851979,851980,851980,917516,917517,720905,589832,589832,655369,720905,720906,786442,786442,786443,851979,851979,917515,917516,917516,983052,983052,786441,655369,655369,720905,720906,786442,786442,851978,851979,851979,917515,917516,917516,983052,983052,983053,720905,655370,655369,720906,720906,786442,851978,851979,917515,851979,917515,917516,983052,983052,983052,1048588,786441,720906,720906,720906,786442,851978,851979,851979,851979,917515,917516,917516,917516,983052,983052,1048589,786441,720907,720906,786442,786442,851979,851979,851979,917515,917516,983052,983052,983052,983052,1114125,1114125,786442,720907,786443,786443,851979,851979,851979,917515,917515,983051,983052,983052,983052,1048588,1048589,1048589,786442,786443,786443,786443,851979,851979,917515,917515,983052,983052,983052,983052,1048588,983053,1048589,983053,851978,786444,851979,786443,851979,917515,917516,917516,917516,983052,1048588,1048588,1048589,1114125,1114125,1048589,786442,851980,851980,851979,851979,917515,917516,983052,1048588,1048588,1048588,1048588,1048589,1048589,983053,1048589,851978,851980,917516,917516,917516,917516,983052,983052,983052,983052,1114124,1048589,1048589,1048589,1048589,1179661,851978,983052,917516,917516,917516,983052,983052,1048588,1048588,1048589,1179661,1114125,1114125,1114125,1245197,1114125,851978,917517,983052,851980,917516,1048588,1048588,983052,1048589,1048589,1114125,1179661,1114125,1245197,1114125,1048589,851978,655369,655369,655369,720905,720905,786441,786441,786441,851977,851977,851977,851978,851978,851978,851978,655366],C.table23=[65538,262147,458759,262148,327684,458759,393222,458759,524296],C.table56=[65539,262148,458758,524296,262148,327684,524294,589831,458757,524294,589831,655368,524295,524295,589832,655369],C.bitrate_table=[[0,8,16,24,32,40,48,56,64,80,96,112,128,144,160,-1],[0,32,40,48,56,64,80,96,112,128,160,192,224,256,320,-1],[0,8,16,24,32,40,48,56,64,-1,-1,-1,-1,-1,-1,-1]],C.samplerate_table=[[22050,24e3,16e3,-1],[44100,48e3,32e3,-1],[11025,12e3,8e3,-1]],C.scfsi_band=[0,6,11,16,21],y.Q_MAX=257,y.Q_MAX2=116,y.LARGE_BITS=1e5,y.IXMAX_VAL=8206;var z={};function w(){var v,g,M;this.rv=null,this.qupvt=null;var w,n=new function(){this.setModules=function(e,t){}};function R(e){this.ordinal=e}function _(e){for(var t=0;t<e.sfbmax;t++)if(e.scalefac[t]+e.subblock_gain[e.window[t]]==0)return!1;return!0}function B(e,t,a,s,n){var r;switch(e){default:case 9:0<t.over_count?(r=a.over_SSD<=t.over_SSD,a.over_SSD==t.over_SSD&&(r=a.bits<t.bits)):r=a.max_noise<0&&10*a.max_noise+a.bits<=10*t.max_noise+t.bits;break;case 0:r=a.over_count<t.over_count||a.over_count==t.over_count&&a.over_noise<t.over_noise||a.over_count==t.over_count&&j.EQ(a.over_noise,t.over_noise)&&a.tot_noise<t.tot_noise;break;case 8:a.max_noise=function(e,t){for(var 
a,s=1e-37,n=0;n<t.psymax;n++)s+=(a=e[n],ee.FAST_LOG10(.368+.632*a*a*a));return Math.max(1e-20,s)}(n,s);case 1:r=a.max_noise<t.max_noise;break;case 2:r=a.tot_noise<t.tot_noise;break;case 3:r=a.tot_noise<t.tot_noise&&a.max_noise<t.max_noise;break;case 4:r=a.max_noise<=0&&.2<t.max_noise||a.max_noise<=0&&t.max_noise<0&&t.max_noise>a.max_noise-.2&&a.tot_noise<t.tot_noise||a.max_noise<=0&&0<t.max_noise&&t.max_noise>a.max_noise-.2&&a.tot_noise<t.tot_noise+t.over_noise||0<a.max_noise&&-.05<t.max_noise&&t.max_noise>a.max_noise-.1&&a.tot_noise+a.over_noise<t.tot_noise+t.over_noise||0<a.max_noise&&-.1<t.max_noise&&t.max_noise>a.max_noise-.15&&a.tot_noise+a.over_noise+a.over_noise<t.tot_noise+t.over_noise+t.over_noise;break;case 5:r=a.over_noise<t.over_noise||j.EQ(a.over_noise,t.over_noise)&&a.tot_noise<t.tot_noise;break;case 6:r=a.over_noise<t.over_noise||j.EQ(a.over_noise,t.over_noise)&&(a.max_noise<t.max_noise||j.EQ(a.max_noise,t.max_noise)&&a.tot_noise<=t.tot_noise);break;case 7:r=a.over_count<t.over_count||a.over_noise<t.over_noise}return 0==t.over_count&&(r=r&&a.bits<t.bits),r}function A(e,t,a,s,n){var r=e.internal_flags;!function(e,t,a,s,n){var r,i=e.internal_flags;r=0==t.scalefac_scale?1.2968395546510096:1.6817928305074292;for(var _=0,o=0;o<t.sfbmax;o++)_<a[o]&&(_=a[o]);var l=i.noise_shaping_amp;switch(3==l&&(l=n?2:1),l){case 2:break;case 1:1<_?_=Math.pow(_,.5):_*=.95;break;case 0:default:1<_?_=1:_*=.95}var f=0;for(o=0;o<t.sfbmax;o++){var c,h=t.width[o];if(f+=h,!(a[o]<_)){if(0!=(2&i.substep_shaping)&&(i.pseudohalf[o]=0==i.pseudohalf[o]?1:0,0==i.pseudohalf[o]&&2==i.noise_shaping_amp))return;for(t.scalefac[o]++,c=-h;c<0;c++)s[f+c]*=r,s[f+c]>t.xrpow_max&&(t.xrpow_max=s[f+c]);if(2==i.noise_shaping_amp)return}}}(e,t,a,s,n);var i=_(t);return!i&&(!(i=2==r.mode_gr?w.scale_bitcount(t):w.scale_bitcount_lsf(r,t))||(1<r.noise_shaping&&(Te.fill(r.pseudohalf,0),0==t.scalefac_scale?(!function(e,t){for(var a=0,s=0;s<e.sfbmax;s++){var n=e.width[s],r=e.scalefac[s];if(0!=e.preflag&&(r+=M.pretab[s]),a+=n,0!=(1&r)){r++;for(var i=-n;i<0;i++)t[a+i]*=1.2968395546510096,t[a+i]>e.xrpow_max&&(e.xrpow_max=t[a+i])}e.scalefac[s]=r>>1}e.preflag=0,e.scalefac_scale=1}(t,s),i=!1):t.block_type==Pe.SHORT_TYPE&&0<r.subblock_gain&&(i=function(e,t,a){var s,n=t.scalefac;for(s=0;s<t.sfb_lmax;s++)if(16<=n[s])return!0;for(var r=0;r<3;r++){var i=0,_=0;for(s=t.sfb_lmax+r;s<t.sfbdivide;s+=3)i<n[s]&&(i=n[s]);for(;s<t.sfbmax;s+=3)_<n[s]&&(_=n[s]);if(!(i<16&&_<8)){if(7<=t.subblock_gain[r])return!0;t.subblock_gain[r]++;var o=e.scalefac_band.l[t.sfb_lmax];for(s=t.sfb_lmax+r;s<t.sfbmax;s+=3){var l=t.width[s],f=n[s];if(0<=(f-=4>>t.scalefac_scale))n[s]=f,o+=3*l;else{n[s]=0;var c=210+(f<<t.scalefac_scale+1);u=M.IPOW20(c),o+=l*(r+1);for(var h=-l;h<0;h++)a[o+h]*=u,a[o+h]>t.xrpow_max&&(t.xrpow_max=a[o+h]);o+=l*(3-r-1)}}var u=M.IPOW20(202);for(o+=t.width[s]*(r+1),h=-t.width[s];h<0;h++)a[o+h]*=u,a[o+h]>t.xrpow_max&&(t.xrpow_max=a[o+h])}}return!1}(r,t,s)||_(t))),i||(i=2==r.mode_gr?w.scale_bitcount(t):w.scale_bitcount_lsf(r,t)),!i))}this.setModules=function(e,t,a,s){v=e,g=t,this.rv=t,M=a,this.qupvt=a,w=s,n.setModules(M,w)},this.ms_convert=function(e,t){for(var a=0;a<576;++a){var s=e.tt[t][0].xr[a],n=e.tt[t][1].xr[a];e.tt[t][0].xr[a]=(s+n)*(.5*ee.SQRT2),e.tt[t][1].xr[a]=(s-n)*(.5*ee.SQRT2)}},this.init_xrpow=function(e,t,a){var s=0,n=0|t.max_nonzero_coeff;if(t.xrpow_max=0,Te.fill(a,n,576,0),1e-20<(s=function(e,t,a,s){for(var n=s=0;n<=a;++n){var r=Math.abs(e.xr[n]);s+=r,t[n]=Math.sqrt(r*Math.sqrt(r)),t[n]>e.xrpow_max&&(e.xrpow_max=t[n])}return 
s}(t,a,n,s))){var r=0;0!=(2&e.substep_shaping)&&(r=1);for(var i=0;i<t.psymax;i++)e.pseudohalf[i]=r;return!0}return Te.fill(t.l3_enc,0,576,0),!1},this.init_outer_loop=function(e,t){t.part2_3_length=0,t.big_values=0,t.count1=0,t.global_gain=210,t.scalefac_compress=0,t.table_select[0]=0,t.table_select[1]=0,t.table_select[2]=0,t.subblock_gain[0]=0,t.subblock_gain[1]=0,t.subblock_gain[2]=0,t.subblock_gain[3]=0,t.region0_count=0,t.region1_count=0,t.preflag=0,t.scalefac_scale=0,t.count1table_select=0,t.part2_length=0,t.sfb_lmax=Pe.SBPSY_l,t.sfb_smin=Pe.SBPSY_s,t.psy_lmax=e.sfb21_extra?Pe.SBMAX_l:Pe.SBPSY_l,t.psymax=t.psy_lmax,t.sfbmax=t.sfb_lmax,t.sfbdivide=11;for(var a=0;a<Pe.SBMAX_l;a++)t.width[a]=e.scalefac_band.l[a+1]-e.scalefac_band.l[a],t.window[a]=3;if(t.block_type==Pe.SHORT_TYPE){var s=Ae(576);t.sfb_smin=0,(t.sfb_lmax=0)!=t.mixed_block_flag&&(t.sfb_smin=3,t.sfb_lmax=2*e.mode_gr+4),t.psymax=t.sfb_lmax+3*((e.sfb21_extra?Pe.SBMAX_s:Pe.SBPSY_s)-t.sfb_smin),t.sfbmax=t.sfb_lmax+3*(Pe.SBPSY_s-t.sfb_smin),t.sfbdivide=t.sfbmax-18,t.psy_lmax=t.sfb_lmax;var n=e.scalefac_band.l[t.sfb_lmax];$.arraycopy(t.xr,0,s,0,576);for(a=t.sfb_smin;a<Pe.SBMAX_s;a++)for(var r=e.scalefac_band.s[a],i=e.scalefac_band.s[a+1],_=0;_<3;_++)for(var o=r;o<i;o++)t.xr[n++]=s[3*o+_];var l=t.sfb_lmax;for(a=t.sfb_smin;a<Pe.SBMAX_s;a++)t.width[l]=t.width[l+1]=t.width[l+2]=e.scalefac_band.s[a+1]-e.scalefac_band.s[a],t.window[l]=0,t.window[l+1]=1,t.window[l+2]=2,l+=3}t.count1bits=0,t.sfb_partition_table=M.nr_of_sfb_block[0][0],t.slen[0]=0,t.slen[1]=0,t.slen[2]=0,t.slen[3]=0,t.max_nonzero_coeff=575,Te.fill(t.scalefac,0),function(e,t){var a=e.ATH,s=t.xr;if(t.block_type!=Pe.SHORT_TYPE)for(var n=!1,r=Pe.PSFB21-1;0<=r&&!n;r--){var i=e.scalefac_band.psfb21[r],_=e.scalefac_band.psfb21[r+1],o=M.athAdjust(a.adjust,a.psfb21[r],a.floor);1e-12<e.nsPsy.longfact[21]&&(o*=e.nsPsy.longfact[21]);for(var l=_-1;i<=l;l--){if(!(Math.abs(s[l])<o)){n=!0;break}s[l]=0}}else for(var f=0;f<3;f++)for(n=!1,r=Pe.PSFB12-1;0<=r&&!n;r--){_=(i=3*e.scalefac_band.s[12]+(e.scalefac_band.s[13]-e.scalefac_band.s[12])*f+(e.scalefac_band.psfb12[r]-e.scalefac_band.psfb12[0]))+(e.scalefac_band.psfb12[r+1]-e.scalefac_band.psfb12[r]);var c=M.athAdjust(a.adjust,a.psfb12[r],a.floor);for(1e-12<e.nsPsy.shortfact[12]&&(c*=e.nsPsy.shortfact[12]),l=_-1;i<=l;l--){if(!(Math.abs(s[l])<c)){n=!0;break}s[l]=0}}}(e,t)},R.BINSEARCH_NONE=new R(0),R.BINSEARCH_UP=new R(1),R.BINSEARCH_DOWN=new R(2),this.trancate_smallspectrums=function(e,t,a,s){var n=Ae(z.SFBMAX);if((0!=(4&e.substep_shaping)||t.block_type!=Pe.SHORT_TYPE)&&0==(128&e.substep_shaping)){M.calc_noise(t,a,n,new T,null);for(var r=0;r<576;r++){var i=0;0!=t.l3_enc[r]&&(i=Math.abs(t.xr[r])),s[r]=i}r=0;var _=8;t.block_type==Pe.SHORT_TYPE&&(_=6);do{var o,l,f,c,h=t.width[_];if(r+=h,!(1<=n[_]||(Te.sort(s,r-h,h),j.EQ(s[r-1],0)))){o=(1-n[_])*a[_],c=l=0;do{var u;for(f=1;c+f<h&&!j.NEQ(s[c+r-h],s[c+r+f-h]);f++);if(o<(u=s[c+r-h]*s[c+r-h]*f)){0!=c&&(l=s[c+r-h-1]);break}o-=u,c+=f}while(c<h);if(!j.EQ(l,0))for(;Math.abs(t.xr[r-h])<=l&&(t.l3_enc[r-h]=0),0<--h;);}}while(++_<t.psymax);t.part2_3_length=w.noquant_count_bits(e,t,null)}},this.outer_loop=function(e,t,a,s,n,r){var i=e.internal_flags,_=new x,o=Ae(576),l=Ae(z.SFBMAX),f=new T,c=new function(){this.global_gain=0,this.sfb_count1=0,this.step=Be(39),this.noise=Ae(39),this.noise_log=Ae(39)},h=9999999,u=!1,b=!1,m=0;if(function(e,t,a,s,n){var r,i=e.CurrentStep[s],_=!1,o=e.OldValue[s],l=R.BINSEARCH_NONE;for(t.global_gain=o,a-=t.part2_length;;){var 
f;if(r=w.count_bits(e,n,t,null),1==i||r==a)break;a<r?(l==R.BINSEARCH_DOWN&&(_=!0),_&&(i/=2),l=R.BINSEARCH_UP,f=i):(l==R.BINSEARCH_UP&&(_=!0),_&&(i/=2),l=R.BINSEARCH_DOWN,f=-i),t.global_gain+=f,t.global_gain<0&&(_=!(t.global_gain=0)),255<t.global_gain&&(t.global_gain=255,_=!0)}for(;a<r&&t.global_gain<255;)t.global_gain++,r=w.count_bits(e,n,t,null);e.CurrentStep[s]=4<=o-t.global_gain?4:2,e.OldValue[s]=t.global_gain,t.part2_3_length=r}(i,t,r,n,s),0==i.noise_shaping)return 100;M.calc_noise(t,a,l,f,c),f.bits=t.part2_3_length,_.assign(t);var p=0;for($.arraycopy(s,0,o,0,576);!u;){do{var v,d=new T,g=255;if(v=0!=(2&i.substep_shaping)?20:3,i.sfb21_extra){if(1<l[_.sfbmax])break;if(_.block_type==Pe.SHORT_TYPE&&(1<l[_.sfbmax+1]||1<l[_.sfbmax+2]))break}if(!A(e,_,l,s,b))break;0!=_.scalefac_scale&&(g=254);var S=r-_.part2_length;if(S<=0)break;for(;(_.part2_3_length=w.count_bits(i,s,_,c))>S&&_.global_gain<=g;)_.global_gain++;if(_.global_gain>g)break;if(0==f.over_count){for(;(_.part2_3_length=w.count_bits(i,s,_,c))>h&&_.global_gain<=g;)_.global_gain++;if(_.global_gain>g)break}if(M.calc_noise(_,a,l,d,c),d.bits=_.part2_3_length,0!=(B(t.block_type!=Pe.SHORT_TYPE?e.quant_comp:e.quant_comp_short,f,d,_,l)?1:0))h=t.part2_3_length,f=d,t.assign(_),p=0,$.arraycopy(s,0,o,0,576);else if(0==i.full_outer_loop){if(++p>v&&0==f.over_count)break;if(3==i.noise_shaping_amp&&b&&30<p)break;if(3==i.noise_shaping_amp&&b&&15<_.global_gain-m)break}}while(_.global_gain+_.scalefac_scale<255);3==i.noise_shaping_amp?b?u=!0:(_.assign(t),$.arraycopy(o,0,s,0,576),p=0,m=_.global_gain,b=!0):u=!0}return e.VBR==ye.vbr_rh||e.VBR==ye.vbr_mtrh?$.arraycopy(o,0,s,0,576):0!=(1&i.substep_shaping)&&trancate_smallspectrums(i,t,a,s),f.over_count},this.iteration_finish_one=function(e,t,a){var s=e.l3_side,n=s.tt[t][a];w.best_scalefac_store(e,t,a,s),1==e.use_best_huffman&&w.best_huffman_divide(e,n),g.ResvAdjust(e,n)},this.VBR_encode_granule=function(e,t,a,s,n,r,i){var _,o=e.internal_flags,l=new x,f=Ae(576),c=i,h=i+1,u=(i+r)/2,b=0,m=o.sfb21_extra;for(Te.fill(l.l3_enc,0);o.sfb21_extra=!(c-42<u)&&m,outer_loop(e,t,a,s,n,u)<=0?(b=1,h=t.part2_3_length,l.assign(t),$.arraycopy(s,0,f,0,576),_=(i=h-32)-r,u=(i+r)/2):(_=i-(r=u+32),u=(i+r)/2,0!=b&&(b=2,t.assign(l),$.arraycopy(f,0,s,0,576))),12<_;);o.sfb21_extra=m,2==b&&$.arraycopy(l.l3_enc,0,t.l3_enc,0,576)},this.get_framebits=function(e,t){var a=e.internal_flags;a.bitrate_index=a.VBR_min_bitrate;var s=v.getframebits(e);a.bitrate_index=1,s=v.getframebits(e);for(var n=1;n<=a.VBR_max_bitrate;n++){a.bitrate_index=n;var r=new F(s);t[n]=g.ResvFrameBegin(e,r),s=r.bits}},this.VBR_old_prepare=function(e,t,a,s,n,r,i,_,o){var l,f=e.internal_flags,c=0,h=1,u=0;f.bitrate_index=f.VBR_max_bitrate;var b=g.ResvFrameBegin(e,new F(0))/f.mode_gr;get_framebits(e,r);for(var m=0;m<f.mode_gr;m++){var p=M.on_pe(e,t,_[m],b,m,0);f.mode_ext==Pe.MPG_MD_MS_LR&&(ms_convert(f.l3_side,m),M.reduce_side(_[m],a[m],b,p));for(var v=0;v<f.channels_out;++v){var d=f.l3_side.tt[m][v];d.block_type!=Pe.SHORT_TYPE?(c=1.28/(1+Math.exp(3.5-t[m][v]/300))-.05,l=f.PSY.mask_adjust-c):(c=2.56/(1+Math.exp(3.5-t[m][v]/300))-.14,l=f.PSY.mask_adjust_short-c),f.masking_lower=Math.pow(10,.1*l),init_outer_loop(f,d),o[m][v]=M.calc_xmin(e,s[m][v],d,n[m][v]),0!=o[m][v]&&(h=0),i[m][v]=126,u+=_[m][v]}}for(m=0;m<f.mode_gr;m++)for(v=0;v<f.channels_out;v++)u>r[f.VBR_max_bitrate]&&(_[m][v]*=r[f.VBR_max_bitrate],_[m][v]/=u),i[m][v]>_[m][v]&&(i[m][v]=_[m][v]);return h},this.bitpressure_strategy=function(e,t,a,s){for(var n=0;n<e.mode_gr;n++)for(var r=0;r<e.channels_out;r++){for(var 
i=e.l3_side.tt[n][r],_=t[n][r],o=0,l=0;l<i.psy_lmax;l++)_[o++]*=1+.029*l*l/Pe.SBMAX_l/Pe.SBMAX_l;if(i.block_type==Pe.SHORT_TYPE)for(l=i.sfb_smin;l<Pe.SBMAX_s;l++)_[o++]*=1+.029*l*l/Pe.SBMAX_s/Pe.SBMAX_s,_[o++]*=1+.029*l*l/Pe.SBMAX_s/Pe.SBMAX_s,_[o++]*=1+.029*l*l/Pe.SBMAX_s/Pe.SBMAX_s;s[n][r]=0|Math.max(a[n][r],.9*s[n][r])}},this.VBR_new_prepare=function(e,t,a,s,n,r){var i,_=e.internal_flags,o=1,l=0,f=0;if(e.free_format){_.bitrate_index=0;c=new F(l);i=g.ResvFrameBegin(e,c),l=c.bits,n[0]=i}else{_.bitrate_index=_.VBR_max_bitrate;var c=new F(l);g.ResvFrameBegin(e,c),l=c.bits,get_framebits(e,n),i=n[_.VBR_max_bitrate]}for(var h=0;h<_.mode_gr;h++){M.on_pe(e,t,r[h],l,h,0),_.mode_ext==Pe.MPG_MD_MS_LR&&ms_convert(_.l3_side,h);for(var u=0;u<_.channels_out;++u){var b=_.l3_side.tt[h][u];_.masking_lower=Math.pow(10,.1*_.PSY.mask_adjust),init_outer_loop(_,b),0!=M.calc_xmin(e,a[h][u],b,s[h][u])&&(o=0),f+=r[h][u]}}for(h=0;h<_.mode_gr;h++)for(u=0;u<_.channels_out;u++)i<f&&(r[h][u]*=i,r[h][u]/=f);return o},this.calc_target_bits=function(e,t,a,s,n,r){var i,_,o,l,f=e.internal_flags,c=f.l3_side,h=0;f.bitrate_index=f.VBR_max_bitrate;var u=new F(h);for(r[0]=g.ResvFrameBegin(e,u),h=u.bits,f.bitrate_index=1,h=v.getframebits(e)-8*f.sideinfo_len,n[0]=h/(f.mode_gr*f.channels_out),h=e.VBR_mean_bitrate_kbps*e.framesize*1e3,0!=(1&f.substep_shaping)&&(h*=1.09),h/=e.out_samplerate,h-=8*f.sideinfo_len,h/=f.mode_gr*f.channels_out,(i=.93+.07*(11-e.compression_ratio)/5.5)<.9&&(i=.9),1<i&&(i=1),_=0;_<f.mode_gr;_++){var b=0;for(o=0;o<f.channels_out;o++){if(s[_][o]=int(i*h),700<t[_][o]){var m=int((t[_][o]-700)/1.4),p=c.tt[_][o];s[_][o]=int(i*h),p.block_type==Pe.SHORT_TYPE&&m<h/2&&(m=h/2),3*h/2<m?m=3*h/2:m<0&&(m=0),s[_][o]+=m}s[_][o]>Z.MAX_BITS_PER_CHANNEL&&(s[_][o]=Z.MAX_BITS_PER_CHANNEL),b+=s[_][o]}if(Z.MAX_BITS_PER_GRANULE<b)for(o=0;o<f.channels_out;++o)s[_][o]*=Z.MAX_BITS_PER_GRANULE,s[_][o]/=b}if(f.mode_ext==Pe.MPG_MD_MS_LR)for(_=0;_<f.mode_gr;_++)M.reduce_side(s[_],a[_],h*f.channels_out,Z.MAX_BITS_PER_GRANULE);for(_=l=0;_<f.mode_gr;_++)for(o=0;o<f.channels_out;o++)s[_][o]>Z.MAX_BITS_PER_CHANNEL&&(s[_][o]=Z.MAX_BITS_PER_CHANNEL),l+=s[_][o];if(l>r[0])for(_=0;_<f.mode_gr;_++)for(o=0;o<f.channels_out;o++)s[_][o]*=r[0],s[_][o]/=l}}function Y(){this.thm=new i,this.en=new i}function Pe(){var E=Pe.FFTOFFSET,P=Pe.MPG_MD_MS_LR,H=null,L=this.psy=null,I=null,V=null;this.setModules=function(e,t,a,s){H=e,this.psy=t,L=t,I=s,V=a};var N=new function(){var 
h=[-.1482523854003001,32.308141959636465,296.40344946382766,883.1344870032432,11113.947376231741,1057.2713659324597,305.7402417275812,30.825928907280012,3.8533188138216365,59.42900443849514,709.5899960123345,5281.91112291017,-5829.66483675846,-817.6293103748613,-76.91656988279972,-4.594269939176596,.9063471690191471,.1960342806591213,-.15466694054279598,34.324387823855965,301.8067566458425,817.599602898885,11573.795901679885,1181.2520595540152,321.59731579894424,31.232021761053772,3.7107095756221318,53.650946155329365,684.167428119626,5224.56624370173,-6366.391851890084,-908.9766368219582,-89.83068876699639,-5.411397422890401,.8206787908286602,.3901806440322567,-.16070888947830023,36.147034243915876,304.11815768187864,732.7429163887613,11989.60988270091,1300.012278487897,335.28490093152146,31.48816102859945,3.373875931311736,47.232241542899175,652.7371796173471,5132.414255594984,-6909.087078780055,-1001.9990371107289,-103.62185754286375,-6.104916304710272,.7416505462720353,.5805693545089249,-.16636367662261495,37.751650073343995,303.01103387567713,627.9747488785183,12358.763425278165,1412.2779918482834,346.7496836825721,31.598286663170416,3.1598635433980946,40.57878626349686,616.1671130880391,5007.833007176154,-7454.040671756168,-1095.7960341867115,-118.24411666465777,-6.818469345853504,.6681786379192989,.7653668647301797,-.1716176790982088,39.11551877123304,298.3413246578966,503.5259106886539,12679.589408408976,1516.5821921214542,355.9850766329023,31.395241710249053,2.9164211881972335,33.79716964664243,574.8943997801362,4853.234992253242,-7997.57021486075,-1189.7624067269965,-133.6444792601766,-7.7202770609839915,.5993769336819237,.9427934736519954,-.17645823955292173,40.21879108166477,289.9982036694474,359.3226160751053,12950.259102786438,1612.1013903507662,362.85067106591504,31.045922092242872,2.822222032597987,26.988862316190684,529.8996541764288,4671.371946949588,-8535.899136645805,-1282.5898586244496,-149.58553632943463,-8.643494270763135,.5345111359507916,1.111140466039205,-.36174739330527045,41.04429910497807,277.5463268268618,195.6386023135583,13169.43812144731,1697.6433561479398,367.40983966190305,30.557037410382826,2.531473372857427,20.070154905927314,481.50208566532336,4464.970341588308,-9065.36882077239,-1373.62841526722,-166.1660487028118,-9.58289321133207,.4729647758913199,1.268786568327291,-.36970682634889585,41.393213350082036,261.2935935556502,12.935476055240873,13336.131683328815,1772.508612059496,369.76534388639965,29.751323653701338,2.4023193045459172,13.304795348228817,430.5615775526625,4237.0568611071185,-9581.931701634761,-1461.6913552409758,-183.12733958476446,-10.718010163869403,.41421356237309503,1.414213562373095,-.37677560326535325,41.619486213528496,241.05423794991074,-187.94665032361226,13450.063605744153,1836.153896465782,369.4908799925761,29.001847876923147,2.0714759319987186,6.779591200894186,377.7767837205709,3990.386575512536,-10081.709459700915,-1545.947424837898,-200.3762958015653,-11.864482073055006,.3578057213145241,1.546020906725474,-.3829366947518991,41.1516456456653,216.47684307105183,-406.1569483347166,13511.136535077321,1887.8076599260432,367.3025214564151,28.136213436723654,1.913880671464418,.3829366947518991,323.85365704338597,3728.1472257487526,-10561.233882199509,-1625.2025997821418,-217.62525175416,-13.015432208941645,.3033466836073424,1.66293922460509,-.5822628872992417,40.35639251440489,188.20071124269245,-640.2706748618148,13519.21490106562,1927.6022433578062,362.8197642637487,26.968821921868447,1.7463817695935329,-5.62650678237171,269.30
16715297017,3453.386536448852,-11016.145278780888,-1698.6569643425091,-234.7658734267683,-14.16351421663124,.2504869601913055,1.76384252869671,-.5887180101749253,39.23429103868072,155.76096234403798,-889.2492977967378,13475.470561874661,1955.0535223723712,356.4450994756727,25.894952980042156,1.5695032905781554,-11.181939564328772,214.80884394039484,3169.1640829158237,-11443.321309975563,-1765.1588461316153,-251.68908574481912,-15.49755935939164,.198912367379658,1.847759065022573,-.7912582233652842,37.39369355329111,119.699486012458,-1151.0956593239027,13380.446257078214,1970.3952110853447,348.01959814116185,24.731487364283044,1.3850130831637748,-16.421408865300393,161.05030052864092,2878.3322807850063,-11838.991423510031,-1823.985884688674,-268.2854986386903,-16.81724543849939,.1483359875383474,1.913880671464418,-.7960642926861912,35.2322109610459,80.01928065061526,-1424.0212633405113,13235.794061869668,1973.804052543835,337.9908651258184,23.289159354463873,1.3934255946442087,-21.099669467133474,108.48348407242611,2583.700758091299,-12199.726194855148,-1874.2780658979746,-284.2467154529415,-18.11369784385905,.09849140335716425,1.961570560806461,-.998795456205172,32.56307803611191,36.958364584370486,-1706.075448829146,13043.287458812016,1965.3831106103316,326.43182772364605,22.175018750622293,1.198638339011324,-25.371248002043963,57.53505923036915,2288.41886619975,-12522.674544337233,-1914.8400385312243,-299.26241273417224,-19.37805630698734,.04912684976946725,1.990369453344394,.035780907*ee.SQRT2*.5/2384e-9,.017876148*ee.SQRT2*.5/2384e-9,.003134727*ee.SQRT2*.5/2384e-9,.002457142*ee.SQRT2*.5/2384e-9,971317e-9*ee.SQRT2*.5/2384e-9,218868e-9*ee.SQRT2*.5/2384e-9,101566e-9*ee.SQRT2*.5/2384e-9,13828e-9*ee.SQRT2*.5/2384e-9,12804.797818791945,1945.5515939597317,313.4244966442953,49591e-9/2384e-9,1995.1556208053692,21458e-9/2384e-9,-69618e-9/2384e-9],z=[[2.382191739347913e-13,6.423305872147834e-13,9.400849094049688e-13,1.122435026096556e-12,1.183840321267481e-12,1.122435026096556e-12,9.40084909404969e-13,6.423305872147839e-13,2.382191739347918e-13,5.456116108943412e-12,4.878985199565852e-12,4.240448995017367e-12,3.559909094758252e-12,2.858043359288075e-12,2.156177623817898e-12,1.475637723558783e-12,8.371015190102974e-13,2.599706096327376e-13,-5.456116108943412e-12,-4.878985199565852e-12,-4.240448995017367e-12,-3.559909094758252e-12,-2.858043359288076e-12,-2.156177623817898e-12,-1.475637723558783e-12,-8.371015190102975e-13,-2.599706096327376e-13,-2.382191739347923e-13,-6.423305872147843e-13,-9.400849094049696e-13,-1.122435026096556e-12,-1.183840321267481e-12,-1.122435026096556e-12,-9.400849094049694e-13,-6.42330587214784e-13,-2.382191739347918e-13],[2.382191739347913e-13,6.423305872147834e-13,9.400849094049688e-13,1.122435026096556e-12,1.183840321267481e-12,1.122435026096556e-12,9.400849094049688e-13,6.423305872147841e-13,2.382191739347918e-13,5.456116108943413e-12,4.878985199565852e-12,4.240448995017367e-12,3.559909094758253e-12,2.858043359288075e-12,2.156177623817898e-12,1.475637723558782e-12,8.371015190102975e-13,2.599706096327376e-13,-5.461314069809755e-12,-4.921085770524055e-12,-4.343405037091838e-12,-3.732668368707687e-12,-3.093523840190885e-12,-2.430835727329465e-12,-1.734679010007751e-12,-9.74825365660928e-13,-2.797435120168326e-13,0,0,0,0,0,0,-2.283748241799531e-13,-4.037858874020686e-13,-2.146547464825323e-13],[.1316524975873958,.414213562373095,.7673269879789602,1.091308501069271,1.303225372841206,1.56968557711749,1.920982126971166,2.414213562373094,3.171594802363212,4.510708503662055,7.595
754112725146,22.90376554843115,.984807753012208,.6427876096865394,.3420201433256688,.9396926207859084,-.1736481776669303,-.7660444431189779,.8660254037844387,.5,-.5144957554275265,-.4717319685649723,-.3133774542039019,-.1819131996109812,-.09457419252642064,-.04096558288530405,-.01419856857247115,-.003699974673760037,.8574929257125442,.8817419973177052,.9496286491027329,.9833145924917901,.9955178160675857,.9991605581781475,.999899195244447,.9999931550702802],[0,0,0,0,0,0,2.283748241799531e-13,4.037858874020686e-13,2.146547464825323e-13,5.461314069809755e-12,4.921085770524055e-12,4.343405037091838e-12,3.732668368707687e-12,3.093523840190885e-12,2.430835727329466e-12,1.734679010007751e-12,9.74825365660928e-13,2.797435120168326e-13,-5.456116108943413e-12,-4.878985199565852e-12,-4.240448995017367e-12,-3.559909094758253e-12,-2.858043359288075e-12,-2.156177623817898e-12,-1.475637723558782e-12,-8.371015190102975e-13,-2.599706096327376e-13,-2.382191739347913e-13,-6.423305872147834e-13,-9.400849094049688e-13,-1.122435026096556e-12,-1.183840321267481e-12,-1.122435026096556e-12,-9.400849094049688e-13,-6.423305872147841e-13,-2.382191739347918e-13]],Z=z[Pe.SHORT_TYPE],K=z[Pe.SHORT_TYPE],G=z[Pe.SHORT_TYPE],Q=z[Pe.SHORT_TYPE],U=[0,1,16,17,8,9,24,25,4,5,20,21,12,13,28,29,2,3,18,19,10,11,26,27,6,7,22,23,14,15,30,31];function W(e,t,a){for(var s,n,r,i=10,_=t+238-14-286,o=-15;o<0;o++){var l,f,c;l=h[i+-10],f=e[_+-224]*l,c=e[t+224]*l,l=h[i+-9],f+=e[_+-160]*l,c+=e[t+160]*l,l=h[i+-8],f+=e[_+-96]*l,c+=e[t+96]*l,l=h[i+-7],f+=e[_+-32]*l,c+=e[t+32]*l,l=h[i+-6],f+=e[_+32]*l,c+=e[t+-32]*l,l=h[i+-5],f+=e[_+96]*l,c+=e[t+-96]*l,l=h[i+-4],f+=e[_+160]*l,c+=e[t+-160]*l,l=h[i+-3],f+=e[_+224]*l,c+=e[t+-224]*l,l=h[i+-2],f+=e[t+-256]*l,c-=e[_+256]*l,l=h[i+-1],f+=e[t+-192]*l,c-=e[_+192]*l,l=h[i+0],f+=e[t+-128]*l,c-=e[_+128]*l,l=h[i+1],f+=e[t+-64]*l,c-=e[_+64]*l,l=h[i+2],f+=e[t+0]*l,c-=e[_+0]*l,l=h[i+3],f+=e[t+64]*l,c-=e[_+-64]*l,l=h[i+4],f+=e[t+128]*l,c-=e[_+-128]*l,l=h[i+5],f+=e[t+192]*l,l=(c-=e[_+-192]*l)-(f*=h[i+6]),a[30+2*o]=c+f,a[31+2*o]=h[i+7]*l,i+=18,t--,_++}c=e[t+-16]*h[i+-10],f=e[t+-32]*h[i+-2],c+=(e[t+-48]-e[t+16])*h[i+-9],f+=e[t+-96]*h[i+-1],c+=(e[t+-80]+e[t+48])*h[i+-8],f+=e[t+-160]*h[i+0],c+=(e[t+-112]-e[t+80])*h[i+-7],f+=e[t+-224]*h[i+1],c+=(e[t+-144]+e[t+112])*h[i+-6],f-=e[t+32]*h[i+2],c+=(e[t+-176]-e[t+144])*h[i+-5],f-=e[t+96]*h[i+3],c+=(e[t+-208]+e[t+176])*h[i+-4],f-=e[t+160]*h[i+4],c+=(e[t+-240]-e[t+208])*h[i+-3],s=(f-=e[t+224])-c,n=f+c,c=a[14],f=a[15]-c,a[31]=n+c,a[30]=s+f,a[15]=s-f,a[14]=n-c,r=a[28]-a[0],a[0]+=a[28],a[28]=r*h[i+-36+7],r=a[29]-a[1],a[1]+=a[29],a[29]=r*h[i+-36+7],r=a[26]-a[2],a[2]+=a[26],a[26]=r*h[i+-72+7],r=a[27]-a[3],a[3]+=a[27],a[27]=r*h[i+-72+7],r=a[24]-a[4],a[4]+=a[24],a[24]=r*h[i+-108+7],r=a[25]-a[5],a[5]+=a[25],a[25]=r*h[i+-108+7],r=a[22]-a[6],a[6]+=a[22],a[22]=r*ee.SQRT2,r=a[23]-a[7],a[7]+=a[23],a[23]=r*ee.SQRT2-a[7],a[7]-=a[6],a[22]-=a[7],a[23]-=a[22],r=a[6],a[6]=a[31]-r,a[31]=a[31]+r,r=a[7],a[7]=a[30]-r,a[30]=a[30]+r,r=a[22],a[22]=a[15]-r,a[15]=a[15]+r,r=a[23],a[23]=a[14]-r,a[14]=a[14]+r,r=a[20]-a[8],a[8]+=a[20],a[20]=r*h[i+-180+7],r=a[21]-a[9],a[9]+=a[21],a[21]=r*h[i+-180+7],r=a[18]-a[10],a[10]+=a[18],a[18]=r*h[i+-216+7],r=a[19]-a[11],a[11]+=a[19],a[19]=r*h[i+-216+7],r=a[16]-a[12],a[12]+=a[16],a[16]=r*h[i+-252+7],r=a[17]-a[13],a[13]+=a[17],a[17]=r*h[i+-252+7],r=-a[20]+a[24],a[20]+=a[24],a[24]=r*h[i+-216+7],r=-a[21]+a[25],a[21]+=a[25],a[25]=r*h[i+-216+7],r=a[4]-a[8],a[4]+=a[8],a[8]=r*h[i+-216+7],r=a[5]-a[9],a[5]+=a[9],a[9]=r*h[i+-216+7],r=a[0]-a[12],a[0]+=a[12],a[12]=r*h[i+-72+7],r=a[1]-
a[13],a[1]+=a[13],a[13]=r*h[i+-72+7],r=a[16]-a[28],a[16]+=a[28],a[28]=r*h[i+-72+7],r=-a[17]+a[29],a[17]+=a[29],a[29]=r*h[i+-72+7],r=ee.SQRT2*(a[2]-a[10]),a[2]+=a[10],a[10]=r,r=ee.SQRT2*(a[3]-a[11]),a[3]+=a[11],a[11]=r,r=ee.SQRT2*(-a[18]+a[26]),a[18]+=a[26],a[26]=r-a[18],r=ee.SQRT2*(-a[19]+a[27]),a[19]+=a[27],a[27]=r-a[19],r=a[2],a[19]-=a[3],a[3]-=r,a[2]=a[31]-r,a[31]+=r,r=a[3],a[11]-=a[19],a[18]-=r,a[3]=a[30]-r,a[30]+=r,r=a[18],a[27]-=a[11],a[19]-=r,a[18]=a[15]-r,a[15]+=r,r=a[19],a[10]-=r,a[19]=a[14]-r,a[14]+=r,r=a[10],a[11]-=r,a[10]=a[23]-r,a[23]+=r,r=a[11],a[26]-=r,a[11]=a[22]-r,a[22]+=r,r=a[26],a[27]-=r,a[26]=a[7]-r,a[7]+=r,r=a[27],a[27]=a[6]-r,a[6]+=r,r=ee.SQRT2*(a[0]-a[4]),a[0]+=a[4],a[4]=r,r=ee.SQRT2*(a[1]-a[5]),a[1]+=a[5],a[5]=r,r=ee.SQRT2*(a[16]-a[20]),a[16]+=a[20],a[20]=r,r=ee.SQRT2*(a[17]-a[21]),a[17]+=a[21],a[21]=r,r=-ee.SQRT2*(a[8]-a[12]),a[8]+=a[12],a[12]=r-a[8],r=-ee.SQRT2*(a[9]-a[13]),a[9]+=a[13],a[13]=r-a[9],r=-ee.SQRT2*(a[25]-a[29]),a[25]+=a[29],a[29]=r-a[25],r=-ee.SQRT2*(a[24]+a[28]),a[24]-=a[28],a[28]=r-a[24],r=a[24]-a[16],a[24]=r,r=a[20]-r,a[20]=r,r=a[28]-r,a[28]=r,r=a[25]-a[17],a[25]=r,r=a[21]-r,a[21]=r,r=a[29]-r,a[29]=r,r=a[17]-a[1],a[17]=r,r=a[9]-r,a[9]=r,r=a[25]-r,a[25]=r,r=a[5]-r,a[5]=r,r=a[21]-r,a[21]=r,r=a[13]-r,a[13]=r,r=a[29]-r,a[29]=r,r=a[1]-a[0],a[1]=r,r=a[16]-r,a[16]=r,r=a[17]-r,a[17]=r,r=a[8]-r,a[8]=r,r=a[9]-r,a[9]=r,r=a[24]-r,a[24]=r,r=a[25]-r,a[25]=r,r=a[4]-r,a[4]=r,r=a[5]-r,a[5]=r,r=a[20]-r,a[20]=r,r=a[21]-r,a[21]=r,r=a[12]-r,a[12]=r,r=a[13]-r,a[13]=r,r=a[28]-r,a[28]=r,r=a[29]-r,a[29]=r,r=a[0],a[0]+=a[31],a[31]-=r,r=a[1],a[1]+=a[30],a[30]-=r,r=a[16],a[16]+=a[15],a[15]-=r,r=a[17],a[17]+=a[14],a[14]-=r,r=a[8],a[8]+=a[23],a[23]-=r,r=a[9],a[9]+=a[22],a[22]-=r,r=a[24],a[24]+=a[7],a[7]-=r,r=a[25],a[25]+=a[6],a[6]-=r,r=a[4],a[4]+=a[27],a[27]-=r,r=a[5],a[5]+=a[26],a[26]-=r,r=a[20],a[20]+=a[11],a[11]-=r,r=a[21],a[21]+=a[10],a[10]-=r,r=a[12],a[12]+=a[19],a[19]-=r,r=a[13],a[13]+=a[18],a[18]-=r,r=a[28],a[28]+=a[3],a[3]-=r,r=a[29],a[29]+=a[2],a[2]-=r}function J(e,t){for(var a=0;a<3;a++){var s,n,r,i,_,o;n=(i=e[t+6]*z[Pe.SHORT_TYPE][0]-e[t+15])+(s=e[t+0]*z[Pe.SHORT_TYPE][2]-e[t+9]),r=i-s,_=(i=e[t+15]*z[Pe.SHORT_TYPE][0]+e[t+6])+(s=e[t+9]*z[Pe.SHORT_TYPE][2]+e[t+0]),o=-i+s,s=2.069978111953089e-11*(e[t+3]*z[Pe.SHORT_TYPE][1]-e[t+12]),i=2.069978111953089e-11*(e[t+12]*z[Pe.SHORT_TYPE][1]+e[t+3]),e[t+0]=1.90752519173728e-11*n+s,e[t+15]=1.90752519173728e-11*-_+i,r=.8660254037844387*r*1.907525191737281e-11,_=.5*_*1.907525191737281e-11+i,e[t+3]=r-_,e[t+6]=r+_,n=.5*n*1.907525191737281e-11-s,o=.8660254037844387*o*1.907525191737281e-11,e[t+9]=n+o,e[t+12]=n-o,t++}}this.mdct_sub48=function(e,t,a){for(var s,n,r,i,_,o,l,f,c,h,u,b,m,p,v,d,g,S,M,w,R,B=t,A=286,k=0;k<e.channels_out;k++){for(var T=0;T<e.mode_gr;T++){for(var x,y=e.l3_side.tt[T][k],E=y.xr,P=0,H=e.sb_sample[k][1-T],L=0,I=0;I<9;I++)for(W(B,A,H[L]),W(B,A+32,H[L+1]),L+=2,A+=64,x=1;x<32;x+=2)H[L-1][x]*=-1;for(x=0;x<32;x++,P+=18){var V=y.block_type,N=e.sb_sample[k][T],O=e.sb_sample[k][1-T];if(0!=y.mixed_block_flag&&x<2&&(V=0),e.amp_filter[x]<1e-12)Te.fill(E,P+0,P+18,0);else{if(e.amp_filter[x]<1)for(I=0;I<18;I++)O[I][U[x]]*=e.amp_filter[x];if(V==Pe.SHORT_TYPE){for(I=-3;I<0;I++){var Y=z[Pe.SHORT_TYPE][I+3];E[P+3*I+9]=N[9+I][U[x]]*Y-N[8-I][U[x]],E[P+3*I+18]=N[14-I][U[x]]*Y+N[15+I][U[x]],E[P+3*I+10]=N[15+I][U[x]]*Y-N[14-I][U[x]],E[P+3*I+19]=O[2-I][U[x]]*Y+O[3+I][U[x]],E[P+3*I+11]=O[3+I][U[x]]*Y-O[2-I][U[x]],E[P+3*I+20]=O[8-I][U[x]]*Y+O[9+I][U[x]]}J(E,P)}else{var D=Ae(18);for(I=-9;I<0;I++){var 
X,q;X=z[V][I+27]*O[I+9][U[x]]+z[V][I+36]*O[8-I][U[x]],q=z[V][I+9]*N[I+9][U[x]]-z[V][I+18]*N[8-I][U[x]],D[I+9]=X-q*Z[3+I+9],D[I+18]=X*Z[3+I+9]+q}s=E,n=P,R=w=M=S=g=d=v=p=m=b=u=h=c=f=l=o=_=i=void 0,o=(r=D)[17]-r[9],f=r[15]-r[11],c=r[14]-r[12],h=r[0]+r[8],u=r[1]+r[7],b=r[2]+r[6],m=r[3]+r[5],s[n+17]=h+b-m-(u-r[4]),_=(h+b-m)*K[19]+(u-r[4]),i=(o-f-c)*K[18],s[n+5]=i+_,s[n+6]=i-_,l=(r[16]-r[10])*K[18],u=u*K[19]+r[4],i=o*K[12]+l+f*K[13]+c*K[14],_=-h*K[16]+u-b*K[17]+m*K[15],s[n+1]=i+_,s[n+2]=i-_,i=o*K[13]-l-f*K[14]+c*K[12],_=-h*K[17]+u-b*K[15]+m*K[16],s[n+9]=i+_,s[n+10]=i-_,i=o*K[14]-l+f*K[12]-c*K[13],_=h*K[15]-u+b*K[16]-m*K[17],s[n+13]=i+_,s[n+14]=i-_,p=r[8]-r[0],d=r[6]-r[2],g=r[5]-r[3],S=r[17]+r[9],M=r[16]+r[10],w=r[15]+r[11],R=r[14]+r[12],s[n+0]=S+w+R+(M+r[13]),i=(S+w+R)*K[19]-(M+r[13]),_=(p-d+g)*K[18],s[n+11]=i+_,s[n+12]=i-_,v=(r[7]-r[1])*K[18],M=r[13]-M*K[19],i=S*K[15]-M+w*K[16]+R*K[17],_=p*K[14]+v+d*K[12]+g*K[13],s[n+3]=i+_,s[n+4]=i-_,i=-S*K[17]+M-w*K[15]-R*K[16],_=p*K[13]+v-d*K[14]-g*K[12],s[n+7]=i+_,s[n+8]=i-_,i=-S*K[16]+M-w*K[17]-R*K[15],_=p*K[12]-v+d*K[13]-g*K[14],s[n+15]=i+_,s[n+16]=i-_}}if(V!=Pe.SHORT_TYPE&&0!=x)for(I=7;0<=I;--I){var j,C;j=E[P+I]*G[20+I]+E[P+-1-I]*Q[28+I],C=E[P+I]*Q[28+I]-E[P+-1-I]*G[20+I],E[P+-1-I]=j,E[P+I]=C}}}if(B=a,A=286,1==e.mode_gr)for(var F=0;F<18;F++)$.arraycopy(e.sb_sample[k][1][F],0,e.sb_sample[k][0][F],0,32)}}};this.lame_encode_mp3_frame=function(e,t,a,s,n,r){var i,_=O([2,2]);_[0][0]=new Y,_[0][1]=new Y,_[1][0]=new Y,_[1][1]=new Y;var o,l=O([2,2]);l[0][0]=new Y,l[0][1]=new Y,l[1][0]=new Y,l[1][1]=new Y;var f,c,h,u=[null,null],b=e.internal_flags,m=ke([2,4]),p=[.5,.5],v=[[0,0],[0,0]],d=[[0,0],[0,0]];if(u[0]=t,u[1]=a,0==b.lame_encode_frame_init&&function(e,t){var a,s,n=e.internal_flags;if(0==n.lame_encode_frame_init){var r,i,_=Ae(2014),o=Ae(2014);for(n.lame_encode_frame_init=1,i=r=0;r<286+576*(1+n.mode_gr);++r)r<576*n.mode_gr?(_[r]=0,2==n.channels_out&&(o[r]=0)):(_[r]=t[0][i],2==n.channels_out&&(o[r]=t[1][i]),++i);for(s=0;s<n.mode_gr;s++)for(a=0;a<n.channels_out;a++)n.l3_side.tt[s][a].block_type=Pe.SHORT_TYPE;N.mdct_sub48(n,_,o)}}(e,u),b.padding=0,(b.slot_lag-=b.frac_SpF)<0&&(b.slot_lag+=e.out_samplerate,b.padding=1),0!=b.psymodel){var g=[null,null],S=0,M=Be(2);for(h=0;h<b.mode_gr;h++){for(c=0;c<b.channels_out;c++)g[c]=u[c],S=576+576*h-Pe.FFTOFFSET;if(0!=(e.VBR==ye.vbr_mtrh||e.VBR==ye.vbr_mt?L.L3psycho_anal_vbr(e,g,S,h,_,l,v[h],d[h],m[h],M):L.L3psycho_anal_ns(e,g,S,h,_,l,v[h],d[h],m[h],M)))return-4;for(e.mode==Ee.JOINT_STEREO&&(p[h]=m[h][2]+m[h][3],0<p[h]&&(p[h]=m[h][3]/p[h])),c=0;c<b.channels_out;c++){var w=b.l3_side.tt[h][c];w.block_type=M[c],w.mixed_block_flag=0}}}else for(h=0;h<b.mode_gr;h++)for(c=0;c<b.channels_out;c++)b.l3_side.tt[h][c].block_type=Pe.NORM_TYPE,b.l3_side.tt[h][c].mixed_block_flag=0,d[h][c]=v[h][c]=700;if(function(e){var t,a;if(0!=e.ATH.useAdjust)if(a=e.loudness_sq[0][0],t=e.loudness_sq[1][0],2==e.channels_out?(a+=e.loudness_sq[0][1],t+=e.loudness_sq[1][1]):(a+=a,t+=t),2==e.mode_gr&&(a=Math.max(a,t)),a*=.5,.03125<(a*=e.ATH.aaSensitivityP))1<=e.ATH.adjust?e.ATH.adjust=1:e.ATH.adjust<e.ATH.adjustLimit&&(e.ATH.adjust=e.ATH.adjustLimit),e.ATH.adjustLimit=1;else{var s=31.98*a+625e-6;e.ATH.adjust>=s?(e.ATH.adjust*=.075*s+.925,e.ATH.adjust<s&&(e.ATH.adjust=s)):e.ATH.adjustLimit>=s?e.ATH.adjust=s:e.ATH.adjust<e.ATH.adjustLimit&&(e.ATH.adjust=e.ATH.adjustLimit),e.ATH.adjustLimit=s}else e.ATH.adjust=1}(b),N.mdct_sub48(b,u[0],u[1]),b.mode_ext=Pe.MPG_MD_LR_LR,e.force_ms)b.mode_ext=Pe.MPG_MD_MS_LR;else if(e.mode==Ee.JOINT_STEREO){var 
R=0,B=0;for(h=0;h<b.mode_gr;h++)for(c=0;c<b.channels_out;c++)R+=d[h][c],B+=v[h][c];if(R<=1*B){var A=b.l3_side.tt[0],k=b.l3_side.tt[b.mode_gr-1];A[0].block_type==A[1].block_type&&k[0].block_type==k[1].block_type&&(b.mode_ext=Pe.MPG_MD_MS_LR)}}if(b.mode_ext==P?(o=l,f=d):(o=_,f=v),e.analysis&&null!=b.pinfo)for(h=0;h<b.mode_gr;h++)for(c=0;c<b.channels_out;c++)b.pinfo.ms_ratio[h]=b.ms_ratio[h],b.pinfo.ms_ener_ratio[h]=p[h],b.pinfo.blocktype[h][c]=b.l3_side.tt[h][c].block_type,b.pinfo.pe[h][c]=f[h][c],$.arraycopy(b.l3_side.tt[h][c].xr,0,b.pinfo.xr[h][c],0,576),b.mode_ext==P&&(b.pinfo.ers[h][c]=b.pinfo.ers[h][c+2],$.arraycopy(b.pinfo.energy[h][c+2],0,b.pinfo.energy[h][c],0,b.pinfo.energy[h][c].length));if(e.VBR==ye.vbr_off||e.VBR==ye.vbr_abr){var T,x;for(T=0;T<18;T++)b.nsPsy.pefirbuf[T]=b.nsPsy.pefirbuf[T+1];for(h=x=0;h<b.mode_gr;h++)for(c=0;c<b.channels_out;c++)x+=f[h][c];for(b.nsPsy.pefirbuf[18]=x,x=b.nsPsy.pefirbuf[9],T=0;T<9;T++)x+=(b.nsPsy.pefirbuf[T]+b.nsPsy.pefirbuf[18-T])*Pe.fircoef[T];for(x=3350*b.mode_gr*b.channels_out/x,h=0;h<b.mode_gr;h++)for(c=0;c<b.channels_out;c++)f[h][c]*=x}if(b.iteration_loop.iteration_loop(e,f,p,o),H.format_bitstream(e),i=H.copy_buffer(b,s,n,r,1),e.bWriteVbrTag&&I.addVbrFrame(e),e.analysis&&null!=b.pinfo){for(c=0;c<b.channels_out;c++){var y;for(y=0;y<E;y++)b.pinfo.pcmdata[c][y]=b.pinfo.pcmdata[c][y+e.framesize];for(y=E;y<1600;y++)b.pinfo.pcmdata[c][y]=u[c][y-E]}V.set_frame_pinfo(e,o)}return function(e){var t,a;for(e.bitrate_stereoMode_Hist[e.bitrate_index][4]++,e.bitrate_stereoMode_Hist[15][4]++,2==e.channels_out&&(e.bitrate_stereoMode_Hist[e.bitrate_index][e.mode_ext]++,e.bitrate_stereoMode_Hist[15][e.mode_ext]++),t=0;t<e.mode_gr;++t)for(a=0;a<e.channels_out;++a){var s=0|e.l3_side.tt[t][a].block_type;0!=e.l3_side.tt[t][a].mixed_block_flag&&(s=4),e.bitrate_blockType_Hist[e.bitrate_index][s]++,e.bitrate_blockType_Hist[e.bitrate_index][5]++,e.bitrate_blockType_Hist[15][s]++,e.bitrate_blockType_Hist[15][5]++}}(b),i}}function i(){this.l=Ae(Pe.SBMAX_l),this.s=ke([Pe.SBMAX_s,3]);var s=this;this.assign=function(e){$.arraycopy(e.l,0,s.l,0,Pe.SBMAX_l);for(var t=0;t<Pe.SBMAX_s;t++)for(var a=0;a<3;a++)s.s[t][a]=e.s[t][a]}}function Z(){var e=40;function t(){this.write_timing=0,this.ptr=0,this.buf=B(e)}this.Class_ID=0,this.lame_encode_frame_init=0,this.iteration_init_init=0,this.fill_buffer_resample_init=0,this.mfbuf=ke([2,Z.MFSIZE]),this.mode_gr=0,this.channels_in=0,this.channels_out=0,this.resample_ratio=0,this.mf_samples_to_encode=0,this.mf_size=0,this.VBR_min_bitrate=0,this.VBR_max_bitrate=0,this.bitrate_index=0,this.samplerate_index=0,this.mode_ext=0,this.lowpass1=0,this.lowpass2=0,this.highpass1=0,this.highpass2=0,this.noise_shaping=0,this.noise_shaping_amp=0,this.substep_shaping=0,this.psymodel=0,this.noise_shaping_stop=0,this.subblock_gain=0,this.use_best_huffman=0,this.full_outer_loop=0,this.l3_side=new function(){this.tt=[[null,null],[null,null]],this.main_data_begin=0,this.private_bits=0,this.resvDrain_pre=0,this.resvDrain_post=0,this.scfsi=[Be(4),Be(4)];for(var e=0;e<2;e++)for(var t=0;t<2;t++)this.tt[e][t]=new x},this.ms_ratio=Ae(2),this.padding=0,this.frac_SpF=0,this.slot_lag=0,this.tag_spec=null,this.nMusicCRC=0,this.OldValue=Be(2),this.CurrentStep=Be(2),this.masking_lower=0,this.bv_scf=Be(576),this.pseudohalf=Be(z.SFBMAX),this.sfb21_extra=!1,this.inbuf_old=new Array(2),this.blackfilt=new Array(2*Z.BPC+1),this.itime=s(2),this.sideinfo_len=0,this.sb_sample=ke([2,2,18,Pe.SBLIMIT]),this.amp_filter=Ae(32),this.header=new 
Array(Z.MAX_HEADER_BUF),this.h_ptr=0,this.w_ptr=0,this.ancillary_flag=0,this.ResvSize=0,this.ResvMax=0,this.scalefac_band=new r,this.minval_l=Ae(Pe.CBANDS),this.minval_s=Ae(Pe.CBANDS),this.nb_1=ke([4,Pe.CBANDS]),this.nb_2=ke([4,Pe.CBANDS]),this.nb_s1=ke([4,Pe.CBANDS]),this.nb_s2=ke([4,Pe.CBANDS]),this.s3_ss=null,this.s3_ll=null,this.decay=0,this.thm=new Array(4),this.en=new Array(4),this.tot_ener=Ae(4),this.loudness_sq=ke([2,2]),this.loudness_sq_save=Ae(2),this.mld_l=Ae(Pe.SBMAX_l),this.mld_s=Ae(Pe.SBMAX_s),this.bm_l=Be(Pe.SBMAX_l),this.bo_l=Be(Pe.SBMAX_l),this.bm_s=Be(Pe.SBMAX_s),this.bo_s=Be(Pe.SBMAX_s),this.npart_l=0,this.npart_s=0,this.s3ind=X([Pe.CBANDS,2]),this.s3ind_s=X([Pe.CBANDS,2]),this.numlines_s=Be(Pe.CBANDS),this.numlines_l=Be(Pe.CBANDS),this.rnumlines_l=Ae(Pe.CBANDS),this.mld_cb_l=Ae(Pe.CBANDS),this.mld_cb_s=Ae(Pe.CBANDS),this.numlines_s_num1=0,this.numlines_l_num1=0,this.pe=Ae(4),this.ms_ratio_s_old=0,this.ms_ratio_l_old=0,this.ms_ener_ratio_old=0,this.blocktype_old=Be(2),this.nsPsy=new function(){this.last_en_subshort=ke([4,9]),this.lastAttacks=Be(4),this.pefirbuf=Ae(19),this.longfact=Ae(Pe.SBMAX_l),this.shortfact=Ae(Pe.SBMAX_s),this.attackthre=0,this.attackthre_s=0},this.VBR_seek_table=new function(){this.sum=0,this.seen=0,this.want=0,this.pos=0,this.size=0,this.bag=null,this.nVbrNumFrames=0,this.nBytesWritten=0,this.TotalFrameSize=0},this.ATH=null,this.PSY=null,this.nogap_total=0,this.nogap_current=0,this.decode_on_the_fly=!0,this.findReplayGain=!0,this.findPeakSample=!0,this.PeakSample=0,this.RadioGain=0,this.AudiophileGain=0,this.rgdata=null,this.noclipGainChange=0,this.noclipScale=0,this.bitrate_stereoMode_Hist=X([16,5]),this.bitrate_blockType_Hist=X([16,6]),this.pinfo=null,this.hip=null,this.in_buffer_nsamples=0,this.in_buffer_0=null,this.in_buffer_1=null,this.iteration_loop=null;for(var a=0;a<this.en.length;a++)this.en[a]=new i;for(a=0;a<this.thm.length;a++)this.thm[a]=new i;for(a=0;a<this.header.length;a++)this.header[a]=new t}function G(){var A=new function(){var u=Ae(Pe.BLKSIZE),m=Ae(Pe.BLKSIZE_s/2),T=[.9238795325112867,.3826834323650898,.9951847266721969,.0980171403295606,.9996988186962042,.02454122852291229,.9999811752826011,.006135884649154475];function p(e,t,a){var s,n,r,i=0,_=t+(a<<=1);s=4;do{var o,l,f,c,h,u,b;for(b=s>>1,u=(h=(c=s)<<1)+c,s=h<<1,r=(n=t)+b;M=e[n+0]-e[n+c],S=e[n+0]+e[n+c],A=e[n+h]-e[n+u],R=e[n+h]+e[n+u],e[n+h]=S-R,e[n+0]=S+R,e[n+u]=M-A,e[n+c]=M+A,M=e[r+0]-e[r+c],S=e[r+0]+e[r+c],A=ee.SQRT2*e[r+u],R=ee.SQRT2*e[r+h],e[r+h]=S-R,e[r+0]=S+R,e[r+u]=M-A,e[r+c]=M+A,r+=s,(n+=s)<_;);for(l=T[i+0],o=T[i+1],f=1;f<b;f++){var m,p;m=1-2*o*o,p=2*o*l,n=t+f,r=t+c-f;do{var v,d,g,S,M,w,R,B,A,k;d=p*e[n+c]-m*e[r+c],v=m*e[n+c]+p*e[r+c],M=e[n+0]-v,S=e[n+0]+v,w=e[r+0]-d,g=e[r+0]+d,d=p*e[n+u]-m*e[r+u],v=m*e[n+u]+p*e[r+u],A=e[n+h]-v,R=e[n+h]+v,k=e[r+h]-d,B=e[r+h]+d,d=o*R-l*k,v=l*R+o*k,e[n+h]=S-v,e[n+0]=S+v,e[r+u]=w-d,e[r+c]=w+d,d=l*B-o*A,v=o*B+l*A,e[r+h]=g-v,e[r+0]=g+v,e[n+u]=M-d,e[n+c]=M+d,r+=s,n+=s}while(n<_);l=(m=l)*T[i+0]-o*T[i+1],o=m*T[i+1]+o*T[i+0]}i+=2}while(s<a)}var v=[0,128,64,192,32,160,96,224,16,144,80,208,48,176,112,240,8,136,72,200,40,168,104,232,24,152,88,216,56,184,120,248,4,132,68,196,36,164,100,228,20,148,84,212,52,180,116,244,12,140,76,204,44,172,108,236,28,156,92,220,60,188,124,252,2,130,66,194,34,162,98,226,18,146,82,210,50,178,114,242,10,138,74,202,42,170,106,234,26,154,90,218,58,186,122,250,6,134,70,198,38,166,102,230,22,150,86,214,54,182,118,246,14,142,78,206,46,174,110,238,30,158,94,222,62,190,126,254];this.fft_short=function(e,t,a,s,n){for(var 
r=0;r<3;r++){var i=Pe.BLKSIZE_s/2,_=65535&192*(r+1),o=Pe.BLKSIZE_s/8-1;do{var l,f,c,h,u,b=255&v[o<<2];f=(l=m[b]*s[a][n+b+_])-(u=m[127-b]*s[a][n+b+_+128]),l+=u,h=(c=m[b+64]*s[a][n+b+_+64])-(u=m[63-b]*s[a][n+b+_+192]),c+=u,i-=4,t[r][i+0]=l+c,t[r][i+2]=l-c,t[r][i+1]=f+h,t[r][i+3]=f-h,f=(l=m[b+1]*s[a][n+b+_+1])-(u=m[126-b]*s[a][n+b+_+129]),l+=u,h=(c=m[b+65]*s[a][n+b+_+65])-(u=m[62-b]*s[a][n+b+_+193]),c+=u,t[r][i+Pe.BLKSIZE_s/2+0]=l+c,t[r][i+Pe.BLKSIZE_s/2+2]=l-c,t[r][i+Pe.BLKSIZE_s/2+1]=f+h,t[r][i+Pe.BLKSIZE_s/2+3]=f-h}while(0<=--o);p(t[r],i,Pe.BLKSIZE_s/2)}},this.fft_long=function(e,t,a,s,n){var r=Pe.BLKSIZE/8-1,i=Pe.BLKSIZE/2;do{var _,o,l,f,c,h=255&v[r];o=(_=u[h]*s[a][n+h])-(c=u[h+512]*s[a][n+h+512]),_+=c,f=(l=u[h+256]*s[a][n+h+256])-(c=u[h+768]*s[a][n+h+768]),l+=c,t[0+(i-=4)]=_+l,t[i+2]=_-l,t[i+1]=o+f,t[i+3]=o-f,o=(_=u[h+1]*s[a][n+h+1])-(c=u[h+513]*s[a][n+h+513]),_+=c,f=(l=u[h+257]*s[a][n+h+257])-(c=u[h+769]*s[a][n+h+769]),l+=c,t[i+Pe.BLKSIZE/2+0]=_+l,t[i+Pe.BLKSIZE/2+2]=_-l,t[i+Pe.BLKSIZE/2+1]=o+f,t[i+Pe.BLKSIZE/2+3]=o-f}while(0<=--r);p(t,i,Pe.BLKSIZE/2)},this.init_fft=function(e){for(var t=0;t<Pe.BLKSIZE;t++)u[t]=.42-.5*Math.cos(2*Math.PI*(t+.5)/Pe.BLKSIZE)+.08*Math.cos(4*Math.PI*(t+.5)/Pe.BLKSIZE);for(t=0;t<Pe.BLKSIZE_s/2;t++)m[t]=.5*(1-Math.cos(2*Math.PI*(t+.5)/Pe.BLKSIZE_s))}},k=2.302585092994046,oe=2,le=16,d=2,g=16,E=.34,n=1/217621504/(Pe.BLKSIZE/2),fe=.3,ce=21,S=.2302585093;function M(e){return e}function Y(e,t){for(var a=0,s=0;s<Pe.BLKSIZE/2;++s)a+=e[s]*t.ATH.eql_w[s];return a*=n}function he(e,t,a,s,n,r,i,_,o,l,f){var c=e.internal_flags;if(o<2)A.fft_long(c,s[n],o,l,f),A.fft_short(c,r[i],o,l,f);else if(2==o){for(var h=Pe.BLKSIZE-1;0<=h;--h){var u=s[n+0][h],b=s[n+1][h];s[n+0][h]=(u+b)*ee.SQRT2*.5,s[n+1][h]=(u-b)*ee.SQRT2*.5}for(var m=2;0<=m;--m)for(h=Pe.BLKSIZE_s-1;0<=h;--h){u=r[i+0][m][h],b=r[i+1][m][h];r[i+0][m][h]=(u+b)*ee.SQRT2*.5,r[i+1][m][h]=(u-b)*ee.SQRT2*.5}}t[0]=M(s[n+0][0]),t[0]*=t[0];for(h=Pe.BLKSIZE/2-1;0<=h;--h){var p=s[n+0][Pe.BLKSIZE/2-h],v=s[n+0][Pe.BLKSIZE/2+h];t[Pe.BLKSIZE/2-h]=M(.5*(p*p+v*v))}for(m=2;0<=m;--m){a[m][0]=r[i+0][m][0],a[m][0]*=a[m][0];for(h=Pe.BLKSIZE_s/2-1;0<=h;--h){p=r[i+0][m][Pe.BLKSIZE_s/2-h],v=r[i+0][m][Pe.BLKSIZE_s/2+h];a[m][Pe.BLKSIZE_s/2-h]=M(.5*(p*p+v*v))}}var d=0;for(h=11;h<Pe.HBLKSIZE;h++)d+=t[h];if(c.tot_ener[o]=d,e.analysis){for(h=0;h<Pe.HBLKSIZE;h++)c.pinfo.energy[_][o][h]=c.pinfo.energy_save[o][h],c.pinfo.energy_save[o][h]=t[h];c.pinfo.pe[_][o]=c.pe[o]}2==e.athaa_loudapprox&&o<2&&(c.loudness_sq[_][o]=c.loudness_sq_save[o],c.loudness_sq_save[o]=Y(t,c))}var T,x,y,P=8,H=23,L=15,ue=[1,.79433,.63096,.63096,.63096,.63096,.63096,.25119,.11749];var f=[3.3246*3.3246,3.23837*3.23837,9.9500500969,9.0247369744,8.1854926609,7.0440875649,2.46209*2.46209,2.284*2.284,4.4892710641,1.96552*1.96552,1.82335*1.82335,1.69146*1.69146,2.4621061921,2.1508568964,1.37074*1.37074,1.31036*1.31036,1.5691069696,1.4555939904,1.16203*1.16203,1.2715945225,1.09428*1.09428,1.0659*1.0659,1.0779838276,1.0382591025,1],c=[1.7782755904,1.35879*1.35879,1.38454*1.38454,1.39497*1.39497,1.40548*1.40548,1.3537*1.3537,1.6999465924,1.22321*1.22321,1.3169398564,1],h=[5.5396212496,2.29259*2.29259,4.9868695969,2.12675*2.12675,2.02545*2.02545,1.87894*1.87894,1.74303*1.74303,1.61695*1.61695,2.2499700001,1.39148*1.39148,1.29083*1.29083,1.19746*1.19746,1.2339655056,1.0779838276];function be(e,t,a,s,n,r){var i;if(e<t){if(!(t<e*x))return e+t;i=t/e}else{if(t*x<=e)return e+t;i=e/t}if(e+=t,s+3<=6){if(T<=i)return e;var _=0|ee.FAST_LOG10_X(i,16);return e*c[_]}var o,l;_=0|ee.FAST_LOG10_X(i,16);return 
t=0!=r?n.ATH.cb_s[a]*n.ATH.adjust:n.ATH.cb_l[a]*n.ATH.adjust,e<y*t?t<e?(o=1,_<=13&&(o=h[_]),l=ee.FAST_LOG10_X(e/t,10/15),e*((f[_]-o)*l+o)):13<_?e:e*h[_]:e*f[_]}var r=[1.7782755904,1.35879*1.35879,1.38454*1.38454,1.39497*1.39497,1.40548*1.40548,1.3537*1.3537,1.6999465924,1.22321*1.22321,1.3169398564,1];function B(e,t,a){var s;if(e<0&&(e=0),t<0&&(t=0),e<=0)return t;if(t<=0)return e;if(s=e<t?t/e:e/t,-2<=a&&a<=2){if(T<=s)return e+t;var n=0|ee.FAST_LOG10_X(s,16);return(e+t)*r[n]}return s<x?e+t:(e<t&&(e=t),e)}function me(e,t,a,s,n){var r,i,_=0,o=0;for(r=i=0;r<Pe.SBMAX_s;++i,++r){for(var l=e.bo_s[r],f=e.npart_s,c=l<f?l:f;i<c;)_+=t[i],o+=a[i],i++;if(e.en[s].s[r][n]=_,e.thm[s].s[r][n]=o,f<=i){++r;break}var h=e.PSY.bo_s_weight[r],u=1-h;_=h*t[i],o=h*a[i],e.en[s].s[r][n]+=_,e.thm[s].s[r][n]+=o,_=u*t[i],o=u*a[i]}for(;r<Pe.SBMAX_s;++r)e.en[s].s[r][n]=0,e.thm[s].s[r][n]=0}function pe(e,t,a,s){var n,r,i=0,_=0;for(n=r=0;n<Pe.SBMAX_l;++r,++n){for(var o=e.bo_l[n],l=e.npart_l,f=o<l?o:l;r<f;)i+=t[r],_+=a[r],r++;if(e.en[s].l[n]=i,e.thm[s].l[n]=_,l<=r){++n;break}var c=e.PSY.bo_l_weight[n],h=1-c;i=c*t[r],_=c*a[r],e.en[s].l[n]+=i,e.thm[s].l[n]+=_,i=h*t[r],_=h*a[r]}for(;n<Pe.SBMAX_l;++n)e.en[s].l[n]=0,e.thm[s].l[n]=0}function ve(e,t,a,s,n,r){var i,_,o=e.internal_flags;for(_=i=0;_<o.npart_s;++_){for(var l=0,f=0,c=o.numlines_s[_],h=0;h<c;++h,++i){var u=t[r][i];l+=u,f<u&&(f=u)}a[_]=l}for(i=_=0;_<o.npart_s;_++){var b=o.s3ind_s[_][0],m=o.s3_ss[i++]*a[b];for(++b;b<=o.s3ind_s[_][1];)m+=o.s3_ss[i]*a[b],++i,++b;var p=d*o.nb_s1[n][_];if(s[_]=Math.min(m,p),o.blocktype_old[1&n]==Pe.SHORT_TYPE){p=g*o.nb_s2[n][_];var v=s[_];s[_]=Math.min(p,v)}o.nb_s2[n][_]=o.nb_s1[n][_],o.nb_s1[n][_]=m}for(;_<=Pe.CBANDS;++_)a[_]=0,s[_]=0}function de(e,t,a){return 1<=a?e:a<=0?t:0<t?Math.pow(e/t,a)*t:0}var o=[11.8,13.6,17.2,32,46.5,51.3,57.5,67.1,71.5,84.6,97.6,130];function ge(e,t){for(var a=309.07,s=0;s<Pe.SBMAX_s-1;s++)for(var n=0;n<3;n++){var r=e.thm.s[s][n];if(0<r){var i=r*t,_=e.en.s[s][n];i<_&&(a+=1e10*i<_?o[s]*(10*k):o[s]*ee.FAST_LOG10(_/i))}}return a}var _=[6.8,5.8,5.8,6.4,6.5,9.9,12.1,14.4,15,18.9,21.6,26.9,34.2,40.2,46.8,56.5,60.7,73.9,85.7,93.4,126.1];function Se(e,t){for(var a=281.0575,s=0;s<Pe.SBMAX_l-1;s++){var n=e.thm.l[s];if(0<n){var r=n*t,i=e.en.l[s];r<i&&(a+=1e10*r<i?_[s]*(10*k):_[s]*ee.FAST_LOG10(i/r))}}return a}function Me(e,t,a,s,n){var r,i;for(r=i=0;r<e.npart_l;++r){var _,o=0,l=0;for(_=0;_<e.numlines_l[r];++_,++i){var f=t[i];o+=f,l<f&&(l=f)}a[r]=o,s[r]=l,n[r]=o*e.rnumlines_l[r]}}function we(e,t,a,s){var n=ue.length-1,r=0,i=a[r]+a[r+1];0<i?((_=t[r])<t[r+1]&&(_=t[r+1]),n<(o=0|(i=20*(2*_-i)/(i*(e.numlines_l[r]+e.numlines_l[r+1]-1))))&&(o=n),s[r]=o):s[r]=0;for(r=1;r<e.npart_l-1;r++){var _,o;if(0<(i=a[r-1]+a[r]+a[r+1]))(_=t[r-1])<t[r]&&(_=t[r]),_<t[r+1]&&(_=t[r+1]),n<(o=0|(i=20*(3*_-i)/(i*(e.numlines_l[r-1]+e.numlines_l[r]+e.numlines_l[r+1]-1))))&&(o=n),s[r]=o;else s[r]=0}0<(i=a[r-1]+a[r])?((_=t[r-1])<t[r]&&(_=t[r]),n<(o=0|(i=20*(2*_-i)/(i*(e.numlines_l[r-1]+e.numlines_l[r]-1))))&&(o=n),s[r]=o):s[r]=0}var Re=[-1.730326e-17,-.01703172,-1.349528e-17,.0418072,-6.73278e-17,-.0876324,-3.0835e-17,.1863476,-1.104424e-16,-.627638];function D(e,t,a,s,n,r,i,_){var o=e.internal_flags;if(s<2)A.fft_long(o,i[_],s,t,a);else if(2==s)for(var l=Pe.BLKSIZE-1;0<=l;--l){var f=i[_+0][l],c=i[_+1][l];i[_+0][l]=(f+c)*ee.SQRT2*.5,i[_+1][l]=(f-c)*ee.SQRT2*.5}r[0]=M(i[_+0][0]),r[0]*=r[0];for(l=Pe.BLKSIZE/2-1;0<=l;--l){var h=i[_+0][Pe.BLKSIZE/2-l],u=i[_+0][Pe.BLKSIZE/2+l];r[Pe.BLKSIZE/2-l]=M(.5*(h*h+u*u))}var 
b=0;for(l=11;l<Pe.HBLKSIZE;l++)b+=r[l];if(o.tot_ener[s]=b,e.analysis){for(l=0;l<Pe.HBLKSIZE;l++)o.pinfo.energy[n][s][l]=o.pinfo.energy_save[s][l],o.pinfo.energy_save[s][l]=r[l];o.pinfo.pe[n][s]=o.pe[s]}}function X(e,t,a,s,n,r,i,_){var o=e.internal_flags;if(0==n&&s<2&&A.fft_short(o,i[_],s,t,a),2==s)for(var l=Pe.BLKSIZE_s-1;0<=l;--l){var f=i[_+0][n][l],c=i[_+1][n][l];i[_+0][n][l]=(f+c)*ee.SQRT2*.5,i[_+1][n][l]=(f-c)*ee.SQRT2*.5}r[n][0]=i[_+0][n][0],r[n][0]*=r[n][0];for(l=Pe.BLKSIZE_s/2-1;0<=l;--l){var h=i[_+0][n][Pe.BLKSIZE_s/2-l],u=i[_+0][n][Pe.BLKSIZE_s/2+l];r[n][Pe.BLKSIZE_s/2-l]=M(.5*(h*h+u*u))}}this.L3psycho_anal_ns=function(e,t,a,s,n,r,i,_,o,l){var f,c,h,u,b,m,p,v,d,g,S=e.internal_flags,M=ke([2,Pe.BLKSIZE]),w=ke([2,3,Pe.BLKSIZE_s]),R=Ae(Pe.CBANDS+1),B=Ae(Pe.CBANDS+1),A=Ae(Pe.CBANDS+2),k=Be(2),T=Be(2),x=ke([2,576]),y=Be(Pe.CBANDS+2),E=Be(Pe.CBANDS+2);for(Te.fill(E,0),f=S.channels_out,e.mode==Ee.JOINT_STEREO&&(f=4),d=e.VBR==ye.vbr_off?0==S.ResvMax?0:S.ResvSize/S.ResvMax*.5:e.VBR==ye.vbr_rh||e.VBR==ye.vbr_mtrh||e.VBR==ye.vbr_mt?.6:1,c=0;c<S.channels_out;c++){var P=t[c],H=a+576-350-ce+192;for(u=0;u<576;u++){var L,I;for(L=P[H+u+10],b=I=0;b<(ce-1)/2-1;b+=2)L+=Re[b]*(P[H+u+b]+P[H+u+ce-b]),I+=Re[b+1]*(P[H+u+b+1]+P[H+u+ce-b-1]);x[c][u]=L+I}n[s][c].en.assign(S.en[c]),n[s][c].thm.assign(S.thm[c]),2<f&&(r[s][c].en.assign(S.en[c+2]),r[s][c].thm.assign(S.thm[c+2]))}for(c=0;c<f;c++){var V,N=Ae(12),O=[0,0,0,0],Y=Ae(12),D=1,X=Ae(Pe.CBANDS),q=Ae(Pe.CBANDS),j=[0,0,0,0],C=Ae(Pe.HBLKSIZE),F=ke([3,Pe.HBLKSIZE_s]);for(u=0;u<3;u++)N[u]=S.nsPsy.last_en_subshort[c][u+6],Y[u]=N[u]/S.nsPsy.last_en_subshort[c][u+4],O[0]+=N[u];if(2==c)for(u=0;u<576;u++){var z,Z;z=x[0][u],Z=x[1][u],x[0][u]=z+Z,x[1][u]=z-Z}var K=x[1&c],G=0;for(u=0;u<9;u++){for(var Q=G+64,U=1;G<Q;G++)U<Math.abs(K[G])&&(U=Math.abs(K[G]));S.nsPsy.last_en_subshort[c][u]=N[u+3]=U,O[1+u/3]+=U,U>N[u+3-2]?U/=N[u+3-2]:U=N[u+3-2]>10*U?N[u+3-2]/(10*U):0,Y[u+3]=U}if(e.analysis){var W=Y[0];for(u=1;u<12;u++)W<Y[u]&&(W=Y[u]);S.pinfo.ers[s][c]=S.pinfo.ers_save[c],S.pinfo.ers_save[c]=W}for(V=3==c?S.nsPsy.attackthre_s:S.nsPsy.attackthre,u=0;u<12;u++)0==j[u/3]&&Y[u]>V&&(j[u/3]=u%3+1);for(u=1;u<4;u++){(O[u-1]>O[u]?O[u-1]/O[u]:O[u]/O[u-1])<1.7&&(j[u]=0,1==u&&(j[0]=0))}for(0!=j[0]&&0!=S.nsPsy.lastAttacks[c]&&(j[0]=0),3!=S.nsPsy.lastAttacks[c]&&j[0]+j[1]+j[2]+j[3]==0||((D=0)!=j[1]&&0!=j[0]&&(j[1]=0),0!=j[2]&&0!=j[1]&&(j[2]=0),0!=j[3]&&0!=j[2]&&(j[3]=0)),c<2?T[c]=D:0==D&&(T[0]=T[1]=0),o[c]=S.tot_ener[c],he(e,C,F,M,1&c,w,1&c,s,c,t,a),Me(S,C,R,X,q),we(S,X,q,y),v=0;v<3;v++){var J,$;for(ve(e,F,B,A,c,v),me(S,B,A,c,v),p=0;p<Pe.SBMAX_s;p++){if($=S.thm[c].s[p][v],$*=.8,2<=j[v]||1==j[v+1]){var ee=0!=v?v-1:2;U=de(S.thm[c].s[p][ee],$,.6*d);$=Math.min($,U)}if(1==j[v]){ee=0!=v?v-1:2,U=de(S.thm[c].s[p][ee],$,fe*d);$=Math.min($,U)}else if(0!=v&&3==j[v-1]||0==v&&3==S.nsPsy.lastAttacks[c]){ee=2!=v?v+1:0,U=de(S.thm[c].s[p][ee],$,fe*d);$=Math.min($,U)}J=N[3*v+3]+N[3*v+4]+N[3*v+5],6*N[3*v+5]<J&&($*=.5,6*N[3*v+4]<J&&($*=.5)),S.thm[c].s[p][v]=$}}for(S.nsPsy.lastAttacks[c]=j[2],h=m=0;h<S.npart_l;h++){for(var te=S.s3ind[h][0],ae=R[te]*ue[y[te]],se=S.s3_ll[m++]*ae;++te<=S.s3ind[h][1];)ae=R[te]*ue[y[te]],se=be(se,S.s3_ll[m++]*ae,te,te-h,S,0);se*=.158489319246111,S.blocktype_old[1&c]==Pe.SHORT_TYPE?A[h]=se:A[h]=de(Math.min(se,Math.min(oe*S.nb_1[c][h],le*S.nb_2[c][h])),se,d),S.nb_2[c][h]=S.nb_1[c][h],S.nb_1[c][h]=se}for(;h<=Pe.CBANDS;++h)R[h]=0,A[h]=0;pe(S,R,A,c)}(e.mode!=Ee.STEREO&&e.mode!=Ee.JOINT_STEREO||0<e.interChRatio&&function(e,t){var a=e.internal_flags;if(1<a.channels_out){for(var 
s=0;s<Pe.SBMAX_l;s++){var n=a.thm[0].l[s],r=a.thm[1].l[s];a.thm[0].l[s]+=r*t,a.thm[1].l[s]+=n*t}for(s=0;s<Pe.SBMAX_s;s++)for(var i=0;i<3;i++)n=a.thm[0].s[s][i],r=a.thm[1].s[s][i],a.thm[0].s[s][i]+=r*t,a.thm[1].s[s][i]+=n*t}}(e,e.interChRatio),e.mode==Ee.JOINT_STEREO)&&(!function(e){for(var t=0;t<Pe.SBMAX_l;t++)if(!(e.thm[0].l[t]>1.58*e.thm[1].l[t]||e.thm[1].l[t]>1.58*e.thm[0].l[t])){var a=e.mld_l[t]*e.en[3].l[t],s=Math.max(e.thm[2].l[t],Math.min(e.thm[3].l[t],a));a=e.mld_l[t]*e.en[2].l[t];var n=Math.max(e.thm[3].l[t],Math.min(e.thm[2].l[t],a));e.thm[2].l[t]=s,e.thm[3].l[t]=n}for(t=0;t<Pe.SBMAX_s;t++)for(var r=0;r<3;r++)e.thm[0].s[t][r]>1.58*e.thm[1].s[t][r]||e.thm[1].s[t][r]>1.58*e.thm[0].s[t][r]||(a=e.mld_s[t]*e.en[3].s[t][r],s=Math.max(e.thm[2].s[t][r],Math.min(e.thm[3].s[t][r],a)),a=e.mld_s[t]*e.en[2].s[t][r],n=Math.max(e.thm[3].s[t][r],Math.min(e.thm[2].s[t][r],a)),e.thm[2].s[t][r]=s,e.thm[3].s[t][r]=n)}(S),g=e.msfix,0<Math.abs(g)&&function(e,t,a){var s=t,n=Math.pow(10,a);t*=2,s*=2;for(var r=0;r<Pe.SBMAX_l;r++)f=e.ATH.cb_l[e.bm_l[r]]*n,(_=Math.min(Math.max(e.thm[0].l[r],f),Math.max(e.thm[1].l[r],f)))*t<(o=Math.max(e.thm[2].l[r],f))+(l=Math.max(e.thm[3].l[r],f))&&(o*=c=_*s/(o+l),l*=c),e.thm[2].l[r]=Math.min(o,e.thm[2].l[r]),e.thm[3].l[r]=Math.min(l,e.thm[3].l[r]);for(n*=Pe.BLKSIZE_s/Pe.BLKSIZE,r=0;r<Pe.SBMAX_s;r++)for(var i=0;i<3;i++){var _,o,l,f,c;f=e.ATH.cb_s[e.bm_s[r]]*n,(_=Math.min(Math.max(e.thm[0].s[r][i],f),Math.max(e.thm[1].s[r][i],f)))*t<(o=Math.max(e.thm[2].s[r][i],f))+(l=Math.max(e.thm[3].s[r][i],f))&&(o*=c=_*t/(o+l),l*=c),e.thm[2].s[r][i]=Math.min(e.thm[2].s[r][i],o),e.thm[3].s[r][i]=Math.min(e.thm[3].s[r][i],l)}}(S,g,e.ATHlower*S.ATH.adjust));for(function(e,t,a,s){var n=e.internal_flags;e.short_blocks!=xe.short_block_coupled||0!=t[0]&&0!=t[1]||(t[0]=t[1]=0);for(var r=0;r<n.channels_out;r++)s[r]=Pe.NORM_TYPE,e.short_blocks==xe.short_block_dispensed&&(t[r]=1),e.short_blocks==xe.short_block_forced&&(t[r]=0),0!=t[r]?n.blocktype_old[r]==Pe.SHORT_TYPE&&(s[r]=Pe.STOP_TYPE):(s[r]=Pe.SHORT_TYPE,n.blocktype_old[r]==Pe.NORM_TYPE&&(n.blocktype_old[r]=Pe.START_TYPE),n.blocktype_old[r]==Pe.STOP_TYPE&&(n.blocktype_old[r]=Pe.SHORT_TYPE)),a[r]=n.blocktype_old[r],n.blocktype_old[r]=s[r]}(e,T,l,k),c=0;c<f;c++){var ne,re,ie,_e=0;1<c?(ne=_,_e=-2,re=Pe.NORM_TYPE,l[0]!=Pe.SHORT_TYPE&&l[1]!=Pe.SHORT_TYPE||(re=Pe.SHORT_TYPE),ie=r[s][c-2]):(ne=i,_e=0,re=l[c],ie=n[s][c]),ne[_e+c]=re==Pe.SHORT_TYPE?ge(ie,S.masking_lower):Se(ie,S.masking_lower),e.analysis&&(S.pinfo.pe[s][c]=ne[_e+c])}return 0};var q=[-1.730326e-17,-.01703172,-1.349528e-17,.0418072,-6.73278e-17,-.0876324,-3.0835e-17,.1863476,-1.104424e-16,-.627638];function j(e,t,a){if(0==a)for(var s=0;s<e.npart_s;s++)e.nb_s2[t][s]=e.nb_s1[t][s],e.nb_s1[t][s]=0}function C(e,t){for(var a=0;a<e.npart_l;a++)e.nb_2[t][a]=e.nb_1[t][a],e.nb_1[t][a]=0}function F(e,t,a,s,n,r){var i,_,o,l=e.internal_flags,f=new float[Pe.CBANDS],c=Ae(Pe.CBANDS),h=new int[Pe.CBANDS];for(o=_=0;o<l.npart_s;++o){var u=0,b=0,m=l.numlines_s[o];for(i=0;i<m;++i,++_){var p=t[r][_];u+=p,b<p&&(b=p)}a[o]=u,f[o]=b,c[o]=u/m}for(;o<Pe.CBANDS;++o)f[o]=0,c[o]=0;for(function(e,t,a,s){var n=ue.length-1,r=0,i=a[r]+a[r+1];for(0<i?((_=t[r])<t[r+1]&&(_=t[r+1]),n<(o=0|(i=20*(2*_-i)/(i*(e.numlines_s[r]+e.numlines_s[r+1]-1))))&&(o=n),s[r]=o):s[r]=0,r=1;r<e.npart_s-1;r++){var 
_,o;0<(i=a[r-1]+a[r]+a[r+1])?((_=t[r-1])<t[r]&&(_=t[r]),_<t[r+1]&&(_=t[r+1]),n<(o=0|(i=20*(3*_-i)/(i*(e.numlines_s[r-1]+e.numlines_s[r]+e.numlines_s[r+1]-1))))&&(o=n),s[r]=o):s[r]=0}0<(i=a[r-1]+a[r])?((_=t[r-1])<t[r]&&(_=t[r]),n<(o=0|(i=20*(2*_-i)/(i*(e.numlines_s[r-1]+e.numlines_s[r]-1))))&&(o=n),s[r]=o):s[r]=0}(l,f,c,h),_=o=0;o<l.npart_s;o++){var v,d,g,S,M,w=l.s3ind_s[o][0],R=l.s3ind_s[o][1];for(v=h[w],d=1,S=l.s3_ss[_]*a[w]*ue[h[w]],++_,++w;w<=R;)v+=h[w],d+=1,S=B(S,g=l.s3_ss[_]*a[w]*ue[h[w]],w-o),++_,++w;S*=M=.5*ue[v=(1+2*v)/(2*d)],s[o]=S,l.nb_s2[n][o]=l.nb_s1[n][o],l.nb_s1[n][o]=S,g=f[o],g*=l.minval_s[o],g*=M,s[o]>g&&(s[o]=g),1<l.masking_lower&&(s[o]*=l.masking_lower),s[o]>a[o]&&(s[o]=a[o]),l.masking_lower<1&&(s[o]*=l.masking_lower)}for(;o<Pe.CBANDS;++o)a[o]=0,s[o]=0}function z(e,t,a,s,n){var r,i=Ae(Pe.CBANDS),_=Ae(Pe.CBANDS),o=Be(Pe.CBANDS+2);Me(e,t,a,i,_),we(e,i,_,o);var l=0;for(r=0;r<e.npart_l;r++){var f,c,h,u=e.s3ind[r][0],b=e.s3ind[r][1],m=0,p=0;for(m=o[u],p+=1,c=e.s3_ll[l]*a[u]*ue[o[u]],++l,++u;u<=b;)m+=o[u],p+=1,c=B(c,f=e.s3_ll[l]*a[u]*ue[o[u]],u-r),++l,++u;if(c*=h=.5*ue[m=(1+2*m)/(2*p)],e.blocktype_old[1&n]==Pe.SHORT_TYPE){var v=oe*e.nb_1[n][r];s[r]=0<v?Math.min(c,v):Math.min(c,a[r]*fe)}else{var d=le*e.nb_2[n][r],g=oe*e.nb_1[n][r];d<=0&&(d=c),g<=0&&(g=c),v=e.blocktype_old[1&n]==Pe.NORM_TYPE?Math.min(g,d):g,s[r]=Math.min(c,v)}e.nb_2[n][r]=e.nb_1[n][r],e.nb_1[n][r]=c,f=i[r],f*=e.minval_l[r],f*=h,s[r]>f&&(s[r]=f),1<e.masking_lower&&(s[r]*=e.masking_lower),s[r]>a[r]&&(s[r]=a[r]),e.masking_lower<1&&(s[r]*=e.masking_lower)}for(;r<Pe.CBANDS;++r)a[r]=0,s[r]=0}function Z(e,t,a,s,n,r,i){for(var _,o,l=2*r,f=0<r?Math.pow(10,n):1,c=0;c<i;++c){var h=e[2][c],u=e[3][c],b=t[0][c],m=t[1][c],p=t[2][c],v=t[3][c];if(b<=1.58*m&&m<=1.58*b){var d=a[c]*u,g=a[c]*h;o=Math.max(p,Math.min(v,d)),_=Math.max(v,Math.min(p,g))}else o=p,_=v;if(0<r){var S,M,w=s[c]*f;if(S=Math.min(Math.max(b,w),Math.max(m,w)),0<(M=(p=Math.max(o,w))+(v=Math.max(_,w)))&&S*l<M){var R=S*l/M;p*=R,v*=R}o=Math.min(p,o),_=Math.min(v,_)}h<o&&(o=h),u<_&&(_=u),t[2][c]=o,t[3][c]=_}}function w(e,t){var a;return(a=0<=e?27*-e:e*t)<=-72?0:Math.exp(a*S)}function R(e){var t,a,s=0;for(s=0;1e-20<w(s,e);s-=1);for(n=s,r=0;1e-12<Math.abs(r-n);)0<w(s=(r+n)/2,e)?r=s:n=s;t=n;var n,r;s=0;for(s=0;1e-20<w(s,e);s+=1);for(n=0,r=s;1e-12<Math.abs(r-n);)0<w(s=(r+n)/2,e)?n=s:r=s;a=r;var i,_=0;for(i=0;i<=1e3;++i){_+=w(s=t+i*(a-t)/1e3,e)}return 1001/(_*(a-t))}function I(e){return e<0&&(e=0),e*=.001,13*Math.atan(.76*e)+3.5*Math.atan(e*e/56.25)}function V(e,t,a,s,n,r,i,_,o,l,f,c){var h,u=Ae(Pe.CBANDS+1),b=_/(15<c?1152:384),m=Be(Pe.HBLKSIZE);_/=o;var p=0,v=0;for(h=0;h<Pe.CBANDS;h++){var d;for(T=I(_*p),u[h]=_*p,d=p;I(_*d)-T<E&&d<=o/2;d++);for(e[h]=d-p,v=h+1;p<d;)m[p++]=h;if(o/2<p){p=o/2,++h;break}}u[h]=_*p;for(var g=0;g<c;g++){var S,M,w,R,B;w=l[g],R=l[g+1],(S=0|Math.floor(.5+f*(w-.5)))<0&&(S=0),o/2<(M=0|Math.floor(.5+f*(R-.5)))&&(M=o/2),a[g]=(m[S]+m[M])/2,t[g]=m[M];var A=b*R;i[g]=(A-u[t[g]])/(u[t[g]+1]-u[t[g]]),i[g]<0?i[g]=0:1<i[g]&&(i[g]=1),B=I(_*l[g]*f),B=Math.min(B,15.5)/15.5,r[g]=Math.pow(10,1.25*(1-Math.cos(Math.PI*B))-2.5)}for(var k=p=0;k<v;k++){var T,x,y=e[k];T=I(_*p),x=I(_*(p+y-1)),s[k]=.5*(T+x),T=I(_*(p-.5)),x=I(_*(p+y-.5)),n[k]=x-T,p+=y}return v}function N(e,t,a,s,n,r){var i,_,o,l,f,c,h=ke([Pe.CBANDS,Pe.CBANDS]),u=0;if(r)for(var b=0;b<t;b++)for(i=0;i<t;i++){var m=(_=a[b]-a[i],c=f=l=o=void 0,o=_,l=.5<=(o*=0<=o?3:1.5)&&o<=2.5?8*((c=o-.5)*c-2*c):0,((f=15.811389+7.5*(o+=.474)-17.5*Math.sqrt(1+o*o))<=-60?0:(o=Math.exp((l+f)*S),o/=.6609193))*s[i]);h[b][i]=m*n[b]}else 
for(i=0;i<t;i++){var p=15+Math.min(21/a[i],12),v=R(p);for(b=0;b<t;b++){m=v*w(a[b]-a[i],p)*s[i];h[b][i]=m*n[b]}}for(b=0;b<t;b++){for(i=0;i<t&&!(0<h[b][i]);i++);for(e[b][0]=i,i=t-1;0<i&&!(0<h[b][i]);i--);e[b][1]=i,u+=e[b][1]-e[b][0]+1}var d=Ae(u),g=0;for(b=0;b<t;b++)for(i=e[b][0];i<=e[b][1];i++)d[g++]=h[b][i];return d}function O(e){var t=I(e);return t=Math.min(t,15.5)/15.5,Math.pow(10,1.25*(1-Math.cos(Math.PI*t))-2.5)}function s(e,t){return e<-.3&&(e=3410),e/=1e3,e=Math.max(.1,e),3.64*Math.pow(e,-.8)-6.8*Math.exp(-.6*Math.pow(e-3.4,2))+6*Math.exp(-.15*Math.pow(e-8.7,2))+.001*(.6+.04*t)*Math.pow(e,4)}this.L3psycho_anal_vbr=function(e,t,a,s,n,r,i,_,o,l){var f,c,h,u,b,m=e.internal_flags,p=Ae(Pe.HBLKSIZE),v=ke([3,Pe.HBLKSIZE_s]),d=ke([2,Pe.BLKSIZE]),g=ke([2,3,Pe.BLKSIZE_s]),S=ke([4,Pe.CBANDS]),M=ke([4,Pe.CBANDS]),w=ke([4,3]),R=[[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]],B=Be(2),A=e.mode==Ee.JOINT_STEREO?4:m.channels_out;!function(e,t,a,s,n,r,i,_,o,l){for(var f=ke([2,576]),c=e.internal_flags,h=c.channels_out,u=e.mode==Ee.JOINT_STEREO?4:h,b=0;b<h;b++){firbuf=t[b];for(var m=a+576-350-ce+192,p=0;p<576;p++){var v,d;v=firbuf[m+p+10];for(var g=d=0;g<(ce-1)/2-1;g+=2)v+=q[g]*(firbuf[m+p+g]+firbuf[m+p+ce-g]),d+=q[g+1]*(firbuf[m+p+g+1]+firbuf[m+p+ce-g-1]);f[b][p]=v+d}n[s][b].en.assign(c.en[b]),n[s][b].thm.assign(c.thm[b]),2<u&&(r[s][b].en.assign(c.en[b+2]),r[s][b].thm.assign(c.thm[b+2]))}for(b=0;b<u;b++){var S=Ae(12),M=Ae(12),w=[0,0,0,0],R=f[1&b],B=0,A=3==b?c.nsPsy.attackthre_s:c.nsPsy.attackthre,k=1;if(2==b)for(p=0,g=576;0<g;++p,--g){var T=f[0][p],x=f[1][p];f[0][p]=T+x,f[1][p]=T-x}for(p=0;p<3;p++)M[p]=c.nsPsy.last_en_subshort[b][p+6],S[p]=M[p]/c.nsPsy.last_en_subshort[b][p+4],w[0]+=M[p];for(p=0;p<9;p++){for(var y=B+64,E=1;B<y;B++)E<Math.abs(R[B])&&(E=Math.abs(R[B]));c.nsPsy.last_en_subshort[b][p]=M[p+3]=E,w[1+p/3]+=E,E>M[p+3-2]?E/=M[p+3-2]:E=M[p+3-2]>10*E?M[p+3-2]/(10*E):0,S[p+3]=E}for(p=0;p<3;++p){var P=M[3*p+3]+M[3*p+4]+M[3*p+5],H=1;6*M[3*p+5]<P&&(H*=.5,6*M[3*p+4]<P&&(H*=.5)),_[b][p]=H}if(e.analysis){var L=S[0];for(p=1;p<12;p++)L<S[p]&&(L=S[p]);c.pinfo.ers[s][b]=c.pinfo.ers_save[b],c.pinfo.ers_save[b]=L}for(p=0;p<12;p++)0==o[b][p/3]&&S[p]>A&&(o[b][p/3]=p%3+1);for(p=1;p<4;p++){var I=w[p-1],V=w[p];Math.max(I,V)<4e4&&I<1.7*V&&V<1.7*I&&(1==p&&o[b][0]<=o[b][p]&&(o[b][0]=0),o[b][p]=0)}o[b][0]<=c.nsPsy.lastAttacks[b]&&(o[b][0]=0),3!=c.nsPsy.lastAttacks[b]&&o[b][0]+o[b][1]+o[b][2]+o[b][3]==0||((k=0)!=o[b][1]&&0!=o[b][0]&&(o[b][1]=0),0!=o[b][2]&&0!=o[b][1]&&(o[b][2]=0),0!=o[b][3]&&0!=o[b][2]&&(o[b][3]=0)),b<2?l[b]=k:0==k&&(l[0]=l[1]=0),i[b]=c.tot_ener[b]}}(e,t,a,s,n,r,o,w,R,B),function(e,t){var a=e.internal_flags;e.short_blocks!=xe.short_block_coupled||0!=t[0]&&0!=t[1]||(t[0]=t[1]=0);for(var s=0;s<a.channels_out;s++)e.short_blocks==xe.short_block_dispensed&&(t[s]=1),e.short_blocks==xe.short_block_forced&&(t[s]=0)}(e,B);for(var k=0;k<A;k++){D(e,t,a,k,s,p,d,x=1&k),c=s,h=k,u=p,b=void 0,b=(f=e).internal_flags,2==f.athaa_loudapprox&&h<2&&(b.loudness_sq[c][h]=b.loudness_sq_save[h],b.loudness_sq_save[h]=Y(u,b)),0!=B[x]?z(m,p,S[k],M[k],k):C(m,k)}B[0]+B[1]==2&&e.mode==Ee.JOINT_STEREO&&Z(S,M,m.mld_cb_l,m.ATH.cb_l,e.ATHlower*m.ATH.adjust,e.msfix,m.npart_l);for(k=0;k<A;k++){0!=B[x=1&k]&&pe(m,S[k],M[k],k)}for(var T=0;T<3;T++){for(k=0;k<A;++k){0!=B[x=1&k]?j(m,k,T):(X(e,t,a,k,T,v,g,x),F(e,v,S[k],M[k],k,T))}B[0]+B[1]==0&&e.mode==Ee.JOINT_STEREO&&Z(S,M,m.mld_cb_s,m.ATH.cb_s,e.ATHlower*m.ATH.adjust,e.msfix,m.npart_s);for(k=0;k<A;++k){0==B[x=1&k]&&me(m,S[k],M[k],k,T)}}for(k=0;k<A;k++){var x;if(0==B[x=1&k])for(var 
y=0;y<Pe.SBMAX_s;y++){var E=Ae(3);for(T=0;T<3;T++){var P=m.thm[k].s[y][T];if(P*=.8,2<=R[k][T]||1==R[k][T+1]){var H=0!=T?T-1:2,L=de(m.thm[k].s[y][H],P,.36);P=Math.min(P,L)}else if(1==R[k][T]){H=0!=T?T-1:2,L=de(m.thm[k].s[y][H],P,.6*fe);P=Math.min(P,L)}else if(0!=T&&3==R[k][T-1]||0==T&&3==m.nsPsy.lastAttacks[k]){H=2!=T?T+1:0,L=de(m.thm[k].s[y][H],P,.6*fe);P=Math.min(P,L)}P*=w[k][T],E[T]=P}for(T=0;T<3;T++)m.thm[k].s[y][T]=E[T]}}for(k=0;k<A;k++)m.nsPsy.lastAttacks[k]=R[k][2];!function(e,t,a){for(var s=e.internal_flags,n=0;n<s.channels_out;n++){var r=Pe.NORM_TYPE;0!=t[n]?s.blocktype_old[n]==Pe.SHORT_TYPE&&(r=Pe.STOP_TYPE):(r=Pe.SHORT_TYPE,s.blocktype_old[n]==Pe.NORM_TYPE&&(s.blocktype_old[n]=Pe.START_TYPE),s.blocktype_old[n]==Pe.STOP_TYPE&&(s.blocktype_old[n]=Pe.SHORT_TYPE)),a[n]=s.blocktype_old[n],s.blocktype_old[n]=r}}(e,B,l);for(k=0;k<A;k++){var I,V,N,O;1<k?(I=_,V=-2,N=Pe.NORM_TYPE,l[0]!=Pe.SHORT_TYPE&&l[1]!=Pe.SHORT_TYPE||(N=Pe.SHORT_TYPE),O=r[s][k-2]):(I=i,V=0,N=l[k],O=n[s][k]),I[V+k]=N==Pe.SHORT_TYPE?ge(O,m.masking_lower):Se(O,m.masking_lower),e.analysis&&(m.pinfo.pe[s][k]=I[V+k])}return 0},this.psymodel_init=function(e){var t,a=e.internal_flags,s=!0,n=13,r=0,i=0,_=-8.25,o=-4.5,l=Ae(Pe.CBANDS),f=Ae(Pe.CBANDS),c=Ae(Pe.CBANDS),h=e.out_samplerate;switch(e.experimentalZ){default:case 0:s=!0;break;case 1:s=e.VBR!=ye.vbr_mtrh&&e.VBR!=ye.vbr_mt;break;case 2:s=!1;break;case 3:n=8,r=-1.75,i=-.0125,_=-8.25,o=-2.25}for(a.ms_ener_ratio_old=.25,a.blocktype_old[0]=a.blocktype_old[1]=Pe.NORM_TYPE,t=0;t<4;++t){for(var u=0;u<Pe.CBANDS;++u)a.nb_1[t][u]=1e20,a.nb_2[t][u]=1e20,a.nb_s1[t][u]=a.nb_s2[t][u]=1;for(var b=0;b<Pe.SBMAX_l;b++)a.en[t].l[b]=1e20,a.thm[t].l[b]=1e20;for(u=0;u<3;++u){for(b=0;b<Pe.SBMAX_s;b++)a.en[t].s[b][u]=1e20,a.thm[t].s[b][u]=1e20;a.nsPsy.lastAttacks[t]=0}for(u=0;u<9;u++)a.nsPsy.last_en_subshort[t][u]=10}for(a.loudness_sq_save[0]=a.loudness_sq_save[1]=0,a.npart_l=V(a.numlines_l,a.bo_l,a.bm_l,l,f,a.mld_l,a.PSY.bo_l_weight,h,Pe.BLKSIZE,a.scalefac_band.l,Pe.BLKSIZE/1152,Pe.SBMAX_l),t=0;t<a.npart_l;t++){var m=r;l[t]>=n&&(m=i*(l[t]-n)/(24-n)+r*(24-l[t])/(24-n)),c[t]=Math.pow(10,m/10),0<a.numlines_l[t]?a.rnumlines_l[t]=1/a.numlines_l[t]:a.rnumlines_l[t]=0}a.s3_ll=N(a.s3ind,a.npart_l,l,f,c,s);var p;u=0;for(t=0;t<a.npart_l;t++){g=K.MAX_VALUE;for(var v=0;v<a.numlines_l[t];v++,u++){var d=h*u/(1e3*Pe.BLKSIZE);S=this.ATHformula(1e3*d,e)-20,S=Math.pow(10,.1*S),(S*=a.numlines_l[t])<g&&(g=S)}a.ATH.cb_l[t]=g,6<(g=20*l[t]/10-20)&&(g=100),g<-15&&(g=-15),g-=8,a.minval_l[t]=Math.pow(10,g/10)*a.numlines_l[t]}for(a.npart_s=V(a.numlines_s,a.bo_s,a.bm_s,l,f,a.mld_s,a.PSY.bo_s_weight,h,Pe.BLKSIZE_s,a.scalefac_band.s,Pe.BLKSIZE_s/384,Pe.SBMAX_s),t=u=0;t<a.npart_s;t++){var g;m=_;l[t]>=n&&(m=o*(l[t]-n)/(24-n)+_*(24-l[t])/(24-n)),c[t]=Math.pow(10,m/10),g=K.MAX_VALUE;for(v=0;v<a.numlines_s[t];v++,u++){var S;d=h*u/(1e3*Pe.BLKSIZE_s);S=this.ATHformula(1e3*d,e)-20,S=Math.pow(10,.1*S),(S*=a.numlines_s[t])<g&&(g=S)}a.ATH.cb_s[t]=g,g=7*l[t]/12-7,12<l[t]&&(g*=1+3.1*Math.log(1+g)),l[t]<12&&(g*=1+2.3*Math.log(1-g)),g<-15&&(g=-15),g-=8,a.minval_s[t]=Math.pow(10,g/10)*a.numlines_s[t]}a.s3_ss=N(a.s3ind_s,a.npart_s,l,f,c,s),T=Math.pow(10,(P+1)/16),x=Math.pow(10,(H+1)/16),y=Math.pow(10,L/10),A.init_fft(a),a.decay=Math.exp(-1*k/(.01*h/192)),p=3.5,0!=(2&e.exp_nspsytune)&&(p=1),0<Math.abs(e.msfix)&&(p=e.msfix),e.msfix=p;for(var M=0;M<a.npart_l;M++)a.s3ind[M][1]>a.npart_l-1&&(a.s3ind[M][1]=a.npart_l-1);var w=576*a.mode_gr/h;if(a.ATH.decay=Math.pow(10,-1.2*w),a.ATH.adjust=.01,-(a.ATH.adjustLimit=1)!=e.ATHtype){var 
R=e.out_samplerate/Pe.BLKSIZE,B=0;for(t=d=0;t<Pe.BLKSIZE/2;++t)d+=R,a.ATH.eql_w[t]=1/Math.pow(10,this.ATHformula(d,e)/10),B+=a.ATH.eql_w[t];for(B=1/B,t=Pe.BLKSIZE/2;0<=--t;)a.ATH.eql_w[t]*=B}for(M=u=0;M<a.npart_s;++M)for(t=0;t<a.numlines_s[M];++t)++u;for(M=u=0;M<a.npart_l;++M)for(t=0;t<a.numlines_l[M];++t)++u;for(t=u=0;t<a.npart_l;t++){d=h*(u+a.numlines_l[t]/2)/(1*Pe.BLKSIZE);a.mld_cb_l[t]=O(d),u+=a.numlines_l[t]}for(;t<Pe.CBANDS;++t)a.mld_cb_l[t]=1;for(t=u=0;t<a.npart_s;t++){d=h*(u+a.numlines_s[t]/2)/(1*Pe.BLKSIZE_s);a.mld_cb_s[t]=O(d),u+=a.numlines_s[t]}for(;t<Pe.CBANDS;++t)a.mld_cb_s[t]=1;return 0},this.ATHformula=function(e,t){var a;switch(t.ATHtype){case 0:a=s(e,9);break;case 1:a=s(e,-1);break;case 2:a=s(e,0);break;case 3:a=s(e,1)+6;break;case 4:a=s(e,t.ATHcurve);break;default:a=s(e,0)}return a}}function Q(){var _=this;Q.V9=410,Q.V8=420,Q.V7=430,Q.V6=440,Q.V5=450,Q.V4=460,Q.V3=470,Q.V2=480,Q.V1=490,Q.V0=500,Q.R3MIX=1e3,Q.STANDARD=1001,Q.EXTREME=1002,Q.INSANE=1003,Q.STANDARD_FAST=1004,Q.EXTREME_FAST=1005,Q.MEDIUM=1006,Q.MEDIUM_FAST=1007;var w,R,g,S,M;Q.LAME_MAXMP3BUFFER=147456;var B,A,k,T=new G;function x(){this.lowerlimit=0}function n(e,t){this.lowpass=t}this.enc=new Pe,this.setModules=function(e,t,a,s,n,r,i,_,o){w=e,R=t,g=a,S=s,M=n,B=r,i,A=_,k=o,this.enc.setModules(R,T,S,B)};var y=4294479419;function E(e){return 1<e?0:e<=0?1:Math.cos(Math.PI/2*e)}function P(e,t){switch(e){case 44100:return t.version=1,0;case 48e3:return t.version=1;case 32e3:return t.version=1,2;case 22050:return t.version=0;case 24e3:return t.version=0,1;case 16e3:return t.version=0,2;case 11025:return t.version=0;case 12e3:return t.version=0,1;case 8e3:return t.version=0,2;default:return t.version=0,-1}}function H(e,t,a){a<16e3&&(t=2);for(var s=C.bitrate_table[t][1],n=2;n<=14;n++)0<C.bitrate_table[t][n]&&Math.abs(C.bitrate_table[t][n]-e)<Math.abs(s-e)&&(s=C.bitrate_table[t][n]);return s}function L(e,t,a){a<16e3&&(t=2);for(var s=0;s<=14;s++)if(0<C.bitrate_table[t][s]&&C.bitrate_table[t][s]==e)return s;return-1}function I(e,t){var a=[new n(8,2e3),new n(16,3700),new n(24,3900),new n(32,5500),new n(40,7e3),new n(48,7500),new n(56,1e4),new n(64,11e3),new n(80,13500),new n(96,15100),new n(112,15600),new n(128,17e3),new n(160,17500),new n(192,18600),new n(224,19400),new n(256,19700),new n(320,20500)],s=_.nearestBitrateFullIndex(t);e.lowerlimit=a[s].lowpass}function V(e){var t=Pe.BLKSIZE+e.framesize-Pe.FFTOFFSET;return t=Math.max(t,512+e.framesize-32)}function N(e,t,a,s,n,r){var i=_.enc.lame_encode_mp3_frame(e,t,a,s,n,r);return e.frameNum++,i}function O(){this.n_in=0,this.n_out=0}function f(){this.num_used=0}function Y(e,t,a){var s=Math.PI*t;(e/=a)<0&&(e=0),1<e&&(e=1);var n=e-.5,r=.42-.5*Math.cos(2*e*Math.PI)+.08*Math.cos(4*e*Math.PI);return Math.abs(n)<1e-9?s/Math.PI:r*Math.sin(a*s*n)/(Math.PI*a*n)}function c(e,t,a,s,n,r,i,_,o){var l,f,c=e.internal_flags,h=0,u=e.out_samplerate/function e(t,a){return 0!=a?e(a,t%a):t}(e.out_samplerate,e.in_samplerate);Z.BPC<u&&(u=Z.BPC);var b=Math.abs(c.resample_ratio-Math.floor(.5+c.resample_ratio))<1e-4?1:0,m=1/c.resample_ratio;1<m&&(m=1);var p=31;0==p%2&&--p;var v=(p+=b)+1;if(0==c.fill_buffer_resample_init){for(c.inbuf_old[0]=Ae(v),c.inbuf_old[1]=Ae(v),l=0;l<=2*u;++l)c.blackfilt[l]=Ae(v);for(c.itime[0]=0,h=c.itime[1]=0;h<=2*u;h++){var d=0,g=(h-u)/(2*u);for(l=0;l<=p;l++)d+=c.blackfilt[h][l]=Y(l-g,m,p);for(l=0;l<=p;l++)c.blackfilt[h][l]/=d}c.fill_buffer_resample_init=1}var S=c.inbuf_old[o];for(f=0;f<s;f++){var 
M,w;if(M=f*c.resample_ratio,i<=p+(h=0|Math.floor(M-c.itime[o]))-p/2)break;g=M-c.itime[o]-(h+p%2*.5);w=0|Math.floor(2*g*u+u+.5);var R=0;for(l=0;l<=p;++l){var B=l+h-p/2;R+=(B<0?S[v+B]:n[r+B])*c.blackfilt[w][l]}t[a+f]=R}if(_.num_used=Math.min(i,p+h-p/2),c.itime[o]+=_.num_used-f*c.resample_ratio,_.num_used>=v)for(l=0;l<v;l++)S[l]=n[r+_.num_used+l-v];else{var A=v-_.num_used;for(l=0;l<A;++l)S[l]=S[l+_.num_used];for(h=0;l<v;++l,++h)S[l]=n[r+h]}return f}function D(e,t,a,s,n,r){var i=e.internal_flags;if(i.resample_ratio<.9999||1.0001<i.resample_ratio)for(var _=0;_<i.channels_out;_++){var o=new f;r.n_out=c(e,t[_],i.mf_size,e.framesize,a[_],s,n,o,_),r.n_in=o.num_used}else{r.n_out=Math.min(e.framesize,n),r.n_in=r.n_out;for(var l=0;l<r.n_out;++l)t[0][i.mf_size+l]=a[0][s+l],2==i.channels_out&&(t[1][i.mf_size+l]=a[1][s+l])}}this.lame_init=function(){var e,t,a=new function(){this.class_id=0,this.num_samples=0,this.num_channels=0,this.in_samplerate=0,this.out_samplerate=0,this.scale=0,this.scale_left=0,this.scale_right=0,this.analysis=!1,this.bWriteVbrTag=!1,this.decode_only=!1,this.quality=0,this.mode=Ee.STEREO,this.force_ms=!1,this.free_format=!1,this.findReplayGain=!1,this.decode_on_the_fly=!1,this.write_id3tag_automatic=!1,this.brate=0,this.compression_ratio=0,this.copyright=0,this.original=0,this.extension=0,this.emphasis=0,this.error_protection=0,this.strict_ISO=!1,this.disable_reservoir=!1,this.quant_comp=0,this.quant_comp_short=0,this.experimentalY=!1,this.experimentalZ=0,this.exp_nspsytune=0,this.preset=0,this.VBR=null,this.VBR_q_frac=0,this.VBR_q=0,this.VBR_mean_bitrate_kbps=0,this.VBR_min_bitrate_kbps=0,this.VBR_max_bitrate_kbps=0,this.VBR_hard_min=0,this.lowpassfreq=0,this.highpassfreq=0,this.lowpasswidth=0,this.highpasswidth=0,this.maskingadjust=0,this.maskingadjust_short=0,this.ATHonly=!1,this.ATHshort=!1,this.noATH=!1,this.ATHtype=0,this.ATHcurve=0,this.ATHlower=0,this.athaa_type=0,this.athaa_loudapprox=0,this.athaa_sensitivity=0,this.short_blocks=null,this.useTemporal=!1,this.interChRatio=0,this.msfix=0,this.tune=!1,this.tune_value_a=0,this.version=0,this.encoder_delay=0,this.encoder_padding=0,this.framesize=0,this.frameNum=0,this.lame_allocated_gfp=0,this.internal_flags=null};return 0!=((e=a).class_id=y,t=e.internal_flags=new Z,e.mode=Ee.NOT_SET,e.original=1,e.in_samplerate=44100,e.num_channels=2,e.num_samples=-1,e.bWriteVbrTag=!0,e.quality=-1,e.short_blocks=null,t.subblock_gain=-1,e.lowpassfreq=0,e.highpassfreq=0,e.lowpasswidth=-1,e.highpasswidth=-1,e.VBR=ye.vbr_off,e.VBR_q=4,e.ATHcurve=-1,e.VBR_mean_bitrate_kbps=128,e.VBR_min_bitrate_kbps=0,e.VBR_max_bitrate_kbps=0,e.VBR_hard_min=0,t.VBR_min_bitrate=1,t.VBR_max_bitrate=13,e.quant_comp=-1,e.quant_comp_short=-1,e.msfix=-1,t.resample_ratio=1,t.OldValue[0]=180,t.OldValue[1]=180,t.CurrentStep[0]=4,t.CurrentStep[1]=4,t.masking_lower=1,t.nsPsy.attackthre=-1,t.nsPsy.attackthre_s=-1,e.scale=-1,e.athaa_type=-1,e.ATHtype=-1,e.athaa_loudapprox=-1,e.athaa_sensitivity=0,e.useTemporal=null,e.interChRatio=-1,t.mf_samples_to_encode=Pe.ENCDELAY+Pe.POSTDELAY,e.encoder_padding=0,t.mf_size=Pe.ENCDELAY-Pe.MDCTDELAY,e.findReplayGain=!1,e.decode_on_the_fly=!1,t.decode_on_the_fly=!1,t.findReplayGain=!1,t.findPeakSample=!1,t.RadioGain=0,t.AudiophileGain=0,t.noclipGainChange=0,t.noclipScale=-1,e.preset=0,e.write_id3tag_automatic=!0,0)?null:(a.lame_allocated_gfp=1,a)},this.nearestBitrateFullIndex=function(e){var t=[8,16,24,32,40,48,56,64,80,96,112,128,160,192,224,256,320],a=0,s=0,n=0,r=0;r=t[16],s=t[n=16],a=16;for(var 
i=0;i<16;i++)if(Math.max(e,t[i+1])!=e){r=t[i+1],n=i+1,s=t[i],a=i;break}return e-s<r-e?a:n},this.lame_init_params=function(e){var t,a,s,n=e.internal_flags;if(n.Class_ID=0,null==n.ATH&&(n.ATH=new function(){this.useAdjust=0,this.aaSensitivityP=0,this.adjust=0,this.adjustLimit=0,this.decay=0,this.floor=0,this.l=Ae(Pe.SBMAX_l),this.s=Ae(Pe.SBMAX_s),this.psfb21=Ae(Pe.PSFB21),this.psfb12=Ae(Pe.PSFB12),this.cb_l=Ae(Pe.CBANDS),this.cb_s=Ae(Pe.CBANDS),this.eql_w=Ae(Pe.BLKSIZE/2)}),null==n.PSY&&(n.PSY=new function(){this.mask_adjust=0,this.mask_adjust_short=0,this.bo_l_weight=Ae(Pe.SBMAX_l),this.bo_s_weight=Ae(Pe.SBMAX_s)}),null==n.rgdata&&(n.rgdata=new function(){}),n.channels_in=e.num_channels,1==n.channels_in&&(e.mode=Ee.MONO),n.channels_out=e.mode==Ee.MONO?1:2,n.mode_ext=Pe.MPG_MD_MS_LR,e.mode==Ee.MONO&&(e.force_ms=!1),e.VBR==ye.vbr_off&&128!=e.VBR_mean_bitrate_kbps&&0==e.brate&&(e.brate=e.VBR_mean_bitrate_kbps),e.VBR==ye.vbr_off||e.VBR==ye.vbr_mtrh||e.VBR==ye.vbr_mt||(e.free_format=!1),e.VBR==ye.vbr_off&&0==e.brate&&j.EQ(e.compression_ratio,0)&&(e.compression_ratio=11.025),e.VBR==ye.vbr_off&&0<e.compression_ratio&&(0==e.out_samplerate&&(e.out_samplerate=map2MP3Frequency(int(.97*e.in_samplerate))),e.brate=0|16*e.out_samplerate*n.channels_out/(1e3*e.compression_ratio),n.samplerate_index=P(e.out_samplerate,e),e.free_format||(e.brate=H(e.brate,e.version,e.out_samplerate))),0!=e.out_samplerate&&(e.out_samplerate<16e3?(e.VBR_mean_bitrate_kbps=Math.max(e.VBR_mean_bitrate_kbps,8),e.VBR_mean_bitrate_kbps=Math.min(e.VBR_mean_bitrate_kbps,64)):e.out_samplerate<32e3?(e.VBR_mean_bitrate_kbps=Math.max(e.VBR_mean_bitrate_kbps,8),e.VBR_mean_bitrate_kbps=Math.min(e.VBR_mean_bitrate_kbps,160)):(e.VBR_mean_bitrate_kbps=Math.max(e.VBR_mean_bitrate_kbps,32),e.VBR_mean_bitrate_kbps=Math.min(e.VBR_mean_bitrate_kbps,320))),0==e.lowpassfreq){var r=16e3;switch(e.VBR){case ye.vbr_off:I(i=new x,e.brate),r=i.lowerlimit;break;case ye.vbr_abr:var i;I(i=new x,e.VBR_mean_bitrate_kbps),r=i.lowerlimit;break;case ye.vbr_rh:var _=[19500,19e3,18600,18e3,17500,16e3,15600,14900,12500,1e4,3950];if(0<=e.VBR_q&&e.VBR_q<=9){var o=_[e.VBR_q],l=_[e.VBR_q+1],f=e.VBR_q_frac;r=linear_int(o,l,f)}else r=19500;break;default:_=[19500,19e3,18500,18e3,17500,16500,15500,14500,12500,9500,3950];if(0<=e.VBR_q&&e.VBR_q<=9){o=_[e.VBR_q],l=_[e.VBR_q+1],f=e.VBR_q_frac;r=linear_int(o,l,f)}else 
r=19500}e.mode!=Ee.MONO||e.VBR!=ye.vbr_off&&e.VBR!=ye.vbr_abr||(r*=1.5),e.lowpassfreq=0|r}if(0==e.out_samplerate&&(2*e.lowpassfreq>e.in_samplerate&&(e.lowpassfreq=e.in_samplerate/2),e.out_samplerate=(t=0|e.lowpassfreq,a=e.in_samplerate,s=44100,48e3<=a?s=48e3:44100<=a?s=44100:32e3<=a?s=32e3:24e3<=a?s=24e3:22050<=a?s=22050:16e3<=a?s=16e3:12e3<=a?s=12e3:11025<=a?s=11025:8e3<=a&&(s=8e3),-1==t?s:(t<=15960&&(s=44100),t<=15250&&(s=32e3),t<=11220&&(s=24e3),t<=9970&&(s=22050),t<=7230&&(s=16e3),t<=5420&&(s=12e3),t<=4510&&(s=11025),t<=3970&&(s=8e3),a<s?44100<a?48e3:32e3<a?44100:24e3<a?32e3:22050<a?24e3:16e3<a?22050:12e3<a?16e3:11025<a?12e3:8e3<a?11025:8e3:s))),e.lowpassfreq=Math.min(20500,e.lowpassfreq),e.lowpassfreq=Math.min(e.out_samplerate/2,e.lowpassfreq),e.VBR==ye.vbr_off&&(e.compression_ratio=16*e.out_samplerate*n.channels_out/(1e3*e.brate)),e.VBR==ye.vbr_abr&&(e.compression_ratio=16*e.out_samplerate*n.channels_out/(1e3*e.VBR_mean_bitrate_kbps)),e.bWriteVbrTag||(e.findReplayGain=!1,e.decode_on_the_fly=!1,n.findPeakSample=!1),n.findReplayGain=e.findReplayGain,n.decode_on_the_fly=e.decode_on_the_fly,n.decode_on_the_fly&&(n.findPeakSample=!0),n.findReplayGain&&w.InitGainAnalysis(n.rgdata,e.out_samplerate)==q.INIT_GAIN_ANALYSIS_ERROR)return e.internal_flags=null,-6;switch(n.decode_on_the_fly&&!e.decode_only&&(null!=n.hip&&k.hip_decode_exit(n.hip),n.hip=k.hip_decode_init()),n.mode_gr=e.out_samplerate<=24e3?1:2,e.framesize=576*n.mode_gr,e.encoder_delay=Pe.ENCDELAY,n.resample_ratio=e.in_samplerate/e.out_samplerate,e.VBR){case ye.vbr_mt:case ye.vbr_rh:case ye.vbr_mtrh:e.compression_ratio=[5.7,6.5,7.3,8.2,10,11.9,13,14,15,16.5][e.VBR_q];break;case ye.vbr_abr:e.compression_ratio=16*e.out_samplerate*n.channels_out/(1e3*e.VBR_mean_bitrate_kbps);break;default:e.compression_ratio=16*e.out_samplerate*n.channels_out/(1e3*e.brate)}if(e.mode==Ee.NOT_SET&&(e.mode=Ee.JOINT_STEREO),0<e.highpassfreq?(n.highpass1=2*e.highpassfreq,0<=e.highpasswidth?n.highpass2=2*(e.highpassfreq+e.highpasswidth):n.highpass2=2*e.highpassfreq,n.highpass1/=e.out_samplerate,n.highpass2/=e.out_samplerate):(n.highpass1=0,n.highpass2=0),0<e.lowpassfreq?(n.lowpass2=2*e.lowpassfreq,0<=e.lowpasswidth?(n.lowpass1=2*(e.lowpassfreq-e.lowpasswidth),n.lowpass1<0&&(n.lowpass1=0)):n.lowpass1=2*e.lowpassfreq,n.lowpass1/=e.out_samplerate,n.lowpass2/=e.out_samplerate):(n.lowpass1=0,n.lowpass2=0),function(e){var t=e.internal_flags,a=32,s=-1;if(0<t.lowpass1){for(var n=999,r=0;r<=31;r++)(l=r/31)>=t.lowpass2&&(a=Math.min(a,r)),t.lowpass1<l&&l<t.lowpass2&&(n=Math.min(n,r));t.lowpass1=999==n?(a-.75)/31:(n-.75)/31,t.lowpass2=a/31}if(0<t.highpass2&&t.highpass2<.75/31*.9&&(t.highpass1=0,t.highpass2=0,$.err.println("Warning: highpass filter disabled. 
highpass frequency too small\n")),0<t.highpass2){var i=-1;for(r=0;r<=31;r++)(l=r/31)<=t.highpass1&&(s=Math.max(s,r)),t.highpass1<l&&l<t.highpass2&&(i=Math.max(i,r));t.highpass1=s/31,t.highpass2=-1==i?(s+.75)/31:(i+.75)/31}for(r=0;r<32;r++){var _,o,l=r/31;_=t.highpass2>t.highpass1?E((t.highpass2-l)/(t.highpass2-t.highpass1+1e-20)):1,o=t.lowpass2>t.lowpass1?E((l-t.lowpass1)/(t.lowpass2-t.lowpass1+1e-20)):1,t.amp_filter[r]=_*o}}(e),n.samplerate_index=P(e.out_samplerate,e),n.samplerate_index<0)return e.internal_flags=null,-1;if(e.VBR==ye.vbr_off){if(e.free_format)n.bitrate_index=0;else if(e.brate=H(e.brate,e.version,e.out_samplerate),n.bitrate_index=L(e.brate,e.version,e.out_samplerate),n.bitrate_index<=0)return e.internal_flags=null,-1}else n.bitrate_index=1;e.analysis&&(e.bWriteVbrTag=!1),null!=n.pinfo&&(e.bWriteVbrTag=!1),R.init_bit_stream_w(n);for(var c,h,u,b=n.samplerate_index+3*e.version+6*(e.out_samplerate<16e3?1:0),m=0;m<Pe.SBMAX_l+1;m++)n.scalefac_band.l[m]=S.sfBandIndex[b].l[m];for(m=0;m<Pe.PSFB21+1;m++){var p=(n.scalefac_band.l[22]-n.scalefac_band.l[21])/Pe.PSFB21,v=n.scalefac_band.l[21]+m*p;n.scalefac_band.psfb21[m]=v}n.scalefac_band.psfb21[Pe.PSFB21]=576;for(m=0;m<Pe.SBMAX_s+1;m++)n.scalefac_band.s[m]=S.sfBandIndex[b].s[m];for(m=0;m<Pe.PSFB12+1;m++){p=(n.scalefac_band.s[13]-n.scalefac_band.s[12])/Pe.PSFB12,v=n.scalefac_band.s[12]+m*p;n.scalefac_band.psfb12[m]=v}for(n.scalefac_band.psfb12[Pe.PSFB12]=192,1==e.version?n.sideinfo_len=1==n.channels_out?21:36:n.sideinfo_len=1==n.channels_out?13:21,e.error_protection&&(n.sideinfo_len+=2),h=(c=e).internal_flags,c.frameNum=0,c.write_id3tag_automatic&&A.id3tag_write_v2(c),h.bitrate_stereoMode_Hist=X([16,5]),h.bitrate_blockType_Hist=X([16,6]),h.PeakSample=0,c.bWriteVbrTag&&B.InitVbrTag(c),n.Class_ID=y,u=0;u<19;u++)n.nsPsy.pefirbuf[u]=700*n.mode_gr*n.channels_out;switch(-1==e.ATHtype&&(e.ATHtype=4),e.VBR){case ye.vbr_mt:e.VBR=ye.vbr_mtrh;case ye.vbr_mtrh:null==e.useTemporal&&(e.useTemporal=!1),g.apply_preset(e,500-10*e.VBR_q,0),e.quality<0&&(e.quality=LAME_DEFAULT_QUALITY),e.quality<5&&(e.quality=0),5<e.quality&&(e.quality=5),n.PSY.mask_adjust=e.maskingadjust,n.PSY.mask_adjust_short=e.maskingadjust_short,e.experimentalY?n.sfb21_extra=!1:n.sfb21_extra=44e3<e.out_samplerate,n.iteration_loop=new VBRNewIterationLoop(M);break;case ye.vbr_rh:g.apply_preset(e,500-10*e.VBR_q,0),n.PSY.mask_adjust=e.maskingadjust,n.PSY.mask_adjust_short=e.maskingadjust_short,e.experimentalY?n.sfb21_extra=!1:n.sfb21_extra=44e3<e.out_samplerate,6<e.quality&&(e.quality=6),e.quality<0&&(e.quality=LAME_DEFAULT_QUALITY),n.iteration_loop=new VBROldIterationLoop(M);break;default:var d;n.sfb21_extra=!1,e.quality<0&&(e.quality=LAME_DEFAULT_QUALITY),(d=e.VBR)==ye.vbr_off&&(e.VBR_mean_bitrate_kbps=e.brate),g.apply_preset(e,e.VBR_mean_bitrate_kbps,0),e.VBR=d,n.PSY.mask_adjust=e.maskingadjust,n.PSY.mask_adjust_short=e.maskingadjust_short,n.iteration_loop=d==ye.vbr_off?new function(e){var t=e;this.quantize=t,this.iteration_loop=function(e,t,a,s){var n,r=e.internal_flags,i=Ae(z.SFBMAX),_=Ae(576),o=Be(2),l=0,f=r.l3_side,c=new F(l);this.quantize.rv.ResvFrameBegin(e,c),l=c.bits;for(var h=0;h<r.mode_gr;h++){n=this.quantize.qupvt.on_pe(e,t,o,l,h,h),r.mode_ext==Pe.MPG_MD_MS_LR&&(this.quantize.ms_convert(r.l3_side,h),this.quantize.qupvt.reduce_side(o,a[h],l,n));for(var u=0;u<r.channels_out;u++){var 
b,m,p=f.tt[h][u];p.block_type!=Pe.SHORT_TYPE?(b=0,m=r.PSY.mask_adjust-b):(b=0,m=r.PSY.mask_adjust_short-b),r.masking_lower=Math.pow(10,.1*m),this.quantize.init_outer_loop(r,p),this.quantize.init_xrpow(r,p,_)&&(this.quantize.qupvt.calc_xmin(e,s[h][u],p,i),this.quantize.outer_loop(e,p,i,_,u,o[u])),this.quantize.iteration_finish_one(r,h,u)}}this.quantize.rv.ResvFrameEnd(r,l)}}(M):new ABRIterationLoop(M)}if(e.VBR!=ye.vbr_off){if(n.VBR_min_bitrate=1,n.VBR_max_bitrate=14,e.out_samplerate<16e3&&(n.VBR_max_bitrate=8),0!=e.VBR_min_bitrate_kbps&&(e.VBR_min_bitrate_kbps=H(e.VBR_min_bitrate_kbps,e.version,e.out_samplerate),n.VBR_min_bitrate=L(e.VBR_min_bitrate_kbps,e.version,e.out_samplerate),n.VBR_min_bitrate<0))return-1;if(0!=e.VBR_max_bitrate_kbps&&(e.VBR_max_bitrate_kbps=H(e.VBR_max_bitrate_kbps,e.version,e.out_samplerate),n.VBR_max_bitrate=L(e.VBR_max_bitrate_kbps,e.version,e.out_samplerate),n.VBR_max_bitrate<0))return-1;e.VBR_min_bitrate_kbps=C.bitrate_table[e.version][n.VBR_min_bitrate],e.VBR_max_bitrate_kbps=C.bitrate_table[e.version][n.VBR_max_bitrate],e.VBR_mean_bitrate_kbps=Math.min(C.bitrate_table[e.version][n.VBR_max_bitrate],e.VBR_mean_bitrate_kbps),e.VBR_mean_bitrate_kbps=Math.max(C.bitrate_table[e.version][n.VBR_min_bitrate],e.VBR_mean_bitrate_kbps)}return e.tune&&(n.PSY.mask_adjust+=e.tune_value_a,n.PSY.mask_adjust_short+=e.tune_value_a),function(e){var t=e.internal_flags;switch(e.quality){default:case 9:t.psymodel=0,t.noise_shaping=0,t.noise_shaping_amp=0,t.noise_shaping_stop=0,t.use_best_huffman=0,t.full_outer_loop=0;break;case 8:e.quality=7;case 7:t.psymodel=1,t.noise_shaping=0,t.noise_shaping_amp=0,t.noise_shaping_stop=0,t.use_best_huffman=0,t.full_outer_loop=0;break;case 6:case 5:t.psymodel=1,0==t.noise_shaping&&(t.noise_shaping=1),t.noise_shaping_amp=0,t.noise_shaping_stop=0,-1==t.subblock_gain&&(t.subblock_gain=1),t.use_best_huffman=0,t.full_outer_loop=0;break;case 4:t.psymodel=1,0==t.noise_shaping&&(t.noise_shaping=1),t.noise_shaping_amp=0,t.noise_shaping_stop=0,-1==t.subblock_gain&&(t.subblock_gain=1),t.use_best_huffman=1,t.full_outer_loop=0;break;case 3:t.psymodel=1,0==t.noise_shaping&&(t.noise_shaping=1),t.noise_shaping_amp=1,-(t.noise_shaping_stop=1)==t.subblock_gain&&(t.subblock_gain=1),t.use_best_huffman=1,t.full_outer_loop=0;break;case 2:t.psymodel=1,0==t.noise_shaping&&(t.noise_shaping=1),0==t.substep_shaping&&(t.substep_shaping=2),t.noise_shaping_amp=1,-(t.noise_shaping_stop=1)==t.subblock_gain&&(t.subblock_gain=1),t.use_best_huffman=1,t.full_outer_loop=0;break;case 1:case 
0:t.psymodel=1,0==t.noise_shaping&&(t.noise_shaping=1),0==t.substep_shaping&&(t.substep_shaping=2),t.noise_shaping_amp=2,-(t.noise_shaping_stop=1)==t.subblock_gain&&(t.subblock_gain=1),t.use_best_huffman=1,t.full_outer_loop=0}}(e),e.athaa_type<0?n.ATH.useAdjust=3:n.ATH.useAdjust=e.athaa_type,n.ATH.aaSensitivityP=Math.pow(10,e.athaa_sensitivity/-10),null==e.short_blocks&&(e.short_blocks=xe.short_block_allowed),e.short_blocks!=xe.short_block_allowed||e.mode!=Ee.JOINT_STEREO&&e.mode!=Ee.STEREO||(e.short_blocks=xe.short_block_coupled),e.quant_comp<0&&(e.quant_comp=1),e.quant_comp_short<0&&(e.quant_comp_short=0),e.msfix<0&&(e.msfix=0),e.exp_nspsytune=1|e.exp_nspsytune,e.internal_flags.nsPsy.attackthre<0&&(e.internal_flags.nsPsy.attackthre=G.NSATTACKTHRE),e.internal_flags.nsPsy.attackthre_s<0&&(e.internal_flags.nsPsy.attackthre_s=G.NSATTACKTHRE_S),e.scale<0&&(e.scale=1),e.ATHtype<0&&(e.ATHtype=4),e.ATHcurve<0&&(e.ATHcurve=4),e.athaa_loudapprox<0&&(e.athaa_loudapprox=2),e.interChRatio<0&&(e.interChRatio=0),null==e.useTemporal&&(e.useTemporal=!0),n.slot_lag=n.frac_SpF=0,e.VBR==ye.vbr_off&&(n.slot_lag=n.frac_SpF=72e3*(e.version+1)*e.brate%e.out_samplerate|0),S.iteration_init(e),T.psymodel_init(e),0},this.lame_encode_flush=function(e,t,a,s){var n,r,i,_,o=e.internal_flags,l=m([2,1152]),f=0,c=o.mf_samples_to_encode-Pe.POSTDELAY,h=V(e);if(o.mf_samples_to_encode<1)return 0;for(n=0,e.in_samplerate!=e.out_samplerate&&(c+=16*e.out_samplerate/e.in_samplerate),(i=e.framesize-c%e.framesize)<576&&(i+=e.framesize),_=(c+(e.encoder_padding=i))/e.framesize;0<_&&0<=f;){var u=h-o.mf_size,b=e.frameNum;u*=e.in_samplerate,1152<(u/=e.out_samplerate)&&(u=1152),u<1&&(u=1),r=s-n,0==s&&(r=0),a+=f=this.lame_encode_buffer(e,l[0],l[1],u,t,a,r),n+=f,_-=b!=e.frameNum?1:0}if(f<(o.mf_samples_to_encode=0))return f;if(r=s-n,0==s&&(r=0),R.flush_bitstream(e),(f=R.copy_buffer(o,t,a,r,1))<0)return f;if(a+=f,r=s-(n+=f),0==s&&(r=0),e.write_id3tag_automatic){if(A.id3tag_write_v1(e),(f=R.copy_buffer(o,t,a,r,0))<0)return f;n+=f}return n},this.lame_encode_buffer=function(e,t,a,s,n,r,i){var _,o,l=e.internal_flags,f=[null,null];if(l.Class_ID!=y)return-3;if(0==s)return 0;o=s,(null==(_=l).in_buffer_0||_.in_buffer_nsamples<o)&&(_.in_buffer_0=Ae(o),_.in_buffer_1=Ae(o),_.in_buffer_nsamples=o),f[0]=l.in_buffer_0,f[1]=l.in_buffer_1;for(var c=0;c<s;c++)f[0][c]=t[c],1<l.channels_in&&(f[1][c]=a[c]);return function(e,t,a,s,n,r,i){var _,o,l,f,c,h=e.internal_flags,u=0,b=[null,null],m=[null,null];if(h.Class_ID!=y)return-3;if(0==s)return 0;if((c=R.copy_buffer(h,n,r,i,0))<0)return c;if(r+=c,u+=c,m[0]=t,m[1]=a,j.NEQ(e.scale,0)&&j.NEQ(e.scale,1))for(o=0;o<s;++o)m[0][o]*=e.scale,2==h.channels_out&&(m[1][o]*=e.scale);if(j.NEQ(e.scale_left,0)&&j.NEQ(e.scale_left,1))for(o=0;o<s;++o)m[0][o]*=e.scale_left;if(j.NEQ(e.scale_right,0)&&j.NEQ(e.scale_right,1))for(o=0;o<s;++o)m[1][o]*=e.scale_right;if(2==e.num_channels&&1==h.channels_out)for(o=0;o<s;++o)m[0][o]=.5*(m[0][o]+m[1][o]),m[1][o]=0;f=V(e),b[0]=h.mfbuf[0],b[1]=h.mfbuf[1];var p=0;for(;0<s;){var v=[null,null],d=0,g=0;v[0]=m[0],v[1]=m[1];var S=new O;if(D(e,b,v,p,s,S),d=S.n_in,g=S.n_out,h.findReplayGain&&!h.decode_on_the_fly&&w.AnalyzeSamples(h.rgdata,b[0],h.mf_size,b[1],h.mf_size,g,h.channels_out)==q.GAIN_ANALYSIS_ERROR)return-6;if(s-=d,p+=d,h.channels_out,h.mf_size+=g,h.mf_samples_to_encode<1&&(h.mf_samples_to_encode=Pe.ENCDELAY+Pe.POSTDELAY),h.mf_samples_to_encode+=g,h.mf_size>=f){var M=i-u;if(0==i&&(M=0),(_=N(e,b[0],b[1],n,r,M))<0)return 
_;for(r+=_,u+=_,h.mf_size-=e.framesize,h.mf_samples_to_encode-=e.framesize,l=0;l<h.channels_out;l++)for(o=0;o<h.mf_size;o++)b[l][o]=b[l][o+e.framesize]}}return u}(e,f[0],f[1],s,n,r,i)}}z.SFBMAX=3*Pe.SBMAX_s,Pe.ENCDELAY=576,Pe.POSTDELAY=1152,Pe.FFTOFFSET=224+(Pe.MDCTDELAY=48),Pe.DECDELAY=528,Pe.SBLIMIT=32,Pe.CBANDS=64,Pe.SBPSY_l=21,Pe.SBPSY_s=12,Pe.SBMAX_l=22,Pe.SBMAX_s=13,Pe.PSFB21=6,Pe.PSFB12=6,Pe.HBLKSIZE=(Pe.BLKSIZE=1024)/2+1,Pe.HBLKSIZE_s=(Pe.BLKSIZE_s=256)/2+1,Pe.NORM_TYPE=0,Pe.START_TYPE=1,Pe.SHORT_TYPE=2,Pe.STOP_TYPE=3,Pe.MPG_MD_LR_LR=0,Pe.MPG_MD_LR_I=1,Pe.MPG_MD_MS_LR=2,Pe.MPG_MD_MS_I=3,Pe.fircoef=[-.1039435,-.1892065,5*-.0432472,-.155915,3.898045e-17,.0467745*5,.50455,.756825,.187098*5],Z.MFSIZE=3456+Pe.ENCDELAY-Pe.MDCTDELAY,Z.MAX_HEADER_BUF=256,Z.MAX_BITS_PER_CHANNEL=4095,Z.MAX_BITS_PER_GRANULE=7680,Z.BPC=320,z.SFBMAX=3*Pe.SBMAX_s,t.Mp3Encoder=function(s,e,t){3!=arguments.length&&(console.error("WARN: Mp3Encoder(channels, samplerate, kbps) not specified"),s=1,e=44100,t=128);var n=new Q,a=new function(){this.setModules=function(e,t){}},r=new q,i=new j,_=new function(){function e(e,t,a,s,n,r,i,_,o,l,f,c,h,u,b){this.vbr_q=e,this.quant_comp=t,this.quant_comp_s=a,this.expY=s,this.st_lrm=n,this.st_s=r,this.masking_adj=i,this.masking_adj_short=_,this.ath_lower=o,this.ath_curve=l,this.ath_sensitivity=f,this.interch=c,this.safejoint=h,this.sfb21mod=u,this.msfix=b}function t(e,t,a,s,n,r,i,_,o,l,f,c,h,u){this.quant_comp=t,this.quant_comp_s=a,this.safejoint=s,this.nsmsfix=n,this.st_lrm=r,this.st_s=i,this.nsbass=_,this.scale=o,this.masking_adj=l,this.ath_lower=f,this.ath_curve=c,this.interch=h,this.sfscale=u}var i;this.setModules=function(e){i=e};var f=[new e(0,9,9,0,5.2,125,-4.2,-6.3,4.8,1,0,0,2,21,.97),new e(1,9,9,0,5.3,125,-3.6,-5.6,4.5,1.5,0,0,2,21,1.35),new e(2,9,9,0,5.6,125,-2.2,-3.5,2.8,2,0,0,2,21,1.49),new e(3,9,9,1,5.8,130,-1.8,-2.8,2.6,3,-4,0,2,20,1.64),new e(4,9,9,1,6,135,-.7,-1.1,1.1,3.5,-8,0,2,0,1.79),new e(5,9,9,1,6.4,140,.5,.4,-7.5,4,-12,2e-4,0,0,1.95),new e(6,9,9,1,6.6,145,.67,.65,-14.7,6.5,-19,4e-4,0,0,2.3),new e(7,9,9,1,6.6,145,.8,.75,-19.7,8,-22,6e-4,0,0,2.7),new e(8,9,9,1,6.6,145,1.2,1.15,-27.5,10,-23,7e-4,0,0,0),new e(9,9,9,1,6.6,145,1.6,1.6,-36,11,-25,8e-4,0,0,0),new e(10,9,9,1,6.6,145,2,2,-36,12,-25,8e-4,0,0,0)],c=[new e(0,9,9,0,4.2,25,-7,-4,7.5,1,0,0,2,26,.97),new e(1,9,9,0,4.2,25,-5.6,-3.6,4.5,1.5,0,0,2,21,1.35),new e(2,9,9,0,4.2,25,-4.4,-1.8,2,2,0,0,2,18,1.49),new e(3,9,9,1,4.2,25,-3.4,-1.25,1.1,3,-4,0,2,15,1.64),new e(4,9,9,1,4.2,25,-2.2,.1,0,3.5,-8,0,2,0,1.79),new e(5,9,9,1,4.2,25,-1,1.65,-7.7,4,-12,2e-4,0,0,1.95),new e(6,9,9,1,4.2,25,-0,2.47,-7.7,6.5,-19,4e-4,0,0,2),new e(7,9,9,1,4.2,25,.5,2,-14.5,8,-22,6e-4,0,0,2),new e(8,9,9,1,4.2,25,1,2.4,-22,10,-23,7e-4,0,0,2),new e(9,9,9,1,4.2,25,1.5,2.95,-30,11,-25,8e-4,0,0,2),new e(10,9,9,1,4.2,25,2,2.95,-36,12,-30,8e-4,0,0,2)];function s(e,t,a){var s,n,r=e.VBR==ye.vbr_rh?f:c,i=e.VBR_q_frac,_=r[t],o=r[t+1],l=_;_.st_lrm=_.st_lrm+i*(o.st_lrm-_.st_lrm),_.st_s=_.st_s+i*(o.st_s-_.st_s),_.masking_adj=_.masking_adj+i*(o.masking_adj-_.masking_adj),_.masking_adj_short=_.masking_adj_short+i*(o.masking_adj_short-_.masking_adj_short),_.ath_lower=_.ath_lower+i*(o.ath_lower-_.ath_lower),_.ath_curve=_.ath_curve+i*(o.ath_curve-_.ath_curve),_.ath_sensitivity=_.ath_sensitivity+i*(o.ath_sensitivity-_.ath_sensitivity),_.interch=_.interch+i*(o.interch-_.interch),_.msfix=_.msfix+i*(o.msfix-_.msfix),s=e,(n=l.vbr_q)<0&&(n=0),9<n&&(n=9),s.VBR_q=n,(s.VBR_q_frac=0)!=a?e.quant_comp=l.quant_comp:0<Math.abs(e.quant_comp- 
-1)||(e.quant_comp=l.quant_comp),0!=a?e.quant_comp_short=l.quant_comp_s:0<Math.abs(e.quant_comp_short- -1)||(e.quant_comp_short=l.quant_comp_s),0!=l.expY&&(e.experimentalY=0!=l.expY),0!=a?e.internal_flags.nsPsy.attackthre=l.st_lrm:0<Math.abs(e.internal_flags.nsPsy.attackthre- -1)||(e.internal_flags.nsPsy.attackthre=l.st_lrm),0!=a?e.internal_flags.nsPsy.attackthre_s=l.st_s:0<Math.abs(e.internal_flags.nsPsy.attackthre_s- -1)||(e.internal_flags.nsPsy.attackthre_s=l.st_s),0!=a?e.maskingadjust=l.masking_adj:0<Math.abs(e.maskingadjust-0)||(e.maskingadjust=l.masking_adj),0!=a?e.maskingadjust_short=l.masking_adj_short:0<Math.abs(e.maskingadjust_short-0)||(e.maskingadjust_short=l.masking_adj_short),0!=a?e.ATHlower=-l.ath_lower/10:0<Math.abs(10*-e.ATHlower-0)||(e.ATHlower=-l.ath_lower/10),0!=a?e.ATHcurve=l.ath_curve:0<Math.abs(e.ATHcurve- -1)||(e.ATHcurve=l.ath_curve),0!=a?e.athaa_sensitivity=l.ath_sensitivity:0<Math.abs(e.athaa_sensitivity- -1)||(e.athaa_sensitivity=l.ath_sensitivity),0<l.interch&&(0!=a?e.interChRatio=l.interch:0<Math.abs(e.interChRatio- -1)||(e.interChRatio=l.interch)),0<l.safejoint&&(e.exp_nspsytune=e.exp_nspsytune|l.safejoint),0<l.sfb21mod&&(e.exp_nspsytune=e.exp_nspsytune|l.sfb21mod<<20),0!=a?e.msfix=l.msfix:0<Math.abs(e.msfix- -1)||(e.msfix=l.msfix),0==a&&(e.VBR_q=t,e.VBR_q_frac=i)}var _=[new t(8,9,9,0,0,6.6,145,0,.95,0,-30,11,.0012,1),new t(16,9,9,0,0,6.6,145,0,.95,0,-25,11,.001,1),new t(24,9,9,0,0,6.6,145,0,.95,0,-20,11,.001,1),new t(32,9,9,0,0,6.6,145,0,.95,0,-15,11,.001,1),new t(40,9,9,0,0,6.6,145,0,.95,0,-10,11,9e-4,1),new t(48,9,9,0,0,6.6,145,0,.95,0,-10,11,9e-4,1),new t(56,9,9,0,0,6.6,145,0,.95,0,-6,11,8e-4,1),new t(64,9,9,0,0,6.6,145,0,.95,0,-2,11,8e-4,1),new t(80,9,9,0,0,6.6,145,0,.95,0,0,8,7e-4,1),new t(96,9,9,0,2.5,6.6,145,0,.95,0,1,5.5,6e-4,1),new t(112,9,9,0,2.25,6.6,145,0,.95,0,2,4.5,5e-4,1),new t(128,9,9,0,1.95,6.4,140,0,.95,0,3,4,2e-4,1),new t(160,9,9,1,1.79,6,135,0,.95,-2,5,3.5,0,1),new t(192,9,9,1,1.49,5.6,125,0,.97,-4,7,3,0,0),new t(224,9,9,1,1.25,5.2,125,0,.98,-6,9,2,0,0),new t(256,9,9,1,.97,5.2,125,0,1,-8,10,1,0,0),new t(320,9,9,1,.9,5.2,125,0,1,-10,12,0,0,0)];function n(e,t,a){var s=t,n=i.nearestBitrateFullIndex(t);if(e.VBR=ye.vbr_abr,e.VBR_mean_bitrate_kbps=s,e.VBR_mean_bitrate_kbps=Math.min(e.VBR_mean_bitrate_kbps,320),e.VBR_mean_bitrate_kbps=Math.max(e.VBR_mean_bitrate_kbps,8),e.brate=e.VBR_mean_bitrate_kbps,320<e.VBR_mean_bitrate_kbps&&(e.disable_reservoir=!0),0<_[n].safejoint&&(e.exp_nspsytune=2|e.exp_nspsytune),0<_[n].sfscale&&(e.internal_flags.noise_shaping=2),0<Math.abs(_[n].nsbass)){var r=int(4*_[n].nsbass);r<0&&(r+=64),e.exp_nspsytune=e.exp_nspsytune|r<<2}return 0!=a?e.quant_comp=_[n].quant_comp:0<Math.abs(e.quant_comp- -1)||(e.quant_comp=_[n].quant_comp),0!=a?e.quant_comp_short=_[n].quant_comp_s:0<Math.abs(e.quant_comp_short- -1)||(e.quant_comp_short=_[n].quant_comp_s),0!=a?e.msfix=_[n].nsmsfix:0<Math.abs(e.msfix- -1)||(e.msfix=_[n].nsmsfix),0!=a?e.internal_flags.nsPsy.attackthre=_[n].st_lrm:0<Math.abs(e.internal_flags.nsPsy.attackthre- -1)||(e.internal_flags.nsPsy.attackthre=_[n].st_lrm),0!=a?e.internal_flags.nsPsy.attackthre_s=_[n].st_s:0<Math.abs(e.internal_flags.nsPsy.attackthre_s- -1)||(e.internal_flags.nsPsy.attackthre_s=_[n].st_s),0!=a?e.scale=_[n].scale:0<Math.abs(e.scale- 
-1)||(e.scale=_[n].scale),0!=a?e.maskingadjust=_[n].masking_adj:0<Math.abs(e.maskingadjust-0)||(e.maskingadjust=_[n].masking_adj),0<_[n].masking_adj?0!=a?e.maskingadjust_short=.9*_[n].masking_adj:0<Math.abs(e.maskingadjust_short-0)||(e.maskingadjust_short=.9*_[n].masking_adj):0!=a?e.maskingadjust_short=1.1*_[n].masking_adj:0<Math.abs(e.maskingadjust_short-0)||(e.maskingadjust_short=1.1*_[n].masking_adj),0!=a?e.ATHlower=-_[n].ath_lower/10:0<Math.abs(10*-e.ATHlower-0)||(e.ATHlower=-_[n].ath_lower/10),0!=a?e.ATHcurve=_[n].ath_curve:0<Math.abs(e.ATHcurve- -1)||(e.ATHcurve=_[n].ath_curve),0!=a?e.interChRatio=_[n].interch:0<Math.abs(e.interChRatio- -1)||(e.interChRatio=_[n].interch),t}this.apply_preset=function(e,t,a){switch(t){case Q.R3MIX:t=Q.V3,e.VBR=ye.vbr_mtrh;break;case Q.MEDIUM:t=Q.V4,e.VBR=ye.vbr_rh;break;case Q.MEDIUM_FAST:t=Q.V4,e.VBR=ye.vbr_mtrh;break;case Q.STANDARD:t=Q.V2,e.VBR=ye.vbr_rh;break;case Q.STANDARD_FAST:t=Q.V2,e.VBR=ye.vbr_mtrh;break;case Q.EXTREME:t=Q.V0,e.VBR=ye.vbr_rh;break;case Q.EXTREME_FAST:t=Q.V0,e.VBR=ye.vbr_mtrh;break;case Q.INSANE:return t=320,e.preset=t,n(e,t,a),e.VBR=ye.vbr_off,t}switch(e.preset=t){case Q.V9:return s(e,9,a),t;case Q.V8:return s(e,8,a),t;case Q.V7:return s(e,7,a),t;case Q.V6:return s(e,6,a),t;case Q.V5:return s(e,5,a),t;case Q.V4:return s(e,4,a),t;case Q.V3:return s(e,3,a),t;case Q.V2:return s(e,2,a),t;case Q.V1:return s(e,1,a),t;case Q.V0:return s(e,0,a),t}return 8<=t&&t<=320?n(e,t,a):(e.preset=0,t)}},o=new y,l=new w,f=new M,c=new function(){this.getLameVersion=function(){return"3.98.4"},this.getLameShortVersion=function(){return"3.98.4"},this.getLameVeryShortVersion=function(){return"LAME3.98r"},this.getPsyVersion=function(){return"0.93"},this.getLameUrl=function(){return"http://www.mp3dev.org/"},this.getLameOsBitness=function(){return"32bits"}},h=new function(){this.setModules=function(e,t){}},u=new function(){var o;this.setModules=function(e){o=e},this.ResvFrameBegin=function(e,t){var a,s=e.internal_flags,n=s.l3_side,r=o.getframebits(e);t.bits=(r-8*s.sideinfo_len)/s.mode_gr;var i=2048*s.mode_gr-8;320<e.brate?a=8*int(1e3*e.brate/(e.out_samplerate/1152)/8+.5):(a=11520,e.strict_ISO&&(a=8*int(32e4/(e.out_samplerate/1152)/8+.5))),s.ResvMax=a-r,s.ResvMax>i&&(s.ResvMax=i),(s.ResvMax<0||e.disable_reservoir)&&(s.ResvMax=0);var _=t.bits*s.mode_gr+Math.min(s.ResvSize,s.ResvMax);return a<_&&(_=a),n.resvDrain_pre=0,null!=s.pinfo&&(s.pinfo.mean_bits=t.bits/2,s.pinfo.resvsize=s.ResvSize),_},this.ResvMaxBits=function(e,t,a,s){var n,r=e.internal_flags,i=r.ResvSize,_=r.ResvMax;0!=s&&(i+=t),0!=(1&r.substep_shaping)&&(_*=.9),a.bits=t,9*_<10*i?(n=i-9*_/10,a.bits+=n,r.substep_shaping|=128):(n=0,r.substep_shaping&=127,e.disable_reservoir||0!=(1&r.substep_shaping)||(a.bits-=.1*t));var o=i<6*r.ResvMax/10?i:6*r.ResvMax/10;return(o-=n)<0&&(o=0),o},this.ResvAdjust=function(e,t){e.ResvSize-=t.part2_3_length+t.part2_length},this.ResvFrameEnd=function(e,t){var a,s=e.l3_side;e.ResvSize+=t*e.mode_gr;var n=0;s.resvDrain_post=0,(s.resvDrain_pre=0)!=(a=e.ResvSize%8)&&(n+=a),0<(a=e.ResvSize-n-e.ResvMax)&&(n+=a);var r=Math.min(8*s.main_data_begin,n)/8;s.resvDrain_pre+=8*r,n-=8*r,e.ResvSize-=8*r,s.main_data_begin-=r,s.resvDrain_post+=n,e.ResvSize-=n}},b=new k,m=new function(){this.setModules=function(e,t,a){}},p=new function(){};n.setModules(r,i,_,o,l,f,c,h,p),i.setModules(r,p,c,f),h.setModules(i,c),_.setModules(n),l.setModules(i,u,o,b),o.setModules(b,u,n.enc.psy),u.setModules(i),b.setModules(o),f.setModules(n,i,c),a.setModules(m,p),m.setModules(c,h,_);var 
v=n.lame_init();v.num_channels=s,v.in_samplerate=e,v.out_samplerate=e,v.brate=t,v.mode=Ee.STEREO,v.quality=3,v.bWriteVbrTag=!1,v.disable_reservoir=!0,v.write_id3tag_automatic=!1,n.lame_init_params(v);var d=1152,g=0|1.25*d+7200,S=B(g);this.encodeBuffer=function(e,t){1==s&&(t=e),e.length>d&&(d=e.length,S=B(g=0|1.25*d+7200));var a=n.lame_encode_buffer(v,e,t,e.length,S,0,g);return new Int8Array(S.subarray(0,a))},this.flush=function(){var e=n.lame_encode_flush(v,S,0,g);return new Int8Array(S.subarray(0,e))}}}t(),Recorder.lamejs=t}();
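/*
Usage sketch for the lamejs port above. A minimal, hedged example: Mp3Encoder(channels, samplerate, kbps),
encodeBuffer() and flush() are the entry points this file actually exposes via Recorder.lamejs; the helper
name and the Blob wrapping are illustrative only.
*/
function exampleEncodeMp3(pcmInt16, sampleRate) {
  // mono, 128 kbps; passing fewer than 3 arguments triggers the console.error fallback above
  var encoder = new Recorder.lamejs.Mp3Encoder(1, sampleRate, 128);
  var parts = [];
  var data = encoder.encodeBuffer(pcmInt16); // Int8Array of MP3 frames for this chunk
  if (data.length) parts.push(data);
  var tail = encoder.flush();                // drain buffered samples into the final frames
  if (tail.length) parts.push(tail);
  return new Blob(parts, { type: "audio/mp3" });
}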
/*
Recording
https://github.com/xiangyuecn/Recorder
src: engine/pcm.js
*/
!function(){"use strict";Recorder.prototype.enc_pcm={stable:!0,testmsg:"pcm为未封装的原始音频数据,pcm数据文件无法直接播放;支持位数8位、16位(填在比特率里面),采样率取值无限制"},Recorder.prototype.pcm=function(e,t,r){var a=this.set,n=e.length,o=8==a.bitRate?8:16,c=new ArrayBuffer(n*(o/8)),s=new DataView(c),l=0;if(8==o)for(var p=0;p<n;p++,l++){var i=128+(e[p]>>8);s.setInt8(l,i,!0)}else for(p=0;p<n;p++,l+=2)s.setInt16(l,e[p],!0);t(new Blob([s.buffer],{type:"audio/pcm"}))},Recorder.pcm2wav=function(e,a,n){e.slice&&null!=e.type&&(e={blob:e});var o=e.sampleRate||16e3,c=e.bitRate||16;if(e.sampleRate&&e.bitRate||console.warn("pcm2wav必须提供sampleRate和bitRate"),Recorder.prototype.wav){var s=new FileReader;s.onloadend=function(){var e;if(8==c){var t=new Uint8Array(s.result);e=new Int16Array(t.length);for(var r=0;r<t.length;r++)e[r]=t[r]-128<<8}else e=new Int16Array(s.result);Recorder({type:"wav",sampleRate:o,bitRate:c}).mock(e,o).stop(function(e,t){a(e,t)},n)},s.readAsArrayBuffer(e.blob)}else n("pcm2wav必须先加载wav编码器wav.js")}}();
/*
Recording
https://github.com/xiangyuecn/Recorder
src: engine/wav.js
*/
!function(){"use strict";Recorder.prototype.enc_wav={stable:!0,testmsg:"支持位数8位、16位(填在比特率里面),采样率取值无限制"},Recorder.prototype.wav=function(t,e,n){var r=this.set,a=t.length,o=r.sampleRate,f=8==r.bitRate?8:16,i=a*(f/8),s=new ArrayBuffer(44+i),c=new DataView(s),u=0,v=function(t){for(var e=0;e<t.length;e++,u++)c.setUint8(u,t.charCodeAt(e))},w=function(t){c.setUint16(u,t,!0),u+=2},l=function(t){c.setUint32(u,t,!0),u+=4};if(v("RIFF"),l(36+i),v("WAVE"),v("fmt "),l(16),w(1),w(1),l(o),l(o*(f/8)),w(f/8),w(f),v("data"),l(i),8==f)for(var p=0;p<a;p++,u++){var d=128+(t[p]>>8);c.setInt8(u,d,!0)}else for(p=0;p<a;p++,u+=2)c.setInt16(u,t[p],!0);e(new Blob([c.buffer],{type:"audio/wav"}))}}();
/*
Recording
https://github.com/xiangyuecn/Recorder
src: extensions/frequency.histogram.view.js
*/
!function(){"use strict";var t=function(t){return new e(t)},e=function(t){var e=this,r={scale:2,fps:20,lineCount:30,widthRatio:.6,spaceWidth:0,minHeight:0,position:-1,mirrorEnable:!1,stripeEnable:!0,stripeHeight:3,stripeMargin:6,fallDuration:1e3,stripeFallDuration:3500,linear:[0,"rgba(0,187,17,1)",.5,"rgba(255,215,0,1)",1,"rgba(255,102,0,1)"],stripeLinear:null,shadowBlur:0,shadowColor:"#bbb",stripeShadowBlur:-1,stripeShadowColor:"",onDraw:function(t,e){}};for(var a in t)r[a]=t[a];e.set=t=r;var i=t.elem;i&&("string"==typeof i?i=document.querySelector(i):i.length&&(i=i[0])),i&&(t.width=i.offsetWidth,t.height=i.offsetHeight);var o=t.scale,l=t.width*o,n=t.height*o,h=e.elem=document.createElement("div"),s=["","transform-origin:0 0;","transform:scale("+1/o+");"];h.innerHTML='<div style="width:'+t.width+"px;height:"+t.height+'px;overflow:hidden"><div style="width:'+l+"px;height:"+n+"px;"+s.join("-webkit-")+s.join("-ms-")+s.join("-moz-")+s.join("")+'"><canvas/></div></div>';var f=e.canvas=h.querySelector("canvas");e.ctx=f.getContext("2d");if(f.width=l,f.height=n,i&&(i.innerHTML="",i.appendChild(h)),!Recorder.LibFFT)throw new Error("需要lib.fft.js支持");e.fft=Recorder.LibFFT(1024),e.lastH=[],e.stripesH=[]};e.prototype=t.prototype={genLinear:function(t,e,r,a){for(var i=t.createLinearGradient(0,r,0,a),o=0;o<e.length;)i.addColorStop(e[o++],e[o++]);return i},input:function(t,e,r){var a=this;a.sampleRate=r,a.pcmData=t,a.pcmPos=0,a.inputTime=Date.now(),a.schedule()},schedule:function(){var t=this,e=t.set,r=Math.floor(1e3/e.fps);t.timer||(t.timer=setInterval(function(){t.schedule()},r));var a=Date.now(),i=t.drawTime||0;if(a-t.inputTime>1.3*e.stripeFallDuration)return clearInterval(t.timer),void(t.timer=0);if(!(a-i<r)){t.drawTime=a;for(var o=t.fft.bufferSize,l=t.pcmData,n=t.pcmPos,h=new Int16Array(o),s=0;s<o&&n<l.length;s++,n++)h[s]=l[n];t.pcmPos=n;var f=t.fft.transform(h);t.draw(f,t.sampleRate)}},draw:function(t,e){var r=this,a=r.set,i=r.ctx,o=a.scale,l=a.width*o,n=a.height*o,h=a.lineCount,s=r.fft.bufferSize,f=a.position,d=Math.abs(a.position),c=1==f?0:n,p=n;d<1&&(c=p/=2,p=Math.floor(p*(1+d)),c=Math.floor(0<f?c*(1-d):c*(1+d)));for(var u=r.lastH,v=r.stripesH,w=Math.ceil(p/(a.fallDuration/(1e3/a.fps))),g=Math.ceil(p/(a.stripeFallDuration/(1e3/a.fps))),m=a.stripeMargin*o,M=1<<(Math.round(Math.log(s)/Math.log(2)+3)<<1),b=Math.log(M)/Math.log(10),L=20*Math.log(32767)/Math.log(10),y=s/2,S=Math.min(y,Math.floor(5e3*y/(e/2))),C=S==y,H=C?h:Math.round(.8*h),R=S/H,D=C?0:(y-S)/(h-H),x=0,F=0;F<h;F++){var T=Math.ceil(x);x+=F<H?R:D;for(var B=Math.min(Math.ceil(x),y),E=0,j=T;j<B;j++)E=Math.max(E,Math.abs(t[j]));var I=M<E?Math.floor(17*(Math.log(E)/Math.log(10)-b)):0,q=p*Math.min(I/L,1);u[F]=(u[F]||0)-w,q<u[F]&&(q=u[F]),q<0&&(q=0),u[F]=q;var z=v[F]||0;if(q&&z<q+m)v[F]=q+m;else{var P=z-g;P<0&&(P=0),v[F]=P}}i.clearRect(0,0,l,n);var W=r.genLinear(i,a.linear,c,c-p),k=a.stripeLinear&&r.genLinear(i,a.stripeLinear,c,c-p)||W,A=r.genLinear(i,a.linear,c,c+p),G=a.stripeLinear&&r.genLinear(i,a.stripeLinear,c,c+p)||A;i.shadowBlur=a.shadowBlur*o,i.shadowColor=a.shadowColor;var V=a.mirrorEnable,J=V?2*h-1:h,K=a.widthRatio,N=a.spaceWidth*o;0!=N&&(K=(l-N*(J+1))/l);for(var O=Math.max(1*o,Math.floor(l*K/J)),Q=(l-J*O)/(J+1),U=a.minHeight*o,X=V?l/2-(Q+O/2):0,Y=(F=0,X);F<h;F++)Y+=Q,$=Math.floor(Y),q=Math.max(u[F],U),0!=c&&(_=c-q,i.fillStyle=W,i.fillRect($,_,O,q)),c!=n&&(i.fillStyle=A,i.fillRect($,c,O,q)),Y+=O;if(a.stripeEnable){var Z=a.stripeShadowBlur;i.shadowBlur=(-1==Z?a.shadowBlur:Z)*o,i.shadowColor=a.stripeShadowColor||a.shadowColor;var 
$,_,tt=a.stripeHeight*o;for(F=0,Y=X;F<h;F++)Y+=Q,$=Math.floor(Y),q=v[F],0!=c&&((_=c-q-tt)<0&&(_=0),i.fillStyle=k,i.fillRect($,_,O,tt)),c!=n&&(n<(_=c+q)+tt&&(_=n-tt),i.fillStyle=G,i.fillRect($,_,O,tt)),Y+=O}if(V){var et=Math.floor(l/2);i.save(),i.scale(-1,1),i.drawImage(r.canvas,Math.ceil(l/2),0,et,n,-et,0,et,n),i.restore()}a.onDraw(t,e)}},Recorder.FrequencyHistogramView=t}();
\ No newline at end of file
/*
录音
https://github.com/xiangyuecn/Recorder
src: extensions/lib.fft.js
*/
Recorder.LibFFT=function(r){"use strict";var s,v,d,l,F,b,g,m;return function(r){var o,t,a,f;for(s=Math.round(Math.log(r)/Math.log(2)),d=((v=1<<s)<<2)*Math.sqrt(2),l=[],F=[],b=[0],g=[0],m=[],o=0;o<v;o++){for(a=o,f=t=0;t!=s;t++)f<<=1,f|=1&a,a>>>=1;m[o]=f}var n,u=2*Math.PI/v;for(o=(v>>1)-1;0<o;o--)n=o*u,g[o]=Math.cos(n),b[o]=Math.sin(n)}(r),{transform:function(r){var o,t,a,f,n,u,e,h,M=1,i=s-1;for(o=0;o!=v;o++)l[o]=r[m[o]],F[o]=0;for(o=s;0!=o;o--){for(t=0;t!=M;t++)for(n=g[t<<i],u=b[t<<i],a=t;a<v;a+=M<<1)e=n*l[f=a+M]-u*F[f],h=n*F[f]+u*l[f],l[f]=l[a]-e,F[f]=F[a]-h,l[a]+=e,F[a]+=h;M<<=1,i--}t=v>>1;var c=new Float64Array(t);for(n=-(u=d),o=t;0!=o;o--)e=l[o],h=F[o],c[o-1]=n<e&&e<u&&n<h&&h<u?0:Math.round(e*e+h*h);return c},bufferSize:v}};
\ No newline at end of file
/*
录音
https://github.com/xiangyuecn/Recorder
src: recorder-core.js
*/
!function(y){"use strict";var h=function(){},A=function(e){return new t(e)};A.IsOpen=function(){var e=A.Stream;if(e){var t=e.getTracks&&e.getTracks()||e.audioTracks||[],n=t[0];if(n){var r=n.readyState;return"live"==r||r==n.LIVE}}return!1},A.BufferSize=4096,A.Destroy=function(){for(var e in M("Recorder Destroy"),g(),n)n[e]()};var n={};A.BindDestroy=function(e,t){n[e]=t},A.Support=function(){var e=y.AudioContext;if(e||(e=y.webkitAudioContext),!e)return!1;var t=navigator.mediaDevices||{};return t.getUserMedia||(t=navigator).getUserMedia||(t.getUserMedia=t.webkitGetUserMedia||t.mozGetUserMedia||t.msGetUserMedia),!!t.getUserMedia&&(A.Scope=t,A.Ctx&&"closed"!=A.Ctx.state||(A.Ctx=new e,A.BindDestroy("Ctx",function(){var e=A.Ctx;e&&e.close&&(e.close(),A.Ctx=0)})),!0)};var k="ConnectEnableWorklet";A[k]=!1;var d=function(e){var t=(e=e||A).BufferSize||A.BufferSize,r=A.Ctx,n=e.Stream,a=n._m=r.createMediaStreamSource(n),u=n._call,o=function(e,t){if(!t||h)for(var n in u){for(var r=t||e.inputBuffer.getChannelData(0),a=r.length,o=new Int16Array(a),s=0,i=0;i<a;i++){var c=Math.max(-1,Math.min(1,r[i]));c=c<0?32768*c:32767*c,o[i]=c,s+=Math.abs(c)}for(var f in u)u[f](o,s);return}else M(l+"多余回调",3)},s="ScriptProcessor",l="audioWorklet",i="Recorder",c=i+" "+l,f="RecProc",p=r.createScriptProcessor||r.createJavaScriptNode,v="。由于"+l+"内部1秒375次回调,在移动端可能会有性能问题导致回调丢失录音变短,PC端无影响,暂不建议开启"+l+"",m=function(){h=n.isWorklet=!1,I(n),M("Connect采用老的"+s+""+(A[k]?"但已":"")+"设置"+i+"."+k+"=true尝试启用"+l+v,3);var e=n._p=p.call(r,t,1,1);a.connect(e),e.connect(r.destination),e.onaudioprocess=function(e){o(e)}},h=n.isWorklet=!p||A[k],d=y.AudioWorkletNode;if(h&&r[l]&&d){var g,S=function(){return h&&n._na},_=n._na=function(){""!==g&&(clearTimeout(g),g=setTimeout(function(){g=0,M(l+"未返回任何音频,恢复使用"+s,3),S()&&p&&m()},500))},C=function(){if(S()){var e=n._n=new d(r,f,{processorOptions:{bufferSize:t}});a.connect(e),e.connect(r.destination),e.port.onmessage=function(e){g&&(clearTimeout(g),g=""),o(0,e.data.val)},M("Connect采用"+l+"方式,设置"+i+"."+k+"=false可恢复老式"+s+v,3)}};r.resume()[u&&"finally"](function(){if(S())if(r[f])C();else{var e,t,n=(t="class "+f+" extends AudioWorkletProcessor{",t+="constructor "+(e=function(e){return e.toString().replace(/^function|DEL_/g,"").replace(/\$RA/g,c)})(function(e){DEL_super(e);var t=this,n=e.processorOptions.bufferSize;t.bufferSize=n,t.buffer=new Float32Array(2*n),t.pos=0,t.port.onmessage=function(e){e.data.kill&&(t.kill=!0,console.log("$RA kill call"))},console.log("$RA .ctor call",e)}),t+="process "+e(function(e,t,n){var r=this,a=r.bufferSize,o=r.buffer,s=r.pos;if((e=(e[0]||[])[0]||[]).length){o.set(e,s);var i=~~((s+=e.length)/a)*a;if(i){this.port.postMessage({val:o.slice(0,i)});var c=o.subarray(i,s);(o=new Float32Array(2*a)).set(c),s=c.length,r.buffer=o}r.pos=s}return!r.kill}),t+='}try{registerProcessor("'+f+'", '+f+')}catch(e){console.error("'+c+'注册失败",e)}',"data:text/javascript;base64,"+btoa(unescape(encodeURIComponent(t))));r[l].addModule(n).then(function(e){S()&&(r[f]=1,C(),g&&_())})[u&&"catch"](function(e){M(l+".addModule失败",1,e),S()&&m()})}})}else m()},I=function(e){e._na=null,e._n&&(e._n.port.postMessage({kill:!0}),e._n.disconnect(),e._n=null)},g=function(e){var t=(e=e||A)==A,n=e.Stream;if(n&&(n._m&&(n._m.disconnect(),n._m=null),n._p&&(n._p.disconnect(),n._p.onaudioprocess=n._p=null),I(n),t)){for(var r=n.getTracks&&n.getTracks()||n.audioTracks||[],a=0;a<r.length;a++){var o=r[a];o.stop&&o.stop()}n.stop&&n.stop()}e.Stream=0};A.SampleData=function(e,t,n,r,a){r||(r={});var 
o=r.index||0,s=r.offset||0,i=r.frameNext||[];a||(a={});var c=a.frameSize||1;a.frameType&&(c="mp3"==a.frameType?1152:1);for(var f=0,u=o;u<e.length;u++)f+=e[u].length;f=Math.max(0,f-Math.floor(s));var l=t/n;1<l?f=Math.floor(f/l):(l=1,n=t),f+=i.length;for(var p=new Int16Array(f),v=0,u=0;u<i.length;u++)p[v]=i[u],v++;for(var m=e.length;o<m;o++){for(var h=e[o],u=s,d=h.length;u<d;){var g=Math.floor(u),S=Math.ceil(u),_=u-g,C=h[g],y=S<d?h[S]:(e[o+1]||[C])[0]||0;p[v]=C+(y-C)*_,v++,u+=l}s=u-d}i=null;var k=p.length%c;if(0<k){var I=2*(p.length-k);i=new Int16Array(p.buffer.slice(I)),p=new Int16Array(p.buffer.slice(0,I))}return{index:o,offset:s,frameNext:i,sampleRate:n,data:p}},A.PowerLevel=function(e,t){var n=e/t||0;return n<1251?Math.round(n/1250*10):Math.round(Math.min(100,Math.max(0,100*(1+Math.log(n/1e4)/Math.log(10)))))};var M=function(e,t){var n=new Date,r=("0"+n.getMinutes()).substr(-2)+":"+("0"+n.getSeconds()).substr(-2)+"."+("00"+n.getMilliseconds()).substr(-3),a=this&&this.envIn&&this.envCheck&&this.id,o=["["+r+" Recorder"+(a?":"+a:"")+"]"+e],s=arguments,i=y.console||{},c=2,f=i.log;for("number"==typeof t?f=1==t?i.error:3==t?i.warn:f:c=1;c<s.length;c++)o.push(s[c]);u?f&&f("[IsLoser]"+o[0],1<o.length?o:""):f.apply(i,o)},u=!0;try{u=!console.log.apply}catch(e){}A.CLog=M;var r=0;function t(e){this.id=++r,A.Traffic&&A.Traffic();var t={type:"mp3",bitRate:16,sampleRate:16e3,onProcess:h};for(var n in e)t[n]=e[n];this.set=t,this._S=9,this.Sync={O:9,C:9}}A.Sync={O:9,C:9},A.prototype=t.prototype={CLog:M,_streamStore:function(){return this.set.sourceStream?this:A},open:function(e,n){var r=this,t=r._streamStore();e=e||h;var a=function(e,t){t=!!t,r.CLog("录音open失败:"+e+",isUserNotAllow:"+t,1),n&&n(e,t)},o=function(){r.CLog("open ok id:"+r.id),e(),r._SO=0},s=t.Sync,i=++s.O,c=s.C;r._O=r._O_=i,r._SO=r._S;var f=function(){if(c!=s.C||!r._O){var e="open被取消";return i==s.O?r.close():e="open被中断",a(e),!0}},u=r.envCheck({envName:"H5",canProcess:!0});if(u)a("不能录音:"+u);else if(r.set.sourceStream){if(!A.Support())return void a("不支持此浏览器从流中获取录音");g(t),r.Stream=r.set.sourceStream,r.Stream._call={};try{d(t)}catch(e){return void a("从流中打开录音失败:"+e.message)}o()}else{var l=function(e,t){try{y.top.a}catch(e){return void a('无权录音(跨域,请尝试给iframe添加麦克风访问策略,如allow="camera;microphone")')}/Permission|Allow/i.test(e)?a("用户拒绝了录音权限",!0):!1===y.isSecureContext?a("无权录音(需https)"):/Found/i.test(e)?a(t+",无可用麦克风"):a(t)};if(A.IsOpen())o();else if(A.Support()){var p=function(e){(A.Stream=e)._call={},f()||setTimeout(function(){f()||(A.IsOpen()?(d(),o()):a("录音功能无效:无音频流"))},100)},v=function(e){var t=e.name||e.message||e.code+":"+e;r.CLog("请求录音权限错误",1,e),l(t,"无法录音:"+t)},m=A.Scope.getUserMedia({audio:r.set.audioTrackSet||!0},p,v);m&&m.then&&m.then(p)[e&&"catch"](v)}else l("","此浏览器不支持录音")}},close:function(e){e=e||h;var t=this,n=t._streamStore();t._stop();var r=n.Sync;if(t._O=0,t._O_!=r.O)return t.CLog("close被忽略(因为同时open了多个rec,只有最后一个会真正close)",3),void e();r.C++,g(n),t.CLog("close"),e()},mock:function(e,t){var n=this;return n._stop(),n.isMock=1,n.mockEnvInfo=null,n.buffers=[e],n.recSize=e.length,n.srcSampleRate=t,n},envCheck:function(e){var t,n=this.set;return t||(this[n.type+"_envCheck"]?t=this[n.type+"_envCheck"](e,n):n.takeoffEncodeChunk&&(t=n.type+"类型不支持设置takeoffEncodeChunk")),t||""},envStart:function(e,t){var n=this,r=n.set;if(n.isMock=e?1:0,n.mockEnvInfo=e,n.buffers=[],n.recSize=0,n.envInLast=0,n.envInFirst=0,n.envInFix=0,n.envInFixTs=[],r.sampleRate=Math.min(t,r.sampleRate),n.srcSampleRate=t,n.engineCtx=0,n[r.type+"_start"]){var 
a=n.engineCtx=n[r.type+"_start"](r);a&&(a.pcmDatas=[],a.pcmSize=0)}},envResume:function(){this.envInFixTs=[]},envIn:function(e,t){var a=this,o=a.set,s=a.engineCtx,n=a.srcSampleRate,r=e.length,i=A.PowerLevel(t,r),c=a.buffers,f=c.length;c.push(e);var u=c,l=f,p=Date.now(),v=Math.round(r/n*1e3);a.envInLast=p,1==a.buffers.length&&(a.envInFirst=p-v);var m=a.envInFixTs;m.splice(0,0,{t:p,d:v});for(var h=p,d=0,g=0;g<m.length;g++){var S=m[g];if(3e3<p-S.t){m.length=g;break}h=S.t,d+=S.d}var _=m[1],C=p-h;if(C/3<C-d&&(_&&1e3<C||6<=m.length)){var y=p-_.t-v;if(v/5<y){var k=!o.disableEnvInFix;if(a.CLog("["+p+"]"+(k?"":"")+"补偿"+y+"ms",3),a.envInFix+=y,k){var I=new Int16Array(y*n/1e3);r+=I.length,c.push(I)}}}var M=a.recSize,x=r,b=M+x;if(a.recSize=b,s){var R=A.SampleData(c,n,o.sampleRate,s.chunkInfo);s.chunkInfo=R,b=(M=s.pcmSize)+(x=R.data.length),s.pcmSize=b,c=s.pcmDatas,f=c.length,c.push(R.data),n=R.sampleRate}var L=Math.round(b/n*1e3),w=c.length,T=u.length,z=function(){for(var e=O?0:-x,t=null==c[0],n=f;n<w;n++){var r=c[n];null==r?t=1:(e+=r.length,s&&r.length&&a[o.type+"_encode"](s,r))}if(t&&s)for(n=l,u[0]&&(n=0);n<T;n++)u[n]=null;t&&(e=O?x:0,c[0]=null),s?s.pcmSize+=e:a.recSize+=e},O=o.onProcess(c,i,L,n,f,z);if(!0===O){var D=0;for(g=f;g<w;g++)null==c[g]?D=1:c[g]=new Int16Array(0);D?a.CLog("未进入异步前不能清除buffers",3):s?s.pcmSize-=x:a.recSize-=x}else z()},start:function(){var e=this,t=A.Ctx,n=1;if(e.set.sourceStream?e.Stream||(n=0):A.IsOpen()||(n=0),n)if(e.CLog("开始录音"),e._stop(),e.state=0,e.envStart(null,t.sampleRate),e._SO&&e._SO+1!=e._S)e.CLog("start被中断",3);else{e._SO=0;var r=function(){e.state=1,e.resume()};"suspended"==t.state?(e.CLog("wait ctx resume..."),e.state=3,t.resume().then(function(){e.CLog("ctx resume"),3==e.state&&r()})):r()}else e.CLog("未open",1)},pause:function(){var e=this;e.state&&(e.state=2,e.CLog("pause"),delete e._streamStore().Stream._call[e.id])},resume:function(){var e,n=this;if(n.state){n.state=1,n.CLog("resume"),n.envResume();var t=n._streamStore();t.Stream._call[n.id]=function(e,t){1==n.state&&n.envIn(e,t)},(e=(t||A).Stream)._na&&e._na()}},_stop:function(e){var t=this,n=t.set;t.isMock||t._S++,t.state&&(t.pause(),t.state=0),!e&&t[n.type+"_stop"]&&(t[n.type+"_stop"](t.engineCtx),t.engineCtx=0)},stop:function(n,t,e){var r,a=this,o=a.set;a.CLog("stop "+(a.envInLast?a.envInLast-a.envInFirst+"ms 补"+a.envInFix+"ms":"-"));var s=function(){a._stop(),e&&a.close()},i=function(e){a.CLog("结束录音失败:"+e,1),t&&t(e),s()},c=function(e,t){if(a.CLog("结束录音 编码"+(Date.now()-r)+"ms 音频"+t+"ms/"+e.size+"b"),o.takeoffEncodeChunk)a.CLog("启用takeoffEncodeChunk后stop返回的blob长度为0不提供音频数据",3);else if(e.size<Math.max(100,t/2))return void i("生成的"+o.type+"无效");n&&n(e,t),s()};if(!a.isMock){var f=3==a.state;if(!a.state||f)return void i("未开始录音"+(f?",开始录音前无用户交互导致AudioContext未运行":""));a._stop(!0)}var u=a.recSize;if(u)if(a.buffers[0])if(a[o.type]){if(a.isMock){var l=a.envCheck(a.mockEnvInfo||{envName:"mock",canProcess:!1});if(l)return void i("录音错误:"+l)}var p=a.engineCtx;if(a[o.type+"_complete"]&&p){var v=Math.round(p.pcmSize/o.sampleRate*1e3);return r=Date.now(),void a[o.type+"_complete"](p,function(e){c(e,v)},i)}r=Date.now();var m=A.SampleData(a.buffers,a.srcSampleRate,o.sampleRate);o.sampleRate=m.sampleRate;var h=m.data;v=Math.round(h.length/o.sampleRate*1e3),a.CLog("采样"+u+"->"+h.length+" 花:"+(Date.now()-r)+"ms"),setTimeout(function(){r=Date.now(),a[o.type](h,function(e){c(e,v)},function(e){i(e)})})}else i("未加载"+o.type+"编码器");else i("音频buffers被释放");else i("未采集到录音")}},y.Recorder&&y.Recorder.Destroy(),(y.Recorder=A).LM="2022-03-05 
11:53:19",A.TrafficImgUrl="//ia.51.la/go1?id=20469973&pvFlag=1",A.Traffic=function(){var e=A.TrafficImgUrl;if(e){var t=A.Traffic,n=location.href.replace(/#.*/,"");if(0==e.indexOf("//")&&(e=/^https:/i.test(n)?"https:"+e:"http:"+e),!t[n]){t[n]=1;var r=new Image;r.src=e,M("Traffic Analysis Image: Recorder.TrafficImgUrl="+A.TrafficImgUrl)}}}}(window),"function"==typeof define&&define.amd&&define(function(){return Recorder}),"object"==typeof module&&module.exports&&(module.exports=Recorder);
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>PaddleSpeech Serving - Real-time Speech Recognition</title>
<link rel="shortcut icon" href="./static/paddle.ico">
<script src="../static/js/jquery-3.2.1.min.js"></script>
<script src="../static/js/recorder/recorder-core.js"></script>
<script src="../static/js/recorder/extensions/lib.fft.js"></script>
<script src="../static/js/recorder/extensions/frequency.histogram.view.js"></script>
<script src="../static/js/recorder/engine/pcm.js"></script>
<script src="../static/js/SoundRecognizer.js"></script>
<link rel="stylesheet" href="../static/css/style.css">
<link rel="stylesheet" href="../static/css/font-awesome.min.css">
</head>
<body>
<div class="asr-content">
<div class="audio-banner">
<div class="weaper">
<div class="text-content">
<p><span class="title">About PaddleSpeech Serving</span></p>
<p class="con-container">
<span class="con">PaddleSpeech is an open-source speech model toolkit based on PaddlePaddle, covering key tasks in speech and audio processing. PaddleSpeech Serving is a client/server backend service for these speech models, built on Python and FastAPI; it unifies the speech operators in PaddleSpeech and exposes them as a backend service.</span>
</p>
</div>
<div class="img-con">
<img src="../static/image/PaddleSpeech_logo.png" alt="" />
</div>
</div>
</div>
<div class="audio-experience">
<div class="asr-box">
<h2>Try It Out</h2>
<div id="client-word-recorder" style="position: relative;">
<div class="pd">
<div style="text-align:center;height:20px;width:100%;
border:0px solid #bcbcbc;color:#000;box-sizing: border-box;display:inline-block"
class="recwave">
</div>
</div>
</div>
<div class="voice-container">
<div class="voice-input">
<span>WebSocket URL:</span>
<input type="text" id="socketUrl" class="websocket-url" value="ws://127.0.0.1:8091/ws/asr"
placeholder="Enter the server address, e.g. ws://127.0.0.1:8091/ws/asr">
<div class="start-voice">
<button type="primary" id="beginBtn" class="voice-btn">
<span class="fa fa-microphone"> Start Recognition</span>
</button>
<button type="primary" id="endBtn" class="voice-btn end">
<span class="fa fa-microphone-slash"> Stop Recognition</span>
</button>
<div id="timeBox" class="time-box flex-display-1">
<span class="total-time">Recognizing; stops automatically in <i id="timeCount"></i> s</span>
</div>
</div>
</div>
<div class="voice">
<div class="result-text" id="resultPanel">Recognition results will appear here</div>
</div>
</div>
</div>
</div>
</div>
<script>
var wenetWs = null
var timeLoop = null
var result = ""
$(document).ready(function () {
$('#beginBtn').on('click', startRecording)
$('#endBtn').on('click', stopRecording)
})
function openWebSocket(url) {
if ("WebSocket" in window) {
wenetWs = new WebSocket(url)
wenetWs.onopen = function () {
console.log("WebSocket connected, starting recognition")
wenetWs.send(JSON.stringify({
"signal": "start"
}))
}
wenetWs.onmessage = function (_msg) { parseResult(_msg.data) }
wenetWs.onclose = function () {
console.log("WebSocket disconnected")
}
wenetWs.onerror = function () { console.log("WebSocket connection failed") }
}
}
function parseResult(data) {
// The server replies with JSON whose `result` field holds the transcript
var res = JSON.parse(data)
console.log('result json:', res)
$("#resultPanel").html(res.result)
}
function TransferUpload(number, blobOrNull, duration, blobRec, isClose) {
// Forward each recorded PCM chunk to the server as a binary WebSocket frame
if (blobOrNull) {
var reader = new FileReader()
reader.onloadend = function () { wenetWs.send(reader.result) }
reader.readAsArrayBuffer(blobOrNull)
}
}
function startRecording() {
// Check socket url
var socketUrl = $('#socketUrl').val()
if (!socketUrl.trim()) {
alert('Please enter the WebSocket server address, e.g. ws://127.0.0.1:8091/ws/asr')
$('#socketUrl').focus()
return
}
// init recorder
SoundRecognizer.init({
soundType: 'pcm',
sampleRate: 16000,
recwaveElm: '.recwave',
translerCallBack: TransferUpload
})
openWebSocket(socketUrl)
// Change button state
$('#beginBtn').hide()
$('#endBtn, #timeBox').addClass('show')
// Start countdown
var seconds = 180
$('#timeCount').text(seconds)
timeLoop = setInterval(function () {
seconds--
$('#timeCount').text(seconds)
if (seconds === 0) {
stopRecording()
}
}, 1000)
}
function stopRecording() {
// Guard against a socket that was never opened or already closed
if (wenetWs && wenetWs.readyState === WebSocket.OPEN) {
wenetWs.send(JSON.stringify({ "signal": "end" }))
}
SoundRecognizer.recordClose()
$('#endBtn').add($('#timeBox')).removeClass('show')
$('#beginBtn').show()
$('#timeCount').text('')
clearInterval(timeLoop)
}
</script>
</body>
</html>
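For reference, the page above drives the ASR service through a small WebSocket protocol: open `ws://<host>:<port>/ws/asr`, send a JSON `{"signal": "start"}` handshake, stream 16 kHz 16-bit mono PCM as binary frames, read JSON messages whose `result` field carries the current transcript, and finish with `{"signal": "end"}`. The sketch below mirrors that protocol from Python; it is an illustration only. It assumes the third-party `websocket-client` package, a hypothetical raw PCM file named `test.pcm`, and that the server answers each audio chunk with a partial result.

```python
import json
import time

from websocket import create_connection  # pip install websocket-client

URL = "ws://127.0.0.1:8091/ws/asr"  # same endpoint as the demo page
CHUNK = 16000 * 2 // 10             # ~100 ms of 16 kHz 16-bit mono PCM

ws = create_connection(URL)
ws.send(json.dumps({"signal": "start"}))  # open a recognition session

with open("test.pcm", "rb") as f:  # hypothetical raw PCM recording
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        ws.send_binary(chunk)  # stream audio as binary frames
        print(ws.recv())       # partial result, e.g. {"result": "..."}
        time.sleep(0.1)        # roughly real-time pacing

ws.send(json.dumps({"signal": "end"}))  # close the session
print(ws.recv())                        # final result
ws.close()
```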
...@@ -22,6 +22,7 @@ onnxruntime
pandas
paddlenlp
paddlespeech_feat
+Pillow>=9.0.0
praatio==5.0.0
pypinyin
pypinyin-dict
...
...@@ -10,7 +10,7 @@ Acoustic Model | Training Data | Token-based | Size | Descriptions | CER | WER |
[Ds2 Offline Aishell ASR0 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/asr0_deepspeech2_offline_aishell_ckpt_1.0.1.model.tar.gz)| Aishell Dataset | Char-based | 1.4 GB | 2 Conv + 5 bidirectional LSTM layers| 0.0554 |-| 151 h | [Ds2 Offline Aishell ASR0](../../examples/aishell/asr0) | inference/python |
[Conformer Online Wenetspeech ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/wenetspeech/asr1/asr1_chunk_conformer_wenetspeech_ckpt_1.0.0a.model.tar.gz) | WenetSpeech Dataset | Char-based | 457 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring| 0.11 (test\_net) 0.1879 (test\_meeting) |-| 10000 h |- | python |
[Conformer Online Aishell ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/asr1_chunk_conformer_aishell_ckpt_0.2.0.model.tar.gz) | Aishell Dataset | Char-based | 189 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring| 0.0544 |-| 151 h | [Conformer Online Aishell ASR1](../../examples/aishell/asr1) | python |
-[Conformer Offline Aishell ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/asr1_conformer_aishell_ckpt_0.1.2.model.tar.gz) | Aishell Dataset | Char-based | 189 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring | 0.0464 |-| 151 h | [Conformer Offline Aishell ASR1](../../examples/aishell/asr1) | python |
+[Conformer Offline Aishell ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/asr1_conformer_aishell_ckpt_1.0.1.model.tar.gz) | Aishell Dataset | Char-based | 189 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring | 0.0460 |-| 151 h | [Conformer Offline Aishell ASR1](../../examples/aishell/asr1) | python |
[Transformer Aishell ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/asr1_transformer_aishell_ckpt_0.1.1.model.tar.gz) | Aishell Dataset | Char-based | 128 MB | Encoder:Transformer, Decoder:Transformer, Decoding method: Attention rescoring | 0.0523 || 151 h | [Transformer Aishell ASR1](../../examples/aishell/asr1) | python |
[Ds2 Offline Librispeech ASR0 Model](https://paddlespeech.bj.bcebos.com/s2t/librispeech/asr0/asr0_deepspeech2_offline_librispeech_ckpt_1.0.1.model.tar.gz)| Librispeech Dataset | Char-based | 1.3 GB | 2 Conv + 5 bidirectional LSTM layers| - |0.0467| 960 h | [Ds2 Offline Librispeech ASR0](../../examples/librispeech/asr0) | inference/python |
[Conformer Librispeech ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/librispeech/asr1/asr1_conformer_librispeech_ckpt_0.1.1.model.tar.gz) | Librispeech Dataset | subword-based | 191 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring |-| 0.0338 | 960 h | [Conformer Librispeech ASR1](../../examples/librispeech/asr1) | python |
...
...@@ -2,13 +2,13 @@
## Conformer
paddle version: 2.2.2
-paddlespeech version: 0.2.0
+paddlespeech version: 1.0.1
| Model | Params | Config | Augmentation| Test set | Decode method | Loss | CER |
| --- | --- | --- | --- | --- | --- | --- | --- |
-| conformer | 47.07M | conf/conformer.yaml | spec_aug | test | attention | - | 0.0530 |
+| conformer | 47.07M | conf/conformer.yaml | spec_aug | test | attention | - | 0.0522 |
-| conformer | 47.07M | conf/conformer.yaml | spec_aug | test | ctc_greedy_search | - | 0.0495 |
+| conformer | 47.07M | conf/conformer.yaml | spec_aug | test | ctc_greedy_search | - | 0.0481 |
-| conformer | 47.07M | conf/conformer.yaml | spec_aug| test | ctc_prefix_beam_search | - | 0.0494 |
+| conformer | 47.07M | conf/conformer.yaml | spec_aug| test | ctc_prefix_beam_search | - | 0.0480 |
-| conformer | 47.07M | conf/conformer.yaml | spec_aug | test | attention_rescoring | - | 0.0464 |
+| conformer | 47.07M | conf/conformer.yaml | spec_aug | test | attention_rescoring | - | 0.0460 |
## Conformer Streaming
...
...@@ -57,7 +57,7 @@ feat_dim: 80
stride_ms: 10.0
window_ms: 25.0
sortagrad: 0 # Feed samples from shortest to longest ; -1: enabled for all epochs, 0: disabled, other: enabled for 'other' epochs
-batch_size: 64
+batch_size: 32
maxlen_in: 512 # if input length > maxlen-in, batchsize is automatically reduced
maxlen_out: 150 # if output length > maxlen-out, batchsize is automatically reduced
minibatches: 0 # for debug
...@@ -73,10 +73,10 @@ num_encs: 1
###########################################
# Training #
###########################################
-n_epoch: 240
+n_epoch: 150
-accum_grad: 2
+accum_grad: 8
global_grad_clip: 5.0
-dist_sampler: True
+dist_sampler: False
optim: adam
optim_conf:
lr: 0.002
...
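Taken together, the batch changes above raise the effective samples per optimizer step on each worker from 64 × 2 = 128 to 32 × 8 = 256, while the smaller `batch_size` lowers per-step GPU memory use.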
...@@ -144,3 +144,34 @@ optional arguments:
6. `--ngpu` is the number of gpus to use, if ngpu == 0, use cpu.
## Pretrained Model
The pretrained model can be downloaded here:
- [vits_csmsc_ckpt_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/vits/vits_csmsc_ckpt_1.1.0.zip) (add_blank=true)
The VITS checkpoint contains the files listed below.
```text
vits_csmsc_ckpt_1.1.0
├── default.yaml # default config used to train vits
├── phone_id_map.txt # phone vocabulary file when training vits
└── snapshot_iter_350000.pdz # model parameters and optimizer states
```
Note: this checkpoint is not good enough yet; a better-performing one is still being trained.
You can use the following script to synthesize speech for `${BIN_DIR}/../sentences.txt` with the pretrained VITS model.
```bash
source path.sh
add_blank=true
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--config=vits_csmsc_ckpt_1.1.0/default.yaml \
--ckpt=vits_csmsc_ckpt_1.1.0/snapshot_iter_350000.pdz \
--phones_dict=vits_csmsc_ckpt_1.1.0/phone_id_map.txt \
--output_dir=exp/default/test_e2e \
--text=${BIN_DIR}/../sentences.txt \
--add-blank=${add_blank}
```
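After the script finishes, the synthesized wav files are written to the directory given by `--output_dir`, i.e. `exp/default/test_e2e` in the command above.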
...@@ -3,6 +3,11 @@
config_path=$1
train_output_path=$2
+# install monotonic_align
+cd ${MAIN_ROOT}/paddlespeech/t2s/models/vits/monotonic_align
+python3 setup.py build_ext --inplace
+cd -
python3 ${BIN_DIR}/train.py \
--train-metadata=dump/train/norm/metadata.jsonl \
--dev-metadata=dump/dev/norm/metadata.jsonl \
...
...@@ -74,7 +74,7 @@ if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
# convert the m4a to wav
# and we will not delete the original m4a file
echo "start to convert the m4a to wav"
-bash local/convert.sh ${TARGET_DIR}/voxceleb/vox2/test/ || exit 1;
+bash local/convert.sh ${TARGET_DIR}/voxceleb/vox2/ || exit 1;
if [ $? -ne 0 ]; then
echo "Convert voxceleb2 dataset from m4a to wav failed. Terminated."
...
...@@ -14,10 +14,8 @@
# Modified from espnet(https://github.com/espnet/espnet)
"""Spec Augment module for preprocessing i.e., data augmentation"""
import random
import numpy
from PIL import Image
-from PIL.Image import BICUBIC
from .functional import FuncTrans
...@@ -46,9 +44,10 @@ def time_warp(x, max_time_warp=80, inplace=False, mode="PIL"):
warped = random.randrange(center - window, center +
window) + 1 # 1 ... t - 1
-left = Image.fromarray(x[:center]).resize((x.shape[1], warped), BICUBIC)
+left = Image.fromarray(x[:center]).resize((x.shape[1], warped),
+Image.BICUBIC)
right = Image.fromarray(x[center:]).resize((x.shape[1], t - warped),
-BICUBIC)
+Image.BICUBIC)
if inplace:
x[:warped] = left
x[warped:] = right
...
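For context, `time_warp` above stretches one side of the spectrogram's time axis and compresses the other, so the total frame count is unchanged. A quick usage sketch (it assumes `time_warp` is imported from the module shown above; the random array stands in for a real log-mel spectrogram):

```python
import numpy as np

# hypothetical input: a log-mel spectrogram with 100 frames and 80 mel bins
spec = np.random.randn(100, 80).astype("float32")

# warp the time axis by up to 5 frames around a random center
warped = time_warp(spec, max_time_warp=5, inplace=False, mode="PIL")
assert warped.shape == spec.shape  # the frame count is preserved
```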
...@@ -133,11 +133,11 @@ class ASRExecutor(BaseExecutor):
"""
Init model and other resources from a specific path.
"""
-logger.info("start to init the model")
+logger.debug("start to init the model")
# default max_len: unit:second
self.max_len = 50
if hasattr(self, 'model'):
-logger.info('Model had been initialized.')
+logger.debug('Model had been initialized.')
return
if cfg_path is None or ckpt_path is None:
...@@ -151,15 +151,15 @@ class ASRExecutor(BaseExecutor):
self.ckpt_path = os.path.join(
self.res_path,
self.task_resource.res_dict['ckpt_path'] + ".pdparams")
-logger.info(self.res_path)
+logger.debug(self.res_path)
else:
self.cfg_path = os.path.abspath(cfg_path)
self.ckpt_path = os.path.abspath(ckpt_path + ".pdparams")
self.res_path = os.path.dirname(
os.path.dirname(os.path.abspath(self.cfg_path)))
-logger.info(self.cfg_path)
-logger.info(self.ckpt_path)
+logger.debug(self.cfg_path)
+logger.debug(self.ckpt_path)
#Init body.
self.config = CfgNode(new_allowed=True)
...@@ -216,7 +216,7 @@ class ASRExecutor(BaseExecutor):
max_len = self.config.encoder_conf.max_len
self.max_len = frame_shift_ms * max_len * subsample_rate
-logger.info(
+logger.debug(
f"The asr server limit max duration len: {self.max_len}")
def preprocess(self, model_type: str, input: Union[str, os.PathLike]):
...@@ -227,15 +227,15 @@ class ASRExecutor(BaseExecutor):
audio_file = input
if isinstance(audio_file, (str, os.PathLike)):
-logger.info("Preprocess audio_file:" + audio_file)
+logger.debug("Preprocess audio_file:" + audio_file)
# Get the object for feature extraction
if "deepspeech2" in model_type or "conformer" in model_type or "transformer" in model_type:
-logger.info("get the preprocess conf")
+logger.debug("get the preprocess conf")
preprocess_conf = self.config.preprocess_config
preprocess_args = {"train": False}
preprocessing = Transformation(preprocess_conf)
-logger.info("read the audio file")
+logger.debug("read the audio file")
audio, audio_sample_rate = soundfile.read(
audio_file, dtype="int16", always_2d=True)
if self.change_format:
...@@ -255,7 +255,7 @@ class ASRExecutor(BaseExecutor):
else:
audio = audio[:, 0]
-logger.info(f"audio shape: {audio.shape}")
+logger.debug(f"audio shape: {audio.shape}")
# fbank
audio = preprocessing(audio, **preprocess_args)
...@@ -264,19 +264,19 @@ class ASRExecutor(BaseExecutor):
self._inputs["audio"] = audio
self._inputs["audio_len"] = audio_len
-logger.info(f"audio feat shape: {audio.shape}")
+logger.debug(f"audio feat shape: {audio.shape}")
else:
raise Exception("wrong type")
-logger.info("audio feat process success")
+logger.debug("audio feat process success")
@paddle.no_grad()
def infer(self, model_type: str):
"""
Model inference and result stored in self.output.
"""
-logger.info("start to infer the model to get the output")
+logger.debug("start to infer the model to get the output")
cfg = self.config.decode
audio = self._inputs["audio"]
audio_len = self._inputs["audio_len"]
...@@ -293,7 +293,7 @@ class ASRExecutor(BaseExecutor):
self._outputs["result"] = result_transcripts[0]
elif "conformer" in model_type or "transformer" in model_type:
-logger.info(
+logger.debug(
f"we will use the transformer like model : {model_type}")
try:
result_transcripts = self.model.decode(
...@@ -352,7 +352,7 @@ class ASRExecutor(BaseExecutor):
logger.error("Please input the right audio file path")
return False
-logger.info("checking the audio file format......")
+logger.debug("checking the audio file format......")
try:
audio, audio_sample_rate = soundfile.read(
audio_file, dtype="int16", always_2d=True)
...@@ -374,7 +374,7 @@ class ASRExecutor(BaseExecutor):
sox input_audio.xx --rate 8k --bits 16 --channels 1 output_audio.wav \n \
")
return False
-logger.info("The sample rate is %d" % audio_sample_rate)
+logger.debug("The sample rate is %d" % audio_sample_rate)
if audio_sample_rate != self.sample_rate:
logger.warning("The sample rate of the input file is not {}.\n \
The program will resample the wav file to {}.\n \
...@@ -383,28 +383,28 @@ class ASRExecutor(BaseExecutor):
".format(self.sample_rate, self.sample_rate))
if force_yes is False:
while (True):
-logger.info(
+logger.debug(
"Whether to change the sample rate and the channel. Y: change the sample. N: exit the prgream." "Whether to change the sample rate and the channel. Y: change the sample. N: exit the prgream."
)
content = input("Input(Y/N):")
if content.strip() == "Y" or content.strip(
) == "y" or content.strip() == "yes" or content.strip(
) == "Yes":
-logger.info(
+logger.debug(
"change the sampele rate, channel to 16k and 1 channel" "change the sampele rate, channel to 16k and 1 channel"
)
break
elif content.strip() == "N" or content.strip(
) == "n" or content.strip() == "no" or content.strip(
) == "No":
-logger.info("Exit the program")
+logger.debug("Exit the program")
return False
else:
logger.warning("Not regular input, please input again")
self.change_format = True
else:
-logger.info("The audio file format is right")
+logger.debug("The audio file format is right")
self.change_format = False
return True
...
...@@ -92,7 +92,7 @@ class CLSExecutor(BaseExecutor):
Init model and other resources from a specific path.
"""
if hasattr(self, 'model'):
-logger.info('Model had been initialized.')
+logger.debug('Model had been initialized.')
return
if label_file is None or ckpt_path is None:
...@@ -135,14 +135,14 @@ class CLSExecutor(BaseExecutor):
Input content can be a text(tts), a file(asr, cls) or a streaming(not supported yet).
"""
feat_conf = self._conf['feature']
-logger.info(feat_conf)
+logger.debug(feat_conf)
waveform, _ = load(
file=audio_file,
sr=feat_conf['sample_rate'],
mono=True,
dtype='float32')
if isinstance(audio_file, (str, os.PathLike)):
-logger.info("Preprocessing audio_file:" + audio_file)
+logger.debug("Preprocessing audio_file:" + audio_file)
# Feature extraction
feature_extractor = LogMelSpectrogram(
...
...@@ -61,7 +61,7 @@ def _get_unique_endpoints(trainer_endpoints):
continue
ips.add(ip)
unique_endpoints.add(endpoint)
-logger.info("unique_endpoints {}".format(unique_endpoints))
+logger.debug("unique_endpoints {}".format(unique_endpoints))
return unique_endpoints
...@@ -96,7 +96,7 @@ def get_path_from_url(url,
# data, and the same ip will only download data once.
unique_endpoints = _get_unique_endpoints(ParallelEnv().trainer_endpoints[:])
if osp.exists(fullpath) and check_exist and _md5check(fullpath, md5sum):
-logger.info("Found {}".format(fullpath))
+logger.debug("Found {}".format(fullpath))
else:
if ParallelEnv().current_endpoint in unique_endpoints:
fullpath = _download(url, root_dir, md5sum, method=method)
...@@ -118,7 +118,7 @@ def _get_download(url, fullname):
try:
req = requests.get(url, stream=True)
except Exception as e: # requests.exceptions.ConnectionError
-logger.info("Downloading {} from {} failed with exception {}".format(
+logger.debug("Downloading {} from {} failed with exception {}".format(
fname, url, str(e)))
return False
...@@ -190,7 +190,7 @@ def _download(url, path, md5sum=None, method='get'):
fullname = osp.join(path, fname)
retry_cnt = 0
-logger.info("Downloading {} from {}".format(fname, url))
+logger.debug("Downloading {} from {}".format(fname, url))
while not (osp.exists(fullname) and _md5check(fullname, md5sum)):
if retry_cnt < DOWNLOAD_RETRY_LIMIT:
retry_cnt += 1
...@@ -209,7 +209,7 @@ def _md5check(fullname, md5sum=None):
if md5sum is None:
return True
-logger.info("File {} md5 checking...".format(fullname))
+logger.debug("File {} md5 checking...".format(fullname))
md5 = hashlib.md5()
with open(fullname, 'rb') as f:
for chunk in iter(lambda: f.read(4096), b""):
...@@ -217,7 +217,7 @@ def _md5check(fullname, md5sum=None):
calc_md5sum = md5.hexdigest()
if calc_md5sum != md5sum:
-logger.info("File {} md5 check failed, {}(calc) != "
+logger.debug("File {} md5 check failed, {}(calc) != "
"{}(base)".format(fullname, calc_md5sum, md5sum))
return False
return True
...@@ -227,7 +227,7 @@ def _decompress(fname):
"""
Decompress for zip and tar file
"""
-logger.info("Decompressing {}...".format(fname))
+logger.debug("Decompressing {}...".format(fname))
# For protecting decompressing interrupted,
# decompress to fpath_tmp directory firstly, if decompress
...
...@@ -217,7 +217,7 @@ class BaseExecutor(ABC):
logging.getLogger(name) for name in logging.root.manager.loggerDict
]
for l in loggers:
-l.disabled = True
+l.setLevel(logging.ERROR)
def show_rtf(self, info: Dict[str, List[float]]):
"""
...
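The one-line change above is worth spelling out: `logger.disabled = True` silences a logger completely, errors included, whereas `setLevel(logging.ERROR)` only filters records below ERROR. A standard-library sketch of the difference:

```python
import logging

noisy = logging.getLogger("third_party")
noisy.addHandler(logging.StreamHandler())

noisy.disabled = True          # old behaviour: nothing gets through
noisy.error("lost")            # silenced, even though it is an error

noisy.disabled = False
noisy.setLevel(logging.ERROR)  # new behaviour: only ERROR and above survive
noisy.info("suppressed")       # below the threshold, dropped
noisy.error("still visible")   # printed to stderr
```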
...@@ -88,7 +88,7 @@ class KWSExecutor(BaseExecutor):
Init model and other resources from a specific path.
"""
if hasattr(self, 'model'):
-logger.info('Model had been initialized.')
+logger.debug('Model had been initialized.')
return
if ckpt_path is None:
...@@ -141,7 +141,7 @@ class KWSExecutor(BaseExecutor):
assert os.path.isfile(audio_file)
waveform, _ = load(audio_file)
if isinstance(audio_file, (str, os.PathLike)):
-logger.info("Preprocessing audio_file:" + audio_file)
+logger.debug("Preprocessing audio_file:" + audio_file)
# Feature extraction
waveform = paddle.to_tensor(waveform).unsqueeze(0)
...
...@@ -49,7 +49,7 @@ class Logger(object):
self.handler.setFormatter(self.format)
self.logger.addHandler(self.handler)
-self.logger.setLevel(logging.DEBUG)
+self.logger.setLevel(logging.INFO)
self.logger.propagate = False
def __call__(self, log_level: str, msg: str):
...
...@@ -110,7 +110,7 @@ class STExecutor(BaseExecutor):
"""
decompressed_path = download_and_decompress(self.kaldi_bins, MODEL_HOME)
decompressed_path = os.path.abspath(decompressed_path)
-logger.info("Kaldi_bins stored in: {}".format(decompressed_path))
+logger.debug("Kaldi_bins stored in: {}".format(decompressed_path))
if "LD_LIBRARY_PATH" in os.environ:
os.environ["LD_LIBRARY_PATH"] += f":{decompressed_path}"
else:
...@@ -128,7 +128,7 @@ class STExecutor(BaseExecutor):
Init model and other resources from a specific path.
"""
if hasattr(self, 'model'):
-logger.info('Model had been initialized.')
+logger.debug('Model had been initialized.')
return
if cfg_path is None or ckpt_path is None:
...@@ -140,8 +140,8 @@ class STExecutor(BaseExecutor):
self.ckpt_path = os.path.join(
self.task_resource.res_dir,
self.task_resource.res_dict['ckpt_path'])
-logger.info(self.cfg_path)
-logger.info(self.ckpt_path)
+logger.debug(self.cfg_path)
+logger.debug(self.ckpt_path)
res_path = self.task_resource.res_dir
else:
self.cfg_path = os.path.abspath(cfg_path)
...@@ -192,7 +192,7 @@ class STExecutor(BaseExecutor):
Input content can be a file(wav).
"""
audio_file = os.path.abspath(wav_file)
-logger.info("Preprocess audio_file:" + audio_file)
+logger.debug("Preprocess audio_file:" + audio_file)
if "fat_st" in model_type:
cmvn = self.config.cmvn_path
...
...@@ -98,7 +98,7 @@ class TextExecutor(BaseExecutor):
Init model and other resources from a specific path.
"""
if hasattr(self, 'model'):
-logger.info('Model had been initialized.')
+logger.debug('Model had been initialized.')
return
self.task = task
...
...@@ -173,16 +173,23 @@ class TTSExecutor(BaseExecutor):
Init model and other resources from a specific path.
"""
if hasattr(self, 'am_inference') and hasattr(self, 'voc_inference'):
-logger.info('Models had been initialized.')
+logger.debug('Models had been initialized.')
return
# am
+if am_ckpt is None or am_config is None or am_stat is None or phones_dict is None:
+use_pretrained_am = True
+else:
+use_pretrained_am = False
am_tag = am + '-' + lang
self.task_resource.set_task_model(
model_tag=am_tag,
model_type=0, # am
+skip_download=not use_pretrained_am,
version=None, # default version
)
-if am_ckpt is None or am_config is None or am_stat is None or phones_dict is None:
+if use_pretrained_am:
self.am_res_path = self.task_resource.res_dir
self.am_config = os.path.join(self.am_res_path,
self.task_resource.res_dict['config'])
...@@ -193,9 +200,9 @@ class TTSExecutor(BaseExecutor):
# must have phones_dict in acoustic
self.phones_dict = os.path.join(
self.am_res_path, self.task_resource.res_dict['phones_dict'])
-logger.info(self.am_res_path)
-logger.info(self.am_config)
-logger.info(self.am_ckpt)
+logger.debug(self.am_res_path)
+logger.debug(self.am_config)
+logger.debug(self.am_ckpt)
else:
self.am_config = os.path.abspath(am_config)
self.am_ckpt = os.path.abspath(am_ckpt)
...@@ -220,13 +227,19 @@ class TTSExecutor(BaseExecutor):
self.speaker_dict = speaker_dict
# voc
+if voc_ckpt is None or voc_config is None or voc_stat is None:
+use_pretrained_voc = True
+else:
+use_pretrained_voc = False
voc_tag = voc + '-' + lang
self.task_resource.set_task_model(
model_tag=voc_tag,
model_type=1, # vocoder
+skip_download=not use_pretrained_voc,
version=None, # default version
)
-if voc_ckpt is None or voc_config is None or voc_stat is None:
+if use_pretrained_voc:
self.voc_res_path = self.task_resource.voc_res_dir
self.voc_config = os.path.join(
self.voc_res_path, self.task_resource.voc_res_dict['config'])
...@@ -235,9 +248,9 @@ class TTSExecutor(BaseExecutor):
self.voc_stat = os.path.join(
self.voc_res_path,
self.task_resource.voc_res_dict['speech_stats'])
-logger.info(self.voc_res_path)
-logger.info(self.voc_config)
-logger.info(self.voc_ckpt)
+logger.debug(self.voc_res_path)
+logger.debug(self.voc_config)
+logger.debug(self.voc_ckpt)
else:
self.voc_config = os.path.abspath(voc_config)
self.voc_ckpt = os.path.abspath(voc_ckpt)
...@@ -254,21 +267,18 @@ class TTSExecutor(BaseExecutor):
with open(self.phones_dict, "r") as f:
phn_id = [line.strip().split() for line in f.readlines()]
vocab_size = len(phn_id)
-print("vocab_size:", vocab_size)
tone_size = None
if self.tones_dict:
with open(self.tones_dict, "r") as f:
tone_id = [line.strip().split() for line in f.readlines()]
tone_size = len(tone_id)
-print("tone_size:", tone_size)
spk_num = None
if self.speaker_dict:
with open(self.speaker_dict, 'rt') as f:
spk_id = [line.strip().split() for line in f.readlines()]
spk_num = len(spk_id)
-print("spk_num:", spk_num)
# frontend
if lang == 'zh':
...@@ -278,7 +288,6 @@ class TTSExecutor(BaseExecutor):
elif lang == 'en':
self.frontend = English(phone_vocab_path=self.phones_dict)
-print("frontend done!")
# acoustic model
odim = self.am_config.n_mels
...@@ -311,7 +320,6 @@ class TTSExecutor(BaseExecutor):
am_normalizer = ZScore(am_mu, am_std)
self.am_inference = am_inference_class(am_normalizer, am)
self.am_inference.eval()
-print("acoustic model done!")
# vocoder
# model: {model_name}_{dataset}
...@@ -334,7 +342,6 @@ class TTSExecutor(BaseExecutor):
voc_normalizer = ZScore(voc_mu, voc_std)
self.voc_inference = voc_inference_class(voc_normalizer, voc)
self.voc_inference.eval()
-print("voc done!")
def preprocess(self, input: Any, *args, **kwargs):
"""
...@@ -375,7 +382,7 @@ class TTSExecutor(BaseExecutor):
text, merge_sentences=merge_sentences)
phone_ids = input_ids["phone_ids"]
else:
-print("lang should in {'zh', 'en'}!")
+logger.error("lang should in {'zh', 'en'}!")
self.frontend_time = time.time() - frontend_st
self.am_time = 0
...
...@@ -117,7 +117,7 @@ class VectorExecutor(BaseExecutor): ...@@ -117,7 +117,7 @@ class VectorExecutor(BaseExecutor):
# stage 2: read the input data and store them as a list # stage 2: read the input data and store them as a list
task_source = self.get_input_source(parser_args.input) task_source = self.get_input_source(parser_args.input)
logger.info(f"task source: {task_source}") logger.debug(f"task source: {task_source}")
# stage 3: process the audio one by one # stage 3: process the audio one by one
# we do action according the task type # we do action according the task type
...@@ -127,13 +127,13 @@ class VectorExecutor(BaseExecutor): ...@@ -127,13 +127,13 @@ class VectorExecutor(BaseExecutor):
try: try:
# extract the speaker audio embedding # extract the speaker audio embedding
if parser_args.task == "spk": if parser_args.task == "spk":
logger.info("do vector spk task") logger.debug("do vector spk task")
res = self(input_, model, sample_rate, config, ckpt_path, res = self(input_, model, sample_rate, config, ckpt_path,
device) device)
task_result[id_] = res task_result[id_] = res
elif parser_args.task == "score": elif parser_args.task == "score":
logger.info("do vector score task") logger.debug("do vector score task")
logger.info(f"input content {input_}") logger.debug(f"input content {input_}")
if len(input_.split()) != 2: if len(input_.split()) != 2:
logger.error( logger.error(
f"vector score task input {input_} wav num is not two," f"vector score task input {input_} wav num is not two,"
...@@ -142,7 +142,7 @@ class VectorExecutor(BaseExecutor): ...@@ -142,7 +142,7 @@ class VectorExecutor(BaseExecutor):
# get the enroll and test embedding # get the enroll and test embedding
enroll_audio, test_audio = input_.split() enroll_audio, test_audio = input_.split()
logger.info( logger.debug(
f"score task, enroll audio: {enroll_audio}, test audio: {test_audio}" f"score task, enroll audio: {enroll_audio}, test audio: {test_audio}"
) )
enroll_embedding = self(enroll_audio, model, sample_rate, enroll_embedding = self(enroll_audio, model, sample_rate,
...@@ -158,8 +158,8 @@ class VectorExecutor(BaseExecutor): ...@@ -158,8 +158,8 @@ class VectorExecutor(BaseExecutor):
has_exceptions = True has_exceptions = True
task_result[id_] = f'{e.__class__.__name__}: {e}' task_result[id_] = f'{e.__class__.__name__}: {e}'
logger.info("task result as follows: ") logger.debug("task result as follows: ")
logger.info(f"{task_result}") logger.debug(f"{task_result}")
# stage 4: process the all the task results # stage 4: process the all the task results
self.process_task_results(parser_args.input, task_result, self.process_task_results(parser_args.input, task_result,
...@@ -207,7 +207,7 @@ class VectorExecutor(BaseExecutor): ...@@ -207,7 +207,7 @@ class VectorExecutor(BaseExecutor):
""" """
if not hasattr(self, "score_func"): if not hasattr(self, "score_func"):
self.score_func = paddle.nn.CosineSimilarity(axis=0) self.score_func = paddle.nn.CosineSimilarity(axis=0)
logger.info("create the cosine score function ") logger.debug("create the cosine score function ")
score = self.score_func( score = self.score_func(
paddle.to_tensor(enroll_embedding), paddle.to_tensor(enroll_embedding),
@@ -244,7 +244,7 @@ class VectorExecutor(BaseExecutor):
            sys.exit(-1)
        # stage 1: set the paddle runtime host device
-        logger.info(f"device type: {device}")
+        logger.debug(f"device type: {device}")
        paddle.device.set_device(device)
        # stage 2: read the specific pretrained model
@@ -283,7 +283,7 @@ class VectorExecutor(BaseExecutor):
        # stage 0: avoid to init the mode again
        self.task = task
        if hasattr(self, "model"):
-            logger.info("Model has been initialized")
+            logger.debug("Model has been initialized")
            return
        # stage 1: get the model and config path
@@ -294,7 +294,7 @@ class VectorExecutor(BaseExecutor):
            sample_rate_str = "16k" if sample_rate == 16000 else "8k"
            tag = model_type + "-" + sample_rate_str
            self.task_resource.set_task_model(tag, version=None)
-            logger.info(f"load the pretrained model: {tag}")
+            logger.debug(f"load the pretrained model: {tag}")
            # get the model from the pretrained list
            # we download the pretrained model and store it in the res_path
            self.res_path = self.task_resource.res_dir
@@ -312,19 +312,19 @@ class VectorExecutor(BaseExecutor):
            self.res_path = os.path.dirname(
                os.path.dirname(os.path.abspath(self.cfg_path)))
-        logger.info(f"start to read the ckpt from {self.ckpt_path}")
-        logger.info(f"read the config from {self.cfg_path}")
-        logger.info(f"get the res path {self.res_path}")
+        logger.debug(f"start to read the ckpt from {self.ckpt_path}")
+        logger.debug(f"read the config from {self.cfg_path}")
+        logger.debug(f"get the res path {self.res_path}")
        # stage 2: read and config and init the model body
        self.config = CfgNode(new_allowed=True)
        self.config.merge_from_file(self.cfg_path)
        # stage 3: get the model name to instance the model network with dynamic_import
-        logger.info("start to dynamic import the model class")
+        logger.debug("start to dynamic import the model class")
        model_name = model_type[:model_type.rindex('_')]
        model_class = self.task_resource.get_model_class(model_name)
-        logger.info(f"model name {model_name}")
+        logger.debug(f"model name {model_name}")
        model_conf = self.config.model
        backbone = model_class(**model_conf)
        model = SpeakerIdetification(
@@ -333,11 +333,11 @@ class VectorExecutor(BaseExecutor):
        self.model.eval()
        # stage 4: load the model parameters
-        logger.info("start to set the model parameters to model")
+        logger.debug("start to set the model parameters to model")
        model_dict = paddle.load(self.ckpt_path)
        self.model.set_state_dict(model_dict)
-        logger.info("create the model instance success")
+        logger.debug("create the model instance success")
    @paddle.no_grad()
    def infer(self, model_type: str):
@@ -349,14 +349,14 @@ class VectorExecutor(BaseExecutor):
        # stage 0: get the feat and length from _inputs
        feats = self._inputs["feats"]
        lengths = self._inputs["lengths"]
-        logger.info("start to do backbone network model forward")
-        logger.info(
+        logger.debug("start to do backbone network model forward")
+        logger.debug(
            f"feats shape:{feats.shape}, lengths shape: {lengths.shape}")
        # stage 1: get the audio embedding
        # embedding from (1, emb_size, 1) -> (emb_size)
        embedding = self.model.backbone(feats, lengths).squeeze().numpy()
-        logger.info(f"embedding size: {embedding.shape}")
+        logger.debug(f"embedding size: {embedding.shape}")
        # stage 2: put the embedding and dim info to _outputs property
        # the embedding type is numpy.array
@@ -380,12 +380,13 @@ class VectorExecutor(BaseExecutor):
        """
        audio_file = input_file
        if isinstance(audio_file, (str, os.PathLike)):
-            logger.info(f"Preprocess audio file: {audio_file}")
+            logger.debug(f"Preprocess audio file: {audio_file}")
        # stage 1: load the audio sample points
        # Note: this process must match the training process
        waveform, sr = load_audio(audio_file)
-        logger.info(f"load the audio sample points, shape is: {waveform.shape}")
+        logger.debug(
+            f"load the audio sample points, shape is: {waveform.shape}")
        # stage 2: get the audio feat
        # Note: Now we only support fbank feature
@@ -396,9 +397,9 @@ class VectorExecutor(BaseExecutor):
                n_mels=self.config.n_mels,
                window_size=self.config.window_size,
                hop_length=self.config.hop_size)
-            logger.info(f"extract the audio feat, shape is: {feat.shape}")
+            logger.debug(f"extract the audio feat, shape is: {feat.shape}")
        except Exception as e:
-            logger.info(f"feat occurs exception {e}")
+            logger.debug(f"feat occurs exception {e}")
            sys.exit(-1)
        feat = paddle.to_tensor(feat).unsqueeze(0)
@@ -411,11 +412,11 @@ class VectorExecutor(BaseExecutor):
        # stage 4: store the feat and length in the _inputs,
        # which will be used in other function
-        logger.info(f"feats shape: {feat.shape}")
+        logger.debug(f"feats shape: {feat.shape}")
        self._inputs["feats"] = feat
        self._inputs["lengths"] = lengths
-        logger.info("audio extract the feat success")
+        logger.debug("audio extract the feat success")
    def _check(self, audio_file: str, sample_rate: int):
        """Check if the model sample match the audio sample rate
@@ -441,7 +442,7 @@ class VectorExecutor(BaseExecutor):
            logger.error("Please input the right audio file path")
            return False
-        logger.info("checking the aduio file format......")
+        logger.debug("checking the aduio file format......")
        try:
            audio, audio_sample_rate = soundfile.read(
                audio_file, dtype="float32", always_2d=True)
@@ -458,7 +459,7 @@ class VectorExecutor(BaseExecutor):
                ")
            return False
-        logger.info(f"The sample rate is {audio_sample_rate}")
+        logger.debug(f"The sample rate is {audio_sample_rate}")
        if audio_sample_rate != self.sample_rate:
            logger.error("The sample rate of the input file is not {}.\n \
@@ -468,6 +469,6 @@
                ".format(self.sample_rate, self.sample_rate))
            sys.exit(-1)
        else:
-            logger.info("The audio file format is right")
+            logger.debug("The audio file format is right")
            return True
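`_check` above reads the file header with soundfile and compares the rate against the model's expected one. A standalone sketch of that validation, assuming a 16k model and a hypothetical `enroll.wav`:

```python
import soundfile

# Sketch of the sample-rate validation above; "enroll.wav" is a
# hypothetical input file and 16000 the assumed model rate.
expected_sr = 16000
audio, audio_sample_rate = soundfile.read(
    "enroll.wav", dtype="float32", always_2d=True)
if audio_sample_rate != expected_sr:
    raise SystemExit(
        f"The sample rate of the input file is not {expected_sr}, "
        f"got {audio_sample_rate}")
```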
@@ -60,6 +60,7 @@ class CommonTaskResource:
    def set_task_model(self,
                       model_tag: str,
                       model_type: int=0,
+                       skip_download: bool=False,
                       version: Optional[str]=None):
        """Set model tag and version of current task.
@@ -83,6 +84,7 @@ class CommonTaskResource:
            self.version = version
            self.res_dict = self.pretrained_models[model_tag][version]
            self._format_path(self.res_dict)
+            if not skip_download:
                self.res_dir = self._fetch(self.res_dict,
                                           self._get_model_dir(model_type))
        else:
@@ -91,6 +93,7 @@ class CommonTaskResource:
            self.voc_version = version
            self.voc_res_dict = self.pretrained_models[model_tag][version]
            self._format_path(self.voc_res_dict)
+            if not skip_download:
                self.voc_res_dir = self._fetch(self.voc_res_dict,
                                               self._get_model_dir(model_type))
...
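With the new `skip_download` flag, a caller that already has the resources locally can resolve the resource dict without triggering `_fetch`. A hedged usage sketch; the constructor arguments and model tag are illustrative (the tag follows the `{model}-{sample_rate}` convention used elsewhere in this commit), not the full API:

```python
# Hypothetical usage of the new flag.
resource = CommonTaskResource(task='vector')
resource.set_task_model(
    'ecapatdnn_voxceleb12-16k',
    model_type=0,
    skip_download=True,   # resolve res_dict only; do not fetch archives
    version=None)
```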
@@ -35,12 +35,6 @@ if __name__ == "__main__":
    # save jit model to
    parser.add_argument(
        "--export_path", type=str, help="path of the jit model to save")
-    parser.add_argument(
-        '--nxpu',
-        type=int,
-        default=0,
-        choices=[0, 1],
-        help="if nxpu == 0 and ngpu == 0, use cpu.")
    args = parser.parse_args()
    print_arguments(args)
...
@@ -35,12 +35,6 @@ if __name__ == "__main__":
    # save asr result to
    parser.add_argument(
        "--result_file", type=str, help="path of save the asr result")
-    parser.add_argument(
-        '--nxpu',
-        type=int,
-        default=0,
-        choices=[0, 1],
-        help="if nxpu == 0 and ngpu == 0, use cpu.")
    args = parser.parse_args()
    print_arguments(args, globals())
...
@@ -38,12 +38,6 @@ if __name__ == "__main__":
    #load jit model from
    parser.add_argument(
        "--export_path", type=str, help="path of the jit model to save")
-    parser.add_argument(
-        '--nxpu',
-        type=int,
-        default=0,
-        choices=[0, 1],
-        help="if nxpu == 0 and ngpu == 0, use cpu.")
    parser.add_argument(
        "--enable-auto-log", action="store_true", help="use auto log")
    args = parser.parse_args()
...
@@ -31,12 +31,6 @@ def main(config, args):
if __name__ == "__main__":
    parser = default_argument_parser()
-    parser.add_argument(
-        '--nxpu',
-        type=int,
-        default=0,
-        choices=[0, 1],
-        help="if nxpu == 0 and ngpu == 0, use cpu.")
    args = parser.parse_args()
    print_arguments(args, globals())
...
@@ -16,7 +16,6 @@ import random
import numpy as np
from PIL import Image
-from PIL.Image import BICUBIC
from paddlespeech.s2t.frontend.augmentor.base import AugmentorBase
from paddlespeech.s2t.utils.log import Log
@@ -164,9 +163,9 @@ class SpecAugmentor(AugmentorBase):
                  window) + 1  # 1 ... t - 1
        left = Image.fromarray(x[:center]).resize((x.shape[1], warped),
-                                                  BICUBIC)
+                                                  Image.BICUBIC)
        right = Image.fromarray(x[center:]).resize((x.shape[1], t - warped),
-                                                   BICUBIC)
+                                                   Image.BICUBIC)
        if self.inplace:
            x[:warped] = left
            x[warped:] = right
...
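The SpecAugmentor hunk swaps the module-level `BICUBIC` import for attribute access via `Image.BICUBIC`. For context, a minimal sketch of the resize-based time warp it belongs to, assuming a float spectrogram `x` of shape `(t, freq)` and precomputed `center`/`warped` positions:

```python
import numpy as np
from PIL import Image

def time_warp_sketch(x: np.ndarray, center: int, warped: int) -> np.ndarray:
    """Stretch x[:center] to `warped` frames and squeeze the remainder."""
    t = x.shape[0]
    left = Image.fromarray(x[:center]).resize((x.shape[1], warped),
                                              Image.BICUBIC)
    right = Image.fromarray(x[center:]).resize((x.shape[1], t - warped),
                                               Image.BICUBIC)
    # back to a (t, freq) array with the warp applied
    return np.concatenate((np.array(left), np.array(right)), axis=0)
```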
@@ -226,10 +226,10 @@ class TextFeaturizer():
        sos_id = vocab_list.index(SOS) if SOS in vocab_list else -1
        space_id = vocab_list.index(SPACE) if SPACE in vocab_list else -1
-        logger.info(f"BLANK id: {blank_id}")
-        logger.info(f"UNK id: {unk_id}")
-        logger.info(f"EOS id: {eos_id}")
-        logger.info(f"SOS id: {sos_id}")
-        logger.info(f"SPACE id: {space_id}")
-        logger.info(f"MASKCTC id: {maskctc_id}")
+        logger.debug(f"BLANK id: {blank_id}")
+        logger.debug(f"UNK id: {unk_id}")
+        logger.debug(f"EOS id: {eos_id}")
+        logger.debug(f"SOS id: {sos_id}")
+        logger.debug(f"SPACE id: {space_id}")
+        logger.debug(f"MASKCTC id: {maskctc_id}")
        return token2id, id2token, vocab_list, unk_id, eos_id, blank_id
@@ -827,7 +827,7 @@ class U2Model(U2DecodeModel):
        # encoder
        encoder_type = configs.get('encoder', 'transformer')
-        logger.info(f"U2 Encoder type: {encoder_type}")
+        logger.debug(f"U2 Encoder type: {encoder_type}")
        if encoder_type == 'transformer':
            encoder = TransformerEncoder(
                input_dim, global_cmvn=global_cmvn, **configs['encoder_conf'])
@@ -894,7 +894,7 @@ class U2Model(U2DecodeModel):
        if checkpoint_path:
            infos = checkpoint.Checkpoint().load_parameters(
                model, checkpoint_path=checkpoint_path)
-            logger.info(f"checkpoint info: {infos}")
+            logger.debug(f"checkpoint info: {infos}")
        layer_tools.summary(model)
        return model
...
@@ -37,9 +37,9 @@ class CTCLoss(nn.Layer):
        self.loss = nn.CTCLoss(blank=blank, reduction=reduction)
        self.batch_average = batch_average
-        logger.info(
+        logger.debug(
            f"CTCLoss Loss reduction: {reduction}, div-bs: {batch_average}")
-        logger.info(f"CTCLoss Grad Norm Type: {grad_norm_type}")
+        logger.debug(f"CTCLoss Grad Norm Type: {grad_norm_type}")
        assert grad_norm_type in ('instance', 'batch', 'frame', None)
        self.norm_by_times = False
@@ -70,7 +70,8 @@ class CTCLoss(nn.Layer):
        param = {}
        self._kwargs = {k: v for k, v in kwargs.items() if k in param}
        _notin = {k: v for k, v in kwargs.items() if k not in param}
-        logger.info(f"{self.loss} kwargs:{self._kwargs}, not support: {_notin}")
+        logger.debug(
+            f"{self.loss} kwargs:{self._kwargs}, not support: {_notin}")
    def forward(self, logits, ys_pad, hlens, ys_lens):
        """Compute CTC loss.
...
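For reference, a minimal sketch of how the wrapped `paddle.nn.CTCLoss` is typically driven, with `reduction='sum'` plus a manual division by batch size mirroring the `div-bs` (`batch_average`) option logged above; all shapes are illustrative:

```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

# Illustrative shapes: T=50 frames, B=2 utterances, 10-symbol vocab.
loss_fn = nn.CTCLoss(blank=0, reduction='sum')
log_probs = F.log_softmax(paddle.randn([50, 2, 10]), axis=-1)  # (T, B, V)
labels = paddle.randint(1, 10, [2, 12], dtype='int32')         # (B, L)
hlens = paddle.to_tensor([50, 50], dtype='int64')   # logit lengths
ys_lens = paddle.to_tensor([12, 10], dtype='int64') # label lengths
loss = loss_fn(log_probs, labels, hlens, ys_lens)
loss = loss / log_probs.shape[1]  # batch_average: divide by batch size
```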
@@ -82,6 +82,12 @@ def default_argument_parser(parser=None):
        type=int,
        default=1,
        help="number of parallel processes. 0 for cpu.")
+    train_group.add_argument(
+        '--nxpu',
+        type=int,
+        default=0,
+        choices=[0, 1],
+        help="if nxpu == 0 and ngpu == 0, use cpu.")
    train_group.add_argument(
        "--config", metavar="CONFIG_FILE", help="config file.")
    train_group.add_argument(
...
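Instead of each entry script declaring `--nxpu` itself (the deleted hunks above), the flag now lives once in `default_argument_parser`. A sketch of the device-selection rule implied by the help string ("if nxpu == 0 and ngpu == 0, use cpu"); the function name is hypothetical, not part of the codebase:

```python
def choose_device(ngpu: int, nxpu: int) -> str:
    """Map the parsed flags onto a paddle device string (sketch)."""
    if ngpu > 0:
        return "gpu"
    if nxpu > 0:
        return "xpu"
    return "cpu"

assert choose_device(ngpu=0, nxpu=0) == "cpu"
assert choose_device(ngpu=0, nxpu=1) == "xpu"
```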
@@ -94,7 +94,7 @@ def pad_sequence(sequences: List[paddle.Tensor],
    for i, tensor in enumerate(sequences):
        length = tensor.shape[0]
        # use index notation to prevent duplicate references to the tensor
-        logger.info(
+        logger.debug(
            f"length {length}, out_tensor {out_tensor.shape}, tensor {tensor.shape}"
        )
        if batch_first:
...
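The loop above copies each sequence into a preallocated `out_tensor` by index. A self-contained sketch of the same `batch_first` padding pattern, simplified relative to the real `pad_sequence` signature:

```python
import paddle

def pad_batch_first(sequences, padding_value=0.0):
    """Pad tensors to the longest length along dim 0 (sketch)."""
    max_len = max(seq.shape[0] for seq in sequences)
    out_shape = [len(sequences), max_len] + list(sequences[0].shape[1:])
    out_tensor = paddle.full(out_shape, padding_value,
                             dtype=sequences[0].dtype)
    for i, tensor in enumerate(sequences):
        length = tensor.shape[0]
        # index notation avoids duplicate references to the same tensor
        out_tensor[i, :length] = tensor
    return out_tensor

batch = pad_batch_first([paddle.ones([3, 2]), paddle.ones([5, 2])])
print(batch.shape)  # [2, 5, 2]
```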
@@ -123,7 +123,6 @@ class TTSClientExecutor(BaseExecutor):
            time_end = time.time()
            time_consume = time_end - time_start
            response_dict = res.json()
-            logger.info(response_dict["message"])
            logger.info("Save synthesized audio successfully on %s." % (output))
            logger.info("Audio duration: %f s." %
                        (response_dict['result']['duration']))
@@ -702,7 +701,6 @@ class VectorClientExecutor(BaseExecutor):
                test_audio=args.test,
                task=task)
            time_end = time.time()
-            logger.info(f"The vector: {res}")
            logger.info("Response time %f s." % (time_end - time_start))
            return True
        except Exception as e:
...
@@ -30,7 +30,7 @@ class ACSEngine(BaseEngine):
        """The ACSEngine Engine
        """
        super(ACSEngine, self).__init__()
-        logger.info("Create the ACSEngine Instance")
+        logger.debug("Create the ACSEngine Instance")
        self.word_list = []
    def init(self, config: dict):
@@ -42,7 +42,7 @@ class ACSEngine(BaseEngine):
        Returns:
            bool: The engine instance flag
        """
-        logger.info("Init the acs engine")
+        logger.debug("Init the acs engine")
        try:
            self.config = config
            self.device = self.config.get("device", paddle.get_device())
@@ -50,7 +50,7 @@ class ACSEngine(BaseEngine):
            # websocket default ping timeout is 20 seconds
            self.ping_timeout = self.config.get("ping_timeout", 20)
            paddle.set_device(self.device)
-            logger.info(f"ACS Engine set the device: {self.device}")
+            logger.debug(f"ACS Engine set the device: {self.device}")
        except BaseException as e:
            logger.error(
@@ -66,7 +66,9 @@ class ACSEngine(BaseEngine):
        self.url = "ws://" + self.config.asr_server_ip + ":" + str(
            self.config.asr_server_port) + "/paddlespeech/asr/streaming"
-        logger.info("Init the acs engine successfully")
+        logger.info("Initialize acs server engine successfully on device: %s." %
+                    (self.device))
        return True
    def read_search_words(self):
@@ -95,12 +97,12 @@ class ACSEngine(BaseEngine):
        Returns:
            _type_: _description_
        """
-        logger.info("send a message to the server")
+        logger.debug("send a message to the server")
        if self.url is None:
            logger.error("No asr server, please input valid ip and port")
            return ""
        ws = websocket.WebSocket()
-        logger.info(f"set the ping timeout: {self.ping_timeout} seconds")
+        logger.debug(f"set the ping timeout: {self.ping_timeout} seconds")
        ws.connect(self.url, ping_timeout=self.ping_timeout)
        audio_info = json.dumps(
            {
@@ -123,7 +125,7 @@ class ACSEngine(BaseEngine):
        logger.info(f"audio result: {msg}")
        # 3. send chunk audio data to engine
-        logger.info("send the end signal")
+        logger.debug("send the end signal")
        audio_info = json.dumps(
            {
                "name": "test.wav",
@@ -197,7 +199,7 @@ class ACSEngine(BaseEngine):
                start = max(time_stamp[m.start(0)]['bg'] - offset, 0)
                end = min(time_stamp[m.end(0) - 1]['ed'] + offset, max_ed)
-                logger.info(f'start: {start}, end: {end}')
+                logger.debug(f'start: {start}, end: {end}')
                acs_result.append({'w': w, 'bg': start, 'ed': end})
        return acs_result, asr_result
@@ -212,7 +214,7 @@ class ACSEngine(BaseEngine):
        Returns:
            acs_result, asr_result: the acs result and the asr result
        """
-        logger.info("start to process the audio content search")
+        logger.debug("start to process the audio content search")
        msg = self.get_asr_content(io.BytesIO(audio_data))
        acs_result, asr_result = self.get_macthed_word(msg)
...
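For context, a hedged sketch of the handshake `get_asr_content` performs against the streaming endpoint. The URL, JSON field names, and values here are illustrative of the hunks above, not a pinned protocol specification:

```python
import json
import websocket  # websocket-client, as used by the engine above

ws = websocket.WebSocket()
# URL shape follows the engine: ws://{asr_server_ip}:{asr_server_port}/...
ws.connect("ws://127.0.0.1:8090/paddlespeech/asr/streaming", ping_timeout=20)
# start signal, then pcm chunks, then an end signal (fields illustrative)
ws.send(json.dumps({"name": "test.wav", "signal": "start"}))
# ... stream binary chunks with ws.send_binary(chunk) ...
ws.send(json.dumps({"name": "test.wav", "signal": "end"}))
msg = ws.recv()  # final transcription result
ws.close()
```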
@@ -44,7 +44,7 @@ class PaddleASRConnectionHanddler:
            asr_engine (ASREngine): the global asr engine
        """
        super().__init__()
-        logger.info(
+        logger.debug(
            "create an paddle asr connection handler to process the websocket connection"
        )
        self.config = asr_engine.config  # server config
@@ -152,12 +152,12 @@ class PaddleASRConnectionHanddler:
        self.output_reset()
    def extract_feat(self, samples: ByteString):
-        logger.info("Online ASR extract the feat")
+        logger.debug("Online ASR extract the feat")
        samples = np.frombuffer(samples, dtype=np.int16)
        assert samples.ndim == 1
        self.num_samples += samples.shape[0]
-        logger.info(
+        logger.debug(
            f"This package receive {samples.shape[0]} pcm data. Global samples:{self.num_samples}"
        )
@@ -168,7 +168,7 @@ class PaddleASRConnectionHanddler:
        else:
            assert self.remained_wav.ndim == 1  # (T,)
            self.remained_wav = np.concatenate([self.remained_wav, samples])
-        logger.info(
+        logger.debug(
            f"The concatenation of remain and now audio samples length is: {self.remained_wav.shape}"
        )
@@ -202,14 +202,14 @@ class PaddleASRConnectionHanddler:
        # update remained wav
        self.remained_wav = self.remained_wav[self.n_shift * num_frames:]
-        logger.info(
+        logger.debug(
            f"process the audio feature success, the cached feat shape: {self.cached_feat.shape}"
        )
-        logger.info(
+        logger.debug(
            f"After extract feat, the cached remain the audio samples: {self.remained_wav.shape}"
        )
-        logger.info(f"global samples: {self.num_samples}")
-        logger.info(f"global frames: {self.num_frames}")
+        logger.debug(f"global samples: {self.num_samples}")
+        logger.debug(f"global frames: {self.num_frames}")
    def decode(self, is_finished=False):
        """advance decoding
@@ -237,7 +237,7 @@ class PaddleASRConnectionHanddler:
            return
        num_frames = self.cached_feat.shape[1]
-        logger.info(
+        logger.debug(
            f"Required decoding window {decoding_window} frames, and the connection has {num_frames} frames"
        )
@@ -355,7 +355,7 @@ class ASRServerExecutor(ASRExecutor):
            lm_url = self.task_resource.res_dict['lm_url']
            lm_md5 = self.task_resource.res_dict['lm_md5']
-            logger.info(f"Start to load language model {lm_url}")
+            logger.debug(f"Start to load language model {lm_url}")
            self.download_lm(
                lm_url,
                os.path.dirname(self.config.decode.lang_model_path), lm_md5)
@@ -367,7 +367,7 @@ class ASRServerExecutor(ASRExecutor):
        if "deepspeech2" in self.model_type:
            # AM predictor
-            logger.info("ASR engine start to init the am predictor")
+            logger.debug("ASR engine start to init the am predictor")
            self.am_predictor = onnx_infer.get_sess(
                model_path=self.am_model, sess_conf=self.am_predictor_conf)
        else:
@@ -400,7 +400,7 @@ class ASRServerExecutor(ASRExecutor):
        self.num_decoding_left_chunks = num_decoding_left_chunks
        # conf for paddleinference predictor or onnx
        self.am_predictor_conf = am_predictor_conf
-        logger.info(f"model_type: {self.model_type}")
+        logger.debug(f"model_type: {self.model_type}")
        sample_rate_str = '16k' if sample_rate == 16000 else '8k'
        tag = model_type + '-' + lang + '-' + sample_rate_str
@@ -422,12 +422,11 @@ class ASRServerExecutor(ASRExecutor):
        # self.res_path, self.task_resource.res_dict[
        # 'params']) if am_params is None else os.path.abspath(am_params)
-        logger.info("Load the pretrained model:")
-        logger.info(f"  tag = {tag}")
-        logger.info(f"  res_path: {self.res_path}")
-        logger.info(f"  cfg path: {self.cfg_path}")
-        logger.info(f"  am_model path: {self.am_model}")
-        # logger.info(f"  am_params path: {self.am_params}")
+        logger.debug("Load the pretrained model:")
+        logger.debug(f"  tag = {tag}")
+        logger.debug(f"  res_path: {self.res_path}")
+        logger.debug(f"  cfg path: {self.cfg_path}")
+        logger.debug(f"  am_model path: {self.am_model}")
        #Init body.
        self.config = CfgNode(new_allowed=True)
@@ -436,7 +435,7 @@ class ASRServerExecutor(ASRExecutor):
        if self.config.spm_model_prefix:
            self.config.spm_model_prefix = os.path.join(
                self.res_path, self.config.spm_model_prefix)
-            logger.info(f"spm model path: {self.config.spm_model_prefix}")
+            logger.debug(f"spm model path: {self.config.spm_model_prefix}")
        self.vocab = self.config.vocab_filepath
@@ -450,7 +449,7 @@ class ASRServerExecutor(ASRExecutor):
        # AM predictor
        self.init_model()
-        logger.info(f"create the {model_type} model success")
+        logger.debug(f"create the {model_type} model success")
        return True
@@ -501,7 +500,7 @@ class ASREngine(BaseEngine):
                "If all GPU or XPU is used, you can set the server to 'cpu'")
            sys.exit(-1)
-        logger.info(f"paddlespeech_server set the device: {self.device}")
+        logger.debug(f"paddlespeech_server set the device: {self.device}")
        if not self.init_model():
            logger.error(
@@ -509,7 +508,8 @@ class ASREngine(BaseEngine):
            )
            return False
-        logger.info("Initialize ASR server engine successfully.")
+        logger.info("Initialize ASR server engine successfully on device: %s." %
+                    (self.device))
        return True
    def new_handler(self):
...
@@ -44,7 +44,7 @@ class PaddleASRConnectionHanddler:
            asr_engine (ASREngine): the global asr engine
        """
        super().__init__()
-        logger.info(
+        logger.debug(
            "create an paddle asr connection handler to process the websocket connection"
        )
        self.config = asr_engine.config  # server config
@@ -157,7 +157,7 @@ class PaddleASRConnectionHanddler:
        assert samples.ndim == 1
        self.num_samples += samples.shape[0]
-        logger.info(
+        logger.debug(
            f"This package receive {samples.shape[0]} pcm data. Global samples:{self.num_samples}"
        )
@@ -168,7 +168,7 @@ class PaddleASRConnectionHanddler:
        else:
            assert self.remained_wav.ndim == 1  # (T,)
            self.remained_wav = np.concatenate([self.remained_wav, samples])
-        logger.info(
+        logger.debug(
            f"The concatenation of remain and now audio samples length is: {self.remained_wav.shape}"
        )
@@ -202,14 +202,14 @@ class PaddleASRConnectionHanddler:
        # update remained wav
        self.remained_wav = self.remained_wav[self.n_shift * num_frames:]
-        logger.info(
+        logger.debug(
            f"process the audio feature success, the cached feat shape: {self.cached_feat.shape}"
        )
-        logger.info(
+        logger.debug(
            f"After extract feat, the cached remain the audio samples: {self.remained_wav.shape}"
        )
-        logger.info(f"global samples: {self.num_samples}")
-        logger.info(f"global frames: {self.num_frames}")
+        logger.debug(f"global samples: {self.num_samples}")
+        logger.debug(f"global frames: {self.num_frames}")
    def decode(self, is_finished=False):
        """advance decoding
@@ -237,13 +237,13 @@ class PaddleASRConnectionHanddler:
            return
        num_frames = self.cached_feat.shape[1]
-        logger.info(
+        logger.debug(
            f"Required decoding window {decoding_window} frames, and the connection has {num_frames} frames"
        )
        # the cached feat must be larger decoding_window
        if num_frames < decoding_window and not is_finished:
-            logger.info(
+            logger.debug(
                f"frame feat num is less than {decoding_window}, please input more pcm data"
            )
            return None, None
@@ -294,7 +294,7 @@ class PaddleASRConnectionHanddler:
        Returns:
            logprob: poster probability.
        """
-        logger.info("start to decoce one chunk for deepspeech2")
+        logger.debug("start to decoce one chunk for deepspeech2")
        input_names = self.am_predictor.get_input_names()
        audio_handle = self.am_predictor.get_input_handle(input_names[0])
        audio_len_handle = self.am_predictor.get_input_handle(input_names[1])
@@ -369,7 +369,7 @@ class ASRServerExecutor(ASRExecutor):
            lm_url = self.task_resource.res_dict['lm_url']
            lm_md5 = self.task_resource.res_dict['lm_md5']
-            logger.info(f"Start to load language model {lm_url}")
+            logger.debug(f"Start to load language model {lm_url}")
            self.download_lm(
                lm_url,
                os.path.dirname(self.config.decode.lang_model_path), lm_md5)
@@ -381,7 +381,7 @@ class ASRServerExecutor(ASRExecutor):
        if "deepspeech2" in self.model_type:
            # AM predictor
-            logger.info("ASR engine start to init the am predictor")
+            logger.debug("ASR engine start to init the am predictor")
            self.am_predictor = init_predictor(
                model_file=self.am_model,
                params_file=self.am_params,
@@ -415,7 +415,7 @@ class ASRServerExecutor(ASRExecutor):
        self.num_decoding_left_chunks = num_decoding_left_chunks
        # conf for paddleinference predictor or onnx
        self.am_predictor_conf = am_predictor_conf
-        logger.info(f"model_type: {self.model_type}")
+        logger.debug(f"model_type: {self.model_type}")
        sample_rate_str = '16k' if sample_rate == 16000 else '8k'
        tag = model_type + '-' + lang + '-' + sample_rate_str
@@ -437,12 +437,12 @@ class ASRServerExecutor(ASRExecutor):
            self.res_path = os.path.dirname(
                os.path.dirname(os.path.abspath(self.cfg_path)))
-        logger.info("Load the pretrained model:")
-        logger.info(f"  tag = {tag}")
-        logger.info(f"  res_path: {self.res_path}")
-        logger.info(f"  cfg path: {self.cfg_path}")
-        logger.info(f"  am_model path: {self.am_model}")
-        logger.info(f"  am_params path: {self.am_params}")
+        logger.debug("Load the pretrained model:")
+        logger.debug(f"  tag = {tag}")
+        logger.debug(f"  res_path: {self.res_path}")
+        logger.debug(f"  cfg path: {self.cfg_path}")
+        logger.debug(f"  am_model path: {self.am_model}")
+        logger.debug(f"  am_params path: {self.am_params}")
        #Init body.
        self.config = CfgNode(new_allowed=True)
@@ -451,7 +451,7 @@ class ASRServerExecutor(ASRExecutor):
        if self.config.spm_model_prefix:
            self.config.spm_model_prefix = os.path.join(
                self.res_path, self.config.spm_model_prefix)
-            logger.info(f"spm model path: {self.config.spm_model_prefix}")
+            logger.debug(f"spm model path: {self.config.spm_model_prefix}")
        self.vocab = self.config.vocab_filepath
@@ -465,7 +465,7 @@ class ASRServerExecutor(ASRExecutor):
        # AM predictor
        self.init_model()
-        logger.info(f"create the {model_type} model success")
+        logger.debug(f"create the {model_type} model success")
        return True
@@ -516,7 +516,7 @@ class ASREngine(BaseEngine):
                "If all GPU or XPU is used, you can set the server to 'cpu'")
            sys.exit(-1)
-        logger.info(f"paddlespeech_server set the device: {self.device}")
+        logger.debug(f"paddlespeech_server set the device: {self.device}")
        if not self.init_model():
            logger.error(
@@ -524,7 +524,9 @@ class ASREngine(BaseEngine):
            )
            return False
-        logger.info("Initialize ASR server engine successfully.")
+        logger.info("Initialize ASR server engine successfully on device: %s." %
+                    (self.device))
        return True
    def new_handler(self):
...
@@ -49,7 +49,7 @@ class PaddleASRConnectionHanddler:
            asr_engine (ASREngine): the global asr engine
        """
        super().__init__()
-        logger.info(
+        logger.debug(
            "create an paddle asr connection handler to process the websocket connection"
        )
        self.config = asr_engine.config  # server config
@@ -107,7 +107,7 @@ class PaddleASRConnectionHanddler:
        # acoustic model
        self.model = self.asr_engine.executor.model
        self.continuous_decoding = self.config.continuous_decoding
-        logger.info(f"continue decoding: {self.continuous_decoding}")
+        logger.debug(f"continue decoding: {self.continuous_decoding}")
        # ctc decoding config
        self.ctc_decode_config = self.asr_engine.executor.config.decode
@@ -207,7 +207,7 @@ class PaddleASRConnectionHanddler:
        assert samples.ndim == 1
        self.num_samples += samples.shape[0]
-        logger.info(
+        logger.debug(
            f"This package receive {samples.shape[0]} pcm data. Global samples:{self.num_samples}"
        )
@@ -218,7 +218,7 @@ class PaddleASRConnectionHanddler:
        else:
            assert self.remained_wav.ndim == 1  # (T,)
            self.remained_wav = np.concatenate([self.remained_wav, samples])
-        logger.info(
+        logger.debug(
            f"The concatenation of remain and now audio samples length is: {self.remained_wav.shape}"
        )
@@ -252,14 +252,14 @@ class PaddleASRConnectionHanddler:
        # update remained wav
        self.remained_wav = self.remained_wav[self.n_shift * num_frames:]
-        logger.info(
+        logger.debug(
            f"process the audio feature success, the cached feat shape: {self.cached_feat.shape}"
        )
-        logger.info(
+        logger.debug(
            f"After extract feat, the cached remain the audio samples: {self.remained_wav.shape}"
        )
-        logger.info(f"global samples: {self.num_samples}")
-        logger.info(f"global frames: {self.num_frames}")
+        logger.debug(f"global samples: {self.num_samples}")
+        logger.debug(f"global frames: {self.num_frames}")
    def decode(self, is_finished=False):
        """advance decoding
@@ -283,24 +283,24 @@ class PaddleASRConnectionHanddler:
        stride = subsampling * decoding_chunk_size
        if self.cached_feat is None:
-            logger.info("no audio feat, please input more pcm data")
+            logger.debug("no audio feat, please input more pcm data")
            return
        num_frames = self.cached_feat.shape[1]
-        logger.info(
+        logger.debug(
            f"Required decoding window {decoding_window} frames, and the connection has {num_frames} frames"
        )
        # the cached feat must be larger decoding_window
        if num_frames < decoding_window and not is_finished:
-            logger.info(
+            logger.debug(
                f"frame feat num is less than {decoding_window}, please input more pcm data"
            )
            return None, None
        # if is_finished=True, we need at least context frames
        if num_frames < context:
-            logger.info(
+            logger.debug(
                "flast {num_frames} is less than context {context} frames, and we cannot do model forward"
            )
            return None, None
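The two guards above compare `num_frames` against `decoding_window` (a full chunk) and `context` (the minimum for any forward pass). Under the usual U2 streaming setup the quantities relate roughly as sketched below; the numeric values are illustrative, and the exact formula lives in the encoder's subsampling module:

```python
# Illustrative chunk bookkeeping for streaming decode (values assumed).
subsampling = 4            # frame-rate reduction of the conv frontend
context = 7                # receptive field needed for one output frame
decoding_chunk_size = 16   # encoder output frames per chunk

stride = subsampling * decoding_chunk_size                           # 64
decoding_window = (decoding_chunk_size - 1) * subsampling + context  # 67

num_frames = 80
can_decode_chunk = num_frames >= decoding_window  # a full chunk is ready
can_flush_tail = num_frames >= context            # enough to finish up
```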
@@ -354,7 +354,7 @@ class PaddleASRConnectionHanddler:
        Returns:
            logprob: poster probability.
        """
-        logger.info("start to decoce one chunk for deepspeech2")
+        logger.debug("start to decoce one chunk for deepspeech2")
        input_names = self.am_predictor.get_input_names()
        audio_handle = self.am_predictor.get_input_handle(input_names[0])
        audio_len_handle = self.am_predictor.get_input_handle(input_names[1])
@@ -391,7 +391,7 @@ class PaddleASRConnectionHanddler:
        self.decoder.next(output_chunk_probs, output_chunk_lens)
        trans_best, trans_beam = self.decoder.decode()
-        logger.info(f"decode one best result for deepspeech2: {trans_best[0]}")
+        logger.debug(f"decode one best result for deepspeech2: {trans_best[0]}")
        return trans_best[0]
    @paddle.no_grad()
@@ -402,7 +402,7 @@ class PaddleASRConnectionHanddler:
        # reset endpiont state
        self.endpoint_state = False
-        logger.info(
+        logger.debug(
            "Conformer/Transformer: start to decode with advanced_decoding method"
        )
        cfg = self.ctc_decode_config
@@ -427,25 +427,25 @@ class PaddleASRConnectionHanddler:
        stride = subsampling * decoding_chunk_size
        if self.cached_feat is None:
-            logger.info("no audio feat, please input more pcm data")
+            logger.debug("no audio feat, please input more pcm data")
            return
        # (B=1,T,D)
        num_frames = self.cached_feat.shape[1]
-        logger.info(
+        logger.debug(
            f"Required decoding window {decoding_window} frames, and the connection has {num_frames} frames"
        )
        # the cached feat must be larger decoding_window
        if num_frames < decoding_window and not is_finished:
-            logger.info(
+            logger.debug(
                f"frame feat num is less than {decoding_window}, please input more pcm data"
            )
            return None, None
        # if is_finished=True, we need at least context frames
        if num_frames < context:
-            logger.info(
+            logger.debug(
                "flast {num_frames} is less than context {context} frames, and we cannot do model forward"
            )
            return None, None
@@ -489,7 +489,7 @@ class PaddleASRConnectionHanddler:
            self.encoder_out = ys
        else:
            self.encoder_out = paddle.concat([self.encoder_out, ys], axis=1)
-        logger.info(
+        logger.debug(
            f"This connection handler encoder out shape: {self.encoder_out.shape}"
        )
@@ -513,7 +513,8 @@ class PaddleASRConnectionHanddler:
        if self.endpointer.endpoint_detected(ctc_probs.numpy(),
                                             decoding_something):
            self.endpoint_state = True
-            logger.info(f"Endpoint is detected at {self.num_frames} frame.")
+            logger.debug(
+                f"Endpoint is detected at {self.num_frames} frame.")
        # advance cache of feat
        assert self.cached_feat.shape[0] == 1  #(B=1,T,D)
@@ -526,7 +527,7 @@ class PaddleASRConnectionHanddler:
    def update_result(self):
        """Conformer/Transformer hyps to result.
        """
-        logger.info("update the final result")
+        logger.debug("update the final result")
        hyps = self.hyps
        # output results and tokenids
@@ -560,16 +561,16 @@ class PaddleASRConnectionHanddler:
        only for conformer and transformer model.
        """
        if "deepspeech2" in self.model_type:
-            logger.info("deepspeech2 not support rescoring decoding.")
+            logger.debug("deepspeech2 not support rescoring decoding.")
            return
        if "attention_rescoring" != self.ctc_decode_config.decoding_method:
-            logger.info(
+            logger.debug(
                f"decoding method not match: {self.ctc_decode_config.decoding_method}, need attention_rescoring"
            )
            return
-        logger.info("rescoring the final result")
+        logger.debug("rescoring the final result")
        # last decoding for last audio
        self.searcher.finalize_search()
@@ -685,7 +686,6 @@ class PaddleASRConnectionHanddler:
                "bg": global_offset_in_sec + start,
                "ed": global_offset_in_sec + end
            })
-            # logger.info(f"{word_time_stamp[-1]}")
        self.word_time_stamp = word_time_stamp
        logger.info(f"word time stamp: {self.word_time_stamp}")
@@ -707,13 +707,13 @@ class ASRServerExecutor(ASRExecutor):
            lm_url = self.task_resource.res_dict['lm_url']
            lm_md5 = self.task_resource.res_dict['lm_md5']
-            logger.info(f"Start to load language model {lm_url}")
+            logger.debug(f"Start to load language model {lm_url}")
            self.download_lm(
                lm_url,
                os.path.dirname(self.config.decode.lang_model_path), lm_md5)
        elif "conformer" in self.model_type or "transformer" in self.model_type:
            with UpdateConfig(self.config):
-                logger.info("start to create the stream conformer asr engine")
+                logger.debug("start to create the stream conformer asr engine")
                # update the decoding method
                if self.decode_method:
                    self.config.decode.decoding_method = self.decode_method
@@ -726,7 +726,7 @@ class ASRServerExecutor(ASRExecutor):
                if self.config.decode.decoding_method not in [
                        "ctc_prefix_beam_search", "attention_rescoring"
                ]:
-                    logger.info(
+                    logger.debug(
                        "we set the decoding_method to attention_rescoring")
                    self.config.decode.decoding_method = "attention_rescoring"
@@ -739,7 +739,7 @@ class ASRServerExecutor(ASRExecutor):
    def init_model(self) -> None:
        if "deepspeech2" in self.model_type:
            # AM predictor
-            logger.info("ASR engine start to init the am predictor")
+            logger.debug("ASR engine start to init the am predictor")
            self.am_predictor = init_predictor(
                model_file=self.am_model,
                params_file=self.am_params,
@@ -748,7 +748,7 @@ class ASRServerExecutor(ASRExecutor):
            # load model
            # model_type: {model_name}_{dataset}
            model_name = self.model_type[:self.model_type.rindex('_')]
-            logger.info(f"model name: {model_name}")
+            logger.debug(f"model name: {model_name}")
            model_class = self.task_resource.get_model_class(model_name)
            model = model_class.from_config(self.config)
            self.model = model
@@ -782,7 +782,7 @@ class ASRServerExecutor(ASRExecutor):
        self.num_decoding_left_chunks = num_decoding_left_chunks
        # conf for paddleinference predictor or onnx
        self.am_predictor_conf = am_predictor_conf
-        logger.info(f"model_type: {self.model_type}")
+        logger.debug(f"model_type: {self.model_type}")
        sample_rate_str = '16k' if sample_rate == 16000 else '8k'
        tag = model_type + '-' + lang + '-' + sample_rate_str
@@ -804,12 +804,12 @@ class ASRServerExecutor(ASRExecutor):
            self.res_path = os.path.dirname(
                os.path.dirname(os.path.abspath(self.cfg_path)))
-        logger.info("Load the pretrained model:")
-        logger.info(f"  tag = {tag}")
-        logger.info(f"  res_path: {self.res_path}")
-        logger.info(f"  cfg path: {self.cfg_path}")
-        logger.info(f"  am_model path: {self.am_model}")
-        logger.info(f"  am_params path: {self.am_params}")
+        logger.debug("Load the pretrained model:")
+        logger.debug(f"  tag = {tag}")
+        logger.debug(f"  res_path: {self.res_path}")
+        logger.debug(f"  cfg path: {self.cfg_path}")
+        logger.debug(f"  am_model path: {self.am_model}")
+        logger.debug(f"  am_params path: {self.am_params}")
        #Init body.
        self.config = CfgNode(new_allowed=True)
@@ -818,7 +818,7 @@ class ASRServerExecutor(ASRExecutor):
        if self.config.spm_model_prefix:
            self.config.spm_model_prefix = os.path.join(
                self.res_path, self.config.spm_model_prefix)
-            logger.info(f"spm model path: {self.config.spm_model_prefix}")
+            logger.debug(f"spm model path: {self.config.spm_model_prefix}")
        self.vocab = self.config.vocab_filepath
@@ -832,7 +832,7 @@ class ASRServerExecutor(ASRExecutor):
        # AM predictor
        self.init_model()
-        logger.info(f"create the {model_type} model success")
+        logger.debug(f"create the {model_type} model success")
        return True
@@ -883,7 +883,7 @@ class ASREngine(BaseEngine):
                "If all GPU or XPU is used, you can set the server to 'cpu'")
            sys.exit(-1)
-        logger.info(f"paddlespeech_server set the device: {self.device}")
+        logger.debug(f"paddlespeech_server set the device: {self.device}")
        if not self.init_model():
            logger.error(
@@ -891,7 +891,9 @@ class ASREngine(BaseEngine):
            )
            return False
-        logger.info("Initialize ASR server engine successfully.")
+        logger.info("Initialize ASR server engine successfully on device: %s." %
+                    (self.device))
        return True
    def new_handler(self):
...
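One behavioral detail worth noting from the hunks above: when the configured decoding method is unsupported for streaming, the executor silently falls back to `attention_rescoring`. The guard reduces to a small sketch (function name hypothetical):

```python
def normalize_decoding_method(method: str) -> str:
    """Fall back to attention_rescoring for unsupported methods (sketch)."""
    if method not in ("ctc_prefix_beam_search", "attention_rescoring"):
        return "attention_rescoring"
    return method
```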
@@ -65,10 +65,10 @@ class ASRServerExecutor(ASRExecutor):
                 self.task_resource.res_dict['model'])
             self.am_params = os.path.join(self.res_path,
                 self.task_resource.res_dict['params'])
-            logger.info(self.res_path)
-            logger.info(self.cfg_path)
-            logger.info(self.am_model)
-            logger.info(self.am_params)
+            logger.debug(self.res_path)
+            logger.debug(self.cfg_path)
+            logger.debug(self.am_model)
+            logger.debug(self.am_params)
         else:
             self.cfg_path = os.path.abspath(cfg_path)
             self.am_model = os.path.abspath(am_model)
@@ -236,16 +236,16 @@ class PaddleASRConnectionHandler(ASRServerExecutor):
         if self._check(
                 io.BytesIO(audio_data), self.asr_engine.config.sample_rate,
                 self.asr_engine.config.force_yes):
-            logger.info("start running asr engine")
+            logger.debug("start running asr engine")
             self.preprocess(self.asr_engine.config.model_type,
                             io.BytesIO(audio_data))
             st = time.time()
             self.infer(self.asr_engine.config.model_type)
             infer_time = time.time() - st
             self.output = self.postprocess()  # Retrieve result of asr.
-            logger.info("end inferring asr engine")
+            logger.debug("end inferring asr engine")
         else:
-            logger.info("file check failed!")
+            logger.error("file check failed!")
             self.output = None
         logger.info("inference time: {}".format(infer_time))
......
@@ -104,7 +104,7 @@ class PaddleASRConnectionHandler(ASRServerExecutor):
         if self._check(
                 io.BytesIO(audio_data), self.asr_engine.config.sample_rate,
                 self.asr_engine.config.force_yes):
-            logger.info("start run asr engine")
+            logger.debug("start run asr engine")
             self.preprocess(self.asr_engine.config.model,
                             io.BytesIO(audio_data))
             st = time.time()
@@ -112,7 +112,7 @@ class PaddleASRConnectionHandler(ASRServerExecutor):
             infer_time = time.time() - st
             self.output = self.postprocess()  # Retrieve result of asr.
         else:
-            logger.info("file check failed!")
+            logger.error("file check failed!")
             self.output = None
         logger.info("inference time: {}".format(infer_time))
......
@@ -67,22 +67,22 @@ class CLSServerExecutor(CLSExecutor):
             self.params_path = os.path.abspath(params_path)
             self.label_file = os.path.abspath(label_file)
-        logger.info(self.cfg_path)
-        logger.info(self.model_path)
-        logger.info(self.params_path)
-        logger.info(self.label_file)
+        logger.debug(self.cfg_path)
+        logger.debug(self.model_path)
+        logger.debug(self.params_path)
+        logger.debug(self.label_file)
         # config
         with open(self.cfg_path, 'r') as f:
             self._conf = yaml.safe_load(f)
-        logger.info("Read cfg file successfully.")
+        logger.debug("Read cfg file successfully.")
         # labels
         self._label_list = []
         with open(self.label_file, 'r') as f:
             for line in f:
                 self._label_list.append(line.strip())
-        logger.info("Read label file successfully.")
+        logger.debug("Read label file successfully.")
         # Create predictor
         self.predictor_conf = predictor_conf
@@ -90,7 +90,7 @@ class CLSServerExecutor(CLSExecutor):
             model_file=self.model_path,
             params_file=self.params_path,
             predictor_conf=self.predictor_conf)
-        logger.info("Create predictor successfully.")
+        logger.debug("Create predictor successfully.")
     @paddle.no_grad()
     def infer(self):
@@ -148,7 +148,8 @@ class CLSEngine(BaseEngine):
             logger.error(e)
             return False
-        logger.info("Initialize CLS server engine successfully.")
+        logger.info("Initialize CLS server engine successfully on device: %s." %
+                    (self.device))
         return True
@@ -160,7 +161,7 @@ class PaddleCLSConnectionHandler(CLSServerExecutor):
             cls_engine (CLSEngine): The CLS engine
         """
         super().__init__()
-        logger.info(
+        logger.debug(
             "Create PaddleCLSConnectionHandler to process the cls request")
         self._inputs = OrderedDict()
@@ -183,7 +184,7 @@ class PaddleCLSConnectionHandler(CLSServerExecutor):
         self.infer()
         infer_time = time.time() - st
-        logger.info("inference time: {}".format(infer_time))
+        logger.debug("inference time: {}".format(infer_time))
         logger.info("cls engine type: inference")
     def postprocess(self, topk: int):
......
@@ -88,7 +88,7 @@ class PaddleCLSConnectionHandler(CLSServerExecutor):
             cls_engine (CLSEngine): The CLS engine
         """
         super().__init__()
-        logger.info(
+        logger.debug(
             "Create PaddleCLSConnectionHandler to process the cls request")
         self._inputs = OrderedDict()
@@ -110,7 +110,7 @@ class PaddleCLSConnectionHandler(CLSServerExecutor):
         self.infer()
         infer_time = time.time() - st
-        logger.info("inference time: {}".format(infer_time))
+        logger.debug("inference time: {}".format(infer_time))
         logger.info("cls engine type: python")
     def postprocess(self, topk: int):
......
@@ -13,7 +13,7 @@
 # limitations under the License.
 from typing import Text
-from ..utils.log import logger
+from paddlespeech.cli.log import logger
 __all__ = ['EngineFactory']
......
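Note: this one-line change (mirrored later in the onnx_infer hunk) swaps a relative import for the canonical logger module, so the server and the CLI share one Logger object instead of building parallel handler stacks. Schematically:

# Before: resolved relative to the current package, so different packages
# could end up constructing their own Logger (and duplicate handlers).
# from ..utils.log import logger

# After: every caller imports the same module object, hence the same logger.
from paddlespeech.cli.log import logger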
@@ -45,7 +45,7 @@ def warm_up(engine_and_type: str, warm_up_time: int=3) -> bool:
         logger.error("Please check tte engine type.")
     try:
-        logger.info("Start to warm up tts engine.")
+        logger.debug("Start to warm up tts engine.")
         for i in range(warm_up_time):
             connection_handler = PaddleTTSConnectionHandler(tts_engine)
             if flag_online:
@@ -53,7 +53,7 @@ def warm_up(engine_and_type: str, warm_up_time: int=3) -> bool:
                     text=sentence,
                     lang=tts_engine.lang,
                     am=tts_engine.config.am):
-                logger.info(
+                logger.debug(
                     f"The first response time of the {i} warm up: {connection_handler.first_response_time} s"
                 )
                 break
@@ -62,7 +62,7 @@ def warm_up(engine_and_type: str, warm_up_time: int=3) -> bool:
             st = time.time()
             connection_handler.infer(text=sentence)
             et = time.time()
-            logger.info(
+            logger.debug(
                 f"The response time of the {i} warm up: {et - st} s")
     except Exception as e:
         logger.error("Failed to warm up on tts engine.")
......
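Note: the warm_up() hunks only touch log levels; the flow itself is unchanged. A hedged sketch of that flow (handler construction and infer() follow the hunk, everything else is schematic):

import logging
import time

def warm_up(handler_factory, sentence: str, warm_up_time: int = 3) -> bool:
    # Run a few throwaway syntheses so the first real request does not pay
    # lazy-initialization costs; per-iteration latency is debug-level detail.
    log = logging.getLogger("tts_server")
    try:
        for i in range(warm_up_time):
            connection_handler = handler_factory()
            st = time.time()
            connection_handler.infer(text=sentence)
            log.debug("The response time of the %d warm up: %.3f s", i,
                      time.time() - st)
    except Exception:
        log.error("Failed to warm up on tts engine.")
        return False
    return True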
@@ -28,7 +28,7 @@ class PaddleTextConnectionHandler:
             text_engine (TextEngine): The Text engine
         """
         super().__init__()
-        logger.info(
+        logger.debug(
            "Create PaddleTextConnectionHandler to process the text request")
         self.text_engine = text_engine
         self.task = self.text_engine.executor.task
@@ -130,7 +130,7 @@ class TextEngine(BaseEngine):
         """The Text Engine
         """
         super(TextEngine, self).__init__()
-        logger.info("Create the TextEngine Instance")
+        logger.debug("Create the TextEngine Instance")
     def init(self, config: dict):
         """Init the Text Engine
@@ -141,7 +141,7 @@ class TextEngine(BaseEngine):
         Returns:
             bool: The engine instance flag
         """
-        logger.info("Init the text engine")
+        logger.debug("Init the text engine")
         try:
             self.config = config
             if self.config.device:
@@ -150,7 +150,7 @@ class TextEngine(BaseEngine):
                 self.device = paddle.get_device()
             paddle.set_device(self.device)
-            logger.info(f"Text Engine set the device: {self.device}")
+            logger.debug(f"Text Engine set the device: {self.device}")
         except BaseException as e:
             logger.error(
                 "Set device failed, please check if device is already used and the parameter 'device' in the yaml file"
@@ -168,5 +168,6 @@ class TextEngine(BaseEngine):
             ckpt_path=config.ckpt_path,
             vocab_file=config.vocab_file)
-        logger.info("Init the text engine successfully")
+        logger.info("Initialize Text server engine successfully on device: %s."
+                    % (self.device))
         return True
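Note: TextEngine.init() keeps its device-selection logic intact; only the logging changes. That selection pattern, visible in the hunks, reconstructed as a standalone sketch (error handling elided):

from typing import Optional

import paddle

def resolve_device(configured: Optional[str]) -> str:
    # Prefer the device named in the yaml config; otherwise fall back to
    # whatever paddle detects (e.g. 'gpu:0' or 'cpu'), then activate it.
    device = configured if configured else paddle.get_device()
    paddle.set_device(device)  # raises if the device is unavailable
    return device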
@@ -62,7 +62,7 @@ class TTSServerExecutor(TTSExecutor):
                 (hasattr(self, 'am_encoder_infer_sess') and
                  hasattr(self, 'am_decoder_sess') and hasattr(
                      self, 'am_postnet_sess'))) and hasattr(self, 'voc_inference'):
-            logger.info('Models had been initialized.')
+            logger.debug('Models had been initialized.')
             return
         # am
         am_tag = am + '-' + lang
@@ -85,8 +85,7 @@ class TTSServerExecutor(TTSExecutor):
             else:
                 self.am_ckpt = os.path.abspath(am_ckpt[0])
                 self.phones_dict = os.path.abspath(phones_dict)
-                self.am_res_path = os.path.dirname(
-                    os.path.abspath(am_ckpt))
+                self.am_res_path = os.path.dirname(os.path.abspath(am_ckpt))
             # create am sess
             self.am_sess = get_sess(self.am_ckpt, am_sess_conf)
@@ -119,8 +118,7 @@ class TTSServerExecutor(TTSExecutor):
                 self.am_postnet = os.path.abspath(am_ckpt[2])
                 self.phones_dict = os.path.abspath(phones_dict)
                 self.am_stat = os.path.abspath(am_stat)
-                self.am_res_path = os.path.dirname(
-                    os.path.abspath(am_ckpt[0]))
+                self.am_res_path = os.path.dirname(os.path.abspath(am_ckpt[0]))
             # create am sess
             self.am_encoder_infer_sess = get_sess(self.am_encoder_infer,
@@ -130,9 +128,9 @@ class TTSServerExecutor(TTSExecutor):
             self.am_mu, self.am_std = np.load(self.am_stat)
-        logger.info(f"self.phones_dict: {self.phones_dict}")
-        logger.info(f"am model dir: {self.am_res_path}")
-        logger.info("Create am sess successfully.")
+        logger.debug(f"self.phones_dict: {self.phones_dict}")
+        logger.debug(f"am model dir: {self.am_res_path}")
+        logger.debug("Create am sess successfully.")
         # voc model info
         voc_tag = voc + '-' + lang
@@ -149,16 +147,16 @@ class TTSServerExecutor(TTSExecutor):
         else:
             self.voc_ckpt = os.path.abspath(voc_ckpt)
             self.voc_res_path = os.path.dirname(os.path.abspath(self.voc_ckpt))
-        logger.info(self.voc_res_path)
+        logger.debug(self.voc_res_path)
         # create voc sess
         self.voc_sess = get_sess(self.voc_ckpt, voc_sess_conf)
-        logger.info("Create voc sess successfully.")
+        logger.debug("Create voc sess successfully.")
         with open(self.phones_dict, "r") as f:
             phn_id = [line.strip().split() for line in f.readlines()]
         self.vocab_size = len(phn_id)
-        logger.info(f"vocab_size: {self.vocab_size}")
+        logger.debug(f"vocab_size: {self.vocab_size}")
         # frontend
         self.tones_dict = None
@@ -169,7 +167,7 @@ class TTSServerExecutor(TTSExecutor):
         elif lang == 'en':
             self.frontend = English(phone_vocab_path=self.phones_dict)
-        logger.info("frontend done!")
+        logger.debug("frontend done!")
 class TTSEngine(BaseEngine):
@@ -267,7 +265,7 @@ class PaddleTTSConnectionHandler:
             tts_engine (TTSEngine): The TTS engine
         """
         super().__init__()
-        logger.info(
+        logger.debug(
             "Create PaddleTTSConnectionHandler to process the tts request")
         self.tts_engine = tts_engine
......
@@ -102,16 +102,22 @@ class TTSServerExecutor(TTSExecutor):
         Init model and other resources from a specific path.
         """
         if hasattr(self, 'am_inference') and hasattr(self, 'voc_inference'):
-            logger.info('Models had been initialized.')
+            logger.debug('Models had been initialized.')
             return
         # am model info
+        if am_ckpt is None or am_config is None or am_stat is None or phones_dict is None:
+            use_pretrained_am = True
+        else:
+            use_pretrained_am = False
         am_tag = am + '-' + lang
         self.task_resource.set_task_model(
             model_tag=am_tag,
             model_type=0,  # am
+            skip_download=not use_pretrained_am,
             version=None,  # default version
         )
-        if am_ckpt is None or am_config is None or am_stat is None or phones_dict is None:
+        if use_pretrained_am:
             self.am_res_path = self.task_resource.res_dir
             self.am_config = os.path.join(self.am_res_path,
                                           self.task_resource.res_dict['config'])
@@ -122,29 +128,33 @@ class TTSServerExecutor(TTSExecutor):
             # must have phones_dict in acoustic
             self.phones_dict = os.path.join(
                 self.am_res_path, self.task_resource.res_dict['phones_dict'])
-            print("self.phones_dict:", self.phones_dict)
-            logger.info(self.am_res_path)
-            logger.info(self.am_config)
-            logger.info(self.am_ckpt)
+            logger.debug(self.am_res_path)
+            logger.debug(self.am_config)
+            logger.debug(self.am_ckpt)
         else:
             self.am_config = os.path.abspath(am_config)
             self.am_ckpt = os.path.abspath(am_ckpt)
             self.am_stat = os.path.abspath(am_stat)
             self.phones_dict = os.path.abspath(phones_dict)
             self.am_res_path = os.path.dirname(os.path.abspath(self.am_config))
-            print("self.phones_dict:", self.phones_dict)
         self.tones_dict = None
         self.speaker_dict = None
         # voc model info
+        if voc_ckpt is None or voc_config is None or voc_stat is None:
+            use_pretrained_voc = True
+        else:
+            use_pretrained_voc = False
         voc_tag = voc + '-' + lang
         self.task_resource.set_task_model(
             model_tag=voc_tag,
             model_type=1,  # vocoder
+            skip_download=not use_pretrained_voc,
             version=None,  # default version
         )
-        if voc_ckpt is None or voc_config is None or voc_stat is None:
+        if use_pretrained_voc:
             self.voc_res_path = self.task_resource.voc_res_dir
             self.voc_config = os.path.join(
                 self.voc_res_path, self.task_resource.voc_res_dict['config'])
@@ -153,9 +163,9 @@ class TTSServerExecutor(TTSExecutor):
             self.voc_stat = os.path.join(
                 self.voc_res_path,
                 self.task_resource.voc_res_dict['speech_stats'])
-            logger.info(self.voc_res_path)
-            logger.info(self.voc_config)
-            logger.info(self.voc_ckpt)
+            logger.debug(self.voc_res_path)
+            logger.debug(self.voc_config)
+            logger.debug(self.voc_ckpt)
         else:
             self.voc_config = os.path.abspath(voc_config)
             self.voc_ckpt = os.path.abspath(voc_ckpt)
@@ -172,7 +182,6 @@ class TTSServerExecutor(TTSExecutor):
         with open(self.phones_dict, "r") as f:
             phn_id = [line.strip().split() for line in f.readlines()]
         self.vocab_size = len(phn_id)
-        print("vocab_size:", self.vocab_size)
         # frontend
         if lang == 'zh':
@@ -182,7 +191,6 @@ class TTSServerExecutor(TTSExecutor):
         elif lang == 'en':
             self.frontend = English(phone_vocab_path=self.phones_dict)
-        print("frontend done!")
         # am infer info
         self.am_name = am[:am.rindex('_')]
@@ -197,7 +205,6 @@ class TTSServerExecutor(TTSExecutor):
             self.am_name + '_inference')
         self.am_inference = am_inference_class(am_normalizer, am)
         self.am_inference.eval()
-        print("acoustic model done!")
         # voc infer info
         self.voc_name = voc[:voc.rindex('_')]
@@ -208,7 +215,6 @@ class TTSServerExecutor(TTSExecutor):
             '_inference')
         self.voc_inference = voc_inference_class(voc_normalizer, voc)
         self.voc_inference.eval()
-        print("voc done!")
 class TTSEngine(BaseEngine):
@@ -297,7 +303,7 @@ class PaddleTTSConnectionHandler:
             tts_engine (TTSEngine): The TTS engine
         """
         super().__init__()
-        logger.info(
+        logger.debug(
             "Create PaddleTTSConnectionHandler to process the tts request")
         self.tts_engine = tts_engine
@@ -357,7 +363,7 @@ class PaddleTTSConnectionHandler:
                 text, merge_sentences=merge_sentences)
             phone_ids = input_ids["phone_ids"]
         else:
-            print("lang should in {'zh', 'en'}!")
+            logger.error("lang should in {'zh', 'en'}!")
         frontend_et = time.time()
         self.frontend_time = frontend_et - frontend_st
......
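Note: the substantive change in this executor (and in the two that follow) is the new use_pretrained_am / use_pretrained_voc flags: when the caller supplies every local path, set_task_model() is now called with skip_download=True, so a pretrained archive that would never be used is no longer fetched. The decision rule, as a small runnable sketch:

from typing import Optional

def needs_pretrained(*user_paths: Optional[str]) -> bool:
    # Download a pretrained resource only if at least one required path is
    # missing; a fully specified custom model needs no download at all.
    return any(p is None for p in user_paths)

# Custom acoustic model with every path provided -> skip the download,
# i.e. set_task_model(..., skip_download=not use_pretrained_am).
use_pretrained_am = needs_pretrained("am.pdmodel", "am.pdiparams", "phones.txt")
assert use_pretrained_am is False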
@@ -65,16 +65,22 @@ class TTSServerExecutor(TTSExecutor):
         Init model and other resources from a specific path.
         """
         if hasattr(self, 'am_predictor') and hasattr(self, 'voc_predictor'):
-            logger.info('Models had been initialized.')
+            logger.debug('Models had been initialized.')
             return
         # am
+        if am_model is None or am_params is None or phones_dict is None:
+            use_pretrained_am = True
+        else:
+            use_pretrained_am = False
         am_tag = am + '-' + lang
         self.task_resource.set_task_model(
             model_tag=am_tag,
             model_type=0,  # am
+            skip_download=not use_pretrained_am,
             version=None,  # default version
         )
-        if am_model is None or am_params is None or phones_dict is None:
+        if use_pretrained_am:
             self.am_res_path = self.task_resource.res_dir
             self.am_model = os.path.join(self.am_res_path,
                                          self.task_resource.res_dict['model'])
@@ -85,16 +91,16 @@ class TTSServerExecutor(TTSExecutor):
                 self.am_res_path, self.task_resource.res_dict['phones_dict'])
             self.am_sample_rate = self.task_resource.res_dict['sample_rate']
-            logger.info(self.am_res_path)
-            logger.info(self.am_model)
-            logger.info(self.am_params)
+            logger.debug(self.am_res_path)
+            logger.debug(self.am_model)
+            logger.debug(self.am_params)
         else:
             self.am_model = os.path.abspath(am_model)
             self.am_params = os.path.abspath(am_params)
             self.phones_dict = os.path.abspath(phones_dict)
             self.am_sample_rate = am_sample_rate
             self.am_res_path = os.path.dirname(os.path.abspath(self.am_model))
-        logger.info("self.phones_dict: {}".format(self.phones_dict))
+        logger.debug("self.phones_dict: {}".format(self.phones_dict))
         # for speedyspeech
         self.tones_dict = None
@@ -113,13 +119,19 @@ class TTSServerExecutor(TTSExecutor):
             self.speaker_dict = speaker_dict
         # voc
+        if voc_model is None or voc_params is None:
+            use_pretrained_voc = True
+        else:
+            use_pretrained_voc = False
         voc_tag = voc + '-' + lang
         self.task_resource.set_task_model(
             model_tag=voc_tag,
             model_type=1,  # vocoder
+            skip_download=not use_pretrained_voc,
             version=None,  # default version
         )
-        if voc_model is None or voc_params is None:
+        if use_pretrained_voc:
             self.voc_res_path = self.task_resource.voc_res_dir
             self.voc_model = os.path.join(
                 self.voc_res_path, self.task_resource.voc_res_dict['model'])
@@ -127,9 +139,9 @@ class TTSServerExecutor(TTSExecutor):
                 self.voc_res_path, self.task_resource.voc_res_dict['params'])
             self.voc_sample_rate = self.task_resource.voc_res_dict[
                 'sample_rate']
-            logger.info(self.voc_res_path)
-            logger.info(self.voc_model)
-            logger.info(self.voc_params)
+            logger.debug(self.voc_res_path)
+            logger.debug(self.voc_model)
+            logger.debug(self.voc_params)
         else:
             self.voc_model = os.path.abspath(voc_model)
             self.voc_params = os.path.abspath(voc_params)
@@ -144,21 +156,21 @@ class TTSServerExecutor(TTSExecutor):
         with open(self.phones_dict, "r") as f:
             phn_id = [line.strip().split() for line in f.readlines()]
         vocab_size = len(phn_id)
-        logger.info("vocab_size: {}".format(vocab_size))
+        logger.debug("vocab_size: {}".format(vocab_size))
         tone_size = None
         if self.tones_dict:
             with open(self.tones_dict, "r") as f:
                 tone_id = [line.strip().split() for line in f.readlines()]
             tone_size = len(tone_id)
-            logger.info("tone_size: {}".format(tone_size))
+            logger.debug("tone_size: {}".format(tone_size))
         spk_num = None
         if self.speaker_dict:
             with open(self.speaker_dict, 'rt') as f:
                 spk_id = [line.strip().split() for line in f.readlines()]
             spk_num = len(spk_id)
-            logger.info("spk_num: {}".format(spk_num))
+            logger.debug("spk_num: {}".format(spk_num))
         # frontend
         if lang == 'zh':
@@ -168,7 +180,7 @@ class TTSServerExecutor(TTSExecutor):
         elif lang == 'en':
             self.frontend = English(phone_vocab_path=self.phones_dict)
-        logger.info("frontend done!")
+        logger.debug("frontend done!")
         # Create am predictor
         self.am_predictor_conf = am_predictor_conf
@@ -176,7 +188,7 @@ class TTSServerExecutor(TTSExecutor):
             model_file=self.am_model,
             params_file=self.am_params,
             predictor_conf=self.am_predictor_conf)
-        logger.info("Create AM predictor successfully.")
+        logger.debug("Create AM predictor successfully.")
         # Create voc predictor
         self.voc_predictor_conf = voc_predictor_conf
@@ -184,7 +196,7 @@ class TTSServerExecutor(TTSExecutor):
             model_file=self.voc_model,
             params_file=self.voc_params,
             predictor_conf=self.voc_predictor_conf)
-        logger.info("Create Vocoder predictor successfully.")
+        logger.debug("Create Vocoder predictor successfully.")
     @paddle.no_grad()
     def infer(self,
@@ -316,7 +328,8 @@ class TTSEngine(BaseEngine):
             logger.error(e)
             return False
-        logger.info("Initialize TTS server engine successfully.")
+        logger.info("Initialize TTS server engine successfully on device: %s." %
+                    (self.device))
         return True
@@ -328,7 +341,7 @@ class PaddleTTSConnectionHandler(TTSServerExecutor):
             tts_engine (TTSEngine): The TTS engine
         """
         super().__init__()
-        logger.info(
+        logger.debug(
             "Create PaddleTTSConnectionHandler to process the tts request")
         self.tts_engine = tts_engine
@@ -366,23 +379,23 @@ class PaddleTTSConnectionHandler(TTSServerExecutor):
         if target_fs == 0 or target_fs > original_fs:
             target_fs = original_fs
             wav_tar_fs = wav
-            logger.info(
+            logger.debug(
                 "The sample rate of synthesized audio is the same as model, which is {}Hz".
                 format(original_fs))
         else:
             wav_tar_fs = librosa.resample(
                 np.squeeze(wav), original_fs, target_fs)
-            logger.info(
+            logger.debug(
                 "The sample rate of model is {}Hz and the target sample rate is {}Hz. Converting the sample rate of the synthesized audio successfully.".
                 format(original_fs, target_fs))
         # transform volume
         wav_vol = wav_tar_fs * volume
-        logger.info("Transform the volume of the audio successfully.")
+        logger.debug("Transform the volume of the audio successfully.")
         # transform speed
         try:  # windows not support soxbindings
             wav_speed = change_speed(wav_vol, speed, target_fs)
-            logger.info("Transform the speed of the audio successfully.")
+            logger.debug("Transform the speed of the audio successfully.")
         except ServerBaseException:
             raise ServerBaseException(
                 ErrorCode.SERVER_INTERNAL_ERR,
@@ -399,7 +412,7 @@ class PaddleTTSConnectionHandler(TTSServerExecutor):
         wavfile.write(buf, target_fs, wav_speed)
         base64_bytes = base64.b64encode(buf.read())
         wav_base64 = base64_bytes.decode('utf-8')
-        logger.info("Audio to string successfully.")
+        logger.debug("Audio to string successfully.")
         # save audio
         if audio_path is not None:
@@ -487,15 +500,15 @@ class PaddleTTSConnectionHandler(TTSServerExecutor):
             logger.error(e)
             sys.exit(-1)
-        logger.info("AM model: {}".format(self.config.am))
-        logger.info("Vocoder model: {}".format(self.config.voc))
-        logger.info("Language: {}".format(lang))
+        logger.debug("AM model: {}".format(self.config.am))
+        logger.debug("Vocoder model: {}".format(self.config.voc))
+        logger.debug("Language: {}".format(lang))
         logger.info("tts engine type: python")
         logger.info("audio duration: {}".format(duration))
-        logger.info("frontend inference time: {}".format(self.frontend_time))
-        logger.info("AM inference time: {}".format(self.am_time))
-        logger.info("Vocoder inference time: {}".format(self.voc_time))
+        logger.debug("frontend inference time: {}".format(self.frontend_time))
+        logger.debug("AM inference time: {}".format(self.am_time))
+        logger.debug("Vocoder inference time: {}".format(self.voc_time))
         logger.info("total inference time: {}".format(infer_time))
         logger.info(
             "postprocess (change speed, volume, target sample rate) time: {}".
@@ -503,6 +516,6 @@ class PaddleTTSConnectionHandler(TTSServerExecutor):
         logger.info("total generate audio time: {}".format(infer_time +
                                                            postprocess_time))
         logger.info("RTF: {}".format(rtf))
-        logger.info("device: {}".format(self.tts_engine.device))
+        logger.debug("device: {}".format(self.tts_engine.device))
         return lang, target_sample_rate, duration, wav_base64
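Note: the timing block above keeps the end-to-end numbers (duration, total time, RTF) at info and demotes per-stage timings to debug. For reference, the real-time factor reported here is synthesis time divided by the duration of the audio produced:

# RTF = time spent synthesizing / seconds of audio synthesized.
infer_time = 0.8  # frontend + AM + vocoder, in seconds
duration = 4.0    # seconds of generated audio
rtf = infer_time / duration
print(f"RTF: {rtf}")  # 0.2, i.e. five times faster than real time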
@@ -105,7 +105,7 @@ class PaddleTTSConnectionHandler(TTSServerExecutor):
             tts_engine (TTSEngine): The TTS engine
         """
         super().__init__()
-        logger.info(
+        logger.debug(
             "Create PaddleTTSConnectionHandler to process the tts request")
         self.tts_engine = tts_engine
@@ -143,23 +143,23 @@ class PaddleTTSConnectionHandler(TTSServerExecutor):
         if target_fs == 0 or target_fs > original_fs:
             target_fs = original_fs
             wav_tar_fs = wav
-            logger.info(
+            logger.debug(
                 "The sample rate of synthesized audio is the same as model, which is {}Hz".
                 format(original_fs))
         else:
             wav_tar_fs = librosa.resample(
                 np.squeeze(wav), original_fs, target_fs)
-            logger.info(
+            logger.debug(
                 "The sample rate of model is {}Hz and the target sample rate is {}Hz. Converting the sample rate of the synthesized audio successfully.".
                 format(original_fs, target_fs))
         # transform volume
         wav_vol = wav_tar_fs * volume
-        logger.info("Transform the volume of the audio successfully.")
+        logger.debug("Transform the volume of the audio successfully.")
         # transform speed
         try:  # windows not support soxbindings
             wav_speed = change_speed(wav_vol, speed, target_fs)
-            logger.info("Transform the speed of the audio successfully.")
+            logger.debug("Transform the speed of the audio successfully.")
         except ServerBaseException:
             raise ServerBaseException(
                 ErrorCode.SERVER_INTERNAL_ERR,
@@ -176,7 +176,7 @@ class PaddleTTSConnectionHandler(TTSServerExecutor):
         wavfile.write(buf, target_fs, wav_speed)
         base64_bytes = base64.b64encode(buf.read())
         wav_base64 = base64_bytes.decode('utf-8')
-        logger.info("Audio to string successfully.")
+        logger.debug("Audio to string successfully.")
         # save audio
         if audio_path is not None:
@@ -264,15 +264,15 @@ class PaddleTTSConnectionHandler(TTSServerExecutor):
             logger.error(e)
             sys.exit(-1)
-        logger.info("AM model: {}".format(self.config.am))
-        logger.info("Vocoder model: {}".format(self.config.voc))
-        logger.info("Language: {}".format(lang))
+        logger.debug("AM model: {}".format(self.config.am))
+        logger.debug("Vocoder model: {}".format(self.config.voc))
+        logger.debug("Language: {}".format(lang))
         logger.info("tts engine type: python")
         logger.info("audio duration: {}".format(duration))
-        logger.info("frontend inference time: {}".format(self.frontend_time))
-        logger.info("AM inference time: {}".format(self.am_time))
-        logger.info("Vocoder inference time: {}".format(self.voc_time))
+        logger.debug("frontend inference time: {}".format(self.frontend_time))
+        logger.debug("AM inference time: {}".format(self.am_time))
+        logger.debug("Vocoder inference time: {}".format(self.voc_time))
         logger.info("total inference time: {}".format(infer_time))
         logger.info(
             "postprocess (change speed, volume, target sample rate) time: {}".
@@ -280,6 +280,6 @@ class PaddleTTSConnectionHandler(TTSServerExecutor):
         logger.info("total generate audio time: {}".format(infer_time +
                                                            postprocess_time))
         logger.info("RTF: {}".format(rtf))
-        logger.info("device: {}".format(self.tts_engine.device))
+        logger.debug("device: {}".format(self.tts_engine.device))
         return lang, target_sample_rate, duration, wav_base64
@@ -33,7 +33,7 @@ class PaddleVectorConnectionHandler:
             vector_engine (VectorEngine): The Vector engine
         """
         super().__init__()
-        logger.info(
+        logger.debug(
             "Create PaddleVectorConnectionHandler to process the vector request")
         self.vector_engine = vector_engine
         self.executor = self.vector_engine.executor
@@ -54,7 +54,7 @@ class PaddleVectorConnectionHandler:
         Returns:
             str: the punctuation text
         """
-        logger.info(
+        logger.debug(
             f"start to extract the do vector {self.task} from the http request")
         if self.task == "spk" and task == "spk":
             embedding = self.extract_audio_embedding(audio_data)
@@ -81,17 +81,17 @@ class PaddleVectorConnectionHandler:
         Returns:
             float: the score between enroll and test audio
         """
-        logger.info("start to extract the enroll audio embedding")
+        logger.debug("start to extract the enroll audio embedding")
         enroll_emb = self.extract_audio_embedding(enroll_audio)
-        logger.info("start to extract the test audio embedding")
+        logger.debug("start to extract the test audio embedding")
         test_emb = self.extract_audio_embedding(test_audio)
-        logger.info(
+        logger.debug(
             "start to get the score between the enroll and test embedding")
         score = self.executor.get_embeddings_score(enroll_emb, test_emb)
-        logger.info(f"get the enroll vs test score: {score}")
+        logger.debug(f"get the enroll vs test score: {score}")
         return score
     @paddle.no_grad()
@@ -106,11 +106,12 @@ class PaddleVectorConnectionHandler:
         # because the soundfile will change the io.BytesIO(audio) to the end
         # thus we should convert the base64 string to io.BytesIO when we need the audio data
         if not self.executor._check(io.BytesIO(audio), sample_rate):
-            logger.info("check the audio sample rate occurs error")
+            logger.debug("check the audio sample rate occurs error")
             return np.array([0.0])
         waveform, sr = load_audio(io.BytesIO(audio))
-        logger.info(f"load the audio sample points, shape is: {waveform.shape}")
+        logger.debug(
+            f"load the audio sample points, shape is: {waveform.shape}")
         # stage 2: get the audio feat
         # Note: Now we only support fbank feature
@@ -121,9 +122,9 @@ class PaddleVectorConnectionHandler:
                 n_mels=self.config.n_mels,
                 window_size=self.config.window_size,
                 hop_length=self.config.hop_size)
-            logger.info(f"extract the audio feats, shape is: {feats.shape}")
+            logger.debug(f"extract the audio feats, shape is: {feats.shape}")
         except Exception as e:
-            logger.info(f"feats occurs exception {e}")
+            logger.error(f"feats occurs exception {e}")
             sys.exit(-1)
         feats = paddle.to_tensor(feats).unsqueeze(0)
@@ -159,7 +160,7 @@ class VectorEngine(BaseEngine):
         """The Vector Engine
         """
         super(VectorEngine, self).__init__()
-        logger.info("Create the VectorEngine Instance")
+        logger.debug("Create the VectorEngine Instance")
     def init(self, config: dict):
         """Init the Vector Engine
@@ -170,7 +171,7 @@ class VectorEngine(BaseEngine):
         Returns:
             bool: The engine instance flag
         """
-        logger.info("Init the vector engine")
+        logger.debug("Init the vector engine")
         try:
             self.config = config
             if self.config.device:
@@ -179,7 +180,7 @@ class VectorEngine(BaseEngine):
                 self.device = paddle.get_device()
             paddle.set_device(self.device)
-            logger.info(f"Vector Engine set the device: {self.device}")
+            logger.debug(f"Vector Engine set the device: {self.device}")
         except BaseException as e:
             logger.error(
                 "Set device failed, please check if device is already used and the parameter 'device' in the yaml file"
@@ -196,5 +197,7 @@ class VectorEngine(BaseEngine):
             ckpt_path=config.ckpt_path,
             task=config.task)
-        logger.info("Init the Vector engine successfully")
+        logger.info(
+            "Initialize Vector server engine successfully on device: %s." %
+            (self.device))
         return True
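Note: get_embeddings_score() compares the enrollment and test embeddings extracted above. Speaker-verification scores of this kind are conventionally cosine similarity; whether the executor uses exactly this formula is an assumption, but the shape of the computation is:

import numpy as np

def cosine_score(enroll_emb: np.ndarray, test_emb: np.ndarray) -> float:
    # Cosine similarity in [-1, 1]; larger means the two utterances are
    # more likely to come from the same speaker.
    num = float(np.dot(enroll_emb, test_emb))
    den = float(np.linalg.norm(enroll_emb) * np.linalg.norm(test_emb)) + 1e-12
    return num / den

score = cosine_score(np.array([0.1, 0.9, 0.2]), np.array([0.12, 0.85, 0.3]))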
@@ -138,7 +138,7 @@ class ASRWsAudioHandler:
         Returns:
             str: the final asr result
         """
-        logging.info("send a message to the server")
+        logging.debug("send a message to the server")
         if self.url is None:
             logger.error("No asr server, please input valid ip and port")
@@ -160,7 +160,7 @@ class ASRWsAudioHandler:
             separators=(',', ': '))
         await ws.send(audio_info)
         msg = await ws.recv()
-        logger.info("client receive msg={}".format(msg))
+        logger.debug("client receive msg={}".format(msg))
         # 3. send chunk audio data to engine
         for chunk_data in self.read_wave(wavfile_path):
@@ -170,7 +170,7 @@ class ASRWsAudioHandler:
             if self.punc_server and len(msg["result"]) > 0:
                 msg["result"] = self.punc_server.run(msg["result"])
-            logger.info("client receive msg={}".format(msg))
+            logger.debug("client receive msg={}".format(msg))
         # 4. we must send finished signal to the server
         audio_info = json.dumps(
@@ -310,7 +310,7 @@ class TTSWsHandler:
         start_request = json.dumps({"task": "tts", "signal": "start"})
         await ws.send(start_request)
         msg = await ws.recv()
-        logger.info(f"client receive msg={msg}")
+        logger.debug(f"client receive msg={msg}")
         msg = json.loads(msg)
         session = msg["session"]
@@ -319,7 +319,7 @@ class TTSWsHandler:
         request = json.dumps({"text": text_base64})
         st = time.time()
         await ws.send(request)
-        logging.info("send a message to the server")
+        logging.debug("send a message to the server")
         # 4. Process the received response
         message = await ws.recv()
@@ -543,7 +543,6 @@ class VectorHttpHandler:
             "sample_rate": sample_rate,
         }
-        logger.info(self.url)
         res = requests.post(url=self.url, data=json.dumps(data))
         return res.json()
......
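Note: the ASRWsAudioHandler changes above are again log-level only; the streaming protocol itself (start signal, audio chunks with partial results, finished signal) is untouched. A minimal client sketch with the websockets package; the URL and the exact JSON fields other than "result" are assumptions:

import asyncio
import json

import websockets  # pip install websockets

async def stream_asr(url: str, chunks) -> str:
    async with websockets.connect(url) as ws:
        await ws.send(json.dumps({"signal": "start"}))  # 1. open an utterance
        await ws.recv()
        for chunk in chunks:                            # 2. stream audio data
            await ws.send(chunk)
            partial = json.loads(await ws.recv())       # partial result
        await ws.send(json.dumps({"signal": "end"}))    # 3. finished signal
        return json.loads(await ws.recv())["result"]

# asyncio.run(stream_asr("ws://127.0.0.1:8090/paddlespeech/asr/streaming", []))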
@@ -169,7 +169,7 @@ def save_audio(bytes_data, audio_path, sample_rate: int=24000) -> bool:
             sample_rate=sample_rate)
         os.remove("./tmp.pcm")
     else:
-        print("Only supports saved audio format is pcm or wav")
+        logger.error("Only supports saved audio format is pcm or wav")
         return False
     return True
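Note: save_audio() accepts only pcm or wav payloads; the pcm branch must wrap the raw samples in a wav header before most players will accept the file. A standalone sketch with the stdlib wave module (mono, 16-bit samples are assumptions about the payload):

import wave

def pcm_to_wav(pcm_bytes: bytes, wav_path: str, sample_rate: int = 24000) -> None:
    # Wrap headerless 16-bit mono PCM in a RIFF/WAVE container.
    with wave.open(wav_path, "wb") as wf:
        wf.setnchannels(1)   # mono (assumption)
        wf.setsampwidth(2)   # 2 bytes = 16-bit samples (assumption)
        wf.setframerate(sample_rate)
        wf.writeframes(pcm_bytes)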
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
import logging
__all__ = [
'logger',
]
class Logger(object):
    def __init__(self, name: str=None):
        name = 'PaddleSpeech' if not name else name
        self.logger = logging.getLogger(name)

        log_config = {
            'DEBUG': 10,
            'INFO': 20,
            'TRAIN': 21,
            'EVAL': 22,
            'WARNING': 30,
            'ERROR': 40,
            'CRITICAL': 50,
            'EXCEPTION': 100,
        }
        for key, level in log_config.items():
            logging.addLevelName(level, key)
            if key == 'EXCEPTION':
                self.__dict__[key.lower()] = self.logger.exception
            else:
                self.__dict__[key.lower()] = functools.partial(self.__call__,
                                                               level)

        self.format = logging.Formatter(
            fmt='[%(asctime)-15s] [%(levelname)8s] - %(message)s')

        self.handler = logging.StreamHandler()
        self.handler.setFormatter(self.format)

        self.logger.addHandler(self.handler)
        self.logger.setLevel(logging.DEBUG)
        self.logger.propagate = False

    def __call__(self, log_level: str, msg: str):
        self.logger.log(log_level, msg)


logger = Logger()
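This module is what the new paddlespeech.cli.log imports above resolve to: one process-wide Logger whose custom TRAIN (21) and EVAL (22) levels sit between INFO and WARNING, and whose level methods (logger.debug, logger.train, ...) are generated from log_config via functools.partial. Typical usage:

from paddlespeech.cli.log import logger

logger.debug("resolved cfg path: conf/application.yaml")  # verbose detail
logger.info("Initialize ASR server engine successfully on device: cpu")
logger.train("epoch 3, step 1200, loss 0.42")  # custom level 21
logger.eval("dev CER: 5.1")                    # custom level 22
logger.error("file check failed!")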
@@ -16,11 +16,11 @@ from typing import Optional
 import onnxruntime as ort
-from .log import logger
+from paddlespeech.cli.log import logger
 def get_sess(model_path: Optional[os.PathLike]=None, sess_conf: dict=None):
-    logger.info(f"ort sessconf: {sess_conf}")
+    logger.debug(f"ort sessconf: {sess_conf}")
     sess_options = ort.SessionOptions()
     sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
     if sess_conf.get('graph_optimization_level', 99) == 0:
@@ -34,7 +34,7 @@ def get_sess(model_path: Optional[os.PathLike]=None, sess_conf: dict=None):
     # fastspeech2/mb_melgan can't use trt now!
     if sess_conf.get("use_trt", 0):
         providers = ['TensorrtExecutionProvider']
-    logger.info(f"ort providers: {providers}")
+    logger.debug(f"ort providers: {providers}")
     if 'cpu_threads' in sess_conf:
         sess_options.intra_op_num_threads = sess_conf.get("cpu_threads", 0)
......
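Note: get_sess() assembles an onnxruntime session from a plain config dict. A condensed, hedged sketch of the same assembly against the standard onnxruntime API (the conf keys mirror the hunk; the CPU default provider is an assumption):

import onnxruntime as ort

def make_session(model_path: str, sess_conf: dict) -> ort.InferenceSession:
    sess_options = ort.SessionOptions()
    sess_options.graph_optimization_level = (
        ort.GraphOptimizationLevel.ORT_ENABLE_ALL)
    providers = ['CPUExecutionProvider']
    if sess_conf.get("use_trt", 0):
        # fastspeech2/mb_melgan can't use trt now!
        providers = ['TensorrtExecutionProvider']
    if 'cpu_threads' in sess_conf:
        sess_options.intra_op_num_threads = sess_conf.get("cpu_threads", 0)
    return ort.InferenceSession(
        model_path, sess_options=sess_options, providers=providers)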
@@ -13,6 +13,8 @@
 import base64
 import math
+from paddlespeech.cli.log import logger
 def wav2base64(wav_file: str):
     """
@@ -61,7 +63,7 @@ def get_chunks(data, block_size, pad_size, step):
     elif step == "voc":
         data_len = data.shape[0]
     else:
-        print("Please set correct type to get chunks, am or voc")
+        logger.error("Please set correct type to get chunks, am or voc")
     chunks = []
     n = math.ceil(data_len / block_size)
@@ -73,7 +75,7 @@ def get_chunks(data, block_size, pad_size, step):
         elif step == "voc":
             chunks.append(data[start:end, :])
         else:
-            print("Please set correct type to get chunks, am or voc")
+            logger.error("Please set correct type to get chunks, am or voc")
     return chunks
......
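Note: only fragments of get_chunks() are visible in the hunk, but its job is to cut a sequence into fixed-size blocks with symmetric padding so streaming synthesis can overlap block edges. A hedged reconstruction with a worked example (the padding arithmetic is an assumption consistent with the visible lines):

import math

def get_chunks(data, block_size: int, pad_size: int):
    # Cut `data` (a plain list here) into ceil(len/block_size) chunks, each
    # extended by up to `pad_size` elements of left/right context.
    data_len = len(data)
    chunks = []
    n = math.ceil(data_len / block_size)
    for i in range(n):
        start = max(0, i * block_size - pad_size)
        end = min((i + 1) * block_size + pad_size, data_len)
        chunks.append(data[start:end])
    return chunks

print(get_chunks(list(range(10)), block_size=4, pad_size=1))
# [[0, 1, 2, 3, 4], [3, 4, 5, 6, 7, 8], [7, 8, 9]]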
@@ -141,71 +141,133 @@ class FastSpeech2(nn.Layer):
             init_dec_alpha: float=1.0, ):
         """Initialize FastSpeech2 module.
         Args:
-            idim (int): Dimension of the inputs.
-            odim (int): Dimension of the outputs.
-            adim (int): Attention dimension.
-            aheads (int): Number of attention heads.
-            elayers (int): Number of encoder layers.
-            eunits (int): Number of encoder hidden units.
-            dlayers (int): Number of decoder layers.
-            dunits (int): Number of decoder hidden units.
-            postnet_layers (int): Number of postnet layers.
-            postnet_chans (int): Number of postnet channels.
-            postnet_filts (int): Kernel size of postnet.
-            postnet_dropout_rate (float): Dropout rate in postnet.
-            use_scaled_pos_enc (bool): Whether to use trainable scaled pos encoding.
-            use_batch_norm (bool): Whether to use batch normalization in encoder prenet.
-            encoder_normalize_before (bool): Whether to apply layernorm layer before encoder block.
-            decoder_normalize_before (bool): Whether to apply layernorm layer before decoder block.
-            encoder_concat_after (bool): Whether to concatenate attention layer's input and output in encoder.
-            decoder_concat_after (bool): Whether to concatenate attention layer's input and output in decoder.
-            reduction_factor (int): Reduction factor.
-            encoder_type (str): Encoder type ("transformer" or "conformer").
-            decoder_type (str): Decoder type ("transformer" or "conformer").
-            transformer_enc_dropout_rate (float): Dropout rate in encoder except attention and positional encoding.
-            transformer_enc_positional_dropout_rate (float): Dropout rate after encoder positional encoding.
-            transformer_enc_attn_dropout_rate (float): Dropout rate in encoder self-attention module.
-            transformer_dec_dropout_rate (float): Dropout rate in decoder except attention & positional encoding.
-            transformer_dec_positional_dropout_rate (float): Dropout rate after decoder positional encoding.
-            transformer_dec_attn_dropout_rate (float): Dropout rate in decoder self-attention module.
-            conformer_pos_enc_layer_type (str): Pos encoding layer type in conformer.
-            conformer_self_attn_layer_type (str): Self-attention layer type in conformer
-            conformer_activation_type (str): Activation function type in conformer.
-            use_macaron_style_in_conformer (bool): Whether to use macaron style FFN.
-            use_cnn_in_conformer (bool): Whether to use CNN in conformer.
-            zero_triu (bool): Whether to use zero triu in relative self-attention module.
-            conformer_enc_kernel_size (int): Kernel size of encoder conformer.
-            conformer_dec_kernel_size (int): Kernel size of decoder conformer.
-            duration_predictor_layers (int): Number of duration predictor layers.
-            duration_predictor_chans (int): Number of duration predictor channels.
-            duration_predictor_kernel_size (int): Kernel size of duration predictor.
-            duration_predictor_dropout_rate (float): Dropout rate in duration predictor.
-            pitch_predictor_layers (int): Number of pitch predictor layers.
-            pitch_predictor_chans (int): Number of pitch predictor channels.
-            pitch_predictor_kernel_size (int): Kernel size of pitch predictor.
-            pitch_predictor_dropout_rate (float): Dropout rate in pitch predictor.
-            pitch_embed_kernel_size (float): Kernel size of pitch embedding.
-            pitch_embed_dropout_rate (float): Dropout rate for pitch embedding.
-            stop_gradient_from_pitch_predictor (bool): Whether to stop gradient from pitch predictor to encoder.
-            energy_predictor_layers (int): Number of energy predictor layers.
-            energy_predictor_chans (int): Number of energy predictor channels.
-            energy_predictor_kernel_size (int): Kernel size of energy predictor.
-            energy_predictor_dropout_rate (float): Dropout rate in energy predictor.
-            energy_embed_kernel_size (float): Kernel size of energy embedding.
-            energy_embed_dropout_rate (float): Dropout rate for energy embedding.
-            stop_gradient_from_energy_predictor(bool): Whether to stop gradient from energy predictor to encoder.
-            spk_num (Optional[int]): Number of speakers. If not None, assume that the spk_embed_dim is not None,
+            idim (int):
+                Dimension of the inputs.
+            odim (int):
+                Dimension of the outputs.
+            adim (int):
+                Attention dimension.
+            aheads (int):
+                Number of attention heads.
+            elayers (int):
+                Number of encoder layers.
+            eunits (int):
+                Number of encoder hidden units.
+            dlayers (int):
+                Number of decoder layers.
+            dunits (int):
+                Number of decoder hidden units.
+            postnet_layers (int):
+                Number of postnet layers.
+            postnet_chans (int):
+                Number of postnet channels.
+            postnet_filts (int):
+                Kernel size of postnet.
+            postnet_dropout_rate (float):
+                Dropout rate in postnet.
+            use_scaled_pos_enc (bool):
+                Whether to use trainable scaled pos encoding.
+            use_batch_norm (bool):
+                Whether to use batch normalization in encoder prenet.
+            encoder_normalize_before (bool):
+                Whether to apply layernorm layer before encoder block.
+            decoder_normalize_before (bool):
+                Whether to apply layernorm layer before decoder block.
+            encoder_concat_after (bool):
+                Whether to concatenate attention layer's input and output in encoder.
+            decoder_concat_after (bool):
+                Whether to concatenate attention layer's input and output in decoder.
+            reduction_factor (int):
+                Reduction factor.
+            encoder_type (str):
+                Encoder type ("transformer" or "conformer").
+            decoder_type (str):
+                Decoder type ("transformer" or "conformer").
+            transformer_enc_dropout_rate (float):
+                Dropout rate in encoder except attention and positional encoding.
+            transformer_enc_positional_dropout_rate (float):
+                Dropout rate after encoder positional encoding.
+            transformer_enc_attn_dropout_rate (float):
+                Dropout rate in encoder self-attention module.
+            transformer_dec_dropout_rate (float):
+                Dropout rate in decoder except attention & positional encoding.
+            transformer_dec_positional_dropout_rate (float):
+                Dropout rate after decoder positional encoding.
+            transformer_dec_attn_dropout_rate (float):
+                Dropout rate in decoder self-attention module.
+            conformer_pos_enc_layer_type (str):
+                Pos encoding layer type in conformer.
+            conformer_self_attn_layer_type (str):
+                Self-attention layer type in conformer
+            conformer_activation_type (str):
+                Activation function type in conformer.
+            use_macaron_style_in_conformer (bool):
+                Whether to use macaron style FFN.
+            use_cnn_in_conformer (bool):
+                Whether to use CNN in conformer.
+            zero_triu (bool):
+                Whether to use zero triu in relative self-attention module.
+            conformer_enc_kernel_size (int):
+                Kernel size of encoder conformer.
+            conformer_dec_kernel_size (int):
+                Kernel size of decoder conformer.
+            duration_predictor_layers (int):
+                Number of duration predictor layers.
+            duration_predictor_chans (int):
+                Number of duration predictor channels.
+            duration_predictor_kernel_size (int):
+                Kernel size of duration predictor.
+            duration_predictor_dropout_rate (float):
+                Dropout rate in duration predictor.
+            pitch_predictor_layers (int):
+                Number of pitch predictor layers.
+            pitch_predictor_chans (int):
+                Number of pitch predictor channels.
+            pitch_predictor_kernel_size (int):
+                Kernel size of pitch predictor.
+            pitch_predictor_dropout_rate (float):
+                Dropout rate in pitch predictor.
+            pitch_embed_kernel_size (float):
+                Kernel size of pitch embedding.
+            pitch_embed_dropout_rate (float):
+                Dropout rate for pitch embedding.
+            stop_gradient_from_pitch_predictor (bool):
+                Whether to stop gradient from pitch predictor to encoder.
+            energy_predictor_layers (int):
+                Number of energy predictor layers.
+            energy_predictor_chans (int):
+                Number of energy predictor channels.
+            energy_predictor_kernel_size (int):
+                Kernel size of energy predictor.
+            energy_predictor_dropout_rate (float):
+                Dropout rate in energy predictor.
+            energy_embed_kernel_size (float):
+                Kernel size of energy embedding.
+            energy_embed_dropout_rate (float):
+                Dropout rate for energy embedding.
+            stop_gradient_from_energy_predictor(bool):
+                Whether to stop gradient from energy predictor to encoder.
spk_num (Optional[int]):
Number of speakers. If not None, assume that the spk_embed_dim is not None,
spk_ids will be provided as the input and use spk_embedding_table. spk_ids will be provided as the input and use spk_embedding_table.
spk_embed_dim (Optional[int]): Speaker embedding dimension. If not None, spk_embed_dim (Optional[int]):
Speaker embedding dimension. If not None,
assume that spk_emb will be provided as the input or spk_num is not None. assume that spk_emb will be provided as the input or spk_num is not None.
spk_embed_integration_type (str): How to integrate speaker embedding. spk_embed_integration_type (str):
tone_num (Optional[int]): Number of tones. If not None, assume that the How to integrate speaker embedding.
tone_num (Optional[int]):
Number of tones. If not None, assume that the
tone_ids will be provided as the input and use tone_embedding_table. tone_ids will be provided as the input and use tone_embedding_table.
tone_embed_dim (Optional[int]): Tone embedding dimension. If not None, assume that tone_num is not None. tone_embed_dim (Optional[int]):
tone_embed_integration_type (str): How to integrate tone embedding. Tone embedding dimension. If not None, assume that tone_num is not None.
init_type (str): How to initialize transformer parameters. tone_embed_integration_type (str):
init_enc_alpha (float): Initial value of alpha in scaled pos encoding of the encoder. How to integrate tone embedding.
init_dec_alpha (float): Initial value of alpha in scaled pos encoding of the decoder. init_type (str):
How to initialize transformer parameters.
init_enc_alpha (float):
Initial value of alpha in scaled pos encoding of the encoder.
init_dec_alpha (float):
Initial value of alpha in scaled pos encoding of the decoder.
""" """
assert check_argument_types() assert check_argument_types()
@@ -258,7 +320,6 @@ class FastSpeech2(nn.Layer):
    padding_idx=self.padding_idx)
if encoder_type == "transformer":
    self.encoder = TransformerEncoder(
        idim=idim,
        attention_dim=adim,
@@ -275,7 +336,6 @@ class FastSpeech2(nn.Layer):
        positionwise_layer_type=positionwise_layer_type,
        positionwise_conv_kernel_size=positionwise_conv_kernel_size, )
elif encoder_type == "conformer":
    self.encoder = ConformerEncoder(
        idim=idim,
        attention_dim=adim,
@@ -362,7 +422,6 @@ class FastSpeech2(nn.Layer):
# NOTE: we use encoder as decoder
# because fastspeech's decoder is the same as encoder
if decoder_type == "transformer":
    self.decoder = TransformerEncoder(
        idim=0,
        attention_dim=adim,
@@ -380,7 +439,6 @@ class FastSpeech2(nn.Layer):
        positionwise_layer_type=positionwise_layer_type,
        positionwise_conv_kernel_size=positionwise_conv_kernel_size, )
elif decoder_type == "conformer":
    self.decoder = ConformerEncoder(
        idim=0,
        attention_dim=adim,
@@ -453,20 +511,29 @@ class FastSpeech2(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
text(Tensor(int64)): Batch of padded token ids (B, Tmax). text(Tensor(int64)):
text_lengths(Tensor(int64)): Batch of lengths of each input (B,). Batch of padded token ids (B, Tmax).
speech(Tensor): Batch of padded target features (B, Lmax, odim). text_lengths(Tensor(int64)):
speech_lengths(Tensor(int64)): Batch of the lengths of each target (B,). Batch of lengths of each input (B,).
durations(Tensor(int64)): Batch of padded durations (B, Tmax). speech(Tensor):
pitch(Tensor): Batch of padded token-averaged pitch (B, Tmax, 1). Batch of padded target features (B, Lmax, odim).
energy(Tensor): Batch of padded token-averaged energy (B, Tmax, 1). speech_lengths(Tensor(int64)):
tone_id(Tensor, optional(int64)): Batch of padded tone ids (B, Tmax). Batch of the lengths of each target (B,).
spk_emb(Tensor, optional): Batch of speaker embeddings (B, spk_embed_dim). durations(Tensor(int64)):
spk_id(Tnesor, optional(int64)): Batch of speaker ids (B,) Batch of padded durations (B, Tmax).
pitch(Tensor):
Batch of padded token-averaged pitch (B, Tmax, 1).
energy(Tensor):
Batch of padded token-averaged energy (B, Tmax, 1).
tone_id(Tensor, optional(int64)):
Batch of padded tone ids (B, Tmax).
spk_emb(Tensor, optional):
Batch of speaker embeddings (B, spk_embed_dim).
spk_id(Tnesor, optional(int64)):
Batch of speaker ids (B,)
Returns: Returns:
""" """
# input of embedding must be int64 # input of embedding must be int64
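For orientation between hunks, here is a minimal, hedged usage sketch of the call documented above. The import path and the toy sizes are assumptions, not part of this commit; every constructor argument not shown falls back to the defaults listed in the docstring.

```python
# Hedged sketch, not part of this commit: exercise FastSpeech2.forward()
# with toy tensors matching the documented shapes.
import paddle
from paddlespeech.t2s.models.fastspeech2 import FastSpeech2  # assumed path

B, Tmax, odim = 2, 10, 80
frames_per_token = 5                      # toy alignment: every token spans 5 frames
Lmax = Tmax * frames_per_token            # durations must sum to the frame count

model = FastSpeech2(idim=100, odim=odim)  # idim: vocabulary size (assumed value)

text = paddle.randint(1, 100, [B, Tmax], dtype='int64')
text_lengths = paddle.full([B], Tmax, dtype='int64')
speech = paddle.randn([B, Lmax, odim])
speech_lengths = paddle.full([B], Lmax, dtype='int64')
durations = paddle.full([B, Tmax], frames_per_token, dtype='int64')
pitch = paddle.randn([B, Tmax, 1])
energy = paddle.randn([B, Tmax, 1])

# The Returns section of this hunk is empty, so the tuple is not unpacked here.
outputs = model(text, text_lengths, speech, speech_lengths,
                durations, pitch, energy)
```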
@@ -662,20 +729,28 @@ class FastSpeech2(nn.Layer):
"""Generate the sequence of features given the sequences of characters. """Generate the sequence of features given the sequences of characters.
Args: Args:
text(Tensor(int64)): Input sequence of characters (T,). text(Tensor(int64)):
durations(Tensor, optional (int64)): Groundtruth of duration (T,). Input sequence of characters (T,).
pitch(Tensor, optional): Groundtruth of token-averaged pitch (T, 1). durations(Tensor, optional (int64)):
energy(Tensor, optional): Groundtruth of token-averaged energy (T, 1). Groundtruth of duration (T,).
alpha(float, optional): Alpha to control the speed. pitch(Tensor, optional):
use_teacher_forcing(bool, optional): Whether to use teacher forcing. Groundtruth of token-averaged pitch (T, 1).
energy(Tensor, optional):
Groundtruth of token-averaged energy (T, 1).
alpha(float, optional):
Alpha to control the speed.
use_teacher_forcing(bool, optional):
Whether to use teacher forcing.
If true, groundtruth of duration, pitch and energy will be used. If true, groundtruth of duration, pitch and energy will be used.
spk_emb(Tensor, optional, optional): peaker embedding vector (spk_embed_dim,). (Default value = None) spk_emb(Tensor, optional, optional):
spk_id(Tensor, optional(int64), optional): spk ids (1,). (Default value = None) peaker embedding vector (spk_embed_dim,). (Default value = None)
tone_id(Tensor, optional(int64), optional): tone ids (T,). (Default value = None) spk_id(Tensor, optional(int64), optional):
spk ids (1,). (Default value = None)
tone_id(Tensor, optional(int64), optional):
tone ids (T,). (Default value = None)
Returns: Returns:
""" """
# input of embedding must be int64 # input of embedding must be int64
x = paddle.cast(text, 'int64') x = paddle.cast(text, 'int64')
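Continuing the hedged sketch above, inference needs only the unbatched token ids; `alpha` rescales the predicted durations to control speed.

```python
# Hedged sketch, not part of this commit: single-utterance synthesis.
# `model` is the FastSpeech2 instance from the previous sketch.
import paddle

T = 10
text = paddle.randint(1, 100, [T], dtype='int64')   # (T,), not batched

# alpha scales the predicted durations; values > 1 presumably lengthen
# (slow down) the output, values < 1 shorten it.
outs = model.inference(text, alpha=1.2)
```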
@@ -724,8 +799,10 @@ class FastSpeech2(nn.Layer):
"""Integrate speaker embedding with hidden states. """Integrate speaker embedding with hidden states.
Args: Args:
hs(Tensor): Batch of hidden state sequences (B, Tmax, adim). hs(Tensor):
spk_emb(Tensor): Batch of speaker embeddings (B, spk_embed_dim). Batch of hidden state sequences (B, Tmax, adim).
spk_emb(Tensor):
Batch of speaker embeddings (B, spk_embed_dim).
Returns: Returns:
@@ -749,8 +826,10 @@ class FastSpeech2(nn.Layer):
"""Integrate speaker embedding with hidden states. """Integrate speaker embedding with hidden states.
Args: Args:
hs(Tensor): Batch of hidden state sequences (B, Tmax, adim). hs(Tensor):
tone_embs(Tensor): Batch of speaker embeddings (B, Tmax, tone_embed_dim). Batch of hidden state sequences (B, Tmax, adim).
tone_embs(Tensor):
Batch of speaker embeddings (B, Tmax, tone_embed_dim).
Returns: Returns:
@@ -773,10 +852,12 @@ class FastSpeech2(nn.Layer):
"""Make masks for self-attention. """Make masks for self-attention.
Args: Args:
ilens(Tensor): Batch of lengths (B,). ilens(Tensor):
Batch of lengths (B,).
Returns: Returns:
Tensor: Mask tensor for self-attention. dtype=paddle.bool Tensor:
Mask tensor for self-attention. dtype=paddle.bool
Examples: Examples:
>>> ilens = [5, 3] >>> ilens = [5, 3]
@@ -858,19 +939,32 @@ class StyleFastSpeech2Inference(FastSpeech2Inference):
""" """
Args: Args:
text(Tensor(int64)): Input sequence of characters (T,). text(Tensor(int64)):
durations(paddle.Tensor/np.ndarray, optional (int64)): Groundtruth of duration (T,), this will overwrite the set of durations_scale and durations_bias Input sequence of characters (T,).
durations(paddle.Tensor/np.ndarray, optional (int64)):
Groundtruth of duration (T,), this will overwrite the set of durations_scale and durations_bias
durations_scale(int/float, optional): durations_scale(int/float, optional):
durations_bias(int/float, optional): durations_bias(int/float, optional):
pitch(paddle.Tensor/np.ndarray, optional): Groundtruth of token-averaged pitch (T, 1), this will overwrite the set of pitch_scale and pitch_bias
pitch_scale(int/float, optional): In denormed HZ domain. pitch(paddle.Tensor/np.ndarray, optional):
pitch_bias(int/float, optional): In denormed HZ domain. Groundtruth of token-averaged pitch (T, 1), this will overwrite the set of pitch_scale and pitch_bias
energy(paddle.Tensor/np.ndarray, optional): Groundtruth of token-averaged energy (T, 1), this will overwrite the set of energy_scale and energy_bias pitch_scale(int/float, optional):
energy_scale(int/float, optional): In denormed domain. In denormed HZ domain.
energy_bias(int/float, optional): In denormed domain. pitch_bias(int/float, optional):
robot: bool: (Default value = False) In denormed HZ domain.
spk_emb: (Default value = None) energy(paddle.Tensor/np.ndarray, optional):
spk_id: (Default value = None) Groundtruth of token-averaged energy (T, 1), this will overwrite the set of energy_scale and energy_bias
energy_scale(int/float, optional):
In denormed domain.
energy_bias(int/float, optional):
In denormed domain.
robot(bool) (Default value = False):
spk_emb(Default value = None):
spk_id(Default value = None):
Returns: Returns:
Tensor: logmel Tensor: logmel
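A hedged sketch of how these knobs compose. `style_model` and `text` are assumed to exist (a trained StyleFastSpeech2Inference wrapper and a (T,) int64 token tensor); their construction lies outside this hunk.

```python
# Hedged sketch, not part of this commit.
# Slow the utterance down by 20% and shift pitch up by 40 Hz; scales and
# biases apply in the denormed domain, per the docstring above.
logmel = style_model(
    text,
    durations_scale=1.2,
    pitch_bias=40,
)

# robot is documented above only by its default; presumably it yields
# robot-style (flattened-pitch) speech when enabled.
logmel_robot = style_model(text, robot=True)
```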
@@ -949,8 +1043,10 @@ class FastSpeech2Loss(nn.Layer):
use_weighted_masking: bool=False):
    """Initialize feed-forward Transformer loss module.

    Args:
        use_masking (bool):
            Whether to apply masking for the padded part in loss calculation.
        use_weighted_masking (bool):
            Whether to use weighted masking in loss calculation.
    """
    assert check_argument_types()
    super().__init__()
@@ -982,17 +1078,28 @@ class FastSpeech2Loss(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
after_outs(Tensor): Batch of outputs after postnets (B, Lmax, odim). after_outs(Tensor):
before_outs(Tensor): Batch of outputs before postnets (B, Lmax, odim). Batch of outputs after postnets (B, Lmax, odim).
d_outs(Tensor): Batch of outputs of duration predictor (B, Tmax). before_outs(Tensor):
p_outs(Tensor): Batch of outputs of pitch predictor (B, Tmax, 1). Batch of outputs before postnets (B, Lmax, odim).
e_outs(Tensor): Batch of outputs of energy predictor (B, Tmax, 1). d_outs(Tensor):
ys(Tensor): Batch of target features (B, Lmax, odim). Batch of outputs of duration predictor (B, Tmax).
ds(Tensor): Batch of durations (B, Tmax). p_outs(Tensor):
ps(Tensor): Batch of target token-averaged pitch (B, Tmax, 1). Batch of outputs of pitch predictor (B, Tmax, 1).
es(Tensor): Batch of target token-averaged energy (B, Tmax, 1). e_outs(Tensor):
ilens(Tensor): Batch of the lengths of each input (B,). Batch of outputs of energy predictor (B, Tmax, 1).
olens(Tensor): Batch of the lengths of each target (B,). ys(Tensor):
Batch of target features (B, Lmax, odim).
ds(Tensor):
Batch of durations (B, Tmax).
ps(Tensor):
Batch of target token-averaged pitch (B, Tmax, 1).
es(Tensor):
Batch of target token-averaged energy (B, Tmax, 1).
ilens(Tensor):
Batch of the lengths of each input (B,).
olens(Tensor):
Batch of the lengths of each target (B,).
Returns: Returns:
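A hedged sketch of driving the loss module. The four-way return is an assumption based on the predictor outputs documented above, not something this hunk shows; the right-hand-side tensors are assumed to come from FastSpeech2.forward() and the dataloader.

```python
# Hedged sketch, not part of this commit.
from paddlespeech.t2s.models.fastspeech2 import FastSpeech2Loss  # assumed path

criterion = FastSpeech2Loss(use_masking=True, use_weighted_masking=False)
l1_loss, duration_loss, pitch_loss, energy_loss = criterion(
    after_outs, before_outs, d_outs, p_outs, e_outs,
    ys, ds, ps, es, ilens, olens)
loss = l1_loss + duration_loss + pitch_loss + energy_loss
```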
......
@@ -50,20 +50,34 @@ class HiFiGANGenerator(nn.Layer):
init_type: str="xavier_uniform", ): init_type: str="xavier_uniform", ):
"""Initialize HiFiGANGenerator module. """Initialize HiFiGANGenerator module.
Args: Args:
in_channels (int): Number of input channels. in_channels (int):
out_channels (int): Number of output channels. Number of input channels.
channels (int): Number of hidden representation channels. out_channels (int):
global_channels (int): Number of global conditioning channels. Number of output channels.
kernel_size (int): Kernel size of initial and final conv layer. channels (int):
upsample_scales (list): List of upsampling scales. Number of hidden representation channels.
upsample_kernel_sizes (list): List of kernel sizes for upsampling layers. global_channels (int):
resblock_kernel_sizes (list): List of kernel sizes for residual blocks. Number of global conditioning channels.
resblock_dilations (list): List of dilation list for residual blocks. kernel_size (int):
use_additional_convs (bool): Whether to use additional conv layers in residual blocks. Kernel size of initial and final conv layer.
bias (bool): Whether to add bias parameter in convolution layers. upsample_scales (list):
nonlinear_activation (str): Activation function module name. List of upsampling scales.
nonlinear_activation_params (dict): Hyperparameters for activation function. upsample_kernel_sizes (list):
use_weight_norm (bool): Whether to use weight norm. List of kernel sizes for upsampling layers.
resblock_kernel_sizes (list):
List of kernel sizes for residual blocks.
resblock_dilations (list):
List of dilation list for residual blocks.
use_additional_convs (bool):
Whether to use additional conv layers in residual blocks.
bias (bool):
Whether to add bias parameter in convolution layers.
nonlinear_activation (str):
Activation function module name.
nonlinear_activation_params (dict):
Hyperparameters for activation function.
use_weight_norm (bool):
Whether to use weight norm.
If set to true, it will be applied to all of the conv layers. If set to true, it will be applied to all of the conv layers.
""" """
super().__init__() super().__init__()
@@ -199,9 +213,10 @@ class HiFiGANGenerator(nn.Layer):
def inference(self, c, g: Optional[paddle.Tensor]=None):
    """Perform inference.

    Args:
        c (Tensor):
            Input tensor (T, in_channels).
        g (Optional[Tensor]):
            Global conditioning tensor (global_channels, 1).

    Returns:
        Tensor:
            Output tensor (T * prod(upsample_scales), out_channels).
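A hedged single-utterance vocoder sketch against the signature above; the import path and the 80-band default are assumptions, not shown in this hunk.

```python
# Hedged sketch, not part of this commit.
import paddle
from paddlespeech.t2s.models.hifigan import HiFiGANGenerator  # assumed path

generator = HiFiGANGenerator()            # defaults; mel input assumed 80-band
mel = paddle.randn([250, 80])             # (T, in_channels)
wav = generator.inference(mel)            # (T * prod(upsample_scales), out_channels)
```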
@@ -233,20 +248,33 @@ class HiFiGANPeriodDiscriminator(nn.Layer):
"""Initialize HiFiGANPeriodDiscriminator module. """Initialize HiFiGANPeriodDiscriminator module.
Args: Args:
in_channels (int): Number of input channels. in_channels (int):
out_channels (int): Number of output channels. Number of input channels.
period (int): Period. out_channels (int):
kernel_sizes (list): Kernel sizes of initial conv layers and the final conv layer. Number of output channels.
channels (int): Number of initial channels. period (int):
downsample_scales (list): List of downsampling scales. Period.
max_downsample_channels (int): Number of maximum downsampling channels. kernel_sizes (list):
use_additional_convs (bool): Whether to use additional conv layers in residual blocks. Kernel sizes of initial conv layers and the final conv layer.
bias (bool): Whether to add bias parameter in convolution layers. channels (int):
nonlinear_activation (str): Activation function module name. Number of initial channels.
nonlinear_activation_params (dict): Hyperparameters for activation function. downsample_scales (list):
use_weight_norm (bool): Whether to use weight norm. List of downsampling scales.
max_downsample_channels (int):
Number of maximum downsampling channels.
use_additional_convs (bool):
Whether to use additional conv layers in residual blocks.
bias (bool):
Whether to add bias parameter in convolution layers.
nonlinear_activation (str):
Activation function module name.
nonlinear_activation_params (dict):
Hyperparameters for activation function.
use_weight_norm (bool):
Whether to use weight norm.
If set to true, it will be applied to all of the conv layers. If set to true, it will be applied to all of the conv layers.
use_spectral_norm (bool): Whether to use spectral norm. use_spectral_norm (bool):
Whether to use spectral norm.
If set to true, it will be applied to all of the conv layers. If set to true, it will be applied to all of the conv layers.
""" """
super().__init__() super().__init__()
@@ -298,7 +326,8 @@ class HiFiGANPeriodDiscriminator(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
c (Tensor): Input tensor (B, in_channels, T). c (Tensor):
Input tensor (B, in_channels, T).
Returns: Returns:
list: List of each layer's tensors. list: List of each layer's tensors.
""" """
@@ -367,8 +396,10 @@ class HiFiGANMultiPeriodDiscriminator(nn.Layer):
"""Initialize HiFiGANMultiPeriodDiscriminator module. """Initialize HiFiGANMultiPeriodDiscriminator module.
Args: Args:
periods (list): List of periods. periods (list):
discriminator_params (dict): Parameters for hifi-gan period discriminator module. List of periods.
discriminator_params (dict):
Parameters for hifi-gan period discriminator module.
The period parameter will be overwritten. The period parameter will be overwritten.
""" """
super().__init__() super().__init__()
@@ -385,7 +416,8 @@ class HiFiGANMultiPeriodDiscriminator(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
x (Tensor): Input noise signal (B, 1, T). x (Tensor):
Input noise signal (B, 1, T).
Returns: Returns:
List: List of list of each discriminator outputs, which consists of each layer output tensors. List: List of list of each discriminator outputs, which consists of each layer output tensors.
""" """
@@ -417,16 +449,25 @@ class HiFiGANScaleDiscriminator(nn.Layer):
"""Initilize HiFiGAN scale discriminator module. """Initilize HiFiGAN scale discriminator module.
Args: Args:
in_channels (int): Number of input channels. in_channels (int):
out_channels (int): Number of output channels. Number of input channels.
kernel_sizes (list): List of four kernel sizes. The first will be used for the first conv layer, out_channels (int):
Number of output channels.
kernel_sizes (list):
List of four kernel sizes. The first will be used for the first conv layer,
and the second is for downsampling part, and the remaining two are for output layers. and the second is for downsampling part, and the remaining two are for output layers.
channels (int): Initial number of channels for conv layer. channels (int):
max_downsample_channels (int): Maximum number of channels for downsampling layers. Initial number of channels for conv layer.
bias (bool): Whether to add bias parameter in convolution layers. max_downsample_channels (int):
downsample_scales (list): List of downsampling scales. Maximum number of channels for downsampling layers.
nonlinear_activation (str): Activation function module name. bias (bool):
nonlinear_activation_params (dict): Hyperparameters for activation function. Whether to add bias parameter in convolution layers.
downsample_scales (list):
List of downsampling scales.
nonlinear_activation (str):
Activation function module name.
nonlinear_activation_params (dict):
Hyperparameters for activation function.
use_weight_norm (bool): Whether to use weight norm. use_weight_norm (bool): Whether to use weight norm.
If set to true, it will be applied to all of the conv layers. If set to true, it will be applied to all of the conv layers.
use_spectral_norm (bool): Whether to use spectral norm. use_spectral_norm (bool): Whether to use spectral norm.
@@ -614,7 +655,8 @@ class HiFiGANMultiScaleDiscriminator(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
x (Tensor): Input noise signal (B, 1, T). x (Tensor):
Input noise signal (B, 1, T).
Returns: Returns:
List: List of list of each discriminator outputs, which consists of each layer output tensors. List: List of list of each discriminator outputs, which consists of each layer output tensors.
""" """
@@ -675,14 +717,21 @@ class HiFiGANMultiScaleMultiPeriodDiscriminator(nn.Layer):
"""Initilize HiFiGAN multi-scale + multi-period discriminator module. """Initilize HiFiGAN multi-scale + multi-period discriminator module.
Args: Args:
scales (int): Number of multi-scales. scales (int):
scale_downsample_pooling (str): Pooling module name for downsampling of the inputs. Number of multi-scales.
scale_downsample_pooling_params (dict): Parameters for the above pooling module. scale_downsample_pooling (str):
scale_discriminator_params (dict): Parameters for hifi-gan scale discriminator module. Pooling module name for downsampling of the inputs.
follow_official_norm (bool): Whether to follow the norm setting of the official implementaion. scale_downsample_pooling_params (dict):
Parameters for the above pooling module.
scale_discriminator_params (dict):
Parameters for hifi-gan scale discriminator module.
follow_official_norm (bool):
Whether to follow the norm setting of the official implementaion.
The first discriminator uses spectral norm and the other discriminators use weight norm. The first discriminator uses spectral norm and the other discriminators use weight norm.
periods (list): List of periods. periods (list):
period_discriminator_params (dict): Parameters for hifi-gan period discriminator module. List of periods.
period_discriminator_params (dict):
Parameters for hifi-gan period discriminator module.
The period parameter will be overwritten. The period parameter will be overwritten.
""" """
super().__init__() super().__init__()
@@ -704,7 +753,8 @@ class HiFiGANMultiScaleMultiPeriodDiscriminator(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
x (Tensor): Input noise signal (B, 1, T). x (Tensor):
Input noise signal (B, 1, T).
Returns: Returns:
List: List:
List of list of each discriminator outputs, List of list of each discriminator outputs,
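To make the nesting of the returned lists concrete, a hedged sketch; the class name, import path, and default discriminator counts are assumed from the docstrings above.

```python
# Hedged sketch, not part of this commit.
import paddle
from paddlespeech.t2s.models.hifigan import (  # assumed path
    HiFiGANMultiScaleMultiPeriodDiscriminator, )

disc = HiFiGANMultiScaleMultiPeriodDiscriminator()
wav = paddle.randn([4, 1, 8192])          # (B, 1, T) waveform batch

# One entry per sub-discriminator (scales + periods); each entry is the
# list of that discriminator's per-layer feature maps.
outs = disc(wav)
print(len(outs), [len(o) for o in outs])
```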
......
@@ -53,24 +53,38 @@ class MelGANGenerator(nn.Layer):
"""Initialize MelGANGenerator module. """Initialize MelGANGenerator module.
Args: Args:
in_channels (int): Number of input channels. in_channels (int):
out_channels (int): Number of output channels, Number of input channels.
out_channels (int):
Number of output channels,
the number of sub-band is out_channels in multi-band melgan. the number of sub-band is out_channels in multi-band melgan.
kernel_size (int): Kernel size of initial and final conv layer. kernel_size (int):
channels (int): Initial number of channels for conv layer. Kernel size of initial and final conv layer.
bias (bool): Whether to add bias parameter in convolution layers. channels (int):
upsample_scales (List[int]): List of upsampling scales. Initial number of channels for conv layer.
stack_kernel_size (int): Kernel size of dilated conv layers in residual stack. bias (bool):
stacks (int): Number of stacks in a single residual stack. Whether to add bias parameter in convolution layers.
nonlinear_activation (Optional[str], optional): Non linear activation in upsample network, by default None upsample_scales (List[int]):
nonlinear_activation_params (Dict[str, Any], optional): Parameters passed to the linear activation in the upsample network, List of upsampling scales.
by default {} stack_kernel_size (int):
pad (str): Padding function module name before dilated convolution layer. Kernel size of dilated conv layers in residual stack.
pad_params (dict): Hyperparameters for padding function. stacks (int):
use_final_nonlinear_activation (nn.Layer): Activation function for the final layer. Number of stacks in a single residual stack.
use_weight_norm (bool): Whether to use weight norm. nonlinear_activation (Optional[str], optional):
Non linear activation in upsample network, by default None
nonlinear_activation_params (Dict[str, Any], optional):
Parameters passed to the linear activation in the upsample network, by default {}
pad (str):
Padding function module name before dilated convolution layer.
pad_params (dict):
Hyperparameters for padding function.
use_final_nonlinear_activation (nn.Layer):
Activation function for the final layer.
use_weight_norm (bool):
Whether to use weight norm.
If set to true, it will be applied to all of the conv layers. If set to true, it will be applied to all of the conv layers.
use_causal_conv (bool): Whether to use causal convolution. use_causal_conv (bool):
Whether to use causal convolution.
""" """
super().__init__() super().__init__()
@@ -194,7 +208,8 @@ class MelGANGenerator(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
c (Tensor): Input tensor (B, in_channels, T). c (Tensor):
Input tensor (B, in_channels, T).
Returns: Returns:
Tensor: Output tensor (B, out_channels, T ** prod(upsample_scales)). Tensor: Output tensor (B, out_channels, T ** prod(upsample_scales)).
""" """
@@ -244,7 +259,8 @@ class MelGANGenerator(nn.Layer):
"""Perform inference. """Perform inference.
Args: Args:
c (Union[Tensor, ndarray]): Input tensor (T, in_channels). c (Union[Tensor, ndarray]):
Input tensor (T, in_channels).
Returns: Returns:
Tensor: Output tensor (out_channels*T ** prod(upsample_scales), 1). Tensor: Output tensor (out_channels*T ** prod(upsample_scales), 1).
""" """
@@ -279,20 +295,30 @@ class MelGANDiscriminator(nn.Layer):
"""Initilize MelGAN discriminator module. """Initilize MelGAN discriminator module.
Args: Args:
in_channels (int): Number of input channels. in_channels (int):
out_channels (int): Number of output channels. Number of input channels.
out_channels (int):
Number of output channels.
kernel_sizes (List[int]): List of two kernel sizes. The prod will be used for the first conv layer, kernel_sizes (List[int]): List of two kernel sizes. The prod will be used for the first conv layer,
and the first and the second kernel sizes will be used for the last two layers. and the first and the second kernel sizes will be used for the last two layers.
For example if kernel_sizes = [5, 3], the first layer kernel size will be 5 * 3 = 15, For example if kernel_sizes = [5, 3], the first layer kernel size will be 5 * 3 = 15,
the last two layers' kernel size will be 5 and 3, respectively. the last two layers' kernel size will be 5 and 3, respectively.
channels (int): Initial number of channels for conv layer. channels (int):
max_downsample_channels (int): Maximum number of channels for downsampling layers. Initial number of channels for conv layer.
bias (bool): Whether to add bias parameter in convolution layers. max_downsample_channels (int):
downsample_scales (List[int]): List of downsampling scales. Maximum number of channels for downsampling layers.
nonlinear_activation (str): Activation function module name. bias (bool):
nonlinear_activation_params (dict): Hyperparameters for activation function. Whether to add bias parameter in convolution layers.
pad (str): Padding function module name before dilated convolution layer. downsample_scales (List[int]):
pad_params (dict): Hyperparameters for padding function. List of downsampling scales.
nonlinear_activation (str):
Activation function module name.
nonlinear_activation_params (dict):
Hyperparameters for activation function.
pad (str):
Padding function module name before dilated convolution layer.
pad_params (dict):
Hyperparameters for padding function.
""" """
super().__init__() super().__init__()
@@ -364,7 +390,8 @@ class MelGANDiscriminator(nn.Layer):
def forward(self, x):
    """Calculate forward propagation.

    Args:
        x (Tensor):
            Input noise signal (B, 1, T).

    Returns:
        List: List of output tensors of each layer (for feat_match_loss).
    """
@@ -406,22 +433,37 @@ class MelGANMultiScaleDiscriminator(nn.Layer):
"""Initilize MelGAN multi-scale discriminator module. """Initilize MelGAN multi-scale discriminator module.
Args: Args:
in_channels (int): Number of input channels. in_channels (int):
out_channels (int): Number of output channels. Number of input channels.
scales (int): Number of multi-scales. out_channels (int):
downsample_pooling (str): Pooling module name for downsampling of the inputs. Number of output channels.
downsample_pooling_params (dict): Parameters for the above pooling module. scales (int):
kernel_sizes (List[int]): List of two kernel sizes. The sum will be used for the first conv layer, Number of multi-scales.
downsample_pooling (str):
Pooling module name for downsampling of the inputs.
downsample_pooling_params (dict):
Parameters for the above pooling module.
kernel_sizes (List[int]):
List of two kernel sizes. The sum will be used for the first conv layer,
and the first and the second kernel sizes will be used for the last two layers. and the first and the second kernel sizes will be used for the last two layers.
channels (int): Initial number of channels for conv layer. channels (int):
max_downsample_channels (int): Maximum number of channels for downsampling layers. Initial number of channels for conv layer.
bias (bool): Whether to add bias parameter in convolution layers. max_downsample_channels (int):
downsample_scales (List[int]): List of downsampling scales. Maximum number of channels for downsampling layers.
nonlinear_activation (str): Activation function module name. bias (bool):
nonlinear_activation_params (dict): Hyperparameters for activation function. Whether to add bias parameter in convolution layers.
pad (str): Padding function module name before dilated convolution layer. downsample_scales (List[int]):
pad_params (dict): Hyperparameters for padding function. List of downsampling scales.
use_causal_conv (bool): Whether to use causal convolution. nonlinear_activation (str):
Activation function module name.
nonlinear_activation_params (dict):
Hyperparameters for activation function.
pad (str):
Padding function module name before dilated convolution layer.
pad_params (dict):
Hyperparameters for padding function.
use_causal_conv (bool):
Whether to use causal convolution.
""" """
super().__init__() super().__init__()
@@ -464,7 +506,8 @@ class MelGANMultiScaleDiscriminator(nn.Layer):
def forward(self, x):
    """Calculate forward propagation.

    Args:
        x (Tensor):
            Input noise signal (B, 1, T).

    Returns:
        List: List of lists of each discriminator's outputs, consisting of each layer's output tensors.
    """
......
@@ -54,20 +54,34 @@ class StyleMelGANGenerator(nn.Layer):
"""Initilize Style MelGAN generator. """Initilize Style MelGAN generator.
Args: Args:
in_channels (int): Number of input noise channels. in_channels (int):
aux_channels (int): Number of auxiliary input channels. Number of input noise channels.
channels (int): Number of channels for conv layer. aux_channels (int):
out_channels (int): Number of output channels. Number of auxiliary input channels.
kernel_size (int): Kernel size of conv layers. channels (int):
dilation (int): Dilation factor for conv layers. Number of channels for conv layer.
bias (bool): Whether to add bias parameter in convolution layers. out_channels (int):
noise_upsample_scales (list): List of noise upsampling scales. Number of output channels.
noise_upsample_activation (str): Activation function module name for noise upsampling. kernel_size (int):
noise_upsample_activation_params (dict): Hyperparameters for the above activation function. Kernel size of conv layers.
upsample_scales (list): List of upsampling scales. dilation (int):
upsample_mode (str): Upsampling mode in TADE layer. Dilation factor for conv layers.
gated_function (str): Gated function in TADEResBlock ("softmax" or "sigmoid"). bias (bool):
use_weight_norm (bool): Whether to use weight norm. Whether to add bias parameter in convolution layers.
noise_upsample_scales (list):
List of noise upsampling scales.
noise_upsample_activation (str):
Activation function module name for noise upsampling.
noise_upsample_activation_params (dict):
Hyperparameters for the above activation function.
upsample_scales (list):
List of upsampling scales.
upsample_mode (str):
Upsampling mode in TADE layer.
gated_function (str):
Gated function in TADEResBlock ("softmax" or "sigmoid").
use_weight_norm (bool):
Whether to use weight norm.
If set to true, it will be applied to all of the conv layers. If set to true, it will be applied to all of the conv layers.
""" """
super().__init__() super().__init__()
@@ -194,7 +208,8 @@ class StyleMelGANGenerator(nn.Layer):
def inference(self, c):
    """Perform inference.

    Args:
        c (Tensor):
            Input tensor (T, in_channels).

    Returns:
        Tensor: Output tensor (T * prod(upsample_scales), out_channels).
    """
@@ -258,11 +273,16 @@ class StyleMelGANDiscriminator(nn.Layer):
"""Initilize Style MelGAN discriminator. """Initilize Style MelGAN discriminator.
Args: Args:
repeats (int): Number of repititons to apply RWD. repeats (int):
window_sizes (list): List of random window sizes. Number of repititons to apply RWD.
pqmf_params (list): List of list of Parameters for PQMF modules window_sizes (list):
discriminator_params (dict): Parameters for base discriminator module. List of random window sizes.
use_weight_nom (bool): Whether to apply weight normalization. pqmf_params (list):
List of list of Parameters for PQMF modules
discriminator_params (dict):
Parameters for base discriminator module.
use_weight_nom (bool):
Whether to apply weight normalization.
""" """
super().__init__() super().__init__()
@@ -299,7 +319,8 @@ class StyleMelGANDiscriminator(nn.Layer):
def forward(self, x):
    """Calculate forward propagation.

    Args:
        x (Tensor):
            Input tensor (B, 1, T).

    Returns:
        List: List of discriminator outputs; the number of items in the list
            equals repeats * #discriminators.
......
@@ -32,29 +32,45 @@ class PWGGenerator(nn.Layer):
"""Wave Generator for Parallel WaveGAN """Wave Generator for Parallel WaveGAN
Args: Args:
in_channels (int, optional): Number of channels of the input waveform, by default 1 in_channels (int, optional):
out_channels (int, optional): Number of channels of the output waveform, by default 1 Number of channels of the input waveform, by default 1
kernel_size (int, optional): Kernel size of the residual blocks inside, by default 3 out_channels (int, optional):
layers (int, optional): Number of residual blocks inside, by default 30 Number of channels of the output waveform, by default 1
stacks (int, optional): The number of groups to split the residual blocks into, by default 3 kernel_size (int, optional):
Kernel size of the residual blocks inside, by default 3
layers (int, optional):
Number of residual blocks inside, by default 30
stacks (int, optional):
The number of groups to split the residual blocks into, by default 3
Within each group, the dilation of the residual block grows exponentially. Within each group, the dilation of the residual block grows exponentially.
residual_channels (int, optional): Residual channel of the residual blocks, by default 64 residual_channels (int, optional):
gate_channels (int, optional): Gate channel of the residual blocks, by default 128 Residual channel of the residual blocks, by default 64
skip_channels (int, optional): Skip channel of the residual blocks, by default 64 gate_channels (int, optional):
aux_channels (int, optional): Auxiliary channel of the residual blocks, by default 80 Gate channel of the residual blocks, by default 128
aux_context_window (int, optional): The context window size of the first convolution applied to the skip_channels (int, optional):
auxiliary input, by default 2 Skip channel of the residual blocks, by default 64
dropout (float, optional): Dropout of the residual blocks, by default 0. aux_channels (int, optional):
bias (bool, optional): Whether to use bias in residual blocks, by default True Auxiliary channel of the residual blocks, by default 80
use_weight_norm (bool, optional): Whether to use weight norm in all convolutions, by default True aux_context_window (int, optional):
use_causal_conv (bool, optional): Whether to use causal padding in the upsample network and residual The context window size of the first convolution applied to the auxiliary input, by default 2
blocks, by default False dropout (float, optional):
upsample_scales (List[int], optional): Upsample scales of the upsample network, by default [4, 4, 4, 4] Dropout of the residual blocks, by default 0.
nonlinear_activation (Optional[str], optional): Non linear activation in upsample network, by default None bias (bool, optional):
nonlinear_activation_params (Dict[str, Any], optional): Parameters passed to the linear activation in the upsample network, Whether to use bias in residual blocks, by default True
by default {} use_weight_norm (bool, optional):
interpolate_mode (str, optional): Interpolation mode of the upsample network, by default "nearest" Whether to use weight norm in all convolutions, by default True
freq_axis_kernel_size (int, optional): Kernel size along the frequency axis of the upsample network, by default 1 use_causal_conv (bool, optional):
Whether to use causal padding in the upsample network and residual blocks, by default False
upsample_scales (List[int], optional):
Upsample scales of the upsample network, by default [4, 4, 4, 4]
nonlinear_activation (Optional[str], optional):
Non linear activation in upsample network, by default None
nonlinear_activation_params (Dict[str, Any], optional):
Parameters passed to the linear activation in the upsample network, by default {}
interpolate_mode (str, optional):
Interpolation mode of the upsample network, by default "nearest"
freq_axis_kernel_size (int, optional):
Kernel size along the frequency axis of the upsample network, by default 1
""" """
def __init__(
@@ -147,9 +163,11 @@ class PWGGenerator(nn.Layer):
"""Generate waveform. """Generate waveform.
Args: Args:
x(Tensor): Shape (N, C_in, T), The input waveform. x(Tensor):
c(Tensor): Shape (N, C_aux, T'). The auxiliary input (e.g. spectrogram). It Shape (N, C_in, T), The input waveform.
is upsampled to match the time resolution of the input. c(Tensor):
Shape (N, C_aux, T'). The auxiliary input (e.g. spectrogram).
It is upsampled to match the time resolution of the input.
Returns: Returns:
Tensor: Shape (N, C_out, T), the generated waveform. Tensor: Shape (N, C_out, T), the generated waveform.
@@ -195,8 +213,10 @@ class PWGGenerator(nn.Layer):
"""Waveform generation. This function is used for single instance inference. """Waveform generation. This function is used for single instance inference.
Args: Args:
c(Tensor, optional, optional): Shape (T', C_aux), the auxiliary input, by default None c(Tensor, optional, optional):
x(Tensor, optional): Shape (T, C_in), the noise waveform, by default None Shape (T', C_aux), the auxiliary input, by default None
x(Tensor, optional):
Shape (T, C_in), the noise waveform, by default None
Returns: Returns:
Tensor: Shape (T, C_out), the generated waveform Tensor: Shape (T, C_out), the generated waveform
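A hedged sketch for the Parallel WaveGAN generator. Per the signature above, omitting x presumably lets the module draw the input noise itself; the import path is an assumption.

```python
# Hedged sketch, not part of this commit.
import paddle
from paddlespeech.t2s.models.parallel_wavegan import PWGGenerator  # assumed path

vocoder = PWGGenerator()                  # defaults documented above
mel = paddle.randn([250, 80])             # (T', C_aux)
wav = vocoder.inference(c=mel)            # (T, C_out)
```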
@@ -214,20 +234,28 @@ class PWGDiscriminator(nn.Layer):
"""A convolutional discriminator for audio. """A convolutional discriminator for audio.
Args: Args:
in_channels (int, optional): Number of channels of the input audio, by default 1 in_channels (int, optional):
out_channels (int, optional): Output feature size, by default 1 Number of channels of the input audio, by default 1
kernel_size (int, optional): Kernel size of convolutional sublayers, by default 3 out_channels (int, optional):
layers (int, optional): Number of layers, by default 10 Output feature size, by default 1
conv_channels (int, optional): Feature size of the convolutional sublayers, by default 64 kernel_size (int, optional):
dilation_factor (int, optional): The factor with which dilation of each convolutional sublayers grows Kernel size of convolutional sublayers, by default 3
layers (int, optional):
Number of layers, by default 10
conv_channels (int, optional):
Feature size of the convolutional sublayers, by default 64
dilation_factor (int, optional):
The factor with which dilation of each convolutional sublayers grows
exponentially if it is greater than 1, else the dilation of each convolutional sublayers grows linearly, exponentially if it is greater than 1, else the dilation of each convolutional sublayers grows linearly,
by default 1 by default 1
nonlinear_activation (str, optional): The activation after each convolutional sublayer, by default "leakyrelu" nonlinear_activation (str, optional):
nonlinear_activation_params (Dict[str, Any], optional): The parameters passed to the activation's initializer, by default The activation after each convolutional sublayer, by default "leakyrelu"
{"negative_slope": 0.2} nonlinear_activation_params (Dict[str, Any], optional):
bias (bool, optional): Whether to use bias in convolutional sublayers, by default True The parameters passed to the activation's initializer, by default {"negative_slope": 0.2}
use_weight_norm (bool, optional): Whether to use weight normalization at all convolutional sublayers, bias (bool, optional):
by default True Whether to use bias in convolutional sublayers, by default True
use_weight_norm (bool, optional):
Whether to use weight normalization at all convolutional sublayers, by default True
""" """
def __init__(
@@ -290,7 +318,8 @@ class PWGDiscriminator(nn.Layer):
""" """
Args: Args:
x (Tensor): Shape (N, in_channels, num_samples), the input audio. x (Tensor):
Shape (N, in_channels, num_samples), the input audio.
Returns: Returns:
Tensor: Shape (N, out_channels, num_samples), the predicted logits. Tensor: Shape (N, out_channels, num_samples), the predicted logits.
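The dilation_factor rule documented above is easiest to check numerically; this is an illustration of the described growth, not the library's exact per-layer indexing.

```python
# Illustration only: per-sublayer dilation under the two regimes above.
def dilation(i: int, dilation_factor: int) -> int:
    # linear growth when the factor is 1, exponential otherwise
    return i if dilation_factor == 1 else dilation_factor ** i

print([dilation(i, 1) for i in range(1, 6)])   # [1, 2, 3, 4, 5]
print([dilation(i, 2) for i in range(1, 6)])   # [2, 4, 8, 16, 32]
```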
@@ -318,24 +347,35 @@ class ResidualPWGDiscriminator(nn.Layer):
"""A wavenet-style discriminator for audio. """A wavenet-style discriminator for audio.
Args: Args:
in_channels (int, optional): Number of channels of the input audio, by default 1 in_channels (int, optional):
out_channels (int, optional): Output feature size, by default 1 Number of channels of the input audio, by default 1
kernel_size (int, optional): Kernel size of residual blocks, by default 3 out_channels (int, optional):
layers (int, optional): Number of residual blocks, by default 30 Output feature size, by default 1
stacks (int, optional): Number of groups of residual blocks, within which the dilation kernel_size (int, optional):
Kernel size of residual blocks, by default 3
layers (int, optional):
Number of residual blocks, by default 30
stacks (int, optional):
Number of groups of residual blocks, within which the dilation
of each residual blocks grows exponentially, by default 3 of each residual blocks grows exponentially, by default 3
residual_channels (int, optional): Residual channels of residual blocks, by default 64 residual_channels (int, optional):
gate_channels (int, optional): Gate channels of residual blocks, by default 128 Residual channels of residual blocks, by default 64
skip_channels (int, optional): Skip channels of residual blocks, by default 64 gate_channels (int, optional):
dropout (float, optional): Dropout probability of residual blocks, by default 0. Gate channels of residual blocks, by default 128
bias (bool, optional): Whether to use bias in residual blocks, by default True skip_channels (int, optional):
use_weight_norm (bool, optional): Whether to use weight normalization in all convolutional layers, Skip channels of residual blocks, by default 64
by default True dropout (float, optional):
use_causal_conv (bool, optional): Whether to use causal convolution in residual blocks, by default False Dropout probability of residual blocks, by default 0.
nonlinear_activation (str, optional): Activation after convolutions other than those in residual blocks, bias (bool, optional):
by default "leakyrelu" Whether to use bias in residual blocks, by default True
nonlinear_activation_params (Dict[str, Any], optional): Parameters to pass to the activation, use_weight_norm (bool, optional):
by default {"negative_slope": 0.2} Whether to use weight normalization in all convolutional layers, by default True
use_causal_conv (bool, optional):
Whether to use causal convolution in residual blocks, by default False
nonlinear_activation (str, optional):
Activation after convolutions other than those in residual blocks, by default "leakyrelu"
nonlinear_activation_params (Dict[str, Any], optional):
Parameters to pass to the activation, by default {"negative_slope": 0.2}
""" """
def __init__(
@@ -405,7 +445,8 @@ class ResidualPWGDiscriminator(nn.Layer):
def forward(self, x):
    """
    Args:
        x(Tensor):
            Shape (N, in_channels, num_samples), the input audio.

    Returns:
        Tensor: Shape (N, out_channels, num_samples), the predicted logits.
......
@@ -29,10 +29,14 @@ class ResidualBlock(nn.Layer):
n: int=2):
    """SpeedySpeech residual block module.

    Args:
        channels (int, optional):
            Feature size of the residual output (and also the input).
        kernel_size (int, optional):
            Kernel size of the 1D convolution.
        dilation (int, optional):
            Dilation of the 1D convolution.
        n (int):
            Number of blocks.
    """
    super().__init__()
@@ -57,7 +61,8 @@ class ResidualBlock(nn.Layer):
def forward(self, x: paddle.Tensor):
    """Calculate forward propagation.

    Args:
        x(Tensor):
            Batch of input sequences (B, hidden_size, Tmax).

    Returns:
        Tensor: The residual output (B, hidden_size, Tmax).
    """
@@ -89,8 +94,10 @@ class TextEmbedding(nn.Layer):
def forward(self, text: paddle.Tensor, tone: paddle.Tensor=None):
    """Calculate forward propagation.

    Args:
        text(Tensor(int64)):
            Batch of padded token ids (B, Tmax).
        tones(Tensor, optional(int64)):
            Batch of padded tone ids (B, Tmax).

    Returns:
        Tensor: The embedding output (B, Tmax, embedding_size).
    """
@@ -109,12 +116,18 @@ class TextEmbedding(nn.Layer):
class SpeedySpeechEncoder(nn.Layer):
    """SpeedySpeech encoder module.

    Args:
        vocab_size (int):
            Dimension of the inputs.
        tone_size (Optional[int]):
            Number of tones.
        hidden_size (int):
            Number of encoder hidden units.
        kernel_size (int):
            Kernel size of encoder.
        dilations (List[int]):
            Dilations of encoder.
        spk_num (Optional[int]):
            Number of speakers.
    """
    def __init__(self,
@@ -161,9 +174,12 @@ class SpeedySpeechEncoder(nn.Layer):
spk_id: paddle.Tensor=None):
    """Encode the input sequence.

    Args:
        text(Tensor(int64)):
            Batch of padded token ids (B, Tmax).
        tones(Tensor, optional(int64)):
            Batch of padded tone ids (B, Tmax).
        spk_id(Tensor, optional(int64)):
            Batch of speaker ids (B,).

    Returns:
        Tensor: Output tensor (B, Tmax, hidden_size).
@@ -192,7 +208,8 @@ class DurationPredictor(nn.Layer):
def forward(self, x: paddle.Tensor):
    """Calculate forward propagation.

    Args:
        x(Tensor):
            Batch of input sequences (B, Tmax, hidden_size).

    Returns:
        Tensor: Batch of predicted durations in log domain (B, Tmax).
@@ -212,10 +229,14 @@ class SpeedySpeechDecoder(nn.Layer):
]):
    """SpeedySpeech decoder module.

    Args:
        hidden_size (int):
            Number of decoder hidden units.
        kernel_size (int):
            Kernel size of decoder.
        output_size (int):
            Dimension of the outputs.
        dilations (List[int]):
            Dilations of decoder.
    """
    super().__init__()
    res_blocks = [
...@@ -230,7 +251,8 @@ class SpeedySpeechDecoder(nn.Layer): ...@@ -230,7 +251,8 @@ class SpeedySpeechDecoder(nn.Layer):
def forward(self, x):
"""Decode input sequence.
Args:
    x(Tensor):
        Input tensor (B, time, hidden_size).
Returns:
    Tensor: Output tensor (B, time, output_size).
@@ -261,18 +283,30 @@ class SpeedySpeech(nn.Layer):
positional_dropout_rate: int=0.1):
"""Initialize SpeedySpeech module.
Args:
    vocab_size (int):
        Dimension of the inputs.
    encoder_hidden_size (int):
        Number of encoder hidden units.
    encoder_kernel_size (int):
        Kernel size of encoder.
    encoder_dilations (List[int]):
        Dilations of encoder.
    duration_predictor_hidden_size (int):
        Number of duration predictor hidden units.
    decoder_hidden_size (int):
        Number of decoder hidden units.
    decoder_kernel_size (int):
        Kernel size of decoder.
    decoder_dilations (List[int]):
        Dilations of decoder.
    decoder_output_size (int):
        Dimension of the outputs.
    tone_size (Optional[int]):
        Number of tones.
    spk_num (Optional[int]):
        Number of speakers.
    init_type (str):
        How to initialize transformer parameters.
"""
super().__init__()
@@ -304,14 +338,20 @@ class SpeedySpeech(nn.Layer):
spk_id: paddle.Tensor=None):
"""Calculate forward propagation.
Args:
    text(Tensor(int64)):
        Batch of padded token ids (B, Tmax).
    durations(Tensor(int64)):
        Batch of padded durations (B, Tmax).
    tones(Tensor(int64), optional):
        Batch of padded tone ids (B, Tmax).
    spk_id(Tensor(int64), optional):
        Batch of speaker ids (B,).
Returns:
    Tensor:
        Output tensor (B, T_frames, decoder_output_size).
    Tensor:
        Predicted durations (B, Tmax).
"""
# input of embedding must be int64
text = paddle.cast(text, 'int64')
@@ -336,10 +376,14 @@ class SpeedySpeech(nn.Layer):
spk_id: paddle.Tensor=None):
"""Generate the sequence of features given the sequences of characters.
Args:
    text(Tensor(int64)):
        Input sequence of characters (T,).
    tones(Tensor(int64), optional):
        Batch of padded tone ids (T,).
    durations(Tensor(int64), optional):
        Ground truth of duration (T,).
    spk_id(Tensor(int64), optional):
        Speaker ids (1,). (Default value = None)
Returns:
    Tensor: logmel (T, decoder_output_size).
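The `durations` argument is what turns token-level encodings into frame-level features: each token's encoding is repeated `durations[i]` times. A minimal illustrative sketch of that expansion (hypothetical helper, not code from this commit):

```python
import paddle

def expand_by_durations(encodings, durations):
    # encodings: (Tmax, hidden_size); durations: (Tmax,) int64 frame counts.
    # Row i is repeated durations[i] times -> (T_frames, hidden_size).
    return paddle.repeat_interleave(encodings, durations, axis=0)

h = paddle.randn([5, 128])
d = paddle.to_tensor([2, 3, 1, 4, 2], dtype='int64')
frames = expand_by_durations(h, d)  # (12, 128), since d.sum() == 12
```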
...
@@ -83,38 +83,67 @@ class Tacotron2(nn.Layer):
init_type: str="xavier_uniform", ):
"""Initialize Tacotron2 module.
Args:
    idim (int):
        Dimension of the inputs.
    odim (int):
        Dimension of the outputs.
    embed_dim (int):
        Dimension of the token embedding.
    elayers (int):
        Number of encoder blstm layers.
    eunits (int):
        Number of encoder blstm units.
    econv_layers (int):
        Number of encoder conv layers.
    econv_filts (int):
        Filter size of encoder conv layers.
    econv_chans (int):
        Number of encoder conv filter channels.
    dlayers (int):
        Number of decoder lstm layers.
    dunits (int):
        Number of decoder lstm units.
    prenet_layers (int):
        Number of prenet layers.
    prenet_units (int):
        Number of prenet units.
    postnet_layers (int):
        Number of postnet layers.
    postnet_filts (int):
        Filter size of postnet conv layers.
    postnet_chans (int):
        Number of postnet filter channels.
    output_activation (str):
        Name of activation function for outputs.
    adim (int):
        Dimension of the MLP in attention.
    aconv_chans (int):
        Number of attention conv filter channels.
    aconv_filts (int):
        Filter size of attention conv layers.
    cumulate_att_w (bool):
        Whether to cumulate previous attention weight.
    use_batch_norm (bool):
        Whether to use batch normalization.
    use_concate (bool):
        Whether to concat enc outputs w/ dec lstm outputs.
    reduction_factor (int):
        Reduction factor.
    spk_num (Optional[int]):
        Number of speakers. If set to > 1, assume that the
        sids will be provided as the input and use sid embedding layer.
    lang_num (Optional[int]):
        Number of languages. If set to > 1, assume that the
        lids will be provided as the input and use sid embedding layer.
    spk_embed_dim (Optional[int]):
        Speaker embedding dimension. If set to > 0,
        assume that spk_emb will be provided as the input.
    spk_embed_integration_type (str):
        How to integrate speaker embedding.
    dropout_rate (float):
        Dropout rate.
    zoneout_rate (float):
        Zoneout rate.
"""
assert check_argument_types()
super().__init__()
@@ -230,18 +259,28 @@ class Tacotron2(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
text (Tensor(int64)): Batch of padded character ids (B, T_text). text (Tensor(int64)):
text_lengths (Tensor(int64)): Batch of lengths of each input batch (B,). Batch of padded character ids (B, T_text).
speech (Tensor): Batch of padded target features (B, T_feats, odim). text_lengths (Tensor(int64)):
speech_lengths (Tensor(int64)): Batch of the lengths of each target (B,). Batch of lengths of each input batch (B,).
spk_emb (Optional[Tensor]): Batch of speaker embeddings (B, spk_embed_dim). speech (Tensor):
spk_id (Optional[Tensor]): Batch of speaker IDs (B, 1). Batch of padded target features (B, T_feats, odim).
lang_id (Optional[Tensor]): Batch of language IDs (B, 1). speech_lengths (Tensor(int64)):
Batch of the lengths of each target (B,).
spk_emb (Optional[Tensor]):
Batch of speaker embeddings (B, spk_embed_dim).
spk_id (Optional[Tensor]):
Batch of speaker IDs (B, 1).
lang_id (Optional[Tensor]):
Batch of language IDs (B, 1).
Returns: Returns:
Tensor: Loss scalar value. Tensor:
Dict: Statistics to be monitored. Loss scalar value.
Tensor: Weight value if not joint training else model outputs. Dict:
Statistics to be monitored.
Tensor:
Weight value if not joint training else model outputs.
""" """
text = text[:, :text_lengths.max()] text = text[:, :text_lengths.max()]
...@@ -329,18 +368,30 @@ class Tacotron2(nn.Layer): ...@@ -329,18 +368,30 @@ class Tacotron2(nn.Layer):
"""Generate the sequence of features given the sequences of characters. """Generate the sequence of features given the sequences of characters.
Args: Args:
text (Tensor(int64)): Input sequence of characters (T_text,). text (Tensor(int64)):
speech (Optional[Tensor]): Feature sequence to extract style (N, idim). Input sequence of characters (T_text,).
spk_emb (ptional[Tensor]): Speaker embedding (spk_embed_dim,). speech (Optional[Tensor]):
spk_id (Optional[Tensor]): Speaker ID (1,). Feature sequence to extract style (N, idim).
lang_id (Optional[Tensor]): Language ID (1,). spk_emb (ptional[Tensor]):
threshold (float): Threshold in inference. Speaker embedding (spk_embed_dim,).
minlenratio (float): Minimum length ratio in inference. spk_id (Optional[Tensor]):
maxlenratio (float): Maximum length ratio in inference. Speaker ID (1,).
use_att_constraint (bool): Whether to apply attention constraint. lang_id (Optional[Tensor]):
backward_window (int): Backward window in attention constraint. Language ID (1,).
forward_window (int): Forward window in attention constraint. threshold (float):
use_teacher_forcing (bool): Whether to use teacher forcing. Threshold in inference.
minlenratio (float):
Minimum length ratio in inference.
maxlenratio (float):
Maximum length ratio in inference.
use_att_constraint (bool):
Whether to apply attention constraint.
backward_window (int):
Backward window in attention constraint.
forward_window (int):
Forward window in attention constraint.
use_teacher_forcing (bool):
Whether to use teacher forcing.
Returns: Returns:
Dict[str, Tensor] Dict[str, Tensor]
......
...@@ -49,66 +49,124 @@ class TransformerTTS(nn.Layer): ...@@ -49,66 +49,124 @@ class TransformerTTS(nn.Layer):
https://arxiv.org/pdf/1809.08895.pdf https://arxiv.org/pdf/1809.08895.pdf
Args:
    idim (int):
        Dimension of the inputs.
    odim (int):
        Dimension of the outputs.
    embed_dim (int, optional):
        Dimension of character embedding.
    eprenet_conv_layers (int, optional):
        Number of encoder prenet convolution layers.
    eprenet_conv_chans (int, optional):
        Number of encoder prenet convolution channels.
    eprenet_conv_filts (int, optional):
        Filter size of encoder prenet convolution.
    dprenet_layers (int, optional):
        Number of decoder prenet layers.
    dprenet_units (int, optional):
        Number of decoder prenet hidden units.
    elayers (int, optional):
        Number of encoder layers.
    eunits (int, optional):
        Number of encoder hidden units.
    adim (int, optional):
        Number of attention transformation dimensions.
    aheads (int, optional):
        Number of heads for multi head attention.
    dlayers (int, optional):
        Number of decoder layers.
    dunits (int, optional):
        Number of decoder hidden units.
    postnet_layers (int, optional):
        Number of postnet layers.
    postnet_chans (int, optional):
        Number of postnet channels.
    postnet_filts (int, optional):
        Filter size of postnet.
    use_scaled_pos_enc (bool, optional):
        Whether to use trainable scaled positional encoding.
    use_batch_norm (bool, optional):
        Whether to use batch normalization in encoder prenet.
    encoder_normalize_before (bool, optional):
        Whether to perform layer normalization before encoder block.
    decoder_normalize_before (bool, optional):
        Whether to perform layer normalization before decoder block.
    encoder_concat_after (bool, optional):
        Whether to concatenate attention layer's input and output in encoder.
    decoder_concat_after (bool, optional):
        Whether to concatenate attention layer's input and output in decoder.
    positionwise_layer_type (str, optional):
        Position-wise operation type.
    positionwise_conv_kernel_size (int, optional):
        Kernel size in position wise conv 1d.
    reduction_factor (int, optional):
        Reduction factor.
    spk_embed_dim (int, optional):
        Number of speaker embedding dimensions.
    spk_embed_integration_type (str, optional):
        How to integrate speaker embedding.
    use_gst (str, optional):
        Whether to use global style token.
    gst_tokens (int, optional):
        The number of GST embeddings.
    gst_heads (int, optional):
        The number of heads in GST multihead attention.
    gst_conv_layers (int, optional):
        The number of conv layers in GST.
    gst_conv_chans_list (Sequence[int], optional):
        List of the number of channels of conv layers in GST.
    gst_conv_kernel_size (int, optional):
        Kernel size of conv layers in GST.
    gst_conv_stride (int, optional):
        Stride size of conv layers in GST.
    gst_gru_layers (int, optional):
        The number of GRU layers in GST.
    gst_gru_units (int, optional):
        The number of GRU units in GST.
    transformer_lr (float, optional):
        Initial value of learning rate.
    transformer_warmup_steps (int, optional):
        Optimizer warmup steps.
    transformer_enc_dropout_rate (float, optional):
        Dropout rate in encoder except attention and positional encoding.
    transformer_enc_positional_dropout_rate (float, optional):
        Dropout rate after encoder positional encoding.
    transformer_enc_attn_dropout_rate (float, optional):
        Dropout rate in encoder self-attention module.
    transformer_dec_dropout_rate (float, optional):
        Dropout rate in decoder except attention & positional encoding.
    transformer_dec_positional_dropout_rate (float, optional):
        Dropout rate after decoder positional encoding.
    transformer_dec_attn_dropout_rate (float, optional):
        Dropout rate in decoder self-attention module.
    transformer_enc_dec_attn_dropout_rate (float, optional):
        Dropout rate in encoder-decoder attention module.
    init_type (str, optional):
        How to initialize transformer parameters.
    init_enc_alpha (float, optional):
        Initial value of alpha in scaled pos encoding of the encoder.
    init_dec_alpha (float, optional):
        Initial value of alpha in scaled pos encoding of the decoder.
    eprenet_dropout_rate (float, optional):
        Dropout rate in encoder prenet.
    dprenet_dropout_rate (float, optional):
        Dropout rate in decoder prenet.
    postnet_dropout_rate (float, optional):
        Dropout rate in postnet.
    use_masking (bool, optional):
        Whether to apply masking for padded part in loss calculation.
    use_weighted_masking (bool, optional):
        Whether to apply weighted masking in loss calculation.
    bce_pos_weight (float, optional):
        Positive sample weight in bce calculation (only for use_masking=true).
    loss_type (str, optional):
        How to calculate loss.
    use_guided_attn_loss (bool, optional):
        Whether to use guided attention loss.
    num_heads_applied_guided_attn (int, optional):
        Number of heads in each layer to apply guided attention loss.
    num_layers_applied_guided_attn (int, optional):
        Number of layers to apply guided attention loss.
    modules_applied_guided_attn (Sequence[str], optional):
        List of module names to apply guided attention loss.
"""
def __init__(
...
@@ -33,8 +33,10 @@ def fold(x, n_group):
"""Fold audio or spectrogram's temporal dimension into groups.
Args:
    x(Tensor):
        The input tensor. shape=(*, time_steps)
    n_group(int):
        The size of a group.
Returns:
    Tensor: Folded tensor. shape=(*, time_steps // n_group, group)
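Conceptually ``fold`` is just a reshape of the trailing time axis; a standalone sketch under that assumption (``fold_sketch`` is a hypothetical helper and requires ``time_steps`` to be divisible by ``n_group``):

```python
import paddle

def fold_sketch(x, n_group):
    # Split the last axis (time_steps) into (time_steps // n_group, n_group).
    *leading, time_steps = x.shape
    return paddle.reshape(x, leading + [time_steps // n_group, n_group])

x = paddle.randn([4, 16000])   # (batch, time_steps)
y = fold_sketch(x, n_group=8)  # (4, 2000, 8)
```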
@@ -53,7 +55,8 @@ class UpsampleNet(nn.LayerList):
on mel and time dimension.
Args:
    upscale_factors(List[int], optional):
        Time upsampling factors for each Conv2DTranspose Layer.
        The ``UpsampleNet`` contains ``len(upscale_factor)`` Conv2DTranspose
        Layers. Each upscale_factor is used as the ``stride`` for the
        corresponding Conv2DTranspose. Defaults to [16, 16], this is the default
@@ -94,8 +97,10 @@ class UpsampleNet(nn.LayerList):
"""Forward pass of the ``UpsampleNet`` """Forward pass of the ``UpsampleNet``
Args: Args:
x(Tensor): The input spectrogram. shape=(batch_size, input_channels, time_steps) x(Tensor):
trim_conv_artifact(bool, optional, optional): Trim deconvolution artifact at each layer. Defaults to False. The input spectrogram. shape=(batch_size, input_channels, time_steps)
trim_conv_artifact(bool, optional, optional):
Trim deconvolution artifact at each layer. Defaults to False.
Returns: Returns:
Tensor: The upsampled spectrogram. shape=(batch_size, input_channels, time_steps * upsample_factor) Tensor: The upsampled spectrogram. shape=(batch_size, input_channels, time_steps * upsample_factor)
...@@ -123,10 +128,14 @@ class ResidualBlock(nn.Layer): ...@@ -123,10 +128,14 @@ class ResidualBlock(nn.Layer):
and output.
Args:
    channels (int):
        Feature size of the input.
    cond_channels (int):
        Feature size of the condition.
    kernel_size (Tuple[int]):
        Kernel size of the Convolution2d applied to the input.
    dilations (int):
        Dilations of the Convolution2d applied to the input.
"""
def __init__(self, channels, cond_channels, kernel_size, dilations):
@@ -173,12 +182,16 @@ class ResidualBlock(nn.Layer):
"""Compute output for a whole folded sequence. """Compute output for a whole folded sequence.
Args: Args:
x (Tensor): The input. [shape=(batch_size, channel, height, width)] x (Tensor):
condition (Tensor [shape=(batch_size, condition_channel, height, width)]): The local condition. The input. [shape=(batch_size, channel, height, width)]
condition (Tensor [shape=(batch_size, condition_channel, height, width)]):
The local condition.
Returns: Returns:
res (Tensor): The residual output. [shape=(batch_size, channel, height, width)] res (Tensor):
skip (Tensor): The skip output. [shape=(batch_size, channel, height, width)] The residual output. [shape=(batch_size, channel, height, width)]
skip (Tensor):
The skip output. [shape=(batch_size, channel, height, width)]
""" """
x_in = x x_in = x
x = self.conv(x) x = self.conv(x)
...@@ -216,12 +229,16 @@ class ResidualBlock(nn.Layer): ...@@ -216,12 +229,16 @@ class ResidualBlock(nn.Layer):
"""Compute the output for a row and update the buffer. """Compute the output for a row and update the buffer.
Args: Args:
x_row (Tensor): A row of the input. shape=(batch_size, channel, 1, width) x_row (Tensor):
condition_row (Tensor): A row of the condition. shape=(batch_size, condition_channel, 1, width) A row of the input. shape=(batch_size, channel, 1, width)
condition_row (Tensor):
A row of the condition. shape=(batch_size, condition_channel, 1, width)
Returns: Returns:
res (Tensor): A row of the the residual output. shape=(batch_size, channel, 1, width) res (Tensor):
skip (Tensor): A row of the skip output. shape=(batch_size, channel, 1, width) A row of the the residual output. shape=(batch_size, channel, 1, width)
skip (Tensor):
A row of the skip output. shape=(batch_size, channel, 1, width)
""" """
x_row_in = x_row x_row_in = x_row
...@@ -258,11 +275,16 @@ class ResidualNet(nn.LayerList): ...@@ -258,11 +275,16 @@ class ResidualNet(nn.LayerList):
"""A stack of several ResidualBlocks. It merges condition at each layer. """A stack of several ResidualBlocks. It merges condition at each layer.
Args: Args:
n_layer (int): Number of ResidualBlocks in the ResidualNet. n_layer (int):
residual_channels (int): Feature size of each ResidualBlocks. Number of ResidualBlocks in the ResidualNet.
condition_channels (int): Feature size of the condition. residual_channels (int):
kernel_size (Tuple[int]): Kernel size of each ResidualBlock. Feature size of each ResidualBlocks.
dilations_h (List[int]): Dilation in height dimension of every ResidualBlock. condition_channels (int):
Feature size of the condition.
kernel_size (Tuple[int]):
Kernel size of each ResidualBlock.
dilations_h (List[int]):
Dilation in height dimension of every ResidualBlock.
Raises: Raises:
ValueError: If the length of dilations_h does not equals n_layers. ValueError: If the length of dilations_h does not equals n_layers.
...@@ -288,11 +310,13 @@ class ResidualNet(nn.LayerList): ...@@ -288,11 +310,13 @@ class ResidualNet(nn.LayerList):
"""Comput the output of given the input and the condition. """Comput the output of given the input and the condition.
Args: Args:
x (Tensor): The input. shape=(batch_size, channel, height, width) x (Tensor):
condition (Tensor): The local condition. shape=(batch_size, condition_channel, height, width) The input. shape=(batch_size, channel, height, width)
condition (Tensor):
The local condition. shape=(batch_size, condition_channel, height, width)
Returns: Returns:
Tensor : The output, which is an aggregation of all the skip outputs. shape=(batch_size, channel, height, width) Tensor: The output, which is an aggregation of all the skip outputs. shape=(batch_size, channel, height, width)
""" """
skip_connections = [] skip_connections = []
...@@ -312,12 +336,16 @@ class ResidualNet(nn.LayerList): ...@@ -312,12 +336,16 @@ class ResidualNet(nn.LayerList):
"""Compute the output for a row and update the buffers. """Compute the output for a row and update the buffers.
Args: Args:
x_row (Tensor): A row of the input. shape=(batch_size, channel, 1, width) x_row (Tensor):
condition_row (Tensor): A row of the condition. shape=(batch_size, condition_channel, 1, width) A row of the input. shape=(batch_size, channel, 1, width)
condition_row (Tensor):
A row of the condition. shape=(batch_size, condition_channel, 1, width)
Returns: Returns:
res (Tensor): A row of the the residual output. shape=(batch_size, channel, 1, width) res (Tensor):
skip (Tensor): A row of the skip output. shape=(batch_size, channel, 1, width) A row of the the residual output. shape=(batch_size, channel, 1, width)
skip (Tensor):
A row of the skip output. shape=(batch_size, channel, 1, width)
""" """
skip_connections = [] skip_connections = []
...@@ -337,11 +365,16 @@ class Flow(nn.Layer): ...@@ -337,11 +365,16 @@ class Flow(nn.Layer):
sampling.
Args:
    n_layers (int):
        Number of ResidualBlocks in the Flow.
    channels (int):
        Feature size of the ResidualBlocks.
    mel_bands (int):
        Feature size of the mel spectrogram (mel bands).
    kernel_size (Tuple[int]):
        Kernel size of each ResidualBlock in the Flow.
    n_group (int):
        Number of timesteps to be folded into a group.
"""
dilations_dict = {
    8: [1, 1, 1, 1, 1, 1, 1, 1],
@@ -393,11 +426,14 @@ class Flow(nn.Layer):
a sample from p(X) into a sample from p(Z).
Args:
    x (Tensor):
        An input sample of the distribution p(X). shape=(batch, 1, height, width)
    condition (Tensor):
        The local condition. shape=(batch, condition_channel, height, width)
Returns:
    z (Tensor):
        shape(batch, 1, height, width), the transformed sample.
    Tuple[Tensor, Tensor]:
        The parameter of the transformation.
        logs (Tensor): shape(batch, 1, height - 1, width), the log scale of the transformation from x to z.
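The returned ``(logs, b)`` pair suggests a scale-and-shift coupling. A sketch of one such step, assuming the common affine form z = x * exp(logs) + b (an illustration, not this file's exact arithmetic):

```python
import paddle

def affine_step(x_row, logs_row, b_row):
    # exp(logs_row) keeps the scale positive, so the log-determinant
    # contribution of this step is simply the sum of logs_row.
    z_row = x_row * paddle.exp(logs_row) + b_row
    log_det = paddle.sum(logs_row)
    return z_row, log_det
```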
@@ -433,8 +469,10 @@ class Flow(nn.Layer):
p(Z) and transform the sample. It is an autoregressive transformation.
Args:
    z(Tensor):
        A sample of the distribution p(Z). shape=(batch, 1, time_steps)
    condition(Tensor):
        The local condition. shape=(batch, condition_channel, time_steps)
Returns:
    Tensor:
        The transformed sample. shape=(batch, 1, height, width)
@@ -462,12 +500,18 @@ class WaveFlow(nn.LayerList):
flows.
Args:
    n_flows (int):
        Number of flows in the WaveFlow model.
    n_layers (int):
        Number of ResidualBlocks in each Flow.
    n_group (int):
        Number of timesteps to fold as a group.
    channels (int):
        Feature size of each ResidualBlock.
    mel_bands (int):
        Feature size of mel spectrogram (mel bands).
    kernel_size (Union[int, List[int]]):
        Kernel size of the convolution layer in each ResidualBlock.
"""
def __init__(self, n_flows, n_layers, n_group, channels, mel_bands,
@@ -518,12 +562,16 @@ class WaveFlow(nn.LayerList):
condition.
Args:
    x (Tensor):
        The audio. shape=(batch_size, time_steps)
    condition (Tensor):
        The local condition (mel spectrogram here). shape=(batch_size, condition_channel, time_steps)
Returns:
    Tensor:
        The transformed random variable. shape=(batch_size, time_steps)
    Tensor:
        The log determinant of the jacobian of the transformation from x to z. shape=(1,)
"""
# x: (B, T)
# condition: (B, C, T) upsampled condition
@@ -559,12 +607,13 @@ class WaveFlow(nn.LayerList):
autoregressive manner.
Args:
    z (Tensor):
        A sample of the distribution p(Z). shape=(batch, 1, time_steps)
    condition (Tensor):
        The local condition. shape=(batch, condition_channel, time_steps)
Returns:
    Tensor: The transformed sample (audio here). shape=(batch_size, time_steps)
"""
z, condition = self._trim(z, condition)
@@ -590,13 +639,20 @@ class ConditionalWaveFlow(nn.LayerList):
"""ConditionalWaveFlow, a UpsampleNet with a WaveFlow model. """ConditionalWaveFlow, a UpsampleNet with a WaveFlow model.
Args: Args:
upsample_factors (List[int]): Upsample factors for the upsample net. upsample_factors (List[int]):
n_flows (int): Number of flows in the WaveFlow model. Upsample factors for the upsample net.
n_layers (int): Number of ResidualBlocks in each Flow. n_flows (int):
n_group (int): Number of timesteps to fold as a group. Number of flows in the WaveFlow model.
channels (int): Feature size of each ResidualBlock. n_layers (int):
n_mels (int): Feature size of mel spectrogram (mel bands). Number of ResidualBlocks in each Flow.
kernel_size (Union[int, List[int]]): Kernel size of the convolution layer in each ResidualBlock. n_group (int):
Number of timesteps to fold as a group.
channels (int):
Feature size of each ResidualBlock.
n_mels (int):
Feature size of mel spectrogram (mel bands).
kernel_size (Union[int, List[int]]):
Kernel size of the convolution layer in each ResidualBlock.
""" """
def __init__(self, def __init__(self,
...@@ -622,12 +678,16 @@ class ConditionalWaveFlow(nn.LayerList): ...@@ -622,12 +678,16 @@ class ConditionalWaveFlow(nn.LayerList):
the determinant of the jacobian of the transformation from x to z.
Args:
    audio(Tensor):
        The audio. shape=(B, T)
    mel(Tensor):
        The mel spectrogram. shape=(B, C_mel, T_mel)
Returns:
    Tensor:
        The inversely transformed random variable z (x to z). shape=(B, T)
    Tensor:
        The log of the determinant of the jacobian of the transformation from x to z. shape=(1,)
"""
condition = self.encoder(mel)
z, log_det_jacobian = self.decoder(audio, condition)
@@ -638,10 +698,12 @@ class ConditionalWaveFlow(nn.LayerList):
"""Generate raw audio given mel spectrogram. """Generate raw audio given mel spectrogram.
Args: Args:
mel(np.ndarray): Mel spectrogram of an utterance(in log-magnitude). shape=(C_mel, T_mel) mel(np.ndarray):
Mel spectrogram of an utterance(in log-magnitude). shape=(C_mel, T_mel)
Returns: Returns:
Tensor: The synthesized audio, where``T <= T_mel * upsample_factors``. shape=(B, T) Tensor:
The synthesized audio, where``T <= T_mel * upsample_factors``. shape=(B, T)
""" """
start = time.time() start = time.time()
condition = self.encoder(mel, trim_conv_artifact=True) # (B, C, T) condition = self.encoder(mel, trim_conv_artifact=True) # (B, C, T)
...@@ -657,7 +719,8 @@ class ConditionalWaveFlow(nn.LayerList): ...@@ -657,7 +719,8 @@ class ConditionalWaveFlow(nn.LayerList):
"""Generate raw audio given mel spectrogram. """Generate raw audio given mel spectrogram.
Args: Args:
mel(np.ndarray): Mel spectrogram of an utterance(in log-magnitude). shape=(C_mel, T_mel) mel(np.ndarray):
Mel spectrogram of an utterance(in log-magnitude). shape=(C_mel, T_mel)
Returns: Returns:
np.ndarray: The synthesized audio. shape=(T,) np.ndarray: The synthesized audio. shape=(T,)
...@@ -673,8 +736,10 @@ class ConditionalWaveFlow(nn.LayerList): ...@@ -673,8 +736,10 @@ class ConditionalWaveFlow(nn.LayerList):
"""Build a ConditionalWaveFlow model from a pretrained model. """Build a ConditionalWaveFlow model from a pretrained model.
Args: Args:
config(yacs.config.CfgNode): model configs config(yacs.config.CfgNode):
checkpoint_path(Path or str): the path of pretrained model checkpoint, without extension name model configs
checkpoint_path(Path or str):
the path of pretrained model checkpoint, without extension name
Returns: Returns:
ConditionalWaveFlow The model built from pretrained result. ConditionalWaveFlow The model built from pretrained result.
...@@ -694,8 +759,8 @@ class WaveFlowLoss(nn.Layer): ...@@ -694,8 +759,8 @@ class WaveFlowLoss(nn.Layer):
"""Criterion of a WaveFlow model. """Criterion of a WaveFlow model.
Args: Args:
sigma (float): The standard deviation of the gaussian noise used in WaveFlow, sigma (float):
by default 1.0. The standard deviation of the gaussian noise used in WaveFlow, by default 1.0.
""" """
def __init__(self, sigma=1.0): def __init__(self, sigma=1.0):
...@@ -708,8 +773,10 @@ class WaveFlowLoss(nn.Layer): ...@@ -708,8 +773,10 @@ class WaveFlowLoss(nn.Layer):
log_det_jacobian of transformation from x to z.
Args:
    z(Tensor):
        The transformed random variable (x to z). shape=(B, T)
    log_det_jacobian(Tensor):
        The log of the determinant of the jacobian matrix of the
        transformation from x to z. shape=(1,)
Returns:
@@ -726,7 +793,8 @@ class ConditionalWaveFlow2Infer(ConditionalWaveFlow):
"""Generate raw audio given mel spectrogram.
Args:
    mel (np.ndarray):
        Mel spectrogram of an utterance (in log-magnitude). shape=(C_mel, T_mel)
Returns:
    np.ndarray: The synthesized audio. shape=(T,)
...
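Reading the WaveFlowLoss docstrings above together: with p(z) a zero-mean Gaussian of standard deviation sigma, the criterion is the negative log-likelihood of z minus the log-determinant term. A minimal sketch with constant terms dropped (not the repository's exact code):

```python
import paddle

def waveflow_loss_sketch(z, log_det_jacobian, sigma=1.0):
    # z: (B, T); log_det_jacobian: (1,). -log p(x) up to an additive constant.
    batch_size, time_steps = z.shape
    loss = paddle.sum(z * z) / (2 * sigma * sigma) - log_det_jacobian
    return loss / (batch_size * time_steps)
```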
@@ -165,19 +165,29 @@ class WaveRNN(nn.Layer):
init_type: str="xavier_uniform", ):
'''
Args:
    rnn_dims (int, optional):
        Hidden dims of RNN Layers.
    fc_dims (int, optional):
        Dims of FC Layers.
    bits (int, optional):
        Bit depth of signal.
    aux_context_window (int, optional):
        The context window size of the first convolution applied to the auxiliary input, by default 2.
    upsample_scales (List[int], optional):
        Upsample scales of the upsample network.
    aux_channels (int, optional):
        Auxiliary channel of the residual blocks.
    compute_dims (int, optional):
        Dims of Conv1D in MelResNet.
    res_out_dims (int, optional):
        Dims of output in MelResNet.
    res_blocks (int, optional):
        Number of residual blocks.
    mode (str, optional):
        Output mode of the WaveRNN vocoder.
        `MOL` for Mixture of Logistic Distribution, and `RAW` for quantized bits as the model's output.
    init_type (str):
        How to initialize parameters.
'''
super().__init__()
self.mode = mode
@@ -226,8 +236,10 @@ class WaveRNN(nn.Layer):
def forward(self, x, c):
'''
Args:
    x (Tensor):
        wav sequence, [B, T]
    c (Tensor):
        mel spectrogram [B, C_aux, T']
        T = (T' - 2 * aux_context_window) * hop_length
Returns:
@@ -280,10 +292,14 @@ class WaveRNN(nn.Layer):
gen_display: bool=False):
"""
Args:
    c(Tensor):
        Input mels, (T', C_aux)
    batched(bool):
        Generate in batch or not
    target(int):
        Target number of samples to be generated in each batch entry
    overlap(int):
        Number of samples for crossfading between batches
    mu_law(bool)
Returns:
    wav sequence: Output (T' * prod(upsample_scales), out_channels, C_out).
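A hypothetical call matching the argument list above (``model`` is an already-constructed ``WaveRNN``; the values are illustrative, not recommended settings):

```python
import paddle

# model = WaveRNN(...)  # constructed and loaded elsewhere
mel = paddle.randn([200, 80])  # (T', C_aux)
wav = model.generate(
    c=mel,
    batched=True,  # fold the input so several chunks decode in parallel
    target=12000,  # samples generated per batch entry
    overlap=600,   # samples reserved for crossfading between entries
    mu_law=True)
```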
@@ -404,7 +420,8 @@ class WaveRNN(nn.Layer):
def pad_tensor(self, x, pad, side='both'):
'''
Args:
    x(Tensor):
        mel, [1, n_frames, 80]
    pad(int):
    side(str, optional): (Default value = 'both')
@@ -428,12 +445,15 @@ class WaveRNN(nn.Layer):
Overlap will be used for crossfading in xfade_and_unfold()
Args:
    x(Tensor):
        Upsampled conditioning features. mels or aux
        shape=(1, T, features)
        mels: [1, T, 80]
        aux: [1, T, 128]
    target(int):
        Target timesteps for each index of batch
    overlap(int):
        Timesteps for both xfade and rnn warmup
Returns:
    Tensor:
...
@@ -42,7 +42,8 @@ class CausalConv1D(nn.Layer):
def forward(self, x):
"""Calculate forward propagation.
Args:
    x (Tensor):
        Input tensor (B, in_channels, T).
Returns:
    Tensor: Output tensor (B, out_channels, T).
"""
@@ -67,7 +68,8 @@ class CausalConv1DTranspose(nn.Layer):
def forward(self, x):
"""Calculate forward propagation.
Args:
    x (Tensor):
        Input tensor (B, in_channels, T_in).
Returns:
    Tensor: Output tensor (B, out_channels, T_out).
"""
...
@@ -20,8 +20,10 @@ class ConvolutionModule(nn.Layer):
"""ConvolutionModule in Conformer model.
Args:
    channels (int):
        The number of channels of conv layers.
    kernel_size (int):
        Kernel size of conv layers.
"""
def __init__(self, channels, kernel_size, activation=nn.ReLU(), bias=True):
@@ -59,7 +61,8 @@ class ConvolutionModule(nn.Layer):
"""Compute convolution module. """Compute convolution module.
Args: Args:
x (Tensor): Input tensor (#batch, time, channels). x (Tensor):
Input tensor (#batch, time, channels).
Returns: Returns:
Tensor: Output tensor (#batch, time, channels). Tensor: Output tensor (#batch, time, channels).
""" """
......
@@ -23,25 +23,34 @@ class EncoderLayer(nn.Layer):
"""Encoder layer module.
Args:
    size (int):
        Input dimension.
    self_attn (nn.Layer):
        Self-attention module instance.
        `MultiHeadedAttention` or `RelPositionMultiHeadedAttention` instance
        can be used as the argument.
    feed_forward (nn.Layer):
        Feed-forward module instance.
        `PositionwiseFeedForward`, `MultiLayeredConv1d`, or `Conv1dLinear` instance
        can be used as the argument.
    feed_forward_macaron (nn.Layer):
        Additional feed-forward module instance.
        `PositionwiseFeedForward`, `MultiLayeredConv1d`, or `Conv1dLinear` instance
        can be used as the argument.
    conv_module (nn.Layer):
        Convolution module instance.
        `ConvolutionModule` instance can be used as the argument.
    dropout_rate (float):
        Dropout rate.
    normalize_before (bool):
        Whether to use layer_norm before the first block.
    concat_after (bool):
        Whether to concat attention layer's input and output.
        If True, an additional linear will be applied,
        i.e. x -> x + linear(concat(x, att(x)));
        if False, no additional linear will be applied, i.e. x -> x + att(x).
    stochastic_depth_rate (float):
        Probability to skip this layer.
        During training, the layer may skip residual computation and return input
        as-is with given probability.
"""
@@ -86,15 +95,19 @@ class EncoderLayer(nn.Layer):
"""Compute encoded features. """Compute encoded features.
Args: Args:
x_input(Union[Tuple, Tensor]): Input tensor w/ or w/o pos emb. x_input(Union[Tuple, Tensor]):
Input tensor w/ or w/o pos emb.
- w/ pos emb: Tuple of tensors [(#batch, time, size), (1, time, size)]. - w/ pos emb: Tuple of tensors [(#batch, time, size), (1, time, size)].
- w/o pos emb: Tensor (#batch, time, size). - w/o pos emb: Tensor (#batch, time, size).
mask(Tensor): Mask tensor for the input (#batch, time). mask(Tensor):
Mask tensor for the input (#batch, time).
cache (Tensor): cache (Tensor):
Returns: Returns:
Tensor: Output tensor (#batch, time, size). Tensor:
Tensor: Mask tensor (#batch, time). Output tensor (#batch, time, size).
Tensor:
Mask tensor (#batch, time).
""" """
if isinstance(x_input, tuple): if isinstance(x_input, tuple):
x, pos_emb = x_input[0], x_input[1] x, pos_emb = x_input[0], x_input[1]
......
@@ -42,13 +42,19 @@ class Conv1dCell(nn.Conv1D):
class.
Args:
    in_channels (int):
        The feature size of the input.
    out_channels (int):
        The feature size of the output.
    kernel_size (int or Tuple[int]):
        The size of the kernel.
    dilation (int or Tuple[int]):
        The dilation of the convolution, by default 1.
    weight_attr (ParamAttr, Initializer, str or bool, optional):
        The parameter attribute of the convolution kernel,
        by default None.
    bias_attr (ParamAttr, Initializer, str or bool, optional):
        The parameter attribute of the bias.
        If ``False``, this layer does not have a bias, by default None.
Examples:
@@ -122,7 +128,8 @@ class Conv1dCell(nn.Conv1D):
"""Initialize the buffer for the step input. """Initialize the buffer for the step input.
Args: Args:
x_t (Tensor): The step input. shape=(batch_size, in_channels) x_t (Tensor):
The step input. shape=(batch_size, in_channels)
""" """
batch_size, _ = x_t.shape batch_size, _ = x_t.shape
...@@ -134,7 +141,8 @@ class Conv1dCell(nn.Conv1D): ...@@ -134,7 +141,8 @@ class Conv1dCell(nn.Conv1D):
"""Shift the buffer by one step. """Shift the buffer by one step.
Args: Args:
x_t (Tensor): The step input. shape=(batch_size, in_channels) x_t (Tensor): T
he step input. shape=(batch_size, in_channels)
""" """
self._buffer = paddle.concat( self._buffer = paddle.concat(
...@@ -144,10 +152,12 @@ class Conv1dCell(nn.Conv1D): ...@@ -144,10 +152,12 @@ class Conv1dCell(nn.Conv1D):
"""Add step input and compute step output. """Add step input and compute step output.
Args: Args:
x_t (Tensor): The step input. shape=(batch_size, in_channels) x_t (Tensor):
The step input. shape=(batch_size, in_channels)
Returns: Returns:
y_t (Tensor): The step output. shape=(batch_size, out_channels) y_t (Tensor):
The step output. shape=(batch_size, out_channels)
""" """
batch_size = x_t.shape[0] batch_size = x_t.shape[0]
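Together these methods form a streaming protocol: reset the buffer once, then per step shift it and compute one output. A hypothetical decoding loop (import path and ``start_sequence`` assumed from the class's documented usage):

```python
import paddle
from paddlespeech.t2s.modules.conv import Conv1dCell  # assumed import path

cell = Conv1dCell(in_channels=80, out_channels=256, kernel_size=3)
cell.eval()
cell.start_sequence()               # reset the internal buffer before decoding
ys = []
for _ in range(100):
    x_t = paddle.randn([4, 80])     # one step: (batch_size, in_channels)
    ys.append(cell.add_input(x_t))  # (batch_size, out_channels)
```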
@@ -173,10 +183,14 @@ class Conv1dBatchNorm(nn.Layer):
"""A Conv1D Layer followed by a BatchNorm1D.
Args:
    in_channels (int):
        The feature size of the input.
    out_channels (int):
        The feature size of the output.
    kernel_size (int):
        The size of the convolution kernel.
    stride (int, optional):
        The stride of the convolution, by default 1.
    padding (int, str or Tuple[int], optional):
        The padding of the convolution.
        If int, a symmetrical padding is applied before convolution;
@@ -189,9 +203,12 @@ class Conv1dBatchNorm(nn.Layer):
    bias_attr (ParamAttr, Initializer, str or bool, optional):
        The parameter attribute of the bias of the convolution,
        by default None.
    data_format (str ["NCL" or "NLC"], optional):
        The data layout of the input, by default "NCL"
    momentum (float, optional):
        The momentum of the BatchNorm1D layer, by default 0.9
    epsilon (float, optional):
        The epsilon of the BatchNorm1D layer, by default 1e-05
"""
def __init__(self,
@@ -225,12 +242,13 @@ class Conv1dBatchNorm(nn.Layer):
"""Forward pass of the Conv1dBatchNorm layer. """Forward pass of the Conv1dBatchNorm layer.
Args: Args:
x (Tensor): The input tensor. Its data layout depends on ``data_format``. x (Tensor):
The input tensor. Its data layout depends on ``data_format``.
shape=(B, C_in, T_in) or (B, T_in, C_in) shape=(B, C_in, T_in) or (B, T_in, C_in)
Returns: Returns:
Tensor: The output tensor. Tensor:
shape=(B, C_out, T_out) or (B, T_out, C_out) The output tensor. shape=(B, C_out, T_out) or (B, T_out, C_out)
""" """
x = self.conv(x) x = self.conv(x)
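An illustrative construction following the parameters documented above (import path assumed; the values are arbitrary):

```python
import paddle
from paddlespeech.t2s.modules.conv import Conv1dBatchNorm  # assumed import path

layer = Conv1dBatchNorm(
    in_channels=80, out_channels=256, kernel_size=5,
    stride=1, padding=2, data_format="NCL")
y = layer(paddle.randn([8, 80, 100]))  # (B, C_out, T_out) for "NCL" input
```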
...
@@ -19,8 +19,10 @@ def shuffle_dim(x, axis, perm=None):
"""Permute input tensor along the given axis, using the given permutation or randomly.
Args:
    x (Tensor):
        The input tensor.
    axis (int):
        The axis to shuffle.
    perm (List[int], ndarray, optional):
        The order to reorder the tensor along the ``axis``-th dimension.
        It is a permutation of ``[0, d)``, where d is the size of the
...
@@ -19,8 +19,10 @@ from paddle import nn
class LayerNorm(nn.LayerNorm):
"""Layer normalization module.
Args:
    nout (int):
        Output dim size.
    dim (int):
        Dimension to be normalized.
"""
def __init__(self, nout, dim=-1):
@@ -32,7 +34,8 @@ class LayerNorm(nn.LayerNorm):
"""Apply layer normalization.
Args:
    x (Tensor):
        Input tensor.
Returns:
    Tensor: Normalized tensor.
...
@@ -269,8 +269,10 @@ class GuidedAttentionLoss(nn.Layer):
"""Make masks indicating non-padded part.
Args:
    ilens(Tensor(int64) or List):
        Batch of lengths (B,).
    olens(Tensor(int64) or List):
        Batch of lengths (B,).
Returns:
    Tensor: Mask tensor indicating non-padded part.
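A standalone sketch of the mask construction described here (re-implementation for illustration only):

```python
import paddle

ilens = paddle.to_tensor([3, 2])  # input lengths (B,);  T_in_max == 3
olens = paddle.to_tensor([4, 3])  # output lengths (B,); T_out_max == 4
in_masks = paddle.arange(3).unsqueeze(0) < ilens.unsqueeze(1)    # (B, T_in_max)
out_masks = paddle.arange(4).unsqueeze(0) < olens.unsqueeze(1)   # (B, T_out_max)
# (B, T_out_max, T_in_max): True where both output and input steps are non-padded.
masks = paddle.logical_and(out_masks.unsqueeze(-1), in_masks.unsqueeze(1))
```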
@@ -322,9 +324,12 @@ class GuidedMultiHeadAttentionLoss(GuidedAttentionLoss):
"""Calculate forward propagation.
Args:
    att_ws(Tensor):
        Batch of multi head attention weights (B, H, T_max_out, T_max_in).
    ilens(Tensor):
        Batch of input lengths (B,).
    olens(Tensor):
        Batch of output lengths (B,).
Returns:
    Tensor: Guided attention loss value.
@@ -354,9 +359,12 @@ class Tacotron2Loss(nn.Layer):
"""Initialize Tactoron2 loss module. """Initialize Tactoron2 loss module.
Args: Args:
use_masking (bool): Whether to apply masking for padded part in loss calculation. use_masking (bool):
use_weighted_masking (bool): Whether to apply weighted masking in loss calculation. Whether to apply masking for padded part in loss calculation.
bce_pos_weight (float): Weight of positive sample of stop token. use_weighted_masking (bool):
Whether to apply weighted masking in loss calculation.
bce_pos_weight (float):
Weight of positive sample of stop token.
""" """
super().__init__() super().__init__()
assert (use_masking != use_weighted_masking) or not use_masking assert (use_masking != use_weighted_masking) or not use_masking
...@@ -374,17 +382,25 @@ class Tacotron2Loss(nn.Layer): ...@@ -374,17 +382,25 @@ class Tacotron2Loss(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
after_outs(Tensor): Batch of outputs after postnets (B, Lmax, odim). after_outs(Tensor):
before_outs(Tensor): Batch of outputs before postnets (B, Lmax, odim). Batch of outputs after postnets (B, Lmax, odim).
logits(Tensor): Batch of stop logits (B, Lmax). before_outs(Tensor):
ys(Tensor): Batch of padded target features (B, Lmax, odim). Batch of outputs before postnets (B, Lmax, odim).
stop_labels(Tensor(int64)): Batch of the sequences of stop token labels (B, Lmax). logits(Tensor):
Batch of stop logits (B, Lmax).
ys(Tensor):
Batch of padded target features (B, Lmax, odim).
stop_labels(Tensor(int64)):
Batch of the sequences of stop token labels (B, Lmax).
olens(Tensor(int64)): olens(Tensor(int64)):
Returns: Returns:
Tensor: L1 loss value. Tensor:
Tensor: Mean square error loss value. L1 loss value.
Tensor: Binary cross entropy loss value. Tensor:
Mean square error loss value.
Tensor:
Binary cross entropy loss value.
""" """
# make mask and apply it # make mask and apply it
if self.use_masking: if self.use_masking:
@@ -437,16 +453,24 @@ def stft(x,
pad_mode='reflect'):
"""Perform STFT and convert to magnitude spectrogram.
Args:
    x (Tensor):
        Input signal tensor (B, T).
    fft_size (int):
        FFT size.
    hop_size (int):
        Hop size.
    win_length (int, optional):
        Window length. (Default value = None)
    window (str, optional):
        Name of window function, see `scipy.signal.get_window` for more details. Defaults to "hann".
    center (bool, optional):
        Whether to pad `x` so that the :math:`t \times hop\_length`-th sample sits at the center of the :math:`t`-th frame. Default: `True`.
    pad_mode (str, optional):
        (Default value = 'reflect')
Returns:
    Tensor: Magnitude spectrogram (B, #frames, fft_size // 2 + 1).
@@ -480,8 +504,10 @@ class SpectralConvergenceLoss(nn.Layer):
def forward(self, x_mag, y_mag):
"""Calculate forward propagation.
Args:
    x_mag (Tensor):
        Magnitude spectrogram of predicted signal (B, #frames, #freq_bins).
    y_mag (Tensor):
        Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins).
Returns:
    Tensor: Spectral convergence loss value.
"""
@@ -501,8 +527,10 @@ class LogSTFTMagnitudeLoss(nn.Layer):
def forward(self, x_mag, y_mag):
"""Calculate forward propagation.
Args:
    x_mag (Tensor):
        Magnitude spectrogram of predicted signal (B, #frames, #freq_bins).
    y_mag (Tensor):
        Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins).
Returns:
    Tensor: Log STFT magnitude loss value.
"""
@@ -531,11 +559,15 @@ class STFTLoss(nn.Layer):
def forward(self, x, y):
"""Calculate forward propagation.
Args:
    x (Tensor):
        Predicted signal (B, T).
    y (Tensor):
        Groundtruth signal (B, T).
Returns:
    Tensor:
        Spectral convergence loss value.
    Tensor:
        Log STFT magnitude loss value.
"""
x_mag = stft(x, self.fft_size, self.shift_size, self.win_length,
             self.window)
@@ -558,10 +590,14 @@ class MultiResolutionSTFTLoss(nn.Layer):
window="hann", ):
"""Initialize Multi resolution STFT loss module.
Args:
    fft_sizes (list):
        List of FFT sizes.
    hop_sizes (list):
        List of hop sizes.
    win_lengths (list):
        List of window lengths.
    window (str):
        Window function type.
"""
super().__init__()
assert len(fft_sizes) == len(hop_sizes) == len(win_lengths)
@@ -573,11 +609,15 @@ class MultiResolutionSTFTLoss(nn.Layer):
"""Calculate forward propagation.
Args:
    x (Tensor):
        Predicted signal (B, T) or (B, #subband, T).
    y (Tensor):
        Groundtruth signal (B, T) or (B, #subband, T).
Returns:
    Tensor:
        Multi resolution spectral convergence loss value.
    Tensor:
        Multi resolution log STFT magnitude loss value.
"""
if len(x.shape) == 3:
    # (B, C, T) -> (B x C, T)
@@ -615,9 +655,11 @@ class GeneratorAdversarialLoss(nn.Layer):
def forward(self, outputs):
"""Calculate generator adversarial loss.
Args:
    outputs (Tensor or List):
        Discriminator outputs or list of discriminator outputs.
Returns:
    Tensor:
        Generator adversarial loss value.
"""
if isinstance(outputs, (tuple, list)):
    adv_loss = 0.0
@@ -659,13 +701,15 @@ class DiscriminatorAdversarialLoss(nn.Layer):
"""Calculate discriminator adversarial loss.
Args:
    outputs_hat (Tensor or list):
        Discriminator outputs or list of discriminator outputs calculated from generator outputs.
    outputs (Tensor or list):
        Discriminator outputs or list of discriminator outputs calculated from groundtruth.
Returns:
    Tensor:
        Discriminator real loss value.
    Tensor:
        Discriminator fake loss value.
"""
if isinstance(outputs, (tuple, list)):
    real_loss = 0.0
@@ -766,9 +810,12 @@ def masked_l1_loss(prediction, target, mask):
"""Compute masked L1 loss.
Args:
    prediction (Tensor):
        The prediction.
    target (Tensor):
        The target. The shape should be broadcastable to ``prediction``.
    mask (Tensor):
        The mask. The shape should be broadcastable to the broadcasted shape of
        ``prediction`` and ``target``.
Returns:
@@ -916,8 +963,10 @@ class MelSpectrogramLoss(nn.Layer):
def forward(self, y_hat, y):
"""Calculate Mel-spectrogram loss.
Args:
    y_hat (Tensor):
        Generated signal tensor (B, 1, T).
    y (Tensor):
        Groundtruth signal tensor (B, 1, T).
Returns:
    Tensor: Mel-spectrogram loss value.
@@ -947,9 +996,11 @@ class FeatureMatchLoss(nn.Layer):
"""Calculate feature matching loss.
Args:
    feats_hat (list):
        List of list of discriminator outputs calculated from generator outputs.
    feats (list):
        List of list of discriminator outputs calculated from groundtruth.
Returns:
    Tensor: Feature matching loss value.
@@ -986,11 +1037,16 @@ class KLDivergenceLoss(nn.Layer):
"""Calculate KL divergence loss.
Args:
    z_p (Tensor):
        Flow hidden representation (B, H, T_feats).
    logs_q (Tensor):
        Posterior encoder projected scale (B, H, T_feats).
    m_p (Tensor):
        Expanded text encoder projected mean (B, H, T_feats).
    logs_p (Tensor):
        Expanded text encoder projected scale (B, H, T_feats).
    z_mask (Tensor):
        Mask tensor (B, 1, T_feats).
Returns:
    Tensor: KL divergence loss.
...
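A hedged sketch of the masked-loss pattern that `masked_l1_loss` above documents (the library's exact weighting may differ); `masked_l1_loss_demo` and the shapes are illustrative:

```python
import paddle
import paddle.nn.functional as F

# Elementwise L1 error, zeroed at padded positions via the broadcastable
# mask, then averaged over valid positions only.
def masked_l1_loss_demo(prediction, target, mask):
    abs_error = F.l1_loss(prediction, target, reduction="none")
    weights = paddle.broadcast_to(mask, abs_error.shape)
    return (abs_error * weights).sum() / weights.sum()

pred = paddle.randn([2, 5, 80])
tgt = paddle.randn([2, 5, 80])
mask = paddle.to_tensor([[1., 1., 1., 0., 0.],
                         [1., 1., 1., 1., 1.]]).unsqueeze(-1)   # (B, T, 1)
print(masked_l1_loss_demo(pred, tgt, mask))
```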
@@ -25,8 +25,10 @@ def pad_list(xs, pad_value):
"""Perform padding for the list of tensors.
Args:
    xs (List[Tensor]):
        List of Tensors [(T_1, `*`), (T_2, `*`), ..., (T_B, `*`)].
    pad_value (float):
        Value for padding.
Returns:
    Tensor: Padded tensor (B, Tmax, `*`).
@@ -55,10 +57,13 @@ def make_pad_mask(lengths, xs=None, length_dim=-1):
"""Make mask tensor containing indices of padded part.
Args:
    lengths (Tensor(int64)):
        Batch of lengths (B,).
    xs (Tensor, optional):
        The reference tensor.
        If set, masks will be the same shape as this tensor.
    length_dim (int, optional):
        Dimension indicator of the above tensor.
        See the example.
Returns:
@@ -147,7 +152,7 @@ def make_pad_mask(lengths, xs=None, length_dim=-1):
seq_range = paddle.arange(0, maxlen, dtype=paddle.int64)
seq_range_expand = seq_range.unsqueeze(0).expand([bs, maxlen])
seq_length_expand = lengths.unsqueeze(-1)
mask = seq_range_expand >= seq_length_expand.cast(seq_range_expand.dtype)
if xs is not None:
    assert paddle.shape(xs)[0] == bs, (paddle.shape(xs)[0], bs)
@@ -166,14 +171,18 @@ def make_non_pad_mask(lengths, xs=None, length_dim=-1):
"""Make mask tensor containing indices of non-padded part.
Args:
    lengths (Tensor(int64) or List):
        Batch of lengths (B,).
    xs (Tensor, optional):
        The reference tensor.
        If set, masks will be the same shape as this tensor.
    length_dim (int, optional):
        Dimension indicator of the above tensor.
        See the example.
Returns:
    Tensor(bool):
        Mask tensor containing indices of non-padded part.
Examples:
    With only lengths.
@@ -257,8 +266,10 @@ def initialize(model: nn.Layer, init: str):
Custom initialization routines can be implemented into submodules
Args:
    model (nn.Layer):
        Target.
    init (str):
        Method of initialization.
"""
assert check_argument_types()
@@ -285,12 +296,17 @@ def get_random_segments(
segment_size: int, ) -> Tuple[paddle.Tensor, paddle.Tensor]:
"""Get random segments.
Args:
    x (Tensor):
        Input tensor (B, C, T).
    x_lengths (Tensor):
        Length tensor (B,).
    segment_size (int):
        Segment size.
Returns:
    Tensor:
        Segmented tensor (B, C, segment_size).
    Tensor:
        Start index tensor (B,).
"""
b, c, t = paddle.shape(x)
max_start_idx = x_lengths - segment_size
@@ -306,9 +322,12 @@ def get_segments(
segment_size: int, ) -> paddle.Tensor:
"""Get segments.
Args:
    x (Tensor):
        Input tensor (B, C, T).
    start_idxs (Tensor):
        Start index tensor (B,).
    segment_size (int):
        Segment size.
Returns:
    Tensor: Segmented tensor (B, C, segment_size).
"""
@@ -353,14 +372,20 @@ def phones_masking(xs_pad: paddle.Tensor,
span_bdy: paddle.Tensor=None):
'''
Args:
    xs_pad (paddle.Tensor):
        input speech (B, Tmax, D).
    src_mask (paddle.Tensor):
        mask of speech (B, 1, Tmax).
    align_start (paddle.Tensor):
        frame level phone alignment start (B, Tmax2).
    align_end (paddle.Tensor):
        frame level phone alignment end (B, Tmax2).
    align_start_lens (paddle.Tensor):
        length of align_start (B, ).
    mlm_prob (float):
    mean_phn_span (int):
    span_bdy (paddle.Tensor):
        masked mel boundary of input speech (B, 2).
Returns:
    paddle.Tensor[bool]: masked position of input speech (B, Tmax).
'''
@@ -416,19 +441,29 @@ def phones_text_masking(xs_pad: paddle.Tensor,
span_bdy: paddle.Tensor=None):
'''
Args:
    xs_pad (paddle.Tensor):
        input speech (B, Tmax, D).
    src_mask (paddle.Tensor):
        mask of speech (B, 1, Tmax).
    text_pad (paddle.Tensor):
        input text (B, Tmax2).
    text_mask (paddle.Tensor):
        mask of text (B, 1, Tmax2).
    align_start (paddle.Tensor):
        frame level phone alignment start (B, Tmax2).
    align_end (paddle.Tensor):
        frame level phone alignment end (B, Tmax2).
    align_start_lens (paddle.Tensor):
        length of align_start (B, ).
    mlm_prob (float):
    mean_phn_span (int):
    span_bdy (paddle.Tensor):
        masked mel boundary of input speech (B, 2).
Returns:
    paddle.Tensor[bool]:
        masked position of input speech (B, Tmax).
    paddle.Tensor[bool]:
        masked position of input text (B, Tmax2).
'''
bz, sent_len, _ = paddle.shape(xs_pad)
masked_pos = paddle.zeros((bz, sent_len))
@@ -488,12 +523,18 @@ def get_seg_pos(speech_pad: paddle.Tensor,
seg_emb: bool=False):
'''
Args:
    speech_pad (paddle.Tensor):
        input speech (B, Tmax, D).
    text_pad (paddle.Tensor):
        input text (B, Tmax2).
    align_start (paddle.Tensor):
        frame level phone alignment start (B, Tmax2).
    align_end (paddle.Tensor):
        frame level phone alignment end (B, Tmax2).
    align_start_lens (paddle.Tensor):
        length of align_start (B, ).
    seg_emb (bool):
        whether to use segment embedding.
Returns:
    paddle.Tensor[int]: n-th phone of each mel, 0<=n<=Tmax2 (B, Tmax).
    eg:
@@ -579,8 +620,10 @@ def random_spans_noise_mask(length: int,
def _random_seg(num_items, num_segs):
"""Partition a sequence of items randomly into non-empty segments.
Args:
    num_items:
        an integer scalar > 0
    num_segs:
        an integer scalar in [1, num_items]
Returns:
    a Tensor with shape [num_segs] containing positive integers that add
    up to num_items
...
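In the 1-D case, the mask construction shown in the `make_pad_mask` hunk reduces to a single broadcast comparison. A self-contained sketch (`make_non_pad_mask` is simply the logical negation):

```python
import paddle

# True marks padded positions beyond each sequence's length.
lengths = paddle.to_tensor([5, 3, 2], dtype="int64")
maxlen = int(lengths.max())
seq_range = paddle.arange(maxlen).unsqueeze(0)     # (1, Tmax)
pad_mask = seq_range >= lengths.unsqueeze(-1)      # (B, Tmax), bool
print(pad_mask.astype("int64"))
# [[0, 0, 0, 0, 0],
#  [0, 0, 0, 1, 1],
#  [0, 0, 1, 1, 1]]
```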
@@ -26,9 +26,12 @@ def design_prototype_filter(taps=62, cutoff_ratio=0.142, beta=9.0):
filters of cosine modulated filterbanks`_.
Args:
    taps (int):
        The number of filter taps.
    cutoff_ratio (float):
        Cut-off frequency ratio.
    beta (float):
        Beta coefficient for kaiser window.
Returns:
    ndarray:
        Impulse response of prototype filter (taps + 1,).
@@ -66,10 +69,14 @@ class PQMF(nn.Layer):
See discussion in https://github.com/kan-bayashi/ParallelWaveGAN/issues/195.
Args:
    subbands (int):
        The number of subbands.
    taps (int):
        The number of filter taps.
    cutoff_ratio (float):
        Cut-off frequency ratio.
    beta (float):
        Beta coefficient for kaiser window.
"""
super().__init__()
@@ -103,7 +110,8 @@ class PQMF(nn.Layer):
def analysis(self, x):
"""Analysis with PQMF.
Args:
    x (Tensor):
        Input tensor (B, 1, T).
Returns:
    Tensor: Output tensor (B, subbands, T // subbands).
"""
@@ -113,7 +121,8 @@ class PQMF(nn.Layer):
def synthesis(self, x):
"""Synthesis with PQMF.
Args:
    x (Tensor):
        Input tensor (B, subbands, T // subbands).
Returns:
    Tensor: Output tensor (B, 1, T).
"""
...
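Round-trip usage implied by the analysis/synthesis docstrings above; the import path is assumed from this diff's module layout and may differ per version:

```python
import paddle
from paddlespeech.t2s.modules.pqmf import PQMF  # path assumed from this diff

# Split a waveform into subbands, then reconstruct it.
pqmf = PQMF(subbands=4)
x = paddle.randn([1, 1, 8000])        # (B, 1, T)
subbands = pqmf.analysis(x)           # (B, subbands, T // subbands)
x_hat = pqmf.synthesis(subbands)      # (B, 1, T), near-perfect reconstruction
print(subbands.shape, x_hat.shape)    # [1, 4, 2000] [1, 1, 8000]
```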
@@ -50,12 +50,18 @@ class DurationPredictor(nn.Layer):
"""Initialize duration predictor module.
Args:
    idim (int):
        Input dimension.
    n_layers (int, optional):
        Number of convolutional layers.
    n_chans (int, optional):
        Number of channels of convolutional layers.
    kernel_size (int, optional):
        Kernel size of convolutional layers.
    dropout_rate (float, optional):
        Dropout rate.
    offset (float, optional):
        Offset value to avoid nan in log domain.
"""
super().__init__()
@@ -99,8 +105,10 @@ class DurationPredictor(nn.Layer):
def forward(self, xs, x_masks=None):
"""Calculate forward propagation.
Args:
    xs (Tensor):
        Batch of input sequences (B, Tmax, idim).
    x_masks (ByteTensor, optional):
        Batch of masks indicating padded part (B, Tmax). (Default value = None)
Returns:
    Tensor: Batch of predicted durations in log domain (B, Tmax).
@@ -110,8 +118,10 @@ class DurationPredictor(nn.Layer):
def inference(self, xs, x_masks=None):
"""Inference duration.
Args:
    xs (Tensor):
        Batch of input sequences (B, Tmax, idim).
    x_masks (Tensor(bool), optional):
        Batch of masks indicating padded part (B, Tmax). (Default value = None)
Returns:
    Tensor: Batch of predicted durations in linear domain int64 (B, Tmax).
@@ -140,8 +150,10 @@ class DurationPredictorLoss(nn.Layer):
"""Calculate forward propagation.
Args:
    outputs (Tensor):
        Batch of prediction durations in log domain (B, T)
    targets (Tensor):
        Batch of groundtruth durations in linear domain (B, T)
Returns:
    Tensor: Mean squared error loss value.
...
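Usage sketch of the documented forward/inference contract; the import path, `idim`, and batch sizes here are assumptions, not verified API:

```python
import paddle
# Import path assumed from this diff's layout; adjust to your version.
from paddlespeech.t2s.modules.predictor.duration_predictor import DurationPredictor

# forward() predicts log-domain durations for training; inference()
# returns rounded int64 durations in the linear domain.
dp = DurationPredictor(idim=384)
xs = paddle.randn([2, 10, 384])       # (B, Tmax, idim)
log_d = dp(xs)                        # (B, Tmax), log domain
d = dp.inference(xs)                  # (B, Tmax), int64, linear domain
print(log_d.shape, d.dtype)
```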
@@ -36,7 +36,8 @@ class LengthRegulator(nn.Layer):
"""Initialize length regulator module.
Args:
    pad_value (float, optional):
        Value used for padding.
"""
super().__init__()
@@ -97,9 +98,12 @@ class LengthRegulator(nn.Layer):
"""Calculate forward propagation.
Args:
    xs (Tensor):
        Batch of sequences of char or phoneme embeddings (B, Tmax, D).
    ds (Tensor(int64)):
        Batch of durations of each frame (B, T).
    alpha (float, optional):
        Alpha value to control speed of speech.
Returns:
    Tensor: replicated input tensor based on durations (B, T*, D).
...
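A sketch of the documented expansion; the import path and tensor sizes are assumptions from this diff:

```python
import paddle
# Import path assumed from this diff's layout; adjust to your version.
from paddlespeech.t2s.modules.predictor.length_regulator import LengthRegulator

# Each phoneme embedding is repeated according to its duration, so
# durations summing to 5 expand Tmax=3 phonemes into T=5 frames.
lr = LengthRegulator()
xs = paddle.randn([1, 3, 8])                        # (B, Tmax, D)
ds = paddle.to_tensor([[2, 1, 2]], dtype="int64")   # frames per phoneme
print(lr(xs, ds).shape)                             # [1, 5, 8]
```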
@@ -43,11 +43,16 @@ class VariancePredictor(nn.Layer):
"""Initialize variance predictor module.
Args:
    idim (int):
        Input dimension.
    n_layers (int, optional):
        Number of convolutional layers.
    n_chans (int, optional):
        Number of channels of convolutional layers.
    kernel_size (int, optional):
        Kernel size of convolutional layers.
    dropout_rate (float, optional):
        Dropout rate.
"""
assert check_argument_types()
super().__init__()
@@ -74,11 +79,14 @@ class VariancePredictor(nn.Layer):
"""Calculate forward propagation.
Args:
    xs (Tensor):
        Batch of input sequences (B, Tmax, idim).
    x_masks (Tensor(bool), optional):
        Batch of masks indicating padded part (B, Tmax, 1).
Returns:
    Tensor:
        Batch of predicted sequences (B, Tmax, 1).
"""
# (B, idim, Tmax)
xs = xs.transpose([0, 2, 1])
...
@@ -29,15 +29,24 @@ class WaveNetResidualBlock(nn.Layer):
refer to `WaveNet: A Generative Model for Raw Audio <https://arxiv.org/abs/1609.03499>`_.
Args:
    kernel_size (int, optional):
        Kernel size of the 1D convolution, by default 3
    residual_channels (int, optional):
        Feature size of the residual output (and also the input), by default 64
    gate_channels (int, optional):
        Output feature size of the 1D convolution, by default 128
    skip_channels (int, optional):
        Feature size of the skip output, by default 64
    aux_channels (int, optional):
        Feature size of the auxiliary input (e.g. spectrogram), by default 80
    dropout (float, optional):
        Probability of the dropout before the 1D convolution, by default 0.
    dilation (int, optional):
        Dilation of the 1D convolution, by default 1
    bias (bool, optional):
        Whether to use bias in the 1D convolution, by default True
    use_causal_conv (bool, optional):
        Whether to use causal padding for the 1D convolution, by default False
"""
def __init__(self,
@@ -81,13 +90,17 @@ class WaveNetResidualBlock(nn.Layer):
def forward(self, x, c):
"""
Args:
    x (Tensor):
        The input features. Shape (N, C_res, T)
    c (Tensor):
        The auxiliary input. Shape (N, C_aux, T)
Returns:
    res (Tensor):
        Shape (N, C_res, T), the residual output, which is used as the
        input of the next ResidualBlock in a stack of ResidualBlocks.
    skip (Tensor):
        Shape (N, C_skip, T), the skip output, which is collected among
        each layer in a stack of ResidualBlocks.
"""
x_input = x
@@ -121,13 +134,20 @@ class HiFiGANResidualBlock(nn.Layer):
):
"""Initialize HiFiGANResidualBlock module.
Args:
    kernel_size (int):
        Kernel size of dilation convolution layer.
    channels (int):
        Number of channels for convolution layer.
    dilations (List[int]):
        List of dilation factors.
    use_additional_convs (bool):
        Whether to use additional convolution layers.
    bias (bool):
        Whether to add bias parameter in convolution layers.
    nonlinear_activation (str):
        Activation function module name.
    nonlinear_activation_params (dict):
        Hyperparameters for activation function.
"""
super().__init__()
@@ -167,7 +187,8 @@ class HiFiGANResidualBlock(nn.Layer):
def forward(self, x):
"""Calculate forward propagation.
Args:
    x (Tensor):
        Input tensor (B, channels, T).
Returns:
    Tensor: Output tensor (B, channels, T).
"""
...
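A sketch of WaveNetResidualBlock's documented two-output contract; the import path and keyword defaults are assumptions from this diff:

```python
import paddle
# Import path assumed from this diff's layout; adjust to your version.
from paddlespeech.t2s.modules.residual_block import WaveNetResidualBlock

# A residual path feeding the next block, plus a skip path collected
# across the whole stack.
block = WaveNetResidualBlock(residual_channels=64, gate_channels=128,
                             skip_channels=64, aux_channels=80)
x = paddle.randn([2, 64, 100])      # (N, C_res, T)
c = paddle.randn([2, 80, 100])      # (N, C_aux, T)
res, skip = block(x, c)
print(res.shape, skip.shape)        # [2, 64, 100] [2, 64, 100]
```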
@@ -39,15 +39,24 @@ class ResidualStack(nn.Layer):
"""Initialize ResidualStack module.
Args:
    kernel_size (int):
        Kernel size of dilation convolution layer.
    channels (int):
        Number of channels of convolution layers.
    dilation (int):
        Dilation factor.
    bias (bool):
        Whether to add bias parameter in convolution layers.
    nonlinear_activation (str):
        Activation function module name.
    nonlinear_activation_params (Dict[str, Any]):
        Hyperparameters for activation function.
    pad (str):
        Padding function module name before dilated convolution layer.
    pad_params (Dict[str, Any]):
        Hyperparameters for padding function.
    use_causal_conv (bool):
        Whether to use causal convolution.
"""
super().__init__()
# for compatibility
@@ -95,7 +104,8 @@ class ResidualStack(nn.Layer):
"""Calculate forward propagation.
Args:
    c (Tensor):
        Input tensor (B, channels, T).
Returns:
    Tensor: Output tensor (B, channels, T).
"""
...
@@ -32,16 +32,26 @@ class StyleEncoder(nn.Layer):
Speech Synthesis`: https://arxiv.org/abs/1803.09017
Args:
    idim (int, optional):
        Dimension of the input mel-spectrogram.
    gst_tokens (int, optional):
        The number of GST embeddings.
    gst_token_dim (int, optional):
        Dimension of each GST embedding.
    gst_heads (int, optional):
        The number of heads in GST multihead attention.
    conv_layers (int, optional):
        The number of conv layers in the reference encoder.
    conv_chans_list (Sequence[int], optional):
        List of the number of channels of conv layers in the reference encoder.
    conv_kernel_size (int, optional):
        Kernel size of conv layers in the reference encoder.
    conv_stride (int, optional):
        Stride size of conv layers in the reference encoder.
    gru_layers (int, optional):
        The number of GRU layers in the reference encoder.
    gru_units (int, optional):
        The number of GRU units in the reference encoder.
Todo:
    * Support manual weight specification in inference.
@@ -82,7 +92,8 @@ class StyleEncoder(nn.Layer):
"""Calculate forward propagation.
Args:
    speech (Tensor):
        Batch of padded target features (B, Lmax, odim).
Returns:
    Tensor: Style token embeddings (B, token_dim).
@@ -104,13 +115,20 @@ class ReferenceEncoder(nn.Layer):
Speech Synthesis`: https://arxiv.org/abs/1803.09017
Args:
    idim (int, optional):
        Dimension of the input mel-spectrogram.
    conv_layers (int, optional):
        The number of conv layers in the reference encoder.
    conv_chans_list (Sequence[int], optional):
        List of the number of channels of conv layers in the reference encoder.
    conv_kernel_size (int, optional):
        Kernel size of conv layers in the reference encoder.
    conv_stride (int, optional):
        Stride size of conv layers in the reference encoder.
    gru_layers (int, optional):
        The number of GRU layers in the reference encoder.
    gru_units (int, optional):
        The number of GRU units in the reference encoder.
"""
@@ -168,7 +186,8 @@ class ReferenceEncoder(nn.Layer):
def forward(self, speech: paddle.Tensor) -> paddle.Tensor:
"""Calculate forward propagation.
Args:
    speech (Tensor):
        Batch of padded target features (B, Lmax, idim).
Returns:
    Tensor: Reference embedding (B, gru_units)
@@ -200,11 +219,16 @@ class StyleTokenLayer(nn.Layer):
.. _`Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End
Speech Synthesis`: https://arxiv.org/abs/1803.09017
Args:
    ref_embed_dim (int, optional):
        Dimension of the input reference embedding.
    gst_tokens (int, optional):
        The number of GST embeddings.
    gst_token_dim (int, optional):
        Dimension of each GST embedding.
    gst_heads (int, optional):
        The number of heads in GST multihead attention.
    dropout_rate (float, optional):
        Dropout rate in multi-head attention.
"""
@@ -236,7 +260,8 @@ class StyleTokenLayer(nn.Layer):
"""Calculate forward propagation.
Args:
    ref_embs (Tensor):
        Reference embeddings (B, ref_embed_dim).
Returns:
    Tensor: Style token embeddings (B, gst_token_dim).
...
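A sketch of the documented GST pipeline end to end; the import path and constructor values are assumptions from this diff:

```python
import paddle
# Import path assumed from this diff's layout; adjust to your version.
from paddlespeech.t2s.modules.style_encoder import StyleEncoder

# Reference mels pass through the conv+GRU reference encoder, then style
# token attention yields one fixed-size style embedding per utterance.
gst = StyleEncoder(idim=80, gst_tokens=10, gst_token_dim=256, gst_heads=4)
mel = paddle.randn([2, 120, 80])    # (B, Lmax, idim)
style = gst(mel)                    # (B, gst_token_dim)
print(style.shape)                  # [2, 256]
```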
@@ -31,10 +31,14 @@ def _apply_attention_constraint(e,
Text-to-Speech with Convolutional Sequence Learning`_.
Args:
    e (Tensor):
        Attention energy before applying softmax (1, T).
    last_attended_idx (int):
        The index of the inputs of the last attended [0, T].
    backward_window (int, optional):
        Backward window size in attention constraint. (Default value = 1)
    forward_window (int, optional):
        Forward window size in attention constraint. (Default value = 3)
Returns:
    Tensor: Monotonic constrained attention energy (1, T).
@@ -62,12 +66,18 @@ class AttLoc(nn.Layer):
(https://arxiv.org/pdf/1506.07503.pdf)
Args:
    eprojs (int):
        projection-units of encoder
    dunits (int):
        units of decoder
    att_dim (int):
        attention dimension
    aconv_chans (int):
        channels of attention convolution
    aconv_filts (int):
        filter size of attention convolution
    han_mode (bool):
        flag to switch on mode of hierarchical attention and not store pre_compute_enc_h
"""
def __init__(self,
@@ -117,18 +127,29 @@ class AttLoc(nn.Layer):
forward_window=3, ):
"""Calculate AttLoc forward propagation.
Args:
    enc_hs_pad (Tensor):
        padded encoder hidden state (B, T_max, D_enc)
    enc_hs_len (Tensor):
        padded encoder hidden state length (B)
    dec_z (Tensor):
        decoder hidden state (B, D_dec)
    att_prev (Tensor):
        previous attention weight (B, T_max)
    scaling (float, optional):
        scaling parameter before applying softmax (Default value = 2.0)
    last_attended_idx (int, optional):
        index of the inputs of the last attended (Default value = None)
    backward_window (int, optional):
        backward window size in attention constraint (Default value = 1)
    forward_window (int, optional):
        forward window size in attention constraint (Default value = 3)
Returns:
    Tensor:
        attention weighted encoder state (B, D_enc)
    Tensor:
        previous attention weights (B, T_max)
"""
batch = paddle.shape(enc_hs_pad)[0]
# pre-compute all h outside the decoder loop
@@ -192,11 +213,16 @@ class AttForward(nn.Layer):
(https://arxiv.org/pdf/1807.06736.pdf)
Args:
    eprojs (int):
        projection-units of encoder
    dunits (int):
        units of decoder
    att_dim (int):
        attention dimension
    aconv_chans (int):
        channels of attention convolution
    aconv_filts (int):
        filter size of attention convolution
"""
def __init__(self, eprojs, dunits, att_dim, aconv_chans, aconv_filts):
@@ -239,18 +265,28 @@ class AttForward(nn.Layer):
"""Calculate AttForward forward propagation.
Args:
    enc_hs_pad (Tensor):
        padded encoder hidden state (B, T_max, D_enc)
    enc_hs_len (list):
        padded encoder hidden state length (B,)
    dec_z (Tensor):
        decoder hidden state (B, D_dec)
    att_prev (Tensor):
        attention weights of previous step (B, T_max)
    scaling (float, optional):
        scaling parameter before applying softmax (Default value = 1.0)
    last_attended_idx (int, optional):
        index of the inputs of the last attended (Default value = None)
    backward_window (int, optional):
        backward window size in attention constraint (Default value = 1)
    forward_window (int, optional):
        forward window size in attention constraint (Default value = 3)
Returns:
    Tensor:
        attention weighted encoder state (B, D_enc)
    Tensor:
        previous attention weights (B, T_max)
"""
batch = len(enc_hs_pad)
# pre-compute all h outside the decoder loop
@@ -321,12 +357,18 @@ class AttForwardTA(nn.Layer):
(https://arxiv.org/pdf/1807.06736.pdf)
Args:
    eunits (int):
        units of encoder
    dunits (int):
        units of decoder
    att_dim (int):
        attention dimension
    aconv_chans (int):
        channels of attention convolution
    aconv_filts (int):
        filter size of attention convolution
    odim (int):
        output dimension
"""
def __init__(self, eunits, dunits, att_dim, aconv_chans, aconv_filts, odim):
@@ -372,19 +414,30 @@ class AttForwardTA(nn.Layer):
"""Calculate AttForwardTA forward propagation.
Args:
    enc_hs_pad (Tensor):
        padded encoder hidden state (B, Tmax, eunits)
    enc_hs_len (list or Tensor):
        padded encoder hidden state length (B,)
    dec_z (Tensor):
        decoder hidden state (B, dunits)
    att_prev (Tensor):
        attention weights of previous step (B, T_max)
    out_prev (Tensor):
        decoder outputs of previous step (B, odim)
    scaling (float, optional):
        scaling parameter before applying softmax (Default value = 1.0)
    last_attended_idx (int, optional):
        index of the inputs of the last attended (Default value = None)
    backward_window (int, optional):
        backward window size in attention constraint (Default value = 1)
    forward_window (int, optional):
        forward window size in attention constraint (Default value = 3)
Returns:
    Tensor:
        attention weighted encoder state (B, dunits)
    Tensor:
        previous attention weights (B, Tmax)
"""
batch = len(enc_hs_pad)
# pre-compute all h outside the decoder loop
...
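One decoder step with AttLoc under the documented shapes; the import path and constructor values are assumptions from this diff, and passing `att_prev=None` assumes the module initializes uniform attention weights (as in the ESPnet implementation this code follows):

```python
import paddle
# Import path assumed from this diff's layout; adjust to your version.
from paddlespeech.t2s.modules.tacotron2.attentions import AttLoc

att = AttLoc(eprojs=512, dunits=1024, att_dim=128, aconv_chans=32,
             aconv_filts=15)
enc_hs = paddle.randn([2, 30, 512])            # (B, T_max, D_enc)
enc_lens = paddle.to_tensor([30, 24])
dec_z = paddle.zeros([2, 1024])                # (B, D_dec)
ctx, att_w = att(enc_hs, enc_lens, dec_z, None)
print(ctx.shape, att_w.shape)                  # [2, 512] [2, 30]
```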
...@@ -45,10 +45,14 @@ class Prenet(nn.Layer): ...@@ -45,10 +45,14 @@ class Prenet(nn.Layer):
"""Initialize prenet module. """Initialize prenet module.
Args: Args:
idim (int): Dimension of the inputs. idim (int):
odim (int): Dimension of the outputs. Dimension of the inputs.
n_layers (int, optional): The number of prenet layers. odim (int):
n_units (int, optional): The number of prenet units. Dimension of the outputs.
n_layers (int, optional):
The number of prenet layers.
n_units (int, optional):
The number of prenet units.
""" """
super().__init__() super().__init__()
self.dropout_rate = dropout_rate self.dropout_rate = dropout_rate
...@@ -62,7 +66,8 @@ class Prenet(nn.Layer): ...@@ -62,7 +66,8 @@ class Prenet(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
x (Tensor): Batch of input tensors (B, ..., idim). x (Tensor):
Batch of input tensors (B, ..., idim).
Returns: Returns:
Tensor: Batch of output tensors (B, ..., odim). Tensor: Batch of output tensors (B, ..., odim).
...@@ -212,7 +217,8 @@ class ZoneOutCell(nn.Layer): ...@@ -212,7 +217,8 @@ class ZoneOutCell(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
inputs (Tensor): Batch of input tensor (B, input_size). inputs (Tensor):
Batch of input tensor (B, input_size).
hidden (tuple): hidden (tuple):
- Tensor: Batch of initial hidden states (B, hidden_size). - Tensor: Batch of initial hidden states (B, hidden_size).
- Tensor: Batch of initial cell states (B, hidden_size). - Tensor: Batch of initial cell states (B, hidden_size).
...@@ -277,26 +283,39 @@ class Decoder(nn.Layer): ...@@ -277,26 +283,39 @@ class Decoder(nn.Layer):
"""Initialize Tacotron2 decoder module. """Initialize Tacotron2 decoder module.
Args: Args:
idim (int): Dimension of the inputs. idim (int):
odim (int): Dimension of the outputs. Dimension of the inputs.
att (nn.Layer): Instance of attention class. odim (int):
dlayers (int, optional): The number of decoder lstm layers. Dimension of the outputs.
dunits (int, optional): The number of decoder lstm units. att (nn.Layer):
prenet_layers (int, optional): The number of prenet layers. Instance of attention class.
prenet_units (int, optional): The number of prenet units. dlayers (int, optional):
postnet_layers (int, optional): The number of postnet layers. The number of decoder lstm layers.
postnet_filts (int, optional): The number of postnet filter size. dunits (int, optional):
postnet_chans (int, optional): The number of postnet filter channels. The number of decoder lstm units.
output_activation_fn (nn.Layer, optional): Activation function for outputs. prenet_layers (int, optional):
cumulate_att_w (bool, optional): Whether to cumulate previous attention weight. The number of prenet layers.
use_batch_norm (bool, optional): Whether to use batch normalization. prenet_units (int, optional):
use_concate : bool, optional The number of prenet units.
postnet_layers (int, optional):
The number of postnet layers.
postnet_filts (int, optional):
The number of postnet filter size.
postnet_chans (int, optional):
The number of postnet filter channels.
output_activation_fn (nn.Layer, optional):
Activation function for outputs.
cumulate_att_w (bool, optional):
Whether to cumulate previous attention weight.
use_batch_norm (bool, optional):
Whether to use batch normalization.
use_concate (bool, optional):
Whether to concatenate encoder embedding with decoder lstm outputs. Whether to concatenate encoder embedding with decoder lstm outputs.
dropout_rate : float, optional dropout_rate (float, optional):
Dropout rate. Dropout rate.
zoneout_rate : float, optional zoneout_rate (float, optional):
Zoneout rate. Zoneout rate.
reduction_factor : int, optional reduction_factor (int, optional):
Reduction factor. Reduction factor.
""" """
super().__init__() super().__init__()
...@@ -363,15 +382,22 @@ class Decoder(nn.Layer): ...@@ -363,15 +382,22 @@ class Decoder(nn.Layer):
"""Calculate forward propagation. """Calculate forward propagation.
Args: Args:
hs (Tensor): Batch of the sequences of padded hidden states (B, Tmax, idim). hs (Tensor):
hlens (Tensor(int64) padded): Batch of lengths of each input batch (B,). Batch of the sequences of padded hidden states (B, Tmax, idim).
ys (Tensor): Batch of the sequences of padded target features (B, Lmax, odim). hlens (Tensor(int64) padded):
Batch of lengths of each input batch (B,).
ys (Tensor):
Batch of the sequences of padded target features (B, Lmax, odim).
Returns: Returns:
Tensor: Batch of output tensors after postnet (B, Lmax, odim). Tensor:
Tensor: Batch of output tensors before postnet (B, Lmax, odim). Batch of output tensors after postnet (B, Lmax, odim).
Tensor: Batch of logits of stop prediction (B, Lmax). Tensor:
Tensor: Batch of attention weights (B, Lmax, Tmax). Batch of output tensors before postnet (B, Lmax, odim).
Tensor:
Batch of logits of stop prediction (B, Lmax).
Tensor:
Batch of attention weights (B, Lmax, Tmax).
Note: Note:
This computation is performed in teacher-forcing manner. This computation is performed in teacher-forcing manner.
@@ -471,20 +497,30 @@ class Decoder(nn.Layer):
                  forward_window=None, ):
        """Generate the sequence of features given the sequences of characters.
        Args:
            h (Tensor):
                Input sequence of encoder hidden states (T, C).
            threshold (float, optional):
                Threshold to stop generation. (Default value = 0.5)
            minlenratio (float, optional):
                Minimum length ratio. If set to 1.0 and the length of input is 10,
                the minimum length of outputs will be 10 * 1 = 10. (Default value = 0.0)
            maxlenratio (float, optional):
                Maximum length ratio. If set to 10 and the length of input is 10,
                the maximum length of outputs will be 10 * 10 = 100. (Default value = 0.0)
            use_att_constraint (bool, optional):
                Whether to apply the attention constraint introduced in `Deep Voice 3`_. (Default value = False)
            backward_window (int, optional):
                Backward window size in the attention constraint. (Default value = None)
            forward_window (int, optional):
                Forward window size in the attention constraint. (Default value = None)
        Returns:
            Tensor:
                Output sequence of features (L, odim).
            Tensor:
                Output sequence of stop probabilities (L,).
            Tensor:
                Attention weights (L, T).
        Note:
            This computation is performed in an auto-regressive manner.
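The length bounds implied by these ratios are simple products; a quick sketch of the arithmetic (plain Python, helper name hypothetical):

    def decode_length_bounds(input_len, minlenratio, maxlenratio):
        # e.g. input_len=10, minlenratio=1.0, maxlenratio=10 -> (10, 100)
        minlen = int(input_len * minlenratio)
        maxlen = int(input_len * maxlenratio)
        return minlen, maxlen

    print(decode_length_bounds(10, 1.0, 10))  # (10, 100)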
@@ -625,9 +661,12 @@ class Decoder(nn.Layer):
        """Calculate all of the attention weights.
        Args:
            hs (Tensor):
                Batch of the sequences of padded hidden states (B, Tmax, idim).
            hlens (Tensor(int64)):
                Batch of lengths of each input batch (B,).
            ys (Tensor):
                Batch of the sequences of padded target features (B, Lmax, odim).
        Returns:
            numpy.ndarray:
...
@@ -46,17 +46,28 @@ class Encoder(nn.Layer):
                 padding_idx=0, ):
        """Initialize Tacotron2 encoder module.
        Args:
            idim (int):
                Dimension of the inputs.
            input_layer (str):
                Input layer type.
            embed_dim (int, optional):
                Dimension of character embedding.
            elayers (int, optional):
                The number of encoder BLSTM layers.
            eunits (int, optional):
                The number of encoder BLSTM units.
            econv_layers (int, optional):
                The number of encoder conv layers.
            econv_filts (int, optional):
                The filter size of encoder conv layers.
            econv_chans (int, optional):
                The number of encoder conv filter channels.
            use_batch_norm (bool, optional):
                Whether to use batch normalization.
            use_residual (bool, optional):
                Whether to use residual connections.
            dropout_rate (float, optional):
                Dropout rate.
        """
        super().__init__()
@@ -127,14 +138,18 @@ class Encoder(nn.Layer):
        """Calculate forward propagation.
        Args:
            xs (Tensor):
                Batch of the padded sequences. Either character ids (B, Tmax)
                or acoustic features (B, Tmax, idim * encoder_reduction_factor).
                Padded value should be 0.
            ilens (Tensor(int64)):
                Batch of lengths of each input batch (B,).
        Returns:
            Tensor:
                Batch of the sequences of encoder states (B, Tmax, eunits).
            Tensor(int64):
                Batch of lengths of each sequence (B,).
        """
        xs = self.embed(xs).transpose([0, 2, 1])
        if self.convs is not None:
@@ -161,8 +176,8 @@ class Encoder(nn.Layer):
        """Inference.
        Args:
            x (Tensor):
                The sequence of character ids (T,) or acoustic features (T, idim * encoder_reduction_factor).
        Returns:
            Tensor: The sequences of encoder states (T, eunits).
...
@@ -60,11 +60,15 @@ class TADELayer(nn.Layer):
    def forward(self, x, c):
        """Calculate forward propagation.
        Args:
            x (Tensor):
                Input tensor (B, in_channels, T).
            c (Tensor):
                Auxiliary input tensor (B, aux_channels, T).
        Returns:
            Tensor:
                Output tensor (B, in_channels, T * upsample_factor).
            Tensor:
                Upsampled aux tensor (B, in_channels, T * upsample_factor).
        """
        x = self.norm(x)
@@ -138,11 +142,15 @@ class TADEResBlock(nn.Layer):
        """Calculate forward propagation.
        Args:
            x (Tensor):
                Input tensor (B, in_channels, T).
            c (Tensor):
                Auxiliary input tensor (B, aux_channels, T).
        Returns:
            Tensor:
                Output tensor (B, in_channels, T * upsample_factor).
            Tensor:
                Upsampled auxiliary tensor (B, in_channels, T * upsample_factor).
        """
        residual = x
        x, c = self.tade1(x, c)
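For intuition, a TADE layer modulates the normalized input with a scale and shift computed from the auxiliary features; a rough paddle sketch of that idea (layer names and sizes hypothetical, not the exact PaddleSpeech implementation):

    import paddle.nn as nn

    class TinyTADE(nn.Layer):
        def __init__(self, in_channels=64, aux_channels=80):
            super().__init__()
            self.norm = nn.InstanceNorm1D(in_channels)
            # project auxiliary features to per-channel scale (gamma) and shift (beta)
            self.gamma = nn.Conv1D(aux_channels, in_channels, 3, padding=1)
            self.beta = nn.Conv1D(aux_channels, in_channels, 3, padding=1)

        def forward(self, x, c):
            # x: (B, in_channels, T), c: (B, aux_channels, T)
            x = self.norm(x)
            return self.gamma(c) * x + self.beta(c)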
...
@@ -25,9 +25,12 @@ from paddlespeech.t2s.modules.masked_fill import masked_fill
class MultiHeadedAttention(nn.Layer):
    """Multi-Head Attention layer.
    Args:
        n_head (int):
            The number of heads.
        n_feat (int):
            The number of features.
        dropout_rate (float):
            Dropout rate.
    """
    def __init__(self, n_head, n_feat, dropout_rate):
@@ -48,14 +51,20 @@ class MultiHeadedAttention(nn.Layer):
        """Transform query, key and value.
        Args:
            query (Tensor):
                Query tensor (#batch, time1, size).
            key (Tensor):
                Key tensor (#batch, time2, size).
            value (Tensor):
                Value tensor (#batch, time2, size).
        Returns:
            Tensor:
                Transformed query tensor (#batch, n_head, time1, d_k).
            Tensor:
                Transformed key tensor (#batch, n_head, time2, d_k).
            Tensor:
                Transformed value tensor (#batch, n_head, time2, d_k).
        """
        n_batch = paddle.shape(query)[0]
@@ -77,9 +86,12 @@ class MultiHeadedAttention(nn.Layer):
        """Compute attention context vector.
        Args:
            value (Tensor):
                Transformed value (#batch, n_head, time2, d_k).
            scores (Tensor):
                Attention score (#batch, n_head, time1, time2).
            mask (Tensor, optional):
                Mask (#batch, 1, time2) or (#batch, time1, time2). (Default value = None)
        Returns:
            Tensor: Transformed value (#batch, time1, d_model) weighted by the attention score (#batch, time1, time2).
@@ -113,10 +125,14 @@ class MultiHeadedAttention(nn.Layer):
        """Compute scaled dot product attention.
        Args:
            query (Tensor):
                Query tensor (#batch, time1, size).
            key (Tensor):
                Key tensor (#batch, time2, size).
            value (Tensor):
                Value tensor (#batch, time2, size).
            mask (Tensor, optional):
                Mask tensor (#batch, 1, time2) or (#batch, time1, time2). (Default value = None)
        Returns:
            Tensor: Output tensor (#batch, time1, d_model).
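For reference, the computation these docstrings describe is standard scaled dot-product attention; a minimal single-head numpy sketch (illustrative only, helper name hypothetical; the real layer adds heads, linear projections and dropout):

    import numpy as np

    def scaled_dot_product_attention(q, k, v, mask=None):
        # q: (time1, d_k), k and v: (time2, d_k)
        scores = q @ k.T / np.sqrt(q.shape[-1])           # (time1, time2)
        if mask is not None:
            # assumes every row keeps at least one unmasked position
            scores = np.where(mask, scores, -1e9)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w = w / w.sum(axis=-1, keepdims=True)             # softmax over time2
        return w @ v                                      # (time1, d_k)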
@@ -134,10 +150,14 @@ class RelPositionMultiHeadedAttention(MultiHeadedAttention):
    Paper: https://arxiv.org/abs/1901.02860
    Args:
        n_head (int):
            The number of heads.
        n_feat (int):
            The number of features.
        dropout_rate (float):
            Dropout rate.
        zero_triu (bool):
            Whether to zero the upper triangular part of the attention matrix.
    """
    def __init__(self, n_head, n_feat, dropout_rate, zero_triu=False):
@@ -161,10 +181,11 @@ class RelPositionMultiHeadedAttention(MultiHeadedAttention):
    def rel_shift(self, x):
        """Compute relative positional encoding.
        Args:
            x (Tensor):
                Input tensor (batch, head, time1, 2*time1-1).
        Returns:
            Tensor: Output tensor.
        """
        b, h, t1, t2 = paddle.shape(x)
        zero_pad = paddle.zeros((b, h, t1, 1))
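The zero-pad is the start of the usual Transformer-XL shift trick: pad one column, reshape so each row slides by one, then keep the valid positions. A numpy sketch of the mechanics (illustrative, not the exact Paddle code):

    import numpy as np

    def rel_shift_np(x):
        # x: (batch, head, time1, 2*time1-1) scores over relative positions
        b, h, t1, t2 = x.shape
        zero_pad = np.zeros((b, h, t1, 1), dtype=x.dtype)
        x_padded = np.concatenate([zero_pad, x], axis=-1)   # (b, h, t1, t2 + 1)
        x_padded = x_padded.reshape(b, h, t2 + 1, t1)       # rows slide by one
        x = x_padded[:, :, 1:].reshape(b, h, t1, t2)
        return x[:, :, :, : t2 // 2 + 1]                    # (b, h, t1, time1)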
@@ -183,11 +204,16 @@ class RelPositionMultiHeadedAttention(MultiHeadedAttention):
        """Compute 'Scaled Dot Product Attention' with rel. positional encoding.
        Args:
            query (Tensor):
                Query tensor (#batch, time1, size).
            key (Tensor):
                Key tensor (#batch, time2, size).
            value (Tensor):
                Value tensor (#batch, time2, size).
            pos_emb (Tensor):
                Positional embedding tensor (#batch, 2*time1-1, size).
            mask (Tensor):
                Mask tensor (#batch, 1, time2) or (#batch, time1, time2).
        Returns:
            Tensor: Output tensor (#batch, time1, d_model).
@@ -228,10 +254,14 @@ class LegacyRelPositionMultiHeadedAttention(MultiHeadedAttention):
    Paper: https://arxiv.org/abs/1901.02860
    Args:
        n_head (int):
            The number of heads.
        n_feat (int):
            The number of features.
        dropout_rate (float):
            Dropout rate.
        zero_triu (bool):
            Whether to zero the upper triangular part of the attention matrix.
    """
    def __init__(self, n_head, n_feat, dropout_rate, zero_triu=False):
@@ -255,8 +285,8 @@ class LegacyRelPositionMultiHeadedAttention(MultiHeadedAttention):
    def rel_shift(self, x):
        """Compute relative positional encoding.
        Args:
            x (Tensor):
                Input tensor (batch, head, time1, time2).
        Returns:
            Tensor: Output tensor.
        """
...
@@ -37,28 +37,46 @@ class Decoder(nn.Layer):
    """Transformer decoder module.
    Args:
        odim (int):
            Output dimension.
        self_attention_layer_type (str):
            Self-attention layer type.
        attention_dim (int):
            Dimension of attention.
        attention_heads (int):
            The number of heads of multi head attention.
        conv_wshare (int):
            The number of convolution kernels. Only used when
            self_attention_layer_type == "lightconv*" or "dynamicconv*".
        conv_kernel_length (Union[int, str]):
            Kernel size string of convolutions
            (e.g. "71_71_71_71_71_71"). Only used when self_attention_layer_type == "lightconv*" or "dynamicconv*".
        conv_usebias (bool):
            Whether to use bias in convolutions. Only used when
            self_attention_layer_type == "lightconv*" or "dynamicconv*".
        linear_units (int):
            The number of units of position-wise feed forward.
        num_blocks (int):
            The number of decoder blocks.
        dropout_rate (float):
            Dropout rate.
        positional_dropout_rate (float):
            Dropout rate after adding positional encoding.
        self_attention_dropout_rate (float):
            Dropout rate in self-attention.
        src_attention_dropout_rate (float):
            Dropout rate in source-attention.
        input_layer (Union[str, nn.Layer]):
            Input layer type.
        use_output_layer (bool):
            Whether to use output layer.
        pos_enc_class (nn.Layer):
            Positional encoding module class:
            `PositionalEncoding` or `ScaledPositionalEncoding`.
        normalize_before (bool):
            Whether to use layer_norm before the first block.
        concat_after (bool):
            Whether to concat attention layer's input and output.
            If True, an additional linear layer will be applied,
            i.e. x -> x + linear(concat(x, att(x))).
            If False, no additional linear layer is applied, i.e. x -> x + att(x).
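To make the concat_after option concrete, here is a minimal paddle sketch of the two residual variants (illustrative only; the real layers also handle masks, caching and normalize_before):

    import paddle
    import paddle.nn as nn

    class ResidualSelfAttention(nn.Layer):
        def __init__(self, size, attn, concat_after=False):
            super().__init__()
            self.attn = attn  # e.g. a MultiHeadedAttention instance
            self.concat_after = concat_after
            if concat_after:
                self.concat_linear = nn.Linear(size + size, size)

        def forward(self, x, mask=None):
            att = self.attn(x, x, x, mask)  # same shape as x
            if self.concat_after:
                # x -> x + linear(concat(x, att(x)))
                return x + self.concat_linear(paddle.concat([x, att], axis=-1))
            # x -> x + att(x)
            return x + att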
@@ -143,17 +161,22 @@ class Decoder(nn.Layer):
    def forward(self, tgt, tgt_mask, memory, memory_mask):
        """Forward decoder.
        Args:
            tgt (Tensor):
                Input token ids, int64 (#batch, maxlen_out) if input_layer == "embed".
                Otherwise, input tensor (#batch, maxlen_out, odim).
            tgt_mask (Tensor):
                Input token mask (#batch, maxlen_out).
            memory (Tensor):
                Encoded memory, float32 (#batch, maxlen_in, feat).
            memory_mask (Tensor):
                Encoded memory mask (#batch, maxlen_in).
        Returns:
            Tensor:
                Decoded token scores before softmax (#batch, maxlen_out, odim) if use_output_layer is True.
                Otherwise, final block outputs (#batch, maxlen_out, attention_dim).
            Tensor:
                Score mask before softmax (#batch, maxlen_out).
        """
        x = self.embed(tgt)
@@ -169,14 +192,20 @@ class Decoder(nn.Layer):
        """Forward one step.
        Args:
            tgt (Tensor):
                Input token ids, int64 (#batch, maxlen_out).
            tgt_mask (Tensor):
                Input token mask (#batch, maxlen_out).
            memory (Tensor):
                Encoded memory, float32 (#batch, maxlen_in, feat).
            cache (List[Tensor], optional):
                List of cached tensors. (Default value = None)
        Returns:
            Tensor:
                Output tensor (batch, maxlen_out, odim).
            List[Tensor]:
                List of cache tensors of each decoder layer.
        """
        x = self.embed(tgt)
@@ -219,9 +248,12 @@ class Decoder(nn.Layer):
        """Score new token batch (required).
        Args:
            ys (Tensor):
                paddle.int64 prefix tokens (n_batch, ylen).
            states (List[Any]):
                Scorer states for prefix tokens.
            xs (Tensor):
                The encoder feature that generates ys (n_batch, xlen, n_feat).
        Returns:
            tuple[Tensor, List[Any]]:
...
@@ -24,16 +24,23 @@ class DecoderLayer(nn.Layer):
    Args:
        size (int):
            Input dimension.
        self_attn (nn.Layer):
            Self-attention module instance.
            `MultiHeadedAttention` instance can be used as the argument.
        src_attn (nn.Layer):
            Source-attention module instance.
            `MultiHeadedAttention` instance can be used as the argument.
        feed_forward (nn.Layer):
            Feed-forward module instance.
            `PositionwiseFeedForward`, `MultiLayeredConv1d`, or `Conv1dLinear` instance can be used as the argument.
        dropout_rate (float):
            Dropout rate.
        normalize_before (bool):
            Whether to use layer_norm before the first block.
        concat_after (bool):
            Whether to concat attention layer's input and output.
            If True, an additional linear layer will be applied,
            i.e. x -> x + linear(concat(x, att(x))).
            If False, no additional linear layer is applied, i.e. x -> x + att(x).
@@ -69,11 +76,16 @@ class DecoderLayer(nn.Layer):
        """Compute decoded features.
        Args:
            tgt (Tensor):
                Input tensor (#batch, maxlen_out, size).
            tgt_mask (Tensor):
                Mask for input tensor (#batch, maxlen_out).
            memory (Tensor):
                Encoded memory, float32 (#batch, maxlen_in, size).
            memory_mask (Tensor):
                Encoded memory mask (#batch, maxlen_in).
            cache (List[Tensor], optional):
                List of cached tensors.
                Each tensor shape should be (#batch, maxlen_out - 1, size). (Default value = None)
        Returns:
            Tensor
...
@@ -23,11 +23,16 @@ class PositionalEncoding(nn.Layer):
    """Positional encoding.
    Args:
        d_model (int):
            Embedding dimension.
        dropout_rate (float):
            Dropout rate.
        max_len (int):
            Maximum input length.
        reverse (bool):
            Whether to reverse the input position.
        type (str):
            dtype of the parameters.
    """
    def __init__(self,
@@ -68,7 +73,8 @@ class PositionalEncoding(nn.Layer):
        """Add positional encoding.
        Args:
            x (Tensor):
                Input tensor (batch, time, `*`).
        Returns:
            Tensor: Encoded tensor (batch, time, `*`).
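These classes precompute the standard sinusoidal table that forward then adds to the (scaled) input; a numpy sketch of the table (assuming an even d_model; dropout and scaling omitted):

    import numpy as np

    def sinusoidal_table(max_len=5000, d_model=256):
        # pe[t, 2i] = sin(t / 10000^(2i/d_model)), pe[t, 2i+1] = cos(...)
        position = np.arange(max_len)[:, None]
        div_term = np.exp(np.arange(0, d_model, 2) * -(np.log(10000.0) / d_model))
        pe = np.zeros((max_len, d_model))
        pe[:, 0::2] = np.sin(position * div_term)
        pe[:, 1::2] = np.cos(position * div_term)
        return pe  # (max_len, d_model), sliced to the input length at forward time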
@@ -84,10 +90,14 @@ class ScaledPositionalEncoding(PositionalEncoding):
    See Sec. 3.2 https://arxiv.org/abs/1809.08895
    Args:
        d_model (int):
            Embedding dimension.
        dropout_rate (float):
            Dropout rate.
        max_len (int):
            Maximum input length.
        dtype (str):
            dtype of the parameters.
    """
    def __init__(self, d_model, dropout_rate, max_len=5000, dtype="float32"):
@@ -111,7 +121,8 @@ class ScaledPositionalEncoding(PositionalEncoding):
        """Add positional encoding.
        Args:
            x (Tensor):
                Input tensor (batch, time, `*`).
        Returns:
            Tensor: Encoded tensor (batch, time, `*`).
        """
@@ -127,9 +138,12 @@ class RelPositionalEncoding(nn.Layer):
    See: Appendix B in https://arxiv.org/abs/1901.02860
    Args:
        d_model (int):
            Embedding dimension.
        dropout_rate (float):
            Dropout rate.
        max_len (int):
            Maximum input length.
    """
    def __init__(self, d_model, dropout_rate, max_len=5000, dtype="float32"):
@@ -175,7 +189,8 @@ class RelPositionalEncoding(nn.Layer):
    def forward(self, x: paddle.Tensor):
        """Add positional encoding.
        Args:
            x (Tensor):
                Input tensor (batch, time, `*`).
        Returns:
            Tensor: Encoded tensor (batch, time, `*`).
        """
@@ -195,18 +210,24 @@ class LegacyRelPositionalEncoding(PositionalEncoding):
    See: Appendix B in https://arxiv.org/abs/1901.02860
    Args:
        d_model (int):
            Embedding dimension.
        dropout_rate (float):
            Dropout rate.
        max_len (int):
            Maximum input length.
    """
    def __init__(self, d_model: int, dropout_rate: float, max_len: int=5000):
        """
        Args:
            d_model (int):
                Embedding dimension.
            dropout_rate (float):
                Dropout rate.
            max_len (int, optional):
                Maximum input length. Defaults to 5000.
        """
        super().__init__(d_model, dropout_rate, max_len, reverse=True)
@@ -234,10 +255,13 @@ class LegacyRelPositionalEncoding(PositionalEncoding):
    def forward(self, x: paddle.Tensor):
        """Compute positional encoding.
        Args:
            x (Tensor):
                Input tensor (batch, time, `*`).
        Returns:
            Tensor:
                Encoded tensor (batch, time, `*`).
            Tensor:
                Positional embedding tensor (1, time, `*`).
        """
        self.extend_pe(x)
        x = x * self.xscale
...
@@ -38,32 +38,55 @@ class BaseEncoder(nn.Layer):
    """Base Encoder module.
    Args:
        idim (int):
            Input dimension.
        attention_dim (int):
            Dimension of attention.
        attention_heads (int):
            The number of heads of multi head attention.
        linear_units (int):
            The number of units of position-wise feed forward.
        num_blocks (int):
            The number of encoder blocks.
        dropout_rate (float):
            Dropout rate.
        positional_dropout_rate (float):
            Dropout rate after adding positional encoding.
        attention_dropout_rate (float):
            Dropout rate in attention.
        input_layer (Union[str, nn.Layer]):
            Input layer type.
        normalize_before (bool):
            Whether to use layer_norm before the first block.
        concat_after (bool):
            Whether to concat attention layer's input and output.
            If True, an additional linear layer will be applied,
            i.e. x -> x + linear(concat(x, att(x))).
            If False, no additional linear layer is applied, i.e. x -> x + att(x).
        positionwise_layer_type (str):
            "linear", "conv1d", or "conv1d-linear".
        positionwise_conv_kernel_size (int):
            Kernel size of positionwise conv1d layer.
        macaron_style (bool):
            Whether to use macaron style for positionwise layer.
        pos_enc_layer_type (str):
            Encoder positional encoding layer type.
        selfattention_layer_type (str):
            Encoder attention layer type.
        activation_type (str):
            Encoder activation function type.
        use_cnn_module (bool):
            Whether to use convolution module.
        zero_triu (bool):
            Whether to zero the upper triangular part of the attention matrix.
        cnn_module_kernel (int):
            Kernel size of convolution module.
        padding_idx (int):
            Padding idx for input_layer=embed.
        stochastic_depth_rate (float):
            Maximum probability to skip the encoder layer.
        intermediate_layers (Union[List[int], None]):
            Indices of intermediate CTC layers; indices start from 1.
            If not None, intermediate outputs are returned (which changes
            the return type signature).
@@ -266,12 +289,16 @@ class BaseEncoder(nn.Layer):
        """Encode input sequence.
        Args:
            xs (Tensor):
                Input tensor (#batch, time, idim).
            masks (Tensor):
                Mask tensor (#batch, 1, time).
        Returns:
            Tensor:
                Output tensor (#batch, time, attention_dim).
            Tensor:
                Mask tensor (#batch, 1, time).
        """
        xs = self.embed(xs)
        xs, masks = self.encoders(xs, masks)
@@ -284,26 +311,43 @@ class TransformerEncoder(BaseEncoder):
    """Transformer encoder module.
    Args:
        idim (int):
            Input dimension.
        attention_dim (int):
            Dimension of attention.
        attention_heads (int):
            The number of heads of multi head attention.
        linear_units (int):
            The number of units of position-wise feed forward.
        num_blocks (int):
            The number of encoder blocks.
        dropout_rate (float):
            Dropout rate.
        positional_dropout_rate (float):
            Dropout rate after adding positional encoding.
        attention_dropout_rate (float):
            Dropout rate in attention.
        input_layer (Union[str, paddle.nn.Layer]):
            Input layer type.
        pos_enc_layer_type (str):
            Encoder positional encoding layer type.
        normalize_before (bool):
            Whether to use layer_norm before the first block.
        concat_after (bool):
            Whether to concat attention layer's input and output.
            If True, an additional linear layer will be applied,
            i.e. x -> x + linear(concat(x, att(x))).
            If False, no additional linear layer is applied, i.e. x -> x + att(x).
        positionwise_layer_type (str):
            "linear", "conv1d", or "conv1d-linear".
        positionwise_conv_kernel_size (int):
            Kernel size of positionwise conv1d layer.
        selfattention_layer_type (str):
            Encoder attention layer type.
        activation_type (str):
            Encoder activation function type.
        padding_idx (int):
            Padding idx for input_layer=embed.
    """
    def __init__(
@@ -350,12 +394,16 @@ class TransformerEncoder(BaseEncoder):
        """Encode input sequence.
        Args:
            xs (Tensor):
                Input tensor (#batch, time, idim).
            masks (Tensor):
                Mask tensor (#batch, 1, time).
        Returns:
            Tensor:
                Output tensor (#batch, time, attention_dim).
            Tensor:
                Mask tensor (#batch, 1, time).
        """
        xs = self.embed(xs)
        xs, masks = self.encoders(xs, masks)
@@ -367,14 +415,20 @@ class TransformerEncoder(BaseEncoder):
        """Encode input frame.
        Args:
            xs (Tensor):
                Input tensor.
            masks (Tensor):
                Mask tensor.
            cache (List[Tensor]):
                List of cache tensors.
        Returns:
            Tensor:
                Output tensor.
            Tensor:
                Mask tensor.
            List[Tensor]:
                List of new cache tensors.
        """
        xs = self.embed(xs)
@@ -393,32 +447,55 @@ class ConformerEncoder(BaseEncoder):
    """Conformer encoder module.
    Args:
        idim (int):
            Input dimension.
        attention_dim (int):
            Dimension of attention.
        attention_heads (int):
            The number of heads of multi head attention.
        linear_units (int):
            The number of units of position-wise feed forward.
        num_blocks (int):
            The number of encoder blocks.
        dropout_rate (float):
            Dropout rate.
        positional_dropout_rate (float):
            Dropout rate after adding positional encoding.
        attention_dropout_rate (float):
            Dropout rate in attention.
        input_layer (Union[str, nn.Layer]):
            Input layer type.
        normalize_before (bool):
            Whether to use layer_norm before the first block.
        concat_after (bool):
            Whether to concat attention layer's input and output.
            If True, an additional linear layer will be applied,
            i.e. x -> x + linear(concat(x, att(x))).
            If False, no additional linear layer is applied, i.e. x -> x + att(x).
        positionwise_layer_type (str):
            "linear", "conv1d", or "conv1d-linear".
        positionwise_conv_kernel_size (int):
            Kernel size of positionwise conv1d layer.
        macaron_style (bool):
            Whether to use macaron style for positionwise layer.
        pos_enc_layer_type (str):
            Encoder positional encoding layer type.
        selfattention_layer_type (str):
            Encoder attention layer type.
        activation_type (str):
            Encoder activation function type.
        use_cnn_module (bool):
            Whether to use convolution module.
        zero_triu (bool):
            Whether to zero the upper triangular part of the attention matrix.
        cnn_module_kernel (int):
            Kernel size of convolution module.
        padding_idx (int):
            Padding idx for input_layer=embed.
        stochastic_depth_rate (float):
            Maximum probability to skip the encoder layer.
        intermediate_layers (Union[List[int], None]):
            Indices of intermediate CTC layers; indices start from 1.
            If not None, intermediate outputs are returned (which changes the return type signature).
    """
@@ -478,11 +555,15 @@ class ConformerEncoder(BaseEncoder):
        """Encode input sequence.
        Args:
            xs (Tensor):
                Input tensor (#batch, time, idim).
            masks (Tensor):
                Mask tensor (#batch, 1, time).
        Returns:
            Tensor:
                Output tensor (#batch, time, attention_dim).
            Tensor:
                Mask tensor (#batch, 1, time).
        """
        if isinstance(self.embed, (Conv2dSubsampling)):
            xs, masks = self.embed(xs, masks)
@@ -539,7 +620,8 @@ class Conv1dResidualBlock(nn.Layer):
    def forward(self, xs):
        """Encode input sequence.
        Args:
            xs (Tensor):
                Input tensor (#batch, idim, T).
        Returns:
            Tensor: Output tensor (#batch, odim, T).
        """
@@ -582,8 +664,10 @@ class CNNDecoder(nn.Layer):
    def forward(self, xs, masks=None):
        """Encode input sequence.
        Args:
            xs (Tensor):
                Input tensor (#batch, time, idim).
            masks (Tensor):
                Mask tensor (#batch, 1, time).
        Returns:
            Tensor: Output tensor (#batch, time, odim).
        """
@@ -629,8 +713,10 @@ class CNNPostnet(nn.Layer):
    def forward(self, xs, masks=None):
        """Encode input sequence.
        Args:
            xs (Tensor):
                Input tensor (#batch, odim, time).
            masks (Tensor):
                Mask tensor (#batch, 1, time).
        Returns:
            Tensor: Output tensor (#batch, odim, time).
        """
...
@@ -21,14 +21,20 @@ class EncoderLayer(nn.Layer):
    """Encoder layer module.
    Args:
        size (int):
            Input dimension.
        self_attn (nn.Layer):
            Self-attention module instance.
            `MultiHeadedAttention` instance can be used as the argument.
        feed_forward (nn.Layer):
            Feed-forward module instance.
            `PositionwiseFeedForward`, `MultiLayeredConv1d`, or `Conv1dLinear` instance can be used as the argument.
        dropout_rate (float):
            Dropout rate.
        normalize_before (bool):
            Whether to use layer_norm before the first block.
        concat_after (bool):
            Whether to concat attention layer's input and output.
            If True, an additional linear layer will be applied,
            i.e. x -> x + linear(concat(x, att(x))).
            If False, no additional linear layer is applied, i.e. x -> x + att(x).
@@ -59,13 +65,18 @@ class EncoderLayer(nn.Layer):
        """Compute encoded features.
        Args:
            x (Tensor):
                Input tensor (#batch, time, size).
            mask (Tensor):
                Mask tensor for the input (#batch, time).
            cache (Tensor, optional):
                Cache tensor of the input (#batch, time - 1, size).
        Returns:
            Tensor:
                Output tensor (#batch, time, size).
            Tensor:
                Mask tensor (#batch, time).
        """
        residual = x
        if self.normalize_before:
...
@@ -31,12 +31,18 @@ class LightweightConvolution(nn.Layer):
    https://github.com/pytorch/fairseq/tree/master/fairseq
    Args:
        wshare (int):
            The number of convolution kernels.
        n_feat (int):
            The number of features.
        dropout_rate (float):
            Dropout rate.
        kernel_size (int):
            Kernel size (length).
        use_kernel_mask (bool):
            Whether to use a causal mask for the convolution kernel.
        use_bias (bool):
            Whether to use a bias term.
    """
@@ -94,10 +100,14 @@ class LightweightConvolution(nn.Layer):
        This is just for compatibility with the self-attention layer (attention.py).
        Args:
            query (Tensor):
                Input tensor. (batch, time1, d_model)
            key (Tensor):
                NOT USED. (batch, time2, d_model)
            value (Tensor):
                NOT USED. (batch, time2, d_model)
            mask (Tensor):
                Mask (batch, time1, time2).
        Return:
            Tensor: Output tensor. (batch, time1, d_model)
...
@@ -19,8 +19,10 @@ def subsequent_mask(size, dtype=paddle.bool):
    """Create mask for subsequent steps (size, size).
    Args:
        size (int):
            Size of the mask.
        dtype (paddle.dtype):
            Result dtype.
    Return:
        Tensor:
        >>> subsequent_mask(3)
@@ -36,9 +38,12 @@ def target_mask(ys_in_pad, ignore_id, dtype=paddle.bool):
    """Create mask for decoder self-attention.
    Args:
        ys_pad (Tensor):
            Batch of padded target sequences (B, Lmax).
        ignore_id (int):
            Index of padding.
        dtype (paddle.dtype):
            Result dtype.
    Return:
        Tensor: (B, Lmax, Lmax)
    """
...
@@ -32,10 +32,14 @@ class MultiLayeredConv1d(nn.Layer):
        """Initialize MultiLayeredConv1d module.
        Args:
            in_chans (int):
                Number of input channels.
            hidden_chans (int):
                Number of hidden channels.
            kernel_size (int):
                Kernel size of conv1d.
            dropout_rate (float):
                Dropout rate.
        """
        super().__init__()
@@ -58,7 +62,8 @@ class MultiLayeredConv1d(nn.Layer):
        """Calculate forward propagation.
        Args:
            x (Tensor):
                Batch of input tensors (B, T, in_chans).
        Returns:
            Tensor: Batch of output tensors (B, T, in_chans).
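A minimal paddle sketch of this conv1d position-wise feed-forward (hyperparameters hypothetical; mirrors the shapes documented above):

    import paddle.nn as nn
    import paddle.nn.functional as F

    class TinyMultiLayeredConv1d(nn.Layer):
        def __init__(self, in_chans=256, hidden_chans=1024, kernel_size=3,
                     dropout_rate=0.1):
            super().__init__()
            pad = (kernel_size - 1) // 2
            self.w_1 = nn.Conv1D(in_chans, hidden_chans, kernel_size, padding=pad)
            self.w_2 = nn.Conv1D(hidden_chans, in_chans, kernel_size, padding=pad)
            self.dropout = nn.Dropout(dropout_rate)

        def forward(self, x):
            # x: (B, T, in_chans); Conv1D expects (B, C, T), hence the transposes
            x = F.relu(self.w_1(x.transpose([0, 2, 1])))
            return self.w_2(self.dropout(x)).transpose([0, 2, 1])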
@@ -79,10 +84,14 @@ class Conv1dLinear(nn.Layer):
        """Initialize Conv1dLinear module.
        Args:
            in_chans (int):
                Number of input channels.
            hidden_chans (int):
                Number of hidden channels.
            kernel_size (int):
                Kernel size of conv1d.
            dropout_rate (float):
                Dropout rate.
        """
        super().__init__()
        self.w_1 = nn.Conv1D(
@@ -99,7 +108,8 @@ class Conv1dLinear(nn.Layer):
        """Calculate forward propagation.
        Args:
            x (Tensor):
                Batch of input tensors (B, T, in_chans).
        Returns:
            Tensor: Batch of output tensors (B, T, in_chans).
...
@@ -21,9 +21,12 @@ class PositionwiseFeedForward(nn.Layer):
    """Positionwise feed forward layer.
    Args:
        idim (int):
            Input dimension.
        hidden_units (int):
            The number of hidden units.
        dropout_rate (float):
            Dropout rate.
    """
    def __init__(self,
...
@@ -30,8 +30,10 @@ def repeat(N, fn):
    """Repeat module N times.
    Args:
        N (int):
            Number of repeat times.
        fn (Callable):
            Function to generate the module.
    Returns:
        MultiSequential: Repeated model instance.
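The helper is essentially a comprehension over a module factory; a sketch of the idea (MultiSequential approximated here by nn.Sequential; sizes hypothetical):

    import paddle.nn as nn

    def repeat_sketch(N, fn):
        # fn maps the layer index to a fresh module instance
        return nn.Sequential(*[fn(i) for i in range(N)])

    stack = repeat_sketch(6, lambda i: nn.Linear(256, 256))  # six identical blocks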
...
@@ -23,10 +23,14 @@ class Conv2dSubsampling(nn.Layer):
    """Convolutional 2D subsampling (to 1/4 length).
    Args:
        idim (int):
            Input dimension.
        odim (int):
            Output dimension.
        dropout_rate (float):
            Dropout rate.
        pos_enc (nn.Layer):
            Custom position encoding layer.
    """
    def __init__(self, idim, odim, dropout_rate, pos_enc=None):
@@ -45,11 +49,15 @@ class Conv2dSubsampling(nn.Layer):
    def forward(self, x, x_mask):
        """Subsample x.
        Args:
            x (Tensor):
                Input tensor (#batch, time, idim).
            x_mask (Tensor):
                Input mask (#batch, 1, time).
        Returns:
            Tensor:
                Subsampled tensor (#batch, time', odim), where time' = time // 4.
            Tensor:
                Subsampled mask (#batch, 1, time'), where time' = time // 4.
        """
        # (b, c, t, f)
        x = x.unsqueeze(1)
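The 1/4-length figure follows from the two stride-2 convolutions inside this layer; a quick sketch of the length arithmetic (kernel size 3 and no padding assumed, as is typical here):

    def conv_out_len(t, kernel=3, stride=2):
        # length after one unpadded strided conv
        return (t - kernel) // stride + 1

    t = 100
    print(conv_out_len(conv_out_len(t)))  # 24, i.e. roughly t // 4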
...
@@ -28,9 +28,12 @@ class Stretch2D(nn.Layer):
    """Stretch an image (or image-like object) with some interpolation.
    Args:
        w_scale (int):
            Scale factor of the width.
        h_scale (int):
            Scale factor of the height.
        mode (str, optional):
            Interpolation mode; supported modes are "nearest", "bilinear",
            "trilinear", "bicubic", "linear" and "area", by default "nearest".
            For more details about interpolation, see
            `paddle.nn.functional.interpolate <https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/nn/functional/interpolate_en.html>`_.
@@ -44,11 +47,12 @@ class Stretch2D(nn.Layer):
        """
        Args:
            x (Tensor):
                Shape (N, C, H, W)
        Returns:
            Tensor:
                The stretched image. Shape (N, C, H', W'), where ``H'=h_scale * H``, ``W'=w_scale * W``.
        """
        out = F.interpolate(
@@ -61,12 +65,18 @@ class UpsampleNet(nn.Layer):
    convolutions.
    Args:
        upsample_scales (List[int]):
            Upsampling factors for each stretch.
        nonlinear_activation (Optional[str], optional):
            Activation after each convolution, by default None.
        nonlinear_activation_params (Dict[str, Any], optional):
            Parameters passed to construct the activation, by default {}.
        interpolate_mode (str, optional):
            Interpolation mode of the stretch, by default "nearest".
        freq_axis_kernel_size (int, optional):
            Convolution kernel size along the frequency axis, by default 1.
        use_causal_conv (bool, optional):
            Whether to use causal padding before convolution, by default False.
            If True, causal padding is used along the time axis,
            i.e. the padding amount is ``receptive field - 1`` and 0 for before and after, respectively.
            If False, "same" padding is used along the time axis.
@@ -106,7 +116,8 @@ class UpsampleNet(nn.Layer):
    def forward(self, c):
        """
        Args:
            c (Tensor):
                Spectrogram. Shape (N, F, T)
        Returns:
            Tensor: Upsampled spectrogram.
@@ -126,17 +137,25 @@ class ConvInUpsampleNet(nn.Layer):
    UpsampleNet.
    Args:
        upsample_scales (List[int]):
            Upsampling factors for each stretch.
        nonlinear_activation (Optional[str], optional):
            Activation after each convolution, by default None.
        nonlinear_activation_params (Dict[str, Any], optional):
            Parameters passed to construct the activation, by default {}.
        interpolate_mode (str, optional):
            Interpolation mode of the stretch, by default "nearest".
        freq_axis_kernel_size (int, optional):
            Convolution kernel size along the frequency axis, by default 1.
        aux_channels (int, optional):
            Feature size of the input, by default 80.
        aux_context_window (int, optional):
            Context window of the first 1D convolution applied to the input. It is
            related to the kernel size of the convolution, by default 0.
            If causal convolution is used, the kernel size is ``window + 1``,
            else the kernel size is ``2 * window + 1``.
        use_causal_conv (bool, optional):
            Whether to use causal padding before convolution, by default False.
            If True, causal padding is used along the time axis, i.e. the padding
            amount is ``receptive field - 1`` and 0 for before and after, respectively.
            If False, "same" padding is used along the time axis.
@@ -171,7 +190,8 @@ class ConvInUpsampleNet(nn.Layer):
    def forward(self, c):
        """
        Args:
            c (Tensor):
                Spectrogram. Shape (N, F, T)
        Returns:
            Tensor: Upsampled spectrogram. Shape (N, F, T'), where ``T' = upsample_factor * T``,
...
@@ -58,8 +58,10 @@ class ExperimentBase(object):
         need.
         Args:
-            config (yacs.config.CfgNode): The configuration used for the experiment.
-            args (argparse.Namespace): The parsed command line arguments.
+            config (yacs.config.CfgNode):
+                The configuration used for the experiment.
+            args (argparse.Namespace):
+                The parsed command line arguments.
         Examples:
             >>> def main_sp(config, args):
...
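The truncated `Examples` block hints at the usual launcher pattern: parse CLI arguments, load a yacs config, and hand both to the experiment. A runnable sketch, with a stub class standing in for a real `ExperimentBase` subclass:

```python
# Illustrative launcher following the docstring's Examples block.
import argparse
from yacs.config import CfgNode

class StubExperiment:
    """Stand-in for an ExperimentBase subclass; only shows the call shape."""
    def __init__(self, config, args):
        self.config, self.args = config, args
    def setup(self):
        print("setting up with", self.config)
    def run(self):
        print("running")

def main_sp(config, args):
    exp = StubExperiment(config, args)
    exp.setup()
    exp.run()

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", type=str, required=True,
                        help="path to a YAML config file")
    args = parser.parse_args()
    config = CfgNode(new_allowed=True)
    config.merge_from_file(args.config)
    main_sp(config, args)
```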
@@ -25,7 +25,8 @@ def _load_latest_checkpoint(checkpoint_dir: str) -> int:
     """Get the iteration number corresponding to the latest saved checkpoint.
     Args:
-        checkpoint_dir (str): the directory where checkpoint is saved.
+        checkpoint_dir (str):
+            the directory where checkpoint is saved.
     Returns:
         int: the latest iteration number.
@@ -46,8 +47,10 @@ def _save_checkpoint(checkpoint_dir: str, iteration: int):
     """Save the iteration number of the latest model to be checkpointed.
     Args:
-        checkpoint_dir (str): the directory where checkpoint is saved.
-        iteration (int): the latest iteration number.
+        checkpoint_dir (str):
+            the directory where checkpoint is saved.
+        iteration (int):
+            the latest iteration number.
     Returns:
         None
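These two private helpers implement a record-file pattern: a small text file in `checkpoint_dir` remembers the newest iteration so that training can resume without scanning the directory. A hand-rolled equivalent (the record file name and the `step-N` tag format are assumptions; the hunks show only the docstrings):

```python
# Hand-rolled sketch of the record-file pattern, not the library code.
import os

def save_record(checkpoint_dir: str, iteration: int) -> None:
    # Overwrite the record with the tag of the newest checkpoint.
    with open(os.path.join(checkpoint_dir, "checkpoint"), "w") as f:
        f.write("step-{}".format(iteration))

def load_latest_record(checkpoint_dir: str) -> int:
    # Return 0 when no checkpoint has been written yet.
    record = os.path.join(checkpoint_dir, "checkpoint")
    if not os.path.isfile(record):
        return 0
    with open(record) as f:
        return int(f.read().strip().split("-")[-1])
```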
@@ -65,11 +68,14 @@ def load_parameters(model,
     """Load a specific model checkpoint from disk.
     Args:
-        model (Layer): model to load parameters.
-        optimizer (Optimizer, optional): optimizer to load states if needed.
-            Defaults to None.
-        checkpoint_dir (str, optional): the directory where checkpoint is saved.
-        checkpoint_path (str, optional): if specified, load the checkpoint
+        model (Layer):
+            model to load parameters.
+        optimizer (Optimizer, optional):
+            optimizer to load states if needed. Defaults to None.
+        checkpoint_dir (str, optional):
+            the directory where checkpoint is saved.
+        checkpoint_path (str, optional):
+            if specified, load the checkpoint
             stored in the checkpoint_path and the argument 'checkpoint_dir' will
             be ignored. Defaults to None.
@@ -113,11 +119,14 @@ def save_parameters(checkpoint_dir, iteration, model, optimizer=None):
     """Checkpoint the latest trained model parameters.
     Args:
-        checkpoint_dir (str): the directory where checkpoint is saved.
-        iteration (int): the latest iteration number.
-        model (Layer): model to be checkpointed.
-        optimizer (Optimizer, optional): optimizer to be checkpointed.
-            Defaults to None.
+        checkpoint_dir (str):
+            the directory where checkpoint is saved.
+        iteration (int):
+            the latest iteration number.
+        model (Layer):
+            model to be checkpointed.
+        optimizer (Optimizer, optional):
+            optimizer to be checkpointed. Defaults to None.
     Returns:
         None
...
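Together, `save_parameters` and `load_parameters` form the save/resume cycle. A hedged round-trip sketch (the module path, the stand-in model, and the assumption that `load_parameters` returns the restored iteration number are all illustrative; only the docstrings appear in these hunks):

```python
# Hedged save/resume sketch; module path and model are placeholders.
import os
import paddle
from paddlespeech.t2s.utils import checkpoint   # assumed module path

model = paddle.nn.Linear(80, 80)                # stand-in model
optimizer = paddle.optimizer.Adam(parameters=model.parameters())

os.makedirs("exp/checkpoints", exist_ok=True)
checkpoint.save_parameters("exp/checkpoints", 1000, model, optimizer)

# Later, resume from the newest checkpoint in the same directory.
iteration = checkpoint.load_parameters(
    model, optimizer, checkpoint_dir="exp/checkpoints")
```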
@@ -71,10 +71,14 @@ def word_errors(reference, hypothesis, ignore_case=False, delimiter=' '):
     hypothesis sequence in word-level.
     Args:
-        reference (str): The reference sentence.
-        hypothesis (str): The hypothesis sentence.
-        ignore_case (bool): Whether case-sensitive or not.
-        delimiter (char(str)): Delimiter of input sentences.
+        reference (str):
+            The reference sentence.
+        hypothesis (str):
+            The hypothesis sentence.
+        ignore_case (bool):
+            Whether case-sensitive or not.
+        delimiter (char(str)):
+            Delimiter of input sentences.
     Returns:
         list: Levenshtein distance and word number of reference sentence.
...
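Dividing the returned Levenshtein distance by the returned reference word count yields the word error rate. A usage sketch (the import path is an assumption):

```python
# Hedged usage sketch for word_errors; the import path is assumed.
from paddlespeech.s2t.utils.error_rate import word_errors

dist, ref_len = word_errors(
    reference="the quick brown fox",
    hypothesis="the quick brown dog",
    ignore_case=True,
    delimiter=" ")
print(dist / ref_len)   # WER = 1 substitution / 4 reference words = 0.25
```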
@@ -24,8 +24,10 @@ import numpy as np
 def read_hdf5(filename: Union[Path, str], dataset_name: str) -> Any:
     """Read a dataset from a HDF5 file.
     Args:
-        filename (Union[Path, str]): Path of the HDF5 file.
-        dataset_name (str): Name of the dataset to read.
+        filename (Union[Path, str]):
+            Path of the HDF5 file.
+        dataset_name (str):
+            Name of the dataset to read.
     Returns:
         Any: The retrieved dataset.
...
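A round-trip sketch of the contract (the file name, dataset name, and `read_hdf5` import path are illustrative assumptions):

```python
# Round-trip sketch; the module path below is an assumption.
import h5py
import numpy as np
from paddlespeech.t2s.utils.h5_utils import read_hdf5   # assumed path

with h5py.File("feats.h5", "w") as f:                   # write a toy dataset
    f.create_dataset("mel", data=np.zeros((10, 80), dtype=np.float32))

mel = read_hdf5("feats.h5", "mel")                      # the stored array
print(mel.shape)                                        # (10, 80)
```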
@@ -22,7 +22,8 @@ def convert_dtype_to_np_dtype_(dtype):
     Convert paddle's data type to corrsponding numpy data type.
     Args:
-        dtype(np.dtype): the data type in paddle.
+        dtype(np.dtype):
+            the data type in paddle.
     Returns:
         type: the data type in numpy.
...
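For reference, the mapping this helper performs looks roughly like the hand-rolled table below (an illustration, not the function's actual body):

```python
# Hand-rolled paddle -> numpy dtype mapping, for illustration only.
import numpy as np
import paddle

PADDLE_TO_NP = {
    paddle.float32: np.float32,
    paddle.float64: np.float64,
    paddle.int32: np.int32,
    paddle.int64: np.int64,
}

x = paddle.ones([2], dtype="float32")
print(PADDLE_TO_NP[x.dtype])      # <class 'numpy.float32'>
```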
@@ -48,6 +48,7 @@ base = [
     "pandas",
     "paddlenlp",
     "paddlespeech_feat",
+    "Pillow>=9.0.0",
     "praatio==5.0.0",
     "pypinyin",
     "pypinyin-dict",
@@ -77,7 +78,7 @@ server = [
     "fastapi",
     "uvicorn",
     "pattern_singleton",
-    "websockets",
+    "websockets"
 ]
 requirements = {
@@ -89,7 +90,6 @@ requirements = {
     "gpustat",
     "paddlespeech_ctcdecoders",
     "phkit",
-    "Pillow",
     "pybind11",
     "pypi-kenlm",
     "snakeviz",
...
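The net effect of these hunks: `Pillow>=9.0.0` is promoted from what appears to be the develop-only list into the pinned base requirements, and a trailing comma is dropped before the closing bracket of `server`. The surrounding structure is the usual setuptools extras pattern, roughly as below (a sketch with abbreviated lists; extras names other than `server` are not shown in the diff):

```python
# Sketch of the extras layout implied by the hunks; not the full setup.py.
from setuptools import setup

base = ["pandas", "paddlenlp", "paddlespeech_feat", "Pillow>=9.0.0"]  # abbreviated
server = ["fastapi", "uvicorn", "pattern_singleton", "websockets"]

setup(
    name="paddlespeech",
    install_requires=base,
    extras_require={"server": server},   # pip install paddlespeech[server]
)
```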