Unverified commit dc56c3a1 authored by ljhzxc, committed by GitHub

[TTS] [Hackathon] Add JETS (#3109)

Parent bd0d69ca
# JETS with CSMSC
This example contains code used to train a [JETS](https://arxiv.org/abs/2203.16852v1) model with the [Chinese Standard Mandarin Speech Corpus](https://www.data-baker.com/open_source.html).
## Dataset
### Download and Extract
Download CSMSC from its [Official Website](https://test.data-baker.com/data/index/source).
### Get MFA Result and Extract
We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get phonemes and durations for JETS.
You can download it from here: [baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz), or train your own MFA model by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) of our repo.
## Get Started
Assume the path to the dataset is `~/datasets/BZNSYP`.
Assume the path to the MFA result of CSMSC is `./baker_alignment_tone`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
    - synthesize waveform from `metadata.jsonl`.
    - synthesize waveform from a text file.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage, for example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.
```text
dump
├── dev
│   ├── norm
│   └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│   ├── norm
│   └── raw
└── train
    ├── energy_stats.npy
    ├── feats_stats.npy
    ├── norm
    ├── pitch_stats.npy
    └── raw
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and a `raw` subfolder. The raw folder contains the wave, mel spectrogram, pitch, and energy features of each utterance, while the norm folder contains the normalized ones. The statistics used to normalize features are computed from the training set and stored in `dump/train/` (e.g. `dump/train/feats_stats.npy`).
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, the path of feats, feats_lengths, the path of pitch features, the path of energy features, the path of raw waves, speaker, and the id of each utterance.
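For a quick sanity check after preprocessing, here is a minimal sketch (assuming it is run from this example's directory once `./local/preprocess.sh` has finished) that reads one record from the normalized train metadata and rebuilds the feature scaler from `feats_stats.npy`, mirroring what `${BIN_DIR}/normalize.py` does internally:
```python
import jsonlines
import numpy as np
from sklearn.preprocessing import StandardScaler

# read the first record of the normalized train metadata
with jsonlines.open("dump/train/norm/metadata.jsonl", "r") as reader:
    record = next(iter(reader))
print(record.keys())  # utt_id, spk_id, text, text_lengths, feats, ...

# feats_stats.npy stores the feature mean in row 0 and the std (scale) in row 1
stats = np.load("dump/train/feats_stats.npy")
scaler = StandardScaler()
scaler.mean_, scaler.scale_ = stats[0], stats[1]
scaler.n_features_in_ = scaler.mean_.shape[0]

feats = np.load(record["feats"])  # already normalized in the norm subfolder
print(feats.shape)                # (feats_lengths, n_mels)
# undo the normalization for inspection: raw ≈ norm * scale + mean
raw_feats = feats * scaler.scale_ + scaler.mean_
```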
### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
Here's the complete help message.
```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
                [--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
                [--ngpu NGPU] [--phones-dict PHONES_DICT]

Train a JETS model.

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       config file to overwrite default config.
  --train-metadata TRAIN_METADATA
                        training data.
  --dev-metadata DEV_METADATA
                        dev data.
  --output-dir OUTPUT_DIR
                        output dir.
  --ngpu NGPU           if ngpu == 0, use cpu.
  --phones-dict PHONES_DICT
                        phone vocabulary file.
```
1. `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml`.
2. `--train-metadata` and `--dev-metadata` should be the metadata files in the normalized subfolders of `train` and `dev` in the `dump` folder (see the loading sketch after this list).
3. `--output-dir` is the directory to save the results of the experiment. Checkpoints are saved in `checkpoints/` inside this directory.
4. `--ngpu` is the number of gpus to use, if ngpu == 0, use cpu.
5. `--phones-dict` is the path of the phone vocabulary file.
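As a companion to item 2, here is a minimal sketch (modeled on the `DataTable` usage in the training script) of how the normalized metadata is consumed; the field and converter names follow `train.py`:
```python
import jsonlines
import numpy as np
from paddlespeech.t2s.datasets.data_table import DataTable

fields = ["text", "text_lengths", "feats", "feats_lengths", "wave",
          "durations", "pitch", "energy"]
# feature paths stored in metadata.jsonl are resolved to arrays via np.load
converters = {"wave": np.load, "feats": np.load,
              "pitch": np.load, "energy": np.load}

with jsonlines.open("dump/train/norm/metadata.jsonl", "r") as reader:
    train_metadata = list(reader)
train_dataset = DataTable(
    data=train_metadata, fields=fields, converters=converters)
print(len(train_dataset))
```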
### Synthesizing
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
`./local/synthesize_e2e.sh` calls `${BIN_DIR}/synthesize_e2e.py`, which can synthesize waveform from a text file.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
# This configuration was tested on 4 V100 GPUs with 32 GB GPU
# memory. It takes around 2 weeks to finish the training,
# but a 100k-iteration model should already generate reasonable results.
###########################################################
# FEATURE EXTRACTION SETTING #
###########################################################
n_mels: 80
fs: 22050 # sr
n_fft: 1024 # FFT size (samples).
n_shift: 256 # Hop size (samples), ~11.6 ms at 22050 Hz.
win_length: null # Window length (samples).
# If set to null, it will be the same as n_fft.
window: "hann" # Window function.
fmin: 0 # minimum frequency for Mel basis
fmax: null # maximum frequency for Mel basis
f0min: 80 # Minimum f0 for pitch extraction.
f0max: 400 # Maximum f0 for pitch extraction.
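# For reference: with fs = 22050 and n_shift = 256, feature extraction yields
# 22050 / 256 ≈ 86.1 frames per second of audio.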
##########################################################
# TTS MODEL SETTING #
##########################################################
model:
    # generator related
    generator_type: jets_generator
    generator_params:
        adim: 256 # attention dimension
        aheads: 2 # number of attention heads
        elayers: 4 # number of encoder layers
        eunits: 1024 # number of encoder ff units
        dlayers: 4 # number of decoder layers
        dunits: 1024 # number of decoder ff units
        positionwise_layer_type: conv1d # type of position-wise layer
        positionwise_conv_kernel_size: 3 # kernel size of position wise conv layer
        duration_predictor_layers: 2 # number of layers of duration predictor
        duration_predictor_chans: 256 # number of channels of duration predictor
        duration_predictor_kernel_size: 3 # filter size of duration predictor
        use_masking: True # whether to apply masking for padded part in loss calculation
        encoder_normalize_before: True # whether to perform layer normalization before the input
        decoder_normalize_before: True # whether to perform layer normalization before the input
        encoder_type: transformer # encoder type
        decoder_type: transformer # decoder type
        conformer_rel_pos_type: latest # relative positional encoding type
        conformer_pos_enc_layer_type: rel_pos # conformer positional encoding type
        conformer_self_attn_layer_type: rel_selfattn # conformer self-attention type
        conformer_activation_type: swish # conformer activation type
        use_macaron_style_in_conformer: true # whether to use macaron style in conformer
        use_cnn_in_conformer: true # whether to use CNN in conformer
        conformer_enc_kernel_size: 7 # kernel size in CNN module of conformer-based encoder
        conformer_dec_kernel_size: 31 # kernel size in CNN module of conformer-based decoder
        init_type: xavier_uniform # initialization type
        init_enc_alpha: 1.0 # initial value of alpha for encoder
        init_dec_alpha: 1.0 # initial value of alpha for decoder
        transformer_enc_dropout_rate: 0.2 # dropout rate for transformer encoder layer
        transformer_enc_positional_dropout_rate: 0.2 # dropout rate for transformer encoder positional encoding
        transformer_enc_attn_dropout_rate: 0.2 # dropout rate for transformer encoder attention layer
        transformer_dec_dropout_rate: 0.2 # dropout rate for transformer decoder layer
        transformer_dec_positional_dropout_rate: 0.2 # dropout rate for transformer decoder positional encoding
        transformer_dec_attn_dropout_rate: 0.2 # dropout rate for transformer decoder attention layer
        pitch_predictor_layers: 5 # number of conv layers in pitch predictor
        pitch_predictor_chans: 256 # number of channels of conv layers in pitch predictor
        pitch_predictor_kernel_size: 5 # kernel size of conv layers in pitch predictor
        pitch_predictor_dropout: 0.5 # dropout rate in pitch predictor
        pitch_embed_kernel_size: 1 # kernel size of conv embedding layer for pitch
        pitch_embed_dropout: 0.0 # dropout rate after conv embedding layer for pitch
        stop_gradient_from_pitch_predictor: true # whether to stop the gradient from pitch predictor to encoder
        energy_predictor_layers: 2 # number of conv layers in energy predictor
        energy_predictor_chans: 256 # number of channels of conv layers in energy predictor
        energy_predictor_kernel_size: 3 # kernel size of conv layers in energy predictor
        energy_predictor_dropout: 0.5 # dropout rate in energy predictor
        energy_embed_kernel_size: 1 # kernel size of conv embedding layer for energy
        energy_embed_dropout: 0.0 # dropout rate after conv embedding layer for energy
        stop_gradient_from_energy_predictor: false # whether to stop the gradient from energy predictor to encoder
        generator_out_channels: 1
        generator_channels: 512
        generator_global_channels: -1
        generator_kernel_size: 7
        generator_upsample_scales: [8, 8, 2, 2]
        generator_upsample_kernel_sizes: [16, 16, 4, 4]
        generator_resblock_kernel_sizes: [3, 7, 11]
        generator_resblock_dilations: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
        generator_use_additional_convs: true
        generator_bias: true
        generator_nonlinear_activation: "leakyrelu"
        generator_nonlinear_activation_params:
            negative_slope: 0.1
        generator_use_weight_norm: true
        segment_size: 64 # segment size for random windowed discriminator
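        # note: segment_size counts latent frames; with the upsample scales
        # above (8*8*2*2 = 256), one segment is roughly 64 * 256 = 16384
        # waveform samples (~0.74 s at 22050 Hz)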
    # discriminator related
    discriminator_type: hifigan_multi_scale_multi_period_discriminator
    discriminator_params:
        scales: 1
        scale_downsample_pooling: "AvgPool1D"
        scale_downsample_pooling_params:
            kernel_size: 4
            stride: 2
            padding: 2
        scale_discriminator_params:
            in_channels: 1
            out_channels: 1
            kernel_sizes: [15, 41, 5, 3]
            channels: 128
            max_downsample_channels: 1024
            max_groups: 16
            bias: True
            downsample_scales: [2, 2, 4, 4, 1]
            nonlinear_activation: "leakyrelu"
            nonlinear_activation_params:
                negative_slope: 0.1
            use_weight_norm: True
            use_spectral_norm: False
        follow_official_norm: False
        periods: [2, 3, 5, 7, 11]
        period_discriminator_params:
            in_channels: 1
            out_channels: 1
            kernel_sizes: [5, 3]
            channels: 32
            downsample_scales: [3, 3, 3, 3, 1]
            max_downsample_channels: 1024
            bias: True
            nonlinear_activation: "leakyrelu"
            nonlinear_activation_params:
                negative_slope: 0.1
            use_weight_norm: True
            use_spectral_norm: False
    # others
    sampling_rate: 22050 # needed in the inference for saving wav
    cache_generator_outputs: True # whether to cache generator outputs in the training
    use_alignment_module: False # whether to use alignment module
###########################################################
# LOSS SETTING #
###########################################################
# loss function related
generator_adv_loss_params:
    average_by_discriminators: False # whether to average loss value by #discriminators
    loss_type: mse # loss type, "mse" or "hinge"
discriminator_adv_loss_params:
    average_by_discriminators: False # whether to average loss value by #discriminators
    loss_type: mse # loss type, "mse" or "hinge"
feat_match_loss_params:
    average_by_discriminators: False # whether to average loss value by #discriminators
    average_by_layers: False # whether to average loss value by #layers of each discriminator
    include_final_outputs: True # whether to include final outputs for loss calculation
mel_loss_params:
    fs: 22050 # must be the same as the training data
    fft_size: 1024 # fft points
    hop_size: 256 # hop size
    win_length: null # window length
    window: hann # window type
    num_mels: 80 # number of Mel basis
    fmin: 0 # minimum frequency for Mel basis
    fmax: null # maximum frequency for Mel basis
    log_base: null # null represents natural log
###########################################################
# ADVERSARIAL LOSS SETTING #
###########################################################
lambda_adv: 1.0 # loss scaling coefficient for adversarial loss
lambda_mel: 45.0 # loss scaling coefficient for Mel loss
lambda_feat_match: 2.0 # loss scaling coefficient for feat match loss
lambda_var: 1.0 # loss scaling coefficient for the variance (duration/pitch/energy) loss
lambda_align: 2.0 # loss scaling coefficient for the alignment (forward-sum) loss
# others
sampling_rate: 22050 # needed in the inference for saving wav
cache_generator_outputs: True # whether to cache generator outputs in the training
# extra modules for additional inputs
pitch_extract: dio # pitch extractor type
pitch_extract_conf:
    reduction_factor: 1
    use_token_averaged_f0: false
pitch_normalize: global_mvn # normalizer for the pitch feature
energy_extract: energy # energy extractor type
energy_extract_conf:
    reduction_factor: 1
    use_token_averaged_energy: false
energy_normalize: global_mvn # normalizer for the energy feature
###########################################################
# DATA LOADER SETTING #
###########################################################
batch_size: 32 # Batch size.
num_workers: 4 # Number of workers in DataLoader.
##########################################################
# OPTIMIZER & SCHEDULER SETTING #
##########################################################
# optimizer setting for generator
generator_optimizer_params:
    beta1: 0.8
    beta2: 0.99
    epsilon: 1.0e-9
    weight_decay: 0.0
generator_scheduler: exponential_decay
generator_scheduler_params:
    learning_rate: 2.0e-4
    gamma: 0.999875

# optimizer setting for discriminator
discriminator_optimizer_params:
    beta1: 0.8
    beta2: 0.99
    epsilon: 1.0e-9
    weight_decay: 0.0
discriminator_scheduler: exponential_decay
discriminator_scheduler_params:
    learning_rate: 2.0e-4
    gamma: 0.999875
generator_first: True # whether to start updating generator first
##########################################################
# OTHER TRAINING SETTING #
##########################################################
num_snapshots: 10 # max number of snapshots to keep while training
train_max_steps: 350000 # Number of training steps. == total_iters / ngpus, total_iters = 1000000
save_interval_steps: 1000 # Interval steps to save checkpoint.
eval_interval_steps: 250 # Interval steps to evaluate the network.
seed: 777 # random seed number
#!/bin/bash
train_output_path=$1
stage=0
stop_stage=0
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    python3 ${BIN_DIR}/inference.py \
        --inference_dir=${train_output_path}/inference \
        --am=jets_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/pd_infer_out \
        --phones_dict=dump/phone_id_map.txt
fi
#!/bin/bash
set -e
stage=0
stop_stage=100
config_path=$1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    # get durations from MFA's result
    echo "Generate durations.txt from MFA results ..."
    python3 ${MAIN_ROOT}/utils/gen_duration_from_textgrid.py \
        --inputdir=./baker_alignment_tone \
        --output=durations.txt \
        --config=${config_path}
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    # extract features
    echo "Extract features ..."
    python3 ${BIN_DIR}/preprocess.py \
        --dataset=baker \
        --rootdir=~/datasets/BZNSYP/ \
        --dumpdir=dump \
        --dur-file=durations.txt \
        --config=${config_path} \
        --num-cpu=20 \
        --cut-sil=True \
        --token_average=True
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
    # get features' stats (mean and std)
    echo "Get features' stats ..."
    python3 ${MAIN_ROOT}/utils/compute_statistics.py \
        --metadata=dump/train/raw/metadata.jsonl \
        --field-name="feats"

    python3 ${MAIN_ROOT}/utils/compute_statistics.py \
        --metadata=dump/train/raw/metadata.jsonl \
        --field-name="pitch"

    python3 ${MAIN_ROOT}/utils/compute_statistics.py \
        --metadata=dump/train/raw/metadata.jsonl \
        --field-name="energy"
fi
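# each compute_statistics.py call above writes a *_stats.npy file holding the
# field's mean and std; stage 3 below reads them from dump/train/
# (feats_stats.npy, pitch_stats.npy, energy_stats.npy)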
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
    # normalize and convert phone/speaker to id; dev and test should use train's stats
    echo "Normalize ..."
    python3 ${BIN_DIR}/normalize.py \
        --metadata=dump/train/raw/metadata.jsonl \
        --dumpdir=dump/train/norm \
        --feats-stats=dump/train/feats_stats.npy \
        --pitch-stats=dump/train/pitch_stats.npy \
        --energy-stats=dump/train/energy_stats.npy \
        --phones-dict=dump/phone_id_map.txt \
        --speaker-dict=dump/speaker_id_map.txt

    python3 ${BIN_DIR}/normalize.py \
        --metadata=dump/dev/raw/metadata.jsonl \
        --dumpdir=dump/dev/norm \
        --feats-stats=dump/train/feats_stats.npy \
        --pitch-stats=dump/train/pitch_stats.npy \
        --energy-stats=dump/train/energy_stats.npy \
        --phones-dict=dump/phone_id_map.txt \
        --speaker-dict=dump/speaker_id_map.txt

    python3 ${BIN_DIR}/normalize.py \
        --metadata=dump/test/raw/metadata.jsonl \
        --dumpdir=dump/test/norm \
        --feats-stats=dump/train/feats_stats.npy \
        --pitch-stats=dump/train/pitch_stats.npy \
        --energy-stats=dump/train/energy_stats.npy \
        --phones-dict=dump/phone_id_map.txt \
        --speaker-dict=dump/speaker_id_map.txt
fi
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
stage=0
stop_stage=0
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    FLAGS_allocator_strategy=naive_best_fit \
    FLAGS_fraction_of_gpu_memory_to_use=0.01 \
    python3 ${BIN_DIR}/synthesize.py \
        --config=${config_path} \
        --ckpt=${train_output_path}/checkpoints/${ckpt_name} \
        --phones_dict=dump/phone_id_map.txt \
        --test_metadata=dump/test/norm/metadata.jsonl \
        --output_dir=${train_output_path}/test
fi
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
stage=0
stop_stage=0
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    FLAGS_allocator_strategy=naive_best_fit \
    FLAGS_fraction_of_gpu_memory_to_use=0.01 \
    python3 ${BIN_DIR}/synthesize_e2e.py \
        --am=jets_csmsc \
        --config=${config_path} \
        --ckpt=${train_output_path}/checkpoints/${ckpt_name} \
        --phones_dict=dump/phone_id_map.txt \
        --output_dir=${train_output_path}/test_e2e \
        --text=${BIN_DIR}/../sentences.txt \
        --inference_dir=${train_output_path}/inference
fi
#!/bin/bash
config_path=$1
train_output_path=$2
python3 ${BIN_DIR}/train.py \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --config=${config_path} \
    --output-dir=${train_output_path} \
    --ngpu=1 \
    --phones-dict=dump/phone_id_map.txt
#!/bin/bash
export MAIN_ROOT=`realpath ${PWD}/../../../`
export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH}
export LC_ALL=C
export PYTHONDONTWRITEBYTECODE=1
# Use UTF-8 in Python to avoid UnicodeDecodeError when LC_ALL=C
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH}
MODEL=jets
export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL}
#!/bin/bash
set -e
source path.sh
gpus=0
stage=0
stop_stage=100
conf_path=conf/default.yaml
train_output_path=exp/default
ckpt_name=snapshot_iter_150000.pdz
# With the following command, you can choose the stage range you want to run,
# such as `./run.sh --stage 0 --stop-stage 0`;
# this cannot be mixed with positional arguments `$1`, `$2`, ...
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1

if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    # prepare data
    ./local/preprocess.sh ${conf_path} || exit -1
fi

if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    # train model, all `ckpt` files are saved under the `train_output_path/checkpoints/` dir
    CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1
fi

if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
    CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi

if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
    # synthesize_e2e
    CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi

if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
    CUDA_VISIBLE_DEVICES=${gpus} ./local/inference.sh ${train_output_path} || exit -1
fi
@@ -669,6 +669,142 @@ def vits_multi_spk_batch_fn(examples):
    return batch
def jets_single_spk_batch_fn(examples):
    """
    Returns:
        Dict[str, Any]:
            - text (Tensor): Text index tensor (B, T_text).
            - text_lengths (Tensor): Text length tensor (B,).
            - feats (Tensor): Feature tensor (B, T_feats, aux_channels).
            - feats_lengths (Tensor): Feature length tensor (B,).
            - durations (Tensor): Duration tensor (B, T_text).
            - durations_lengths (Tensor): Duration length tensor (B,).
            - pitch (Tensor): Pitch tensor (B, pitch_length).
            - energy (Tensor): Energy tensor (B, energy_length).
            - speech (Tensor): Speech waveform tensor (B, T_wav).
    """
    # fields = ["text", "text_lengths", "feats", "feats_lengths", "durations", "pitch", "energy", "speech"]
    text = [np.array(item["text"], dtype=np.int64) for item in examples]
    feats = [np.array(item["feats"], dtype=np.float32) for item in examples]
    durations = [
        np.array(item["durations"], dtype=np.int64) for item in examples
    ]
    pitch = [np.array(item["pitch"], dtype=np.float32) for item in examples]
    energy = [np.array(item["energy"], dtype=np.float32) for item in examples]
    speech = [np.array(item["wave"], dtype=np.float32) for item in examples]
    text_lengths = [
        np.array(item["text_lengths"], dtype=np.int64) for item in examples
    ]
    feats_lengths = [
        np.array(item["feats_lengths"], dtype=np.int64) for item in examples
    ]
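    # batch_sequences pads each list of variable-length arrays
    # to the length of the longest item in the batch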
    text = batch_sequences(text)
    feats = batch_sequences(feats)
    durations = batch_sequences(durations)
    pitch = batch_sequences(pitch)
    energy = batch_sequences(energy)
    speech = batch_sequences(speech)

    # convert each batch to paddle.Tensor
    text = paddle.to_tensor(text)
    feats = paddle.to_tensor(feats)
    durations = paddle.to_tensor(durations)
    pitch = paddle.to_tensor(pitch)
    energy = paddle.to_tensor(energy)
    text_lengths = paddle.to_tensor(text_lengths)
    feats_lengths = paddle.to_tensor(feats_lengths)

    batch = {
        "text": text,
        "text_lengths": text_lengths,
        "feats": feats,
        "feats_lengths": feats_lengths,
        "durations": durations,
        "durations_lengths": text_lengths,
        "pitch": pitch,
        "energy": energy,
        "speech": speech,
    }
    return batch
def jets_multi_spk_batch_fn(examples):
    """
    Returns:
        Dict[str, Any]:
            - text (Tensor): Text index tensor (B, T_text).
            - text_lengths (Tensor): Text length tensor (B,).
            - feats (Tensor): Feature tensor (B, T_feats, aux_channels).
            - feats_lengths (Tensor): Feature length tensor (B,).
            - durations (Tensor): Duration tensor (B, T_text).
            - durations_lengths (Tensor): Duration length tensor (B,).
            - pitch (Tensor): Pitch tensor (B, pitch_length).
            - energy (Tensor): Energy tensor (B, energy_length).
            - speech (Tensor): Speech waveform tensor (B, T_wav).
            - spk_id (Optional[Tensor]): Speaker index tensor (B,) or (B, 1).
            - spk_emb (Optional[Tensor]): Speaker embedding tensor (B, spk_embed_dim).
    """
    # fields = ["text", "text_lengths", "feats", "feats_lengths", "durations", "pitch", "energy", "speech", "spk_id"/"spk_emb"]
    text = [np.array(item["text"], dtype=np.int64) for item in examples]
    feats = [np.array(item["feats"], dtype=np.float32) for item in examples]
    durations = [
        np.array(item["durations"], dtype=np.int64) for item in examples
    ]
    pitch = [np.array(item["pitch"], dtype=np.float32) for item in examples]
    energy = [np.array(item["energy"], dtype=np.float32) for item in examples]
    speech = [np.array(item["wave"], dtype=np.float32) for item in examples]
    text_lengths = [
        np.array(item["text_lengths"], dtype=np.int64) for item in examples
    ]
    feats_lengths = [
        np.array(item["feats_lengths"], dtype=np.int64) for item in examples
    ]

    text = batch_sequences(text)
    feats = batch_sequences(feats)
    durations = batch_sequences(durations)
    pitch = batch_sequences(pitch)
    energy = batch_sequences(energy)
    speech = batch_sequences(speech)

    # convert each batch to paddle.Tensor
    text = paddle.to_tensor(text)
    feats = paddle.to_tensor(feats)
    durations = paddle.to_tensor(durations)
    pitch = paddle.to_tensor(pitch)
    energy = paddle.to_tensor(energy)
    text_lengths = paddle.to_tensor(text_lengths)
    feats_lengths = paddle.to_tensor(feats_lengths)

    batch = {
        "text": text,
        "text_lengths": text_lengths,
        "feats": feats,
        "feats_lengths": feats_lengths,
        "durations": durations,
        "durations_lengths": text_lengths,
        "pitch": pitch,
        "energy": energy,
        "speech": speech,
    }
    # spk_emb has a higher priority than spk_id
    if "spk_emb" in examples[0]:
        spk_emb = [
            np.array(item["spk_emb"], dtype=np.float32) for item in examples
        ]
        spk_emb = batch_sequences(spk_emb)
        spk_emb = paddle.to_tensor(spk_emb)
        batch["spk_emb"] = spk_emb
    elif "spk_id" in examples[0]:
        spk_id = [np.array(item["spk_id"], dtype=np.int64) for item in examples]
        spk_id = paddle.to_tensor(spk_id)
        batch["spk_id"] = spk_id
    return batch
# an extra builder is needed because parameters must be passed in
def build_starganv2_vc_collate_fn(latent_dim: int=16, max_mel_length: int=192):
...
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from pathlib import Path
import paddle
import soundfile as sf
from timer import timer
from paddlespeech.t2s.exps.syn_utils import get_am_output
from paddlespeech.t2s.exps.syn_utils import get_frontend
from paddlespeech.t2s.exps.syn_utils import get_predictor
from paddlespeech.t2s.exps.syn_utils import get_sentences
from paddlespeech.t2s.utils import str2bool
def parse_args():
    parser = argparse.ArgumentParser(
        description="Paddle Inference with acoustic model & vocoder.")
    # acoustic model
    parser.add_argument(
        '--am',
        type=str,
        default='jets_csmsc',
        choices=['jets_csmsc', 'jets_aishell3'],
        help='Choose acoustic model type of tts task.')
    parser.add_argument(
        "--phones_dict", type=str, default=None, help="phone vocabulary file.")
    parser.add_argument(
        "--speaker_dict", type=str, default=None, help="speaker id map file.")
    parser.add_argument(
        '--spk_id',
        type=int,
        default=0,
        help='spk id for multi speaker acoustic model')
    # other
    parser.add_argument(
        '--lang',
        type=str,
        default='zh',
        help='Choose model language. zh or en or mix')
    parser.add_argument(
        "--text",
        type=str,
        help="text to synthesize, a 'utt_id sentence' pair per line")
    parser.add_argument(
        "--add-blank",
        type=str2bool,
        default=True,
        help="whether to add blank between phones")
    parser.add_argument(
        "--inference_dir", type=str, help="dir to save inference models")
    parser.add_argument("--output_dir", type=str, help="output dir")
    # inference
    parser.add_argument(
        "--use_trt",
        type=str2bool,
        default=False,
        help="whether to use TensorRT or not in GPU", )
    parser.add_argument(
        "--use_mkldnn",
        type=str2bool,
        default=False,
        help="whether to use MKLDNN or not in CPU.", )
    parser.add_argument(
        "--precision",
        type=str,
        default='fp32',
        choices=['fp32', 'fp16', 'bf16', 'int8'],
        help="mode of running")
    parser.add_argument(
        "--device",
        default="gpu",
        choices=["gpu", "cpu"],
        help="Device selected for inference.", )
    parser.add_argument('--cpu_threads', type=int, default=1)

    args, _ = parser.parse_known_args()
    return args
# only inference for models trained with csmsc now
def main():
    args = parse_args()

    paddle.set_device(args.device)

    # frontend
    frontend = get_frontend(lang=args.lang, phones_dict=args.phones_dict)

    # am_predictor
    am_predictor = get_predictor(
        model_dir=args.inference_dir,
        model_file=args.am + ".pdmodel",
        params_file=args.am + ".pdiparams",
        device=args.device,
        use_trt=args.use_trt,
        use_mkldnn=args.use_mkldnn,
        cpu_threads=args.cpu_threads,
        precision=args.precision)
    # model: {model_name}_{dataset}
    am_dataset = args.am[args.am.rindex('_') + 1:]

    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)
    sentences = get_sentences(text_file=args.text, lang=args.lang)

    merge_sentences = True
    add_blank = args.add_blank

    # jets's fs is 22050
    fs = 22050
    # warmup
    for utt_id, sentence in sentences[:3]:
        with timer() as t:
            wav = get_am_output(
                input=sentence,
                am_predictor=am_predictor,
                am=args.am,
                frontend=frontend,
                lang=args.lang,
                merge_sentences=merge_sentences,
                speaker_dict=args.speaker_dict,
                spk_id=args.spk_id, )
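        # speed = samples generated per wall-clock second;
        # RTF = fs / speed = synthesis time / audio duration
        # (RTF < 1.0 means faster than real time)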
        speed = wav.size / t.elapse
        rtf = fs / speed
        print(
            f"{utt_id}, wave: {wav.shape}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
        )
    print("warm up done!")

    N = 0
    T = 0
    for utt_id, sentence in sentences:
        with timer() as t:
            wav = get_am_output(
                input=sentence,
                am_predictor=am_predictor,
                am=args.am,
                frontend=frontend,
                lang=args.lang,
                merge_sentences=merge_sentences,
                speaker_dict=args.speaker_dict,
                spk_id=args.spk_id, )
        N += wav.size
        T += t.elapse
        speed = wav.size / t.elapse
        rtf = fs / speed
        sf.write(output_dir / (utt_id + ".wav"), wav, samplerate=fs)
        print(
            f"{utt_id}, wave: {wav.shape}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
        )
        print(f"{utt_id} done!")
    print(f"generation speed: {N / T}Hz, RTF: {fs / (N / T) }")


if __name__ == "__main__":
    main()
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Normalize feature files and dump them."""
import argparse
import logging
from operator import itemgetter
from pathlib import Path
import jsonlines
import numpy as np
from sklearn.preprocessing import StandardScaler
from tqdm import tqdm
from paddlespeech.t2s.datasets.data_table import DataTable
def main():
    """Run preprocessing process."""
    parser = argparse.ArgumentParser(
        description="Normalize dumped raw features (See detail in parallel_wavegan/bin/normalize.py)."
    )
    parser.add_argument(
        "--metadata",
        type=str,
        required=True,
        help="directory including feature files to be normalized. "
        "you need to specify either *-scp or rootdir.")
    parser.add_argument(
        "--dumpdir",
        type=str,
        required=True,
        help="directory to dump normalized feature files.")
    parser.add_argument(
        "--feats-stats", type=str, required=True, help="feats statistics file.")
    parser.add_argument(
        "--pitch-stats", type=str, required=True, help="pitch statistics file.")
    parser.add_argument(
        "--energy-stats",
        type=str,
        required=True,
        help="energy statistics file.")
    parser.add_argument(
        "--phones-dict", type=str, default=None, help="phone vocabulary file.")
    parser.add_argument(
        "--speaker-dict", type=str, default=None, help="speaker id map file.")
    args = parser.parse_args()

    dumpdir = Path(args.dumpdir).expanduser()
    # use absolute path
    dumpdir = dumpdir.resolve()
    dumpdir.mkdir(parents=True, exist_ok=True)

    # get dataset
    with jsonlines.open(args.metadata, 'r') as reader:
        metadata = list(reader)
    dataset = DataTable(
        metadata,
        converters={
            "feats": np.load,
            "pitch": np.load,
            "energy": np.load,
            "wave": str,
        })
    logging.info(f"The number of files = {len(dataset)}.")

    # restore scaler
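    # each *_stats.npy stores the feature mean in row 0 and the
    # standard deviation (scale) in row 1, as read below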
    feats_scaler = StandardScaler()
    feats_scaler.mean_ = np.load(args.feats_stats)[0]
    feats_scaler.scale_ = np.load(args.feats_stats)[1]
    feats_scaler.n_features_in_ = feats_scaler.mean_.shape[0]

    pitch_scaler = StandardScaler()
    pitch_scaler.mean_ = np.load(args.pitch_stats)[0]
    pitch_scaler.scale_ = np.load(args.pitch_stats)[1]
    pitch_scaler.n_features_in_ = pitch_scaler.mean_.shape[0]

    energy_scaler = StandardScaler()
    energy_scaler.mean_ = np.load(args.energy_stats)[0]
    energy_scaler.scale_ = np.load(args.energy_stats)[1]
    energy_scaler.n_features_in_ = energy_scaler.mean_.shape[0]

    vocab_phones = {}
    with open(args.phones_dict, 'rt') as f:
        phn_id = [line.strip().split() for line in f.readlines()]
    for phn, id in phn_id:
        vocab_phones[phn] = int(id)

    vocab_speaker = {}
    with open(args.speaker_dict, 'rt') as f:
        spk_id = [line.strip().split() for line in f.readlines()]
    for spk, id in spk_id:
        vocab_speaker[spk] = int(id)
    # process each file
    output_metadata = []

    for item in tqdm(dataset):
        utt_id = item['utt_id']
        feats = item['feats']
        pitch = item['pitch']
        energy = item['energy']
        wave_path = item['wave']
        # normalize
        feats = feats_scaler.transform(feats)
        feats_dir = dumpdir / "data_feats"
        feats_dir.mkdir(parents=True, exist_ok=True)
        feats_path = feats_dir / f"{utt_id}_feats.npy"
        np.save(feats_path, feats.astype(np.float32), allow_pickle=False)

        pitch = pitch_scaler.transform(pitch)
        pitch_dir = dumpdir / "data_pitch"
        pitch_dir.mkdir(parents=True, exist_ok=True)
        pitch_path = pitch_dir / f"{utt_id}_pitch.npy"
        np.save(pitch_path, pitch.astype(np.float32), allow_pickle=False)

        energy = energy_scaler.transform(energy)
        energy_dir = dumpdir / "data_energy"
        energy_dir.mkdir(parents=True, exist_ok=True)
        energy_path = energy_dir / f"{utt_id}_energy.npy"
        np.save(energy_path, energy.astype(np.float32), allow_pickle=False)

        phone_ids = [vocab_phones[p] for p in item['phones']]
        spk_id = vocab_speaker[item["speaker"]]
        record = {
            "utt_id": item['utt_id'],
            "spk_id": spk_id,
            "text": phone_ids,
            "text_lengths": item['text_lengths'],
            "feats_lengths": item['feats_lengths'],
            "durations": item['durations'],
            "feats": str(feats_path),
            "pitch": str(pitch_path),
            "energy": str(energy_path),
            "wave": str(wave_path),
        }
        # add spk_emb for voice cloning
        if "spk_emb" in item:
            record["spk_emb"] = str(item["spk_emb"])

        output_metadata.append(record)
    output_metadata.sort(key=itemgetter('utt_id'))
    output_metadata_path = Path(args.dumpdir) / "metadata.jsonl"
    with jsonlines.open(output_metadata_path, 'w') as writer:
        for item in output_metadata:
            writer.write(item)
    logging.info(f"metadata dumped into {output_metadata_path}")


if __name__ == "__main__":
    main()
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
from concurrent.futures import ThreadPoolExecutor
from operator import itemgetter
from pathlib import Path
from typing import Any
from typing import Dict
from typing import List
import jsonlines
import librosa
import numpy as np
import tqdm
import yaml
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.get_feats import Energy
from paddlespeech.t2s.datasets.get_feats import LogMelFBank
from paddlespeech.t2s.datasets.get_feats import Pitch
from paddlespeech.t2s.datasets.preprocess_utils import compare_duration_and_mel_length
from paddlespeech.t2s.datasets.preprocess_utils import get_input_token
from paddlespeech.t2s.datasets.preprocess_utils import get_phn_dur
from paddlespeech.t2s.datasets.preprocess_utils import get_spk_id_map
from paddlespeech.t2s.datasets.preprocess_utils import merge_silence
from paddlespeech.t2s.utils import str2bool
def process_sentence(config: Dict[str, Any],
                     fp: Path,
                     sentences: Dict,
                     output_dir: Path,
                     mel_extractor=None,
                     pitch_extractor=None,
                     energy_extractor=None,
                     cut_sil: bool=True,
                     spk_emb_dir: Path=None,
                     token_average: bool=True):
    utt_id = fp.stem
    # for vctk
    if utt_id.endswith("_mic2"):
        utt_id = utt_id[:-5]
    record = None
    if utt_id in sentences:
        # reading; resampling may occur
        wav, _ = librosa.load(
            str(fp), sr=config.fs,
            mono=False) if "canton" in str(fp) else librosa.load(
                str(fp), sr=config.fs)
        if len(wav.shape) == 2 and "canton" in str(fp):
            # Note that Cantonese datasets should be placed in ~/datasets/canton_all; otherwise this may cause problems.
            wav = wav[0]
            wav = np.ascontiguousarray(wav)
        elif len(wav.shape) != 1:
            return record
        max_value = np.abs(wav).max()
        if max_value > 1.0:
            wav = wav / max_value
        assert len(wav.shape) == 1, f"{utt_id} is not a mono-channel audio."
        assert np.abs(wav).max(
        ) <= 1.0, f"{utt_id} does not seem to be 16 bit PCM."
        phones = sentences[utt_id][0]
        durations = sentences[utt_id][1]
        speaker = sentences[utt_id][2]
        d_cumsum = np.pad(np.array(durations).cumsum(0), (1, 0), 'constant')
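        # d_cumsum holds the cumulative frame count at each phone boundary;
        # librosa.frames_to_time converts these boundaries to seconds below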
        # slightly less precise than using *.TextGrid directly
        times = librosa.frames_to_time(
            d_cumsum, sr=config.fs, hop_length=config.n_shift)
        if cut_sil:
            start = 0
            end = d_cumsum[-1]
            if phones[0] == "sil" and len(durations) > 1:
                start = times[1]
                durations = durations[1:]
                phones = phones[1:]
            if phones[-1] == 'sil' and len(durations) > 1:
                end = times[-2]
                durations = durations[:-1]
                phones = phones[:-1]
            sentences[utt_id][0] = phones
            sentences[utt_id][1] = durations
            start, end = librosa.time_to_samples([start, end], sr=config.fs)
            wav = wav[start:end]
        # extract mel feats
        logmel = mel_extractor.get_log_mel_fbank(wav)
        # change duration according to mel_length
        compare_duration_and_mel_length(sentences, utt_id, logmel)
        # utt_id may be popped in compare_duration_and_mel_length
        if utt_id not in sentences:
            return None
        phones = sentences[utt_id][0]
        durations = sentences[utt_id][1]
        num_frames = logmel.shape[0]
        assert sum(durations) == num_frames
        mel_dir = output_dir / "data_feats"
        mel_dir.mkdir(parents=True, exist_ok=True)
        mel_path = mel_dir / (utt_id + "_feats.npy")
        np.save(mel_path, logmel)

        if wav.size < num_frames * config.n_shift:
            wav = np.pad(
                wav, (0, num_frames * config.n_shift - wav.size),
                mode="reflect")
        else:
            wav = wav[:num_frames * config.n_shift]
        wave_dir = output_dir / "data_wave"
        wave_dir.mkdir(parents=True, exist_ok=True)
        wav_path = wave_dir / (utt_id + "_wave.npy")
        # (num_samples, )
        np.save(wav_path, wav)

        # extract pitch and energy
        if token_average:
            f0 = pitch_extractor.get_pitch(
                wav,
                duration=np.array(durations),
                use_token_averaged_f0=token_average)
            if (f0 == 0).all():
                return None
            assert f0.shape[0] == len(durations)
        else:
            f0 = pitch_extractor.get_pitch(
                wav, use_token_averaged_f0=token_average)
            if (f0 == 0).all():
                return None
            f0 = f0[:num_frames]
            assert f0.shape[0] == num_frames
        f0_dir = output_dir / "data_pitch"
        f0_dir.mkdir(parents=True, exist_ok=True)
        f0_path = f0_dir / (utt_id + "_pitch.npy")
        np.save(f0_path, f0)

        if token_average:
            energy = energy_extractor.get_energy(
                wav,
                duration=np.array(durations),
                use_token_averaged_energy=token_average)
            assert energy.shape[0] == len(durations)
        else:
            energy = energy_extractor.get_energy(
                wav, use_token_averaged_energy=token_average)
            energy = energy[:num_frames]
            assert energy.shape[0] == num_frames
        energy_dir = output_dir / "data_energy"
        energy_dir.mkdir(parents=True, exist_ok=True)
        energy_path = energy_dir / (utt_id + "_energy.npy")
        np.save(energy_path, energy)

        record = {
            "utt_id": utt_id,
            "phones": phones,
            "text_lengths": len(phones),
            "feats_lengths": num_frames,
            "durations": durations,
            "feats": str(mel_path),
            "pitch": str(f0_path),
            "energy": str(energy_path),
            "wave": str(wav_path),
            "speaker": speaker
        }
        if spk_emb_dir:
            if speaker in os.listdir(spk_emb_dir):
                embed_name = utt_id + ".npy"
                embed_path = spk_emb_dir / speaker / embed_name
                if embed_path.is_file():
                    record["spk_emb"] = str(embed_path)
                else:
                    return None
    return record
def process_sentences(config,
                      fps: List[Path],
                      sentences: Dict,
                      output_dir: Path,
                      mel_extractor=None,
                      pitch_extractor=None,
                      energy_extractor=None,
                      nprocs: int=1,
                      cut_sil: bool=True,
                      spk_emb_dir: Path=None,
                      write_metadata_method: str='w',
                      token_average: bool=True):
    if nprocs == 1:
        results = []
        for fp in tqdm.tqdm(fps, total=len(fps)):
            record = process_sentence(
                config=config,
                fp=fp,
                sentences=sentences,
                output_dir=output_dir,
                mel_extractor=mel_extractor,
                pitch_extractor=pitch_extractor,
                energy_extractor=energy_extractor,
                cut_sil=cut_sil,
                spk_emb_dir=spk_emb_dir,
                token_average=token_average)
            if record:
                results.append(record)
    else:
        with ThreadPoolExecutor(nprocs) as pool:
            futures = []
            with tqdm.tqdm(total=len(fps)) as progress:
                for fp in fps:
                    # pass token_average through to the workers as well
                    future = pool.submit(process_sentence, config, fp,
                                         sentences, output_dir, mel_extractor,
                                         pitch_extractor, energy_extractor,
                                         cut_sil, spk_emb_dir, token_average)
                    future.add_done_callback(lambda p: progress.update())
                    futures.append(future)

                results = []
                for ft in futures:
                    record = ft.result()
                    if record:
                        results.append(record)

    results.sort(key=itemgetter("utt_id"))
    with jsonlines.open(output_dir / "metadata.jsonl",
                        write_metadata_method) as writer:
        for item in results:
            writer.write(item)
    print("Done")
def main():
    # parse config and args
    parser = argparse.ArgumentParser(
        description="Preprocess audio and then extract features.")

    parser.add_argument(
        "--dataset",
        default="baker",
        type=str,
        help="name of dataset, should be in {baker, aishell3, ljspeech, canton, vctk} now")
    parser.add_argument(
        "--rootdir", default=None, type=str, help="directory to dataset.")
    parser.add_argument(
        "--dumpdir",
        type=str,
        required=True,
        help="directory to dump feature files.")
    parser.add_argument(
        "--dur-file", default=None, type=str, help="path to durations.txt.")
    parser.add_argument("--config", type=str, help="jets config file.")
    parser.add_argument(
        "--num-cpu", type=int, default=1, help="number of process.")
    parser.add_argument(
        "--cut-sil",
        type=str2bool,
        default=True,
        help="whether to cut silence at the edges of the audio")
    parser.add_argument(
        "--spk_emb_dir",
        default=None,
        type=str,
        help="directory to speaker embedding files.")
    parser.add_argument(
        "--write_metadata_method",
        default="w",
        type=str,
        choices=["w", "a"],
        help="How the metadata.jsonl file is written.")
    parser.add_argument(
        "--token_average",
        type=str2bool,
        default=False,
        help="whether to average the energy and pitch according to durations")
    args = parser.parse_args()

    rootdir = Path(args.rootdir).expanduser()
    dumpdir = Path(args.dumpdir).expanduser()
    # use absolute path
    dumpdir = dumpdir.resolve()
    dumpdir.mkdir(parents=True, exist_ok=True)
    dur_file = Path(args.dur_file).expanduser()

    if args.spk_emb_dir:
        spk_emb_dir = Path(args.spk_emb_dir).expanduser().resolve()
    else:
        spk_emb_dir = None

    assert rootdir.is_dir()
    assert dur_file.is_file()

    with open(args.config, 'rt') as f:
        config = CfgNode(yaml.safe_load(f))

    sentences, speaker_set = get_phn_dur(dur_file)

    merge_silence(sentences)
    phone_id_map_path = dumpdir / "phone_id_map.txt"
    speaker_id_map_path = dumpdir / "speaker_id_map.txt"
    get_input_token(sentences, phone_id_map_path, args.dataset)
    get_spk_id_map(speaker_set, speaker_id_map_path)
    if args.dataset == "baker":
        wav_files = sorted(list((rootdir / "Wave").rglob("*.wav")))
        # split data into 3 sections
        num_train = 9800
        num_dev = 100
        train_wav_files = wav_files[:num_train]
        dev_wav_files = wav_files[num_train:num_train + num_dev]
        test_wav_files = wav_files[num_train + num_dev:]
    elif args.dataset == "aishell3":
        sub_num_dev = 5
        wav_dir = rootdir / "train" / "wav"
        train_wav_files = []
        dev_wav_files = []
        test_wav_files = []
        for speaker in os.listdir(wav_dir):
            wav_files = sorted(list((wav_dir / speaker).rglob("*.wav")))
            if len(wav_files) > 100:
                train_wav_files += wav_files[:-sub_num_dev * 2]
                dev_wav_files += wav_files[-sub_num_dev * 2:-sub_num_dev]
                test_wav_files += wav_files[-sub_num_dev:]
            else:
                train_wav_files += wav_files
    elif args.dataset == "canton":
        sub_num_dev = 5
        wav_dir = rootdir / "WAV"
        train_wav_files = []
        dev_wav_files = []
        test_wav_files = []
        for speaker in os.listdir(wav_dir):
            wav_files = sorted(list((wav_dir / speaker).rglob("*.wav")))
            if len(wav_files) > 100:
                train_wav_files += wav_files[:-sub_num_dev * 2]
                dev_wav_files += wav_files[-sub_num_dev * 2:-sub_num_dev]
                test_wav_files += wav_files[-sub_num_dev:]
            else:
                train_wav_files += wav_files
    elif args.dataset == "ljspeech":
        wav_files = sorted(list((rootdir / "wavs").rglob("*.wav")))
        # split data into 3 sections
        num_train = 12900
        num_dev = 100
        train_wav_files = wav_files[:num_train]
        dev_wav_files = wav_files[num_train:num_train + num_dev]
        test_wav_files = wav_files[num_train + num_dev:]
    elif args.dataset == "vctk":
        sub_num_dev = 5
        wav_dir = rootdir / "wav48_silence_trimmed"
        train_wav_files = []
        dev_wav_files = []
        test_wav_files = []
        for speaker in os.listdir(wav_dir):
            wav_files = sorted(list((wav_dir / speaker).rglob("*_mic2.flac")))
            if len(wav_files) > 100:
                train_wav_files += wav_files[:-sub_num_dev * 2]
                dev_wav_files += wav_files[-sub_num_dev * 2:-sub_num_dev]
                test_wav_files += wav_files[-sub_num_dev:]
            else:
                train_wav_files += wav_files
    else:
        print("dataset should be in {baker, aishell3, ljspeech, canton, vctk} now!")

    train_dump_dir = dumpdir / "train" / "raw"
    train_dump_dir.mkdir(parents=True, exist_ok=True)
    dev_dump_dir = dumpdir / "dev" / "raw"
    dev_dump_dir.mkdir(parents=True, exist_ok=True)
    test_dump_dir = dumpdir / "test" / "raw"
    test_dump_dir.mkdir(parents=True, exist_ok=True)
    # Extractor
    mel_extractor = LogMelFBank(
        sr=config.fs,
        n_fft=config.n_fft,
        hop_length=config.n_shift,
        win_length=config.win_length,
        window=config.window,
        n_mels=config.n_mels,
        fmin=config.fmin,
        fmax=config.fmax)
    pitch_extractor = Pitch(
        sr=config.fs,
        hop_length=config.n_shift,
        f0min=config.f0min,
        f0max=config.f0max)
    energy_extractor = Energy(
        n_fft=config.n_fft,
        hop_length=config.n_shift,
        win_length=config.win_length,
        window=config.window)

    # process for the 3 sections
    if train_wav_files:
        process_sentences(
            config=config,
            fps=train_wav_files,
            sentences=sentences,
            output_dir=train_dump_dir,
            mel_extractor=mel_extractor,
            pitch_extractor=pitch_extractor,
            energy_extractor=energy_extractor,
            nprocs=args.num_cpu,
            cut_sil=args.cut_sil,
            spk_emb_dir=spk_emb_dir,
            write_metadata_method=args.write_metadata_method,
            token_average=args.token_average)
    if dev_wav_files:
        process_sentences(
            config=config,
            fps=dev_wav_files,
            sentences=sentences,
            output_dir=dev_dump_dir,
            mel_extractor=mel_extractor,
            pitch_extractor=pitch_extractor,
            energy_extractor=energy_extractor,
            nprocs=args.num_cpu,
            cut_sil=args.cut_sil,
            spk_emb_dir=spk_emb_dir,
            write_metadata_method=args.write_metadata_method,
            token_average=args.token_average)
    if test_wav_files:
        process_sentences(
            config=config,
            fps=test_wav_files,
            sentences=sentences,
            output_dir=test_dump_dir,
            mel_extractor=mel_extractor,
            pitch_extractor=pitch_extractor,
            energy_extractor=energy_extractor,
            nprocs=args.num_cpu,
            cut_sil=args.cut_sil,
            spk_emb_dir=spk_emb_dir,
            write_metadata_method=args.write_metadata_method,
            token_average=args.token_average)


if __name__ == "__main__":
    main()
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from pathlib import Path
import jsonlines
import numpy as np
import paddle
import soundfile as sf
import yaml
from timer import timer
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.data_table import DataTable
from paddlespeech.t2s.models.jets import JETS
from paddlespeech.t2s.utils import str2bool
def evaluate(args):
    # construct dataset for evaluation
    with jsonlines.open(args.test_metadata, 'r') as reader:
        test_metadata = list(reader)
    # Init body.
    with open(args.config) as f:
        config = CfgNode(yaml.safe_load(f))
    print("========Args========")
    print(yaml.safe_dump(vars(args)))
    print("========Config========")
    print(config)

    fields = ["utt_id", "text"]
    converters = {}

    spk_num = None
    if args.speaker_dict is not None:
        print("multiple speaker jets!")
        with open(args.speaker_dict, 'rt') as f:
            spk_id = [line.strip().split() for line in f.readlines()]
        spk_num = len(spk_id)
        fields += ["spk_id"]
    elif args.voice_cloning:
        print("Evaluating voice cloning!")
        fields += ["spk_emb"]
    else:
        print("single speaker jets!")
    print("spk_num:", spk_num)

    test_dataset = DataTable(
        data=test_metadata,
        fields=fields,
        converters=converters, )

    with open(args.phones_dict, "r") as f:
        phn_id = [line.strip().split() for line in f.readlines()]
    vocab_size = len(phn_id)
    print("vocab_size:", vocab_size)

    odim = config.n_fft // 2 + 1
    config["model"]["generator_params"]["spks"] = spk_num
    jets = JETS(idim=vocab_size, odim=odim, **config["model"])
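    # training snapshots store the model weights under the "main_params" key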
    jets.set_state_dict(paddle.load(args.ckpt)["main_params"])
    jets.eval()

    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    N = 0
    T = 0
    for datum in test_dataset:
        utt_id = datum["utt_id"]
        phone_ids = paddle.to_tensor(datum["text"])
        with timer() as t:
            with paddle.no_grad():
                spk_emb = None
                spk_id = None
                # multi speaker
                if args.voice_cloning and "spk_emb" in datum:
                    spk_emb = paddle.to_tensor(np.load(datum["spk_emb"]))
                elif "spk_id" in datum:
                    spk_id = paddle.to_tensor(datum["spk_id"])
                out = jets.inference(
                    text=phone_ids, sids=spk_id, spembs=spk_emb)
                wav = out["wav"]
        wav = wav.numpy()
        N += wav.size
        T += t.elapse
        speed = wav.size / t.elapse
        rtf = config.fs / speed
        print(
            f"{utt_id}, wave: {wav.size}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
        )
        sf.write(str(output_dir / (utt_id + ".wav")), wav, samplerate=config.fs)
        print(f"{utt_id} done!")
    print(f"generation speed: {N / T}Hz, RTF: {config.fs / (N / T) }")
def parse_args():
    # parse args and config
    parser = argparse.ArgumentParser(description="Synthesize with JETS")
    # model
    parser.add_argument(
        '--config', type=str, default=None, help='Config of JETS.')
    parser.add_argument(
        '--ckpt', type=str, default=None, help='Checkpoint file of JETS.')
    parser.add_argument(
        "--phones_dict", type=str, default=None, help="phone vocabulary file.")
    parser.add_argument(
        "--speaker_dict", type=str, default=None, help="speaker id map file.")
    parser.add_argument(
        "--voice-cloning",
        type=str2bool,
        default=False,
        help="whether training voice cloning model.")
    # other
    parser.add_argument(
        "--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
    parser.add_argument("--test_metadata", type=str, help="test metadata.")
    parser.add_argument("--output_dir", type=str, help="output dir.")

    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    if args.ngpu == 0:
        paddle.set_device("cpu")
    elif args.ngpu > 0:
        paddle.set_device("gpu")
    else:
        print("ngpu should be >= 0!")

    evaluate(args)


if __name__ == "__main__":
    main()
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from pathlib import Path
import paddle
import soundfile as sf
import yaml
from timer import timer
from yacs.config import CfgNode
from paddlespeech.t2s.exps.syn_utils import am_to_static
from paddlespeech.t2s.exps.syn_utils import get_frontend
from paddlespeech.t2s.exps.syn_utils import get_sentences
from paddlespeech.t2s.models.jets import JETS
from paddlespeech.t2s.models.jets import JETSInference
from paddlespeech.t2s.utils import str2bool
def evaluate(args):
    # Init body.
    with open(args.config) as f:
        config = CfgNode(yaml.safe_load(f))
    print("========Args========")
    print(yaml.safe_dump(vars(args)))
    print("========Config========")
    print(config)

    sentences = get_sentences(text_file=args.text, lang=args.lang)

    # frontend
    frontend = get_frontend(lang=args.lang, phones_dict=args.phones_dict)

    # acoustic model
    am_name = args.am[:args.am.rindex('_')]
    am_dataset = args.am[args.am.rindex('_') + 1:]

    spk_num = None
    if args.speaker_dict is not None:
        print("multiple speaker jets!")
        with open(args.speaker_dict, 'rt') as f:
            spk_id = [line.strip().split() for line in f.readlines()]
        spk_num = len(spk_id)
    else:
        print("single speaker jets!")
    print("spk_num:", spk_num)

    with open(args.phones_dict, "r") as f:
        phn_id = [line.strip().split() for line in f.readlines()]
    vocab_size = len(phn_id)
    print("vocab_size:", vocab_size)

    odim = config.n_fft // 2 + 1
    config["model"]["generator_params"]["spks"] = spk_num
    jets = JETS(idim=vocab_size, odim=odim, **config["model"])
    jets.set_state_dict(paddle.load(args.ckpt)["main_params"])
    jets.eval()

    jets_inference = JETSInference(jets)
    # whether dygraph to static
    if args.inference_dir:
        jets_inference = am_to_static(
            am_inference=jets_inference,
            am=args.am,
            inference_dir=args.inference_dir,
            speaker_dict=args.speaker_dict)

    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    merge_sentences = False
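    # with merge_sentences=False, the frontend returns one phone-id sequence
    # per sentence segment; each segment is synthesized separately and the
    # resulting wavs are concatenated below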
    N = 0
    T = 0
    for utt_id, sentence in sentences:
        with timer() as t:
            if args.lang == 'zh':
                input_ids = frontend.get_input_ids(
                    sentence, merge_sentences=merge_sentences)
                phone_ids = input_ids["phone_ids"]
            elif args.lang == 'en':
                input_ids = frontend.get_input_ids(
                    sentence, merge_sentences=merge_sentences)
                phone_ids = input_ids["phone_ids"]
            else:
                print("lang should be in {'zh', 'en'}!")

            with paddle.no_grad():
                flags = 0
                for i in range(len(phone_ids)):
                    part_phone_ids = phone_ids[i]
                    spk_id = None
                    if am_dataset in {"aishell3",
                                      "vctk"} and spk_num is not None:
                        spk_id = paddle.to_tensor(args.spk_id)
                        wav = jets_inference(part_phone_ids, spk_id)
                    else:
                        wav = jets_inference(part_phone_ids)
                    if flags == 0:
                        wav_all = wav
                        flags = 1
                    else:
                        wav_all = paddle.concat([wav_all, wav])
        wav = wav_all.numpy()
        N += wav.size
        T += t.elapse
        speed = wav.size / t.elapse
        rtf = config.fs / speed
        print(
            f"{utt_id}, wave: {wav.shape}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
        )
        sf.write(str(output_dir / (utt_id + ".wav")), wav, samplerate=config.fs)
        print(f"{utt_id} done!")
    print(f"generation speed: {N / T}Hz, RTF: {config.fs / (N / T) }")
def parse_args():
    # parse args and config
    parser = argparse.ArgumentParser(description="Synthesize with JETS")
    # model
    parser.add_argument(
        '--config', type=str, default=None, help='Config of JETS.')
    parser.add_argument(
        '--ckpt', type=str, default=None, help='Checkpoint file of JETS.')
    parser.add_argument(
        "--phones_dict", type=str, default=None, help="phone vocabulary file.")
    parser.add_argument(
        "--speaker_dict", type=str, default=None, help="speaker id map file.")
    parser.add_argument(
        '--spk_id',
        type=int,
        default=0,
        help='spk id for multi speaker acoustic model')
    # other
    parser.add_argument(
        '--lang',
        type=str,
        default='zh',
        help='Choose model language. zh or en')
    parser.add_argument(
        "--inference_dir",
        type=str,
        default=None,
        help="dir to save inference models")
    parser.add_argument(
        "--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
    parser.add_argument(
        "--text",
        type=str,
        help="text to synthesize, a 'utt_id sentence' pair per line.")
    parser.add_argument("--output_dir", type=str, help="output dir.")
    parser.add_argument(
        '--am',
        type=str,
        default='jets_csmsc',
        help='Choose acoustic model type of tts task.')

    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    if args.ngpu == 0:
        paddle.set_device("cpu")
    elif args.ngpu > 0:
        paddle.set_device("gpu")
    else:
        print("ngpu should be >= 0!")

    evaluate(args)


if __name__ == "__main__":
    main()
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os
import shutil
from pathlib import Path
import jsonlines
import numpy as np
import paddle
import yaml
from paddle import DataParallel
from paddle import distributed as dist
from paddle.io import DataLoader
from paddle.optimizer import AdamW
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.am_batch_fn import jets_multi_spk_batch_fn
from paddlespeech.t2s.datasets.am_batch_fn import jets_single_spk_batch_fn
from paddlespeech.t2s.datasets.data_table import DataTable
from paddlespeech.t2s.datasets.sampler import ErnieSATSampler
from paddlespeech.t2s.models.jets import JETS
from paddlespeech.t2s.models.jets import JETSEvaluator
from paddlespeech.t2s.models.jets import JETSUpdater
from paddlespeech.t2s.modules.losses import DiscriminatorAdversarialLoss
from paddlespeech.t2s.modules.losses import FeatureMatchLoss
from paddlespeech.t2s.modules.losses import ForwardSumLoss
from paddlespeech.t2s.modules.losses import GeneratorAdversarialLoss
from paddlespeech.t2s.modules.losses import MelSpectrogramLoss
from paddlespeech.t2s.modules.losses import VarianceLoss
from paddlespeech.t2s.training.extensions.snapshot import Snapshot
from paddlespeech.t2s.training.extensions.visualizer import VisualDL
from paddlespeech.t2s.training.optimizer import scheduler_classes
from paddlespeech.t2s.training.seeding import seed_everything
from paddlespeech.t2s.training.trainer import Trainer
from paddlespeech.t2s.utils import str2bool
def train_sp(args, config):
# decides device type and whether to run in parallel
# setup running environment correctly
world_size = paddle.distributed.get_world_size()
if (not paddle.is_compiled_with_cuda()) or args.ngpu == 0:
paddle.set_device("cpu")
else:
paddle.set_device("gpu")
if world_size > 1:
paddle.distributed.init_parallel_env()
# set the random seed; it is required for multiprocess training
seed_everything(config.seed)
print(
f"rank: {dist.get_rank()}, pid: {os.getpid()}, parent_pid: {os.getppid()}",
)
# the DataLoader logger is too verbose, so disable it
logging.getLogger("DataLoader").disabled = True
fields = [
"text", "text_lengths", "feats", "feats_lengths", "wave", "durations",
"pitch", "energy"
]
converters = {
"wave": np.load,
"feats": np.load,
"pitch": np.load,
"energy": np.load,
}
spk_num = None
if args.speaker_dict is not None:
print("multiple speaker jets!")
collate_fn = jets_multi_spk_batch_fn
with open(args.speaker_dict, 'rt', encoding='utf-8') as f:
spk_id = [line.strip().split() for line in f.readlines()]
spk_num = len(spk_id)
fields += ["spk_id"]
elif args.voice_cloning:
print("Training voice cloning!")
collate_fn = jets_multi_spk_batch_fn
fields += ["spk_emb"]
converters["spk_emb"] = np.load
else:
print("single speaker jets!")
collate_fn = jets_single_spk_batch_fn
print("spk_num:", spk_num)
# construct dataset for training and validation
with jsonlines.open(args.train_metadata, 'r') as reader:
train_metadata = list(reader)
train_dataset = DataTable(
data=train_metadata,
fields=fields,
converters=converters, )
with jsonlines.open(args.dev_metadata, 'r') as reader:
dev_metadata = list(reader)
dev_dataset = DataTable(
data=dev_metadata,
fields=fields,
converters=converters, )
# collate function and dataloader
train_sampler = ErnieSATSampler(
train_dataset,
batch_size=config.batch_size,
shuffle=False,
drop_last=True)
dev_sampler = ErnieSATSampler(
dev_dataset,
batch_size=config.batch_size,
shuffle=False,
drop_last=False)
print("samplers done!")
train_dataloader = DataLoader(
train_dataset,
batch_sampler=train_sampler,
collate_fn=collate_fn,
num_workers=config.num_workers)
dev_dataloader = DataLoader(
dev_dataset,
batch_sampler=dev_sampler,
collate_fn=collate_fn,
num_workers=config.num_workers)
print("dataloaders done!")
with open(args.phones_dict, 'rt', encoding='utf-8') as f:
phn_id = [line.strip().split() for line in f.readlines()]
vocab_size = len(phn_id)
print("vocab_size:", vocab_size)
odim = config.n_mels
config["model"]["generator_params"]["spks"] = spk_num
model = JETS(idim=vocab_size, odim=odim, **config["model"])
gen_parameters = model.generator.parameters()
dis_parameters = model.discriminator.parameters()
if world_size > 1:
model = DataParallel(model)
gen_parameters = model._layers.generator.parameters()
dis_parameters = model._layers.discriminator.parameters()
print("model done!")
# loss
criterion_mel = MelSpectrogramLoss(
**config["mel_loss_params"], )
criterion_feat_match = FeatureMatchLoss(
**config["feat_match_loss_params"], )
criterion_gen_adv = GeneratorAdversarialLoss(
**config["generator_adv_loss_params"], )
criterion_dis_adv = DiscriminatorAdversarialLoss(
**config["discriminator_adv_loss_params"], )
criterion_var = VarianceLoss()
criterion_forwardsum = ForwardSumLoss()
print("criterions done!")
lr_schedule_g = scheduler_classes[config["generator_scheduler"]](
**config["generator_scheduler_params"])
optimizer_g = AdamW(
learning_rate=lr_schedule_g,
parameters=gen_parameters,
**config["generator_optimizer_params"])
lr_schedule_d = scheduler_classes[config["discriminator_scheduler"]](
**config["discriminator_scheduler_params"])
optimizer_d = AdamW(
learning_rate=lr_schedule_d,
parameters=dis_parameters,
**config["discriminator_optimizer_params"])
print("optimizers done!")
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
if dist.get_rank() == 0:
config_name = args.config.split("/")[-1]
# copy conf to output_dir
shutil.copyfile(args.config, output_dir / config_name)
updater = JETSUpdater(
model=model,
optimizers={
"generator": optimizer_g,
"discriminator": optimizer_d,
},
criterions={
"mel": criterion_mel,
"feat_match": criterion_feat_match,
"gen_adv": criterion_gen_adv,
"dis_adv": criterion_dis_adv,
"var": criterion_var,
"forwardsum": criterion_forwardsum,
},
schedulers={
"generator": lr_schedule_g,
"discriminator": lr_schedule_d,
},
dataloader=train_dataloader,
lambda_adv=config.lambda_adv,
lambda_mel=config.lambda_mel,
lambda_feat_match=config.lambda_feat_match,
lambda_var=config.lambda_var,
lambda_align=config.lambda_align,
generator_first=config.generator_first,
use_alignment_module=config.use_alignment_module,
output_dir=output_dir)
evaluator = JETSEvaluator(
model=model,
criterions={
"mel": criterion_mel,
"feat_match": criterion_feat_match,
"gen_adv": criterion_gen_adv,
"dis_adv": criterion_dis_adv,
"var": criterion_var,
"forwardsum": criterion_forwardsum,
},
dataloader=dev_dataloader,
lambda_adv=config.lambda_adv,
lambda_mel=config.lambda_mel,
lambda_feat_match=config.lambda_feat_match,
lambda_var=config.lambda_var,
lambda_align=config.lambda_align,
generator_first=config.generator_first,
use_alignment_module=config.use_alignment_module,
output_dir=output_dir)
trainer = Trainer(
updater,
stop_trigger=(config.train_max_steps, "iteration"),
out=output_dir)
if dist.get_rank() == 0:
trainer.extend(
evaluator, trigger=(config.eval_interval_steps, 'iteration'))
trainer.extend(VisualDL(output_dir), trigger=(1, 'iteration'))
trainer.extend(
Snapshot(max_size=config.num_snapshots),
trigger=(config.save_interval_steps, 'iteration'))
print("Trainer Done!")
trainer.run()
def main():
# parse args and config and redirect to train_sp
parser = argparse.ArgumentParser(description="Train a JETS model.")
parser.add_argument("--config", type=str, help="JETS config file")
parser.add_argument("--train-metadata", type=str, help="training data.")
parser.add_argument("--dev-metadata", type=str, help="dev data.")
parser.add_argument("--output-dir", type=str, help="output dir.")
parser.add_argument(
"--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
parser.add_argument(
"--phones-dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--speaker-dict",
type=str,
default=None,
help="speaker id map file for multiple speaker model.")
parser.add_argument(
"--voice-cloning",
type=str2bool,
default=False,
help="whether training voice cloning model.")
args = parser.parse_args()
with open(args.config, 'rt') as f:
config = CfgNode(yaml.safe_load(f))
print("========Args========")
print(yaml.safe_dump(vars(args)))
print("========Config========")
print(config)
print(
    f"master sees the world size: {dist.get_world_size()}, from pid: {os.getpid()}"
)
# dispatch
if args.ngpu > 1:
dist.spawn(train_sp, (args, config), nprocs=args.ngpu)
else:
train_sp(args, config)
if __name__ == "__main__":
main()
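Similarly, a minimal sketch of a single-GPU training launch without the shell wrapper; the metadata paths follow the `dump` layout from the preprocessing stage and are illustrative:

```python
# Inject CLI flags and call main(); with --ngpu 1 this runs train_sp directly,
# while --ngpu > 1 would dispatch through dist.spawn.
import sys

sys.argv = [
    "train.py",
    "--config", "conf/default.yaml",
    "--train-metadata", "dump/train/norm/metadata.jsonl",
    "--dev-metadata", "dump/dev/norm/metadata.jsonl",
    "--output-dir", "exp/default",
    "--phones-dict", "dump/phone_id_map.txt",
    "--ngpu", "1",
]
main()
```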
@@ -506,7 +506,7 @@ def am_to_static(am_inference,
         am_inference = jit.to_static(
             am_inference, input_spec=[InputSpec([-1], dtype=paddle.int64)])
-    elif am_name == 'vits':
+    elif am_name == 'vits' or am_name == 'jets':
         if am_dataset in {"aishell3", "vctk"} and speaker_dict is not None:
             am_inference = jit.to_static(
                 am_inference,
...
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .jets import *
from .jets_updater import *
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generator module in JETS.
This code is based on https://github.com/imdanboy/jets.
"""
import numpy as np
import paddle
import paddle.nn.functional as F
from numba import jit
from paddle import nn
from paddlespeech.t2s.modules.masked_fill import masked_fill
class AlignmentModule(nn.Layer):
"""Alignment Learning Framework proposed for parallel TTS models in:
https://arxiv.org/abs/2108.10447
"""
def __init__(self, adim, odim):
super().__init__()
self.t_conv1 = nn.Conv1D(adim, adim, kernel_size=3, padding=1)
self.t_conv2 = nn.Conv1D(adim, adim, kernel_size=1, padding=0)
self.f_conv1 = nn.Conv1D(odim, adim, kernel_size=3, padding=1)
self.f_conv2 = nn.Conv1D(adim, adim, kernel_size=3, padding=1)
self.f_conv3 = nn.Conv1D(adim, adim, kernel_size=1, padding=0)
def forward(self, text, feats, x_masks=None):
"""
Args:
text (Tensor): Batched text embedding (B, T_text, adim)
feats (Tensor): Batched acoustic feature (B, T_feats, odim)
x_masks (Tensor): Mask tensor (B, T_text)
Returns:
    Tensor: Log probability of attention matrix (B, T_feats, T_text)
    Tensor: Alignment score before the log-softmax (B, T_feats, T_text)
"""
text = text.transpose((0, 2, 1))
text = F.relu(self.t_conv1(text))
text = self.t_conv2(text)
text = text.transpose((0, 2, 1))
feats = feats.transpose((0, 2, 1))
feats = F.relu(self.f_conv1(feats))
feats = F.relu(self.f_conv2(feats))
feats = self.f_conv3(feats)
feats = feats.transpose((0, 2, 1))
dist = feats.unsqueeze(2) - text.unsqueeze(1)
dist = paddle.linalg.norm(dist, p=2, axis=3)
score = -dist
if x_masks is not None:
x_masks = x_masks.unsqueeze(-2)
score = masked_fill(score, x_masks, -np.inf)
log_p_attn = F.log_softmax(score, axis=-1)
return log_p_attn, score
@jit(nopython=True)
def _monotonic_alignment_search(log_p_attn):
# https://arxiv.org/abs/2005.11129
T_mel = log_p_attn.shape[0]
T_inp = log_p_attn.shape[1]
Q = np.full((T_inp, T_mel), fill_value=-np.inf)
log_prob = log_p_attn.transpose(1, 0) # -> (T_inp,T_mel)
# 1. Q <- init first row for all j
for j in range(T_mel):
Q[0, j] = log_prob[0, :j + 1].sum()
# 2.
for j in range(1, T_mel):
for i in range(1, min(j + 1, T_inp)):
Q[i, j] = max(Q[i - 1, j - 1], Q[i, j - 1]) + log_prob[i, j]
# 3.
A = np.full((T_mel, ), fill_value=T_inp - 1)
for j in range(T_mel - 2, -1, -1): # T_mel-2, ..., 0
# 'i' in {A[j+1]-1, A[j+1]}
i_a = A[j + 1] - 1
i_b = A[j + 1]
if i_b == 0:
argmax_i = 0
elif Q[i_a, j] >= Q[i_b, j]:
argmax_i = i_a
else:
argmax_i = i_b
A[j] = argmax_i
return A
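To make the dynamic program concrete, a toy sketch (not part of the commit): three frames aligned to two tokens, where the optimal monotonic path stays on token 0 for two frames and then advances.

```python
# Toy check of _monotonic_alignment_search on a (T_mel=3, T_text=2) matrix:
# each frame is assigned one token, and the assignment can only stay or advance.
import numpy as np

toy_log_p = np.log(np.array([
    [0.9, 0.1],  # frame 0 strongly prefers token 0
    [0.6, 0.4],  # frame 1 is ambiguous
    [0.2, 0.8],  # frame 2 strongly prefers token 1
]))
print(_monotonic_alignment_search(toy_log_p))  # -> [0 0 1]
```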
def viterbi_decode(log_p_attn, text_lengths, feats_lengths):
"""
Args:
log_p_attn (Tensor):
Batched log probability of attention matrix (B, T_feats, T_text)
text_lengths (Tensor):
Text length tensor (B,)
feats_lengths (Tensor):
Feature length tensor (B,)
Returns:
Tensor:
Batched token duration extracted from `log_p_attn` (B,T_text)
Tensor:
Binarization loss tensor (scalar)
"""
B = log_p_attn.shape[0]
T_text = log_p_attn.shape[2]
device = log_p_attn.place
bin_loss = 0
ds = paddle.zeros((B, T_text), dtype="int32")
for b in range(B):
cur_log_p_attn = log_p_attn[b, :feats_lengths[b], :text_lengths[b]]
viterbi = _monotonic_alignment_search(cur_log_p_attn.numpy())
_ds = np.bincount(viterbi)
ds[b, :len(_ds)] = paddle.to_tensor(
_ds, place=device, dtype="int32")
t_idx = paddle.arange(feats_lengths[b])
bin_loss = bin_loss - cur_log_p_attn[t_idx, viterbi].mean()
bin_loss = bin_loss / B
return ds, bin_loss
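Continuing the toy example above (again an illustration, not from the commit), `viterbi_decode` batches the search and converts the chosen path into per-token durations plus a binarization loss:

```python
# Token 0 is assigned frames {0, 1} and token 1 frame {2}, so durations are [2, 1].
import numpy as np
import paddle

toy_log_p = np.log(np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]))
log_p_attn = paddle.to_tensor(toy_log_p, dtype="float32").unsqueeze(0)  # (1, 3, 2)
ds, bin_loss = viterbi_decode(
    log_p_attn,
    text_lengths=paddle.to_tensor([2], dtype="int64"),
    feats_lengths=paddle.to_tensor([3], dtype="int64"))
print(ds)  # -> [[2, 1]]
```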
@jit(nopython=True)
def _average_by_duration(ds, xs, text_lengths, feats_lengths):
B = ds.shape[0]
# xs_avg = np.zeros_like(ds)
xs_avg = np.zeros(shape=ds.shape, dtype=np.float32)
ds = ds.astype(np.int32)
for b in range(B):
t_text = text_lengths[b]
t_feats = feats_lengths[b]
d = ds[b, :t_text]
d_cumsum = d.cumsum()
d_cumsum = [0] + list(d_cumsum)
x = xs[b, :t_feats]
for n, (start, end) in enumerate(zip(d_cumsum[:-1], d_cumsum[1:])):
if len(x[start:end]) != 0:
xs_avg[b, n] = x[start:end].mean()
else:
xs_avg[b, n] = 0
return xs_avg
def average_by_duration(ds, xs, text_lengths, feats_lengths):
"""
Args:
ds (Tensor):
Batched token duration (B,T_text)
xs (Tensor):
Batched feature sequences to be averaged (B,T_feats)
text_lengths (Tensor):
Text length tensor (B,)
feats_lengths (Tensor):
Feature length tensor (B,)
Returns:
Tensor: Batched feature averaged according to the token duration (B, T_text)
"""
device = ds.place
args = [ds, xs, text_lengths, feats_lengths]
args = [arg.numpy() for arg in args]
xs_avg = _average_by_duration(*args)
xs_avg = paddle.to_tensor(xs_avg, place=device)
return xs_avg
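A small sketch with made-up values showing how a frame-level track such as pitch collapses to token level under the durations from the alignment:

```python
# Two tokens spanning 2 and 3 frames: each token receives the mean of its span.
import paddle

ds = paddle.to_tensor([[2, 3]], dtype="int32")      # (B, T_text) durations
xs = paddle.to_tensor([[1.0, 3.0, 2.0, 2.0, 5.0]])  # (B, T_feats) frame values
text_lengths = paddle.to_tensor([2], dtype="int64")
feats_lengths = paddle.to_tensor([5], dtype="int64")
print(average_by_duration(ds, xs, text_lengths, feats_lengths))
# -> [[2.0, 3.0]]: mean(1, 3) and mean(2, 2, 5)
```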
(Two file diffs are collapsed here and not shown.)
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generator module in JETS.
This code is based on https://github.com/imdanboy/jets.
"""
import logging
from typing import Dict
import paddle
from paddle import distributed as dist
from paddle.io import DataLoader
from paddle.nn import Layer
from paddle.optimizer import Optimizer
from paddle.optimizer.lr import LRScheduler
from paddlespeech.t2s.modules.nets_utils import get_segments
from paddlespeech.t2s.training.extensions.evaluator import StandardEvaluator
from paddlespeech.t2s.training.reporter import report
from paddlespeech.t2s.training.updaters.standard_updater import StandardUpdater
from paddlespeech.t2s.training.updaters.standard_updater import UpdaterState
logging.basicConfig(
format='%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s',
datefmt='[%Y-%m-%d %H:%M:%S]')
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
class JETSUpdater(StandardUpdater):
def __init__(self,
model: Layer,
optimizers: Dict[str, Optimizer],
criterions: Dict[str, Layer],
schedulers: Dict[str, LRScheduler],
dataloader: DataLoader,
generator_train_start_steps: int=0,
discriminator_train_start_steps: int=100000,
lambda_adv: float=1.0,
lambda_mel: float=45.0,
lambda_feat_match: float=2.0,
lambda_var: float=1.0,
lambda_align: float=2.0,
generator_first: bool=False,
use_alignment_module: bool=False,
output_dir=None):
# it is designed to hold multiple models
# the input is a single model, but the parent class's init() is not used,
# so this part needs to be rewritten here
models = {"main": model}
self.models: Dict[str, Layer] = models
# self.model = model
self.model = model._layers if isinstance(model,
paddle.DataParallel) else model
self.optimizers = optimizers
self.optimizer_g: Optimizer = optimizers['generator']
self.optimizer_d: Optimizer = optimizers['discriminator']
self.criterions = criterions
self.criterion_mel = criterions['mel']
self.criterion_feat_match = criterions['feat_match']
self.criterion_gen_adv = criterions["gen_adv"]
self.criterion_dis_adv = criterions["dis_adv"]
self.criterion_var = criterions["var"]
self.criterion_forwardsum = criterions["forwardsum"]
self.schedulers = schedulers
self.scheduler_g = schedulers['generator']
self.scheduler_d = schedulers['discriminator']
self.dataloader = dataloader
self.generator_train_start_steps = generator_train_start_steps
self.discriminator_train_start_steps = discriminator_train_start_steps
self.lambda_adv = lambda_adv
self.lambda_mel = lambda_mel
self.lambda_feat_match = lambda_feat_match
self.lambda_var = lambda_var
self.lambda_align = lambda_align
self.use_alignment_module = use_alignment_module
if generator_first:
self.turns = ["generator", "discriminator"]
else:
self.turns = ["discriminator", "generator"]
self.state = UpdaterState(iteration=0, epoch=0)
self.train_iterator = iter(self.dataloader)
log_file = output_dir / 'worker_{}.log'.format(dist.get_rank())
self.filehandler = logging.FileHandler(str(log_file))
logger.addHandler(self.filehandler)
self.logger = logger
self.msg = ""
def update_core(self, batch):
self.msg = "Rank: {}, ".format(dist.get_rank())
losses_dict = {}
for turn in self.turns:
speech = batch["speech"]
speech = speech.unsqueeze(1)
text_lengths = batch["text_lengths"]
feats_lengths = batch["feats_lengths"]
outs = self.model(
text=batch["text"],
text_lengths=batch["text_lengths"],
feats=batch["feats"],
feats_lengths=batch["feats_lengths"],
durations=batch["durations"],
durations_lengths=batch["durations_lengths"],
pitch=batch["pitch"],
energy=batch["energy"],
sids=batch.get("spk_id", None),
spembs=batch.get("spk_emb", None),
forward_generator=turn == "generator",
use_alignment_module=self.use_alignment_module)
# Generator
if turn == "generator":
# parse outputs
speech_hat_, bin_loss, log_p_attn, start_idxs, d_outs, ds, p_outs, ps, e_outs, es = outs
speech_ = get_segments(
x=speech,
start_idxs=start_idxs *
self.model.generator.upsample_factor,
segment_size=self.model.generator.segment_size *
self.model.generator.upsample_factor, )
# calculate discriminator outputs
p_hat = self.model.discriminator(speech_hat_)
with paddle.no_grad():
# do not store discriminator gradient in generator turn
p = self.model.discriminator(speech_)
# calculate losses
mel_loss = self.criterion_mel(speech_hat_, speech_)
adv_loss = self.criterion_gen_adv(p_hat)
feat_match_loss = self.criterion_feat_match(p_hat, p)
dur_loss, pitch_loss, energy_loss = self.criterion_var(
d_outs, ds, p_outs, ps, e_outs, es, text_lengths)
mel_loss = mel_loss * self.lambda_mel
adv_loss = adv_loss * self.lambda_adv
feat_match_loss = feat_match_loss * self.lambda_feat_match
g_loss = mel_loss + adv_loss + feat_match_loss
var_loss = (
dur_loss + pitch_loss + energy_loss) * self.lambda_var
gen_loss = g_loss + var_loss #+ align_loss
report("train/generator_loss", float(gen_loss))
report("train/generator_generator_loss", float(g_loss))
report("train/generator_variance_loss", float(var_loss))
report("train/generator_generator_mel_loss", float(mel_loss))
report("train/generator_generator_adv_loss", float(adv_loss))
report("train/generator_generator_feat_match_loss",
float(feat_match_loss))
report("train/generator_variance_dur_loss", float(dur_loss))
report("train/generator_variance_pitch_loss", float(pitch_loss))
report("train/generator_variance_energy_loss",
float(energy_loss))
losses_dict["generator_loss"] = float(gen_loss)
losses_dict["generator_generator_loss"] = float(g_loss)
losses_dict["generator_variance_loss"] = float(var_loss)
losses_dict["generator_generator_mel_loss"] = float(mel_loss)
losses_dict["generator_generator_adv_loss"] = float(adv_loss)
losses_dict["generator_generator_feat_match_loss"] = float(
feat_match_loss)
losses_dict["generator_variance_dur_loss"] = float(dur_loss)
losses_dict["generator_variance_pitch_loss"] = float(pitch_loss)
losses_dict["generator_variance_energy_loss"] = float(
energy_loss)
if self.use_alignment_module:
forwardsum_loss = self.criterion_forwardsum(
log_p_attn, text_lengths, feats_lengths)
align_loss = (
forwardsum_loss + bin_loss) * self.lambda_align
report("train/generator_alignment_loss", float(align_loss))
report("train/generator_alignment_forwardsum_loss",
float(forwardsum_loss))
report("train/generator_alignment_bin_loss",
float(bin_loss))
losses_dict["generator_alignment_loss"] = float(align_loss)
losses_dict["generator_alignment_forwardsum_loss"] = float(
forwardsum_loss)
losses_dict["generator_alignment_bin_loss"] = float(
bin_loss)
self.optimizer_g.clear_grad()
gen_loss.backward()
self.optimizer_g.step()
self.scheduler_g.step()
# reset cache
if self.model.reuse_cache_gen or not self.model.training:
self.model._cache = None
# Discriminator
elif turn == "discriminator":
# parse outputs
speech_hat_, _, _, start_idxs, *_ = outs
speech_ = get_segments(
x=speech,
start_idxs=start_idxs *
self.model.generator.upsample_factor,
segment_size=self.model.generator.segment_size *
self.model.generator.upsample_factor, )
# calculate discriminator outputs
p_hat = self.model.discriminator(speech_hat_.detach())
p = self.model.discriminator(speech_)
# calculate losses
real_loss, fake_loss = self.criterion_dis_adv(p_hat, p)
dis_loss = real_loss + fake_loss
report("train/real_loss", float(real_loss))
report("train/fake_loss", float(fake_loss))
report("train/discriminator_loss", float(dis_loss))
losses_dict["real_loss"] = float(real_loss)
losses_dict["fake_loss"] = float(fake_loss)
losses_dict["discriminator_loss"] = float(dis_loss)
self.optimizer_d.clear_grad()
dis_loss.backward()
self.optimizer_d.step()
self.scheduler_d.step()
# reset cache
if self.model.reuse_cache_dis or not self.model.training:
self.model._cache = None
self.msg += ', '.join('{}: {:>.6f}'.format(k, v)
for k, v in losses_dict.items())
class JETSEvaluator(StandardEvaluator):
def __init__(self,
model,
criterions: Dict[str, Layer],
dataloader: DataLoader,
lambda_adv: float=1.0,
lambda_mel: float=45.0,
lambda_feat_match: float=2.0,
lambda_var: float=1.0,
lambda_align: float=2.0,
generator_first: bool=False,
use_alignment_module: bool=False,
output_dir=None):
# the input is a single model, but the parent class's init() is not used,
# so this part needs to be rewritten here
models = {"main": model}
self.models: Dict[str, Layer] = models
# self.model = model
self.model = model._layers if isinstance(model,
paddle.DataParallel) else model
self.criterions = criterions
self.criterion_mel = criterions['mel']
self.criterion_feat_match = criterions['feat_match']
self.criterion_gen_adv = criterions["gen_adv"]
self.criterion_dis_adv = criterions["dis_adv"]
self.criterion_var = criterions["var"]
self.criterion_forwardsum = criterions["forwardsum"]
self.dataloader = dataloader
self.lambda_adv = lambda_adv
self.lambda_mel = lambda_mel
self.lambda_feat_match = lambda_feat_match
self.lambda_var = lambda_var
self.lambda_align = lambda_align
self.use_alignment_module = use_alignment_module
if generator_first:
self.turns = ["generator", "discriminator"]
else:
self.turns = ["discriminator", "generator"]
log_file = output_dir / 'worker_{}.log'.format(dist.get_rank())
self.filehandler = logging.FileHandler(str(log_file))
logger.addHandler(self.filehandler)
self.logger = logger
self.msg = ""
def evaluate_core(self, batch):
# logging.debug("Evaluate: ")
self.msg = "Evaluate: "
losses_dict = {}
for turn in self.turns:
speech = batch["speech"]
speech = speech.unsqueeze(1)
text_lengths = batch["text_lengths"]
feats_lengths = batch["feats_lengths"]
outs = self.model(
text=batch["text"],
text_lengths=batch["text_lengths"],
feats=batch["feats"],
feats_lengths=batch["feats_lengths"],
durations=batch["durations"],
durations_lengths=batch["durations_lengths"],
pitch=batch["pitch"],
energy=batch["energy"],
sids=batch.get("spk_id", None),
spembs=batch.get("spk_emb", None),
forward_generator=turn == "generator",
use_alignment_module=self.use_alignment_module)
# Generator
if turn == "generator":
# parse outputs
speech_hat_, bin_loss, log_p_attn, start_idxs, d_outs, ds, p_outs, ps, e_outs, es = outs
speech_ = get_segments(
x=speech,
start_idxs=start_idxs *
self.model.generator.upsample_factor,
segment_size=self.model.generator.segment_size *
self.model.generator.upsample_factor, )
# calculate discriminator outputs
p_hat = self.model.discriminator(speech_hat_)
with paddle.no_grad():
# do not store discriminator gradient in generator turn
p = self.model.discriminator(speech_)
# calculate losses
mel_loss = self.criterion_mel(speech_hat_, speech_)
adv_loss = self.criterion_gen_adv(p_hat)
feat_match_loss = self.criterion_feat_match(p_hat, p)
dur_loss, pitch_loss, energy_loss = self.criterion_var(
d_outs, ds, p_outs, ps, e_outs, es, text_lengths)
mel_loss = mel_loss * self.lambda_mel
adv_loss = adv_loss * self.lambda_adv
feat_match_loss = feat_match_loss * self.lambda_feat_match
g_loss = mel_loss + adv_loss + feat_match_loss
var_loss = (
dur_loss + pitch_loss + energy_loss) * self.lambda_var
gen_loss = g_loss + var_loss #+ align_loss
report("eval/generator_loss", float(gen_loss))
report("eval/generator_generator_loss", float(g_loss))
report("eval/generator_variance_loss", float(var_loss))
report("eval/generator_generator_mel_loss", float(mel_loss))
report("eval/generator_generator_adv_loss", float(adv_loss))
report("eval/generator_generator_feat_match_loss",
float(feat_match_loss))
report("eval/generator_variance_dur_loss", float(dur_loss))
report("eval/generator_variance_pitch_loss", float(pitch_loss))
report("eval/generator_variance_energy_loss",
float(energy_loss))
losses_dict["generator_loss"] = float(gen_loss)
losses_dict["generator_generator_loss"] = float(g_loss)
losses_dict["generator_variance_loss"] = float(var_loss)
losses_dict["generator_generator_mel_loss"] = float(mel_loss)
losses_dict["generator_generator_adv_loss"] = float(adv_loss)
losses_dict["generator_generator_feat_match_loss"] = float(
feat_match_loss)
losses_dict["generator_variance_dur_loss"] = float(dur_loss)
losses_dict["generator_variance_pitch_loss"] = float(pitch_loss)
losses_dict["generator_variance_energy_loss"] = float(
energy_loss)
if self.use_alignment_module:
forwardsum_loss = self.criterion_forwardsum(
log_p_attn, text_lengths, feats_lengths)
align_loss = (
forwardsum_loss + bin_loss) * self.lambda_align
report("eval/generator_alignment_loss", float(align_loss))
report("eval/generator_alignment_forwardsum_loss",
float(forwardsum_loss))
report("eval/generator_alignment_bin_loss", float(bin_loss))
losses_dict["generator_alignment_loss"] = float(align_loss)
losses_dict["generator_alignment_forwardsum_loss"] = float(
forwardsum_loss)
losses_dict["generator_alignment_bin_loss"] = float(
bin_loss)
# reset cache
if self.model.reuse_cache_gen or not self.model.training:
self.model._cache = None
# Discriminator
elif turn == "discriminator":
# parse outputs
speech_hat_, _, _, start_idxs, *_ = outs
speech_ = get_segments(
x=speech,
start_idxs=start_idxs *
self.model.generator.upsample_factor,
segment_size=self.model.generator.segment_size *
self.model.generator.upsample_factor, )
# calculate discriminator outputs
p_hat = self.model.discriminator(speech_hat_.detach())
p = self.model.discriminator(speech_)
# calculate losses
real_loss, fake_loss = self.criterion_dis_adv(p_hat, p)
dis_loss = real_loss + fake_loss
report("eval/real_loss", float(real_loss))
report("eval/fake_loss", float(fake_loss))
report("eval/discriminator_loss", float(dis_loss))
losses_dict["real_loss"] = float(real_loss)
losses_dict["fake_loss"] = float(fake_loss)
losses_dict["discriminator_loss"] = float(dis_loss)
# reset cache
if self.model.reuse_cache_dis or not self.model.training:
self.model._cache = None
self.msg += ', '.join('{}: {:>.6f}'.format(k, v)
for k, v in losses_dict.items())
self.logger.info(self.msg)
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generator module in JETS.
This code is based on https://github.com/imdanboy/jets.
"""
import paddle
import paddle.nn.functional as F
from paddle import nn
from paddlespeech.t2s.modules.masked_fill import masked_fill
class GaussianUpsampling(nn.Layer):
"""
Gaussian upsampling with fixed temperature as in:
https://arxiv.org/abs/2010.04301
"""
def __init__(self, delta=0.1):
super().__init__()
self.delta = delta
def forward(self, hs, ds, h_masks=None, d_masks=None):
"""
Args:
hs (Tensor): Batched hidden state to be expanded (B, T_text, adim)
ds (Tensor): Batched token duration (B, T_text)
h_masks (Tensor): Mask tensor (B,T_feats)
d_masks (Tensor): Mask tensor (B,T_text)
Returns:
Tensor: Expanded hidden state (B, T_feats, adim)
"""
B = ds.shape[0]
if h_masks is None:
T_feats = paddle.to_tensor(ds.sum(), dtype="int32")
else:
T_feats = h_masks.shape[-1]
t = paddle.to_tensor(
paddle.arange(0, T_feats).unsqueeze(0).tile([B, 1]),
dtype="float32")
if h_masks is not None:
t = t * paddle.to_tensor(h_masks, dtype="float32")
c = ds.cumsum(axis=-1) - ds / 2
energy = -1 * self.delta * (t.unsqueeze(-1) - c.unsqueeze(1))**2
if d_masks is not None:
d_masks = ~(d_masks.unsqueeze(1))
d_masks.stop_gradient = True
d_masks = d_masks.tile([1, T_feats, 1])
energy = masked_fill(energy, d_masks, -float("inf"))
p_attn = F.softmax(energy, axis=2) # (B, T_feats, T_text)
hs = paddle.matmul(p_attn, hs)
return hs
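A minimal sketch (shapes and values are assumptions): two token embeddings with durations 2 and 3 expand to five frames, each frame softly attending to tokens whose duration-derived centers lie nearby:

```python
import paddle

upsampler = GaussianUpsampling(delta=0.1)
hs = paddle.randn([1, 2, 4])         # (B, T_text, adim) token embeddings
ds = paddle.to_tensor([[2.0, 3.0]])  # (B, T_text) durations in frames
out = upsampler(hs, ds)
print(out.shape)  # -> [1, 5, 4], i.e. (B, T_feats, adim) with T_feats = ds.sum()
```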
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import math
+from typing import Tuple

 import librosa
 import numpy as np
@@ -19,8 +20,13 @@ import paddle
 from paddle import nn
 from paddle.nn import functional as F
 from scipy import signal
+from scipy.stats import betabinom
+from typeguard import check_argument_types

 from paddlespeech.t2s.modules.nets_utils import make_non_pad_mask
+from paddlespeech.t2s.modules.predictor.duration_predictor import (
+    DurationPredictorLoss,  # noqa: H301
+)

 # Losses for WaveRNN
@@ -1126,3 +1132,195 @@ class MLMLoss(nn.Layer):
             text_masked_pos_reshape) / paddle.sum((text_masked_pos) + 1e-10)
         return mlm_loss, text_mlm_loss
class VarianceLoss(nn.Layer):
def __init__(self, use_masking: bool=True,
use_weighted_masking: bool=False):
"""Initialize JETS variance loss module.
Args:
use_masking (bool): Whether to apply masking for padded part in loss
calculation.
use_weighted_masking (bool): Whether to weighted masking in loss
calculation.
"""
assert check_argument_types()
super().__init__()
assert (use_masking != use_weighted_masking) or not use_masking
self.use_masking = use_masking
self.use_weighted_masking = use_weighted_masking
# define criterions
reduction = "none" if self.use_weighted_masking else "mean"
self.mse_criterion = nn.MSELoss(reduction=reduction)
self.duration_criterion = DurationPredictorLoss(reduction=reduction)
def forward(
self,
d_outs: paddle.Tensor,
ds: paddle.Tensor,
p_outs: paddle.Tensor,
ps: paddle.Tensor,
e_outs: paddle.Tensor,
es: paddle.Tensor,
ilens: paddle.Tensor,
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor, paddle.Tensor]:
"""Calculate forward propagation.
Args:
d_outs (LongTensor): Batch of outputs of duration predictor (B, T_text).
ds (LongTensor): Batch of durations (B, T_text).
p_outs (Tensor): Batch of outputs of pitch predictor (B, T_text, 1).
ps (Tensor): Batch of target token-averaged pitch (B, T_text, 1).
e_outs (Tensor): Batch of outputs of energy predictor (B, T_text, 1).
es (Tensor): Batch of target token-averaged energy (B, T_text, 1).
ilens (LongTensor): Batch of the lengths of each input (B,).
Returns:
Tensor: Duration predictor loss value.
Tensor: Pitch predictor loss value.
Tensor: Energy predictor loss value.
"""
# apply mask to remove padded part
if self.use_masking:
duration_masks = paddle.to_tensor(
make_non_pad_mask(ilens), place=ds.place)
d_outs = d_outs.masked_select(duration_masks)
ds = ds.masked_select(duration_masks)
pitch_masks = paddle.to_tensor(
make_non_pad_mask(ilens).unsqueeze(-1), place=ds.place)
p_outs = p_outs.masked_select(pitch_masks)
e_outs = e_outs.masked_select(pitch_masks)
ps = ps.masked_select(pitch_masks)
es = es.masked_select(pitch_masks)
# calculate loss
duration_loss = self.duration_criterion(d_outs, ds)
pitch_loss = self.mse_criterion(p_outs, ps)
energy_loss = self.mse_criterion(e_outs, es)
# make weighted mask and apply it
        if self.use_weighted_masking:
            duration_masks = paddle.to_tensor(
                make_non_pad_mask(ilens), place=ds.place)
            duration_weights = (
                duration_masks.cast("float32") /
                duration_masks.cast("float32").sum(axis=1, keepdim=True))
            duration_weights /= ds.shape[0]
            # apply weight
            duration_loss = (duration_loss.multiply(duration_weights)
                             .masked_select(duration_masks).sum())
            pitch_masks = duration_masks.unsqueeze(-1)
            pitch_weights = duration_weights.unsqueeze(-1)
            pitch_loss = pitch_loss.multiply(pitch_weights).masked_select(
                pitch_masks).sum()
            energy_loss = (energy_loss.multiply(pitch_weights)
                           .masked_select(pitch_masks).sum())
return duration_loss, pitch_loss, energy_loss
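A usage sketch under the default masked setting; the shapes follow the docstring and the values are placeholders:

```python
import paddle

criterion = VarianceLoss(use_masking=True, use_weighted_masking=False)
B, T_text = 2, 4
d_outs = paddle.zeros([B, T_text])            # predicted log-durations
ds = paddle.ones([B, T_text], dtype="int64")  # target durations in frames
p_outs = paddle.zeros([B, T_text, 1])         # predicted token-averaged pitch
ps = paddle.zeros([B, T_text, 1])             # target pitch
e_outs = paddle.zeros([B, T_text, 1])         # predicted token-averaged energy
es = paddle.zeros([B, T_text, 1])             # target energy
ilens = paddle.to_tensor([4, 3], dtype="int64")  # true text lengths
dur_loss, pitch_loss, energy_loss = criterion(
    d_outs, ds, p_outs, ps, e_outs, es, ilens)
```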
class ForwardSumLoss(nn.Layer):
"""
https://openreview.net/forum?id=0NQwnnwAORi
"""
def __init__(self, cache_prior: bool=True):
"""
Args:
cache_prior (bool): Whether to cache beta-binomial prior
"""
super().__init__()
self.cache_prior = cache_prior
self._cache = {}
def forward(
self,
log_p_attn: paddle.Tensor,
ilens: paddle.Tensor,
olens: paddle.Tensor,
blank_prob: float=np.e**-1, ) -> paddle.Tensor:
"""
Args:
log_p_attn (Tensor): Batch of log probability of attention matrix (B, T_feats, T_text).
ilens (Tensor): Batch of the lengths of each input (B,).
olens (Tensor): Batch of the lengths of each target (B,).
blank_prob (float): Blank symbol probability
Returns:
Tensor: forwardsum loss value.
"""
B = log_p_attn.shape[0]
# add beta-binomial prior
bb_prior = self._generate_prior(ilens, olens)
bb_prior = paddle.to_tensor(
bb_prior, dtype=log_p_attn.dtype, place=log_p_attn.place)
log_p_attn = log_p_attn + bb_prior
# a row must be added to the attention matrix to account for blank token of CTC loss
# (B,T_feats,T_text+1)
log_p_attn_pd = F.pad(
log_p_attn, (0, 0, 0, 0, 1, 0), value=np.log(blank_prob))
loss = 0
for bidx in range(B):
# construct the target sequence;
# every text token is mapped to a unique sequence number.
target_seq = paddle.arange(
1, ilens[bidx] + 1, dtype="int32").unsqueeze(0)
cur_log_p_attn_pd = log_p_attn_pd[bidx, :olens[bidx], :ilens[
bidx] + 1].unsqueeze(1) # (T_feats,1,T_text+1)
# paddle's F.ctc_loss expects log_probs of shape (T, B, C), hence the unsqueeze
loss += F.ctc_loss(
log_probs=cur_log_p_attn_pd,
labels=target_seq,
input_lengths=olens[bidx:bidx + 1],
label_lengths=ilens[bidx:bidx + 1])
loss = loss / B
return loss
def _generate_prior(self, text_lengths, feats_lengths,
w=1) -> paddle.Tensor:
"""Generate alignment prior formulated as beta-binomial distribution
Args:
text_lengths (Tensor): Batch of the lengths of each input (B,).
feats_lengths (Tensor): Batch of the lengths of each target (B,).
w (float): Scaling factor; lower -> wider the width
Returns:
Tensor: Batched 2d static prior matrix (B, T_feats, T_text)
"""
B = len(text_lengths)
T_text = text_lengths.max()
T_feats = feats_lengths.max()
bb_prior = paddle.full((B, T_feats, T_text), fill_value=-np.inf)
for bidx in range(B):
T = feats_lengths[bidx].item()
N = text_lengths[bidx].item()
key = str(T) + ',' + str(N)
if self.cache_prior and key in self._cache:
prob = self._cache[key]
else:
alpha = w * np.arange(1, T + 1, dtype=float) # (T,)
beta = w * np.array([T - t + 1 for t in alpha])
k = np.arange(N)
batched_k = k[..., None] # (N,1)
prob = betabinom.pmf(batched_k, N, alpha, beta) # (N,T)
# store cache
if self.cache_prior and key not in self._cache:
self._cache[key] = prob
prob = paddle.to_tensor(
prob, place=text_lengths.place, dtype="float32").transpose(
(1, 0)) # -> (T,N)
bb_prior[bidx, :T, :N] = prob
return bb_prior
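As a sanity check (illustrative, not from the commit), the generated prior is a batch of (T_feats, T_text) matrices whose mass concentrates near the diagonal, biasing the soft alignment toward monotonic paths:

```python
import paddle

loss_fn = ForwardSumLoss()
ilens = paddle.to_tensor([3], dtype="int64")  # text lengths
olens = paddle.to_tensor([6], dtype="int64")  # feature lengths
prior = loss_fn._generate_prior(ilens, olens)
print(prior.shape)  # -> [1, 6, 3]; regions beyond each utterance's lengths stay -inf
```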