Robust_Fine_Grained_Prosody_Control
1.0.0
PyTorch implementation of Robust and Fine-Grained Prosody Control of End-to-End Speech Synthesis (unofficial).
This implementation uses the LibriTTS dataset.
Setup:
1. Clone this repo: git clone https://github.com/keonlee9420/Robust_Fine_Grained_Prosody_Control.git
2. CD into this repo: cd Robust_Fine_Grained_Prosody_Control
3. Initialize the submodule: git submodule init; git submodule update
4. Update the .wav paths in the filelists: sed -i -- 's,/home/keon/speech-datasets/LibriTTS_preprocessed/train-clean-100/,your_libritts_dataset_folder/,g' filelists/*.txt
   Alternatively, set load_mel_from_disk=True in hparams.py and update the mel-spectrogram paths (see the precomputation sketch after the training commands below).
5. Install the Python requirements: pip install -r requirements.txt

Training:
1. python train.py --output_directory=outdir --log_directory=logdir
2. (Optional) Monitor progress with TensorBoard: tensorboard --logdir=outdir/logdir

Multi-GPU (distributed) and automatic mixed precision training: (TBD)
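If you take the load_mel_from_disk route, the mel-spectrograms have to be precomputed and the filelists pointed at them. Below is a minimal sketch of that precomputation using librosa; the sample rate, STFT, and mel settings shown are common Tacotron 2 defaults, not values taken from this repo's hparams.py, and the .npy output format is an assumption about the data loader, so verify both before relying on it.

```python
# Minimal sketch: precompute mel-spectrograms for load_mel_from_disk=True.
# ASSUMPTIONS (check against hparams.py and the data loader): 22050 Hz audio,
# 80 mel bands, n_fft=1024, hop_length=256, win_length=1024, log-compressed
# mels saved as .npy next to each .wav file.
import glob
import os

import librosa
import numpy as np

WAV_DIR = "your_libritts_dataset_folder"  # same placeholder as in the sed command above

for wav_path in glob.glob(os.path.join(WAV_DIR, "**", "*.wav"), recursive=True):
    audio, sr = librosa.load(wav_path, sr=22050)
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=1024, hop_length=256, win_length=1024, n_mels=80
    )
    log_mel = np.log(np.clip(mel, 1e-5, None))  # dynamic-range compression
    np.save(os.path.splitext(wav_path)[0] + ".npy", log_mel.astype(np.float32))
```

After precomputing, point the paths in filelists/*.txt at the resulting .npy files.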
Inference:
1. Synthesize a single text with one reference audio: python inference.py -c checkpoint/path -r reference_audio/wav/path -t "synthesize text"
2. Synthesize with every reference audio in a directory: python inference_all.py -c checkpoint/path -r reference_audios/dir/path

N.b. When performing mel-spectrogram-to-audio synthesis, make sure Tacotron 2 and the mel decoder were trained on the same mel-spectrogram representation.
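As a quick way to check which mel representation a saved spectrogram actually uses (per the note above), a rough Griffin-Lim reconstruction can help. The sketch below is only an illustration: the file name is hypothetical and every STFT/mel setting is an assumption that must match whatever the model was trained on.

```python
# Minimal sketch: rough Griffin-Lim reconstruction from a saved mel-spectrogram.
# ASSUMPTIONS: log-compressed 80-band mel saved as .npy, 22050 Hz,
# n_fft=1024, hop_length=256, win_length=1024; the input path is hypothetical.
import librosa
import numpy as np
import soundfile as sf

log_mel = np.load("outdir/sample_mel.npy")  # hypothetical path to a generated mel
mel = np.exp(log_mel)                       # undo the log compression
audio = librosa.feature.inverse.mel_to_audio(
    mel, sr=22050, n_fft=1024, hop_length=256, win_length=1024
)
sf.write("sample_griffin_lim.wav", audio, 22050)
```

The result will sound much worse than a neural vocoder's output; it is only meant to reveal gross mismatches in the mel parameters.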
@misc{lee2021robust_fine_grained_prosody_control,
author = {Lee, Keon},
title = {Robust_Fine_Grained_Prosody_Control},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/keonlee9420/Robust_Fine_Grained_Prosody_Control}}
}
Related repos:
WaveGlow: faster-than-real-time flow-based generative network for speech synthesis.
nv-wavenet: faster-than-real-time WaveNet.
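For neural vocoding of the generated mels, one option is the pretrained WaveGlow that NVIDIA publishes on PyTorch Hub. The sketch below assumes that hub entry point, a CUDA device, and a hypothetical mel file whose representation matches what the pretrained WaveGlow expects; none of this is part of this repository.

```python
# Minimal sketch (ASSUMPTIONS: NVIDIA's WaveGlow entry point on PyTorch Hub, a CUDA GPU,
# and a mel stored in the scale the pretrained WaveGlow was trained on).
import numpy as np
import torch

waveglow = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub", "nvidia_waveglow")
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow = waveglow.to("cuda").eval()

# Hypothetical mel file; expected shape is (1, n_mels, frames).
mel = torch.from_numpy(np.load("outdir/sample_mel.npy")).unsqueeze(0).float().cuda()
with torch.no_grad():
    audio = waveglow.infer(mel)  # waveform tensor of shape (1, num_samples)
```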
This implementation uses code from the following repos: NVIDIA/Tacotron-2 and KinglittleQ/GST-Tacotron.
We are thankful to the paper authors, especially Younggun Lee and Taesu Kim.