Fine-tuning / adding a new speaker to existing TTS/WaveRNN models

Is it possible to add a new dataset and a new speaker to the existing pre-trained TTS and WaveRNN models, and retrain the model for this speaker? If so, how?

It is possible. You can use the universal vocoder model released here: https://github.com/erogol/WaveRNN

To fine-tune the TTS model for your speaker, this might help:

Basically, you need to format your dataset in the LJSpeech layout and resume training from one of the released models. Good luck.
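
A minimal sketch of what "format your dataset as LJSpeech" means in practice: a `wavs/` folder plus a pipe-separated `metadata.csv`. The file names, folder names, and `samples` list below are hypothetical placeholders, and you may also need to resample your audio to the sample rate the released model was trained with (typically 22050 Hz for LJSpeech-based models):

```python
import shutil
from pathlib import Path

# Hypothetical input: replace with your own (wav_path, transcript) pairs.
samples = [
    ("recordings/clip_0001.wav", "Hello world."),
    ("recordings/clip_0002.wav", "A second sentence."),
]

out_dir = Path("my_dataset")
(out_dir / "wavs").mkdir(parents=True, exist_ok=True)

with open(out_dir / "metadata.csv", "w", encoding="utf-8") as meta:
    for i, (wav_path, text) in enumerate(samples, start=1):
        file_id = f"newspeaker_{i:04d}"
        # Copy the audio into wavs/ under the same id used in metadata.csv.
        shutil.copy(wav_path, out_dir / "wavs" / f"{file_id}.wav")
        # LJSpeech metadata rows are pipe-separated: id|raw text|normalized text
        meta.write(f"{file_id}|{text}|{text}\n")
```

With the data in this layout, resuming from a released checkpoint is usually a matter of pointing the training script at that checkpoint (for example via the `--restore_path` argument of `train.py` in the TTS repo); check the repo's documentation for the exact flags in your version.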

@erogol, is there a Colab notebook showing how to resume training of one of the released models on a new dataset?