When will a TTS model be trained on Mozilla Common Voice data? It would be great to generate a TTS voice based on the users who have contributed to that open source project.
The prosody, volume, quality, intonation, etc. of the Common Voice dataset are highly varied. You might be able to train a multi-speaker model with it, but it's not ideal for a higher-quality model.
DeepSpeech benefits from that dataset more than TTS does.
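As a rough illustration of one way around that variation: you could filter the corpus down to a single well-validated speaker before training a single-speaker TTS model. The sketch below is hypothetical (`select_speaker_clips` is not a real Common Voice tool), but the column names (`client_id`, `path`, `up_votes`, `down_votes`) match the `validated.tsv` format in the public Common Voice releases:

```python
import csv
from collections import Counter

def select_speaker_clips(tsv_path, min_up_votes=2):
    """Pick clips from the single most-represented speaker in a
    Common Voice validated.tsv, keeping only well-validated clips."""
    with open(tsv_path, newline="", encoding="utf-8") as f:
        rows = [r for r in csv.DictReader(f, delimiter="\t")
                if int(r["up_votes"]) >= min_up_votes
                and int(r["down_votes"]) == 0]
    # Keep only the speaker with the most surviving clips: a single
    # consistent voice sidesteps the prosody/volume variation problem.
    counts = Counter(r["client_id"] for r in rows)
    if not counts:
        return []
    top_speaker, _ = counts.most_common(1)[0]
    return [r["path"] for r in rows if r["client_id"] == top_speaker]
```

Even then, one contributor rarely records enough hours for a good single-speaker voice, which is part of why the dataset suits ASR (DeepSpeech) better than TTS.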
Thanks for the reply @baconator!