Currently, HifiGAN is my personal favorite on your vocoder comparison page with respect to the combination of inference speed and voice quality. I am waiting for the final results from @monatis regarding Multiband MelGAN.
Regarding the breathing: I like it in short sentences, as it gives your voice an additional natural touch. In long sentences it seems like too much; I never thought about how many breaths we actually take when speaking.
P.S.: Is the HifiGAN model already available somewhere so that we can “play” with it?
I’ve just added version 3 of my “thorsten” dataset. It’s based on v02, but the speed has been increased by 10%. Trained TTS models will generate a slightly faster (but still natural) speech flow.
It took longer and was more difficult to speak emotionally on non-emotional (or wrongly emotional) phrases, but it’s done.
Now @dkreutz is doing his audio optimization magic. Once he’s done, I’ll publish the “Thorsten emotional dataset”.
Always keep in mind that I’m no professional voice actor, just a normal guy contributing his voice.
Just in case it’s interesting for you: I’ve created a Twitter account for my German voice contribution, where I plan to post new models, news, and updates around the “Thorsten” dataset.
@Erogol from Coqui released my first trained open German TTS model.
It consists of:
Tacotron2 DCA model (based on “Thorsten” dataset)
WaveGrad vocoder
The WaveGrad vocoder has a bad real-time factor (RTF) on CPU and an acceptable RTF on CUDA. Next, I’m training a Fullband-MelGAN vocoder to get a better RTF for use with the Mycroft voice assistant.
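For context, the real-time factor is simply the time spent generating audio divided by the duration of that audio; RTF below 1.0 means faster than real time. A minimal sketch (the timing numbers below are hypothetical examples, not measured WaveGrad values):

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """RTF = generation time / duration of the generated audio.
    RTF < 1.0 means the vocoder runs faster than real time."""
    return processing_seconds / audio_seconds

# Hypothetical illustration: needing 12 s to generate 4 s of audio
# gives RTF 3.0 (too slow for a voice assistant), while 0.8 s for
# the same 4 s of audio gives RTF 0.2 (comfortably real-time).
print(real_time_factor(12.0, 4.0))  # → 3.0
print(real_time_factor(0.8, 4.0))   # → 0.2
```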
Want to give it a try?
pip install -U tts
tts --model_name tts_models/de/thorsten/tacotron2-DCA --text "Was geht, was geht, ich sags dir ganz konkret." --use_cuda=true
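If you prefer driving this from a script, the same CLI call can be wrapped in Python. This is just a sketch: `--out_path` is the standard Coqui TTS CLI flag for writing the result to a wav file, and `thorsten_demo.wav` is an example filename I picked, not from the post.

```python
import shutil
import subprocess

# The same synthesis call as the CLI example above, assembled for subprocess.
# Model name and text are taken from the post; the output filename is an
# arbitrary example.
cmd = [
    "tts",
    "--model_name", "tts_models/de/thorsten/tacotron2-DCA",
    "--text", "Was geht, was geht, ich sags dir ganz konkret.",
    "--out_path", "thorsten_demo.wav",
]

# Only run when the tts CLI is actually installed (pip install -U tts).
if shutil.which("tts") is not None:
    subprocess.run(cmd, check=True)
```

Add `--use_cuda=true` to the argument list as in the CLI example if you have a CUDA-capable GPU.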
For updates on new models, check my Twitter account (x.com).
I recently gave a public talk at TensorFlow Turkey on “How to make machines talk with your voice”.
If you’re interested in the steps I took to record my dataset and train a TTS model of my voice, you might want to take a look. In addition, I mention some mistakes I made and lessons I learned.
I just released v02 of my EMOTIONAL dataset.
Now included are “drunk” and “whispering”. Details, samples, and the download link are available on my GitHub page:
I recorded a practical walk-through screencast video on the process of creating your own TTS voice.
It starts with preparing a text corpus for recording and goes all the way to synthesizing the voice.
Since I have a passion for TTS, I thought: why shouldn’t I share my thoughts, mistakes, and lessons learned with the community? Nothing cool or fancy, just a little bit of tech talk.