Hi again, I've been trying to train this whole week, but unfortunately everything has been overfitting (plateauing early on, with no improvement after a day). I have been training Tacotron, first on LibriTTS 100 and then on LibriTTS 360, female speakers only. I tried Forward Attention at first (all three switches enabled in the config), and bidirectional decoding at some point, but unfortunately it craps out because of memory. I am now trying Graves with forward attention enabled as well. Should I give up on Tacotron altogether and try Taco2? I cannot figure out why it is overfitting; I thought it would surely do okay.
Overfitting is not a huge deal with TTS. Just check the audio quality by listening to the samples.
When you use Graves attention, forward attention is disabled by default.
Ah okay, thanks, I had a suspicion. It's just that even after a day, all I get in the test samples is static noise, and even teacher forcing in eval does not look great (plus the alignment score plateaus after 2k steps and does not improve at all).
If it is not the alignment, then something else might be broken as well.
Right, so, as I said, I ended up training Tacotron (on the master branch) on all the female speakers of LibriTTS-360. At first I tried to use Forward Attention, but the only mechanism that worked was Graves. I tried bidirectional decoding, but sadly it didn't work (it kept throwing memory errors). I also needed to disable self.attention.init_win_idx() in layers/tacotron.py, because otherwise it refused to synthesize. I am at approximately 103k steps now. What I have gathered:
- I have to restart training every two days, because the attention drops.
- In general, Graves is a very good mechanism.
- I don't know if it is due to the multispeaker nature, but at seemingly random intervals the alignment drops extremely low and then gradually recovers.
- At 103k steps, the model is somewhat able to synthesize in different voices; however, punctuation like commas breaks it (it does not read past the comma). I am uploading syntheses with speaker 1 and speaker 14 as the speaker flag.
I don't have any test spectrograms, because I just started retraining. Again, my goal is more about producing a novel speaker using embeddings. I will let it train more and see how it goes, then use external embeddings I have extracted with the speaker encoder and feed those in instead of the speaker embedding layer. If the model I am training turns out to be good, I would like to contribute it to the TTS project page on git, along with the changes needed for loading your own embeddings.
Any thoughts? I wonder if I can use this model to start a training session with forward attention, or batch normalization, after a certain number of steps. I also left r at 7, because gradual training is enabled, so I didn't think changing it would do anything.
As always, thanks for all the work! Really happy if I can contribute in any way.
samples.zip (104.2 KB)
For a multi-speaker model, it takes longer to show reasonable performance. I could only see it work well enough after 700K iterations, so maybe it is better to be patient. You might also train another model by fine-tuning a pretrained LJSpeech model; that might make the problem easier.
I tried to do it on a pretrained LJSpeech model, but it said that “as of now, you cannot introduce new speakers to an already trained model”.
Yes, you cannot do that, but you can initialize a new model partially with the matching layers of the LJSpeech model. So if you pass the model with the --restore_path flag, it will load into your new model all the layers whose shapes match.
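For what it's worth, that kind of partial initialization can be sketched in plain PyTorch like this (a hypothetical helper, not the project's actual loading code; the checkpoint is assumed to store its weights under a "model" key):

```python
import torch

def load_matching_layers(model, checkpoint_path):
    """Copy into `model` only the pretrained tensors whose names and shapes match."""
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    pretrained_state = checkpoint["model"]  # assumption: weights live under "model"
    model_state = model.state_dict()
    # Keep only tensors that exist in the new model with the same shape.
    matched = {
        name: tensor
        for name, tensor in pretrained_state.items()
        if name in model_state and model_state[name].shape == tensor.shape
    }
    model_state.update(matched)
    model.load_state_dict(model_state)
    return sorted(matched)  # names of the transferred layers
```

Layers that changed shape (for example, anything touched by the new speaker embedding) simply keep their fresh initialization, which is why this works for adding speakers on top of a single-speaker checkpoint.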
@georroussos how did you integrate speaker vectors to the model? Do you have your code somewhere? I can check if everything looks alright.
Aha, but that is what I tried. Then I checked the code and saw that, if the multispeaker embeddings option is enabled in the config, it checks whether I also passed --restore_path, and if I did, it checks the .json file. But I will definitely try again. Which model would you recommend? I think the one trained with ForwardAttn and fine-tuned with BN would be a good candidate. But would I keep training it with BN? And would I also keep its config file?
I integrated speaker embeddings by editing Tacotron2 in models/tacotron2.py. I changed the condition to if num_speakers > 0 (I know it is redundant), then initialized a torch.FloatTensor variable containing my embeddings. I created a lookup table with torch.nn.Embedding.from_pretrained(weight) and froze the layer with self.speaker_embedding.weight.requires_grad = False. Something like this:
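A rough reconstruction of the snippet described above (the dummy values and the 256-dimensional size are placeholders; the real vectors come from the speaker encoder):

```python
import torch

# Stand-in for externally extracted speaker vectors,
# shape: (num_speakers, embedding_dim).
external_embeddings = [[0.1] * 256, [0.2] * 256]
weight = torch.FloatTensor(external_embeddings)

# Build a frozen lookup table from the pretrained vectors.
speaker_embedding = torch.nn.Embedding.from_pretrained(weight)
speaker_embedding.weight.requires_grad = False  # keep the vectors fixed

# At inference, the speaker id selects a row of the table.
vec = speaker_embedding(torch.LongTensor([0]))  # embedding for speaker 0
```

Note that `from_pretrained` already freezes the weights by default, so the explicit `requires_grad = False` line is belt-and-braces.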
Then I think I fine-tuned the LJSpeech model for a while to include the embeddings (or not, I really do not remember), and selected them at inference time with --speaker_id 0. It is a hacky way, but the embeddings did load and did change the prosody, as we saw.
Maybe you can fork it and push your changes to GitHub so we can collaborate.
You can take the latest released model but train it using just location-sensitive attention and the normal prenet (not BN). If it trains well, then you can switch to BN, but I'd suggest using forward attention only for inference.
I’d be super glad! I will fork now and start working on it.
Which one is the latest model? Is it Taco2 with Graves?
You can probably disable that assert for your run until we find a better check.
The error is unrelated; it is a MemoryError. You can see the problem better if you run training on the CPU.
Hi everyone,
Pardon the tardiness, but it has been a hectic time for me. I thought I would drop some updates on multispeaker.
First of all, I have not been able to get Tacotron (junior) to work at all. I do not know if it is my datasets, but it just refuses to align. Tacotron2, on the other hand, seems to do much better in terms of alignment. I have been trying some things here and there, but I still do not have a large enough dataset to get good results, and I do not have the time or the resources to train on open-source English datasets, as they are of no use to me and multispeaker training seems to require a lot of time.
It seems that a sequence limit of approximately 80 characters is what I can get away with on an NVIDIA K80 GPU. Anything longer and CUDA runs out of memory.
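A minimal sketch of enforcing such a limit while preparing training metadata (the pipe-separated `wav|text` line format and the helper name are assumptions, not the project's actual filtering code; if the config exposes something like a max sequence length option, that is the cleaner route):

```python
MAX_CHARS = 80  # empirical limit for a K80 in this setup

def filter_metadata(lines, max_chars=MAX_CHARS):
    """Drop any utterance whose transcript exceeds the character budget."""
    kept = []
    for line in lines:
        wav, text = line.strip().split("|", 1)
        if len(text) <= max_chars:
            kept.append(line.strip())
    return kept
```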
Fictitious voices are a go. I trained a dual-speaker TTS, and combining the speaker embeddings of both voices gives a mixed voice that does not sound robotic and resembles the voice of the speaker dominant in the dataset, but is not identical to it.
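One way to sketch that blending (a hypothetical helper, not the code used above; a linear interpolation is shown because it keeps the embedding dimensionality the model's lookup expects, whereas raw concatenation would double it):

```python
import torch

def blend_speakers(emb_a, emb_b, alpha=0.5):
    """Linear mix of two speaker embeddings; alpha weights speaker A."""
    return alpha * emb_a + (1 - alpha) * emb_b
```

Sweeping `alpha` between 0 and 1 gives a family of intermediate voices, with values near 1 pulling the result toward speaker A.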
No experiments on GST yet, either. I am trying to implement it on Taco2.
In general, the trend I observe is that data and its quality are probably the most critical factors. If anyone has anything to add, please do. Cheers!
PS: Mozilla TTS is the best implementation of Taco2 out there.