Final results LPCNet + Tacotron2 (Spanish)

audio-archivo-156579483968273.zip (164,8 KB)

I’m able to synthesize from your training file, so your training format is correct. The issue may be transcription or audio quality, i.e. wrong transcriptions or empty audio, like the last one you removed.

The last things I removed were audios with sentences that were too long; I used erogol’s notebook and removed them. Even the removed audios that did contain audio could not be processed, and I don’t know what the cause could be. After that, I had already trained Tacotron without LPCNet, and this was the result.

Could it be that I resumed from a training run that was saved before I deleted the long sentences?

I think the problem is resuming training with different files. For now, I have started a new training from scratch.

I’ll discuss the attention plots later, once the two training runs reach the same number of steps.

Ok, let’s wait.

Yes, you need to delete the model trained with the wrong sentences.

Hello @carlfm01
In fact, my attention plot doesn’t look like it used to. This is the current one.

But the audio still has the same noise as before.

Can you share the generated features so I can test? It looks like they need silence trimming at the end.
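For the trailing-silence trimming mentioned here, a minimal NumPy sketch (the threshold value is an arbitrary assumption; `librosa.effects.trim` is a common ready-made alternative):

```python
import numpy as np

def trim_trailing_silence(samples, threshold=1e-3):
    """Drop everything after the last sample whose magnitude exceeds the threshold."""
    loud = np.nonzero(np.abs(samples) > threshold)[0]
    if loud.size == 0:
        return samples[:0]          # the whole signal is silence
    return samples[: loud[-1] + 1]  # keep up to the last loud sample
```

Applied to a synthesized waveform before feeding it onward, this removes the padded silence the vocoder would otherwise render.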

@carlfm01 This is an audio synthesized with tacotron, and processed with LPCNet.

https://transfer.sh/HvMt2/test-out.wav

Sounds good, but it’s hard to tell whether it needs more training from just 3 words. Can you share a longer audio?

And the issue? How did you fix it?

Now I’m trying to adapt a new voice with just 3 h of data, using the model pretrained on the two old voices (Tux and Epachuko) for 10k steps.
3h.zip (346,7 KB)

The model still needs more training; once I reach at least about 25 thousand steps, I will start synthesizing longer sentences.

The noisy audio was generated by Tacotron during evaluation. Those audios are still produced with the same noise.

I appreciate all your support; all the merit is yours.

@carlfm01 Have you tried to freeze the model?

Yes, you need to use Tacotron_model/inference/add as the output node name.
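For reference, a minimal sketch of freezing with TF1’s `freeze_graph` tool, using the output node named above; the checkpoint paths are placeholders and assume a TF1-era checkpoint with a matching `.meta` file:

```shell
# Freeze a Tacotron checkpoint into a single .pb graph.
# Paths are placeholders; adjust to your checkpoint directory.
python -m tensorflow.python.tools.freeze_graph \
  --input_meta_graph=./checkpoints/tacotron_model.ckpt-55000.meta \
  --input_checkpoint=./checkpoints/tacotron_model.ckpt-55000 \
  --input_binary=true \
  --output_graph=./frozen_tacotron.pb \
  --output_node_names=Tacotron_model/inference/add
```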


@carlfm01 Could we talk through an email or other means,please?

Use the DM of the forum?

hi @carlfm01! I was trying to run synthesize.py from your Tacotron-2 fork using your checkpoints, but it looks like the Tacotron checkpoints are broken for me. Here is what I did:

  1. Fork your repo carlfm01/Tacotron-2
  2. Put the checkpoints from GDrive to a local ./checkpoint01 folder
  3. Run tacotron-synthesize using all the default args (mode=eval, model=Tacotron and so on) and adding some example sentences
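For concreteness, step 3 might look like the following; the flag names follow the upstream Tacotron-2 synthesize.py and are assumptions that may differ in the fork:

```shell
# Hypothetical invocation of synthesize.py in eval mode with the
# downloaded checkpoints; --text_list points at a file of test sentences.
python synthesize.py --model=Tacotron --mode=eval \
  --checkpoint=./checkpoint01 \
  --text_list=sentences.txt
```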

The checkpoint loads correctly:

Loading checkpoint: ./checkpoints01/tacotron_model.ckpt-55000
INFO:tensorflow:Restoring parameters from ./checkpoints01/tacotron_model.ckpt-55000

But then I get some missing-variable errors:

    NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

2 root error(s) found.
  (0) Not found: Key Tacotron_model/inference/decoder/Location_Sensitive_Attention/attention_bias_1 not found in checkpoint
	 [[node save_5/RestoreV2 (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
  (1) Not found: Key Tacotron_model/inference/decoder/Location_Sensitive_Attention/attention_bias_1 not found in checkpoint
	 [[node save_5/RestoreV2 (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
	 [[GroupCrossDeviceControlEdges_0/save_5/restore_all/_8]]

This is a full minimal repro notebook of what I am trying to do: https://colab.research.google.com/drive/1Ys6oWXIRUnGDYUVWYppiJXFOOUTHT-JN

Hello @Solbiati_Alessandro, those checkpoints are old; please try with https://drive.google.com/file/d/1JSC0jbdnOi4igCYTnDBdMGXIsp2VeKj9/view and the newspanish branch. For LPCNet, the old one will work.


Hello @carlfm01. Thank you very much for your detailed tutorial steps. However, I am not sure why it is necessary to copy the LPCNet-extracted features (the .f32 files) into the audio folder of the Tacotron training data (steps 5 and 7 of your summary). Surely Tacotron only converts from text to MFCCs?

No, for LPCNet we need to train Tacotron on the real features extracted by the LPCNet feature extractor; that’s why you need to put the extracted features into the audio directory.
Once Tacotron is trained, you can predict LPC features from text, and we feed those into LPCNet to generate the actual .wav for the predicted features.
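To illustrate, a small sketch of loading one of those raw .f32 feature dumps with NumPy. The per-frame feature count of 55 is an assumption based on the LPCNet dump format of that era and may differ in your build:

```python
import numpy as np

NB_FEATURES = 55  # assumption: LPCNet-era frame width; check your LPCNet version

def load_lpcnet_features(path, nb_features=NB_FEATURES):
    """Read a raw float32 feature dump and reshape it to (frames, nb_features)."""
    feats = np.fromfile(path, dtype=np.float32)
    assert feats.size % nb_features == 0, "file length is not a multiple of the frame width"
    return feats.reshape(-1, nb_features)
```

These arrays are what Tacotron is trained to predict in place of mel spectrograms.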

Thank you.

What about training LPCNet? You suggest using the same training data as with Tacotron. However, a single audio file takes 10 minutes to process with dump_data and produces 4 GB of files…
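The size blow-up is not surprising: dump_data emits both a float32 feature file and 16-bit PCM data, and it can run many augmentation passes over the input. A rough back-of-envelope sketch; the 10 ms frame size, 16 kHz rate, and 55-feature frame width match LPCNet-era defaults, while the pass count is an assumption you should check against your dump_data invocation:

```python
# Rough estimate of dump_data output size for a given audio duration.
# Assumptions (verify against your LPCNet build): 16 kHz mono input,
# 10 ms frames, 55 float32 features per frame, N augmentation passes.

def dump_data_size_bytes(seconds, passes=1, nb_features=55):
    frames = int(seconds * 100)               # 10 ms per frame -> 100 frames/s
    feature_bytes = frames * nb_features * 4  # float32 features (.f32)
    pcm_bytes = int(seconds * 16000) * 2      # 16-bit PCM samples
    return passes * (feature_bytes + pcm_bytes)

# One hour of audio with a single pass:
print(dump_data_size_bytes(3600) / 1e6, "MB")  # ≈ 194 MB per pass
```

At roughly 200 MB per pass per hour, a multi-pass augmented run over a long recording plausibly reaches several GB.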

Hello carlosfm, thanks for your contribution. I am trying to test in Google Colab and I get this error; how do I fix it?
/tensorflow-1.15.0/python3.6/tensorflow_core/python/training/saving/saveable_object_util.py in op_list_to_dict(op_list, convert_variable_to_tensor)
    291       if name in names_to_saveables:
    292         raise ValueError("At least two variables have the same name: %s" %
--> 293                          name)
    294       names_to_saveables[name] = var
    295

ValueError: At least two variables have the same name: Tacotron_model/Tacotron_model/inference/decoder/Location_Sensitive_Attention/attention_bias/Adam
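That ValueError usually means the graph was built twice in the same process (for example, by re-running a Colab cell), so two variables end up registered under one name; restarting the runtime, or calling `tf.compat.v1.reset_default_graph()` before rebuilding, typically clears it. The failing check is essentially this de-duplication (a simplified pure-Python sketch, not the TensorFlow source):

```python
def names_to_saveables(ops):
    """Map variable names to variables, rejecting duplicates like the TF saver does."""
    mapping = {}
    for name, var in ops:
        if name in mapping:
            raise ValueError("At least two variables have the same name: %s" % name)
        mapping[name] = var
    return mapping
```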

Hello carlfm, thank you very much for sharing your work. I am new to this topic and I would like to know how to use your model (55k steps) on the new branch (newspanish) to synthesize sentences in Spanish, because the old model (47.5k) returns audio with only noise. Thanks a lot.
