Model training is stuck at the 1st epoch

Phoneme cache computation is stuck during the first epoch.

Not very much to go on there really, is there?

What could be the possible reason? Is it an issue with espeak?

You do realise you’ve said practically nothing about your situation, don’t you? I recall your previous issues were overly brief too.

Since the phoneme computation is slow, the results are cached in a folder during the very first epoch. This phoneme caching only happens for around 500 of the 5.5 k audio files, which is why the model gets stuck at the 1st epoch. Could you please give me some pointers as to why the phoneme caching is not happening for some of the audio transcriptions?

Difficult to assist and diagnose remotely, but I would look through the output from your initial run to see if there are any errors that could be related.

If you no longer have that output, maybe try removing the cache files completely and running it again.
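It may also help to work out exactly which items never produced a cache entry. A minimal sketch, assuming the cache folder holds one file per utterance named `<utterance_id>_phoneme.npy` (the naming convention in your setup may differ, so adjust the suffix accordingly):

```python
from pathlib import Path

def find_uncached(utterance_ids, cache_dir, suffix="_phoneme.npy"):
    """Return the utterance ids that have no phoneme cache file yet."""
    # Collect the ids that already have a cache file on disk.
    cached = {p.name[: -len(suffix)] for p in Path(cache_dir).glob(f"*{suffix}")}
    return [u for u in utterance_ids if u not in cached]
```

Running this over the ids from your metadata file should narrow the problem down to specific records rather than "some of the 5.5 k files".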

The first thing to check is whether there could be some issue with the transcript text for the item it stops on. It might also be some other issue relating to that record, e.g. the audio file is not present, is zero size, or has some kind of format issue: basically anything that would mean the process couldn’t get past that particular one.

On the transcription text, you could try running it through espeak-ng directly, and/or via the functions within the code that produce the phonemes, to see what it yields.
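For the direct route, espeak-ng can be called from Python to phonemize a single line; a sketch, assuming `espeak-ng` is installed and on your PATH (the voice name is just an example):

```python
import subprocess

def phonemize(text, voice="en-us"):
    """Ask espeak-ng for the IPA phonemes of a line of text."""
    # -q: quiet (no audio output); --ipa: print IPA phonemes; -v: voice/language
    result = subprocess.run(
        ["espeak-ng", "-q", "--ipa", "-v", voice, text],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()
```

If a particular transcript makes this hang or error out, that line is very likely the one the caching step is choking on.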
