Stop loss jumps during gradual training

I’ve noticed a pattern where the stop (stopnet) loss jumps discontinuously whenever the number of decoder frames per step (the reduction factor r) and the batch size are decreased during training. Has anyone else seen this?
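For context, this is the kind of schedule I mean — a minimal sketch assuming a Mozilla-TTS-style `gradual_training` config of `[start_step, r, batch_size]` triples (the values here are illustrative, not my exact settings):

```python
# Sketch of a gradual-training schedule: each entry is
# [start_step, r (decoder frames per step), batch_size].
# Illustrative values only.
gradual_training = [
    [0,      7, 64],
    [10000,  5, 64],
    [50000,  3, 32],
    [130000, 2, 32],
    [290000, 1, 32],
]

def current_schedule(step, schedule=gradual_training):
    """Return the (r, batch_size) in effect at a given global step."""
    r, batch_size = schedule[0][1], schedule[0][2]
    for start, new_r, new_bs in schedule:
        if step >= start:
            r, batch_size = new_r, new_bs
    return r, batch_size
```

The jumps line up with the boundaries where r and the batch size step down.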

As far as I can tell, it doesn’t noticeably diminish inference quality (note that the alignment and decoder loss scores aren’t affected much), but I was curious whether there’s some explanation for it.
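For reference, here is my rough understanding of what the stopnet loss measures — a minimal sketch (not the library’s exact code) assuming a binary cross-entropy over per-decoder-step stop flags. Note that the number of decoder steps, and hence the stop-target statistics, depend on r:

```python
import torch

def stopnet_loss(stop_logits, mel_lengths, r):
    """stop_logits: [batch, T] raw stop scores, one per decoder step.
    mel_lengths: [batch] utterance lengths in spectrogram frames.
    r: reduction factor (frames emitted per decoder step)."""
    # Decoder length depends on r, so the 0/1 balance of the stop
    # targets shifts whenever r changes mid-training.
    decoder_lengths = (mel_lengths + r - 1) // r  # ceil(mel_len / r)
    steps = torch.arange(stop_logits.size(1)).unsqueeze(0)  # [1, T]
    # Target is 1 from the last decoder step of each utterance onward.
    targets = (steps >= (decoder_lengths - 1).unsqueeze(1)).float()
    return torch.nn.functional.binary_cross_entropy_with_logits(
        stop_logits, targets)
```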

Is your r equal to 1? Maybe that’s why. I had the same problem, but in my case I think it was because I forgot to turn off the transliteration switch while training in a language with special characters, so it may have messed with the stopnet as well. Have you trimmed the silences?
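In case it’s useful, this is the quick check I’d run for untrimmed silences — a sketch using librosa’s standard trim, where `sample.wav` is a placeholder for one of your training files and `top_db` usually needs tuning:

```python
import librosa

y, sr = librosa.load("sample.wav", sr=None)
# Trim leading/trailing silence below the top_db threshold.
y_trimmed, _ = librosa.effects.trim(y, top_db=40)
print(f"trimmed {len(y) - len(y_trimmed)} of {len(y)} samples")
```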

Alternatively, if your r is 2, try training at r = 3 a bit longer before stepping down.

Have you checked how it performs in practice?