I had imported the dataset one more time, but with my edited alphabet, which contains (ä, ü, ö); that was my mistake.
I found that out by testing different alphabets: (32,) is the number of letters in my edited alphabet version and (29,) the number in the standard version.
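For anyone else comparing alphabet sizes, here is a minimal sketch of counting the labels in an alphabet.txt. It assumes the standard DeepSpeech file format (one character per line, lines starting with `#` are comments); the path is just an example.

```python
from pathlib import Path

def alphabet_size(path):
    """Count the labels in a DeepSpeech-style alphabet.txt.

    Assumes one character per line, '#' lines are comments, and the
    file ends with a trailing newline (so the last split entry is empty).
    """
    lines = Path(path).read_text(encoding="utf-8").split("\n")
    return sum(1 for ln in lines[:-1] if not ln.startswith("#"))
```

Running this on both files should print 29 for the standard alphabet and 32 for the edited one, confirming where the shape difference comes from.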
And I haven't had this error again, but then came this error:
ValueError: Alphabet cannot encode transcript "ich hoffe es" while processing sample "/media/sf_de/clips/common_voice_de_21632146.wav", check that your alphabet contains all characters in the training corpus. Missing characters are: [' ', ' '].
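Those "missing" characters look like plain spaces, which suggests they may be invisible look-alikes (e.g. a non-breaking space, U+00A0, that slipped into the transcript). A hedged sketch for revealing exactly which code points occur in a transcript but not in the alphabet; the function name and inputs are mine, not part of DeepSpeech:

```python
def missing_chars(transcript, alphabet_chars):
    """Return characters in `transcript` that are absent from the
    alphabet, paired with their Unicode code points so that invisible
    characters become distinguishable."""
    missing = sorted(set(transcript) - set(alphabet_chars))
    return [(ch, "U+%04X" % ord(ch)) for ch in missing]
```

For example, `missing_chars("ich\u00a0hoffe es", "".join("abcdefghios "))` would report `('\xa0', 'U+00A0')`, a non-breaking space that renders identically to the ordinary space already in the alphabet.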
I am also facing the same error, related to an output layer shape mismatch.
This is how I am trying to replace the Hindi alphabet file with the English one and provide a pre-trained DeepSpeech model for initialization.
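If I understand the setup correctly, a shape mismatch is expected whenever the checkpoint was trained against a different alphabet: the final (softmax) layer has one unit per alphabet character plus one CTC blank, so its shape is tied to the alphabet length. A small sketch of the arithmetic (the sizes 29 and 32 are just illustrative):

```python
def output_layer_units(alphabet_size):
    # CTC decoding needs one extra "blank" label on top of the alphabet,
    # so the output layer width is alphabet_size + 1.
    return alphabet_size + 1

# A checkpoint built for a 29-letter alphabet has a 30-unit output layer
# and cannot be loaded unchanged into a graph built for 32 letters
# (33 units). Swapping alphabet.txt alone does not fix this; the usual
# approach is to reinitialize the mismatched final layer(s), e.g. via
# DeepSpeech's transfer-learning flag --drop_source_layers.
```

So the alphabet file and the checkpoint have to agree, or the mismatched layers have to be dropped and retrained.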