Hello,
I used a KenLM language model built from vocabulary.txt (the text of the transcriptions) for Nepali to train one DeepSpeech model, and I built another model without using any language model.
The latter does not seem to work at all.
Does this mean DeepSpeech uses the language model during training as well?
If so, can I also use a different language model for inference besides the one used for training?
As I said, it’s not used during training, but it is used for inference (that’s what it’s for), which includes the final test epoch at the end. You can reuse the same checkpoint/model with and without the language model, or with different language models. The training phase itself does not depend on the LM.
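As a minimal sketch of what that means in practice, here is how the same exported model can be decoded without a language model, with one, and with a different one, all from the same acoustic model. This assumes the DeepSpeech 0.7+ Python package (older releases use `enableDecoderWithLM` with an lm.binary/trie pair instead of a scorer file); the file names `output_graph.pbmm`, `nepali.scorer`, `other_domain.scorer`, and `sample_nepali.wav` are hypothetical placeholders, not files from this thread.

```python
import wave
import numpy as np
import deepspeech

def read_wav(path):
    # DeepSpeech expects 16-bit, 16 kHz, mono PCM audio.
    with wave.open(path, 'rb') as w:
        return np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

audio = read_wav('sample_nepali.wav')          # hypothetical test clip
model = deepspeech.Model('output_graph.pbmm')  # exported acoustic model (hypothetical name)

# 1. Plain acoustic-model decoding, no language model at all.
print(model.stt(audio))

# 2. Same model, decoding with a scorer (e.g. one built from vocabulary.txt).
model.enableExternalScorer('nepali.scorer')    # hypothetical scorer name
print(model.stt(audio))

# 3. Same model again, swapping in a different scorer.
model.disableExternalScorer()
model.enableExternalScorer('other_domain.scorer')
print(model.stt(audio))
```

The point is that the scorer is purely a decode-time component: nothing in the checkpoint changes between the three calls.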
The language model is not the cause of the discrepancy; something else is different in your training procedure. Like I said, the LM is not used during training.