Hi @lissyx, I’ve got the same question:
I have just changed this in native_client/python/client.py:

# The alpha hyperparameter of the CTC decoder. Language Model weight.
LM_ALPHA = 0
# The beta hyperparameter of the CTC decoder. Word insertion bonus.
LM_BETA = 0
and afterwards used different LMs, like:
python native_client/python/client.py --model training_accurate/export_dir3/output_graph.pb --alphabet deepspeech-0.5.0/model/alphabet.txt --audio $FILE1 --lm lm_trie_vocab/lm.binary --trie lm_trie_vocab/trie --extended
and
python native_client/python/client.py --model training_accurate/export_dir3/output_graph.pb --alphabet deepspeech-0.5.0/model/alphabet.txt --audio $FILE1 --lm lm/lm.binary --trie lm/trie --extended
and I got different outputs. How can that happen if the LM weight is zero? Is there another way to disable the LM entirely? It doesn't seem to be as easy to exclude the scorer here as it is in DeepSpeech.py, because the client uses the LM binary directly?! Thanks already
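What I was hoping would work (an assumption on my part, based on the idea that the client only enables the LM when the LM flags are given) is simply omitting the LM arguments so the decoder runs as a plain CTC beam search without any scorer:

```shell
# Assumption: without --lm and --trie, client.py never enables the LM decoder,
# so this should produce scorer-free output. Please correct me if that's wrong.
python native_client/python/client.py --model training_accurate/export_dir3/output_graph.pb --alphabet deepspeech-0.5.0/model/alphabet.txt --audio $FILE1 --extended
```

Is that the intended way to do it, or does the decoder still need an LM binary internally?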