Edit: it shows up earlier in the output rather than at the end; that turned out to be a display issue on my own screen.
Any advice would be appreciated!
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
You gave no LM filename; --lm expects one.
Your statement is unclear, and you don't show the full console output. Also, please share it as text, not as an image.
The code in evaluate.py is the same code that runs on the test set during training. It prints the overall WER on the dataset and then shows the 10 worst examples.
Unless you can show bad WER with the LM enabled, this is not a bug; it is expected behaviour.
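For context, the "general WER" reported above is word-level edit distance normalized by the reference length. A minimal sketch of that metric (a hypothetical helper, not the actual evaluate.py implementation, which also batches and tracks per-sample loss):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitute = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(d[i - 1][j] + 1,  # deletion
                          d[i][j - 1] + 1,  # insertion
                          substitute)
    return d[-1][-1] / len(ref)
```

Computing this per sample and sorting in descending order is enough to produce a "worst examples" report like the one evaluate.py prints.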
(original poster)
So sorry, the latter seems to be my own issue; I'll edit that out later. About the LM: I don't need any language model when decoding, so what filename should I give?
lissyx
Decoding without an LM will always produce poor performance.
(original poster)
This is the new command:

./evaluate.py --n_hidden 2048 --checkpoint_dir ./deepspeech-0.6.1-models --test_files ../../librispeech/librivox-test-clean.csv --lm_binary_path --lm_trie_path --beam_width 1

I still get the same error.
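For reference, both --lm_binary_path and --lm_trie_path expect a file path as an argument, and in the command above they are left empty, which matches the "no LM filename" complaint earlier in the thread. Assuming the standard deepspeech-0.6.1 models package (which ships lm.binary and trie alongside the checkpoint files), a complete invocation would look something like:

./evaluate.py --n_hidden 2048 --checkpoint_dir ./deepspeech-0.6.1-models --test_files ../../librispeech/librivox-test-clean.csv --lm_binary_path ./deepspeech-0.6.1-models/lm.binary --lm_trie_path ./deepspeech-0.6.1-models/trie --beam_width 1

The exact filenames depend on where the model package was extracted, so adjust the paths to match your layout.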