Loading model from file /mnt/d/allModels/UA/F04/F04_incomplete_output_graph.pb
TensorFlow: v1.12.0-10-ge232881
DeepSpeech: v0.4.1-0-g0e40db6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-05-17 15:15:20.720751: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Loaded model in 0.242s.
Loading language model from files models/lm.binary models/trie
Loaded language model in 0.22s.
Running inference.
(the inference output is just a blank line)
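(Side note on the mmap warning above: TensorFlow's convert_graphdef_memmapped_format tool does that conversion – something like the line below, if I have the flags right.)

convert_graphdef_memmapped_format --in_graph=/mnt/d/allModels/UA/F04/F04_incomplete_output_graph.pb --out_graph=/mnt/d/allModels/UA/F04/F04_incomplete_output_graph.pbmm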
Note that I’m trying this on the training set itself, so the accuracy should be higher than usual. I ran quite a few examples and kept getting blank output. Is it because I need more data or more training iterations?
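The loop I was running is roughly this (the wav directory is just one example from my layout):

# run every clip in one speaker directory through the same model + LM
for f in /mnt/d/UA_Data/implementation/tester/train/tester/a/*.wav; do
  echo "== $f"
  deepspeech --model /mnt/d/allModels/UA/F04/F04_incomplete_output_graph.pb --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio "$f"
done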
Update: I removed the lm.binary and trie arguments, and I’m getting better results – I guess those pre-built binaries didn’t really work with my vocabulary set. Marking “Fixed” for now
(no trie, with binary) deepspeech --model /mnt/d/allModels/UA/tester/tester_output_graph.pb --alphabet models/alphabet.txt --audio /mnt/d/UA_Data/implementation/tester/train/tester/a/a_0.wav --lm models/lm.binary
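For comparison, dropping both arguments (which is what actually gave me the better results) looks like:

(no trie, no binary) deepspeech --model /mnt/d/allModels/UA/tester/tester_output_graph.pb --alphabet models/alphabet.txt --audio /mnt/d/UA_Data/implementation/tester/train/tester/a/a_0.wav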
Does this mean I need to generate my own trie/lm.binary? (I also feel like this is something that could be generated as part of the training process, but maybe that’s not viable.)
In my experience, you should generate your own trie/lm.binary if you have modified alphabet.txt. The model’s output is indexed by line in the alphabet file, so a pre-generated trie/lm.binary won’t match if the alphabet doesn’t line up!
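If you do end up regenerating the trie, the tool is generate_trie from native_client. I believe the 0.4.x invocation is roughly the line below, but the argument list has changed between releases, so run it with no arguments to see the exact usage for your build:

generate_trie models/alphabet.txt models/lm.binary models/trie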
I didn’t edit the alphabet file, however. I also tried generating my own trie, but I’m having trouble making the binary. Guess I’ll keep trying on that
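For reference, what I’m attempting follows KenLM’s usual two steps (corpus.txt here is a stand-in for whatever text covers my vocabulary, and the kenlm path is just where I built it):

# step 1: train a 5-gram ARPA model from a plain-text corpus, one sentence per line
~/kenlm/build/bin/lmplz --order 5 --text corpus.txt --arpa words.arpa
# note: on a tiny corpus lmplz can abort on discount estimation; --discount_fallback is supposed to get past that
# step 2: convert the ARPA file into the binary format DeepSpeech loads as lm.binary
~/kenlm/build/bin/build_binary words.arpa lm.binary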