Hello Lissyx,
Thanks for your help and the quick responses!!
I have a question; please let me know whether this is the right place to ask it, or if I should open a new topic for it.
When I ran the decoder with the default model on a ~4 s .wav audio file, inference took ~38 s.
(deepspeech-venv) [centerstage@localhost DeepSpeech]$ deepspeech …/models/output_graph.pb …/hiroshima-1.wav …/models/alphabet.txt …/models/lm.binary …/models/trie
Loading model from file …/models/output_graph.pb
2018-02-27 17:26:13.741657: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Loaded model in 0.469s.
Loading language model from files …/models/lm.binary …/models/trie
Loaded language model in 2.297s.
Running inference.
on a bright cloud less morning
Inference took 38.391s for 4.620s audio file.
Why is it taking this long?
How can I improve the speed?
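
In case it helps to narrow this down, I can also time just the stt() call from Python, to separate the model/LM load time from the actual inference time. Here's a rough sketch of what I'd run; it assumes the 0.1.x-era deepspeech.model.Model API, with the constant values copied from the bundled client.py, and the paths are placeholders rather than my real ones:

```python
# Timing sketch: assumes the DeepSpeech 0.1.x Python bindings
# (deepspeech.model.Model) and the default constants from the
# bundled client.py. Paths are placeholders.
import time
import scipy.io.wavfile as wav
from deepspeech.model import Model

N_FEATURES = 26   # MFCC features per time step (0.1.x default)
N_CONTEXT = 9     # context frames on each side (0.1.x default)
BEAM_WIDTH = 500
LM_WEIGHT = 1.75
WORD_COUNT_WEIGHT = 1.00
VALID_WORD_COUNT_WEIGHT = 1.00

# Load the acoustic model and enable the KenLM language model + trie.
ds = Model('models/output_graph.pb', N_FEATURES, N_CONTEXT,
           'models/alphabet.txt', BEAM_WIDTH)
ds.enableDecoderWithLM('models/alphabet.txt', 'models/lm.binary',
                       'models/trie', LM_WEIGHT, WORD_COUNT_WEIGHT,
                       VALID_WORD_COUNT_WEIGHT)

# Read the 16 kHz mono 16-bit wav and time only the inference call.
fs, audio = wav.read('hiroshima-1.wav')
start = time.time()
print(ds.stt(audio, fs))
print('Inference took %.3fs' % (time.time() - start))
```

That should tell me whether the 38 s is really all spent inside stt() or partly elsewhere.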