- Have I written custom code (as opposed to running examples on an unmodified clone of the repository) : No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04) : Ubuntu running in a virtual machine on a Windows host
- TensorFlow installed from (our builds, or upstream TensorFlow) : pip3 install tensorflow==1.12.0
- TensorFlow version (use command below) : ‘v1.13.1-0-g6612da8951’ 1.12.0
- Python version : 3.6
- Bazel version (if compiling from source) : N/A
- GCC/Compiler version (if compiling from source) : N/A
- CUDA/cuDNN version : N/A
- GPU model and memory : N/A
Exact command to reproduce :
deepspeech --model /mnt/d/allModels/UA/F04/F04_incomplete_output_graph.pb --alphabet models/alphabet.txt --audio /mnt/d/UA_Data/implementation/F04/train/F04/a/a_0.wav --lm models/lm.binary --trie models/trie
Loading model from file /mnt/d/allModels/UA/F04/F04_incomplete_output_graph.pb
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-05-17 15:15:20.720751: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Loaded model in 0.242s.
Loading language model from files models/lm.binary models/trie
Loaded language model in 0.22s.
(the inference output is a blank line)
Note that I'm even testing on the training set, so accuracy should be higher than usual. I ran quite a few examples and kept getting blank output. Could this be because I need more data or more training iterations?
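Before retraining, it may be worth ruling out an audio-format mismatch: DeepSpeech's default feature pipeline expects 16 kHz, 16-bit, mono PCM WAV, and a file in another format can decode to an empty transcript with no error. Below is a minimal sketch (standard library only, not part of DeepSpeech) that reports a WAV file's parameters; the demo file name is a placeholder, and you would point it at a real clip such as the `a_0.wav` above.

```python
import struct
import wave

# Assumption: the model was trained with DeepSpeech's default pipeline,
# which expects 16 kHz, 16-bit (2-byte), mono PCM audio.
EXPECTED_RATE, EXPECTED_WIDTH, EXPECTED_CHANNELS = 16000, 2, 1

def check_wav(path):
    """Return (sample_rate, sample_width_bytes, channels, matches_expectation)."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        width = w.getsampwidth()
        channels = w.getnchannels()
    ok = (rate, width, channels) == (EXPECTED_RATE, EXPECTED_WIDTH, EXPECTED_CHANNELS)
    return rate, width, channels, ok

# Demo on a synthetic 10 ms silent clip; substitute the real path,
# e.g. /mnt/d/UA_Data/implementation/F04/train/F04/a/a_0.wav
demo = "demo_16k_mono.wav"
with wave.open(demo, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(struct.pack("<h", 0) * 160)  # 160 silent samples

result = check_wav(demo)
print(result)  # -> (16000, 2, 1, True)
```

If the rate, width, or channel count differs, resampling the clips (e.g. with sox) before inference would be the first thing to try; if they already match, the model itself is the more likely cause.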