Before attempting to cross-compile, I want to make sure I can compile natively on my local machine and that everything works. I was able to compile successfully (without AVX support) and to import the Python modules (earlier I used to get an illegal-instruction error, because `pip install deepspeech` installs a build that requires AVX support).
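For anyone hitting the same illegal-instruction crash: on Linux you can check whether your CPU actually advertises AVX before deciding between the prebuilt pip wheel and a no-AVX source build. A minimal sketch (assumes a Linux `/proc/cpuinfo`; the flag names are the standard kernel ones):

```shell
# Look for the "avx" flag in the kernel's CPU feature list.
# Prebuilt DeepSpeech wheels assume AVX; without it they die
# with "illegal instruction", so a no-AVX build is needed.
if grep -qw avx /proc/cpuinfo; then
    echo "AVX supported: prebuilt pip wheel should run"
else
    echo "AVX not supported: build DeepSpeech from source without AVX"
fi
```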
However, I am getting garbage output.
python3 client.py --model …/data/models/output_graph.pbmm --alphabet …/data/models/alphabet.txt --lm …/data/models/lm.binary --trie …/data/models/trie --audio ./LDC93S1.wav
Loading model from file …/data/models/output_graph.pbmm
TensorFlow: v1.12.0-rc2-11-gbea86c1e88
DeepSpeech: v0.4.0-alpha.0-69-g50d62b8
Loaded model in 0.0198s.
Loading language model from files …/data/models/lm.binary …/data/models/trie
Loaded language model in 0.649s.
Running inference.
te ee i vy sn m u wuw rh mh ki py ay k jih th r zo l ghn axt bags r ogy ng n lu rr b’o yoh up ghm ihy dm mc pez o l rr spu piw ze bahr ayi uor qe
Inference took 7.323s for 2.925s audio file.
I downloaded the trie and lm.binary from current master. I even tried downloading them from v0.4.0-alpha.0, but got a similar result.
As mentioned above, this could be due to a version mismatch between the model files and the client.
The deepspeech binary also runs, but produces similar output:
./deepspeech --model …/data/models/output_graph.pbmm --alphabet …/data/models/alphabet.txt --lm …/data/models/lm1.binary --trie …/data/models/trie1 --audio ./LDC93S1.wav
TensorFlow: v1.12.0-rc2-11-gbea86c1e88
DeepSpeech: v0.4.0-alpha.0-69-g50d62b8
te ee i vy sn m u wuw rh mh ki py ay k jih th r zo l ghn axt bags r ogy ng n lu rr b’o yoh up ghm ihy dm mc pez o l rr spu piw ze bahr ayi uor qe