I am getting a “Segmentation Fault” error on both, shortly after it starts processing.
Here is an example output:
-bash-4.2$ deepspeech --model models/output_graph.pb --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio file.wav
Loading model from file models/output_graph.pb
TensorFlow: v1.12.0-10-ge232881
DeepSpeech: v0.4.1-0-g0e40db6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2020-01-22 09:21:45.252368: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: FMA
Loaded model in 0.365s.
Loading language model from files models/lm.binary models/trie
Loaded language model in 0.181s.
Running inference.
After this line comes the dreaded “Segmentation Fault”
Any idea what causes it or how to solve it?
Thanks.
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
2
Without more information, it’s not actionable, and 0.4.1 is an old release now; we can’t really do anything.
What more info can I add? I do not see any log file or any output to screen.
Also, I am using 0.4.1 because the Spanish model was trained with 0.4.1. I don’t think I can use the latest version unless I retrain the Spanish model with it, can I?
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
4
You’re right.
A gdb stack trace, for a start, though you might need to rebuild with debug info …
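A minimal sketch of how such a stack trace could be captured, assuming a Linux shell with gdb installed; the binary name and arguments are taken from the transcript earlier in the thread:

```shell
# Allow core dumps in the current shell, in case you prefer post-mortem debugging
ulimit -c unlimited

# Or run the crashing command directly under gdb: `run` starts it,
# `bt` prints the backtrace once it hits the segfault
gdb -ex run -ex bt --args \
  deepspeech --model models/output_graph.pb --alphabet models/alphabet.txt \
             --lm models/lm.binary --trie models/trie --audio file.wav
```

If the frames come out as `??`, the binary was likely built without debug symbols, which is what the rebuild suggestion above is about.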
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
5
$ ./deepspeech --model output_graph.pb --alphabet alphabet.txt --audio ../test-alex.en.wav -t
TensorFlow: v1.12.0-10-ge232881
DeepSpeech: v0.4.1-0-g0e40db6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2020-02-03 13:12:21.783527: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
su saz e ser lo del rojello
cpu_time_overall=1.54403
$ ./deepspeech --model output_graph.pb --alphabet alphabet.txt --lm lm.binary --trie trie --audio ../test-alex.en.wav -t
TensorFlow: v1.12.0-10-ge232881
DeepSpeech: v0.4.1-0-g0e40db6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2020-02-03 13:12:37.165594: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
se ha de ser lo del rollo
cpu_time_overall=1.19909
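Incidentally, the “reading entire model file into memory” warning in both transcripts refers to TensorFlow’s memmapped-graph converter; a rough sketch of its use, assuming the tool has been built from the matching TensorFlow tree (check the exact flag names against the tool’s own help):

```shell
# Convert the protobuf graph into a memory-mappable one; output_graph.pbmm
# is the conventional DeepSpeech name for the converted file
convert_graphdef_memmapped_format \
  --in_graph=output_graph.pb \
  --out_graph=output_graph.pbmm
```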
Thanks @lissyx for the effort.
The differences between our setups are:
I am using Red Hat 7 while you are using Ubuntu 19.
It appears that I am using a far larger file than you.
Do you think these might be related?
Thanks
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
7
Without at least a gdb stack trace, we can only waste our time. IMHO, given the amount of data this model was trained on, a much more efficient use of your time would be to just retrain on the current codebase.
With 108 hours of data, and with CUDNN and mixed precision enabled, that’s going to be pretty fast (depends on your hardware, though).
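A rough sketch of the suggested retraining invocation, assuming a recent (0.7-era) DeepSpeech checkout; the CSV and checkpoint paths are placeholders, and the flag names should be verified against that release’s training docs:

```shell
# Hypothetical dataset paths; --train_cudnn and --automatic_mixed_precision
# are the CUDNN / mixed-precision options alluded to above
python3 DeepSpeech.py \
  --train_files data/es/train.csv \
  --dev_files   data/es/dev.csv \
  --test_files  data/es/test.csv \
  --alphabet_config_path data/es/alphabet.txt \
  --train_cudnn \
  --automatic_mixed_precision \
  --checkpoint_dir checkpoints/es
```

Retraining against the current codebase also removes the 0.4.1 version pin, so the resulting Spanish model would run on the latest native client.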