Sir,
I will let you know how everything goes, following what Lyssix told me. The problem with your solution is that I don’t have an Nvidia GPU. I have an AMD, so I can’t use the GPU command. However, I appreciate your help and I thank you!
This time I can’t even train.
So what I did:
What I get:
Checking the documentation: this downloaded the latest master, which is now based on TensorFlow r1.11, while 0.2.0 is r1.6. That explains the symbol not found.
So I need to do:
in order to get the right native client? I’m sorry for asking so much, but I’ve been stuck for a while.
Yes. Also, please try to use proper code formatting; otherwise it’s painful to read, and we can miss information that gets interpreted as message formatting.
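If it helps, a sketch of pinning the checkout to the 0.2.0 release before fetching the native client (the `util/taskcluster.py` helper and its flags are assumptions based on the DeepSpeech repository layout; verify against the native client docs for your revision):

```shell
# Pin the DeepSpeech checkout to the 0.2.0 release tag, so the
# native client matches the TensorFlow r1.6 based binaries
cd DeepSpeech
git fetch --tags
git checkout v0.2.0

# Fetch the prebuilt native client for that release
# (helper path and flags assumed; see the native_client README)
python3 util/taskcluster.py --branch v0.2.0 --target native_client/
```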
It seems to be working, but it takes a lot of memory to launch the inferences. I’m going to run some tests over the next two weeks and I will keep in touch with you.
However, I would like to thank all of you for your help!
Can you be more precise?
Well, I gave it about 11 GB of RAM, and when I launched the inferences the VM froze and I had to shut it down. I gave it 2 more GB and it works, but that seems like a lot.
Strange, I just verified: valgrind --tool=massif reports heap allocation of ~650 MB.
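For reference, that measurement can be reproduced along these lines (the model and audio paths below are placeholders for your own files):

```shell
# Record heap allocations of one inference run into massif.out
valgrind --tool=massif --massif-out-file=massif.out \
  deepspeech --model models/output_graph.pbmm \
             --alphabet models/alphabet.txt \
             --audio HF1.wav

# Summarize the snapshots; the peak snapshot shows the maximum
# total heap, which is what the ~650 MB figure refers to
ms_print massif.out | head -n 30
```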
Not in my case, it went up really fast. But I will give you feedback from my future tests.
How do I make two different virtual envs?
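One way to keep two releases apart is a separate virtualenv per version; a minimal sketch (the directory names are just examples):

```shell
# One isolated environment per DeepSpeech release, so the 0.2.0
# and 0.4.1 Python packages never share site-packages
python3 -m venv ~/ds-0.2.0
python3 -m venv ~/ds-0.4.1

# Activate one, install the matching wheel, then deactivate
. ~/ds-0.4.1/bin/activate
pip install deepspeech==0.4.1
deactivate
```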
I am working on DeepSpeech 0.4.1
tensorflow-gpu==1.12.0
python 3.6
All requirements are installed. I have downloaded the pretrained model and am using Common Voice audio to check it.
This is what I am getting; I couldn’t find a solution to this problem.
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio HF1.wav
Loading model from file models/output_graph.pbmm
TensorFlow: v1.6.0-18-g5021473
DeepSpeech: v0.2.0-0-g009f9b6
2019-01-28 18:24:10.033574: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Invalid argument: No OpKernel was registered to support Op 'Pack' with these attrs. Registered devices: [CPU], Registered kernels:
[[Node: lstm_fused_cell/stack_1 = Pack[N=2, T=DT_INT32, axis=1](input_lengths, lstm_fused_cell/range_1)]]
Traceback (most recent call last):
File "/home/rc/.local/bin/deepspeech", line 11, in <module>
sys.exit(main())
File "/home/rc/.local/lib/python2.7/site-packages/deepspeech/client.py", line 81, in main
ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
File "/home/rc/.local/lib/python2.7/site-packages/deepspeech/__init__.py", line 14, in __init__
raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 3
Kindly help me. Thank you!
Looks like you are using 0.2.0 binaries; those cannot work with a 0.4.1 model.
Can you guide me on how to fix it?
Thank you!
I had been working on DeepSpeech 0.2.0, but I stopped using it because decoded results were not separated by spaces, due to a CTC decoder problem in 0.2.0.
I downloaded DeepSpeech 0.4.1 in a different directory. Please help me change the binaries that are causing the problem.
Thank you!!!
I’m not sure I can fix your system for you; there’s nothing magic here: set up a new virtualenv if you are using the Python binaries, and pip install. It’s all documented.
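A minimal sketch of that check, assuming a pip-installed wheel and the 0.4.1 model from this thread:

```shell
# Check which deepspeech version the active environment resolves to;
# it must match the model you downloaded (0.4.1 here)
pip show deepspeech | grep '^Version:'

# If it reports 0.2.0, replace it inside this environment
pip install --upgrade deepspeech==0.4.1
```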
It’s resolved; my audio file was not in the required directory. Everything else is working perfectly.
Thank you!
Which TensorFlow version should we use? I cloned DeepSpeech from this link
and cloned Mozilla’s TensorFlow v1.14 from here:
git clone https://github.com/mozilla/tensorflow.git
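The clone generally needs to be pinned to the branch your DeepSpeech revision expects (r1.14 per the post above; the branch name is an assumption here, so confirm it against the native client docs of your DeepSpeech checkout before building):

```shell
git clone https://github.com/mozilla/tensorflow.git
cd tensorflow
# Switch to the release branch matching the DeepSpeech checkout
git checkout r1.14
```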