Error when running inference on an audio file

OK, that’s my fault: I forgot to tell you that the WAV argument has been moved to the end :). So try: ./deepspeech output_graph.pb alphabet.txt ../testing_sampling/out.wav

@lissyx Thanks it works now.

Just curious: what is this version you are talking about? Is it the DeepSpeech version?
Also, does the C++ binary work differently from the Python one?

No, the C++ binary is mostly equivalent to what you have in Python. The version I’m referring to is the one embedded in the packages/tarball: the TensorFlow version we used to build.

So the Python project is built with 1.5, is it?
But the native client is not updated for Python?

No, it’s about what has been released to PyPI/npm versus what we have on TaskCluster. The latest released version is v0.1.1, based on r1.4, but newer binaries are available from TaskCluster. PyPI/npm will be updated when we do a new release.

Got it. Thanks for patiently helping me out :slight_smile:

@lissyx I face a similar problem.

Just this morning I created a new topic (Pip install deepspeech throws error while serving model trained on TF 1.6), and later stumbled upon this topic… I linked this topic there.

Can you please make the Python deepspeech wheel corresponding to the latest TaskCluster binaries available? I need to use it in deepspeech-server internally.

Thank you very much!

I don’t understand your request; the wheels are also provided on TaskCluster: https://tools.taskcluster.net/index/project.deepspeech.deepspeech.native_client.master/cpu
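For example, installing the wheel from those artifacts could look like this (the exact wheel filename varies by platform and Python version, so check the index page; the filename below is illustrative):

```shell
# Download a Python wheel built from latest master (CPU) from the
# TaskCluster index, then install it into the active virtualenv.
# NOTE: the artifact filename here is an example only -- use the one
# listed on the index page for your platform and Python version.
wget https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.master.cpu/artifacts/public/deepspeech-0.1.1-cp27-cp27mu-manylinux1_x86_64.whl
pip install --upgrade deepspeech-0.1.1-cp27-cp27mu-manylinux1_x86_64.whl
```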

It’s still versioned as 0.1.1, but it’s the latest master.

I had the same problem, but now when I run it, this is the terminal output:

Loading model from file output_graph.pb
2018-04-04 11:57:19.772386: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Loaded model in 0.271s.
Loading language model from files 5-gram.binary quz_trie
Loaded language model in 0.699s.
Running inference.
2018-04-04 11:57:23.228952: E tensorflow/core/framework/op_segment.cc:53] Create kernel failed: Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,4096], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"]. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
2018-04-04 11:57:23.229242: E tensorflow/core/common_runtime/executor.cc:643] Executor failed to create kernel. Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,4096], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"]. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,4096], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
Error running session: Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,4096], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"]. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,4096], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
None
Inference took 2.494s for 1.117s audio file.

Please help me…

Please document how you get this error, because using latest master builds from https://tools.taskcluster.net/index/project.deepspeech.deepspeech.native_client.master/cpu with the v0.1.1 model works here.

@lissyx, thanks a lot for pointing me there. It works!!!

I have the same problem.
I trained a new model without errors, but when I try to classify I get the following error:

(deepspeech-venv) oscar@ubuntuDS:~$ deepspeech ~/Documentos/SpeechToText/Resultados/model_export/output_graph.pb ~/Documentos/SpeechToText/AudioFiles/khz16/SmallAudioFiles/s1_stream0.wav001.wav ~/Documentos/SpeechToText/alphabet_pt.txt
Loading model from file /home/oscar/Documentos/SpeechToText/Resultados/model_export/output_graph.pb
2018-04-10 21:18:29.318900: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Loaded model in 0.050s.
Running inference.
2018-04-10 21:18:29.653385: E tensorflow/core/framework/op_segment.cc:53] Create kernel failed: Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"](bidirectional_rnn/bw/bw/TensorArrayUnstack/strided_slice). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
2018-04-10 21:18:29.653513: E tensorflow/core/common_runtime/executor.cc:643] Executor failed to create kernel. Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"](bidirectional_rnn/bw/bw/TensorArrayUnstack/strided_slice). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"](bidirectional_rnn/bw/bw/TensorArrayUnstack/strided_slice)]]
Error running session: Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"](bidirectional_rnn/bw/bw/TensorArrayUnstack/strided_slice). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"](bidirectional_rnn/bw/bw/TensorArrayUnstack/strided_slice)]]
None
Inference took 0.280s for 78.755s audio file.
(deepspeech-venv) oscar@ubuntuDS:~$  

I read the whole post and got the client with:

(deepspeech-venv) oscar@ubuntuDS:~/Documentos/SpeechToText$ wget -O - https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.master.cpu/artifacts/public/deepspeech-0.1.1.tgz | tar xvfz - 

I also checked the versions of everything:

(deepspeech-venv) oscar@ubuntuDS:~/DeepSpeech$ pip list
Package           Version    
----------------- -----------
absl-py           0.1.13     
astor             0.6.2      
backports.weakref 1.0rc1     
beautifulsoup4    4.6.0      
bleach            1.5.0      
boost             0.1        
bs4               0.0.1      
certifi           2018.1.18  
chardet           3.0.4      
decorator         4.2.1      
deepspeech        0.1.1      
enum34            1.1.6      
funcsigs          1.0.2      
futures           3.2.0      
gast              0.2.0      
grpcio            1.10.1     
html5lib          0.9999999  
idna              2.6        
Markdown          2.6.11     
Mastodon.py       1.2.2      
mock              2.0.0      
numpy             1.14.2     
pandas            0.22.0     
pbr               4.0.1      
pip               10.0.0b2   
pkg-resources     0.0.0      
protobuf          3.5.2.post1
python-dateutil   2.7.2      
pytz              2018.3     
pyxdg             0.26       
requests          2.18.4     
scipy             1.0.1      
setuptools        39.0.1     
six               1.11.0     
SQLAlchemy        1.2.6      
tensorboard       1.6.0      
tensorflow        1.6.0      
termcolor         1.1.0      
Unidecode         1.0.22     
urllib3           1.22       
Werkzeug          0.14.1     
wheel             0.31.0     

I don’t know where the error is; could you help me, please?

Oscar, you have the answer just above: you trained with TensorFlow v1.6.0 and then tried running inference with the DeepSpeech v0.1.1 binaries, which are based on TensorFlow 1.4. This is the exact same issue as earlier.

@lissyx
Thanks for your response. I see my mistake; I was left with the wrong idea that it was the 1.6.0 version.
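The rule of thumb at play here can be sketched as a tiny hypothetical helper (this is not TensorFlow’s real compatibility check, just the heuristic this thread keeps running into): a graph exported by a newer TF may carry NodeDef attrs, like identical_element_shapes, that an older runtime rejects.

```python
def graph_compatible(training_tf, runtime_tf):
    # Hypothetical heuristic, not TensorFlow's actual check:
    # a graph exported with `training_tf` generally loads only in
    # binaries built against a TF at least as new, because newer TF
    # can emit NodeDef attrs unknown to older runtimes.
    parse = lambda version: tuple(int(part) for part in version.split(".")[:2])
    return parse(training_tf) <= parse(runtime_tf)

print(graph_compatible("1.6.0", "1.4.0"))  # False -- Oscar's mismatch
print(graph_compatible("1.4.0", "1.6.0"))  # True  -- old graph, newer runtime
```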
Oscar

Can you confirm it works? I’m inclined to think we should open an issue to deal with that properly: have some way to verify compatibility. Likely we should just dig into TensorFlow’s internals…?

@lissyx
Thanks for your concern.
Unfortunately I can’t confirm or deny that changing the version solved the problem, because I’m facing a new error that prevents training the model.
As you know from my recent post, it looks like the libctc_decoder_with_kenlm.so installed using util/taskcluster.py came from a native client built with a TF version different from the one I just installed (1.4).
I’m sorry, but I’m a little lost here.
I don’t know how I ended up with DeepSpeech binaries based on TF 1.4 and a native client based on another version of TF, since I just followed the instructions here.
Moreover, the Training section states

Install the required dependencies using pip:

cd DeepSpeech
pip install -r requirements.txt

And requirements.txt lists tensorflow == 1.6.0

Should I upgrade/downgrade DeepSpeech binaries or change the native client or something else?

It’s all true, and if you switch to the tag v0.1.1 in the git tree, matching the release you downloaded, then you should see the proper instructions in the README, and requirements.txt should list a dependency on v1.4.0 :slight_smile:

Happy to take any suggestion on how we can smooth this process, because to me it’s all trivial :slight_smile:

Up to you: you can either downgrade your DeepSpeech checkout to v0.1.1 and then downgrade the TensorFlow install to 1.4.0 (AND use the matching libctc_decoder_with_kenlm.so, i.e. the one packaged with v0.1.1), or keep training with v1.6.0, but then you need to download the native_client.tar.xz and/or the Python/NodeJS packages from TaskCluster.
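Concretely, the two options could look something like this (paths, the tag, and the util/taskcluster.py invocation come from this thread; the exact flags may differ in your checkout):

```shell
# Option A: match everything to the v0.1.1 release (TensorFlow 1.4)
cd DeepSpeech
git checkout v0.1.1
pip install tensorflow==1.4.0
pip install -r requirements.txt
# ...and use the libctc_decoder_with_kenlm.so packaged with v0.1.1

# Option B: keep training on TensorFlow 1.6, and fetch latest-master
# inference binaries (native_client.tar.xz) from TaskCluster instead.
# The --target directory below is illustrative.
python util/taskcluster.py --target native_client/
```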

@lissyx
Thanks for your sincere answer!

If I manage to untangle the installation and tuning process, I promise to polish my notes and share them, in case they can help smooth the process even a little bit.

I’m sorry, but I don’t really understand where to look for the matching git tree you mention.

git checkout v0.1.1 should do it