Error during trie creation

$ sha1sum deepspeech libdeepspeech.so
b0455dd60674ad49d858a161902b37d7c87f4282 deepspeech
0c3f7496930eaed4759e8a78d05b99f965ef6615 libdeepspeech.so

What does this signify?

This command:
$ python …/util/taskcluster.py --arch osx --target .
Downloading https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.master.osx/artifacts/public/native_client.tar.xz
Downloading: 100%

This is the exact same command that I ran, and it downloads from the exact same URL in both cases. The only difference I see is that I used “python3” while you used “python”.

Will that make any difference?

No, it means you have downloaded the proper binaries. But I still don’t know exactly how you run your inference …

You can check for example this log from C++ tests running on OSX: https://taskcluster-artifacts.net/Xcn9cfUDSz2HEjc4uXfJeQ/0/public/logs/live_backing.log

There you will see output with TensorFlow and DeepSpeech versions given.

Okay. My bad.

This is the complete command and output from running inference:

Prafful’s MacBook Pro:DeepSpeech naveen$ deepspeech /Users/naveen/Downloads/DeepSpeech/results/model_export/output_graph.pb /Users/naveen/Downloads/DeepSpeech/TEST/engtext_3488.wav /Users/naveen/Downloads/DeepSpeech/alphabet.txt
Loading model from file /Users/naveen/Downloads/DeepSpeech/results/model_export/output_graph.pb
Loaded model in 0.090s.
Running inference.
2018-04-27 13:38:04.067366: E tensorflow/core/framework/op_segment.cc:53] Create kernel failed: Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0". (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
2018-04-27 13:38:04.067432: E tensorflow/core/common_runtime/executor.cc:643] Executor failed to create kernel. Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0". (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Error running session: Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0". (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
None
Inference took 0.269s for 7.920s audio file.

Please:

sha1sum `which deepspeech`

Prafful’s MacBook Pro:DeepSpeech naveen$ sha1sum `which deepspeech`
-bash: sha1sum: command not found
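(Side note: macOS does not ship sha1sum by default, which is why bash reports “command not found”; the preinstalled shasum tool produces the same SHA-1 digest. A minimal sketch, assuming deepspeech is on the PATH:)

# shasum is built into macOS and defaults to SHA-1, matching sha1sum output
shasum "$(which deepspeech)"
# alternatively, `brew install coreutils` provides the GNU tool as gsha1sum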

I used this to check the version of my TensorFlow:

python3 -c 'import tensorflow as tf; print(tf.__version__)'

Is there any method/command to check the version of my downloaded and running binaries?

fe649aafec251357ff8a1e14b3ed842e79d510c2 /anaconda3/bin/deepspeech

And bingo, you’re running the binary from the Python package v0.1.1 that you installed in anaconda3. Please use the proper path to your extracted deepspeech binary.
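(A quick way to confirm which binary the shell is actually picking up is sketched below; the extracted-binary path is an assumption based on the directories used later in this thread:)

# lists every deepspeech found on the PATH, in lookup order;
# the anaconda3 entry shadows the freshly extracted native-client binary
which -a deepspeech

# checksum the extracted binary by explicit path to compare against the expected hash
shasum /Users/naveen/Downloads/DeepSpeech/DeepSpeech/deepspeech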

Okay, I get it.

But where exactly do I specify the path? I use a run file to train the model, which looks like this:

#!/bin/sh
set -xe
if [ ! -f DeepSpeech.py ]; then
    echo "Please make sure you run this from DeepSpeech's top level directory."
    exit 1
fi;

python -u DeepSpeech.py \
  --train_files /Users/naveen/Downloads/DeepSpeech/train/train.csv \
  --dev_files /Users/naveen/Downloads/DeepSpeech/dev/dev.csv \
  --test_files /Users/naveen/Downloads/DeepSpeech/test/test.csv \
  --train_batch_size 80 \
  --dev_batch_size 80 \
  --test_batch_size 40 \
  --n_hidden 375 \
  --epoch 33 \
  --validation_step 1 \
  --early_stop True \
  --earlystop_nsteps 6 \
  --estop_mean_thresh 0.1 \
  --estop_std_thresh 0.1 \
  --dropout_rate 0.22 \
  --learning_rate 0.00095 \
  --report_count 100 \
  --use_seq_length False \
  --export_dir /Users/naveen/Downloads/DeepSpeech/results/model_export/ \
  --checkpoint_dir /Users/naveen/Downloads/DeepSpeech/results/checkout/ \
  --decoder_library_path /Users/naveen/Downloads/DeepSpeech/DeepSpeech/libctc_decoder_with_kenlm.so \
  --alphabet_config_path /Users/naveen/Downloads/DeepSpeech/alphabet.txt \
  --lm_binary_path /Users/naveen/Downloads/DeepSpeech/lm.binary \
  --lm_trie_path /Users/naveen/Downloads/DeepSpeech/trie \
  "$@"

Everywhere here I have given the paths correctly.

Also, from your explanation I realize that I installed the Python package v0.1.1 in anaconda3 some time in the past while trying different things. Should I uninstall that?

Please make an effort and pay attention: it is not the training that is problematic, it is the way you run inference. Just use the proper path when calling deepspeech. I don’t know your setup, so I cannot tell you more. Maybe ./deepspeech is enough, if it’s in your current working directory.

As discussed earlier, I downloaded the binaries correctly but was giving the wrong path when calling deepspeech.

So I gave the correct path:

Prafful’s MacBook Pro:DeepSpeech naveen$ /Users/naveen/Downloads/DeepSpeech/DeepSpeech/deepspeech /Users/naveen/Downloads/DeepSpeech/results/model_export/output_graph.pb /Users/naveen/Downloads/DeepSpeech/alphabet.txt /Users/naveen/Downloads/DeepSpeech/TEST/engtext_3488.wav

dyld: Library not loaded: @rpath/libsox.3.dylib
Referenced from: /Users/naveen/Downloads/DeepSpeech/DeepSpeech/deepspeech
Reason: image not found
Abort trap: 6

I tried to resolve it by looking through the Discourse threads but was not able to.

Do you know what I am doing wrong here?

Please install libsox using brew.

Okay, I installed ‘sox’ using ‘brew install sox’. When I tried ‘brew install libsox’, it said ‘Error: No available formula with the name “libsox”’ and ‘This similarly named formula was found:
libsoxr
To install it, run:
brew install libsoxr’

So I ran ‘brew install libsoxr’ as well.

I still got the exact same error: ‘dyld: Library not loaded: @rpath/libsox.3.dylib’

Then I ran ‘otool -L deepspeech’

and got:

deepspeech:
@rpath/libdeepspeech.so (compatibility version 0.0.0, current version 0.0.0)
@rpath/libdeepspeech_utils.so (compatibility version 0.0.0, current version 0.0.0)
@rpath/libsox.3.dylib (compatibility version 4.0.0, current version 4.0.0)
/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 307.5.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.60.2)

I understand from this that the binary links against libsox correctly, but there is some problem loading it.

libsoxr is not what you want; I think the proper package name for brew is sox. The output of otool -L does not help us here, because it does not work like ldd under Linux, which gives you the proper path. Check man dyld; there are some environment variables that you can use to debug what happens at runtime. I’m not a macOS user, I cannot really help you more.
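(For reference, man dyld documents DYLD_PRINT_LIBRARIES and DYLD_PRINT_RPATHS; a minimal debugging run, reusing the paths from the failing command earlier in the thread, might look like this:)

# print every library dyld loads (or fails to load) and how each @rpath entry
# is expanded while it tries to resolve libsox.3.dylib
DYLD_PRINT_LIBRARIES=1 DYLD_PRINT_RPATHS=1 \
  /Users/naveen/Downloads/DeepSpeech/DeepSpeech/deepspeech \
  /Users/naveen/Downloads/DeepSpeech/results/model_export/output_graph.pb \
  /Users/naveen/Downloads/DeepSpeech/alphabet.txt \
  /Users/naveen/Downloads/DeepSpeech/TEST/engtext_3488.wav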

I tried ‘man dyld’ but I was not able to resolve it using that.

@JanX2 Hey, will you be able to help me with this, as you had a similar issue here: https://github.com/mozilla/DeepSpeech/issues/1051

I also tried: ‘install_name_tool -change /Users/build-user/TaskCluster/LightTasks/1/tasks/task_1511961329/homebrew/opt/sox/lib/libsox.3.dylib @executable_path/libsox.3.dylib ./deepspeech’ but I still get the exact same error.
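(One possible follow-up, sketched under the assumption of a default Homebrew layout: list the LC_RPATH entries embedded in the binary, find where Homebrew actually put libsox.3.dylib, and add that directory as an rpath so @rpath/libsox.3.dylib can resolve.)

# show the rpath search directories baked into the binary
otool -l ./deepspeech | grep -A2 LC_RPATH

# locate the dylib that Homebrew's sox formula installed
find "$(brew --prefix)" -name 'libsox.3.dylib'

# add that directory as an additional rpath (this sox prefix is an assumption)
install_name_tool -add_rpath "$(brew --prefix sox)/lib" ./deepspeech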

I cannot help you more if you don’t check / document what the dynamic loader is doing. You need to read man dyld and run with the env vars that will show you what is loaded: I know they exist, but I don’t have a macOS. So please search a bit more.

Hey @lissyx, I think I am very close, and I did try different things as suggested. I have created the model and installed everything required to run inference. It’s just that I am not able to resolve this.

Can you suggest more things to try?

This is my recurring error while trying to run inference:
dyld: Library not loaded: @rpath/libsox.3.dylib

I cannot help you more than I did; you need to check where libsox.3.dylib is … and why dyld does not find it, but I already told you all I know.
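(A last hedged suggestion in the same spirit: pointing dyld at Homebrew's sox library directory for a single run can confirm whether a missing search path is the only problem. The sox prefix below is an assumption about the Homebrew install:)

# one-off run with an explicit library search path for dyld
DYLD_LIBRARY_PATH="$(brew --prefix sox)/lib" \
  /Users/naveen/Downloads/DeepSpeech/DeepSpeech/deepspeech \
  /Users/naveen/Downloads/DeepSpeech/results/model_export/output_graph.pb \
  /Users/naveen/Downloads/DeepSpeech/alphabet.txt \
  /Users/naveen/Downloads/DeepSpeech/TEST/engtext_3488.wav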