Error during trie creation

Okay. My bad.

This is the complete command and output from running inference:

Prafful’s MacBook Pro:DeepSpeech naveen$ deepspeech /Users/naveen/Downloads/DeepSpeech/results/model_export/output_graph.pb /Users/naveen/Downloads/DeepSpeech/TEST/engtext_3488.wav /Users/naveen/Downloads/DeepSpeech/alphabet.txt
Loading model from file /Users/naveen/Downloads/DeepSpeech/results/model_export/output_graph.pb
Loaded model in 0.090s.
Running inference.
2018-04-27 13:38:04.067366: E tensorflow/core/framework/op_segment.cc:53] Create kernel failed: Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0". (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
2018-04-27 13:38:04.067432: E tensorflow/core/common_runtime/executor.cc:643] Executor failed to create kernel. Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0". (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Error running session: Invalid argument: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0". (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: bidirectional_rnn/bw/bw/TensorArray_1 = TensorArrayV3clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=[?,750], identical_element_shapes=true, tensor_array_name="bidirectional_rnn/bw/bw/dynamic_rnn/input_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
None
Inference took 0.269s for 7.920s audio file.

Please:

sha1sum `which deepspeech`

Prafful’s MacBook Pro:DeepSpeech naveen$ sha1sum `which deepspeech`
-bash: sha1sum: command not found
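
As an aside: macOS does not ship a sha1sum binary by default, which is why bash reports "command not found" here. The bundled shasum tool computes the same SHA-1 hash; assuming deepspeech resolves on the PATH, an equivalent would be:

shasum $(which deepspeech)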

I used this to check the version of my TensorFlow:

python3 -c 'import tensorflow as tf; print(tf.__version__)'

Is there any method/command to check the version of my downloaded and running binaries?

fe649aafec251357ff8a1e14b3ed842e79d510c2 /anaconda3/bin/deepspeech

And bingo, you’re running the binary from python package v0.1.1 that you installed in anaconda3. Please use the proper path to your extracted deepspeech binary.
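
For instance, to compare what a bare deepspeech resolves to against the binary you actually extracted (the extracted path below is taken from your later messages, so treat it as an assumption):

which -a deepspeech
ls -l /Users/naveen/Downloads/DeepSpeech/DeepSpeech/deepspeech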

Okay. I get it.

But where exactly do I specify the path? I use a run file to train the model, which looks like this:

#!/bin/sh
set -xe
if [ ! -f DeepSpeech.py ]; then
    echo "Please make sure you run this from DeepSpeech's top level directory."
    exit 1
fi

python -u DeepSpeech.py \
  --train_files /Users/naveen/Downloads/DeepSpeech/train/train.csv \
  --dev_files /Users/naveen/Downloads/DeepSpeech/dev/dev.csv \
  --test_files /Users/naveen/Downloads/DeepSpeech/test/test.csv \
  --train_batch_size 80 \
  --dev_batch_size 80 \
  --test_batch_size 40 \
  --n_hidden 375 \
  --epoch 33 \
  --validation_step 1 \
  --early_stop True \
  --earlystop_nsteps 6 \
  --estop_mean_thresh 0.1 \
  --estop_std_thresh 0.1 \
  --dropout_rate 0.22 \
  --learning_rate 0.00095 \
  --report_count 100 \
  --use_seq_length False \
  --export_dir /Users/naveen/Downloads/DeepSpeech/results/model_export/ \
  --checkpoint_dir /Users/naveen/Downloads/DeepSpeech/results/checkout/ \
  --decoder_library_path /Users/naveen/Downloads/DeepSpeech/DeepSpeech/libctc_decoder_with_kenlm.so \
  --alphabet_config_path /Users/naveen/Downloads/DeepSpeech/alphabet.txt \
  --lm_binary_path /Users/naveen/Downloads/DeepSpeech/lm.binary \
  --lm_trie_path /Users/naveen/Downloads/DeepSpeech/trie \
  "$@"

Everywhere here I have given the path correctly.

Also, from your explanation I realize that I installed the Python package v0.1.1 in anaconda3 some time in the past while trying things out. Should I uninstall that?

Please make an effort and pay attention: it is not the training that is problematic, it is the way you run inference. Just use the proper path when calling deepspeech. I don't know your setup, so I cannot tell you more. Maybe ./deepspeech is enough, if it's in your current working directory.
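
For reference, if you do want to remove the old v0.1.1 package from anaconda3 so it no longer shadows the extracted binary, something along these lines should work, assuming it was installed with pip into that environment:

/anaconda3/bin/pip uninstall deepspeech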


As discussed earlier, I downloaded the binaries correctly, but I was giving the wrong path when calling deepspeech.

So I gave the correct path:

Prafful’s MacBook Pro:DeepSpeech naveen$ /Users/naveen/Downloads/DeepSpeech/DeepSpeech/deepspeech /Users/naveen/Downloads/DeepSpeech/results/model_export/output_graph.pb /Users/naveen/Downloads/DeepSpeech/alphabet.txt /Users/naveen/Downloads/DeepSpeech/TEST/engtext_3488.wav

dyld: Library not loaded: @rpath/libsox.3.dylib
Referenced from: /Users/naveen/Downloads/DeepSpeech/DeepSpeech/deepspeech
Reason: image not found
Abort trap: 6

I tried to resolve it by looking at the Discourse threads but was not able to.

Do you know what I am doing wrong here?

Please install libsox using brew.

Okay, I installed sox using 'brew install sox'. When I tried 'brew install libsox', it said 'Error: No available formula with the name "libsox"' and 'This similarly named formula was found:
libsoxr
To install it, run:
brew install libsoxr'

So I ran 'brew install libsoxr'.

I still got the exact same error: 'dyld: Library not loaded: @rpath/libsox.3.dylib'

Then I ran 'otool -L deepspeech'

and got:

deepspeech:
@rpath/libdeepspeech.so (compatibility version 0.0.0, current version 0.0.0)
@rpath/libdeepspeech_utils.so (compatibility version 0.0.0, current version 0.0.0)
@rpath/libsox.3.dylib (compatibility version 4.0.0, current version 4.0.0)
/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 307.5.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.60.2)

I understand from this that libsox is installed correctly, but there is some problem loading it.

libsoxr is not what you want. I think the proper package name for brew is sox. The output of otool -L does not help us here, because it does not work like ldd on Linux, which gives you the resolved path. Check man dyld; there are some environment variables you can use to debug what happens at runtime. I'm not a macOS user, so I cannot really help you more.
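
One quick sanity check, offered purely as a suggestion and not verified here, is to confirm that Homebrew's sox actually ships the dylib the binary is asking for:

ls -l "$(brew --prefix sox)/lib/" | grep libsox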

I tried 'man dyld' but was not able to resolve it using that.

@JanX2 Hey, will you be able to help me with this, as you had a similar issue here: https://github.com/mozilla/DeepSpeech/issues/1051

I also tried 'install_name_tool -change /Users/build-user/TaskCluster/LightTasks/1/tasks/task_1511961329/homebrew/opt/sox/lib/libsox.3.dylib @executable_path/libsox.3.dylib ./deepspeech', but I still get the exact same error.
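
Another avenue that might be worth a try, offered only as a guess and assuming Homebrew installed sox under its default prefix, is pointing dyld's search path at Homebrew's lib directory before re-running the same deepspeech command:

export DYLD_LIBRARY_PATH="$(brew --prefix sox)/lib:$DYLD_LIBRARY_PATH"

If the library is then found, that would narrow the problem down to the rpath embedded in the shipped binary.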

I cannot help you more if you don't check / document what the dynamic loader is doing. You need to read man dyld and run with the env vars that will show you what is loaded: I know they exist, but I don't have a macOS system. So please search a bit more.

Hey @lissyx, I think I am very close, and I did try different things as suggested. I have created the model and installed everything required to run inference. It's just that I am not able to resolve this.

Can you suggest more things to try?

This is my recurring error while trying to run inference:
dyld: Library not loaded: @rpath/libsox.3.dylib

I cannot help you more than I did. You need to check where libsox.3.dylib is … and why dyld does not find it, but I already told you all I know.

From https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man1/dyld.1.html, please give these a try:

  • DYLD_PRINT_LIBRARIES=1
  • DYLD_PRINT_LIBRARIES_POST_LAUNCH=1
  • DYLD_PRINT_RPATHS=1
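
For example, they can be set for a single run by prefixing the same command used earlier:

DYLD_PRINT_LIBRARIES=1 DYLD_PRINT_RPATHS=1 /Users/naveen/Downloads/DeepSpeech/DeepSpeech/deepspeech /Users/naveen/Downloads/DeepSpeech/results/model_export/output_graph.pb /Users/naveen/Downloads/DeepSpeech/alphabet.txt /Users/naveen/Downloads/DeepSpeech/TEST/engtext_3488.wav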

I did the entire process again on Linux and was able to resolve the libsox issues while running inference.

But currently I am getting this problem:

(deepspeech-venv) aa@aa-ubuntu:~/Downloads/deepspeech$ /home/aa/Downloads/deepspeech/DeepSpeech/deepspeech /home/aa/Downloads/deepspeech/results/model_export/output_graph.pb /home/aa/Downloads/engtext_3488.wav /home/aa/Downloads/deepspeech/alphabet.txt /home/aa/Downloads/deepspeech/lm.binary /home/aa/Downloads/deepspeech/trie
TensorFlow: v1.6.0-16-gc346f2c
DeepSpeech: v0.2.0-alpha.4-0-g4685c1d
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2018-05-10 13:13:49.168663: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Error: Alphabet size does not match loaded model: alphabet has size 1919, but model has 28 classes in its output. Make sure you're passing an alphabet file with the same size as the one used for training.
Loading the LM will be faster if you build a binary file.
Reading /home/aa/Downloads/deepspeech/alphabet.txt
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
terminate called after throwing an instance of 'lm::FormatLoadException'
what(): native_client/kenlm/lm/read_arpa.cc:65 in void lm::ReadARPACounts(util::FilePiece&, std::vector&) threw FormatLoadException.
first non-empty line was "a" not \data. Byte: 218
Aborted (core dumped)

Thanks a lot for this, it finally got resolved. My libpng was linked to some other program, so I had to force-unlink it, and then libsox got installed properly using 'brew install sox'. Now the model is running successfully on macOS. I am still facing the issue mentioned in the previous comment on Linux.

You are passing arguments in the wrong order. WAV should be the last one.
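
Following that advice, the same invocation with the WAV moved to the end would look something like this (same paths as above, otherwise unchanged):

/home/aa/Downloads/deepspeech/DeepSpeech/deepspeech /home/aa/Downloads/deepspeech/results/model_export/output_graph.pb /home/aa/Downloads/deepspeech/alphabet.txt /home/aa/Downloads/deepspeech/lm.binary /home/aa/Downloads/deepspeech/trie /home/aa/Downloads/engtext_3488.wav

That would also explain the symptoms above: with the WAV file in the alphabet position, the client reports an alphabet of size 1919, and with alphabet.txt in the language-model position, KenLM aborts because the file does not start with \data.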