Error: Bazel build of DeepSpeech library (TensorFlow 1.14, CUDA 10, cuDNN 7.5)

Hi All

Hope you are well

I am currently trying to build generate_trie, but I receive the following error:

(deepspeech-venv) Chabanis-MacBook-Pro:tensorflow chabani$ bazel build --workspace_status_command="bash native_client/" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden // //native_client:generate_trie

Starting local Bazel server and connecting to it...

ERROR: Skipping '//': no such package 'native_client': BUILD file not found on package path

ERROR: no such package 'native_client': BUILD file not found on package path

INFO: Elapsed time: 6.511s

INFO: 0 processes.

FAILED: Build did NOT complete successfully (0 packages loaded)

This is after I had run bazel clean and reconfigured the Bazel setup, both of which completed without issues.
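For context, here is a minimal sketch of the layout this kind of native_client build expects. The directory names and the symlink step are assumptions based on the commands in this thread, not taken verbatim from the post: Bazel resolves `//native_client` against the workspace root, so a BUILD file must be reachable at `tensorflow/native_client/BUILD`, typically via a symlink into the DeepSpeech checkout, or Bazel reports "no such package".

```shell
# Hypothetical layout: a DeepSpeech checkout and a tensorflow checkout
# side by side (paths are illustrative assumptions).
mkdir -p /tmp/ds_demo/DeepSpeech/native_client /tmp/ds_demo/tensorflow
touch /tmp/ds_demo/DeepSpeech/native_client/BUILD

# Link native_client into the tensorflow workspace root, where Bazel
# will look for the //native_client package.
cd /tmp/ds_demo/tensorflow
ln -sfn ../DeepSpeech/native_client native_client

# The BUILD file is now visible from the workspace root:
ls native_client/BUILD   # prints native_client/BUILD
```

If this symlink is missing, `bazel build //native_client:generate_trie` fails with exactly the "BUILD file not found on package path" error shown above.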

Please note that I can't seem to run the pre-built binaries as per the README, which is why I decided to build from source. Here is the error message:

(deepspeech-venv) Chabanis-MacBook-Pro:tensorflow chabani$ python3 util/ --arch osx --target

python3: can't open file 'util/': [Errno 2] No such file or directory.

Thanks for your assistance

Are you sure you are using our tensorflow repo fork? Also, there’s no upstream support for CUDA on macOS.

You need to run that from the deepspeech git checkout …
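A quick way to check both points at once, sketched below. The clone URL was elided in the post above, so the remote name and expected URL here are assumptions for illustration only:

```shell
# Inspect which fork and revision a checkout points at (hypothetical
# directory name; the expected URL is whatever fork you actually cloned).
cd tensorflow
git remote get-url origin        # should name the mozilla tensorflow fork,
                                 # not upstream tensorflow/tensorflow
git rev-parse --abbrev-ref HEAD  # prints HEAD when detached at origin/r1.14
```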

Thanks for the response,

  1. Regarding the tensorflow repo fork, I am sure I used the fork, as per the output below:

(deepspeech-venv) Chabanis-MacBook-Pro:Deepspeech chabani$ git clone

Cloning into 'tensorflow'...

remote: Enumerating objects: 605841, done.

remote: Total 605841 (delta 0), reused 0 (delta 0), pack-reused 605841

Receiving objects: 100% (605841/605841), 347.18 MiB | 142.00 KiB/s, done.

Resolving deltas: 100% (490016/490016), done.

Checking out files: 100% (16472/16472), done.
(deepspeech-venv) Chabanis-MacBook-Pro:Tensorflow chabani$ git checkout origin/r1.14

Checking out files: 100% (5279/5279), done.

Note: checking out 'origin/r1.14'.

  2. Okay, I did manage to run deepspeech from the deepspeech git checkout, output as follows:

(deepspeech-venv) Chabanis-MacBook-Pro:Deepspeech chabani$ ./deepspeech --model models/output_graph.pb–alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio Track1.wav
Usage: ./deepspeech --model MODEL --alphabet ALPHABET [--lm LM --trie TRIE] --audio AUDIO [-t] [-e]

Running DeepSpeech inference.

--model MODEL		Path to the model (protocol buffer binary file)
--alphabet ALPHABET	Path to the configuration file specifying the alphabet used by the network
--lm LM			Path to the language model binary file
--trie TRIE		Path to the language model trie file created with native_client/generate_trie
--audio AUDIO		Path to the audio file to run (WAV format)
-t			Run in benchmark mode, output mfcc & inference time
--extended		Output string from extended metadata
--json			Extended output, shows word timings as JSON
--stream size		Run in stream mode, output intermediate results
--help			Show help
--version		Print version and exits

TensorFlow: v1.14.0-14-g1aad02a78e
DeepSpeech: v0.6.0-alpha.4-42-g3e60413

My issue here is that it clearly says "Running DeepSpeech inference", but no words appear on the screen when I feed in the Track1.wav file. I would expect output similar to kdavis's demo on the DeepSpeech homepage.

My issue here is that your console output is improperly formatted, so I can't tell exactly what output you got and what is part of your forum post.

So far, it mostly looks like you failed to pass the arguments properly on the command line, so nothing actually ran; you just got the help displayed ...

Notice the two errors: – instead of --, and a missing space after .pb
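To illustrate why the glued flag breaks parsing, here is a shell-level sketch (not DeepSpeech-specific): without a space, the shell hands the program one combined token, so --alphabet never arrives as its own argument and the parser complains it is missing.

```shell
# Simulate the argument list the program actually receives.
set -- --model output_graph.pb--alphabet alphabet.txt
echo $#   # 3: "--alphabet" is glued onto the model path token

set -- --model output_graph.pb --alphabet alphabet.txt
echo $#   # 4: each flag and its value is now a separate argument
```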

Apologies for the formatting.

Okay, here is the terminal output:

(deepspeech-venv) Chabanis-MacBook-Pro:deepspeech chabani$ deepspeech --model models/output_graph.pb-—alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
usage: deepspeech [-h] --model MODEL --alphabet ALPHABET [--lm [LM]]
[--trie [TRIE]] --audio AUDIO [--version] [--extended]
deepspeech: error: the following arguments are required: --alphabet

I did adjust the command as noted above but still get the alphabet error, despite running it from the deepspeech checkout.

Do you read what you post? You still have a broken -— instead of -- and a missing space.

This deepspeech command is not being run from the git tree, but rather from the Python binding (which is fine).

@lissyx, thank you. I adjusted the command as follows:

The error I get is:
deepspeech: error: unrecognized arguments: models/lm.binary

Running deepspeech from this directory:

I have the following file from the extracted DeepSpeech 0.5.1 archive: lm.binary.

I am not sure why this error occurs, since this is the documented command to run deepspeech:

deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav

Where would I be wrong here?

First, don’t post screenshots. Second, you are still missing a space between alphabet.txt and --lm

@lissyx, sorry for the screenshots. Won't happen again.

Lastly, I got the pre-trained model to work. Thanks for the patience.
