Build DeepSpeech v0.7.1 with CUDA on Linux from source

I am trying to build DeepSpeech v0.7.1 from source, following the instructions here (https://github.com/mozilla/DeepSpeech/blob/v0.7.1/native_client/README.rst#building-deepspeech-binaries).

I am getting an error like the following:

ERROR: Skipping '//native_client:libdeepspeech.so': error loading package 'native_client': Unable to find package for @com_github_nelhage_rules_boost//:boost/boost.bzl: The repository '@com_github_nelhage_rules_boost' could not be resolved.
WARNING: Target pattern parsing failed.
ERROR: error loading package 'native_client': Unable to find package for @com_github_nelhage_rules_boost//:boost/boost.bzl: The repository '@com_github_nelhage_rules_boost' could not be resolved.
INFO: Elapsed time: 2.733s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)

My CUDA version is 10.2, and my GPU is a K80 (compute capability 3.7).

I am using the Mozilla fork of TensorFlow r1.15 and Bazel 0.26.1. The configure step succeeded, but compiling libdeepspeech.so fails.
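
For reference, the build invocation documented in the linked README is roughly the following (exact flags may differ slightly between versions, so treat this as a sketch rather than my exact command):

    bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" \
        --config=monolithic --config=cuda -c opt --copt=-O3 \
        //native_client:libdeepspeech.so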

Thanks in advance for help.

This is wrong; it is documented that you need CUDA 10.0.

We already provide prebuilt binaries; can you explain why you need to rebuild from source?

Besides, the Boost dependency was added after 0.7, so are you sure you are on the correct tag?

Please use v0.7.4 if you need 0.7-compatible work.
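
For example, a minimal sketch of getting both checkouts onto matching versions (the side-by-side layout and paths are just an assumption based on the native_client README):

    # in the DeepSpeech checkout
    git fetch --tags
    git checkout v0.7.4                    # the release tag you actually want to build

    # in the Mozilla TensorFlow fork checkout next to it
    cd ../tensorflow
    git checkout r1.15                     # branch matching the 0.7.x releases
    ln -sf ../DeepSpeech/native_client .   # symlink picked up by the Bazel workspace

If the native_client directory comes from a newer checkout (e.g. master), it references dependencies such as rules_boost that the 0.7 workspace does not declare, which would explain the error above.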

Hi @lissyx

Thanks for your quick reply. I have a reason to use CUDA 10.2: another TensorFlow module runs in the same piece of software I need to integrate with, and it uses CUDA 10.2. That is why I need to build from source.

There is a reason we document 10.0: TensorFlow upstream does not officially support 10.2.

You should be able to install a local copy of CUDA 10.0 and adjust LD_LIBRARY_PATH.
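
For example (a minimal sketch; the install location is just an illustration):

    # assuming CUDA 10.0 was installed locally under /usr/local/cuda-10.0
    export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

That way libdeepspeech.so can pick up the 10.0 libraries at runtime without touching the system-wide 10.2 setup.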

@lissyx

Is there a version that provides prebuilt binaries for CUDA 10.2?

No, as lissyx already said, you will have to build for 10.2 yourself or switch to 10.0 for training.

It’s worse than that: 10.2 is not supported by TensorFlow.

I built TensorFlow r1.15 with CUDA 10.2. I want to load both my own TensorFlow model and the DeepSpeech model. Any suggestions on where to go from here?

Use LD_LIBRARY_PATH?