Unable to build deepspeech binary for v0.6.1 from scratch

Hi,

I am unable to do parallel inference on GPU. When I searched, I found this: https://discourse.mozilla.org/t/running-multiple-inferences-in-parallel-on-a-gpu/49384.
Now I am trying to build the DeepSpeech binary for 0.6.1. I have read the native client README and followed each step, but I am getting this error.

DEBUG: /home/ubuntu/.cache/bazel/_bazel_ubuntu/ad1e09741bb4109fbc70ef8216b59ee2/external/bazel_tools/tools/cpp/lib_cc_configure.bzl:115:5: 
Auto-Configuration Warning: 'TMP' environment variable is not set, using 'C:\Windows\Temp' as default
ERROR: /home/ubuntu/tensorflow/native_client/BUILD:85:1: in cc_binary rule //native_client:libdeepspeech.so: target '@org_tensorflow//tensorflow:libtensorflow_framework.so' is not visible from target '//native_client:libdeepspeech.so'. Check the visibility declaration of the former target if you think the dependency is legitimate
ERROR: /home/ubuntu/tensorflow/native_client/BUILD:85:1: in cc_binary rule //native_client:libdeepspeech.so: target '@org_tensorflow//tensorflow:libtensorflow_framework.so.1' is not visible from target '//native_client:libdeepspeech.so'. Check the visibility declaration of the former target if you think the dependency is legitimate
ERROR: /home/ubuntu/tensorflow/native_client/BUILD:85:1: in cc_binary rule //native_client:libdeepspeech.so: target '@org_tensorflow//tensorflow:libtensorflow_framework.so.1' is not visible from target '//native_client:libdeepspeech.so'. Check the visibility declaration of the former target if you think the dependency is legitimate
ERROR: Analysis of target '//native_client:libdeepspeech.so' failed; build aborted: Analysis of target '//native_client:libdeepspeech.so' failed; build aborted
INFO: Elapsed time: 15.356s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (126 packages loaded, 7208 targets configured)

As the error suggests, some targets ('@org_tensorflow//tensorflow:libtensorflow_framework.so') are not visible, and I am unable to find that file. Can you please help me resolve this error?

The steps that I did: [deepspeech_build.pdf|attachment](upload://k8rtic4z2AZkNeYL0qaNQMbh0bL.pdf) (43.8 KB)

Please help me to solve this issue.

Thanks

Looks like it failed. Please avoid sharing that as a PDF; it’s painful to read for everyone.

Those errors suggest you are either:

  • not using the documented bazel version
  • not using our tensorflow fork
  • maybe both. (A quick way to check both is sketched below.)
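
For reference, a quick check for both (a sketch; paths are illustrative, adjust to your setup):

# Check the Bazel version against the one documented for v0.6.1
bazel version

# Confirm the checkout is the Mozilla fork on the expected branch
cd tensorflow
git remote -v                 # should point at https://github.com/mozilla/tensorflow.git
git branch -r --contains HEAD # should list origin/r1.14 for DeepSpeech 0.6.1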

Ok sure, next time I will share it as text, not PDF.

I used Bazel 0.24.1, and ran git clone https://github.com/mozilla/tensorflow.git followed by git checkout origin/r1.14 for DeepSpeech 0.6.1. I followed https://www.tensorflow.org/install/source#tested_build_configurations for the Bazel version corresponding to TensorFlow 1.14.0.

Here are the steps I followed:

git clone --branch r1.14 https://github.com/mozilla/tensorflow.git
cd tensorflow
git checkout origin/r1.14

sudo apt install g++ unzip zip

pip install -U pip six numpy wheel setuptools mock 'future>=0.17.1'
pip install -U keras_applications --no-deps
pip install -U keras_preprocessing --no-deps

wget https://github.com/bazelbuild/bazel/releases/download/0.24.1/bazel-0.24.1-installer-linux-x86_64.sh

chmod +x bazel-0.24.1-installer-linux-x86_64.sh

./bazel-0.24.1-installer-linux-x86_64.sh --user

export PATH="$PATH:$HOME/bin"

cd tensorflow

./configure
Do you wish to build TensorFlow with XLA JIT support? [Y/n]: n
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
Do you wish to build TensorFlow with ROCm support? [y/N]: n
Do you wish to build TensorFlow with CUDA support? [y/N]: y
Do you wish to build TensorFlow with TensorRT support? [y/N]: n
Found CUDA 10.0 in:
    /usr/local/cuda-10.0/lib64
    /usr/local/cuda/include
Found cuDNN 7 in:
    /usr/local/cuda/lib64
    /usr/local/cuda/include
compute capabilities >= 3.5 [Default is: 7.5,7.5]: Enter (default)
Do you want to use clang as CUDA compiler? [y/N]: n
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: Enter (default)
Do you wish to build TensorFlow with MPI support? [y/N]: n
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]: Enter (default)
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
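
As an aside, if you re-run this often, ./configure can also be answered non-interactively via environment variables (a sketch, assuming TF 1.14's configure.py honors these the way TensorFlow's own CI scripts use them):

# Pre-answer the CUDA-related ./configure prompts
export TF_NEED_CUDA=1
export TF_CUDA_COMPUTE_CAPABILITIES=7.5
export TF_NEED_ROCM=0
export TF_NEED_OPENCL_SYCL=0
./configure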

bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=cuda -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie

Could you please tell me if anything is wrong in the above steps?

When I used Bazel 0.19.2 with Mozilla's TensorFlow r1.13, I successfully built generate_trie and libdeepspeech.so for DeepSpeech 0.5.1. I face this issue with DeepSpeech 0.6.1 only.

I don’t see anything obviously wrong, so I don’t know.

We don’t have any reference to org_tensorflow//tensorflow:libtensorflow_framework.so, so I don’t know what is going on.
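
One quick way to look for where that dependency sneaks in (a sketch; run from the root of the tensorflow checkout):

# Search the client sources and TF's top-level BUILD file for the offending target
grep -rn "libtensorflow_framework" native_client/ tensorflow/BUILD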

Are you sure you are building the v0.6.1 tag of DeepSpeech? Are you sure there is no stale data in the Bazel cache? Maybe purge it?
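
For reference, purging usually means something like this (destructive; the paths are illustrative):

# Drop all of Bazel's build state for this workspace
bazel clean --expunge

# Or remove the output base wholesale (the hashed directory from the log above lives here)
rm -rf ~/.cache/bazel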

Yes, I am building the v0.6.1 tag.

OK, now I will clear the Bazel cache, purge the previous Bazel installation, and try a fresh build.

@nasim.alam086 Can you share native_client/BUILD as well?

# Description: Deepspeech native client library.

load(
    "@org_tensorflow//tensorflow:tensorflow.bzl",
    "if_cuda",
    "tf_cc_shared_object",
)
load(
    "@org_tensorflow//tensorflow/lite:build_def.bzl",
    "tflite_copts",
    "tflite_linkopts",
)

config_setting(
    name = "tflite",
    define_values = {
        "runtime": "tflite",
    },
)

genrule(
    name = "workspace_status",
    outs = ["workspace_status.cc"],
    cmd = "$(location :gen_workspace_status.sh) >$@",
    local = 1,
    stamp = 1,
    tools = [":gen_workspace_status.sh"],
)

KENLM_SOURCES = glob(
    [
        "kenlm/lm/*.cc",
        "kenlm/util/*.cc",
        "kenlm/util/double-conversion/*.cc",
        "kenlm/lm/*.hh",
        "kenlm/util/*.hh",
        "kenlm/util/double-conversion/*.h",
    ],
    exclude = [
        "kenlm/*/*test.cc",
        "kenlm/*/*main.cc",
    ],
)

OPENFST_SOURCES_PLATFORM = select({
    "//tensorflow:windows": glob(["ctcdecode/third_party/openfst-1.6.9-win/src/lib/*.cc"]),
    "//conditions:default": glob(["ctcdecode/third_party/openfst-1.6.7/src/lib/*.cc"]),
})

OPENFST_INCLUDES_PLATFORM = select({
    "//tensorflow:windows": ["ctcdecode/third_party/openfst-1.6.9-win/src/include"],
    "//conditions:default": ["ctcdecode/third_party/openfst-1.6.7/src/include"],
})

LINUX_LINKOPTS = [
    "-ldl",
    "-pthread",
    "-Wl,-Bsymbolic",
    "-Wl,-Bsymbolic-functions",
    "-Wl,-export-dynamic",
]

cc_library(
    name = "decoder",
    srcs = [
        "ctcdecode/ctc_beam_search_decoder.cpp",
        "ctcdecode/decoder_utils.cpp",
        "ctcdecode/decoder_utils.h",
        "ctcdecode/scorer.cpp",
        "ctcdecode/path_trie.cpp",
        "ctcdecode/path_trie.h",
    ] + KENLM_SOURCES + OPENFST_SOURCES_PLATFORM,
    hdrs = [
        "ctcdecode/ctc_beam_search_decoder.h",
        "ctcdecode/scorer.h",
    ],
    defines = ["KENLM_MAX_ORDER=6"],
    includes = [
        ".",
        "ctcdecode/third_party/ThreadPool",
        "kenlm",
    ] + OPENFST_INCLUDES_PLATFORM,
)

tf_cc_shared_object(
    name = "libdeepspeech.so",
    srcs = [
        "deepspeech.cc",
        "deepspeech.h",
        "alphabet.h",
        "modelstate.h",
        "modelstate.cc",
        "workspace_status.h",
        "workspace_status.cc",
    ] + select({
        "//native_client:tflite": [
            "tflitemodelstate.h",
            "tflitemodelstate.cc",
        ],
        "//conditions:default": [
            "tfmodelstate.h",
            "tfmodelstate.cc",
        ],
    }),
    copts = select({
        # -fvisibility=hidden is not required on Windows, MSVC hides all declarations by default
        "//tensorflow:windows": ["/w"],
        # -Wno-sign-compare to silence a lot of warnings from tensorflow itself,
        # which makes it harder to see our own warnings
        "//conditions:default": [
            "-Wno-sign-compare",
            "-fvisibility=hidden",
        ],
    }) + select({
        "//native_client:tflite": ["-DUSE_TFLITE"],
        "//conditions:default": ["-UUSE_TFLITE"],
    }) + tflite_copts(),
    linkopts = select({
        "//tensorflow:macos": [],
        "//tensorflow:linux_x86_64": LINUX_LINKOPTS,
        "//tensorflow:rpi3": LINUX_LINKOPTS + ["-l:libstdc++.a"],
        "//tensorflow:rpi3-armv8": LINUX_LINKOPTS + ["-l:libstdc++.a"],
        "//tensorflow:windows": [],
        "//conditions:default": [],
    }) + tflite_linkopts(),
    deps = select({
        "//native_client:tflite": [
            "//tensorflow/lite/kernels:builtin_ops",
        ],
        "//conditions:default": [
            "//tensorflow/core:core_cpu",
            "//tensorflow/core:direct_session",
            "//third_party/eigen3",
            #"//tensorflow/core:all_kernels",
            ### => Trying to be more fine-grained
            ### Use bin/ops_in_graph.py to list all the ops used by a frozen graph.
            ### CPU only build, libdeepspeech.so file size reduced by ~50%
            "//tensorflow/core/kernels:spectrogram_op",  # AudioSpectrogram
            "//tensorflow/core/kernels:bias_op",  # BiasAdd
            "//tensorflow/contrib/rnn:lstm_ops_kernels",  # BlockLSTM
            "//tensorflow/core/kernels:cast_op",  # Cast
            "//tensorflow/core/kernels:concat_op",  # ConcatV2
            "//tensorflow/core/kernels:constant_op",  # Const, Placeholder
            "//tensorflow/core/kernels:shape_ops",  # ExpandDims, Shape
            "//tensorflow/core/kernels:gather_nd_op",  # GatherNd
            "//tensorflow/core/kernels:identity_op",  # Identity
            "//tensorflow/core/kernels:immutable_constant_op",  # ImmutableConst (used in memmapped models)
            "//tensorflow/core/kernels:deepspeech_cwise_ops",  # Less, Minimum, Mul
            "//tensorflow/core/kernels:matmul_op",  # MatMul
            "//tensorflow/core/kernels:reduction_ops",  # Max
            "//tensorflow/core/kernels:mfcc_op",  # Mfcc
            "//tensorflow/core/kernels:no_op",  # NoOp
            "//tensorflow/core/kernels:pack_op",  # Pack
            "//tensorflow/core/kernels:sequence_ops",  # Range
            "//tensorflow/core/kernels:relu_op",  # Relu
            "//tensorflow/core/kernels:reshape_op",  # Reshape
            "//tensorflow/core/kernels:softmax_op",  # Softmax
            "//tensorflow/core/kernels:tile_ops",  # Tile
            "//tensorflow/core/kernels:transpose_op",  # Transpose
            # And we also need the op libs for these ops used in the model:
            "//tensorflow/core:audio_ops_op_lib",  # AudioSpectrogram, Mfcc
            "//tensorflow/contrib/rnn:lstm_ops_op_lib",  # BlockLSTM
            "//tensorflow/core:math_ops_op_lib",  # Cast, Less, Max, MatMul, Minimum, Range
            "//tensorflow/core:array_ops_op_lib",  # ConcatV2, Const, ExpandDims, Fill, GatherNd, Identity, Pack, Placeholder, Reshape, Tile, Transpose
            "//tensorflow/core:no_op_op_lib",  # NoOp
            "//tensorflow/core:nn_ops_op_lib",  # Relu, Softmax, BiasAdd
            # And op libs for these ops brought in by dependencies of dependencies to silence unknown OpKernel warnings:
            "//tensorflow/core:dataset_ops_op_lib",  # UnwrapDatasetVariant, WrapDatasetVariant
            "//tensorflow/core:sendrecv_ops_op_lib",  # _HostRecv, _HostSend, _Recv, _Send
        ],
    }) + if_cuda([
        "//tensorflow/core:core",
    ]) + [":decoder"],
)

genrule(
    name = "libdeepspeech_so_dsym",
    srcs = [":libdeepspeech.so"],
    outs = ["libdeepspeech.so.dSYM"],
    output_to_bindir = True,
    cmd = "dsymutil $(location :libdeepspeech.so) -o $@"
)

cc_binary(
    name = "generate_trie",
    srcs = [
        "alphabet.h",
        "generate_trie.cpp",
    ],
    copts = ["-std=c++11"],
    linkopts = [
        "-lm",
        "-ldl",
        "-pthread",
    ],
    deps = [":decoder"],
)

cc_binary(
    name = "trie_load",
    srcs = [
        "alphabet.h",
        "trie_load.cc",
    ],
    copts = ["-std=c++11"],
    linkopts = [
        "-lm",
        "-ldl",
        "-pthread",
    ],
    deps = [":decoder"],
)

Hi,
I am getting the same error. Any update on how it can be solved?
Any help would be appreciated.

--config=monolithic

bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=-O3 --config=monolithic --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie


Good point. There was so much noise I missed this even though I know it’s important :slight_smile:

Please verify your command line again @nasim.alam086 @Tanish_Kaushal

Thanks for the reply.

But as far as I know, the option --config=monolithic is for the CPU-based build, not the GPU one?
We used --config=cuda; that is the only difference between the GPU and CPU build commands.

Monolithic builds have nothing to do with being GPU-based. The docs say to add --config=cuda, not to replace some option with it: https://github.com/mozilla/DeepSpeech/tree/v0.6.1/native_client#compile-libdeepspeechso--generate_trie
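
Concretely, that would be the monolithic command from above with --config=cuda added (a sketch; everything else unchanged):

bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=-O3 --config=monolithic --config=cuda --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie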

OK, that means I need to add --config=cuda along with --config=monolithic.

Sorry, my bad, I did not read the doc carefully; it suggests adding --config=cuda, not replacing --config=monolithic with it.

Thanks a lot