I am attempting to build the deepspeech-gpu bindings and the native_client for ARMv8 with GPU support. The target platform is NVIDIA’s Jetson-class embedded systems, the TX1/TX2 in particular, though I have access to a PX2 as well. These systems run Ubuntu 16.04 LTS for aarch64, with CUDA 8.0 and cuDNN 6; the compute capability is 5.2.
I have the DeepSpeech repo as of commit e5757d21a38d40923c1de9b86597685f365150ee, the Mozilla fork of TensorFlow as of commit 08894f64fc67b7a8031fc68cb838a27009c3e6e6, and Bazel 0.5.4. My Python version is
I have added the
--config=cuda option to the suggested build command. Here’s the session output:
```
ubuntu@nvidia:~/Source/deepspeech/tensorflow$ bazel build -c opt --config=cuda --copt=-O3 //tensorflow:libtensorflow_cc.so //tensorflow:libtensorflow_framework.so //native_client:deepspeech //native_client:deepspeech_utils //native_client:libctc_decoder_with_kenlm.so //native_client:generate_trie
....
[547 / 671] Compiling native_client/kenlm/util/double-conversion/bignum-dto
ERROR: /home/ubuntu/Source/deepspeech/tensorflow/native_client/BUILD:48:1: C++ compilation of rule '//native_client:deepspeech' failed (Exit 1).
In file included from native_client/kenlm/util/double-conversion/bignum-dtoa.h:31:0,
                 from native_client/kenlm/util/double-conversion/bignum-dtoa.cc:30:
native_client/kenlm/util/double-conversion/utils.h:71:2: error: #error Target architecture was not detected as supported by Double-Conversion.
 #error Target architecture was not detected as supported by Double-Conversion.
 ^
```
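For context on what is failing: the check that trips is a preprocessor architecture whitelist in the double-conversion library bundled with KenLM. The snippet below is only a paraphrased sketch of what the guard around utils.h line 71 amounts to, not the exact upstream source; the macro names are the usual GCC predefines, and the point is that an older copy of this whitelist may not test for any aarch64 macro, so the #error branch fires on ARMv8:

```cpp
// Paraphrased sketch of double-conversion's utils.h architecture guard.
// If none of the whitelisted architecture macros is defined, compilation
// aborts with the #error seen in the log above. On an aarch64 toolchain
// GCC defines __aarch64__ (and __AARCH64EL__ for little-endian), which an
// older bundled whitelist may simply not check.
#if defined(_M_X64) || defined(__x86_64__) || \
    defined(__ARMEL__) || defined(__AARCH64EL__) || \
    defined(_MIPS_ARCH_MIPS32R2)
#define DOUBLE_CONVERSION_CORRECT_DOUBLE_OPERATIONS 1
#else
#error Target architecture was not detected as supported by Double-Conversion.
#endif
```

If that is the root cause, patching the whitelist locally might unblock the build, but I’d rather trim my target list if that is the real problem.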
What would be a more appropriate list of build targets to give Bazel? I’m willing to go without the language model for now if I have to; the raw output from the NN is good enough for my purposes right now.
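For reference, this is the reduced invocation I was considering, dropping the KenLM-dependent decoder targets. It is untested, and since the error above is raised while building //native_client:deepspeech itself, I am not sure it goes far enough:

```
bazel build -c opt --config=cuda --copt=-O3 \
    //tensorflow:libtensorflow_cc.so \
    //tensorflow:libtensorflow_framework.so \
    //native_client:deepspeech \
    //native_client:deepspeech_utils
```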