Can you share more details on exactly what you are doing?
What kind of details do you need?
I want to be able to modify the C++ source code, so I am trying to build the existing code. I have cloned mozilla/tensorflow and simply followed the instructions in "native_client/README.md".
Both mozilla/tensorflow and mozilla/deepspeech are on the HEAD of their master branch.
Which system are you building for? If you are using DeepSpeech/master, you should be using tensorflow/r1.6.
I am building for Linux 64-bit with CUDA. I have just switched to tensorflow/r1.6 and I seem to have the same issue.
I have also hit the following error (which has already been reported):
Illegal ambiguous match on configurable attribute "deps" in @jpeg//:jpeg:
@jpeg//:k8
@jpeg//:armeabi-v7a
So I have commented out the armeabi-v7a part.
It seems really spurious. Which distro are you targeting? Which version of bazel are you using?
If you are building for CUDA then you lack --config=cuda as well…
bazel release 0.12.0
Ubuntu 16.04 (kernel #43~16.04.1-Ubuntu)
Can you try and stick to bazel v0.10.0?
Also, why do you need to rebuild? Our binaries should work well on 16.04.
ok
Yes, I can run DeepSpeech with the pre-built binaries. But I suspect there might be a bug in the binaries related to this issue https://github.com/mozilla/DeepSpeech/issues/1156.
Well, building yourself, you will get the same bug.
Well, actually, I would only need to recompile libctc_decoder_with_kenlm.so for now, which works. I have trouble only with building libdeepspeech_utils.so.
Yeah, but to be able to fix it, I need to be able to build.
Still, I'm inclined to tell you "works for me", which is not helpful, so there is something wrong in your environment, somehow.
Knowing that it works on another environment already helps. I have downgraded to bazel 0.10, we’ll see…
OK, with bazel 0.10 and "--config=cuda --config=monolithic", everything builds!
The python binding of libctc_decoder_with_kenlm.so seems broken with DeepSpeech.py, but it might eventually work.
Thanks !
libctc_decoder_with_kenlm.so should work; what is the error you have?
DeepSpeech.py says:
AttributeError: 'module' object has no attribute 'ctc_beam_search_decoder_with_lm'
The command arguments are not relevant, since it works with the pre-built libctc_decoder_with_kenlm.so, and this function (ctc_beam_search_decoder_with_lm) is always needed.
I can't find where the function is defined, though; I suppose there should be a Python binding from some C++ function in libctc_decoder_with_kenlm.so to the Python function ctc_beam_search_decoder_with_lm.
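For context, a sketch of how that binding works (this mirrors TensorFlow's behavior, not its actual implementation): tf.load_op_library("libctc_decoder_with_kenlm.so") returns a module whose attributes are the ops registered in the .so, renamed from CamelCase to snake_case, so a registered "CTCBeamSearchDecoderWithLM" op shows up as ctc_beam_search_decoder_with_lm. The renaming rule can be approximated like this:

```python
import re

def registered_op_to_python_name(op_name):
    """Approximate TensorFlow's CamelCase -> snake_case op naming rule.

    tf.load_op_library("some_op.so") exposes each registered op as a
    module attribute under a snake_case name; this sketch reproduces
    that conversion (it is not TF's own code).
    """
    # Split before an uppercase letter that starts a new lowercase word
    # ("CTCBeam" -> "CTC_Beam") ...
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", op_name)
    # ... and between a lowercase letter/digit and an uppercase run,
    # which keeps trailing acronyms intact ("WithLM" -> "With_LM").
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s)
    return s.lower()

print(registered_op_to_python_name("CTCBeamSearchDecoderWithLM"))
# -> ctc_beam_search_decoder_with_lm
```

So the AttributeError means the loaded module simply has no op registered under that name, not that the .so failed to load.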
EDIT: this page https://www.tensorflow.org/extend/adding_an_op describes the mechanism used to add a custom operation. For instance, my libctc_decoder_with_kenlm.so does contain the following symbols:
000000000031bd70 w F .text 0000000000001be2 _ZN28CTCBeamSearchDecoderWithLMOp7ComputeEPN10tensorflow15OpKernelContextE
0000000000311730 w F .text 000000000000025a _ZN28CTCBeamSearchDecoderWithLMOpD2Ev
0000000000313450 w F .text 00000000000019d3 _ZN28CTCBeamSearchDecoderWithLMOpC2EPN10tensorflow20OpKernelConstructionE
0000000000ce43c8 w O .data.rel.ro 0000000000000018 _ZTI28CTCBeamSearchDecoderWithLMOp
0000000000ce4550 w O .data.rel.ro 0000000000000038 _ZTV28CTCBeamSearchDecoderWithLMOp
0000000000313450 w F .text 00000000000019d3 _ZN28CTCBeamSearchDecoderWithLMOpC1EPN10tensorflow20OpKernelConstructionE
0000000000311730 w F .text 000000000000025a _ZN28CTCBeamSearchDecoderWithLMOpD1Ev
000000000095cd20 w O .rodata 000000000000001f _ZTS28CTCBeamSearchDecoderWithLMOp
0000000000311990 w F .text 0000000000000262 _ZN28CTCBeamSearchDecoderWithLMOpD0Ev
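As a quick sanity check that needs neither TensorFlow nor binutils, a dump like the one above can be inspected directly: Itanium-mangled C++ names embed each name as a length-prefixed string, so "_ZN28CTCBeamSearchDecoderWithLMOp7Compute…" is the 28-character class CTCBeamSearchDecoderWithLMOp and its Compute() method. A minimal sketch, using two lines copied from the dump:

```python
# Two representative lines copied from the symbol dump above.
symbols = """\
000000000031bd70 w F .text 0000000000001be2 _ZN28CTCBeamSearchDecoderWithLMOp7ComputeEPN10tensorflow15OpKernelContextE
0000000000313450 w F .text 00000000000019d3 _ZN28CTCBeamSearchDecoderWithLMOpC2EPN10tensorflow20OpKernelConstructionE
"""

# The mangled name is the last whitespace-separated field of each line.
mangled = [line.split()[-1] for line in symbols.splitlines()]

# Itanium mangling: _ZN + <len><name>...E ; "28CTCBeamSearchDecoderWithLMOp"
# is the 28-character class name, "7Compute" its Compute() method.
has_compute = any(
    s.startswith("_ZN28CTCBeamSearchDecoderWithLMOp7Compute") for s in mangled
)
print(has_compute)  # -> True
```

This only proves the op kernel's code is present in the library; the op still has to be registered with TensorFlow's runtime for the Python attribute to appear.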
This guy had the same issue:
Oh, did you by mistake build it with --config=monolithic?
Oh yes, sorry, I should have mentioned that!
Without this flag I had link errors about unresolved TensorFlow symbols. I will rebuild and tell you which ones.