Hey,
I'm trying to run my DeepSpeech model after generating these binaries: lm.binary and the trie file,
but I am getting this error:
Error: Can't parse trie file, invalid header. Try updating your trie file.
terminate called after throwing an instance of 'int'
Fatal Python error: Aborted
I installed Mozilla's TensorFlow r1.13 branch,
then I used Bazel to build the deepspeech library and the generate_trie binary.
I downloaded DeepSpeech version 0.6.0-alpha.2,
and I installed deepspeech-gpu v0.5.1-0-g4b29b78.
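Side note: the "invalid header" message generally points at a version mismatch, i.e. the trie was produced by a generate_trie built from one release while the installed deepspeech client expects another (here the checkout is 0.6.0-alpha.2 but the Python package is 0.5.1). A minimal sketch for checking which client package is actually installed, assuming the standard deepspeech / deepspeech-gpu wheel names:

import pkg_resources

# Print the installed DeepSpeech client version(s) so they can be compared
# against the repo checkout used to build generate_trie and the LM/trie files.
for pkg in ("deepspeech-gpu", "deepspeech"):
    try:
        print(pkg, pkg_resources.get_distribution(pkg).version)
    except pkg_resources.DistributionNotFound:
        print(pkg, "not installed")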
lol OK, you're right.
I did what you told me: I did not rebuild, and I just used the pre-built binaries for deepspeech by running this command:
python3 util/taskcluster.py --target .
I installed deepspeech-gpu v0.6.0-alpha.1,
and I am using DeepSpeech version 0.6.0-alpha.3.
I also have TensorFlow 1.14,
but I cannot train my model.
Here is the error I get:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation tower_0/DeserializeSparse: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='' supported_device_types_=[CPU] possible_devices_=[]
DeserializeSparse: CPU
Colocation members, user-requested devices, and framework assigned devices, if any:
tower_0/DeserializeSparse (DeserializeSparse) /device:GPU:0
Op: DeserializeSparse
Node attrs: dtype=DT_INT32, Tserialized=DT_VARIANT
Registered kernels:
device='CPU'; Tserialized in [DT_VARIANT]
device='CPU'; Tserialized in [DT_STRING]
[[{{node tower_0/DeserializeSparse}}]]
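For context, the log above says that the DeserializeSparse op only registers CPU kernels, so TensorFlow cannot honour an explicit /device:GPU:0 request for it. In plain TensorFlow 1.x, the session option that lets such ops fall back to the CPU instead of aborting is allow_soft_placement; a minimal sketch of that mechanism (an illustration only, not the DeepSpeech training script):

import tensorflow as tf

# With soft placement enabled, ops that have no GPU kernel (like DeserializeSparse)
# are silently placed on the CPU instead of raising InvalidArgumentError.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    pass  # build and run the training graph here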
And I have CUDA 10.0. Here is the output of nvcc -V:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
Then when I execute the command to train my model, I get the above error.
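CUDA 10.0 is the toolkit the official TensorFlow 1.14 GPU wheels are built against, so those versions line up; a quick sketch for confirming that this TensorFlow build actually sees the GPU (standard TF 1.x calls, nothing DeepSpeech-specific):

import tensorflow as tf
from tensorflow.python.client import device_lib

# A GPU entry in this list confirms the CUDA 10.0 / TensorFlow 1.14 pairing works,
# independently of the DeserializeSparse placement error above.
print("GPU available:", tf.test.is_gpu_available())
print([d.name for d in device_lib.list_local_devices()])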
lissyx:
That looks weird; maybe it's a regression on GPUs from switching to TensorFlow r1.14.
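If it does turn out to be a placement regression, the usual TensorFlow 1.x workaround is to pin just the offending part of the graph to the CPU; a minimal sketch of that pattern (the toy sparse tensor below is illustrative, it is not the actual DeepSpeech input pipeline):

import tensorflow as tf

# Pin the sparse-label handling (the part that triggers DeserializeSparse-style ops,
# which only have CPU kernels) to the CPU, and leave the heavy compute on the GPU.
with tf.device('/cpu:0'):
    sparse_labels = tf.SparseTensor(indices=[[0, 0], [1, 2]],
                                    values=[1, 2],
                                    dense_shape=[3, 4])
    dense_labels = tf.sparse.to_dense(sparse_labels)

with tf.device('/gpu:0'):
    scaled = dense_labels * 2  # stand-in for the actual GPU work

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(scaled))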