RuntimeError: CreateModel failed with error code 12294

Hi guys,
I am new to DeepSpeech. I installed all the required libraries and models, but I got an error that I have tried a lot to fix and can't find a solution for. Can you help out? Please find below the command and log.

C:\Users>deepspeech --model models/output_graph.pb --alphabet models/alphabet.txt --audio output.wav

Loading model from file models/output_graph.pb
TensorFlow: v1.13.1-10-g3e0cc5374d
DeepSpeech: v0.5.0-alpha.8-0-ga4b35d2
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-07-26 15:35:40.761126: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-07-26 15:35:46.551626: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-07-26 15:35:46.551626: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-07-26 15:35:46.551626: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-07-26 15:35:46.551626: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
Not found: Op type not registered 'Assert' in binary running on A356A7L626LB8ZC. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
Traceback (most recent call last):
  File "c:\programdata\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\programdata\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\ProgramData\Anaconda3\Scripts\deepspeech.exe\__main__.py", line 9, in <module>
  File "c:\programdata\anaconda3\lib\site-packages\deepspeech\client.py", line 88, in main
    ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
  File "c:\programdata\anaconda3\lib\site-packages\deepspeech\__init__.py", line 23, in __init__
    raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 12294

What model are you using? This is a textbook case of a model that is incompatible with the binaries.
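The 12294 status only tells you that CreateModel failed; by far the most common cause is a model exported by a different DeepSpeech release than the installed runtime. As a rough sketch of the compatibility rule (this helper is hypothetical, not part of the deepspeech package), model and runtime should come from the same release line:

```python
# Hypothetical helper: the model release and the deepspeech runtime should
# match on their major.minor version (e.g. a 0.5.x runtime needs a 0.5.x model).
def versions_match(runtime_version, model_version):
    """Compare two version strings on major.minor only."""
    def normalize(v):
        # Drop a leading 'v' and any pre-release tag like '-alpha.8'.
        return v.lstrip("v").split("-")[0].split(".")[:2]
    return normalize(runtime_version) == normalize(model_version)

print(versions_match("v0.5.0-alpha.8", "v0.1.1"))  # False: the mismatch in this thread
print(versions_match("v0.5.1", "v0.5.1"))          # True
```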

Hi, I downloaded a model from
https://github.com/mozilla/DeepSpeech/releases/download/v0.1.1/deepspeech-0.1.1-models.tar.gz

So please use a matching version. Also, your deepspeech runtime is a bit outdated; 0.5.0-alpha.8 is quite old. Please update to 0.5.1, for both the model and the runtime.
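For reference, a matching install would look something like this on a Unix-like shell (assuming a pip-based install and the standard release tarball layout; adjust paths and the download tool for your system):

```shell
# Install the runtime and fetch the model from the SAME release (0.5.1 here).
pip install --upgrade deepspeech==0.5.1
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz
tar xvf deepspeech-0.5.1-models.tar.gz
deepspeech --model deepspeech-0.5.1-models/output_graph.pbmm \
           --alphabet deepspeech-0.5.1-models/alphabet.txt \
           --audio output.wav
```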

Thank you.
https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz

Is this the correct model to use?

Yes, that's the one you want to use!

After executing the command: python3 recognize.py models/output_graph.pb ae_recorded.wav models/alphabet.txt models/lm.binary models/trie

I got this error:

Loading model from file models/output_graph.pb
TensorFlow: v1.14.0-21-ge77504a
DeepSpeech: v0.6.0-0-g6d43e21
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2020-01-09 17:36:40.785438: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Not found: Op type not registered 'Assert' in binary running on vaibhav. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
Traceback (most recent call last):
  File "recognize.py", line 96, in <module>
    main()
  File "recognize.py", line 63, in main
    ds = Model(args.model, BEAM_WIDTH)
  File "/home/robust_audio_ae/env/lib/python3.6/site-packages/deepspeech/__init__.py", line 42, in __init__
    raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 12294

I am using model 0.6.0 and DeepSpeech 0.6.0

Are you sure about that? Your model contains ops that are not in the binary, so I don't think your model is good. Where does it come from?

Yes, I am sure.

Model: https://github.com/hiromu/robust_audio_ae

Is this a joke? Your link stipulates a v0.1.0 model, not 0.6.0. Some very old models did have the Assert op.