I could be wrong (what you’ve written about what you’ve done isn’t totally clear to me), but it looks like you installed from master. You’d need to install from the checkpoint for the last full release (i.e., 0.5.1) if you want to use the pre-trained model, which only exists for 0.5.1 and earlier.
@adesara.amit I would suggest checking the two file path parameters in the Model() call.
I updated the code from:
ds = Model('deepspeech-0.5.1-models/output_graph.pbmm', N_FEATURES, N_CONTEXT, 'deepspeech-0.5.1-models/alphabet.txt', BEAM_WIDTH)
to:
ds = Model('deepspeech-0.5.1-models/deepspeech-0.5.1-models/output_graph.pbmm', N_FEATURES, N_CONTEXT, 'deepspeech-0.5.1-models/deepspeech-0.5.1-models/alphabet.txt', BEAM_WIDTH)
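Before handing paths to Model(), it can save a round of confusing TensorFlow errors to verify they actually resolve from the current working directory. A minimal sketch (the paths mirror the ones in the post; check_model_paths is a hypothetical helper, not part of the DeepSpeech API):

```python
import os

def check_model_paths(*paths):
    """Return the subset of paths that do not exist on disk, so Model()
    is never handed a file TensorFlow cannot open."""
    return [p for p in paths if not os.path.isfile(p)]

missing = check_model_paths(
    "deepspeech-0.5.1-models/output_graph.pbmm",  # paths from the post
    "deepspeech-0.5.1-models/alphabet.txt",
)
if missing:
    print("Missing files:", missing)
```

If the list is non-empty, the paths are relative-path problems rather than DeepSpeech problems.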
Hey everyone! This is my version:
TensorFlow: v1.14.0-21-ge77504ac6b
DeepSpeech: v0.6.1-0-g3df20fe
I ran this:
(deepspeech-venv) C:\Users\User\HOME\tmp\deepspeech-venv\Scripts\deepspeech-0.6.0-models> deepspeech --model models/output_graph.pbmm --audio my_audio_file.wav
and I get this error:
Not found: NewReadOnlyMemoryRegionFromFile failed to Create/Open: models/output_graph.pbmm : The system cannot find the path specified.
; No such process
Traceback (most recent call last):
  File "c:\python\python37_2\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\python\python37_2\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Python\Python37_2\Scripts\deepspeech.exe\__main__.py", line 7, in <module>
  File "c:\python\python37_2\lib\site-packages\deepspeech\client.py", line 113, in main
    ds = Model(args.model, args.beam_width)
  File "c:\python\python37_2\lib\site-packages\deepspeech\__init__.py", line 42, in __init__
    raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 12288
By googling "ERROR: Model provided has model identifier '='+;', should be 'TFL3'" and "RuntimeError: CreateModel failed with error code 12288",
I found this issue:
In one of the answers, Lissyx recommends switching to the tflite model, because the binaries are built to use the TensorFlow Lite runtime:
Lissyx: "As documented, RPi3/4 binaries are using the TensorFlow Lite runtime, so you need to pass output_graph.tflite and not output_graph.pbmm."
Therefore it seems that your installation uses the TensorFlow Lite runtime.
Try: deepspeech --model output_graph.tflite --audio myaudio.wav
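If you are not sure which runtime your build uses, one workaround is to try whichever graph file is present. A small sketch (pick_model_file is a hypothetical helper; the file names follow the output_graph.* convention used in this thread):

```python
import os

def pick_model_file(model_dir):
    """Prefer the TFLite graph when present (TFLite-runtime builds
    cannot load .pbmm); otherwise fall back to the .pbmm graph."""
    for name in ("output_graph.tflite", "output_graph.pbmm"):
        candidate = os.path.join(model_dir, name)
        if os.path.isfile(candidate):
            return candidate
    raise FileNotFoundError("no output_graph.* found in " + model_dir)
```

Pass the returned path to deepspeech --model (or to Model() in the Python API).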
My output from the deepspeech command:
Loading model from file output_graph.tflite
TensorFlow: v1.14.0-21-ge77504a
DeepSpeech: v0.6.1-0-g3df20fe
INFO: Initialized TensorFlow Lite runtime.
Loaded model in 0.0428s.
Warning: original sample rate (48000) is different than 16000hz. Resampling might produce erratic speech recognition.
Running inference.
one two one two on to three four five six
Inference took 15.384s for 15.000s audio file.
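The warning about the 48000 Hz sample rate is worth acting on: the released DeepSpeech models expect 16 kHz, 16-bit mono audio. Below is a minimal standard-library sketch that decimates a 48 kHz mono WAV down to 16 kHz. It is crude (no anti-aliasing filter, and it only handles the exact 48 kHz mono 16-bit case); sox or ffmpeg resample with proper filtering and are the better choice for real use:

```python
import wave

def downsample_48k_to_16k(src_path, dst_path):
    """Decimate a 48 kHz, mono, 16-bit WAV to 16 kHz by keeping every
    third sample (48000 / 16000 == 3)."""
    with wave.open(src_path, "rb") as src:
        if (src.getframerate(), src.getnchannels(), src.getsampwidth()) != (48000, 1, 2):
            raise ValueError("expected 48 kHz, mono, 16-bit input")
        frames = src.readframes(src.getnframes())
    # Each 16-bit sample is 2 bytes; step 6 bytes to keep 1 sample in 3.
    out = b"".join(frames[i:i + 2] for i in range(0, len(frames), 6))
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(16000)
        dst.writeframes(out)
```

Feeding the 16 kHz file to deepspeech should make the resampling warning go away.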