RuntimeError: CreateModel failed with 'Error reading the proto buffer model file.' (0x3005)

Hello everyone. I’m new to DeepSpeech, so either I’m facing a real issue here or I just don’t know how to use it properly.

I’m working on Windows 10 and using the DeepSpeech Python package.

I want to use the prebuilt French models for DeepSpeech, which are available here.

So I’ve set up two Python virtual environments with venv.

In the first venv, I’ve downloaded the French TensorFlow model.

And in the second venv, I’ve downloaded the French TFLite model.

The first environment, set up for the TensorFlow model, contains the following packages:

colorama    0.4.4
deepspeech  0.9.3
halo        0.0.31
log-symbols 0.0.14
numpy       1.14.5
pip         18.1
PyAudio     0.2.11
scipy       1.4.1
setuptools  40.6.2
six         1.15.0
spinners    0.0.24
termcolor   1.1.0
webrtcvad   2.0.10

The second environment, set up for the TFLite model, contains the following packages:

absl-py                0.11.0
astunparse             1.6.3
cachetools             4.2.0
certifi                2020.12.5
chardet                4.0.0
colorama               0.4.4
deepspeech             0.8.0
deepspeech-tflite      0.8.0
gast                   0.3.3
google-auth            1.24.0
google-auth-oauthlib   0.4.2
google-pasta           0.2.0
grpcio                 1.34.0
h5py                   2.10.0
halo                   0.0.31
idna                   2.10
importlib-metadata     3.3.0
Keras-Preprocessing    1.1.2
log-symbols            0.0.14
Markdown               3.3.3
numpy                  1.14.4
oauthlib               3.1.0
opt-einsum             3.3.0
pip                    18.1
protobuf               3.14.0
pyasn1                 0.4.8
pyasn1-modules         0.2.8
PyAudio                0.2.11
requests               2.25.1
requests-oauthlib      1.3.0
rsa                    4.6
scipy                  1.4.1
setuptools             40.6.2
six                    1.15.0
spinners               0.0.24
tensorboard            2.4.0
tensorboard-plugin-wit 1.7.0
tensorflow-estimator   2.3.0
termcolor              1.1.0
typing-extensions      3.7.4.3
urllib3                1.26.2
webrtcvad              2.0.10
Werkzeug               1.0.1
wheel                  0.36.2
wrapt                  1.12.1
zipp                   3.4.0

Now I want to use mic_vad_streaming in both virtual environments.

When I work with the first venv (TensorFlow model), I have no problems and DeepSpeech works flawlessly (I’ve noticed some lag/slow responses, but that’s okay for now).

But when I try to use the second venv (TFLite model), I run into this issue:

Loading model from file output_graph.tflite
TensorFlow: v2.2.0-17-g0854bb5188
DeepSpeech: v0.8.0-0-gf56b07da
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2021-01-05 21:30:07.055669: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Data loss: Can't parse output_graph.tflite as binary proto
Traceback (most recent call last):
  File "C:\Program Files\Python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Program Files\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\HP\Downloads\model_tflite_fr\Scripts\deepspeech.exe\__main__.py", line 9, in <module>
  File "c:\users\hp\downloads\model_tflite_fr\lib\site-packages\deepspeech\client.py", line 117, in main
    ds = Model(args.model)
  File "c:\users\hp\downloads\model_tflite_fr\lib\site-packages\deepspeech\__init__.py", line 38, in __init__
    raise RuntimeError("CreateModel failed with '{}' (0x{:X})".format(deepspeech.impl.ErrorCodeToErrorMessage(status),status))
RuntimeError: CreateModel failed with 'Error reading the proto buffer model file.' (0x3005)
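As a side note on the “binary proto” message above: it suggests a protobuf-based runtime tried to parse the `.tflite` file. One way to sanity-check that the model file itself is a TFLite model is to look for the FlatBuffers file identifier, which in TFLite files is `TFL3` at byte offset 4. This is a generic sketch (the synthetic header below is made up for the self-check; a real check would point at `output_graph.tflite`):

```python
import os
import tempfile

def looks_like_tflite(path):
    """Return True if the file carries the TFLite FlatBuffers identifier."""
    with open(path, "rb") as f:
        header = f.read(8)
    # FlatBuffers places the 4-byte file identifier at offset 4.
    return len(header) == 8 and header[4:8] == b"TFL3"

# Quick self-check against a synthetic header; replace `fake` with the
# path to output_graph.tflite to test a real model file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x1c\x00\x00\x00TFL3rest-of-flatbuffer")
    fake = f.name
print(looks_like_tflite(fake))  # True
os.unlink(fake)
```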

Here’s the output of the first venv (TensorFlow model) when it works successfully:

Initializing model...
INFO:root:ARGS.model: output_graph.pbmm
TensorFlow: v2.3.0-6-g23ad988fcd
DeepSpeech: v0.9.3-0-gf2e9c858
2021-01-05 21:19:58.129550: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
INFO:root:ARGS.scorer: kenlm.scorer
Listening (ctrl-C to exit)...
Recognized: bonjour
Recognized: bonsoir dont
Recognized: en
Recognized: on range en deux
Recognized: mais profond
Recognized: la pole
Recognized: mai coute moi bien
Recognized: la
Recognized: paul
Recognized: du point

The command I used for the first venv (TensorFlow model), which works successfully:
python mic_vad_streaming.py -m output_graph.pbmm -s kenlm.scorer

The command I used for the second venv (TFLite model), which doesn’t work:
python mic_vad_streaming.py -m output_graph.tflite -s kenlm.scorer

I’ve even tried running deepspeech directly in the second venv on a .wav file, but I get the same result:

(model_tflite_fr) C:\Users\Ayoub\Downloads\model_tflite_fr>deepspeech --model output_graph.tflite --scorer kenlm.scorer --audio outputs\savewav_2021-01-05_21-26-23_483447.wav
Loading model from file output_graph.tflite
TensorFlow: v2.2.0-17-g0854bb5188
DeepSpeech: v0.8.0-0-gf56b07da
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2021-01-05 21:30:07.055669: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Data loss: Can't parse output_graph.tflite as binary proto
Traceback (most recent call last):
  File "C:\Program Files\Python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Program Files\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\HP\Downloads\model_tflite_fr\Scripts\deepspeech.exe\__main__.py", line 9, in <module>
  File "c:\users\hp\downloads\model_tflite_fr\lib\site-packages\deepspeech\client.py", line 117, in main
    ds = Model(args.model)
  File "c:\users\hp\downloads\model_tflite_fr\lib\site-packages\deepspeech\__init__.py", line 38, in __init__
    raise RuntimeError("CreateModel failed with '{}' (0x{:X})".format(deepspeech.impl.ErrorCodeToErrorMessage(status),status))
RuntimeError: CreateModel failed with 'Error reading the proto buffer model file.' (0x3005)

I think that’s all.

I appreciate any help, and thanks Mozilla for this awesome project.

Hm, you are the second person today with TFLite problems, but that one was a custom build.

Any particular reason you are using v0.8 for TFLite and v0.9 for the pb model? Set up a new venv and use only the most current 0.9 version if possible. And please search before posting next time. Have you seen this thread?

It looks like in your second environment you have both deepspeech and deepspeech-tflite installed; they may be conflicting. Try installing only the TFLite version.
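A quick way to confirm which runtimes are present in the active venv is to query installed distributions with the standard-library `importlib.metadata` (the two package names are taken from the pip list above; this is just a diagnostic sketch, not part of the DeepSpeech API):

```python
from importlib import metadata

def installed_runtimes(names=("deepspeech", "deepspeech-tflite")):
    """Return (name, version) for each distribution actually installed."""
    found = []
    for name in names:
        try:
            found.append((name, metadata.version(name)))
        except metadata.PackageNotFoundError:
            # Distribution not installed in this environment.
            pass
    return found

runtimes = installed_runtimes()
if len(runtimes) > 1:
    print("Both runtimes installed, likely conflicting:", runtimes)
```

If it reports both, `pip uninstall` one of them so only a single runtime remains in the venv.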


Any particular reason you are using v0.8 for tflite and 0.9 for the pb model?

Well, I initially installed DeepSpeech v0.9 in both venvs, but the Python requirements of mic_vad_streaming v0.9, which you can find here, pin deepspeech 0.8.0, as you can see:

deepspeech~=0.8.0

So when I run pip install -r requirements.txt for mic_vad_streaming, pip automatically downgrades deepspeech to 0.8.0.
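For reference, `~=0.8.0` is a PEP 440 compatible-release specifier, equivalent to `>=0.8.0, ==0.8.*`, which is why pip replaces a 0.9.x install. A rough sketch of the rule for plain three-part versions (real pip does full PEP 440 parsing; `pep440_compatible` is just an illustrative name):

```python
def pep440_compatible(version, pin="0.8.0"):
    """Approximate PEP 440 '~=' for simple X.Y.Z versions.

    '~=0.8.0' means '>=0.8.0' and '==0.8.*'.
    """
    v = tuple(int(part) for part in version.split("."))
    base = tuple(int(part) for part in pin.split("."))
    return v >= base and v[:2] == base[:2]

print(pep440_compatible("0.8.0"))  # True  -> satisfies the pin
print(pep440_compatible("0.9.3"))  # False -> pip downgrades to 0.8.x
```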

That said, you were right: it was a version issue. Once I upgraded to deepspeech 0.9.3 and deepspeech-tflite 0.9.3, it worked.

Thank you for your time.

Thanks for posting your findings. The requirements should be on 0.9. If you have the time, please open a PR to bump that to 0.9.x.

If you have the time, please do a PR to bump that to 0.9.x

Of course, I’ll do my best.


You should still refrain from installing both. As @reuben stated, the error message explicitly mentions “proto buffer”; if the TFLite runtime were being used, you would get a different error, at best.

Thanks for your feedback 🙂

You should still refrain from installing both, the error message as @reuben stated.

Yes, I actually kept only one: I removed the TensorFlow runtime and kept only the TFLite runtime.

Thanks.