An error was reported using tflite model inference

Training environment:
Python: 3.6.9
OS Platform and Distribution: Ubuntu 18.04.1
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
deepspeech-tflite: 0.9.3
Inference with the pbmm model works normally, but inference with the tflite model fails.

Log:

```
(deepspeech-train-venv) root@ip-10-0-1-86:mozilla# deepspeech --model owner-models.pbmm/tflite/output_graph.tflite --audio cat.wav
Loading model from file owner-models.pbmm/tflite/output_graph.tflite
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
Data loss: Corrupted memmapped model file: owner-models.pbmm/tflite/output_graph.tflite Invalid directory offset
Traceback (most recent call last):
  File "/root/tmp/deepspeech-train-venv/bin/deepspeech", line 8, in <module>
    sys.exit(main())
  File "/root/tmp/deepspeech-train-venv/lib/python3.6/site-packages/deepspeech/client.py", line 119, in main
    ds = Model(args.model)
  File "/root/tmp/deepspeech-train-venv/lib/python3.6/site-packages/deepspeech/__init__.py", line 38, in __init__
    raise RuntimeError("CreateModel failed with '{}' (0x{:X})".format(deepspeech.impl.ErrorCodeToErrorMessage(status), status))
RuntimeError: CreateModel failed with 'Failed to initialize memory mapped model.' (0x3000)
```
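The "Invalid directory offset" message appears to come from the memory-mapped (pbmm) loader choking on a file that is actually a TFLite FlatBuffer. One quick way to tell the two formats apart is the file header: TFLite FlatBuffers carry the `TFL3` file identifier at byte offset 4, while pbmm files do not. The helper below is only an illustrative sketch, not part of DeepSpeech:

```python
def looks_like_tflite(path):
    """Heuristic format check: TFLite FlatBuffers carry the 'TFL3'
    file identifier at byte offset 4; pbmm files do not."""
    with open(path, "rb") as f:
        header = f.read(8)
    return header[4:8] == b"TFL3"
```

If this returns True for your `output_graph.tflite`, the exported file itself is plausible and the failure is on the client side.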

```
(deepspeech-train-venv) root@ip-10-0-1-86:mozilla# deepspeech --model owner-models.pbmm/output_graph.pbmm --audio helloworld.wav
Loading model from file owner-models.pbmm/output_graph.pbmm
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
2021-04-21 08:34:34.756936: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Loaded model in 0.0126s.
Running inference.
hello world
Inference took 0.206s for 1.044s audio file.
(deepspeech-train-venv) root@ip-10-0-1-86:mozilla#
```

@chenminghui0927 Please read your own console output and look for errors.

@chenminghui0927 You also don't share any STR (steps to reproduce), as documented in [READ FIRST] What and how to report if you need support, so we can't cross-check what you did.

I used the following command to export the tflite model:

```
(deepspeech-train-venv) root@ip-10-0-1-86:DeepSpeech# python DeepSpeech.py --export_tflite --export_dir …/owner-models.pbmm/tflite --n_hidden 128
I Exporting the model…
I Could not find best validating checkpoint.
I Loading most recent checkpoint from /root/.local/share/deepspeech/checkpoints/train-225
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/weights
I Models exported at …/owner-models.pbmm/tflite
I Model metadata file saved to …/owner-models.pbmm/tflite/author_model_0.0.1.md. Before submitting the exported model for publishing make sure all information in the metadata file is correct, and complete the URL fields.
(deepspeech-train-venv) root@ip-10-0-1-86:DeepSpeech#
```

Why did it report an error during execution?

How can I know? This is your setup.

You still don't explain how you performed the inference setup. Specifically, it's unclear how you managed to install the Python deepspeech and deepspeech-tflite packages: in the same setup, or in different virtualenvs?

Please share actionable information.

Hello, I installed it according to this document
https://deepspeech.readthedocs.io/en/latest/TRAINING.html#prerequisites-for-training-a-model

Please be respectful of people trying to help you. This kind of answer is NOT helping:

  • you refer to a problem related to inference, yet you link the training doc
  • I know the documentation, I wrote it
  • I gave specific details on what kind of information I need to help you
  • I linked to the thread explaining why we need you to document precisely what you did

Until you share actionable information, I can't help you.

Inference with the pbmm model works normally; the pbmm and tflite files are just different exports of the same model, and the training data is the same.

You still have not shared information on this error.

Since you are refusing to cooperate, I am not going to help.

```
(deepspeech-train-venv) root@ip-10-0-1-86:mozilla# deepspeech --model owner-models.pbmm/tflite/output_graph.tflite --audio cat.wav
Loading model from file owner-models.pbmm/tflite/output_graph.tflite
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
Data loss: Corrupted memmapped model file: owner-models.pbmm/tflite/output_graph.tflite Invalid directory offset
Traceback (most recent call last):
  File "/root/tmp/deepspeech-train-venv/bin/deepspeech", line 8, in <module>
    sys.exit(main())
  File "/root/tmp/deepspeech-train-venv/lib/python3.6/site-packages/deepspeech/client.py", line 119, in main
    ds = Model(args.model)
  File "/root/tmp/deepspeech-train-venv/lib/python3.6/site-packages/deepspeech/__init__.py", line 38, in __init__
    raise RuntimeError("CreateModel failed with '{}' (0x{:X})".format(deepspeech.impl.ErrorCodeToErrorMessage(status), status))
RuntimeError: CreateModel failed with 'Failed to initialize memory mapped model.' (0x3000)
```

Sorry, I am a novice; this is the only error message I can see. Where else do I need to look to provide a more detailed error log? Thank you.

Please read my previous messages, I have already detailed what I need from you.

From (deepspeech-train-venv) it's 99.99% likely that you are using deepspeech instead of deepspeech-tflite, but until you share the exact steps you followed, I can't confirm.
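One quick way to see which client ended up in the active virtualenv is to list the installed distributions whose names start with "deepspeech". The helper below is just an illustrative sketch; note that `importlib.metadata` landed in Python 3.8, so on the Python 3.6 used in this thread you would query `pkg_resources.working_set` instead.

```python
from importlib.metadata import distributions  # Python 3.8+; use pkg_resources on 3.6

def deepspeech_packages():
    """Names of installed distributions starting with 'deepspeech'.

    If both 'deepspeech' and 'deepspeech-tflite' show up, the two clients
    were installed into the same virtualenv, which matches the error above.
    """
    names = (dist.metadata["Name"] or "" for dist in distributions())
    return sorted(n for n in names if n.startswith("deepspeech"))
```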

For now I am not set up to run inference on Android, so I tried to verify the accuracy of tflite inference in an Ubuntu environment first. Thank you.

I’m sorry but you really need to read my previous message and give the answers to the questions I have asked.

It is the same virtualenv. I ran inference with both the tflite and pbmm models, after pip install deepspeech deepspeech-tflite.
Thank you.

Then please set up a different virtualenv for each package.

And please verify your tflite file. Use our model for reference to make sure things are set up right: https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.tflite

```
$ sha1sum deepspeech-0.9.3-models.tflite
d626e4b5433ad597880b7fedb692daf117ff68ed  deepspeech-0.9.3-models.tflite
```
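Instead of shelling out to sha1sum, the same integrity check can be done from Python with the standard library. The expected digest below is the one posted above for the 0.9.3 tflite model; the helper name is just illustrative:

```python
import hashlib

def sha1_of(path, chunk_size=1 << 20):
    """Compute the SHA-1 hex digest of a file, reading in chunks
    so even a large model file keeps memory use flat."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: compare against the published digest before trusting the file, e.g.
# sha1_of("deepspeech-0.9.3-models.tflite") == "d626e4b5433ad597880b7fedb692daf117ff68ed"
```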

OK, I will try it and tell you the result. Thank you very much.

Hi, the tflite model can now be inferred normally. Thank you very much.

Step 1:
```
virtualenv -p python3 $HOME/tmp/deepspeech-tflite/
```
Step 2:
```
source $HOME/tmp/deepspeech-tflite/bin/activate
```
Step 3:
```
pip install deepspeech-tflite
```
Step 4:
```
(deepspeech-tflite) root@ip-10-0-1-86:mozilla# deepspeech --model owner-models.pbmm/tflite/output_graph.tflite --audio cat.wav
Loading model from file owner-models.pbmm/tflite/output_graph.tflite
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
Loaded model in 0.000964s.
Running inference.
cat
Inference took 0.101s for 0.810s audio file.
```