Training
Python: 3.6.9
OS Platform and Distribution: Ubuntu 18.04.1
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
deepspeech-tflite: 0.9.3
Inference with the .pbmm model works fine, but inference with the .tflite model fails.
Log:
(deepspeech-train-venv) root@ip-10-0-1-86:mozilla# deepspeech --model owner-models.pbmm/tflite/output_graph.tflite --audio cat.wav
Loading model from file owner-models.pbmm/tflite/output_graph.tflite
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
Data loss: Corrupted memmapped model file: owner-models.pbmm/tflite/output_graph.tflite Invalid directory offset
Traceback (most recent call last):
  File "/root/tmp/deepspeech-train-venv/bin/deepspeech", line 8, in <module>
    sys.exit(main())
  File "/root/tmp/deepspeech-train-venv/lib/python3.6/site-packages/deepspeech/client.py", line 119, in main
    ds = Model(args.model)
  File "/root/tmp/deepspeech-train-venv/lib/python3.6/site-packages/deepspeech/__init__.py", line 38, in __init__
    raise RuntimeError("CreateModel failed with '{}' (0x{:X})".format(deepspeech.impl.ErrorCodeToErrorMessage(status), status))
RuntimeError: CreateModel failed with 'Failed to initialize memory mapped model.' (0x3000)
(deepspeech-train-venv) root@ip-10-0-1-86:mozilla# deepspeech --model owner-models.pbmm/output_graph.pbmm --audio helloworld.wav
Loading model from file owner-models.pbmm/output_graph.pbmm
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
2021-04-21 08:34:34.756936: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Loaded model in 0.0126s.
Running inference.
hello world
Inference took 0.206s for 1.044s audio file.
(deepspeech-train-venv) root@ip-10-0-1-86:mozilla#
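The "Corrupted memmapped model file" failure above comes from the pbmm-oriented `deepspeech` package trying to memory-map a TFLite flatbuffer. One library-free way to tell the two formats apart is to check for the TFLite magic bytes (flatbuffer file identifier `TFL3` at byte offset 4); this is only an illustrative sketch, and the helper name is my own:

```python
def looks_like_tflite(path):
    """Heuristic format check: TFLite flatbuffers carry the
    file identifier b'TFL3' at byte offset 4 of the file."""
    with open(path, "rb") as f:
        header = f.read(8)
    return header[4:8] == b"TFL3"
```

Running it against `output_graph.tflite` should return `True`, and against `output_graph.pbmm` it should return `False`, confirming the file itself is fine and the loader is the mismatch.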
lissyx ((slow to reply) [NOT PROVIDING SUPPORT]):
@chenminghui0927 Please read your own console output and look for errors?
lissyx ((slow to reply) [NOT PROVIDING SUPPORT]):
I used the following command to export the tflite model:
(deepspeech-train-venv) root@ip-10-0-1-86:DeepSpeech# python DeepSpeech.py --export_tflite --export_dir …/owner-models.pbmm/tflite --n_hidden 128
I Exporting the model…
I Could not find best validating checkpoint.
I Loading most recent checkpoint from /root/.local/share/deepspeech/checkpoints/train-225
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/weights
I Models exported at …/owner-models.pbmm/tflite
I Model metadata file saved to …/owner-models.pbmm/tflite/author_model_0.0.1.md. Before submitting the exported model for publishing make sure all information in the metadata file is correct, and complete the URL fields.
(deepspeech-train-venv) root@ip-10-0-1-86:DeepSpeech#
Why did it report an error during execution?
lissyx ((slow to reply) [NOT PROVIDING SUPPORT]):
How can I know? This is your setup.
You still haven't explained how you performed the inference setup. Specifically, it's unclear how you installed the Python deepspeech and deepspeech-tflite packages: in the same setup, or in different virtualenvs?
(deepspeech-train-venv) root@ip-10-0-1-86:mozilla# deepspeech --model owner-models.pbmm/tflite/output_graph.tflite --audio cat.wav
Loading model from file owner-models.pbmm/tflite/output_graph.tflite
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
Data loss: Corrupted memmapped model file: owner-models.pbmm/tflite/output_graph.tflite Invalid directory offset
Traceback (most recent call last):
  File "/root/tmp/deepspeech-train-venv/bin/deepspeech", line 8, in <module>
    sys.exit(main())
  File "/root/tmp/deepspeech-train-venv/lib/python3.6/site-packages/deepspeech/client.py", line 119, in main
    ds = Model(args.model)
  File "/root/tmp/deepspeech-train-venv/lib/python3.6/site-packages/deepspeech/__init__.py", line 38, in __init__
    raise RuntimeError("CreateModel failed with '{}' (0x{:X})".format(deepspeech.impl.ErrorCodeToErrorMessage(status), status))
RuntimeError: CreateModel failed with 'Failed to initialize memory mapped model.' (0x3000)
Sorry, I am a novice; I can only see the error message shown above. Where else should I look to provide a more detailed error log? Thank you.
lissyx ((slow to reply) [NOT PROVIDING SUPPORT]):
Please read my previous messages; I have already detailed what I need from you.
From (deepspeech-train-venv) it's 99.99% likely that you are using deepspeech instead of deepspeech-tflite, but until you share the exact steps you followed, I can't confirm.
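The mismatch being described here can be captured in a tiny pure-Python sanity check (illustrative only; the mapping and function name are my own, based on this thread): the `deepspeech` wheel bundles the TensorFlow runtime and loads protobuf graphs, while the `deepspeech-tflite` wheel bundles the TFLite runtime and loads only `.tflite` graphs, so a mismatch between the installed wheel and the model extension fails at `Model()` construction time.

```python
import os

# Rough map of which model formats each PyPI wheel's bundled
# native runtime can load (as discussed in this thread).
SUPPORTED = {
    "deepspeech": {".pb", ".pbmm"},    # TensorFlow runtime
    "deepspeech-tflite": {".tflite"},  # TFLite runtime
}

def model_matches_package(package, model_path):
    """Return True when the given wheel should be able to load the model file."""
    ext = os.path.splitext(model_path)[1].lower()
    return ext in SUPPORTED.get(package, set())
```

For example, `model_matches_package("deepspeech", "output_graph.tflite")` is `False`, which is exactly the situation producing the traceback above.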
I haven't set up inference on the Android side yet; I was trying to verify the accuracy of tflite inference in an Ubuntu environment first. Thank you.
lissyx ((slow to reply) [NOT PROVIDING SUPPORT]):
I’m sorry but you really need to read my previous message and give the answers to the questions I have asked.
Hi, the tflite model now runs inference correctly. Thank you very much. These are the steps I followed:
Step 1:
virtualenv -p python3 $HOME/tmp/deepspeech-tflite/
Step 2:
source $HOME/tmp/deepspeech-tflite/bin/activate
Step 3:
pip install deepspeech-tflite
Step 4:
(deepspeech-tflite) root@ip-10-0-1-86:mozilla# deepspeech --model owner-models.pbmm/tflite/output_graph.tflite --audio cat.wav
Loading model from file owner-models.pbmm/tflite/output_graph.tflite
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
Loaded model in 0.000964s.
Running inference.
cat
Inference took 0.101s for 0.810s audio file.