Loading model from file deepspeech-0.6.1-models/output_graph.pbmm
TensorFlow: v1.14.0-21-ge77504ac6b
DeepSpeech: v0.6.1-0-g3df20fe
2020-02-18 15:37:29.484316: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Loaded model in 0.027s.
Running inference.
your power is sufficient i said
Inference took 1.158s for 2.590s audio file.
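For context, the run above is the standard 0.6.1 client. A minimal sketch of the equivalent Python API call (the WAV file name is an assumption; any 16 kHz mono 16-bit file works, and the two-argument `Model` constructor matches the traceback later in this thread):

```python
# Sketch of the 0.6.1 Python API, matching the log above.
import wave
import numpy as np
from deepspeech import Model

BEAM_WIDTH = 500  # default beam width used by the reference client
ds = Model('deepspeech-0.6.1-models/output_graph.pbmm', BEAM_WIDTH)

# Hypothetical audio file; must be 16 kHz mono 16-bit PCM.
with wave.open('audio/2830-3980-0043.wav', 'rb') as fin:
    audio = np.frombuffer(fin.readframes(fin.getnframes()), dtype=np.int16)

print(ds.stt(audio))  # -> "your power is sufficient i said"
```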
For the TFLite model, I install DeepSpeech via pip install deepspeech-tflite and run the following example:
Loading model from file deepspeech-0.6.1-models/output_graph.tflite
TensorFlow: v1.14.0-21-ge77504ac6b
DeepSpeech: v0.6.1-0-g3df20fe
INFO: Initialized TensorFlow Lite runtime.
Loaded model in 0.0197s.
Running inference.
your power is sufficient i said
Inference took 2.766s for 2.590s audio file.
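The TFLite path differs only in the installed package and the model file; a sketch, assuming the same WAV handling as in the example above:

```python
# Same 0.6.1 API with the TFLite runtime (pip install deepspeech-tflite);
# only the model file changes.
from deepspeech import Model

ds = Model('deepspeech-0.6.1-models/output_graph.tflite', 500)
# ...read the WAV and call ds.stt(audio) exactly as in the sketch above...
```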
I am happy to provide any further information if needed.
lissyx:
What is the core clock frequency during the test? Can you disable any power-saving features and ensure it's at maximum speed with turbo enabled?
I am using my laptop plugged in. I just checked the Turbo option in the BIOS and it was enabled.
After a restart I set the battery option to Best performance and ran again. The results are roughly the same:
Inference took 2.707s for 2.590s audio file.
Before and after the test, Task Manager shows the CPU clock frequency at around 2 GHz (1.89–2.06 GHz); during the test it jumps up to 4.28 GHz.
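For a scriptable check instead of eyeballing Task Manager, something like this works (a sketch assuming the third-party psutil package, which is not part of DeepSpeech):

```python
# Read the current CPU clock frequency (pip install psutil).
import psutil

freq = psutil.cpu_freq()
print(f'current {freq.current:.0f} MHz (min {freq.min:.0f}, max {freq.max:.0f})')
```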
lissyx:
Being slower than the TensorFlow runtime is not completely surprising, but being that slow on this CPU really is surprising… Did we regress performance on Windows?
@lissyx, today I tried the same experiment on macOS and got the following results:
With deepspeech I got around 2 seconds (2.006, 2.024) of inference time, and with deepspeech-tflite around 2.3 seconds (2.288, 2.359). I used the same steps and files as described above.
The hardware parameters are:
macOS Mojave 10.14.6
MacBook Pro (Retina, 13-inch, Late 2013)
CPU: 2.6 GHz Intel Core i5
Memory: 8 GB 1600 MHz DDR3
So the TFLite version is again slower, but not as slow as on Windows 10.
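For reference, the "Inference took" figures the client prints can be reproduced by hand; a sketch, reusing the ds and audio variables from the first example:

```python
# Wall-clock timing around the stt() call; assumes 16 kHz mono audio.
import time

start = time.perf_counter()
text = ds.stt(audio)
elapsed = time.perf_counter() - start

audio_length = len(audio) / 16000  # samples / sample rate
print(f'Inference took {elapsed:.3f}s for {audio_length:.3f}s audio file.')
```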
lissyx:
How is that a problem? The TensorFlow runtime is able to leverage multiple threads from our model; TFLite doesn't, so it's not surprising. What we care about is being able to run in real time, and with 2.5 s of audio that's the case.
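To make that concrete, the real-time factor (RTF = inference time / audio duration; below 1.0 means faster than real time) can be computed from the numbers reported in this thread:

```python
# RTF for the runs reported above; all used the same 2.590 s audio file.
runs = {
    'Windows pbmm':   1.158,
    'Windows tflite': 2.766,
    'macOS pbmm':     2.006,
    'macOS tflite':   2.288,
}
for name, inference_time in runs.items():
    print(f'{name}: RTF = {inference_time / 2.590:.2f}')
```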
@shazz, can you please show us what you installed? Because I've tried with
(with the right version of Python) but still get this error: RuntimeError: CreateModel failed with 'Error reading the proto buffer model file.' (0x3005)
Don’t revive old threads with a new problem. Create a new one and provide all the information about what you want to do, what you tried, and full logs showing the error.
It's exactly the same problem. Same error on Windows 10:
Data loss: Can't parse C:\Users\TNND5388\Desktop\DeepSpeech/assets/output_graph.tflite as binary proto
Traceback (most recent call last):
File "C:\Users\TNND5388\Desktop\DeepSpeech\DeepSpeech_Application.py", line 86, in <module>
ds = Model(model, BEAM_WIDTH)
File "C:\Users\TNND5388\AppData\Roaming\Python\Python37\site-packages\deepspeech\__init__.py", line 42, in __init__
raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 12293
The error states that it’s trying to load a protobuf file, not a TFLite file, so you’re still using the protobuf package. Is there any reason why you’re using 0.6.0? On the latest release you can simply do pip install deepspeech-tflite and it’ll work.
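One way to verify which flavor is actually installed (a sketch; having both packages present at once, with the plain deepspeech package shadowing the TFLite one, is a plausible cause of this mix-up):

```python
# List the installed versions of both package flavors named in this thread.
import pkg_resources

for name in ('deepspeech', 'deepspeech-tflite'):
    try:
        print(name, pkg_resources.get_distribution(name).version)
    except pkg_resources.DistributionNotFound:
        print(name, 'not installed')
```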
It works for me. I had searched for deepspeech-tflite==0.6.0 before posting my error here, which is why I didn't test other versions. I thought the old version was 0.7.0.