Thanks for all your hard work everyone!
With the release of the Pi 4 with USB 3 showing big improvements both with and without the Coral USB Accelerator, I took another look at this today.
I can confirm @lissyx’s initial impressions from the online compiler ( TensorFlow Lite inference - #13 by lissyx ): the deepspeech tflite model is rejected. I also ran the recently released offline compiler, which reported a more meaningful error: “Model not quantized”.
My understanding is limited, but I believe the reasoning is documented on this page (first blue note box):
> Note: The Edge TPU does not support models built using post-training quantization, because although it creates a smaller model by using 8-bit values, it converts them back to 32-bit floats during inference. So you must use quantization-aware training, which uses “fake” quantization nodes to simulate the effect of 8-bit values during training, thus allowing inferences to run using the quantized values. This technique makes the model more tolerant of the lower precision values, which generally results in a higher accuracy model (compared to post-training quantization).
which I think @sranjeet.visteon was touching on in this thread.
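To illustrate what the docs mean by a “fake” quantization node, here is a minimal pure-Python sketch (my own illustration, not Coral, TensorFlow, or DeepSpeech code): the value is quantized to an 8-bit grid and immediately dequantized, so the rest of the graph still sees 32-bit floats during training, but with 8-bit precision loss baked in.

```python
def fake_quantize(values, qmin=0, qmax=255):
    """Quantize floats to an 8-bit grid, then dequantize back to floats.

    Downstream ops still receive floats, but the values carry the same
    precision loss they would have after real 8-bit quantization.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # guard against a degenerate range
    out = []
    for v in values:
        q = round((v - lo) / scale)            # map onto the integer grid
        q = max(qmin, min(qmax, q))            # clamp to the 8-bit range
        out.append(q * scale + lo)             # dequantize back to float
    return out

weights = [0.013, -0.52, 0.9987, 0.25]
print(fake_quantize(weights))
```

During real quantization-aware training this quantize/dequantize step runs in the forward pass, so the network learns weights that tolerate the rounding, which is why the resulting model holds accuracy better than post-training quantization.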
Given the new potential of the Pi 4 + Edge TPU combination ( Benchmarking Machine Learning on the New Raspberry Pi 4, Model B - Hackster.io ), I’d be grateful if the devs could take another look at both the Pi 4 and the Edge TPU when considering future priorities.