TensorFlow Lite support will be a first-class citizen in DeepSpeech, and we will use only TFLite ops in the future. This is scheduled for v0.5.0, which should be out before year's end.
For the home server version of mycroft.ai (see link below), it does STT locally (rather than in the cloud).
Enabling local STT - and therefore near real-time processing - would require a high-power CPU or GPU… I'm hoping this TensorFlow Lite chip would allow an always-on, low-power Raspberry Pi or equivalent to be useful as the home server.
lissyx
(slow to reply) [NOT PROVIDING SUPPORT]
11
Well, I can't speak to Mycroft's setup, but now that we have the TFLite runtime with a quantized model, we're getting quite decent performance. I have no recent benchmarks on the RPi 3, though.
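If you want to poke at latency yourself, here is a minimal sketch of timing one inference pass of a quantized TFLite model with the stock TensorFlow Lite interpreter. The model filename is an assumption, and the real DeepSpeech client feeds actual audio features and LSTM state tensors rather than dummy data:

```python
# Sketch: one inference pass of a quantized TFLite model with the stock
# TensorFlow Lite interpreter. "output_graph.tflite" is a placeholder name.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="output_graph.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed zeros of the right shape/dtype just to time a single invoke().
for detail in input_details:
    dummy = np.zeros(detail["shape"], dtype=detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)

start = time.perf_counter()
interpreter.invoke()
print("inference took %.3f s" % (time.perf_counter() - start))

logits = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", logits.shape)
```

Run on the target device (e.g. an RPi 3), this gives a rough lower bound on per-step inference time, ignoring feature extraction and decoding.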
lissyx
(slow to reply) [NOT PROVIDING SUPPORT]
12
Looks like it cannot be delivered to France…
lissyx
(slow to reply) [NOT PROVIDING SUPPORT]
13
Unfortunately, it seems our TensorFlow Lite model is not accepted by their online compiler, and it fails without any meaningful error. To the best of my knowledge, our TFLite model matches their requirements, but maybe I missed something.
Anyway, the current status is that we can’t get a TFLite model ready for EdgeTPU.
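For context, the EdgeTPU compiler only accepts fully integer-quantized TFLite models (post-training quantization calibrated with a representative dataset). Below is a hedged sketch of how one would produce such a model with `tf.lite.TFLiteConverter`; the export directory, shapes, and the representative-dataset generator are hypothetical placeholders, not DeepSpeech's actual export path:

```python
# Sketch: full integer quantization, which is what the EdgeTPU compiler
# expects. "export_dir" and the input shape are illustrative placeholders.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a handful of realistic samples so the converter can calibrate
    # quantization ranges. The shape here is a made-up placeholder.
    for _ in range(100):
        yield [np.random.rand(1, 16, 19, 26).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("export_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to integer-only builtins; any op that cannot be quantized makes
# the conversion fail, which mirrors the compiler rejecting the model.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting `model_quant.tflite` would then be handed to Coral's `edgetpu_compiler`; if any op in the graph cannot be expressed as an integer TFLite builtin, the conversion or the compiler rejects it, which is consistent with the failure described above.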