TensorFlow Lite inference

Hi!

I read about ongoing efforts to port the model to TensorFlow Lite flatbuffers.

Please let me know: will TensorFlow Lite support be a first-class citizen in the DeepSpeech project, and will you use only TFLite ops in the future?

P.S. Have you already tried streaming ASR using TFLite?

We are using TensorFlow Lite for other inference tasks on device and can help with testing performance on various devices.

TensorFlow Lite support will be a first-class citizen in DeepSpeech, and we will use only TFLite ops in the future. This is scheduled for v0.5.0, which should be out before year’s end.
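On the streaming question: the Python bindings already expose a streaming API, so with the TFLite runtime it should look roughly like the sketch below. This is a minimal sketch only; `deepspeech.tflite`, the `audio_chunks` source, and the exact constructor and stream method signatures are assumptions that vary between releases.

```python
import numpy as np
from deepspeech import Model

# Hypothetical model path; constructor arguments differ between releases.
ds = Model("deepspeech.tflite")

# Streaming interface as exposed in recent bindings (names may differ in older ones).
stream = ds.createStream()
for chunk in audio_chunks:  # placeholder: iterable of 16 kHz, 16-bit mono np.int16 arrays
    stream.feedAudioContent(chunk)
print(stream.finishStream())  # final transcript once the audio is exhausted
```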


Hi there. We are also interested in the TFLite support. Any updates on that?

It’s already available but not officially endorsed.

So if it’s fully TensorFlow Lite compatible, will it work with these?
https://www.tomshardware.com/news/google-edge-tpu-coral-dev-board-usb-accelerator,38750.html

No idea. I wouldn’t be surprised if the Edge TPU has stricter requirements that would break things. I don’t know where to buy one.

Haha, stock: 0, factory lead time: 17 weeks.

I can help with that… :wink:

https://www.mouser.com/ProductDetail/212-842776110077

The USB version (above) is in stock for $74.99.
The SBC version is shown as out of stock, with a price of $149.99.

If they work, I’d be tempted to buy the USB version to test with a (Debian-based) home server implementation of mycroft.ai.

What’s the difference? Do you really need an Edge TPU?

For the home server version of mycroft.ai (see link below), it does STT locally (rather than in the cloud).
To enable local STT - and therefore near real-time processing - it would require a high-power CPU or GPU… I’m hoping this TensorFlow Lite chip would allow an always-on, low-power Raspberry Pi or equivalent to be useful as the home server.

Well, I can’t speak to the mycroft setup, but now that we have a TFLite runtime with a quantized model, we’re getting quite decent performance. I have no recent benchmarks on a RPi3, though.
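If anyone wants rough numbers on their own board, a minimal timing sketch with the stock TFLite interpreter is below. The model path is a placeholder, all inputs are fed zeros, and this only times raw graph execution, not feature extraction or decoding.

```python
import time
import numpy as np
import tensorflow as tf

# Placeholder path; point this at the exported .tflite graph.
interpreter = tf.lite.Interpreter(model_path="output_graph.tflite")
interpreter.allocate_tensors()

def run_once():
    # Feed zeros of the right shape/dtype to every input (the DeepSpeech
    # graph has several: audio features plus recurrent state tensors).
    for detail in interpreter.get_input_details():
        interpreter.set_tensor(detail["index"], np.zeros(detail["shape"], dtype=detail["dtype"]))
    interpreter.invoke()

run_once()  # warm-up run, excluded from timing
runs = 50
start = time.perf_counter()
for _ in range(runs):
    run_once()
print(f"mean invoke time: {(time.perf_counter() - start) / runs * 1000:.1f} ms")
```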

Looks like it cannot be delivered to France…

Unfortunately, it seems our TensorFlow Lite model is not accepted by their online compiler, so it fails without any meaningful error. To the best of my knowledge, our TFLite model matches their requirements, but maybe I missed something.

Anyway, the current status is that we can’t get a TFLite model ready for the Edge TPU.
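For anyone who wants to dig further: the Edge TPU compiler only accepts fully integer-quantized models, so one quick sanity check is to dump the tensor dtypes in the flatbuffer and look for stray float32 tensors. A rough diagnostic sketch; the model path is a placeholder:

```python
import collections
import tensorflow as tf

# Placeholder path; point this at the TFLite model you fed to the compiler.
interpreter = tf.lite.Interpreter(model_path="output_graph.tflite")
interpreter.allocate_tensors()

# Edge TPU compilation requires full integer quantization, so a large number
# of float32 tensors here would be one plausible reason for a silent rejection.
counts = collections.Counter(t["dtype"].__name__ for t in interpreter.get_tensor_details())
print(dict(counts))
```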