In the Wiki it says I can use the model with TensorFlow Serving: "you can also use the model exported by export directly with TensorFlow Serving."
Is this information still correct? In the GitHub Issues I found comments saying that serving is no longer supported.
I created a simple websocket/bottlepy-based server for real-time STT, based on the deepspeech-server project. It works nicely with a single client, but I am wondering how to allow inference for multiple users at the same time using one model.
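To illustrate what I mean by "multiple users, one model", here is a minimal sketch (not my actual server code) of the naive approach: one shared DeepSpeech model with inference serialized behind a lock, called from each client's handler thread. The model path is a placeholder, and the exact Model constructor and stt() arguments depend on the DeepSpeech version:

```python
# Sketch: one shared model, inference serialized with a lock so concurrent
# client handlers never run inference on the same model simultaneously.
import threading

import numpy as np
from deepspeech import Model  # constructor args vary by DeepSpeech version

MODEL_PATH = "models/output_graph.pbmm"  # placeholder path

model = Model(MODEL_PATH)
model_lock = threading.Lock()

def transcribe(audio_bytes: bytes) -> str:
    """Called from each websocket client's handler thread."""
    audio = np.frombuffer(audio_bytes, dtype=np.int16)
    with model_lock:  # only one inference at a time on the shared model
        return model.stt(audio)
```

This obviously serializes all clients through one lock, which is exactly why I am looking at something like TensorFlow Serving instead.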
If I understand correctly, TensorFlow Serving would be the answer?
From the TensorFlow Serving documentation: "Servables are the central abstraction in TensorFlow Serving. Servables are the underlying objects that clients use to perform computation (for example, a lookup or inference)."
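If that is the right direction, would querying the exported model look roughly like this? A minimal sketch against TensorFlow Serving's REST predict API; the server address, model name, and input layout here are assumptions on my part, since I don't know what inputs the exported DeepSpeech graph actually expects:

```python
# Sketch: client request to a model hosted by TensorFlow Serving.
import requests

SERVER = "http://localhost:8501"  # default TF Serving REST port
MODEL = "deepspeech"              # hypothetical model name

def predict(features):
    # POST /v1/models/{model}:predict with an "instances" list is the
    # standard TF Serving REST request format.
    resp = requests.post(
        f"{SERVER}/v1/models/{MODEL}:predict",
        json={"instances": [features]},
    )
    resp.raise_for_status()
    return resp.json()["predictions"]
```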
Thanks for the help