Two questions. I plan to use tts-server professionally (many users, many servers).
Q1: Is it multithreaded?
Q2: Is it supposed to be this big? 1.3 to 1.6 GB resident set size on Linux. And that’s for a single voice… Quite hefty, even for a server.
It seems to me it shouldn’t take much to make tts-server production-ready. It’s a really short Python script that pretty much just connects the TTS to Flask…
Depends on your definition of “multithreaded”. There was some support for training on multiple GPUs, but not for parallel threads during inference.
The TTS package from pypi.org is provided by Coqui.ai, a fork/successor of Mozilla-TTS. You may want to have a look into their discussions forum: coqui-ai/TTS · Discussions · GitHub
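One common workaround, assuming the underlying synthesizer really is not thread-safe (as the reply above suggests): serialize inference behind a lock, so a threaded web server such as Flask can still accept concurrent requests without two threads entering the model at once. This is only a sketch; `synthesize` is a hypothetical stand-in for the real model call, not the actual tts-server API:

```python
import threading

def synthesize(text):
    # Hypothetical stand-in for the real (heavyweight, non-thread-safe)
    # TTS synthesizer call; returns fake audio bytes for illustration.
    return b"WAV:" + text.encode()

_infer_lock = threading.Lock()

def tts_handler(text):
    # Serialize inference: only one request runs the model at a time,
    # even if the web framework dispatches handlers on multiple threads.
    with _infer_lock:
        return synthesize(text)
```

This keeps the server correct under concurrency, but throughput is still one synthesis at a time per process.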
Hmm, no parallel inference? That’s too bad and might make it hard to use in practice… When I create a 1+ GB process for TTS, I would certainly hope to be able to use it from multiple threads.
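If parallelism is needed despite the model being serial within one process, a minimal sketch is a pool of worker processes, each loading its own model copy (at the cost of the full ~1.3 GB resident size per worker, per the figures above). Everything here is hypothetical illustration, not the tts-server API; the `fork` start method assumes Linux, which the thread is about:

```python
import multiprocessing as mp

def _init_worker():
    # In a real deployment each worker would load its own ~1.3 GB model
    # here, once, at pool startup; a string placeholder keeps the sketch
    # self-contained.
    global _model
    _model = "model"

def _synthesize(text):
    # Runs inside a worker process, using that worker's private model copy,
    # so requests proceed in parallel across workers.
    return f"{_model}:{text}"

def serve_parallel(texts, workers=2):
    # "fork" is assumed available (Linux); workers inherit the parent's code.
    ctx = mp.get_context("fork")
    with ctx.Pool(workers, initializer=_init_worker) as pool:
        return pool.map(_synthesize, texts)
```

The obvious trade-off is memory: N parallel requests cost roughly N copies of the model, which is exactly why the 1.3 to 1.6 GB footprint mentioned above matters for a many-user server.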