Embed model on Docker?

How do I run a pretrained model on Docker? The instructions are not very clear to me, as I have no idea what the difference between the Development and Server packages is.

The docs appear to describe ten different ways to do it, but none of them seems clear.
I would rather not waste my limited (and slow) bandwidth downloading 2 GB of data, only to find out that it was the wrong way of doing it.

I feel like this project could benefit a lot from a streamlined getting-started section. I've tried to get started with this project many times and always ended up giving up.

I'd be happy to help clean it up and streamline it, but I would need someone to point me in the right direction to accomplish that.

Here's a very simple Dockerfile that I used earlier; it should still work:

# Python 3.6 base image, matching the version the wheel below was built for
FROM python:3.6

# espeak and libsndfile1 are runtime dependencies of TTS
RUN apt-get update && \
    apt-get -y install espeak libsndfile1 wget

# Fetch and install a prebuilt TTS wheel from a release that bundles
# a pretrained model and vocoder
RUN wget https://github.com/reuben/TTS/releases/download/ljspeech-fwd-attn-pwgan/TTS-0.0.1+92aea2a-py3-none-any.whl
RUN pip install TTS-0.0.1+92aea2a-py3-none-any.whl

# Start the demo server when the container runs
ENTRYPOINT [ "python", "-m" ]
CMD [ "TTS.server.server" ]

It is based on an older but still good enough model, bundled together with a vocoder and the TTS server module.
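For reference, this is roughly how you would build and run an image from a Dockerfile like that. The image tag is arbitrary, and the port is an assumption: the TTS demo server typically listens on 5002, but check your version's output if that doesn't work.

```shell
# Build the image from the Dockerfile in the current directory
# (the tag "tts-server" is just an example name)
docker build -t tts-server .

# Run it, publishing the server port to the host; 5002 is the
# usual default for TTS.server.server (adjust if yours differs)
docker run --rm -p 5002:5002 tts-server
```

Once it's up, the demo page should be reachable at http://localhost:5002 in a browser.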

@thllwg thanks mate, unfortunately that didn't end up working for me. It complained that there was no module named numba.decorators.
I’ll try to work it out soon.

Hi @jkob,

Your module problem could be the same as this issue: https://github.com/mozilla/TTS/issues/437. The recommendation for that case is to switch to an earlier version of librosa.

I haven't tried using TTS with Docker, but I think the above might help.
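If it is that issue, one way to apply the fix inside the Dockerfile above is to pin the audio dependencies before the TTS wheel is installed. This is a sketch based on the linked issue, not something I've verified: the exact versions are an assumption (numba removed the decorators module in 0.50, and older librosa still imports it), so you may need to adjust them.

```Dockerfile
# Hypothetical fix: pin librosa and numba to versions that still provide
# numba.decorators. Add this line BEFORE the "pip install TTS-..." step.
# Versions are assumptions taken from the linked issue discussion.
RUN pip install "librosa<0.8.0" "numba<0.50"
```

Pinning before installing the wheel matters, because otherwise pip may pull in a newer librosa/numba as a dependency of TTS and reintroduce the error.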