Trained (tuned) model not loading in Python

Hi all,

I’ve been fine-tuning the pretrained model (0.2.0) with my own training data. Training works, and I can run inference on the tuned model with the command-line binaries, but I can’t get it to load in Python. Does anyone know what might be happening here?

More background / details:

  • Command-line inference (v0.2.1-alpha.2-8-ge746c50 binaries, installed on a 64-bit Linux server with --arch gpu) works on the updated model with no issues.

  • The Python module was installed with pip3 install deepspeech-gpu.

Inference with the updated model works fine from the command line, but loading it in Python fails with the error shown further down.

Here’s the code:

from deepspeech import Model

# Decoder and feature-extraction parameters
BEAM_WIDTH = 500
LM_WEIGHT = 1.50
VALID_WORD_COUNT_WEIGHT = 2.25
N_FEATURES = 26
N_CONTEXT = 9

# Fine-tuned graph plus alphabet, language model and trie
MODEL_FILE = "models/tuned.pb"
ALPHABET_MODEL = "models/alphabet.txt"
LANGUAGE_MODEL = "models/lm.binary"
TRIE_MODEL = "models/trie"

ds = Model(MODEL_FILE, N_FEATURES, N_CONTEXT, ALPHABET_MODEL, BEAM_WIDTH)
ds.enableDecoderWithLM(ALPHABET_MODEL, LANGUAGE_MODEL, TRIE_MODEL, LM_WEIGHT,
                       VALID_WORD_COUNT_WEIGHT)

This works with the pre-trained model “models/output_graph.pb” but not with the updated model (“models/tuned.pb”).
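For reference, once the Model constructor succeeds, inference is just an stt() call over a 16 kHz int16 buffer, as in the 0.2.x Python client. A minimal sketch, assuming a hypothetical 16 kHz mono 16-bit WAV at audio/test.wav:

import wave
import numpy as np

# Read a 16 kHz mono 16-bit WAV into an int16 buffer
# ("audio/test.wav" is a placeholder path).
with wave.open("audio/test.wav", "rb") as fin:
    fs = fin.getframerate()
    audio = np.frombuffer(fin.readframes(fin.getnframes()), np.int16)

# Run inference with the ds model created above.
print(ds.stt(audio, fs))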

And here’s the error message:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-21-5f2e9c73dba4> in <module>
----> 1 ds = Model(MODEL_FILE, N_FEATURES, N_CONTEXT, ALPHABET_MODEL, BEAM_WIDTH)
      2 ds.enableDecoderWithLM(ALPHABET_MODEL, LANGUAGE_MODEL, TRIE_MODEL, LM_WEIGHT,
      3                                VALID_WORD_COUNT_WEIGHT)

~/venv/ds_test/lib/python3.6/site-packages/deepspeech/__init__.py in __init__(self, *args, **kwargs)
     12         status, impl = deepspeech.impl.CreateModel(*args, **kwargs)
     13         if status != 0:
---> 14             raise RuntimeError("CreateModel failed with error code {}".format(status))
     15         self._impl = impl
     16 

RuntimeError: CreateModel failed with error code 3

The error message itself gives no hint at all, but judging from your setup, you are working with the updated file format for the trie, while pip3 install deepspeech-gpu would only install the 0.2.0 release, which uses the old format.

Try pip3 install deepspeech-gpu==0.2.1a2
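To double-check which version actually ended up in the virtualenv, a quick sanity check (pkg_resources ships with setuptools; the distribution name is the one installed above):

import pkg_resources

# Print the installed deepspeech-gpu version; it should report 0.2.1a2
# after the upgrade.
print(pkg_resources.get_distribution("deepspeech-gpu").version)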

Thanks, that fixed it! I hadn’t realized the versions were not in sync.