enableDecoderWithLM with TFLite

Hello everyone,

I was using this piece of code:

import deepspeech

model = deepspeech.Model(model_file_path)
model.setBeamWidth(FLAGS.export_beam_width)
model.enableDecoderWithLM(lm_file_path, trie_file_path, FLAGS.lm_alpha, FLAGS.lm_beta)

but with the standard TensorFlow build and a .pbmm file.

Now I want to use the same code with a .tflite file, and I found that I have to rebuild DeepSpeech following these instructions:

r'''
This module should be self-contained:

  • build libdeepspeech.so with TFLite:
    • bazel build […] --define=runtime=tflite […] //native_client:libdeepspeech.so
  • make -C native_client/python/ TFDIR=… bindings
  • setup a virtualenv
  • pip install native_client/python/dist/deepspeech*.whl
  • pip install -r requirements_eval_tflite.txt
    Then run with a TF Lite model, a scorer and a CSV test file
    '''

I did so successfully, but now I cannot use:

model.enableDecoderWithLM(lm_file_path, trie_file_path, FLAGS.lm_alpha, FLAGS.lm_beta)

AttributeError: 'Model' object has no attribute 'enableDecoderWithLM'

Can I not use enableDecoderWithLM with TFLite?

P.S. I am using the latest version of DeepSpeech that exists right now.

It looks like you were using v0.6.1, but then you built a recent master from source, instead of v0.6.1. For Python, you don’t have to rebuild, just do pip install deepspeech-tflite==0.6.1 and then the same code should work, but this time with the .tflite model.

If you still want to build from source for some other reason, just checkout the v0.6.1 tag first: git checkout v0.6.1.
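The two routes suggested above, as a quick sketch (version pinned to v0.6.1 as recommended; nothing else changes in the Python code, you just point `model_file_path` at the `.tflite` export instead of the `.pbmm`):

```shell
# Route 1: no rebuild needed — install the prebuilt TFLite-enabled wheel
pip install deepspeech-tflite==0.6.1

# Route 2: if you still want to build from source,
# check out the matching tag before building
git checkout v0.6.1
```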

Thank you very much, it worked, you are right!!

I remembered that I recently replaced the version file of an older checkout I had from a month ago with one from a new clone of the repo, because I wanted to merge some code.

Thank you very much again :slight_smile: