Hello everyone,
I was using this piece of code:
model = deepspeech.Model(model_file_path)
model.setBeamWidth(FLAGS.export_beam_width)
model.enableDecoderWithLM(lm_file_path, trie_file_path, FLAGS.lm_alpha, FLAGS.lm_beta)
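For context, my understanding of what lm_alpha and lm_beta do: the beam search combines the acoustic score with the language-model score weighted by alpha, plus a word-insertion bonus weighted by beta. A rough sketch of that scoring (the function name and the numbers are mine, purely for illustration, not the actual decoder code):

```python
def combined_score(log_p_acoustic, log_p_lm, word_count, alpha, beta):
    # Shallow-fusion style scoring as used by CTC beam search decoders
    # with an external LM: alpha weights the LM log-probability, beta
    # rewards word insertions. Illustrative only.
    return log_p_acoustic + alpha * log_p_lm + beta * word_count

# example with made-up values
print(combined_score(-10.0, -5.0, 3, 0.75, 1.85))
```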
but with the standard TensorFlow build and a .pbmm model file.
Now I want to use the same code with a .tflite file, and I found that I have to rebuild DeepSpeech following these instructions:
r'''
This module should be self-contained:
- build libdeepspeech.so with TFLite:
- bazel build […] --define=runtime=tflite […] //native_client:libdeepspeech.so
- make -C native_client/python/ TFDIR=… bindings
- setup a virtualenv
- pip install native_client/python/dist/deepspeech*.whl
- pip install -r requirements_eval_tflite.txt
Then run with a TF Lite model, a scorer and a CSV test file
'''
I did so successfully, but now I cannot use:
model.enableDecoderWithLM(lm_file_path, trie_file_path, FLAGS.lm_alpha, FLAGS.lm_beta)
AttributeError: 'Model' object has no attribute 'enableDecoderWithLM'
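To double-check, I probed the Model class with hasattr. A minimal sketch of that check; the two classes below are dummies standing in for the old and new bindings (I read that recent releases may expose enableExternalScorer instead, but I am not sure), since I can't attach my actual environment:

```python
def pick_lm_api(model_cls):
    # Probe which LM-related method a Model class exposes.
    if hasattr(model_cls, "enableDecoderWithLM"):
        return "old LM API"
    if hasattr(model_cls, "enableExternalScorer"):
        return "scorer API"
    return "no LM method found"

class OldStyleModel:  # dummy stand-in for the bindings I had before
    def enableDecoderWithLM(self, lm, trie, alpha, beta): pass

class NewStyleModel:  # dummy stand-in for what my rebuilt bindings expose
    def enableExternalScorer(self, scorer_path): pass

print(pick_lm_api(OldStyleModel))  # old LM API
print(pick_lm_api(NewStyleModel))  # scorer API
```

On my rebuilt TFLite bindings the first check fails, which matches the AttributeError above.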
Can I not use enableDecoderWithLM with the TFLite build?
P.S. I am using the latest version of DeepSpeech that exists right now.