TFLite model giving rubbish output during inference

Hello @lissyx @reuben @kdavis
Hope all of you are doing well.
As per the advice of the team, I shifted to DeepSpeech 0.6.0 and trained a model with it. The model performs quite well when I check inference with the .pb model, but I am facing a weird problem after converting it to TFLite.
Using the TFLite model on Android, whatever audio I test I get only a single letter as output, like “i” or “f”.

Please help with this problem. Thank you all very much!


  • Ensure you are using proper 0.6.0 with the forget_bias=0 fix for the TFLite export
  • Or, ensure you are using a 0.6.1 model
  • “i” or “f” is the behavior that 0.6.1 fixed when inferring from silence, so please check your feeding and verify with the 0.6.1 library.

I am using DeepSpeech: v0.6.0-0-g6d43e21 with forget_bias=0.
Yet the output is only “i” or maybe “f”.

That’s why I said 0.6.1 library.

So do I need to train the model again on 0.6.1? Or can I just install the 0.6.1 library and keep using the 0.6.0 code from GitHub?

Yes, there is no model change, just upgrade the library for inference.

Also, please share more details on how you test under Android. It seems you just feed silence.

If you’re using your own trained models, you need to re-export them with the v0.6.1 code. No need to re-train, just re-export the same checkpoint.

Yes, I have trained it further from the pre-trained 0.6.0 model.
So along with the library change, do I also need to git clone v0.6.1? Or can I just update the library from 0.6.0 to 0.6.1 and keep using the 0.6.0 git checkout with it?

You need to update the training code to v0.6.1, then re-export the checkpoint to get the fix, and use this re-exported model. You also need to update the inference code to v0.6.1 for the bug where silence is recognized as “i” or “a”.
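The re-export step above can be sketched like this (the paths and checkpoint directory are hypothetical placeholders; the flags are the export flags from the 0.6.x training code):

```shell
# From a v0.6.1 checkout of the DeepSpeech repository
git checkout v0.6.1

# Re-export the existing checkpoint as a TFLite model; no re-training needed
python DeepSpeech.py \
    --checkpoint_dir /path/to/your/checkpoint \
    --export_dir /path/to/export \
    --export_tflite
```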

I tested the provided TFLite model with the three provided audio files on Android and it gave good results. I am facing this problem only after further training my model from the pre-trained one and then converting it into a TFLite model.
The process of using the Android AAR file and all the other stuff is the same.
And I am only facing the problem with the TFLite model, while the .pb model works quite well for me.

Inference works fine with the .pb model; the problem only appears after I convert it into a TFLite model and use it on Android. Also, I am feeding it a proper audio file rather than silence, e.g. “lasagne or white sauce pasta”, etc.
But okay, I will git clone v0.6.1, install the matching library, and then update you. Thank you so much.

Yes, I missed that from your previous message: you need to re-perform the export using the 0.6.1 tree. No need to re-train, just re-export.

I just tried with 0.6.1: I re-exported the .pb model first, then re-exported output.tflite using the --export_tflite True flag. Then I tried org.mozilla.deepspeech:libdeepspeech:0.6.1-alpha.0, and on running it still gives nothing as output. If I test the same setup on the three audio files you provided, it does give output, though rubbish, but that is because I have fed it my own language model. I don’t know why it gives no output only for my WAV file, despite the model being trained on Indian voices.

Please use 0.6.1, not 0.6.1-alpha.0.
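In the Android project that means updating the dependency in app/build.gradle (a config sketch; the coordinate is the one discussed in this thread):

```groovy
dependencies {
    // Stable 0.6.1 release, not the 0.6.1-alpha.0 pre-release
    implementation 'org.mozilla.deepspeech:libdeepspeech:0.6.1'
}
```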

You have not replied to my previous comment about the volume of your audio file. Is it in the proper format as well, etc.?
You have also not documented how your app works.
You should also share your WER figures and training parameters.
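On the format question: DeepSpeech 0.6.x models expect 16 kHz, 16-bit, mono PCM WAV input. A minimal sketch to check a file with Python’s standard-library `wave` module (the helper names here are made up for illustration):

```python
import wave

def wav_format(path):
    """Return (sample_rate_hz, channels, sample_width_bytes) of a WAV file."""
    with wave.open(path, "rb") as w:
        return w.getframerate(), w.getnchannels(), w.getsampwidth()

def is_deepspeech_ready(path):
    """True if the file matches what DeepSpeech 0.6.x models expect."""
    rate, channels, width = wav_format(path)
    # 16 kHz, mono, 16-bit (2-byte) samples
    return rate == 16000 and channels == 1 and width == 2
```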

Yes, I am using audio on which I trained the model, and if there were a problem with the format or the voice it wouldn’t give correct inference when running the .pb model. For example, someone says “cheese mysore masala dosa” and the inference is correct with the .pb file, but the same audio doesn’t work when running on Android with TFLite. Where can I find 0.6.1? In the Maven repository I can only see the latest org.mozilla.deepspeech:libdeepspeech:0.6.1-alpha.0.

Also, I am just using the AAR file given in the documentation. It worked well when I tested the pre-trained TFLite model with the three provided audio files. The problem occurs with the TFLite model I converted after training on my dataset; but again, the .pb model works fine. I have attached the inference of the same file when testing on .pb.

As documented, it is published on JCenter:

Please be clear about your steps. The current status is a mix of known-broken releases and known-broken models, so it’s really unclear.

Please refrain from using screenshot when text works.

Tried with org.mozilla.deepspeech:libdeepspeech:0.6.1, and the app crashes as soon as I click the inference button, with the following error on Android:

Caused by: java.lang.UnsatisfiedLinkError: dalvik.system.PathClassLoader[DexPathList[[zip file “/data/app/”],nativeLibraryDirectories=[/data/app/, /data/app/!/lib/arm64-v8a, /system/lib64, /vendor/lib64, /system/vendor/lib64, /product/lib64]]] couldn’t find “”

Sorry, I didn’t get you. I am just saying that after I trained my model further from the pre-trained model, got the .pb file, and ran inference, it works quite well; but when I then convert it to TFLite and use it on Android, inference gives nothing.

Running on Android with TFLite is completely different from running inference with the .pb model after training, so it’s hard to tell.