I am trying to fine-tune the deepspeech-0.7.0-models using some voice data I have collected for one of my Android applications (created following https://github.com/mozilla/androidspeech). I trained it using the steps mentioned in the link, and exported the TFLite model using the --export_tflite flag.
For the Android application to work, I need the two files lm.binary and trie. Can I reuse the ones from the release model, even though the model was fine-tuned on other voice data? Or how do I create these files so that they work with my Android application?
I am not an expert on the Android version, but think of it working like this: the neural net outputs some characters, and then DeepSpeech checks the trie and the binary for matching words. So if you are still doing English, the released files should be just fine. You would only change them if you want to transcribe specific contextual areas (medicine, aviation, …); otherwise, go with the standard files.
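To make the "checks the trie for matching words" idea concrete, here is a minimal conceptual sketch in Python. This is not the actual DeepSpeech decoder (which uses beam search plus KenLM scores via the native client); it only illustrates how a trie built from a vocabulary lets the decoder test whether a character sequence could still become a known word. The vocabulary and helper names here are made up for illustration.

```python
def build_trie(words):
    """Build a nested-dict trie; the '$' key marks the end of a word."""
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # complete word ends here
    return root

def _walk(trie, chars):
    """Follow `chars` down the trie; return the final node or None."""
    node = trie
    for ch in chars:
        if ch not in node:
            return None
        node = node[ch]
    return node

def is_prefix(trie, chars):
    """True if `chars` is a prefix of at least one vocabulary word."""
    return _walk(trie, chars) is not None

def is_word(trie, chars):
    """True if `chars` is itself a complete vocabulary word."""
    node = _walk(trie, chars)
    return node is not None and "$" in node

# Hypothetical tiny vocabulary, standing in for the real lm.binary/trie pair.
vocab = ["hello", "help", "world"]
trie = build_trie(vocab)

print(is_prefix(trie, "hel"))  # could still become "hello" or "help"
print(is_word(trie, "help"))   # a complete vocabulary word
print(is_prefix(trie, "hex"))  # no vocabulary word starts with "hex"
```

This is why the released files keep working after acoustic fine-tuning: they only describe the output vocabulary and word statistics, not the network weights. You would rebuild them only when your target vocabulary changes (domain-specific terms).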