Results of TFLite model and scorer in android_mic_stream are very bad

All of these operations were done on Colab with this notebook.
I trained the model; these are the training results:
Epoch 24 | Training | Elapsed Time: 0:06:42 | Steps: 1497 | Loss: 0.656175
Epoch 24 | Validation | Elapsed Time: 0:00:26 | Steps: 216 | Loss: 57.963452 | Dataset: /content/vivos/dev.csv
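A typical DeepSpeech.py invocation for a run like this looks roughly as follows (the exact command is in the notebook; the train.csv path and checkpoint directory here are placeholders, not the real values):

python3 DeepSpeech.py \
  --train_files /content/vivos/train.csv \
  --dev_files /content/vivos/dev.csv \
  --epochs 25 \
  --checkpoint_dir /content/model_checkpoints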
Next, I tested the checkpoint and exported the TFLite model with my scorer; this is the result:
Test epoch | Steps: 248 | Elapsed Time: 0:11:50
Test on /content/vivos/test.csv - WER: 0.120686, CER: 0.056437, loss: 52.248905
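For reference, the checkpoint test and TFLite export in DeepSpeech 0.9.x are usually done with flags along these lines (the checkpoint and export directories below are placeholders; the actual paths are in the notebook):

python3 DeepSpeech.py \
  --checkpoint_dir /content/model_checkpoints \
  --test_files /content/vivos/test.csv \
  --scorer_path kenlm.scorer \
  --export_dir /content/export \
  --export_tflite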
Then I tested the TFLite model; this is the result:
1240
Totally 1240 wav file transcripted
Test on /content/vivos/test.csv - WER: 0.065873, CER: 0.034874, loss: 0.000000
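This kind of output comes from the evaluate_tflite.py script in the DeepSpeech repository; an invocation along these lines would produce it (the .tflite filename is a placeholder for whatever the export step wrote):

python3 evaluate_tflite.py \
  --model output_graph.tflite \
  --scorer kenlm.scorer \
  --csv /content/vivos/test.csv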
But when I use the TFLite model and scorer in android_mic_stream, the results are very bad; it hardly ever gets a word right. Please help me, thank you so much.

It’s hard to provide any help when you don’t document anything about your datasets and training process.


I have 40 hours of Vietnamese audio data. I created lm.binary with this command:
python3 generate_lm.py --input_txt vocabulary.txt --output_dir . \
  --top_k 500000 --kenlm_bins ~/DeepSpeech/kenlm/build/bin/ \
  --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|1" \
  --binary_a_bits 255 --binary_q_bits 8 --binary_type trie \
  --discount_fallback
I created the scorer with:
./generate_scorer_package --alphabet alphabet2.txt \
  --lm lm.binary \
  --vocab vocab-500000.txt \
  --package kenlm.scorer \
  --default_alpha 2.6143084230162623 \
  --default_beta 1.998640895456838
All of my steps are saved in the notebook mentioned above.
The Colab CUDA version is 11.2.
My alphabet.txt is here.

I don’t think you can expect a model to generalize properly with that amount of data.

But even when I play a training file into the mic, it won’t transcribe properly. Is there any problem with android_mic_stream?