I’m wondering if there’s a way to use DeepSpeech to analyze an audio file and check whether a word of interest is included or not. This is similar to the problem: How to classify unknown words, how to ignore words. However, in this case I want to see if a word exists in a sentence. An example of this would be:
file1.wav - “this wav file does not have the word of interest”
file2.wav - “this wav file has the word of interest which is gfuel”
The caveat is that I’m planning to train only the word of interest (“gfuel”) and not train any other words in the audio file. The language model and the acoustic model will only include the trained word. I’m not sure if I’m misunderstanding this, but I believe there is some type of “threshold” at which it can recognize words; otherwise DeepSpeech will not output anything at all:
file1.wav - outputs: “”
file2.wav - outputs: “gfuel”
Another way is that I can obtain a confidence score through the --json flag introduced in 0.5.1 in the metadata field and filter out sentences based on a given threshold.
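For the second approach, a minimal sketch of the thresholding logic I have in mind is below. The JSON schema (a dict with "transcript" and "confidence" keys) and the example confidence values are assumptions on my part, not the verified output of any DeepSpeech version, so the field names would need to be checked against the actual --json output:

```python
# Hypothetical helper: decide whether the keyword was detected, given a
# parsed JSON result from `deepspeech --json ...` and a confidence cutoff.
# The "transcript"/"confidence" field names are assumptions -- verify them
# against the real output before relying on this.
def keyword_detected(result: dict, keyword: str, min_confidence: float) -> bool:
    transcript = result.get("transcript", "")
    confidence = result.get("confidence", float("-inf"))
    # Require both the keyword in the decoded text and a confident decode.
    return keyword in transcript.split() and confidence >= min_confidence

# Mocked results instead of real DeepSpeech output:
hit = {"transcript": "gfuel gfuel", "confidence": -4.2}
miss = {"transcript": "gfuel", "confidence": -35.0}
print(keyword_detected(hit, "gfuel", min_confidence=-10.0))   # True
print(keyword_detected(miss, "gfuel", min_confidence=-10.0))  # False
```

The idea is that a low-confidence decode of “gfuel” (forced out by the one-word language model) would be rejected by the cutoff, but as I describe below, the scores I actually get are too inconsistent for this to work.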
However, both of these methods don’t work. The first method produces incorrect inferences because of the limited language model: it will print “gfuel” because that is the only word I have trained on. The training data set looks like this:
/root/speech/data/gfuel_custom/2.wav,1139352,gfuel gfuel gfuel gfuel gfuel
/root/speech/data/gfuel_custom/5.wav,688172,gfuel gfuel gfuel
/root/speech/data/gfuel_custom/9.wav,745516,gfuel gfuel gfuel gfuel
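As far as I know, DeepSpeech training CSVs also need a header row of wav_filename,wav_filesize,transcript (it may simply be omitted from the rows above). A small sketch of building such a CSV from the rows in this post, assuming the file sizes are already known:

```python
import csv
import io

# Rows mirroring the training set shown above: (path, size in bytes, transcript).
rows = [
    ("/root/speech/data/gfuel_custom/2.wav", 1139352, "gfuel gfuel gfuel gfuel gfuel"),
    ("/root/speech/data/gfuel_custom/5.wav", 688172, "gfuel gfuel gfuel"),
    ("/root/speech/data/gfuel_custom/9.wav", 745516, "gfuel gfuel gfuel gfuel"),
]

buf = io.StringIO()
writer = csv.writer(buf)
# DeepSpeech expects this exact header in its training CSVs.
writer.writerow(["wav_filename", "wav_filesize", "transcript"])
writer.writerows(rows)
print(buf.getvalue())
```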
When running this, I get the following:
root@speech:~/speech/data/gfuel_custom# deepspeech --model output_graph.pb --alphabet alphabet.txt --lm lm.binary --trie trie --audio test1.wav
Inference took 0.419s for 15.440s audio file.
I have generated the language model by following these steps:
../../kenlm/build/bin/lmplz --text transcript.txt --arpa words.arpa --o 3 --discount_fallback
../../kenlm/build/bin/build_binary -T -s words.arpa lm.binary
../../native_client/generate_trie alphabet.txt lm.binary trie
Here are the parameters that I have used to train the audio dataset:
python3 -u DeepSpeech.py --noshow_progressbar
I have played around with the n_hidden and epochs parameters, since I’ve read somewhere that this can be attributed to overfitting, but it isn’t producing any effect.
I have tried grabbing the confidence metadata (I had to download/install native_client); however, I’m getting inconsistent scoring that doesn’t correspond to what’s in the audio.
I have pretty much followed every step in Tune Mozilla DeepSpeech to recognize specific sentences and TUTORIAL : How I trained a specific french model to control my robot, but I’m coming to the conclusion that DeepSpeech cannot indicate whether a word is in an audio sentence and can only transcribe the best match based on its available language model.
I would greatly appreciate if someone can confirm this or point me in the right direction.
Coding session: https://www.twitch.tv/videos/451440201?t=01h01m43s