TUTORIAL: How I trained a specific French model to control my robot

Yes.
In alphabet.txt you only have symbols!
Each symbol is a label, and DeepSpeech learns each label from a lot of sounds.

Other parameters (the lm and trie files) work hard to evaluate a heard sentence and predict the inference result.
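To make the "each symbol is a label" idea concrete, here is a minimal Python sketch. The alphabet content below is a toy subset invented for the example, not the real file, but it shows how a transcript becomes the numeric label sequence the acoustic model trains on:

```python
# Sketch: each line of alphabet.txt is one label; a transcript is
# encoded as the sequence of label indices DeepSpeech trains against.
# (Toy alphabet content; the real lines come from your corpus.)
alphabet_lines = ["a", "b", "c", "d", "e", "l", "o", " "]

# symbol -> label index (i.e. its line number in alphabet.txt)
char_to_label = {ch: i for i, ch in enumerate(alphabet_lines)}

def encode(transcript):
    """Turn a transcript into the numeric labels the model learns."""
    return [char_to_label[ch] for ch in transcript]

print(encode("all"))  # -> [0, 5, 5]
```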

Thanks for your tutorial. We have hundreds of audio files for just one person/speaker and are considering making a specific model. Was considering breaking up each audio into single words, for training purposes. However, now I see by your comment that a complete sentence is preferred.

My thinking on using the single-word approach was to significantly reduce the size of the model, as it is for one person/speaker. For example, a 19-second WAV that has 55 words has 33 unique words. Is there any advantage in using the same word by the same speaker for training the model?

I guess my question is: how differently can one person speak one word?

Hi Jehoshua,
Here’s an easy answer: do a test.
Record the same word twice, with the same tone and duration,
open both files in Audacity and zoom in.
Your eyes will detect variations.
And that’s only your voice…
Our environment is really noisy.

Keep in mind that your computer is a bit silly: for it, variation = different.

The more sounds per character, the easier it is for the silly PC to recognize them…

Also, logical sentences are imperative for the trie build, to help DeepSpeech produce a good inference.
Hope this helps.

Oh, I forgot a part of your question: record different sentences.
I’ll update the tutorial this afternoon.


Usually, publicly available audio corpora come in much larger files than 3-5 seconds. If I am training my own model, will DeepSpeech learn from files that are, say, 10-15 minutes long?

Of course I can split those big files into shorter ones using a voice activity detection tool, but those tools are not perfect… so as a result I might get sentences split in the middle, and in any case it requires much more manual work, i.e. adjusting audio files and transcripts, etc.

Hi Mark2.

I had the same idea, but…

Depending on variations in silence duration, I think errors can easily happen.

So imagine that a gap occurs around a sound, relative to a character…
That would build a very bad model…

I think the best way is specificity: inference is done on WAVs near 5 s, so training files of roughly 5-10 s max seem to be the correct way (not too much risk of gaps in learning).

The best way to cut on silences is VAD, plus human control… good luck.
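As a toy illustration of the energy-threshold idea behind silence detection (the function name, frame size and threshold below are invented for this sketch; a real detector such as WebRTC’s VAD is far more robust):

```python
def split_on_silence(samples, frame_size=160, threshold=500):
    """Toy energy-based splitter: group consecutive "loud" frames
    into chunks, dropping silent frames between them.

    samples: 16-bit PCM sample values; frame_size: samples per frame
    (160 samples = 10 ms at 16 kHz); threshold: mean absolute
    amplitude below which a frame counts as silence.
    """
    chunks, current = [], []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        loud = sum(abs(s) for s in frame) / len(frame) >= threshold
        if loud:
            current.extend(frame)
        elif current:
            chunks.append(current)   # silence ends the current chunk
            current = []
    if current:
        chunks.append(current)
    return chunks
```

As the thread says, a human still needs to check the resulting chunks: a pause inside a word will split it in two.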

Have a look at Kaldi; they work on word separation…

Or ask the DeepSpeech team about the process they used to create their model.

Thanks. Yes, in a test with Audacity the differences were quite recognisable. I will have to look into how to break up an audio file into (say) 10-second slices while ensuring words are not cut off. There are a few posts here on Discourse about that.

I did have a quick look at “audiogrep” ( https://github.com/antiboredom/audiogrep ) yesterday, but there was an error preventing me from continuing. It doesn’t look like it has been maintained for a while.

I used Audacity recently to remove some noise in a WAV file. Considering the audio files we need to process here, there would be considerable gaps in the audio, as the speaker is pausing/waiting. It would take a while to manually go through the audio and remove those gaps. Are there any tools that can process an audio file and remove (say) gaps longer than 5 seconds?

Try the sox tool and its silence effect; a similar issue was resolved in this Stack Overflow topic.
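For reference, a sox invocation along these lines can split a file on silences. The file names and thresholds below are placeholders to tune for your own recordings, not values from this thread:

```shell
# Split input.wav into chunk001.wav, chunk002.wav, ... on silences.
# "1 0.5 1%" = treat 0.5 s below 1% amplitude as silence; tune both
# numbers for your noise floor.  newfile/restart starts a new output
# file at each detected silence.
sox input.wav chunk.wav silence 1 0.5 1% 1 0.5 1% : newfile : restart
```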


ADDON in the first post. Hope it helps!

Hi @elpimous_robot
Great tutorial and discussion. I am trying to train on 5000 utterances and it is taking a couple of hours per epoch. Can you share what configuration you used and how long each epoch took? Thanks for the help.

Hi, great tutorial… May I know what your French alphabet.txt looks like? Thanks.

Hi, Mansurul1985.

Yes, of course, but keep in mind that it’s for a robot AI (so a simplified one, with some tricks to limit bad inference results).

alphabet.txt :

# Each line in this file represents the Unicode codepoint (UTF-8 encoded)
# associated with a numeric label.
# A line that starts with # is a comment. You can escape it with \# if you wish
# to use '#' as a label.
# FOR FRENCH LIMITED CORPUS - JUST WORKING IN SOUND PERCEPTION - A BOT WILL ANALYSE RESULTS
 
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
'
é
è
ç
-
# The last (non-comment) line needs to end with a newline.

Hello, Phanthanhlong7695.

Did you compile the kenlm utils? (Needed if you want to do more than use the existing model.)
http://kheafield.com/code/kenlm/estimation/
Compilation will give you the binaries you want.
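Once compiled, building the language model is roughly as follows. The file names are placeholders and the n-gram order 3 is just an example; the resulting binary LM is what DeepSpeech’s own tooling then turns into the trie:

```shell
# Build an ARPA language model from a plain-text corpus (one sentence
# per line), then convert it to KenLM's binary format for fast loading.
lmplz -o 3 < vocabulary.txt > lm.arpa
build_binary lm.arpa lm.binary
```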
Hope this helps.

It’s done, I fixed it.

And how can I create the trie file?

I’ve been able to use a Python tool to cut a WAV into word chunks - Longer audio files with Deep Speech

The audio outputs range from 1 second to 49 seconds. How will the audio lengths longer than 3 to 5 seconds affect the building of a model?

Hi.
Well, as you can see in the DeepSpeech process, a WAV is cut at millisecond granularity.
Each part of the cut audio is “linked” to a character of the vocabulary word, and both are sent to the builder.

There is a big risk of error in this process, because even a really small gap can result in lots of errors (a big gap, character errors…).

So a small WAV file, near 5 s, is the best compromise.

You could think: “So I’ll use WAVs of one word only, to avoid gaps.”

It’s not a good idea: starting a word, and continuing a word after a previous one, don’t produce the same waveform (amplitude) at the beginning.
Ex: “hello” vs. “I say hello”
Often, the waveform start is higher for a word at the beginning of an utterance.

Don’t hesitate to share your tests with us.

Please be more explicit because I don’t understand your question.

Yes, and I appreciate that your thread here is based on building the WAV files used for training by speaking them. However, that is not always the case: sometimes we may want to do the same type of building (i.e. build our own models), but the source is one large WAV file. Hence the need to cut a WAV file into small pieces while attempting to keep words intact within each cut.

That is, no broken words.

As you say, a small WAV with a maximum duration of 5 seconds is ideal. I have been testing the “Python interface to the WebRTC Voice Activity Detector” at https://github.com/wiseman/py-webrtcvad

There is a Python script there, example.py, and I ran it against a 10-minute WAV file. The results were 56 WAV files, with durations ranging from 0.63 seconds to 49.38 seconds.

Then the author of that package advised how to narrow the duration range, as 49.38 seconds is a long way from your recommendation of 5 seconds max. The results then were 243 WAV files, with durations ranging from 0.18 seconds to 13.44 seconds.

Of course, some of those shorter WAVs are just noise, or even no sound at all, at least none that I could hear. Some two- or three-word WAVs were only 2 seconds long, and there are quite a few that contain just one word.

Of those 243 WAV files, only 31 exceed your recommendation of 5 seconds, so that seems encouraging.
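A quick way to audit a batch of clips like that is to compute each WAV’s duration with Python’s standard-library wave module. The function names and the 5-second limit below are just illustrations of the recommendation from this thread:

```python
import wave

def wav_duration(path):
    """Duration of a WAV file in seconds."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

def too_long(paths, max_seconds=5.0):
    """Return the clips exceeding the recommended training length."""
    return [p for p in paths if wav_duration(p) > max_seconds]
```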


Very good, Jehoshua.
Train it and tell us about the WER…