TUTORIAL: How I trained a specific French model to control my robot


(Vincent Foucault) #23

Hi Mark2.

I had same idea, but…

Depending on variations in silence duration, I think errors can easily happen.

So, imagine that a gap falls in the middle of a sound that is mapped to a character…
This would produce a very bad model…

I think the best way is to match the inference conditions: inference is done on WAVs of around 5 s, so training files of roughly 5-10 s max seem to be the right choice (not too much risk of gaps during learning).

The best way to cut silences is VAD, plus a human check… good luck!

Have a look at Kaldi; they work on word separation…

Or ask the DeepSpeech team about the process they used to create their model.


#24

Thanks. Yes, in a test with Audacity the differences were quite recognisable. I will have to look into how to break an audio file into (say) 10-second slices while ensuring words are not cut off. There are a few posts here on Discourse regarding that.

I did have a quick look at “audiogrep” ( https://github.com/antiboredom/audiogrep ) yesterday, but there was an error preventing me from continuing. It doesn’t look like it has been maintained for a while?


Can DeepSpeech process longer audio files?
#25

I used Audacity recently to remove some noise in a WAV file. Considering the audio that we need to process here, there would be considerable gaps in the recording, as the speaker is pausing/waiting. It would take a while to manually go through the audio and remove those gaps. Are there any tools that can process an audio file and remove (say) gaps longer than 5 seconds?


(Yv) #26

Try the sox tool and its silence effect; a similar issue was resolved in this Stack Overflow topic.
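As a starting point, here is a minimal sketch (mine, not from this thread) of how such a sox silence command can be assembled from Python. The helper name, filenames, and threshold values are illustrative assumptions — tune the “1%” noise threshold to your recordings — and sox itself must be installed before the command can actually be run:

```python
# Hypothetical helper: build a sox "silence" command that collapses pauses
# longer than max_silence seconds. All threshold values are assumptions to
# be tuned against your own recordings.
def sox_trim_silence_cmd(src, dst, max_silence=5.0):
    return [
        "sox", src, dst,
        "silence",
        "-l",                    # truncate long silences instead of removing all silence
        "1", "0.1", "1%",        # trigger: 0.1 s above the 1% threshold counts as speech
        "-1",                    # apply to every silence in the file, not just the first
        str(max_silence), "1%",  # collapse silences longer than max_silence
    ]

cmd = sox_trim_silence_cmd("input.wav", "cleaned.wav")
print(" ".join(cmd))
# To actually run it (sox required): subprocess.run(cmd, check=True)
```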


(Vincent Foucault) #27

ADDON in the first post. Hope it helps!


(Abdul Rafay Khalid) #28

Hi @elpimous_robot
Great tutorial and discussion. I am trying to train on 5000 utterances and it is taking a couple of hours per epoch. Can you share what configuration you used and how long each epoch took? Thanks for the help.


(Mansurul1985) #30

Hi, great tutorial… May I know what your French alphabet.txt looks like? Thanks


(Vincent Foucault) #31

Hi, Mansurul1985.

Yes, of course, but keep in mind that it’s for a robot AI (so a simplified alphabet, with some tricks to limit bad inference results).

alphabet.txt :

# Each line in this file represents the Unicode codepoint (UTF-8 encoded)
# associated with a numeric label.
# A line that starts with # is a comment. You can escape it with \# if you wish
# to use '#' as a label.
# FOR FRENCH LIMITED CORPUS - JUST WORKING IN SOUND PERCEPTION - A BOT WILL ANALYSE RESULTS
 
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
'
é
è
ç
-
# The last (non-comment) line needs to end with a newline.
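As an aside, the header comments above fully describe the file format; a small sketch of a parser following those rules could look like this (the function name load_alphabet is mine, not part of DeepSpeech):

```python
# Sketch: read an alphabet.txt into an ordered list of labels, following
# the rules stated in its header: '#' starts a comment, '\#' escapes a
# literal '#' label, and a line holding a single space is the space label.
def load_alphabet(path):
    labels = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line[:-1] if line.endswith("\n") else line
            if line.startswith("\\#"):
                labels.append("#")   # escaped literal '#'
            elif line.startswith("#"):
                continue             # comment line
            else:
                labels.append(line)  # includes the single-space label
    return labels
```

The position of each label in the returned list is the numeric label the acoustic model predicts.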

(Vincent Foucault) #32

Hello, Phanthanhlong7695.

Did you compile the kenlm utils? (This is needed if you want to do more than use the existing model.)
http://kheafield.com/code/kenlm/estimation/
The compilation will give you the binaries you need.
Hope this helps.


(Phanthanhlong7695) #33

It’s done. I fixed it.


(Phanthanhlong7695) #34

And how can I create the trie file?


#35

I’ve been able to use a Python tool to cut a WAV into word chunks - Longer audio files with Deep Speech

The audio outputs range from 1 second to 49 seconds. How will the longer clips (beyond 3 to 5 seconds) affect the building of a model?


(Vincent Foucault) #36

Hi.
Well, as you can see in the DeepSpeech process,
a WAV is cut at millisecond granularity.
Each part of the cut audio is “linked” to a character of a vocabulary word, and both are sent to the “builder”.

There is a big error risk in this process, because even a really small gap can result in lots of errors (and a big gap, in character errors…).

So, a small WAV file of around 5 s is the “best” compromise.

You might think: “so, I’ll use WAVs of one word only, to avoid gaps”.

It’s not a good idea: a word spoken at the start of an utterance and the same word spoken after a previous one don’t produce the same waveform (amplitude) at the beginning.
Ex: “hello”, “I say hello”
Often, the onset of the waveform is higher when the word starts the utterance.

Don’t hesitate to share with us your tests.


(Vincent Foucault) #38

Please be more explicit because I don’t understand your question.


#40

Yes, and I appreciate that your thread here is based on building the WAV files used for training by speaking them. However, that is not always the case: sometimes we may want to do the same type of building (i.e. build our own models), but with all the WAV sources coming from one existing WAV file. Hence the need to cut a WAV file into small pieces while attempting to keep words intact within each cut.

That is, no broken words.

As you say, a small WAV with a maximum duration of 5 seconds is ideal. I have been testing the “Python interface to the WebRTC Voice Activity Detector” at https://github.com/wiseman/py-webrtcvad

There is a Python script there, example.py, and I ran it against a 10-minute WAV file. The result was 56 WAV files, with durations ranging from 00.63 seconds to 49.38 seconds.

Then the author of that package advised how to cut down the duration range, as 49.38 seconds is a long way from your recommendation of 5 seconds max. The result was then 243 WAV files, with durations ranging from 00.18 seconds to 13.44 seconds.

Of course, some of those shorter WAVs are just noise, or even silence, at least as far as I could hear. Some two- or three-word WAVs were only 2 seconds long, and quite a few contain just one word.

Of those 243 WAV files, only 31 exceed your recommendation of 5 seconds, so that seems encouraging.
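For anyone doing the same kind of filtering, clip durations can be checked with Python’s standard-library wave module; a small sketch (function names are mine):

```python
import wave

# Sketch: compute each clip's duration and list the ones exceeding the
# ~5 s limit recommended in this thread, so they can be re-split or dropped.
def wav_duration(path):
    with wave.open(path, "rb") as w:
        return w.getnframes() / float(w.getframerate())

def over_limit(paths, limit=5.0):
    return [p for p in paths if wav_duration(p) > limit]
```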


Longer audio files with Deep Speech
(Vincent Foucault) #41

Very good, Jehoshua.
Train it and tell us about the WER…


(Matti Meikäläinen) #43

After running the command:

/bin/bin/./build_binary -T -s words.arpa lm.binary

I get for some (but not all) vocabularies the following error:

vocab.cc:305 in void lm::ngram::MissingSentenceMarker(const lm::ngram::Config&, const char*) threw SpecialWordMissingException.
The ARPA file is missing </s> and the model is configured to reject these models. Run build_binary -s to disable this check. Byte: 106432571
ERROR

Do you know what causes it?


(Vincent Foucault) #44

Hi Mark2.
I think you should ask Kenneth, the creator of the kenlm tools:
http://kheafield.com/code/kenlm
It’s an LM problem, related to silences.
I saw issues about it on its GitHub, if I remember correctly!

Did you add silence markers in your “file”.txt before converting to ARPA?
I didn’t!
I just added one sentence per line, without punctuation,
and I didn’t have any problems.
Good luck!
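To illustrate that “one sentence per line, without punctuation” preparation, here is a sketch of my own (not Vincent’s actual script), keeping only the characters from the alphabet.txt shown earlier in this thread:

```python
import re

# Sketch: normalise raw sentences into the plain text file fed to kenlm --
# lower-case, one sentence per line, nothing outside the alphabet (here
# a-z, é, è, ç, apostrophe, hyphen and space, as in this thread).
DROP = re.compile(r"[^a-zéèç' -]")

def normalise(sentence):
    return DROP.sub("", sentence.lower()).strip()

lines = [normalise(s) for s in ["Quel est ton nom ?", "Est-ce que tu vas bien !"]]
print("\n".join(lines))
```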


Create A SubSet of existing models
(Gr8nishan) #45

Thanks for sharing such a wonderful article… but could you please share a snapshot of your CSV? I am confused about whether we need to give the full path of the WAV files or only their names.


(Vincent Foucault) #46

@gr8nishan,
Thanks for the compliments.

here is a sample of a typical deepspeech csv file :

wav_filename,wav_filesize,transcript
/home/nvidia/DeepSpeech/data/alfred/dev/record.1.wav,87404,qui es-tu et qui est-il
/home/nvidia/DeepSpeech/data/alfred/dev/record.2.wav,101804,quel est ton nom ou comment tu t'appelles
/home/nvidia/DeepSpeech/data/alfred/dev/record.3.wav,65324,est-ce que tu vas bien 

You must keep the first line (it is the header that defines the CSV columns).
Each following line gives 3 values, separated by commas:

  • where the WAV file is (I use the complete path; perhaps a relative path could work?!)
  • what its size is, in bytes (you can get it with os.path.getsize(“the wav file”))
  • what the transcript is (in the language of the WAV)

Take a look at …DeepSpeech/bin/import_ldc93s1.py, L23 for CSV creation!!

About the transcript, pay attention to only use characters present in alphabet.txt, otherwise you’ll encounter errors when training.
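The three values above can be produced with a small script; this is a sketch of mine (not the actual DeepSpeech importer), with placeholder paths:

```python
import csv
import os

# Sketch: write a DeepSpeech training CSV -- the required header first,
# then one (wav path, size in bytes, transcript) row per clip.
def write_deepspeech_csv(out_path, entries):
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["wav_filename", "wav_filesize", "transcript"])
        for wav_path, transcript in entries:
            writer.writerow([wav_path, os.path.getsize(wav_path), transcript])
```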

Hope it will help you.
Vincent