TUTORIAL: How I trained a specific French model to control my robot

(Murugan R) #85

@karthikeyank sir, I think there is no need to build a new LM; it will adapt to the existing DeepSpeech LM.

(karthikeyan k) #86

So fine-tuning only the acoustic model will give better results, right?

(Vincent Foucault) #87

Yes.
But your words must be in the LM (which should be the case).

(karthikeyan k) #88

Okay, in that case, how can I add my corpus words to the existing LM, so that I keep the existing knowledge base and also get coverage of the new words? Is there any way to do that?

(Vincent Foucault) #89

Yep.
Download the complete vocabulary file of the latest DeepSpeech model,
add your own sentences, and build the LM.
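For reference, a minimal sketch of that step with KenLM, assuming the KenLM binaries are already built (vocabulary.txt and my_sentences.txt are placeholder names, not files from this thread):

# Append your own sentences to the downloaded vocabulary file
cat my_sentences.txt >> vocabulary.txt
# Build an ARPA model with KenLM (order 3 is just an example value), then convert it to binary
kenlm/build/bin/lmplz -o 3 --text vocabulary.txt --arpa words.arpa
kenlm/build/bin/build_binary words.arpa lm.binary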

(Vincent Foucault) #90

But are you sure that your words aren't in the model?

An easy way: record the needed sentences with a good online US text-to-speech service,
convert them to 16 kHz mono, and test the model…

I did it for some tests, and it works perfectly.

Hope it will help.
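For the 16 kHz mono conversion mentioned above, a minimal sketch with sox or ffmpeg (file names are placeholders):

# Resample a TTS recording to 16 kHz, mono, 16-bit PCM WAV
sox tts_output.wav -r 16000 -c 1 -b 16 test_sentence.wav
# Or, equivalently, with ffmpeg
ffmpeg -i tts_output.mp3 -ar 16000 -ac 1 test_sentence.wav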

(karthikeyan k) #91

Okay, I will try. Can you please share the link where I can get the vocabulary file of the latest model, if you know it?
Thanks.

(karthikeyan k) #92

This is the issue I am facing! Can anyone help me with this?

(karthikeyan k) #93

Hi @elpimous_robot, if you don't mind, can you please explain the flags below?
--early_stop True --earlystop_nsteps 6 --estop_mean_thresh 0.1 --estop_std_thresh 0.1 --dropout_rate 0.22

(Vincent Foucault) #94

Hello.
Early stop and its parameters are used to limit overfitting.
The dropout rate is used for that too.
Perhaps you could look into the TensorFlow training parameters.
Have a nice day.
Vincent
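For context, here is a hedged sketch of how those flags fit into a 0.4.x-era DeepSpeech.py training command; the CSV paths, alphabet path and checkpoint directory are illustrative placeholders, not values from this thread:

# early_stop halts training once the validation loss stops improving for
# earlystop_nsteps checkpoints (within the mean/std thresholds);
# dropout_rate randomly drops units during training. Both limit overfitting.
python -u DeepSpeech.py \
  --train_files data/train.csv \
  --dev_files data/dev.csv \
  --test_files data/test.csv \
  --alphabet_config_path data/alphabet.txt \
  --checkpoint_dir checkpoints/ \
  --early_stop True \
  --earlystop_nsteps 6 \
  --estop_mean_thresh 0.1 \
  --estop_std_thresh 0.1 \
  --dropout_rate 0.22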

(karthikeyan k) #95

yeah … Thank you…

(Hafsa Farooq) #97

Hi, I am using DeepSpeech 0.4.1 to develop an Urdu-language ASR system.
I built the language model, prepared the data, and wrote the alphabet in alphabets.txt as per the guidelines given in this post.
Now I am trying to generate the trie file, but I am getting this error:
ERROR: VectorFst::Write: Write failed:

Please help. Thank you so much!

(kdavis) #98

Could you give a bit more info on how you’ve attempted to generate the trie? For example, the command line and arguments you ran.

(Hafsa Farooq) #99

/home/rc/Desktop/0.4.1/DeepSpeech-master/native-client-U/generate_trie //home/rc/Desktop/0.4.1/DeepSpeech-master/data/alphabet.txt //home/rc/Desktop/0.4.1/DeepSpeech-master/data/lm/lm.binary //home/rc/Desktop/0.4.1/DeepSpeech/data/trie

I am following this tutorial to generate the trie file.

(Bacon Ator) #100

One thing I changed was to bump the n_hidden size up to an even number (1024, based on issue #1241’s results). The first time I ran my model with an odd number, it returned a warning and the WER wasn’t great:
“BlockLSTMOp is inefficient when both batch_size and input_size are odd. You are using: batch_size=1, input_size=375”
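If anyone wants to apply the same change, it is just a matter of passing an even value for --n_hidden in the training command (a sketch; the data paths are placeholders):

# an even n_hidden avoids the BlockLSTMOp odd-size warning quoted above
python -u DeepSpeech.py --n_hidden 1024 --train_files data/train.csv --dev_files data/dev.csv --test_files data/test.csv --alphabet_config_path data/alphabet.txt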

(YogeshA) #101

How do I create the lm/trie?

(Vincent Foucault) #102

Hi @yogesha,
Your first post… perhaps you could start with a simple "hello…" :wink:

Welcome to this Discourse section.

Have a look at your DeepSpeech directory:
Deepspeech/bin/lm
In the files there, you’ll find the commands for LM creation.

You’ll also need to install KenLM (see the beginning of the tutorial).
Have a nice day, YogeshA.

#103

Hi, thanks for your tutorial. I have a little question: how can I compile this? I am not an expert in this topic.

(Vincent Foucault) #104

Hi.
Yes, you need to compile the KenLM libs…

For a DeepSpeech native client compilation, if needed, see the README.md in native_client.
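If it helps, here is a minimal sketch of the standard KenLM build, which provides the lmplz and build_binary tools used for LM creation:

# Fetch and build KenLM (requires cmake, a C++ compiler and Boost)
git clone https://github.com/kpu/kenlm.git
cd kenlm
mkdir -p build && cd build
cmake ..
make -j 4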