Trained a model, but the actual predictions are too poor

Yes, of course it will help, but only after the basics…
It will not help here for now.
Not enough data.

Augmentation is helpful for adding noise, echoes, and varying duration and tone.
But the most important part is good initial data, and ENOUGH of it.

@elpimous_robot thanks, friend

@lissyx and @elpimous_robot
Inference is taking a long time; any idea how to speed up that process?
e.g.

   Loaded model in 0.0259s.
   Loading language model from files KenLM-model/trie
   Loaded language model in 0.00017s.
   Running inference.
   hi how are you 
   Inference took 3.328s for 5.952s audio file.

Is there any solution? Please help me.

Hey, you can’t just ask people random questions without context. 3.3 s of inference for a 5.9 s audio file is quite fast.


@lissyx thanks for the help

@Sudarshan.gurav14 I won’t answer, since you don’t care enough about my answers to read them.


@lissyx I apologize.

Really, I didn’t have a clear idea, but I still got the answer I was expecting; that’s why I said thank you.

Next time I will make sure to give proper context before asking.

Please tell me what is difficult to understand about asking “it’s not fast enough, what can I do” without even saying what your hardware is or what your constraints are.

If you’re using 0.6.1 you should also update your trie and lm.binary. Are you sure that you’re generating your LM from the file with all of your possible commands?

Can I increase a dataset using audio augmentation?

No. From the source code, I inferred that audio augmentation doesn’t create new files; it just transforms the current audio into something noisy on the fly. The point is to train a model that is more robust to noisy input and generalizes well.
In your case, you don’t need that much generalization, because you already know that only a few people will be using it.

Try getting more data, as in the French robot topic.


Yes, and I could say: after more, more, more data…,
use Python, or a terminal command, to duplicate all your data and apply audio transformations that slightly change the audio characteristics.
You’ll have 2x more data…
The more data, the better your accuracy.
Note: pay attention to the data augmentation values!! Use small changes, or you’ll train on bad audio files and your accuracy will not increase.
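A minimal sketch of the “duplicate and slightly transform” idea above, using only the Python standard library. The file names and the gain factor are examples, not taken from this thread, and it assumes 16-bit PCM wav files:

```python
import struct
import wave

def duplicate_with_gain(src_path, dst_path, factor):
    """Write a copy of src_path whose samples are scaled by `factor`.

    Keep `factor` close to 1.0 (e.g. 0.8-1.2), following the advice
    above to use small changes so the augmented copy stays realistic.
    """
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())
    n = len(frames) // 2  # number of 16-bit samples
    samples = struct.unpack("<%dh" % n, frames)
    # Scale each sample and clip back into the 16-bit range.
    scaled = [max(-32768, min(32767, int(s * factor))) for s in samples]
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(struct.pack("<%dh" % n, *scaled))

# Example: duplicate_with_gain("1.wav", "1_gain_09.wav", 0.9)
```

Run over every file in your dataset and you end up with 2x the data, each copy slightly different from the original.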

@Sudarshan.gurav14,
Friend, deep learning teaches us patience!
You need to do what all of us do: progress slowly; read, read… read; test your own ideas.
And the magic will appear! :yum:

I changed the recording speed, e.g. slower and faster.

@elpimous_robot Yes, right :blush:

I am reading and understanding the concepts.

Thanks

Hi @elpimous_robot

I want to change the gain of the audio using the voice corpus tool, as you suggested.
How much should I change the gain? Right now I am using a gain of 0.5; is that OK?

There is one more arg, -times, that I didn’t know how to use. Can you please help me?

Also, I have reduced my commands; I just want 70 commands out of the 200.

One more question:
suppose I have one wav, can I change its gain two times? I mean:

1.wav [original]
1_gain_05.wav [same file]
1_gain_07.wav [same file]
:slightly_smiling_face:

Is that OK?

Hello.
Reducing the number of commands is a good idea, but it doesn’t change the fact that you need more samples per command.
You augmented the number of wavs, good, but use low values. I’d try with 0.2 to 0.3 max.

Thanks for the quick reply @elpimous_robot
OK, I will try with 0.2 to 0.3, and also try to get more samples.

Hi @elpimous_robot,

Now I have 10,000 wav files, and I split them 70:20:10.
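For reference, a hypothetical sketch of a deterministic 70:20:10 split; the ratios and the seed are illustrative, and `items` would be your list of (wav, transcript) rows before writing the three CSVs:

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle `items` reproducibly, then cut into train/dev/test."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed => same split every run
    n_train = int(len(items) * ratios[0])
    n_dev = int(len(items) * ratios[1])
    train = items[:n_train]
    dev = items[n_train:n_train + n_dev]
    test = items[n_train + n_dev:]  # the remainder, roughly ratios[2]
    return train, dev, test
```

With 10,000 files this yields 7,000 / 2,000 / 1,000 rows for train.csv, dev.csv, and test.csv.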

Then I train the model using the command below:

  python DeepSpeech.py \
     --train_files dataset/train.csv \
     --dev_files dataset/dev.csv \
     --test_files dataset/test.csv \
     --epochs 50 \
     --learning_rate 0.0001 \
     --export_dir export1/ \
     --checkpoint_dir cpoint1/

Can you please suggest changes to the command, if required?

Use a batch size for training and CuDNN RNN to make it faster, and include dropout to improve accuracy.

Thanks @othiele for the quick reply

Then I train the model using the command below:

  python DeepSpeech.py \
     --train_files dataset/train.csv \
     --dev_files dataset/dev.csv \
     --test_files dataset/test.csv \
     --epochs 50 \
     --learning_rate 0.0001 \
     --export_dir export1/ \
     --train_batch_size 10 \
     --dev_batch_size 10 \
     --test_batch_size 5 \
     --dropout_rate 0.15 \
     --checkpoint_dir cpoint1/

Right?

I don’t get the CuDNN part; can you please elaborate?

For --n_hidden I used the default; no need to change it?

You can change n_hidden; it didn’t make much of a difference for me, but I have 1000 hours. You could try 512.

Set

   --use_cudnn_rnn=True

for speed and set the train batch size as high as you can without getting an error. Anything higher than 1 is good, usually (4,8,16,…)

@othiele hi
I trained the model, but the WER is constant. Is there any way to decrease the WER?

Here is my training result:

    Epoch 3 |   Training | Elapsed Time: 0:00:21 | Steps: 49 | Loss: 1.624806
    Epoch 3 | Validation | Elapsed Time: 0:00:05 | Steps: 13 | Loss: 3.249443 | Dataset: 10000_data_set/dev.csv
    I Early stop triggered as (for last 4 steps) validation loss: 3.249443 with standard deviation: 0.094293 and mean: 3.145775
    I FINISHED optimization in 0:01:52.091116
    I Restored variables from best validation checkpoint at 10000_512_checkpoint/best_dev-21290, step 21290
    Testing model on 10000_data_set/test.csv
    Test epoch | Steps: 9 | Elapsed Time: 0:00:44
    Test on 10000_data_set/test.csv - WER: 0.225941, CER: 0.049616, loss: 4.468176

The command is:

  python3.6 DeepSpeech.py \
     --train_files 10000_data_set/train.csv \
     --checkpoint_dir 10000_512_checkpoint/ \
     --epochs 60 \
     --dev_files 10000_data_set/dev.csv \
     --test_files 10000_data_set/test.csv \
     --n_hidden 512 \
     --learning_rate 0.0001 \
     --export_dir 10000_512_export \
     --early_stop False \
     --use_seq_length False \
     --earlystop_nsteps 3 \
     --estop_mean_thresh 0.1 \
     --estop_std_thresh 0.1 \
     --dropout_rate 0.25 \
     --train_batch_size 80 \
     --dev_batch_size 80 \
     --test_batch_size 45 \
     --report_count 50 \
     --use_cudnn_rnn True

@elpimous_robot
Is it required to give the full path of the wav files in the CSV?

Directory structure:

wav_file_folder:

          - all_wav [10,000 wav files]
          - train.csv
          - test.csv
          - dev.csv

All in the same directory.
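One way to sidestep the question is to rewrite the CSVs so that they hold absolute paths. A minimal sketch, assuming the usual wav_filename,wav_filesize,transcript column layout and the directory names from the tree above:

```python
import csv
import os

def absolutize_csv(csv_in, csv_out, wav_dir):
    """Copy csv_in to csv_out, turning wav_filename into absolute paths."""
    with open(csv_in, newline="") as f_in, open(csv_out, "w", newline="") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            name = os.path.basename(row["wav_filename"])
            row["wav_filename"] = os.path.abspath(os.path.join(wav_dir, name))
            writer.writerow(row)

# Example: absolutize_csv("wav_file_folder/train.csv",
#                         "wav_file_folder/train_abs.csv",
#                         "wav_file_folder/all_wav")
```

With absolute paths in the CSV, training works no matter which directory you launch DeepSpeech.py from.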