Question about DeepSpeech Transfer Learning

“Too bad” and “a little” are not really helpful. Please share proper figures; you might be misled by individual examples.

In my view, you need to share more context on what you have tested. Fine-tuning requires work: maybe your learning rate is too high, maybe you need to tune dropout, maybe you need to train for more or fewer epochs. Have you had a look at the train/dev loss evolution?

30-plus hours from YouTube might not be the best source for training. How do you get the transcripts, and what is their quality?

And I don’t find 0.29 bad for 30 hours from YouTube.

And you probably have catastrophic forgetting due to bad data.


I have run evaluate.py using the same checkpoint as downloaded from the release (no fine-tuning).

Results:

Test on …/deepspeech-0.6.1-models/audio/vlsi/test.csv - WER: 0.525011, CER: 0.311230, loss: 111.989403

After fine-tuning on audio files extracted from this YouTube playlist:
https://www.youtube.com/playlist?list=PLCmoXVuSEVHlEJi3SwdyJ4EICffuyqpjk

I downloaded the above playlist plus one more, then divided the audio into chunks, creating 16,500 audio samples (13k/2k/1.5k for train/dev/test).
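For reference, the download step could look roughly like this (a sketch only, assuming youtube-dl with ffmpeg available for the WAV conversion; the output template and subtitle flags are illustrative):

# Grab the audio and the channel's English subtitles for each video.
youtube-dl \
  --extract-audio --audio-format wav \
  --write-sub --sub-lang en --sub-format srt \
  -o 'audio/vlsi/%(playlist_index)s-%(title)s.%(ext)s' \
  'https://www.youtube.com/playlist?list=PLCmoXVuSEVHlEJi3SwdyJ4EICffuyqpjk'

The fine-tuning command: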

python3 DeepSpeech.py \
  --n_hidden 2048 \
  --checkpoint_dir checkpoints/deepspeech-0.6.1-checkpoint/ \
  --epochs 50 \
  --train_files ../deepspeech-0.6.1-models/audio/vlsi/train.csv \
  --dev_files ../deepspeech-0.6.1-models/audio/vlsi/validate.csv \
  --test_files ../deepspeech-0.6.1-models/audio/vlsi/test.csv \
  --learning_rate 0.00001 \
  --use_cudnn_rnn=true \
  --use_allow_growth=true \
  --lm_binary_path ../deepspeech-0.6.1-models/lm.binary \
  --lm_trie_path ../deepspeech-0.6.1-models/trie \
  --noearly_stop \
  --dropout_rate 0.15 \
  --export_dir exported_model/vlsi2 \
  --train_batch_size 64 \
  --dev_batch_size 64 \
  --test_batch_size 64

Results:

Test on …/deepspeech-0.6.1-models/audio/vlsi/test.csv - WER: 0.203588, CER: 0.105375, loss: 38.082546

Using a random set of audio files:
Pre-trained model results:

Test on …/deepspeech-0.6.1-models/audio/fluent_speech/csv/test.csv - WER: 0.263332, CER: 0.125334, loss: 12.215734

Fine-tuned model results:

Test on …/deepspeech-0.6.1-models/audio/fluent_speech/csv/test.csv - WER: 0.478590, CER: 0.276625, loss: 24.773346

If you need any other info, please let me know 🙂

Hi @othiele,
I have shared the link to the YouTube playlist. I used the transcripts provided by the channel and did some pre-processing.
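For context, a minimal sketch of the kind of pre-processing that usually means (my own illustration, not the exact script; it assumes the standard English alphabet.txt of lowercase letters, space and apostrophe):

import re

def normalize_transcript(text: str) -> str:
    # Lowercase so the text only uses characters present in alphabet.txt.
    text = text.lower()
    # Digits should really be spelled out ("2" -> "two") before this step;
    # here anything outside a-z, space and apostrophe is simply dropped.
    text = re.sub(r"[^a-z' ]+", " ", text)
    # Collapse the whitespace runs left behind by the substitution.
    return re.sub(r"\s+", " ", text).strip()

print(normalize_transcript("Hello, World! It's VLSI time."))
# hello world it's vlsi time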

And I don’t find 0.29 bad for 30 hours from YouTube.

Yeah, I did some hyperparameter tuning as well and got a WER of 0.20 (please refer to my last comment).
But the problem is that it disturbed the previous weights.

So, on your YouTube-based test set, you have 52.5% WER and 31.1% CER before fine-tuning, and 20.4% WER / 10.5% CER after fine-tuning with ~30h of data?

That indeed does look like a very nice improvement.

Well, that’s going to be the issue you have to work on.

I suspect you want more than just those validation and test sets if you want to avoid degrading quality on the previous data. Otherwise, it makes sense that the training optimizes for the new data only.

What did you change for fine-tuning? You might be “overfitting” to the new data now.

And how do you cut the videos into chunks? The videos look OK for training, as the speaker talks slowly and there isn’t much background noise.

I cut the audio using the video’s SRT file with Pydub, converted it to mono at a 16 kHz frame rate, and exported it as a WAV file.
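In case it helps others, a minimal sketch of that chunking step (assuming the third-party pydub and srt packages; the file names are placeholders):

import os
import srt
from pydub import AudioSegment

audio = AudioSegment.from_file("video.wav")
with open("video.srt", encoding="utf-8") as f:
    subtitles = list(srt.parse(f.read()))

os.makedirs("chunks", exist_ok=True)
for i, sub in enumerate(subtitles):
    # Slice the audio on the subtitle timestamps (pydub slices in ms).
    start_ms = int(sub.start.total_seconds() * 1000)
    end_ms = int(sub.end.total_seconds() * 1000)
    chunk = audio[start_ms:end_ms]
    # DeepSpeech expects 16 kHz, mono, 16-bit WAV input.
    chunk = chunk.set_channels(1).set_frame_rate(16000).set_sample_width(2)
    chunk.export(f"chunks/chunk_{i:05d}.wav", format="wav")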

I mentioned every step I did for fine-tuning in my previous comments (please refer to them).

Check the created chunks; cutting just by SRT timestamps might give you bad results.
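One rough heuristic for that check (a sketch only; it assumes DeepSpeech’s wav_filename,wav_filesize,transcript CSV layout, and the characters-per-second thresholds are guesses to tune by listening):

import csv
import wave

def duration_seconds(path):
    # Read the clip length from the WAV header.
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

with open("train.csv", newline="") as f:
    for row in csv.DictReader(f):
        dur = duration_seconds(row["wav_filename"])
        cps = len(row["transcript"]) / max(dur, 0.1)
        # Speech runs very roughly 10-20 characters per second; chunks far
        # outside that band likely have misaligned SRT timestamps.
        if dur < 0.5 or cps > 25 or cps < 4:
            print(f"check manually: {row['wav_filename']} ({dur:.1f}s, {cps:.1f} chars/s)")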

Sorry, I thought you had also changed the LM values; don’t do that for now. It is more likely your data.

OK, thanks for the suggestion. I will try to get something better than this.

No, I haven’t changed anything in the LM or trie. I am using the same LM and trie from the pre-trained model 🙂


Any suggestions from your side would be great.

I suspect you want more than just those validation and test sets if you want to avoid degrading quality on the previous data.

Sorry, I didn’t get this point. Can you please elaborate?

I don’t have your dataset, so I can’t do that work for you. I’ve already shared suggestions.

I don’t see how to say it otherwise: you are fine-tuning and using only one validation set, so your network is getting optimized for that one. That’s also why it regressed on the previous dataset.

I guess (stating that in plain English) you could go for an even lower learning rate, or put more plain English examples into the validation set, so as to alter the original weights a little less 🙂
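Concretely, a sketch of what that could look like (the general_english CSVs are placeholders for whatever original-domain data gets mixed back in, e.g. a Common Voice subset; --train_files, --dev_files and --test_files accept comma-separated lists of CSVs):

# Mix the new YouTube data with general English data and lower the
# learning rate another order of magnitude.
python3 DeepSpeech.py \
  --n_hidden 2048 \
  --checkpoint_dir checkpoints/deepspeech-0.6.1-checkpoint/ \
  --train_files ../deepspeech-0.6.1-models/audio/vlsi/train.csv,general_english/train.csv \
  --dev_files ../deepspeech-0.6.1-models/audio/vlsi/validate.csv,general_english/dev.csv \
  --test_files ../deepspeech-0.6.1-models/audio/vlsi/test.csv,general_english/test.csv \
  --learning_rate 0.000001 \
  --dropout_rate 0.15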

One thing about transfer learning on master: the checkpoints from v0.6.1 are not compatible with master, due to a bugfix in the MFCC computation code. But they will still load, just give bad results. Make sure you don’t mix those up.

Oh, now I get your points.
Thanks a lot.

True, I forgot about that. Are you referring to the upper frequency limit? Maybe in this case it’s hackable by removing it. Being able to use up-to-date transfer learning from the 0.6.1 model would likely bring more good than harm?

Haha, thanks for the suggestion.
I will try to add more audio with an American accent and will post an update for you and for other people who might face this problem.

@lissyx, @reuben, I have a couple of questions.

- Do you have a patch that addresses the incompatibility between the 0.6.1 checkpoint and the master branch with respect to transfer learning?
- What is the difference between fine-tuning with a lower learning rate (0.000001) and using transfer learning for Indian-accented English on top of the 0.6.1 checkpoint? Is one of these approaches better than the other at producing a lower WER in the resulting model?

@lissyx:

You have mentioned above that we can use the https://github.com/mozilla/DeepSpeech/tree/transfer-learning2 branch to perform transfer learning with a different alphabet.txt.

It would be helpful if you could let us know which versions of the English checkpoints are valid for the transfer-learning2 branch.

Josh is the one who knows. Also this branch has been merged into master now.


@josh_meyer, could you please point me to the English checkpoints that are compatible with transfer-learning2? I want to use a different alphabet file for training the model.

@lissyx: Is transfer-learning2 also part of v0.6.0? I can only find checkpoints for v0.6.0 or earlier.