Hi! I want to train a model to recognize Spanish. Since there is not much data available, I have thought of using transfer learning from English. I have a couple of questions.
I have seen several transfer learning branches, but I am not sure how to use them. Are there any docs about them?
Is it really worth trying transfer learning from English to Spanish? I do not know if the sounds are close enough to be helpful.
If the transfer learning branches are not what I am looking for, I have thought of removing the last layer and adding a new one. Has anybody done that before, or is there some kind of guide? I am not sure how to do it within DeepSpeech, since there is a lot of code.
Thanks a lot for your help
lissyx
I guess @josh_meyer should be able to help specifically on that, but he's traveling right now.
I tried transfer learning from English to German: after removing the last 2 layers and fine-tuning with a learning rate of 0.0001, I was able to bring the WER down from 11.7% to 9.4%.
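For reference, the invocation for such a run looks roughly like this. This is a sketch only: the exact flag names come from the DeepSpeech transfer-learning branch and may differ between versions, and all file paths are placeholders.

```shell
# Sketch: fine-tune an English checkpoint on German data,
# re-initializing the last 2 layers. Flag names may vary by version;
# paths are placeholders.
python3 DeepSpeech.py \
  --checkpoint_dir /model/english_checkpoint \
  --drop_source_layers 2 \
  --alphabet_config_path data/alphabet_de.txt \
  --train_files clips/train.csv \
  --dev_files clips/dev.csv \
  --test_files clips/test.csv \
  --learning_rate 0.0001
```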
Apart from Common Voice, there is https://www.openslr.org/75/. Those are ~180 h in total.
I’ll add Data Augmentation once I learn how to use DeepSpeech (visualization, correct hyperparams…) and then try Transfer Learning.
Be careful with the TEDx dataset, it has a lot of grammar mistakes, and caito did not converge for me.
About the crowdsourced data, I think it needs silence trimming.
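For trimming that silence, something like `sox` or `librosa` would be the usual tool; just to illustrate the idea, here is a minimal, hypothetical energy-based trimmer over raw int16 samples (not part of DeepSpeech, names and threshold are my own):

```python
# Minimal energy-based silence trimmer for 16-bit mono PCM samples.
# Illustrative sketch only; for real data you would likely use sox
# or librosa's trim instead.

def trim_silence(samples, threshold=500):
    """Drop leading and trailing samples whose absolute amplitude
    stays below `threshold` (on the int16 scale)."""
    start = 0
    end = len(samples)
    while start < end and abs(samples[start]) < threshold:
        start += 1
    while end > start and abs(samples[end - 1]) < threshold:
        end -= 1
    return samples[start:end]

# Low-amplitude samples at both ends are removed.
audio = [3, -10, 2, 900, -1200, 700, 5, -2]
print(trim_silence(audio))  # [900, -1200, 700]
```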
Did you substitute ä => ae and so on? If not, how were you able to restore the weights from the English model, given that the German alphabet has more letters and therefore more nodes in the last layer?
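That substitution is simple to script. A hypothetical normalizer (the mapping table and function name are my own, not from DeepSpeech) that rewrites German transcripts so they fit the English alphabet could look like:

```python
# Hypothetical transcript normalizer: replace German umlauts and ß
# with ASCII digraphs so the English alphabet can be reused unchanged.

UMLAUT_MAP = {
    "ä": "ae", "ö": "oe", "ü": "ue", "ß": "ss",
    "Ä": "Ae", "Ö": "Oe", "Ü": "Ue",
}

def asciify(transcript):
    """Return the transcript with every umlaut/ß replaced."""
    for src, dst in UMLAUT_MAP.items():
        transcript = transcript.replace(src, dst)
    return transcript

print(asciify("grüße aus münchen"))  # gruesse aus muenchen
```

With all transcripts normalized this way, the output layer keeps the same size as the English model and the checkpoint restores cleanly.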
I am aware of this parameter. I tried to load the English model with drop_source_layers = 2, but restoring the weights fails because the number of nodes in the last layer doesn't match (the German alphabet being bigger). Have you had a different experience?
That's interesting: my German alphabet includes 3 umlaut characters, but I did not experience any problems with transfer learning. I'll have a closer look and try to report my findings tomorrow.
Thanks, that would be very helpful, because I suspect that the drop_source_layers parameter only drops the weights after they have been initially loaded. The initial loading doesn't work if the network (or the alphabet, in that sense) deviates.
This is the concrete error when using the drop_source_layers flag:
deepspeech_asr_1 | E InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
deepspeech_asr_1 | E
deepspeech_asr_1 | E Assign requires shapes of both tensors to match. lhs shape= [2048,33] rhs shape= [2048,29]
deepspeech_asr_1 | E [[node save/Assign_32 (defined at DeepSpeech.py:448) ]]
deepspeech_asr_1 | E
deepspeech_asr_1 | E The checkpoint in /model/model.v0.5.1 does not match the shapes of the model. Did you change alphabet.txt or the --n_hidden parameter between train runs using the same checkpoint dir? Try moving or removing the contents of /model/model.v0.5.1.