Fine-tuning vs Transfer Learning for a Latin alphabet + a few special characters

I’m trying to train DeepSpeech to recognise additional characters such as î, ă, ș. I’ve read most posts here (I believe), and it is my understanding that I can either fine-tune a pretrained model whose last layer is configured for UTF-8 output, or go for transfer learning: remove the last layer and retrain with an updated alphabet (https://deepspeech.readthedocs.io/en/v0.7.1/TRAINING.html#fine-tuning-same-alphabet).
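
For reference, my understanding of the transfer learning route (going by the transfer learning section of the same docs) is an invocation roughly like the sketch below. The paths are placeholders, and my-new-alphabet.txt would just be the stock English alphabet file with î, ă, ș added, one character per line:

    # Sketch only: drop the output layer of the English checkpoint and
    # retrain against an alphabet that includes î, ă, ș (all paths are placeholders)
    python3 DeepSpeech.py \
        --drop_source_layers 1 \
        --alphabet_config_path my-new-alphabet.txt \
        --load_checkpoint_dir path/to/deepspeech-0.7.1-checkpoint \
        --save_checkpoint_dir path/to/my-new-checkpoint \
        --train_files my-train.csv \
        --dev_files my-dev.csv \
        --test_files my-test.csv
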
My question is twofold:

  1. Firstly, the documentation states: “If you have access to a pre-trained model which uses UTF-8 bytes at the output layer you can always fine-tune, because any alphabet should be encodable as UTF-8.” I haven’t been able to identify such a pretrained model. Is the model provided in the documentation trained with UTF-8 output?
  2. Is one of these two methods preferable? Fine-tuning would make more sense to me, since I can only muster about 100 hours of training material (so far).

I’m sorry if any of these questions have an obvious answer; I have some background in computer vision, but it hasn’t helped me all that much here.

Thank you!

We don’t currently have a UTF-8 model as part of our English model releases.

Thanks for the response!