I’d like to train a TTS model in an Indigenous language whose orthography doesn’t have a supported phone set. Would I be able to convert the training data from the language’s orthography into its IPA representation and train on that? Example below:
wavFile1|həloʊ wɜːld
wavFile2|siː spɑːt ɹʌn
wavFile3|tədeɪ ɪz ɐ naɪs deɪ
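In case it helps show what I mean, here's a rough sketch of how I'd generate those IPA lines, assuming a simple grapheme-to-IPA mapping table (the mapping entries and file names below are just placeholders, not my actual language data):

```python
# Rough sketch: convert orthographic transcripts into IPA metadata lines.
# GRAPHEME_TO_IPA is a placeholder; a real orthography would need its own
# rules, including any multi-character graphemes.

GRAPHEME_TO_IPA = {
    "a": "ɐ",
    "e": "ɛ",
    "sh": "ʃ",   # example of a digraph
    # ... rest of the orthography
}

def orthography_to_ipa(text: str) -> str:
    """Greedy longest-match conversion of orthographic text to IPA."""
    result = []
    keys = sorted(GRAPHEME_TO_IPA, key=len, reverse=True)
    i = 0
    while i < len(text):
        for g in keys:
            if text.startswith(g, i):
                result.append(GRAPHEME_TO_IPA[g])
                i += len(g)
                break
        else:
            result.append(text[i])  # pass through spaces, punctuation, unknowns
            i += 1
    return "".join(result)

# Write an LJSpeech-style metadata file: one "wav_id|ipa_text" line per clip.
with open("orthography_transcripts.txt", encoding="utf-8") as src, \
     open("metadata.csv", "w", encoding="utf-8") as dst:
    for line in src:
        wav_id, text = line.rstrip("\n").split("|", 1)
        dst.write(f"{wav_id}|{orthography_to_ipa(text)}\n")
```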
If that’s the case, would I set the following values in the config.json file?
"use_phonemes": false, // use phonemes instead of raw characters. It is suggested for better pronounciation.
"phoneme_language": "en-us",
Any help is appreciated!