Thanks, that's something I didn't have in mind.
You're right: if even compiling for more than the latest CUDA compute capability is too much to ask (in TensorFlow or Torch), why walk an extra mile on top and implement multi-GPU support? I know I don't pay the makers of that software, so I have no reason to complain. But it's somehow frustrating to see that not even basic things, like a wider compute capability range, are maintained properly in the fundamental parts (frameworks?) like Torch and TensorFlow. Sure, in theory everyone can compile for the compute capability they need, they just have to find their way across the "dependency hell" of countless library subversions and more (before I got the GTX 750 I was trying to get a Quadro K5000 working with Torch or TensorFlow and failed miserably).
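To make the compatibility problem concrete, here is a minimal sketch of the check that effectively decides whether a card can use a prebuilt wheel. The GPUs mentioned above have known compute capabilities (GTX 750 / Maxwell GM107 is 5.0, Quadro K5000 / Kepler GK104 is 3.0); the minimum of 3.7 used below is an assumption standing in for whatever floor a given prebuilt PyTorch or TensorFlow release was compiled with. When building from source you can widen that range yourself, e.g. with PyTorch's `TORCH_CUDA_ARCH_LIST` environment variable.

```python
def meets_minimum(capability, minimum=(3, 7)):
    """Return True if a GPU's (major, minor) CUDA compute capability
    is at or above the minimum a prebuilt framework wheel supports.
    Tuples compare lexicographically, which matches major.minor order."""
    return capability >= minimum

# Cards from this thread (compute capabilities per NVIDIA's GPU tables):
print(meets_minimum((5, 0)))  # GTX 750 (Maxwell GM107, CC 5.0) -> True
print(meets_minimum((3, 0)))  # Quadro K5000 (Kepler GK104, CC 3.0) -> False
```

This is exactly why the K5000 fails with stock wheels while the GTX 750 scrapes by: the Kepler card falls below the compiled-in floor, so the only options are an old framework release or a from-source build with the capability added.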
Long story short: for now it looks like Google Colab, or buying some GPU time from a hosting provider, is the only way to get a trained voice on a budget that doesn't sound like "Admiral Chainsmoker" or worse.
By the way, @mrthorstenm: huge thanks for the voice you created. It's the best German non-Microsoft TTS voice by far. Maybe I'm wrong, but I think I hear a tiny bit of "Saarbrigger Platt" in that voice; correct me if I'm wrong.