@mischmerz,
you’re right, this technology isn’t very useful on an RPI3.
It runs better on a small GPU board like mine (a TX2).
This was one of my research topics in the past: what is the best STT for a small, low-power board used outdoors…
I really think that, for a full model, the best approach is an online solution (Google/Bing…) accessed from a mobile device (phone).
But Lissyx and the whole team are working hard, trying to reduce the RAM and model-size requirements. They depend on what TensorFlow makes possible.
Sure, in the future, CPU requirements will go down (and boards will keep getting more powerful. Thanks, Mr Moore).
Now, I’m a long-time pocketsphinx user. I created my own French model, adapted from the LIUM one, and even after a lot of time spent improving it (adaptation), I never got results like this! I never did better than 82% accuracy; now, with the same vocabulary, I reach 95%!
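As a side note, accuracy figures like these are usually reported as 1 − WER (word error rate). A minimal sketch of how WER is computed, in Python (the sample sentences are made up for illustration, not from my tests):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits (sub/ins/del) to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the"):
# WER = 2/6, i.e. about 67% word accuracy.
print(wer("the cat sat on the mat", "the cat sit on mat"))
```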
The response (inference) time is nearly the same between pocketsphinx and DeepSpeech (Python).
But I agree, a small board like an RPI3 is too light!! (though it’s good for arcade emulation (‘recalbox’) LOL)
See U