I’m really glad the DeepSpeech project exists. I’ve been trying to move away from Google services recently, and one app I rely on heavily, since I’m deaf, is Live Transcribe.
I had an idea for an alternative app that would use DeepSpeech as a backend, either running on the device or on a separate server. I have very little experience with Android development or machine learning, though, so I’m not confident in my ability to build this myself, or even sure where to start.
My question is: has anyone (or a team) already started working on a project like this? If not, would anyone be interested in trying?
Thanks
Edit: I tried out the demo app from here, and it looks like it only transcribes after you’ve finished a recording, unlike Live Transcribe, which works nearly in real time. I wonder if there’s a way to adapt it for this use case.
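From poking around the docs, it looks like DeepSpeech does ship a streaming API that can return intermediate transcripts while audio is still coming in, so near-real-time output should at least be possible. Here’s a rough sketch using the Python bindings plus PyAudio for microphone capture (the model/scorer filenames are placeholders, and I’m assuming the 0.7-era API, where createStream() returns a stream object; older releases put the same calls on the Model object):

```python
# Rough streaming-transcription sketch with the DeepSpeech Python bindings.
# Model/scorer paths below are placeholders; swap in the real files.
import numpy as np
import pyaudio
import deepspeech

model = deepspeech.Model('deepspeech-model.pbmm')       # acoustic model
model.enableExternalScorer('deepspeech-model.scorer')   # optional language model

ds_stream = model.createStream()

pa = pyaudio.PyAudio()
mic = pa.open(format=pyaudio.paInt16, channels=1,
              rate=model.sampleRate(),  # DeepSpeech expects 16 kHz mono PCM
              input=True, frames_per_buffer=1024)

try:
    while True:
        # Feed raw 16-bit PCM chunks and print the running hypothesis.
        chunk = np.frombuffer(mic.read(1024), dtype=np.int16)
        ds_stream.feedAudioContent(chunk)
        print(ds_stream.intermediateDecode(), end='\r')
except KeyboardInterrupt:
    print('\nFinal:', ds_stream.finishStream())
    mic.stop_stream()
    mic.close()
    pa.terminate()
```

If I’m reading the docs right, the Android (libdeepspeech) bindings expose the same createStream / feedAudioContent / intermediateDecode calls, so a similar loop should translate to an app, and the same loop could just as easily run on a server that the phone streams audio to.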
The other thing I noticed is that it’s rather inaccurate, but I suspect that’s just down to the pre-trained model it downloaded. I’ve already directed some of my friends to contribute to Common Voice, so hopefully it keeps getting better over time!