Local voice control using Snips/Mycroft/Snowboy/etc

In this thread it is mentioned that there is no interest in adding voice control beyond the web input experiment.

I was curious why this is. To me it seems that voice control, when done right, is a useful and intuitive way of controlling a smart home.

Furthermore, strategically it could be powerful to position the Mozilla Gateway as a system that explicitly rejects the surveillance business model that comes with Alexa, Cortana and the other cloud-connected assistants. For me the only way I will allow voice control into my home is if the system is not cloud connected at all.

It seems open source, cloudless voice control systems are having their moment, with wonderful hardware and software popping up all over the place. It would seem strange to me if Mozilla did not embrace this opportunity.

Hi @buttonmash,

What I said in that thread was that “There are currently no plans to use the experimental smart assistant UI of the gateway as a front end for another voice assistant” (by “another voice assistant”, I was referring to Mycroft).

Note that in addition to the smart assistant experiment on the gateway, we are also exploring local voice control through the Things Controller experiment built on Android Things (see a photo of it running on a RasPad here). See also this experiment by @andrenatal which uses Snips.ai to do local wake word detection, speech to text and intent parsing on Android.
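For anyone curious what a fully local pipeline like the one in that experiment looks like, here is a minimal Python sketch of the three stages it chains together (wake word detection, speech to text, intent parsing) feeding into a smart home action. The stage classes below are hypothetical placeholders I made up for illustration, not the Snips.ai or Android APIs; a real integration would swap in whichever local engines you actually use.

```python
# Minimal sketch of a fully local voice-control pipeline:
# wake word -> speech to text -> intent parsing -> action.
# The stage classes are hypothetical placeholders, not real Snips.ai
# or Android APIs; swap in your preferred on-device engines.

from dataclasses import dataclass


@dataclass
class Intent:
    name: str    # e.g. "turn_on"
    slots: dict  # e.g. {"device": "kitchen light"}


class WakeWordDetector:
    """Placeholder: blocks until the hot word is heard on the microphone."""
    def wait_for_wake_word(self) -> None:
        input("Press Enter to simulate the wake word... ")


class SpeechToText:
    """Placeholder: records an utterance and transcribes it on-device."""
    def transcribe(self) -> str:
        return input("Type an utterance to simulate local STT: ")


class IntentParser:
    """Placeholder: maps a transcript to a structured intent, all locally."""
    def parse(self, transcript: str) -> Intent:
        action = "turn_on" if "on" in transcript.lower() else "turn_off"
        return Intent(name=action, slots={"utterance": transcript})


def handle_intent(intent: Intent) -> None:
    # A real system would call out to the gateway (e.g. its Web Thing API)
    # here instead of printing.
    print(f"Would execute {intent.name} with slots {intent.slots}")


def main() -> None:
    detector, stt, parser = WakeWordDetector(), SpeechToText(), IntentParser()
    while True:
        detector.wait_for_wake_word()      # 1. hot word detection
        transcript = stt.transcribe()      # 2. local speech to text
        intent = parser.parse(transcript)  # 3. local intent parsing
        handle_intent(intent)              # 4. act on the smart home


if __name__ == "__main__":
    main()
```

The point of the sketch is simply that no stage needs a network connection: audio, transcript and intent all stay on the device, which is what makes this approach compatible with the "no cloud" requirement above.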

The gateway software itself is intended to run on smart home hub and smart router type hardware, which doesn't have a microphone or screen (a voice assistant requires quite a different hardware and software stack).

We also have a dedicated speech & language assistants team at Mozilla who are working on a related voice browser project.


Thanks for the explanation! I’m glad I misunderstood 🙂