Voice is increasingly becoming a key interface to the internet and to technology. It’s now as natural to ask our devices a question as it is to type a query into the search bar. This only works if voice recognition technology is accessible to everyone, regardless of one’s accent, language, gender, or age.
A few years ago, we set out to create an open source speech-to-text engine and a repository of audio files anyone could use to train new speech-based interfaces. We called the engine DeepSpeech and the repository of audio files Common Voice. Over the years, we’ve incubated and supported the growth of this technology that we feel is fundamental to making the web more accessible for everyone.
DeepSpeech is already a resounding success, thanks to the passion and brilliance of this community and the dedication of our contributors. The project incubated at Mozilla and, with your help, grew from a research project into a prototype. It has now reached the point where the next natural step is to start work on specific applications.
We believe that to truly flourish, DeepSpeech needs to find applications in a variety of domains. Because those use cases are so diverse, we've decided to support the project's transition to the people and organizations interested in pursuing them.
DeepSpeech is a sophisticated project, so to help people use the codebase we're cleaning up the documentation as we prepare to wind down Mozilla staff maintenance. Over the last few weeks we've also streamlined the continuous-integration process so that DeepSpeech can be set up and run with minimal dependencies.
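As a rough sketch of what "up and running with minimal dependencies" can look like, the snippet below uses the DeepSpeech Python package to transcribe a short WAV file. The model and scorer file names are examples taken from a released 0.9.x checkpoint, and the audio path is a placeholder; substitute whatever files you actually download.

```python
# A minimal transcription sketch using the DeepSpeech Python package
# (pip install deepspeech). File names below are illustrative; use the
# model/scorer files you downloaded for your release.
import wave

import numpy as np
from deepspeech import Model

model = Model("deepspeech-0.9.3-models.pbmm")                  # acoustic model
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")   # optional external scorer

# DeepSpeech expects 16-bit, 16 kHz, mono PCM audio.
with wave.open("audio.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))
```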
In the coming month, we'll also publish a toolkit to help researchers, companies, and anyone else who is interested use DeepSpeech to build their own voice-based solutions.
To that end, we'll launch a grant program to fund a number of initiatives that demonstrate applications for DeepSpeech. We'll prioritize projects that contribute to the core technology while also showcasing its potential to empower and enrich areas that may not otherwise have a viable route to speech-based interaction. We'll announce further details about the grant submission process in May, so please check back for more soon.
We'll also publish a playbook that guides people through using DeepSpeech's codebase as a powerful starting point, whatever direction they choose to take it.
We're excited to see what people build with DeepSpeech, and we look forward to supporting your projects through the grant program in the coming months.