DeepSpeech update, grant and playbook

Voice is increasingly becoming a key interface to the internet and to technology. It’s now as natural to ask our devices a question as it is to type a query into the search bar. This only works if voice recognition technology is accessible to everyone, regardless of accent, language, gender, or age.

A few years ago, we set out to create an open source speech-to-text engine and a repository of audio files anyone could use to train new speech-based interfaces. We called the engine DeepSpeech and the repository of audio files Common Voice. Over the years, we’ve incubated and supported the growth of this technology that we feel is fundamental to making the web more accessible for everyone.

DeepSpeech is already a resounding success, thanks to the passion and brilliance of this community and the dedication of our contributors. The project incubated at Mozilla and, with your help, grew from a research project into a prototype. It has now reached the point where the next natural step is to start work on specific applications.

It is our belief that to truly flourish, the DeepSpeech project should see applications in a variety of domains. Given the diversity of those use cases, we’ve decided to support the transition of the project to the people and organizations interested in furthering use-case-based explorations.

DeepSpeech is a sophisticated project, and to help people use the codebase we’re cleaning up the documentation as we prepare to wind down Mozilla staff maintenance. In recent weeks we’ve streamlined the continuous integration processes so that DeepSpeech can be set up and run with minimal dependencies.
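For anyone who wants to kick the tires, transcribing a file with the Python package looks roughly like this. This is a minimal sketch, assuming `pip install deepspeech` and the released 0.9.3 model files have been downloaded locally; `audio.wav` is a placeholder for your own recording:

```python
import wave

import numpy as np
from deepspeech import Model

# Paths to the released 0.9.3 model files (assumed to be downloaded locally).
MODEL_PATH = "deepspeech-0.9.3-models.pbmm"
SCORER_PATH = "deepspeech-0.9.3-models.scorer"

ds = Model(MODEL_PATH)
ds.enableExternalScorer(SCORER_PATH)  # optional language model for better accuracy

# DeepSpeech expects 16-bit mono PCM at the model's sample rate
# (16 kHz for the released English models).
with wave.open("audio.wav", "rb") as wav:
    assert wav.getframerate() == ds.sampleRate(), "resample the audio first"
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(ds.stt(audio))
```

The same package also installs a `deepspeech` command-line client that accepts `--model`, `--scorer`, and `--audio` arguments, if you’d rather not write any Python at all.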

In the coming month, we’ll also publish a toolkit to help researchers, companies, and any other interested parties use DeepSpeech to build their own voice-based solutions.

  • To that effect, we’ll launch a grant program that will fund a number of initiatives aimed at demonstrating applications for DeepSpeech. We’ll prioritize projects that contribute to the core technology while also showcasing its potential to empower and enrich areas that may not otherwise have a viable route toward speech-based interaction. We’ll announce further details about the grant submission process in May, so please check back soon.

  • We’ll also publish a playbook that guides people through using DeepSpeech’s codebase as a powerful starting point, whatever direction they take it in.

We’re excited to see what people build with DeepSpeech, and we look forward to supporting your projects through the grant program in the coming months.

7 Likes

@mlopatka, I don’t think this is a viable strategy for this project. Giving grants from time to time won’t fix the little bugs that continuously arise in a complex project like this. Take the CI, or the move to TF2 …

Anybody interested in STT should consider switching to the fork maintained by most of the original developers: coqui.ai. Same developers, better service :slight_smile:

I don’t know why you don’t mention them: in contrast to you, they actively maintain their codebase … they also publish a model zoo, and check out their TTS while you’re at it.
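For what it’s worth, switching is nearly a drop-in change if I remember correctly. A rough sketch, assuming `pip install stt`; the model and scorer filenames below are placeholders for files from their model zoo, and details may have changed:

```python
# Coqui STT kept DeepSpeech's Python API, so existing code mostly
# just needs the import (and model file) swapped.
from stt import Model  # was: from deepspeech import Model

# Placeholder filenames: the real files come from the Coqui model zoo.
ds = Model("model.tflite")
ds.enableExternalScorer("kenlm.scorer")  # optional, as with DeepSpeech

# Inference is unchanged: 16-bit mono PCM at ds.sampleRate().
# text = ds.stt(audio)
```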

Disclaimer: I don’t have any connection to Coqui. I’m just sad that Mozilla fired the devs, didn’t care about the community for 8 months, and then presented a solution that won’t benefit anyone …

12 Likes

Hi @mlopatka, any update on the grants? It has been four months now and no news on the Mozilla front. We’re all eagerly awaiting news! :slight_smile:

1 Like

Given the unfinished business here relating to grants, etc., it seems worth noting that there is a bit more discussion related to this topic at

Thanks to Mozilla for supporting this project for so long, with the right kind of license, and to the developers everywhere for keeping it moving forward!

2 Likes