Links to pretrained models

You are using the wrong model. As written above and in the readme, the new model (ending in .pb) is not compatible with DeepSpeech anymore. You have to use the old model ending in .pbmm.
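
For reference, loading the memory-mapped model with the deepspeech Python package looks roughly like this (a minimal sketch; the file names are placeholders and it assumes the 0.9.x client):

    import wave
    import numpy as np
    from deepspeech import Model

    # Load the .pbmm acoustic model (the .pb graph will not work here)
    model = Model("deepspeech-0.9.3-models.pbmm")
    model.enableExternalScorer("deepspeech-0.9.3-models.scorer")

    # DeepSpeech expects 16 kHz, mono, 16-bit PCM audio
    with wave.open("audio_16k_mono.wav", "rb") as wf:
        audio = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

    print(model.stt(audio))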

Oh, thank you so much. However, if I want to use the latest Spanish model, Quartznet15x5 D8CV (WER: 10.0%), do I need to install Quartznet? I've been reading, but when I search on the Internet for how to install Quartznet, I only find instructions for installing NeMo. NeMo is the library used to run Quartznet, right?

No, you just need to install tflite + ds_ctcdecoder. See the inference example, which is linked in the first paragraph of the usage chapter:

You can find a short and experimental inference example here
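
Just to illustrate the two dependencies (a rough sketch; the exact package and module names can differ between versions, so treat them as assumptions and follow the linked example):

    # Quick check that both dependencies are importable.
    # tflite_runtime is the lightweight TFLite interpreter package;
    # ds_ctcdecoder provides the CTC beam search decoder used with the scorer.
    import tflite_runtime.interpreter as tflite
    from ds_ctcdecoder import Alphabet, Scorer, ctc_beam_search_decoder

    print("tflite + ds_ctcdecoder available")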

It is not an error; please read the message carefully. It just says your CPU supports more instructions than what we built the library with. It's harmless.

Oh, thank you so much. I installed your examples and they worked with the English model, but when I tried the Spanish model with a slightly longer audio file, it shows me an error. How could I load a bigger file?
ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s)

A very good tutorial
https://medium.com/@klintcho/creating-an-open-speech-recognition-dataset-for-almost-any-language-c532fb2bc0cf

A longer audio file should only result in more memory usage. From your error message it seems that the audio file might be broken or in the wrong format.
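
One possible cause of that exact numpy error is a stereo (two-dimensional) file where mono is expected. A quick way to inspect and fix the file (just a sketch, using the soundfile library as an extra dependency):

    import soundfile as sf

    data, sample_rate = sf.read("audio.wav")
    print("shape:", data.shape, "sample rate:", sample_rate)

    # A stereo file has shape (num_samples, 2); the model expects mono 16 kHz.
    if data.ndim == 2:
        data = data.mean(axis=1)  # downmix to mono
    if sample_rate != 16000:
        print("resample to 16 kHz, e.g. with sox or ffmpeg")

    sf.write("audio_mono.wav", data, sample_rate)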

Hi,
Is this project still alive? I can't find any releases since December 2020, in French or in English, on these pages.
I'm asking because I was not totally convinced by the quality of the model (especially in French), so I took a four-month break from this project, and I'm surprised to see there are no new releases.


Contributions are welcome; I have asked for help on the French model many times. I am not working on this at all anymore, so I can only work on the French model in my spare time. And my spare time has been negative for months.

Dear @LucieDevGirl, first of all thanks for your interest in DeepSpeech! The project certainly isn't dead; in fact, a grants programme for DeepSpeech will be announced very soon. In terms of the different models, there has been no release since December, but a lot of people have been working on models for different languages. If you're interested, we'd be happy to discuss how you can participate on Mozilla's Matrix. I haven't been working on French because it is already a very well-resourced language, but I'd be happy to help out with support etc.


How can I contribute beyond giving data on Common Voice? I would be happy to contribute to a more efficient French model!


Let’s talk on Matrix! :slight_smile:

I think the problem is memory, because when the audio is short I can load the file.

If I want to use your Spanish model, I only need to put the path of the model in checkpoint_file, right? But I have a question about the code in testing_tflite.py: the acoustic model is in English. Do I need an acoustic model for Spanish, or only the language model in Spanish? I've understood that the acoustic model covers the pronunciation and the language model covers the grammar, but the pronunciation in Spanish is different from English.

Yes, you need both models.
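
To illustrate with the standard deepspeech (or deepspeech-tflite) client, just as a sketch with placeholder file names: both the acoustic model and the scorer have to come from the same Spanish release, you can't mix the English acoustic model with a Spanish scorer.

    from deepspeech import Model

    # Both files must be for Spanish (placeholder names).
    model = Model("spanish_acoustic_model.tflite")          # Spanish acoustic model
    model.enableExternalScorer("spanish_language_model.scorer")  # Spanish scorer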

Has anybody found a Japanese language model?

Maybe it can be trained on this 2,000-hour corpus: https://github.com/laboroai/LaboroTVSpeech

The Mozilla Common Voice corpus for Japanese is very small (26 hours, 639 MB).

Hello @ftyers, is it possible to update your link? I'm interested in testing your Wolof model if possible :wink:

Did anybody train the acoustic model on 8 kHz audio?
Would it be possible to share it for testing?

I trained an Esperanto model with 720 hours of data and a WER of around 20-30% using Coqui AI, which is backwards compatible with DeepSpeech. I documented the work in this repo: https://github.com/parolteknologio/stt-esperanto

You can also find some experimental scorers and Colab notebooks there, including a notebook to create subtitles in Esperanto using AutoSub. We also have a small website: https://parolteknologio.github.io/

Edit: There is also an Esperanto Vosk model that can be used in many tools, such as Kdenlive, to create subtitles: https://alphacephei.com/vosk/models
It has an impressive WER of 8.28% and is very usable.
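
For anyone who wants to try the Vosk model outside Kdenlive, transcription with the vosk Python package looks roughly like this (a sketch; the model directory name is an assumption, check the models page for the current Esperanto archive):

    import json
    import wave
    from vosk import Model, KaldiRecognizer

    # Path to the unpacked Esperanto model directory (name is an assumption)
    model = Model("vosk-model-small-eo-0.42")

    wf = wave.open("audio_16k_mono.wav", "rb")
    rec = KaldiRecognizer(model, wf.getframerate())

    while True:
        data = wf.readframes(4000)
        if len(data) == 0:
            break
        rec.AcceptWaveform(data)

    print(json.loads(rec.FinalResult())["text"])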


Here it is: https://models.omnilingo.cc/wo/
Sorry for the delay, I didn’t see the post until now!