Terrible Accuracy?

Hey guys, I used a pre-trained version of the model, specifically https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz. I have said “hello” about 15 times and each time the prediction is wildly off. Is there something I missed when reading about the pre-trained models? Obviously the other solution is to start training a fresh model. The blog post I was reading seemed to believe that this pre-trained model was strong enough to handle common words etc.

Any advice?

I think I know the blog post you’re referring to, and that number is a benchmark on very clean audio, not necessarily an indicator of real-world results.

DeepSpeech’s models are still in development and don’t have the quantity of data that a production model should have.

In particular (and I suspect this was the issue in your case), the pre-built models are not very robust to noise. This should improve over time as the model gains more data and also with DeepSpeech features like augmentation (coming in 0.6).

But you don’t need to train a model completely from scratch. You can continue training the checkpoints with your own data.
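
Roughly, that means pointing DeepSpeech.py at the released 0.5.1 checkpoint and at your own CSVs. A minimal sketch with placeholder paths; flag names differ a bit between releases, so double-check with python3 DeepSpeech.py --helpfull from a checkout of the matching tag:

import subprocess
# Placeholder paths: the extracted deepspeech-0.5.1-checkpoint directory and your own
# train/dev/test CSVs in the DeepSpeech import format (wav_filename, wav_filesize, transcript).
cmd = [
    "python3", "DeepSpeech.py",
    "--n_hidden", "2048",                                # must match the geometry of the release model
    "--checkpoint_dir", "deepspeech-0.5.1-checkpoint/",  # continue from the released checkpoint
    "--train_files", "my_data/train.csv",
    "--dev_files", "my_data/dev.csv",
    "--test_files", "my_data/test.csv",
    "--learning_rate", "0.0001",                         # a lower rate is typical when fine-tuning
    # also set the number of epochs; the flag name and semantics changed between
    # releases, so take it from the README of the version you are running
]
subprocess.run(cmd, check=True)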

Can you elaborate on your testing process? Even if @dabinat is right, the model is able to give good enough accuracy with my poor English accent. Most of our training data for English, for now, is American-accented, so this also adds some bias.

Hi there, I am a native English speaker. In terms of output from the model, I say “Hello” as clearly as I can and the output this time is “right el you hela her”. I am also getting some warnings, which I’ll post below.

* recording
* done recording
TensorFlow: v1.13.1-10-g3e0cc5374d
DeepSpeech: v0.5.1-0-g4b29b78
2019-10-15 09:27:17.689491: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-15 09:27:17.702351: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
2019-10-15 09:27:17.702375: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-10-15 09:27:17.702386: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-10-15 09:27:17.702395: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant

That does not give any information about your accent.

That does not give any context on how you perform your recording and how you run inference.

Training from the checkpoint with the Mozilla dataset, wouldn’t that cause overfitting? (I am assuming this is how they trained the current model.)

The 0.5.1 model doesn’t include Common Voice data. But Common Voice is only about 130 hours of English anyway, so you’d still need additional data.

An easy way to get more accuracy is to use the pre-trained acoustic model as you are, but provide a custom language model.
The vocabulary used to build the 0.5.1 language model contains many, many combinations which, to me, look like 1800s English, so it’s best to keep the two separate.
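
A minimal sketch of building a domain-specific LM with KenLM plus the generate_trie tool from native_client; the file names here are placeholders, corpus.txt would be one normalised, lower-cased sentence per line from your own domain, and the exact generate_trie arguments depend on the DeepSpeech version, so check its usage output:

import subprocess
# Build a 5-gram ARPA model from your domain text with KenLM, convert it to the binary
# format DeepSpeech loads, then build the trie against the same alphabet.txt the
# acoustic model was trained with.
subprocess.run(["lmplz", "--order", "5", "--text", "corpus.txt", "--arpa", "lm.arpa"], check=True)
subprocess.run(["build_binary", "lm.arpa", "lm.binary"], check=True)
subprocess.run(["generate_trie", "alphabet.txt", "lm.binary", "trie"], check=True)

You then point the client at the new lm.binary and trie instead of the ones from the release tarball (e.g. the --lm and --trie arguments of the command-line client).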

Are you sure you are using monophonic audio sampled at 16000 Hz?
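
For reference, this is roughly what the 0.5.1 Python client does, with explicit checks for mono 16 kHz 16-bit input. The paths assume the layout of the extracted release tarball, and the constants are the 0.5.1 client defaults as far as I recall:

import wave
import numpy as np
from deepspeech import Model
MODEL_DIR = "deepspeech-0.5.1-models"            # extracted from the release tarball
N_FEATURES, N_CONTEXT, BEAM_WIDTH = 26, 9, 500   # feature/beam settings of the 0.5.1 client
LM_ALPHA, LM_BETA = 0.75, 1.85                   # language model weights of the 0.5.1 client
ds = Model(MODEL_DIR + "/output_graph.pbmm", N_FEATURES, N_CONTEXT,
           MODEL_DIR + "/alphabet.txt", BEAM_WIDTH)
ds.enableDecoderWithLM(MODEL_DIR + "/alphabet.txt", MODEL_DIR + "/lm.binary",
                       MODEL_DIR + "/trie", LM_ALPHA, LM_BETA)
with wave.open("hello.wav", "rb") as fin:
    assert fin.getnchannels() == 1, "audio must be mono"
    assert fin.getframerate() == 16000, "audio must be sampled at 16 kHz"
    assert fin.getsampwidth() == 2, "audio must be 16-bit PCM"
    audio = np.frombuffer(fin.readframes(fin.getnframes()), np.int16)
print(ds.stt(audio, 16000))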

Do you have an update for us? I’m having the same trouble with inaccurate results (not as bad as yours, but “hello world” often results in something that sounds similar but isn’t accurate at all, like “hello willed”, “hello old” or “allow for”).

I wish I had a better update for you. After looking at the model and our use case, it was easier for us to implement a cloud solution for the small project we were building, which was disappointing. There just isn’t enough support/data for us to train something like DeepSpeech to work comparably to something like GCP’s services. I am planning on taking another look at this in the future (six months-ish); hopefully I’ll have some more information for everyone.

If your set of sentences is somewhat limited, e.g. 100k, providing your own lm.binary will improve things immensely. It’s still a mystery to me why the acoustic model and the language model are generated from the same corpora.

They aren’t.

What makes you think this is the case? It is not.

The acoustic model and language model are generated from different corpora.

I wonder why the pre-trained model with the lm.binary + trie they provide returns such inaccurate results. If I create my own lm.binary with just a handful of words or sentences it works wonderfully (like here), but only for those sentences/words. If I replace that LM with the one they provide, the results make no sense again (the words make sense, but not in relation to each other, even though the lm and trie are provided).

I wonder if accuracy would improve with an acoustic model trained on the Common Voice dataset + a different language model. Does something like this already exist open source?

Or am I missing something and this should work fine?

I’d guess this is the case, as the WER of the 0.5.1 release model on LibriSpeech clean is 8.2%.

Given how easy it is to build a language model, I’d strongly recommend anyone who has access to a text corpus that matches their intended use case to use a custom LM.

Our LM is created from a corpus [0] that will not necessarily match your use case.
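
If you want a quick sanity check of how well a given lm.binary fits your domain, the KenLM Python bindings can score sentences against it directly. A rough sketch with placeholder sentences:

import kenlm
lm = kenlm.Model("lm.binary")   # loads ARPA or KenLM binary files
for sentence in ["set a timer for five minutes", "hello world"]:
    # lower perplexity means the language model considers the sentence more likely
    print(sentence, lm.perplexity(sentence))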

It’s good that I’m wrong. Perhaps the “how I trained my own French …” guide does not distinguish between the two, and that caused the confusion.
In any case, I checked the words.arpa behind the 0.5.1 lm.binary and it contains really strange sentences from the 1800s and not-so-common words.
It would be good to emphasize that building your own lm.binary per use case would improve things.

I’m also running the 0.5.1 model against LibriSpeech dev-clean, but I’m getting an average WER of 18%, which seems high.
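
One thing worth checking is how you average: corpus-level WER (total word edits divided by total reference words) and a plain mean of per-file WER can differ noticeably, because short utterances dominate the mean. I don’t know off-hand which convention the release figure uses, but it’s worth computing both. A minimal sketch, assuming you have (reference, hypothesis) transcript pairs:

def word_edit_distance(ref, hyp):
    # word-level Levenshtein distance via dynamic programming
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,               # deletion
                         cur[j - 1] + 1,            # insertion
                         prev[j - 1] + (r != h))    # substitution
        prev = cur
    return prev[-1]

pairs = [("hello world", "hello willed"), ("how are you", "how are you")]  # toy data
refs = [r.split() for r, _ in pairs]
hyps = [h.split() for _, h in pairs]
corpus_wer = sum(word_edit_distance(r, h) for r, h in zip(refs, hyps)) / sum(len(r) for r in refs)
mean_wer = sum(word_edit_distance(r, h) / len(r) for r, h in zip(refs, hyps)) / len(refs)
print(corpus_wer, mean_wer)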