Terrible Accuracy?

Do you have an update for us? I'm having the same trouble with inaccurate results (not as bad as yours, but "hello world" often comes out as something that sounds similar yet isn't accurate at all, like "hello willed", "hello old", or "allow for").

I wish I had a better update for you. Other than that: looking at the model and our use case, it was easier for us to implement a cloud solution for the small project we were building, which was disappointing. There just isn't enough support/data for us to train something like DeepSpeech to work comparably to something like GCP services. I am planning on taking another look at this in the future (6 months-ish); hopefully I'll have more information for everyone then.

If your sentences are somewhat limited, e.g. 100k, providing your own lm.binary will improve things immensely. It’s still a mystery to me why the acoustic model and the language model are generated from the same corpora.

They aren't.

What makes you think this is the case? It is not.

The acoustic model and language model are generated from different corpora.

I wonder why the pre-trained model with the lm.binary + trie they provide returns such inaccurate results. If I create my own lm.binary with just a handful of words or sentences it works wonderfully (like here), but only for those sentences/words. If I replace that LM with the one they provide, the results make no sense again (the words make sense, but not in relation to each other, even though the provided lm.binary & trie are loaded).
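For context, this is roughly how I'm loading the models, as a minimal sketch assuming the 0.5.x Python API (the paths are placeholders, and the alpha/beta weights are just the ones from the example client):

    # Sketch, assuming the DeepSpeech 0.5.x Python bindings; paths are placeholders.
    from deepspeech import Model

    N_FEATURES = 26   # MFCC features per frame (0.5.x client default)
    N_CONTEXT = 9     # context window (0.5.x client default)
    BEAM_WIDTH = 500

    ds = Model('output_graph.pbmm', N_FEATURES, N_CONTEXT, 'alphabet.txt', BEAM_WIDTH)
    # Swap these two paths between the released lm.binary/trie and my own:
    ds.enableDecoderWithLM('alphabet.txt', 'lm.binary', 'trie', 0.75, 1.85)

    text = ds.stt(audio, 16000)  # audio: mono 16-bit 16 kHz samples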

I wonder if accuracy would improve with an acoustic model trained on the Common Voice dataset plus a different language model. Does something like this already exist as open source?

Or am I missing something and this should work fine?

I'd guess this is the case, as the WER of the 0.5.1 release model on LibriSpeech clean is 8.2%.

Given how easy it is to build a language model, I’d strongly recommend anyone who has access to a text corpus that matches their intended use case to use a custom LM.
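For example, here is a sketch of building such a custom LM with KenLM's lmplz/build_binary (plus the generate_trie tool that ships with the 0.5.x native_client), driven from Python; all file names are illustrative:

    # Sketch: assumes KenLM's lmplz/build_binary and DeepSpeech 0.5.x's
    # generate_trie binaries are on PATH; corpus.txt is your domain text.
    import subprocess

    # 1. Estimate a 5-gram ARPA language model from the text corpus.
    with open('corpus.txt') as inp, open('lm.arpa', 'w') as out:
        subprocess.run(['lmplz', '-o', '5'], stdin=inp, stdout=out, check=True)

    # 2. Convert the ARPA file into the binary format the decoder loads.
    subprocess.run(['build_binary', 'lm.arpa', 'lm.binary'], check=True)

    # 3. Build the trie the 0.5.x decoder needs alongside lm.binary.
    subprocess.run(['generate_trie', 'alphabet.txt', 'lm.binary', 'trie'], check=True)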

Our LM is created from a corpus [0] that will not necessarily match your use case.

It's good that I'm wrong. Perhaps the "How I trained my own French …" guide does not distinguish between the two, and that caused the confusion …
In any case, I checked the words.arpa of the 0.5.1 lm.binary and it contains really strange sentences from the 1800s and not-so-common words.
It would be good to emphasize that building your own lm.binary per use case improves things.

I'm also using the 0.5.1 model against LibriSpeech dev-clean, but getting an average WER of 18%, which seems high.

Please keep in mind this is an old, contributed tutorial; a lot has changed since. I don't want to dismiss @elpimous_robot's contribution, it is great :slight_smile:

How do you check that?

Which is not surprising, since LibriSpeech is based on old books.


I’m testing with LibriSpeech dev-clean, so it’s the same old books. To calculate WER, I’m using jiwer.

I’m tracking each sample like:
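Roughly like this, as a sketch (`transcribe()` and `dev_clean_samples` stand in for my actual inference setup; jiwer does the scoring):

    # Per-sample bookkeeping; transcribe() and dev_clean_samples are placeholders.
    import jiwer

    results = []
    for audio, transcript in dev_clean_samples:
        hypothesis = transcribe(audio)  # run DeepSpeech on one utterance
        results.append({
            'transcript': transcript,
            'hypothesis': hypothesis,
            'clean_wer': jiwer.wer(transcript, hypothesis),
        })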

Then averaging the clean_wer values.

They are using a different method for evaluation. Ours is consistent with others, but I don’t remember the specifics. Maybe @reuben remembers?

Lissyx, thanks my friend 😉

So it seems I’m simply calculating WER differently - is that right? https://github.com/mozilla/DeepSpeech/blob/daa6167829e7eee45f22ef21f81b24d36b664f7a/util/evaluate_tools.py#L19 seems to have a function to evaluate. But is there some clean interface?

That’s about right, you can also look at how it is used in evaluate.py. Regarding a clean interface, it’s not really meant to be exposed, so I don’t think we can guarantee that …
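To illustrate the aggregation difference (a sketch, not the actual DeepSpeech code): averaging per-sample WERs weights every utterance equally, while pooling edit distances over the whole set weights long utterances more, and the two can differ noticeably:

    # Illustrative only: two common ways to aggregate WER over a test set
    # of (reference, hypothesis) string pairs.
    import editdistance

    def mean_sample_wer(pairs):
        # WER per sample, then a plain average: short clips count as much as long ones.
        return sum(editdistance.eval(ref.split(), hyp.split()) / len(ref.split())
                   for ref, hyp in pairs) / len(pairs)

    def corpus_wer(pairs):
        # Pool edit distances and reference word counts, then divide once.
        edits = sum(editdistance.eval(ref.split(), hyp.split()) for ref, hyp in pairs)
        words = sum(len(ref.split()) for ref, _ in pairs)
        return edits / words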

The only thing that would explain the inaccuracy would be my German accent. I have an easy-to-set-up example project here which uses Angular & Node.js to record and transcribe audio. It would help me a great deal if you could see for yourself and confirm/deny my experience with the accuracy.

Well, that's not a small difference. As documented, the current pre-trained model mostly covers American English accents, so it's expected to be of lower quality with other accents.

FTR, being French, I'm also suffering from that …


Around 10,000 hours of speech data is required to create a high-quality STT model; the current model has a fraction of this. It is also not very robust to noise.

These issues will be solved over time with more data, but the current model should not be considered production-ready.

The model does achieve a <10% WER on the LibriSpeech clean test set - the key word there being "clean". It is not a test of noisy environments or accent diversity.


I am currently using the dev-clean set, so I should have similar results. As for measuring WER, I am now doing:

    def word_error_rate(self, ground_truth, hypothesis):
        # WER = word-level Levenshtein distance / number of words in the reference.
        # Requires `import editdistance` at module level.
        ground_truth_words = ground_truth.split(' ')
        hypothesis_words = hypothesis.split(' ')
        # editdistance.eval accepts any sequences; here, lists of words.
        levenshtein_word_distance = editdistance.eval(ground_truth_words, hypothesis_words)
        wer = levenshtein_word_distance / len(ground_truth_words)
        return wer

Where editdistance uses a word-level Levenshtein distance. I am now getting an average WER of ~17%. What am I doing wrong?