Scoring or evaluating inference of a model trained with DeepSpeech

I would like to know whether it’s possible to get a score, say 0% to 100% precision, when running inference with a model trained with DeepSpeech. Thanks.

Hi! I don’t know of a “native” solution, but as a workaround you can try util/text.py.
You need to prepare labels (reference transcripts) for all the files you run inference on, then capture the DeepSpeech output and pass both to wer_cer_batch(originals, results).
Hope this helps! I’d like to know about a built-in solution too 🙂
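For illustration, here is a rough sketch of that workaround. It assumes you run it from a DeepSpeech checkout so that util/text.py is importable, and that wer_cer_batch(originals, results) returns a (WER, CER) pair; check your checkout, since the helper has changed between versions. The transcripts below are just examples:

```python
# Sketch of the workaround above. Assumes a DeepSpeech checkout on the
# path so that util/text.py is importable, and that wer_cer_batch
# returns a (wer, cer) pair (verify against your version of the repo).
from util.text import wer_cer_batch

# Reference transcripts (the "labels") for each audio file.
originals = [
    "experience proves this",
    "why should one halt on the way",
]

# Transcripts captured from `deepspeech` inference on the same files,
# in the same order as the references.
results = [
    "experience proof this",
    "why should one hold on the way",
]

wer, cer = wer_cer_batch(originals, results)
print("WER: {:.2%}, CER: {:.2%}".format(wer, cer))
```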

Thanks, so it’s necessary to have the reference transcription of each audio file I run inference on, in order to calculate the WER (Word Error Rate), right?

Yes, otherwise you wouldn’t know what the true transcription should have been.
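For context, the standard definition: if aligning the hypothesis against the reference transcription requires $S$ word substitutions, $D$ deletions, and $I$ insertions, and the reference has $N$ words, then

$$\mathrm{WER} = \frac{S + D + I}{N}$$

so a WER of 0 means a perfect transcription, and it can exceed 1 if the hypothesis contains many extra words.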

What you describe seems close to what I’m about to finish in https://github.com/mozilla/DeepSpeech/pull/1854.

With this solution, is it still necessary to have the transcription for each audio file that I run inference on?

I don’t see how you can expect to evaluate accuracy without having the known-good transcription.


Some object detection frameworks (e.g. Facebook’s Detectron) give an estimate with each inference, as in this image:

[image: Detectron output with per-detection confidence scores]

I would like a similar estimate for DeepSpeech inferences. Is that possible if I don’t have the transcription of the audio I want to transcribe?

This is something very different from what was discussed above: now you want the confidence of the decoding. We don’t yet have any API that exposes that.


Thanks! That’s what I wanted to know.

Sir, if we are getting good inference results (say 99% accuracy) and I also have the predefined reference transcripts, how can I check the WER for my inference results?

I succeeded, thanks @nene.

Hello, can you tell me how to do it?
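In case it helps later readers, here is a minimal, self-contained sketch that doesn’t depend on util/text.py. It computes the word-level Levenshtein distance between each reference and hypothesis and divides by the total number of reference words; the transcript pairs below are made-up examples:

```python
def word_edit_distance(ref_words, hyp_words):
    """Levenshtein distance between two word sequences
    (substitutions, deletions, and insertions each cost 1)."""
    prev = list(range(len(hyp_words) + 1))
    for i, r in enumerate(ref_words, 1):
        curr = [i]
        for j, h in enumerate(hyp_words, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1]

def wer(references, hypotheses):
    """Corpus-level WER: total word edits / total reference words."""
    edits = sum(word_edit_distance(r.split(), h.split())
                for r, h in zip(references, hypotheses))
    words = sum(len(r.split()) for r in references)
    return edits / words

# Made-up example pair: reference transcript vs. DeepSpeech output.
references = ["experience proves this"]
hypotheses = ["experience proof this"]
print("WER: {:.2%}".format(wer(references, hypotheses)))  # 33.33%
```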