DeepSpeech benchmarking profile

DeepSpeech has been made available as a benchmarking profile over at OpenBenchmarking.org. It’s very handy to see what kind of performance people get with different hardware / software combos.

https://openbenchmarking.org/test/pts/deepspeech
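For anyone who wants to try it locally, running an OpenBenchmarking.org profile normally goes through the Phoronix Test Suite. A minimal sketch, assuming a standard `phoronix-test-suite` install and the profile name from the URL above:

```shell
# Install the test profile and its dependencies (downloads the model and audio sample)
phoronix-test-suite install pts/deepspeech

# Run the benchmark; results can optionally be uploaded to OpenBenchmarking.org
phoronix-test-suite benchmark pts/deepspeech
```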


Nice, are you in touch with the people handling this?

Their audio sample is 172 s long 🙂

No, I just subscribe to the Phoronix blog. But his contact info is here: https://www.michaellarabel.com/

Or you can ping username Michael on the Phoronix forum.

P.S. There’s a lot of anti-NVIDIA sentiment in the comments on the post, and I feel someone should point out that DeepSpeech is perfectly usable for inference on a CPU, but it’s not letting me comment for some reason: https://www.phoronix.com/scan.php?page=news_item&px=DeepSpeech-0.6-Released

I honestly won’t get into the lion’s cage there. It’s already pretty clear that we did pursue and test OpenCL and other runtimes. The fact is, they are still not first-class citizens of TensorFlow. Whether we like it or not, that’s the case. And we can’t solve all the problems.

Like https://www.phoronix.com/forums/forum/phoronix/latest-phoronix-articles/1144159-mozilla-releases-deepspeech-0-6-with-better-performance-leaner-speech-to-text-engine?p=1144440#post1144440 “SYCL will soon be merged”. That was already the story years back. I literally spent weeks on those runtimes, trying to get SYCL properly running with our model, running into a lot of bugs, then debugging NVIDIA’s OpenCL drivers as well as Intel’s drivers, and even VideoCore for the RPi3.

So they can’t really say we jumped into NVIDIA’s …


Oh, I didn’t mean that. I just meant there seems to be a misconception that you have to use the GPU version. The CPU version is perfectly usable for inference and, given the extra complexity of setting up CUDA, it’s probably the version most people should be encouraged to download IMO if they’re not doing training.
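To illustrate the CPU path: a rough sketch of CPU-only inference with the 0.6 release, assuming the stock pretrained model package and a 16 kHz mono WAV file (the file name `my_audio.wav` is a placeholder):

```shell
# Plain "deepspeech" is the CPU package; "deepspeech-gpu" is the CUDA variant
pip install deepspeech

# Fetch and unpack the 0.6.0 pretrained English model package
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.6.0/deepspeech-0.6.0-models.tar.gz
tar xvf deepspeech-0.6.0-models.tar.gz

# Transcribe a 16 kHz mono WAV; --lm/--trie enable the language model rescoring
deepspeech --model deepspeech-0.6.0-models/output_graph.pbmm \
           --lm deepspeech-0.6.0-models/lm.binary \
           --trie deepspeech-0.6.0-models/trie \
           --audio my_audio.wav
```

No CUDA setup is involved anywhere in that flow, which is the point.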
