How to run DeepSpeech on an existing TensorFlow installation?

I am trying to build a piece of software that runs DeepSpeech inference alongside another TensorFlow-based network.

However, when I run my software, I find that the two tasks simultaneously try to allocate large chunks of memory on all 4 of my GPUs, starving each other of memory and causing cuDNN runtime errors.

But if I run the two models in two separate processes, each with 2 visible GPUs, they run just fine.
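The two-process workaround can be sketched like this (a minimal sketch; the helper name `restrict_gpus` is mine, not part of any API):

```python
import os

def restrict_gpus(gpu_ids):
    """Limit the current process to the given GPU indices.

    CUDA_VISIBLE_DEVICES must be set before the CUDA runtime (and thus
    TensorFlow) is initialised in this process, or it has no effect.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in gpu_ids)

# e.g. in the DeepSpeech process:
restrict_gpus([0, 1])   # this process only sees GPUs 0 and 1
# and in the other model's process: restrict_gpus([2, 3])
```

Each process then sees only its own pair of GPUs, so the two allocators never compete for the same devices.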

Since the deepspeech-gpu pip package doesn’t depend on tensorflow-gpu, I assume that deepspeech-gpu bundles its own TensorFlow runtime, and that this bundled runtime is conflicting with the TensorFlow I installed manually via pip.

Is my assumption correct? If it is, is there a way to run Deepspeech on my existing Tensorflow installation?


We have test coverage to avoid that.

Have you tried the allow growth parameter, or passing GPU options to limit the amount of memory requested? Several people have reported being able to share GPUs this way, and this was unrelated to a side TensorFlow install.
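For the side model's own session, those GPU options would look roughly like this (a sketch, assuming the TF1-style session API of TF 1.15, which DeepSpeech 0.9.x is built against; the 0.4 fraction is an arbitrary example, not a recommendation):

```python
import tensorflow as tf

# Configure the side model's session so it doesn't grab all GPU memory
# up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True                    # allocate lazily
config.gpu_options.per_process_gpu_memory_fraction = 0.4  # optional hard cap

sess = tf.Session(config=config)
```

Note this only configures a session you create yourself; it does not reach inside the DeepSpeech binding.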

But honestly, with so little information, no logs, and no versions of what you have running, we can’t really do more.

As far as I know, allow growth can only be configured for a specific TF session. Is there a way, from the Python interface, to access the TF session that my DeepSpeech model is running on?
I am running inference with deepspeech-gpu 0.9.3, using the official deepspeech-0.9.3-models.pbmm model.
I ran the script, with a few lines added to include my other model.

Not when you use, but if you switch to you can. However, on Windows you might be facing other issues.

I think TensorFlow allows changing that with an env variable; you should look it up in their docs.
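The environment-variable route looks like this (a sketch; the key point is that the variable must be set before TensorFlow or the deepspeech binding is first imported, since the GPU allocator reads it at initialisation time):

```python
import os

# Set before any TensorFlow/DeepSpeech import, or it is ignored.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# import deepspeech        # imports go after the assignment
# import tensorflow as tf
```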


Thank you. Problem solved.
I simply added os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true' to that script, and no more problems occurred.
