No CUDA load on the GPUs with DeepSpeech-GPU 0.7.4

Dear Support,

I have installed the GPU version of DeepSpeech 0.7.4.
The NVIDIA driver, CUDA, and cuDNN information is below.

Fri Jul 24 15:00:15 2020       
| NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|   0  GeForce RTX 208...  Off  | 00000000:06:00.0  On |                  N/A |
| 31%   35C    P2    64W / 250W |    311MiB / 10986MiB |      0%      Default |
|   1  GeForce RTX 208...  Off  | 00000000:41:00.0 Off |                  N/A |
| 33%   38C    P2    62W / 250W |    164MiB / 10989MiB |      0%      Default |
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|    0      2732      G   /usr/bin/X                                    78MiB |
|    0      4126      G   /usr/bin/gnome-shell                          68MiB |
|    0      6441      C   python3                                      153MiB |
|    1      6441      C   python3                                      153MiB |

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:17_PST_2019
Cuda compilation tools, release 10.1, V10.1.105

$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 5

#include "driver_types.h"
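The cuDNN check above can also be done programmatically. Here is a small sketch (a hypothetical helper, not part of DeepSpeech or CUDA) that parses the version defines out of cudnn.h-style header text:

```python
import re

def parse_cudnn_version(header_text):
    """Extract (major, minor) from '#define CUDNN_MAJOR ...' style lines."""
    versions = {}
    for name in ("CUDNN_MAJOR", "CUDNN_MINOR"):
        m = re.search(r"#define\s+%s\s+(\d+)" % name, header_text)
        if m:
            versions[name] = int(m.group(1))
    return versions.get("CUDNN_MAJOR"), versions.get("CUDNN_MINOR")

# The defines pasted above, as they appear in the header:
header = """\
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 5
"""
print(parse_cudnn_version(header))  # (7, 5)
```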

Still, as shown above, neither GPU is taking any CUDA load.

TensorFlow = 1.15.2

Where is the mistake I am making, or what am I overlooking? I tried compiling a CUDA sample, which worked fine. I have also tried cuDNN 7.6, but the situation is the same.

I don’t have a crystal ball:

It’s not even clear what you are doing: inference? training?

There are two python3 processes in your nvidia-smi output above. What are they?

How?
0.7.4 relies on CUDA 10.0; all the information you pasted refers to 10.1. If you want help, please start by describing what you are doing and your setup.
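The mismatch being pointed out can be sketched like this (the required version here is taken from the thread: DeepSpeech 0.7.4's TensorFlow build expects CUDA 10.0, while the machine has 10.1; the helper below is illustrative, not a real API):

```python
REQUIRED_CUDA = (10, 0)   # what DeepSpeech 0.7.4 (TensorFlow 1.15) was built against
INSTALLED_CUDA = (10, 1)  # from the `nvcc --version` output above

def cuda_compatible(required, installed):
    # TensorFlow wheels link against one specific CUDA minor version, so an
    # exact match is needed -- CUDA 10.1 does not satisfy a 10.0 build, and
    # TensorFlow silently falls back to CPU, which is why nvidia-smi shows 0%.
    return required == installed

print(cuda_compatible(REQUIRED_CUDA, INSTALLED_CUDA))  # False
```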

And I’m sorry to say this, @Tortoise, but you have been on this forum for a long time now, and I find it extremely frustrating that every time you ask for support you are unable to provide a clear view of your setup, your goals, and the problem you are facing.

@lissyx and @othiele Thank you for the kind help. I tried to give as much information as I could. Thank you for pointing out CUDA 10.0; I was confused because I thought TensorFlow 1.15.2 expects 10.1. Anyway, this was a great help and it resolves the issue. Thank you so much.

Just read the docs; the supported CUDA versions are in there …

Yet you have not answered any of our questions. I can only deduce you were working on training. But you also mentioned deepspeech-gpu twice, which is the name of the inference package.

Please understand that having so many unknowns when you are seeking help makes it close to impossible to be useful to you, and to others who might hit the same issue in the future.

@lissyx Oh, sorry, I forgot to mention that here: both python processes are the same DeepSpeech run. Because of the wrong CUDA version, it was not getting enough compute and never reached full utilization. But now it is working normally. Sorry again, and thank you for the kind help.