Performance (training time) of single vs. dual GPU is the same

Dear Support,

I previously had a single GPU, and now I have two. Both GPUs are using all of their memory, with about 85% volatile utilization.
I have installed fresh drivers, but training performance with two GPUs is almost the same as with one, while I expected roughly double the throughput. Is there a setting in DeepSpeech I need to point at or configure, or is there something else I am overlooking?

The job shows the same PID on both GPUs.
tensorflow_gpu = 1.15
GPU: 2× RTX 2080 Ti
CUDA 10.2
NVIDIA driver 440.82
OS: CentOS 7.4
DeepSpeech 0.6.1

Any guidance would be appreciated.

I don’t think you are using your GPUs at all: upstream TensorFlow documents CUDA 10.0 for this version, not 10.2.
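To illustrate the mismatch, here is a minimal sketch of the compatibility check being made. The `REQUIRED_CUDA` table and the `cuda_matches` helper are illustrative names, not part of TensorFlow or DeepSpeech; the version pairs are taken from TensorFlow's published tested build configurations.

```python
# Hypothetical helper: check an installed CUDA toolkit version against the
# one a given TensorFlow release was built and tested with upstream.

# Subset of TensorFlow's tested build configurations (TF version -> CUDA).
REQUIRED_CUDA = {
    "1.14": "10.0",
    "1.15": "10.0",
    "2.0": "10.0",
    "2.1": "10.1",
}

def cuda_matches(tf_version: str, installed_cuda: str) -> bool:
    """Return True if the installed CUDA version is the one this
    TensorFlow release is documented to work with."""
    return REQUIRED_CUDA.get(tf_version) == installed_cuda

# The poster's setup: tensorflow_gpu 1.15 with CUDA 10.2.
print(cuda_matches("1.15", "10.2"))  # False -- TF 1.15 expects CUDA 10.0
```

With a mismatched toolkit, the prebuilt `tensorflow_gpu` wheel typically fails to load the CUDA libraries and silently falls back to CPU, which would explain identical training times with one or two GPUs.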