Loading pre-trained DeepSpeech model on GPU for inference!

Hi,

I apologize in advance if my question is silly; I have almost zero experience with TensorFlow. For my project, I need to use DeepSpeech v0.4.1. I downloaded the model from here. However, I cannot use it to do inference on my GPU using the code [here](https://raw.githubusercontent.com/carlini/audio_adversarial_examples/master/classify.py).

Precisely, I want to do the following: load the above model (which seems to have been trained on CPU) onto the GPU, so I can use the GPU for inference. The code I shared works properly when everything is on the CPU, but I have no idea how to move the model from CPU to GPU. I played with the code using `with tf.device("gpu:0")`, but I got the following error:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Cannot assign a device for operation strided_slice: node strided_slice (defined at /audio_adversarial_examples/tf_logits.py:33) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0 ]. Make sure the device specification refers to a valid device. [[strided_slice]]
```
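
For reference, the change I tried looks roughly like this (just a sketch, not the actual classify.py code; `build_deepspeech_graph` and the checkpoint path are placeholders):

```python
import tensorflow as tf

# Rough sketch of what I tried: pin graph construction to the GPU before
# restoring the checkpoint. build_deepspeech_graph() and the checkpoint
# path below are placeholders, not the real code from classify.py.
with tf.device("/device:GPU:0"):
    logits = build_deepspeech_graph()  # placeholder for the real graph-building code

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "path/to/checkpoint")  # placeholder path; this is where the error is raised
```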

I’m sorry if I shouldn’t have asked this question here. I tried to look for a working solution elsewhere but didn’t find any. I’m a PyTorch person, and I don’t understand why such a transfer is so tricky…

I really appreciate your help. :slight_smile:

Thanks a lot!

This is a TensorFlow question rather than a DeepSpeech question, so you might have better luck on Stack Overflow. That said, the error message seems to indicate that your TensorFlow installation does not have GPU support.
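
For example, you can ask TensorFlow directly which devices it sees (plain TensorFlow 1.x calls, nothing DeepSpeech-specific):

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# With a working tensorflow-gpu install, a /device:GPU:0 entry should be listed here.
print(device_lib.list_local_devices())

# Should print True if TensorFlow can actually use a GPU.
print(tf.test.is_gpu_available())
```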

Yes. You’re right, and I’m sorry for that.

I did install tensorflow-gpu successfully, and it works on the GPU pretty well.

It would be great if you could point me to the particular sections of the code responsible for loading a CPU-trained model onto the GPU for inference. That is exactly what I need, and as far as I understand, you should have something like that in the code.

thanks

The error you posted, coming directly from TensorFlow, contradicts that:

There’s no such thing in the code. TensorFlow places operations on the GPU automatically unless you explicitly pin them to a specific device. Our Python inference code just lets that automatic behavior do its thing, and our native inference clients have separate builds for the CPU-only and GPU-enabled versions.
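
As a rough illustration (not our actual inference code), restoring the checkpoint without any `tf.device` pin is all that is needed; with a GPU-enabled TensorFlow build the ops end up on the GPU by themselves, and `allow_soft_placement` lets ops without a GPU kernel fall back to the CPU. The paths below are placeholders:

```python
import tensorflow as tf

# Rough illustration, not the actual DeepSpeech inference code.
# No tf.device(...) anywhere: automatic placement puts ops on the GPU when one is available.
config = tf.ConfigProto(allow_soft_placement=True,   # ops without a GPU kernel fall back to CPU
                        log_device_placement=True)   # log where each op was placed

with tf.Session(config=config) as sess:
    # "path/to/model" is a placeholder for wherever the checkpoint lives.
    saver = tf.train.import_meta_graph("path/to/model.meta")
    saver.restore(sess, "path/to/model")
    # ...run the inference fetches as usual; no explicit device assignment needed.
```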