Hi,
Apologies in advance if my question is silly; I have almost zero experience with TensorFlow. For my project, I need to use DeepSpeech v0.4.1. I downloaded the model from here. However, I cannot use it to do inference on my GPU with the code [here](https://raw.githubusercontent.com/carlini/audio_adversarial_examples/master/classify.py).
Precisely, I want to do the following: load the model above (which seems to have been trained on a CPU) onto the GPU, so I can use my GPU for inference. The code I shared works properly when everything is on the CPU, but I have no idea how to move the model from CPU to GPU. I played with the code using `with tf.device("gpu:0")`, but I got the following error:
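For reference, this is roughly the change I made (a minimal sketch, not my exact code: `get_logits` is from `tf_logits.py` in the audio_adversarial_examples repo, and the checkpoint path is a placeholder for wherever I unpacked the v0.4.1 model):

```python
import tensorflow as tf
from tf_logits import get_logits  # from the audio_adversarial_examples repo

# Pin the graph that classify.py builds to the GPU instead of the CPU.
with tf.device("/device:GPU:0"):
    new_input = tf.placeholder(tf.float32, [1, None])  # raw audio samples
    lengths = tf.placeholder(tf.int32, [1])            # sequence lengths
    logits = get_logits(new_input, lengths)

# Restore the downloaded v0.4.1 checkpoint into the GPU-pinned graph.
saver = tf.train.Saver()
sess = tf.Session()
saver.restore(sess, "deepspeech-0.4.1-checkpoint/model.v0.4.1")
```

The restore call is where the error below is raised.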
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Cannot assign a device for operation strided_slice: node strided_slice (defined at /audio_adversarial_examples/tf_logits.py:33) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0 ]. Make sure the device specification refers to a valid device. [[strided_slice]]
```
I’m sorry if this is the wrong place to ask this question; I tried to look up a working solution elsewhere but didn’t find any. I’m a PyTorch person, and I don’t understand why such a transfer is tricky here…
I really appreciate your help.
Thanks a lot!