Select GPU when using the deepspeech-gpu package

As the subject says: is it possible to select which GPU is used when running inference with the deepspeech-gpu package, similarly to evaluate_tflite.py? Or is there no fine-grained configurability when using the package?

I would like to use one GPU for inference with an exported model while simultaneously using the second GPU to train a new, improved model.

You should be able to use CUDA_VISIBLE_DEVICES just like with any CUDA program.
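For example, a minimal sketch of restricting inference to the second GPU by setting CUDA_VISIBLE_DEVICES from inside the script, before the package initializes CUDA. The model and audio paths are placeholders for your own files, and the Model constructor may take extra arguments depending on your DeepSpeech version:

```python
import os

# Expose only GPU 1 to this process, leaving GPU 0 free for training.
# Must be set before deepspeech (and thus CUDA) is imported/initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import wave
import numpy as np
from deepspeech import Model

# Hypothetical paths to your exported model and a 16 kHz mono WAV file.
model = Model("output_graph.pbmm")

with wave.open("audio.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))
```

Equivalently, you can set it from the shell without touching the code: `CUDA_VISIBLE_DEVICES=1 python infer.py`. Note that after remapping, the visible device is renumbered, so inside the process it appears as GPU 0.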
