As in the subject: is it possible to select which GPU is used when running inference with the deepspeech-gpu package, as in evaluate_tflite.py? Or is there no fine-grained configurability when using the package?
I would like to use one GPU for inference with an exported model while simultaneously using the second GPU for training a new, improved model.
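For reference, something like the sketch below is what I have in mind. It assumes the native deepspeech-gpu library, being TensorFlow-based, honours the standard CUDA_VISIBLE_DEVICES environment variable when it is set before the library initialises CUDA; the model and audio file names are just placeholders:

```python
import os

# Make only GPU 1 visible to this process. This must happen before
# the deepspeech import, i.e. before any CUDA context is created.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import wave
import numpy as np
from deepspeech import Model

ds = Model("output_graph.pbmm")  # placeholder path to the exported model

# DeepSpeech expects 16 kHz mono 16-bit PCM audio.
with wave.open("audio.wav", "rb") as w:
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

print(ds.stt(audio))
```

The training run would then be launched in a separate process with CUDA_VISIBLE_DEVICES=0, so the two jobs never see each other's GPU. Is this the intended way to do it, or does the package offer a more direct option?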