Training and inference on different resources

Hi everyone. Is it possible to train DeepSpeech on a GPU and run inference on a CPU? If so, how can it be done? I couldn't find any resource on this. Thanks in advance.

There is nothing specific to do.

As long as you don't pick the CUDA variants of the binaries or bindings, it will run on the CPU.
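As a minimal sketch of what this looks like in practice: DeepSpeech ships both a GPU package (`deepspeech-gpu`) and a plain CPU package (`deepspeech`) on PyPI, and the exported model file works with either. The file names below (`output_graph.pbmm`, `audio.wav`) are just illustrative placeholders.

```shell
# On the training machine: install the CUDA-enabled package
pip install deepspeech-gpu

# On the inference machine: install the plain CPU package instead
pip install deepspeech

# The exported model file is the same either way;
# run inference on the CPU with the standard CLI:
deepspeech --model output_graph.pbmm --audio audio.wav
```

The key point is that the choice of GPU vs. CPU is made by which binary/binding you install, not by anything in the model itself.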

Thanks for your response, @lissyx . Do you have any specific resource to learn how to do that?