Hi
I have trained a model on a GPU and would now like to use it for inference. Whether I leave the parameter out or pass --use_cuda false on the command line when invoking synthesize.py, it makes no difference: a GPU process is started and data is loaded into GPU memory!
Is the parameter working at all? If not, then the way it is supposed to work (how, exactly?) is at least highly misleading.
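One possible explanation (this is an assumption on my part, I have not checked the synthesize.py source): if the flag is declared with argparse's type=bool, then passing the string "false" still ends up as True, because bool() of any non-empty string is truthy. A minimal sketch of that pitfall:

```python
# Minimal sketch of a common argparse pitfall (assumption: synthesize.py may use type=bool).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--use_cuda", type=bool, default=False)

args = parser.parse_args(["--use_cuda", "false"])
print(args.use_cuda)  # prints True: bool("false") is True, since any non-empty string is truthy
```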
In any case: it should be possible, both for training and inference, to control CPU/GPU usage via the command line. This might be a feature request, then.
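In the meantime, a possible workaround (assuming the script is PyTorch-based) is to hide all GPUs from CUDA via the CUDA_VISIBLE_DEVICES environment variable when launching, e.g. `CUDA_VISIBLE_DEVICES="" python synthesize.py ...`. A small sketch of the effect:

```python
# Workaround sketch: hiding the GPUs forces CPU execution.
# The variable must be set before torch initializes CUDA (i.e. before import),
# which is why setting it on the command line is the safest option.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch
print(torch.cuda.is_available())  # False -> everything runs on the CPU
```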