[FEATURE REQUEST] Argument "--use_cuda" in synthesize.py: not working or not as expected!?


I have trained a model on a GPU and would like to use it for inference. Even if I leave out `--use_cuda` or set it to false on the command line when invoking synthesize.py, it still starts a GPU process and loads data into GPU memory!

Is the parameter working at all? If not, then the way it is supposed to work (how?) is at the very least highly misleading.

In any case: it should be possible – both for training and inference – to control CPU/GPU usage via the command line. Might be a feature request then :wink:

You can force CPU inference in synthesize.py by mapping the checkpoint to the CPU. Around line 134, after the `# load model` comment, the checkpoint is loaded. You can change that line to:

```python
cp = torch.load(args.model_path, map_location=torch.device('cpu'))
```
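A minimal sketch of how a `--use_cuda` flag could gate device placement at load time. The helper name and defaults here are illustrative assumptions, not the project's actual code; only the flag name mirrors the one discussed above:

```python
# Hypothetical sketch: choose the torch.load map_location from a CLI flag.
# pick_map_location and the defaults below are assumptions for illustration.
import argparse

def pick_map_location(use_cuda: bool, cuda_available: bool) -> str:
    """Fall back to CPU whenever CUDA is disabled or not present."""
    return "cuda" if (use_cuda and cuda_available) else "cpu"

parser = argparse.ArgumentParser()
# store_true avoids the classic argparse pitfall where type=bool
# turns the string "false" into True
parser.add_argument("--use_cuda", action="store_true")
args = parser.parse_args([])  # empty argv here, just for demonstration

device = pick_map_location(args.use_cuda, cuda_available=False)
print(device)  # prints "cpu" unless --use_cuda is passed and CUDA exists
# cp = torch.load(args.model_path, map_location=torch.device(device))
```

With this pattern the same checkpoint loads on either device, and leaving the flag off genuinely keeps everything on the CPU.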

Thanks for this workaround @georroussos – however, shouldn’t this and various other steps be controlled by a CLI argument like `use_cuda`?