Distributed training without GPU

Hi,

I have three high-end 32 GB VMs but no GPU. Is it possible to use them for a distributed re-training (transfer learning) process?
I went through other similar posts but was unable to understand much. I also saw the provided run-cluster.sh, but I get the feeling a GPU is needed for it.
Could someone explain how to set up distributed training to improve results?

Thanks

There’s no reason it shouldn’t work, but be aware that the speedup a GPU provides is far greater than anything you can get on a CPU, so even three high-end VMs will still be much slower.
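For CPU-only multi-machine training, a common approach (not tied to run-cluster.sh, which I haven't seen) is PyTorch's DistributedDataParallel with the `gloo` backend, which needs no GPU. Here's a minimal sketch with a toy linear model standing in for a pretrained network; the addresses, port, and model are placeholder assumptions:

```python
# Minimal sketch: CPU-only DistributedDataParallel training with the "gloo"
# backend (no GPU required). The Linear layer is a stand-in for whatever
# pretrained model you are fine-tuning.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def run(rank: int, world_size: int):
    # Rendezvous settings; on real VMs, MASTER_ADDR is the first VM's IP.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = torch.nn.Linear(10, 2)   # placeholder for your pretrained model
    ddp_model = DDP(model)           # no device_ids argument => CPU training
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for _ in range(3):               # a few toy steps with random data
        opt.zero_grad()
        out = ddp_model(torch.randn(8, 10))
        loss = loss_fn(out, torch.randn(8, 2))
        loss.backward()              # gloo all-reduces gradients here
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    # Simulates 3 workers on one machine; on your VMs you would run one
    # process per machine instead.
    torch.multiprocessing.spawn(run, args=(3,), nprocs=3)
```

Across the three VMs you would launch one process per machine, e.g. with `torchrun --nnodes=3 --nproc_per_node=1 --node_rank=<0|1|2> --master_addr=<first VM's IP> --master_port=29500 train.py`, rather than spawning locally.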