Distributed Training on Multiple Machines, Multiple GPUs

Hi,
I am not sure how to start a training job on multiple machines. The --worker_hosts flag is no longer valid in the latest version, 0.8.2.

Thanks and Regards,

Short answer: it doesn’t work any longer. As of 0.8 there is no built-in --worker_hosts flag; the cluster is described in code instead.

Long answer: please use the search feature before you post; the same question came up with detailed answers just a few days ago.
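
For reference, a minimal sketch of the 0.8-era setup, assuming the tf.train.ClusterSpec / tf.train.Server API that shipped with the distributed runtime in 0.8. The hostnames, ports, job_name, and task_index below are placeholders; in practice each process would receive its own values (for example through flags you parse yourself in the script).

```python
import tensorflow as tf

# Placeholder host:port pairs -- substitute your real machines.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2222"],
})

# Illustrative values; each process runs the same script with its
# own job name and task index.
job_name = "worker"
task_index = 0

# Every process starts an in-process server for its job/task.
server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

if job_name == "ps":
    # Parameter servers only host variables and block here.
    server.join()
else:
    # Pin variables to the parameter servers, ops to this worker.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task_index,
            cluster=cluster)):
        counter = tf.Variable(0, name="counter")
        increment = counter.assign_add(1)

    # Connect a session to this worker's server target.
    with tf.Session(server.target) as sess:
        sess.run(tf.initialize_all_variables())
        print(sess.run(increment))
```

To use a specific GPU on a worker machine, the worker_device string can include a device suffix, e.g. "/job:worker/task:0/gpu:0".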