I am trying to train a DeepSpeech model across multiple machines, each with a single GPU. Up to v0.4.0 there was support for training with distributed TensorFlow, but as of v0.5.0 this feature appears to be gone and I cannot find any documentation about it.
Is distributed training still supported, perhaps on a different branch? I remember reading on this forum that cluster-mode development was held back in recent releases.
I would appreciate any guidance.