Is there a recommended batch size for DeepSpeech 0.4.1?
I’m trying to fine-tune the prebuilt checkpoint on a dataset of 30,000 audio files. I’m currently using a train batch size of 50 with a learning rate of 0.0001. Would that batch size be too big?
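
For context, here is roughly the invocation I'm using. The paths are placeholders, and the flags reflect my best reading of the 0.4.1 `DeepSpeech.py` options, so please correct me if any of this is off:

```sh
python -u DeepSpeech.py \
  --train_files /path/to/train.csv \
  --dev_files /path/to/dev.csv \
  --test_files /path/to/test.csv \
  # point at the extracted 0.4.1 release checkpoint to fine-tune from it
  --checkpoint_dir /path/to/deepspeech-0.4.1-checkpoint \
  # must match the geometry of the released checkpoint
  --n_hidden 2048 \
  --train_batch_size 50 \
  --dev_batch_size 50 \
  --test_batch_size 50 \
  --learning_rate 0.0001 \
  # my understanding is that a negative --epoch trains that many
  # additional epochs on top of the loaded checkpoint
  --epoch -3
```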
Unrelated question: is it normal to run multiple epochs over the same dataset when fine-tuning?