Hi,
I’m wondering what the ideal batch size is, either theoretically or as a best practice.
On the GitHub release page, I can see that train_batch_size is set to 128.
But why 128?
Is that a best practice, or simply the largest value that fits on a Quadro RTX 6000?
Since my GPU is an RTX 2080 Ti, a batch size of 128 is not feasible. Should I just pick the largest batch size that fits in memory, or should I use an accumulation-step approach (like YOLO's subdivisions) to reach an effective batch of 128, as in the sketch below?
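To clarify what I mean by the step concept, here is a minimal gradient-accumulation sketch in PyTorch. The model, dummy data, micro-batch of 32, and accumulation_steps of 4 are just placeholders I made up for illustration, not values from the repo:

```python
import torch
from torch import nn

# Placeholder model/optimizer, not the repo's actual code.
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

micro_batch, accumulation_steps = 32, 4   # 32 * 4 = 128 effective batch
optimizer.zero_grad()
for step in range(100):
    inputs = torch.randn(micro_batch, 10)          # dummy inputs
    targets = torch.randint(0, 2, (micro_batch,))  # dummy labels
    loss = criterion(model(inputs), targets) / accumulation_steps  # scale loss
    loss.backward()                                 # gradients accumulate across micro-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                            # one update per 4 micro-batches
        optimizer.zero_grad()
```

Is something like this roughly equivalent to training with batch size 128, or is it better to just use the largest single batch my GPU allows?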
Thanks in advance.