What does this warning affect?


(Tibogom) #1

Tensorflow-gpu 1.6.0
DS 0.2.0a9

W tensorflow/contrib/rnn/kernels/lstm_ops.cc:849] BlockLSTMOp is inefficient when both batch_size and cell_size are odd. You are using: batch_size=7, cell_size=375


(Reuben Morais) #2

I’m not sure I can clarify it any further than what the message is saying: odd numbers can hurt performance.
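For illustration, a minimal TensorFlow 1.x sketch, assuming the contrib LSTMBlockFusedCell (which runs on the BlockLSTMOp the warning refers to): the message is only logged when both batch_size and cell_size are odd, so making either value even avoids it.

```python
# Hypothetical sketch, assuming TensorFlow 1.x with tf.contrib available.
# BlockLSTMOp (used by LSTMBlockFusedCell) warns when BOTH batch_size and
# cell_size are odd; here both are even, so no warning is logged.
import tensorflow as tf

time_steps, batch_size, input_dim = 50, 8, 26    # even batch_size
cell_size = 376                                  # even cell_size (375 with an odd batch would warn)

inputs = tf.placeholder(tf.float32, [time_steps, batch_size, input_dim])  # time-major input
cell = tf.contrib.rnn.LSTMBlockFusedCell(num_units=cell_size)
outputs, final_state = cell(inputs, dtype=tf.float32)
```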


(Tibogom) #3

Yes, but then why is the default value batch_size = 1?


(Reuben Morais) #4

What does this have to do with your original message? 1 is the only reasonable default value, because the batch size you can use is a function of how many resources (memory) your machine has, and only each user can know that.


(Tibogom) #5

Thank you for your answer and patience. How can I calculate the batch size from RAM or GPU memory? For example, with 16 GB of RAM and 11 GB of GPU memory.


(Reuben Morais) #6

Unfortunately there’s no way to calculate it directly; you have to experiment a bit. It also depends on your dataset: if you have very long audio files, that reduces the maximum batch size you can use. As a reference, we use train_batch_size = 24 on TITAN Xp GPUs (12 GB) with the Fisher + Switchboard + LibriSpeech datasets, and dev and test batch sizes of 48.
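Since there is no formula, one practical approach is a quick search: start from a batch size you hope will fit and back off until a short run completes without an out-of-memory error. A rough sketch, assuming the DeepSpeech.py flag names from around v0.2 (--train_files, --train_batch_size, etc.); the CSV paths are placeholders, so check the flags against your checkout before relying on this.

```python
# Hypothetical helper, not part of DeepSpeech: try batch sizes from large to
# small and keep the first one that survives a short training run.
import subprocess

for batch_size in (32, 24, 16, 8, 4, 2, 1):
    cmd = [
        "python", "DeepSpeech.py",
        "--train_files", "train.csv",              # placeholder paths
        "--dev_files", "dev.csv",
        "--test_files", "test.csv",
        "--train_batch_size", str(batch_size),
        "--dev_batch_size", str(2 * batch_size),   # eval holds no gradients, so larger batches fit
        "--test_batch_size", str(2 * batch_size),
        "--epoch", "1",                            # newer releases rename this flag to --epochs
    ]
    if subprocess.call(cmd) == 0:                  # a non-zero exit here is usually an OOM
        print("largest working train_batch_size:", batch_size)
        break
```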