I am running DeepSpeech training on TED-LIUM 2 and Mozilla Common Voice.
After approximately 12 hours it reached:
- Training of Epoch 0 - loss - 141.758459
I used a batch size of 8 for the training, validation, and test data, together with the `-use_warpctc` option. Apart from these, I am using the default options.
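For clarity, here is a rough sketch of the kind of invocation this setup corresponds to. The file names are placeholders, and the exact flag names (e.g. `--train_batch_size`, `--use_warpctc`) may differ between DeepSpeech versions, so treat this as an illustration of the configuration rather than the exact command used:

```shell
# Hypothetical command reconstructing the setup described above.
# CSV paths are placeholders; flag names follow Mozilla DeepSpeech's
# DeepSpeech.py but may vary across releases.
python -u DeepSpeech.py \
  --train_files tedlium_train.csv,commonvoice_train.csv \
  --dev_files tedlium_dev.csv,commonvoice_dev.csv \
  --test_files tedlium_test.csv,commonvoice_test.csv \
  --train_batch_size 8 \
  --dev_batch_size 8 \
  --test_batch_size 8 \
  --use_warpctc
```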
Approximately how long should my training take?
I am using a single GeForce GTX 1080 GPU.