Hi all,
I am new to DeepSpeech and I am excited to train my first model! I am currently training on a dataset of 700k+ audio files and transcripts (70/20/10 train/dev/test split, batch size of 64). The training and validation phases went smoothly and took about 8 hours in total.
However, I have been stuck in the testing phase for 2 days and counting. I understand this is because decoding is done on the CPU, but when I check `nvidia-smi`, the process is still holding on to GPU memory instead of releasing it.
So I would like to check: is this behavior normal, or is there a way to release the GPU during the testing phase?
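In case it helps clarify what I'm asking, would something like the sketch below be a reasonable workaround, i.e. resuming only the test phase from the checkpoint with the GPU hidden so decoding runs entirely on the CPU? The use of CUDA_VISIBLE_DEVICES and the exact flag names are my assumptions from the docs, not something I've verified.

```bash
# Assumption: hiding all GPUs via CUDA_VISIBLE_DEVICES forces TensorFlow onto the CPU,
# and passing only --test_files makes DeepSpeech.py skip training and run just the test phase
# from the existing checkpoint.
CUDA_VISIBLE_DEVICES="" python DeepSpeech.py \
  --checkpoint_dir /path/to/checkpoints \
  --test_files /path/to/test.csv \
  --test_batch_size 64
```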
Thanks!