Testing phase and GPU usage

Hi all,

I am new to DeepSpeech and excited to train my first model! I am currently training on a dataset of 700k+ audio files with transcripts (70-20-10 train/dev/test split, batch size of 64). The training and validation phases went smoothly and took 8 hours in total.
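For anyone curious what a 70-20-10 split looks like in practice, here is a minimal sketch (the function name and seed are my own, not from any DeepSpeech tooling):

```python
# Hypothetical sketch: shuffling a list of (audio, transcript) entries
# and splitting it 70/20/10 into train/dev/test sets.
import random

def split_dataset(entries, seed=42):
    """Shuffle deterministically, then split 70/20/10."""
    rng = random.Random(seed)
    shuffled = entries[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.7)
    n_dev = int(n * 0.2)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_dev],
            shuffled[n_train + n_dev:])

train, dev, test = split_dataset(list(range(1000)))
print(len(train), len(dev), len(test))  # 700 200 100
```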

However, I have been stuck in the testing phase for 2 days and counting. I understand that this is because the decoding is done on the CPU. But when I checked `nvidia-smi`, the process is still holding GPU memory instead of releasing it.

Therefore I would like to check whether this behavior is normal, or whether there is a way to release the GPU during the testing phase?
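In case it helps others reproduce the check, this is roughly how I watch GPU memory. The parsing helper and the sample value are mine, purely illustrative; only the `nvidia-smi` query flags are real:

```python
# Sketch: parse the CSV output of
#   nvidia-smi --query-gpu=memory.used --format=csv,noheader
# to see how much GPU memory is still held.
import subprocess

def gpu_memory_used(csv_text):
    """Turn lines like '10500 MiB' into a list of ints (MiB per GPU)."""
    return [int(line.split()[0]) for line in csv_text.strip().splitlines()]

# On a real machine you would run:
#   out = subprocess.check_output(
#       ["nvidia-smi", "--query-gpu=memory.used",
#        "--format=csv,noheader"], text=True)
sample_output = "10500 MiB\n"  # illustrative value, not a real measurement
print(gpu_memory_used(sample_output))  # [10500]
```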


You can safely kill the testing run and just export a model from the last checkpoint. Check the docs. You would usually use far fewer than 70k files for testing; more like 1k :slight_smile:
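A rough sketch of that export, assuming DeepSpeech 0.9-style flags (`--checkpoint_dir`, `--export_dir`, `--alphabet_config_path`); all paths here are placeholders, so check the exporting section of the docs for the exact invocation:

```shell
# Export a .pb model from the most recent checkpoint, skipping testing.
python3 DeepSpeech.py \
  --checkpoint_dir /path/to/checkpoints \
  --export_dir /path/to/exported_model \
  --alphabet_config_path data/alphabet.txt
```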


Thanks for the advice!

Unfortunately, the model only output spaces during inference (e.g. " ").

I was quite disappointed, but I decided to keep trying until I get decent output. I have kept 90% of the dataset for training, 700 files for testing, and the rest for the dev set. For now, I am trying out transfer learning, which will hopefully give a satisfactory result, and I will tune the hyperparameters (currently epochs=1, batch size=64, learning rate=0.001) afterwards.
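For reference, the revised split works out roughly like this, assuming a hypothetical total of 700,000 utterances (the actual count is "700k+", so these numbers are only indicative):

```python
# Sketch of the revised split: 90% train, a fixed 700-file test set,
# and the remainder as the dev set.
def revised_split(total, train_frac=0.9, n_test=700):
    n_train = int(total * train_frac)
    n_dev = total - n_train - n_test
    return n_train, n_dev, n_test

print(revised_split(700_000))  # (630000, 69300, 700)
```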

The lower testing time definitely helps a lot!

Generally, switch over to Coqui, which is still supported; they'll help you quicker. And you will need 10-15 epochs to get results.
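If it helps, a Coqui STT training run with more epochs looks roughly like this. This is a sketch from memory, not a verified command; the CSV paths are placeholders and you should confirm the flags against the Coqui STT training docs:

```shell
# Coqui STT training with 15 epochs (flags as in coqui_stt_training).
python -m coqui_stt_training.train \
  --train_files train.csv \
  --dev_files dev.csv \
  --test_files test.csv \
  --epochs 15 \
  --checkpoint_dir ckpts/
```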