I successfully ran the fine-tuning and got better results. But when I introduced augmentation, fine-tuning gets stuck at epoch 1 (the second epoch). There is no error message; the process simply hangs at that point, and I have not been able to find the reason. Please help.
The following command does not include test or dev files because I was only debugging the process:
```
python ./DeepSpeech-0.7.4/DeepSpeech.py \
  --train_files ./iitk_data/train_40.csv \
  --learning_rate 0.0001 \
  --n_hidden 2048 \
  --load_checkpoint_dir /home/blraml/projectWorkSpace/swaraj/dsf/ds_0_7_4/deepspeech-0.7.4-checkpoint/ \
  --alphabet_config_path /home/blraml/projectWorkSpace/swaraj/dsf/ds_0_7_4/DeepSpeech-0.7.4/data/alphabet.txt \
  --scorer /home/blraml/projectWorkSpace/swaraj/dsf/ds_0_7_4/DeepSpeech-0.7.4/data/lm/kenlm.scorer \
  --epochs 10 \
  --train_batch_size 5 \
  --augment overlay[p=0.4,source=./iitk_data/test_20.csv,layers=1,snr=50:20~10] \
  --cache_for_epochs 2
```
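Since the only change is the overlay augmentation, one thing worth ruling out is a bad sample in its source CSV (a missing or empty wav file could make the loader block without printing anything). As a sanity check, a snippet like the following could verify every path in the overlay CSV; `check_overlay_csv` is a hypothetical helper I wrote for debugging, not part of DeepSpeech:

```python
import csv
import os

def check_overlay_csv(csv_path):
    """Return the wav_filename entries that are missing or empty.

    DeepSpeech training CSVs have the columns:
    wav_filename, wav_filesize, transcript.
    """
    bad = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            path = row.get("wav_filename", "")
            # Flag rows whose audio file is absent or zero bytes.
            if not path or not os.path.isfile(path) or os.path.getsize(path) == 0:
                bad.append(path)
    return bad
```

Running this against `./iitk_data/test_20.csv` would at least confirm the overlay source itself is readable before looking elsewhere.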
I tried it on high-end machines, on both CPU and GPU; same issue.
GPU: Tesla P40
DeepSpeech version: 0.7.4