Infinite loss while training a model

I have started training on a voice dataset of 27,000 files, and already in epoch 0 some files cause an infinite loss.
Do I have to stop the training, remove those files, and run the training again, or will the loss vary and drop over the next epochs?
Also, if I do stop the training, can I load the previous checkpoints to continue, or would it be better to start all over again?
Below are the command and the output that I get:

python3 DeepSpeech.py \
  --train_files in-texts/train.csv \
  --dev_files in-texts/dev.csv \
  --test_files in-texts/test.csv \
  --checkpoint_dir in-texts/checkpoints \
  --export_dir in-texts/exported-model \
  --checkpoint_secs 1800 \
  --max_to_keep 3 \
  --epochs 120 \
  --alphabet_config_path in-texts/alphabet-g.txt
I Could not find best validating checkpoint.
I Could not find most recent checkpoint.
I Initializing all variables.
I STARTING Optimization
Epoch 0 | Training | Elapsed Time: 0:38:05 | Steps: 546 | Loss: 2766.476558 E The following files caused an infinite (or NaN) loss: /WORK/app-vol/DeepSpeech/in-texts/mono/in25710.wav
Epoch 0 | Training | Elapsed Time: 0:48:02 | Steps: 670 | Loss: inf E The following files caused an infinite (or NaN) loss: /WORK/app-vol/DeepSpeech/in-texts/mono/in2790.wav
Epoch 0 | Training | Elapsed Time: 1:13:56 | Steps: 992 | Loss: inf E The following files caused an infinite (or NaN) loss: /WORK/app-vol/DeepSpeech/in-texts/mono/in4907.wav
Epoch 0 | Training | Elapsed Time: 1:14:00 | Steps: 993 | Loss: inf E The following files caused an infinite (or NaN) loss: /WORK/app-vol/DeepSpeech/in-texts/mono/in4909.wav
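
In case I do have to remove them, this is roughly how I plan to filter train.csv before restarting. It is just a rough sketch, assuming train.csv uses the standard DeepSpeech wav_filename column and lists the same absolute paths that appear in the log above:

import pandas as pd

# Paths reported as causing an infinite (or NaN) loss in the log above
bad_files = {
    "/WORK/app-vol/DeepSpeech/in-texts/mono/in25710.wav",
    "/WORK/app-vol/DeepSpeech/in-texts/mono/in2790.wav",
    "/WORK/app-vol/DeepSpeech/in-texts/mono/in4907.wav",
    "/WORK/app-vol/DeepSpeech/in-texts/mono/in4909.wav",
}

# Drop the rows whose wav_filename matches one of the bad clips
df = pd.read_csv("in-texts/train.csv")
clean = df[~df["wav_filename"].isin(bad_files)]
clean.to_csv("in-texts/train-clean.csv", index=False)
print(f"Dropped {len(df) - len(clean)} of {len(df)} rows")

I would then point --train_files at the cleaned CSV. Does that sound like the right approach, or is it unnecessary?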