Guidelines on early stopping and the number of epochs to train when fine-tuning

I am working with the 0.7.0 release, fine-tuning with the VoxForge data, and I have some questions about the number of epochs to run and about early stopping. I read in the docs that --noearly_stop was used during all phases of training DeepSpeech at Mozilla. Why was this done?

I am running:

python DeepSpeech.py \
  --n_hidden 2048 \
  --checkpoint_dir exportmodel2/ \
  --epochs 100 \
  --train_files bin/voxforge/voxforge-train.csv \
  --dev_files bin/voxforge/voxforge-dev.csv \
  --learning_rate 0.00001 \
  --scorer_path models/deepspeech-0.7.0-models.scorer \
  --train_cudnn \
  --use_allow_growth \
  --train_batch_size 32 \
  --dev_batch_size 32
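
For reference, if I am reading flags.py in 0.7.0 correctly, early stopping is off by default and would be enabled by adding something like the following to the command above (the es_epochs / es_min_delta values here are just my guesses for illustration, not recommended settings):

python DeepSpeech.py <same flags as above> --early_stop --es_epochs 10 --es_min_delta 0.05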

Right around epoch 22 I see that the validation loss started going up, and it has been increasing fairly consistently since. Would it not make sense to use early stopping here?

Epoch 19 | Training | Elapsed Time: 0:21:27 | Steps: 1305 | Loss: 5.887315
Epoch 19 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 15.531167 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 20 | Training | Elapsed Time: 0:21:20 | Steps: 1305 | Loss: 5.650174
Epoch 20 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 15.506124 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 21 | Training | Elapsed Time: 0:21:17 | Steps: 1305 | Loss: 5.431311
Epoch 21 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 15.463296 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 22 | Training | Elapsed Time: 0:21:13 | Steps: 1305 | Loss: 5.212523
Epoch 22 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 15.707750 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 23 | Training | Elapsed Time: 0:21:11 | Steps: 1305 | Loss: 5.003987
Epoch 23 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 15.759955 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 24 | Training | Elapsed Time: 0:21:08 | Steps: 1305 | Loss: 4.797262
Epoch 24 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 16.000812 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 25 | Training | Elapsed Time: 0:21:05 | Steps: 1305 | Loss: 4.619324
Epoch 25 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 15.981936 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 26 | Training | Elapsed Time: 0:21:04 | Steps: 1305 | Loss: 4.457188
Epoch 26 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 16.065967 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 27 | Training | Elapsed Time: 0:21:00 | Steps: 1305 | Loss: 4.265630
Epoch 27 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 15.963606 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 28 | Training | Elapsed Time: 0:20:59 | Steps: 1305 | Loss: 4.089587
Epoch 28 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 16.152604 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 29 | Training | Elapsed Time: 0:20:57 | Steps: 1305 | Loss: 3.941048
Epoch 29 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 16.370993 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 30 | Training | Elapsed Time: 0:20:58 | Steps: 1305 | Loss: 3.790779
Epoch 30 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 16.272458 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 31 | Training | Elapsed Time: 0:20:54 | Steps: 1305 | Loss: 3.654684
Epoch 31 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 16.494969 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 32 | Training | Elapsed Time: 0:21:03 | Steps: 1305 | Loss: 3.546278
Epoch 32 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 16.530435 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 33 | Training | Elapsed Time: 0:21:06 | Steps: 1305 | Loss: 3.392584
Epoch 33 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 16.641926 | Dataset: bin/voxforge/voxforge-dev.csv

Epoch 34 | Training | Elapsed Time: 0:20:58 | Steps: 1305 | Loss: 3.275460
Epoch 34 | Validation | Elapsed Time: 0:00:04 | Steps: 11 | Loss: 17.125124 | Dataset: bin/voxforge/voxforge-dev.csv
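
To sanity-check my reading of the log, here is a minimal sketch in plain Python (my own toy patience loop, not DeepSpeech's actual implementation) of early stopping applied to the dev losses above. With a patience of 5 epochs and no minimum delta (both values picked arbitrarily for illustration), it would have stopped the run after epoch 26, since the best dev loss (15.463296 at epoch 21) never improved afterwards:

# Toy patience-based early-stopping check (not DeepSpeech's actual code);
# dev_losses maps epoch -> validation loss, copied from the log above.
dev_losses = {
    19: 15.531167, 20: 15.506124, 21: 15.463296, 22: 15.707750,
    23: 15.759955, 24: 16.000812, 25: 15.981936, 26: 16.065967,
    27: 15.963606, 28: 16.152604, 29: 16.370993, 30: 16.272458,
    31: 16.494969, 32: 16.530435, 33: 16.641926, 34: 17.125124,
}
patience = 5      # arbitrary: epochs to wait for an improvement
min_delta = 0.0   # arbitrary: minimum decrease that counts as an improvement
best = float("inf")
stale = 0         # epochs since the last improvement
for epoch, loss in sorted(dev_losses.items()):
    if loss < best - min_delta:
        best, stale = loss, 0
    else:
        stale += 1
        if stale >= patience:
            print(f"stop after epoch {epoch}: best dev loss {best:.6f} "
                  f"has not improved for {patience} epochs")
            break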

I do understand that there is no exact science to this and that it depends on several factors, but I wanted some clarification, especially because --noearly_stop was used during the hyperparameter-tuning phase of DeepSpeech.
Any input is appreciated.
Thanks!