Fine-tuning a model results in overfitting

I have been following the process below to fine-tune my model, but it results in overfitting on every dataset I have tried:
1. I start training for 10 epochs from the DeepSpeech 0.4.1 checkpoint (rough invocation sketched after this list).
2. I observe that the validation loss decreases until the 6th epoch, then starts increasing, and early stopping is triggered at the 8th epoch.
3. I pick up the DeepSpeech 0.4.1 checkpoint again and this time train only until the 6th epoch.
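For reference, here is a minimal sketch of how I launch the fine-tuning run, wrapped in Python only for readability. The flag names are the ones I believe the DeepSpeech 0.4.1 `DeepSpeech.py` entry point accepts; the dataset paths and hyperparameter values are placeholders, not my exact setup.

```python
# Minimal sketch of the fine-tuning invocation (placeholder paths/values).
import subprocess

subprocess.run(
    [
        "python", "-u", "DeepSpeech.py",
        "--checkpoint_dir", "deepspeech-0.4.1-checkpoint",  # released 0.4.1 checkpoint dir
        "--train_files", "my_train.csv",  # placeholder dataset CSVs
        "--dev_files", "my_dev.csv",
        "--test_files", "my_test.csv",
        "--epoch", "-10",           # negative value: 10 additional epochs on top of the checkpoint (0.4.x convention)
        "--learning_rate", "0.0001",
        "--dropout_rate", "0.15",
    ],
    check=True,  # raise if training exits with an error
)
```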

Does anybody have any suggestions on how I could avoid this?