Save checkpoint from an old epoch

On DeepSpeech 0.4.1, I overtrained my model (train, validation, and test loss all began to diverge). Is there a way to save a checkpoint from an earlier epoch, or do I have to restart training to get the optimal checkpoint?

Unfortunately no. By default, checkpoints are saved every 10 minutes, which for big datasets means you won’t have checkpoints from past epochs.

I am facing a similar issue. I’m using 0.7.4, and after fine-tuning only the final model is saved. So far I haven’t found any way to keep the intermediate fine-tuning checkpoints. I tried reducing checkpoint_secs, but no luck. Please help me with saving the checkpoints.

Please search before you post. There is a flag, max_to_keep, that controls how many checkpoints are retained, so you could keep them all :slight_smile:
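For anyone finding this later, here is a rough sketch of how those two flags could be combined when launching training; the dataset paths and checkpoint directory are placeholders, so adjust them for your setup:

```shell
# Save a checkpoint more often (every 60 seconds instead of the default 600),
# and keep up to 50 of them instead of pruning down to the most recent few.
# --checkpoint_dir is where the checkpoint files accumulate.
python DeepSpeech.py \
  --train_files data/train.csv \
  --dev_files data/dev.csv \
  --test_files data/test.csv \
  --checkpoint_dir my_checkpoints/ \
  --checkpoint_secs 60 \
  --max_to_keep 50
```

With enough checkpoints retained, you can later point `--checkpoint_dir` (or `--load_checkpoint_dir` on newer versions) at a copy containing the checkpoint from the epoch you want.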

Thanks a lot. It worked.
