Checkpoints and frozen model in fine-tuning

I am going through the documentation to retrain the existing deepspeech-0.5.1-models with my own data, where only the released checkpoints are used to retrain the model. I downloaded the checkpoint directory, and my training command looks like this:

 python3 -u DeepSpeech.py \
   --train_files "/home/dev_ds/deepspeech_dir/corpus/corpus-train.csv" \
   --dev_files "/home/dev_ds/deepspeech_dir/corpus/corpus-dev.csv" \
   --test_files "/home/dev_ds/deepspeech_dir/corpus/corpus-test.csv" \
   --alphabet_config_path "/home/dev_ds/deepspeech_dir/deepspeech-0.5.1-models/alphabet.txt" \
   --lm_binary_path "/home/dev_ds/deepspeech_dir/my-model/lm.binary" \
   --lm_trie_path "/home/dev_ds/deepspeech_dir/my-model/trie" \
   --checkpoint_dir "/home/dev_ds/deepspeech_dir/deepspeech-0.5.1-checkpoint/" \
   --train_batch_size 2 \
   --learning_rate 0.000001 \
   --export_dir "/home/dev_ds/deepspeech_dir/my-model/"

I have two questions:

  1. Should I use the checkpoints (--checkpoint_dir) as well as the frozen graph (--initialize_from_frozen_model) from the existing model to adapt it and get better accuracy?[1]

  2. What is the difference between --checkpoint_dir and --source_model_checkpoint_dir?[2]

The documentation you linked to is for master; if you’re using v0.5.1 you should look at the documentation for v0.5.1: https://github.com/mozilla/DeepSpeech/tree/v0.5.1
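
If it helps, you can check out the v0.5.1 tag in your local clone so the code and docs you read match the release you are fine-tuning from (the clone path here is just an example):

    cd /home/dev_ds/deepspeech_dir/DeepSpeech   # adjust to wherever your clone lives
    git fetch --tags
    git checkout v0.5.1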

--initialize_from_frozen_model no longer exists in v0.5.1.
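
If you’re ever unsure whether a flag exists in the version you have checked out, a quick grep over the source tree will tell you (run from the root of your DeepSpeech clone):

    # If these print nothing, the flags are not defined in your checkout:
    grep -rn "initialize_from_frozen_model" --include="*.py" .
    grep -rn "source_model_checkpoint_dir" --include="*.py" .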

--source_model_checkpoint_dir does not exist in v0.5.1. I think that’s a flag from the transfer learning branches, so it’s only useful if you’re doing transfer learning to a different alphabet, and it requires checking out one of those branches.
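
For reference, a run on one of the transfer-learning branches would look roughly like the sketch below. Treat it purely as an illustration: every path is a placeholder and the exact flag set depends on the branch, so check that branch’s flags before copying anything.

    # Sketch only: these flags come from the transfer-learning branch, not v0.5.1,
    # and every path below is a placeholder.
    python3 -u DeepSpeech.py \
      --source_model_checkpoint_dir /path/to/deepspeech-0.5.1-checkpoint/ \
      --checkpoint_dir /path/to/new-language-checkpoints/ \
      --alphabet_config_path /path/to/new-alphabet.txt \
      --train_files /path/to/new-train.csv \
      --dev_files /path/to/new-dev.csv \
      --test_files /path/to/new-test.csv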