Continuing training from the pre-trained model

(costas) #1

Hello,

I would like to continue training the pre-trained model (version 0.4.1) with my own data. I downloaded the checkpoints of the latest release (0.4.1) and tried to train for 8 more epochs. Training resumes at epoch 239, which is strange given that the release notes report training the model for 30 epochs.

This is the command I run for DeepSpeech (note the checkpoint and export paths are quoted with straight quotes; the curly quotes my shell pasted originally would break the command):

```
python -u DeepSpeech.py \
  --train_files /notebooks/data/LDC_corpus/final_train.csv \
  --dev_files /notebooks/data/LDC_corpus/final_dev.csv \
  --test_files /notebooks/data/LDC_corpus/final_test.csv \
  --train_batch_size 32 \
  --dev_batch_size 32 \
  --test_batch_size 32 \
  --validation_step 1 \
  --learning_rate 0.0001 \
  --alphabet_config_path /deepspeech/DeepSpeech/data/ldc93s1/models/pre-trained/models/alphabet.txt \
  --lm_binary_path /deepspeech/DeepSpeech/data/ldc93s1/models/pre-trained/models/lm.binary \
  --trie_path /deepspeech/DeepSpeech/data/ldc93s1/models/pre-trained/models/trie \
  --epoch -8 \
  --checkpoint_dir "/deepspeech/DeepSpeech/data/ldc93s1/models/pre-trained/models/deepspeech-0.4.1-checkpoint" \
  --export_dir "/deepspeech/DeepSpeech/data/ldc93s1/models/pre-trained/pre-trained_model_checkpoint/"
```

and this is the outcome:

```
I Training epoch 239...
```

Shouldn't it be epoch 31?
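My current guess, which I have not verified against the source, is that DeepSpeech derives the epoch shown in the log from the checkpoint's global step divided by the number of batches per epoch, rather than from a stored epoch counter. If that is right, replaying a checkpoint trained on a large corpus against a much smaller dataset would inflate the displayed epoch. A quick sketch of that arithmetic with entirely made-up numbers:

```python
def displayed_epoch(global_step: int, num_samples: int, batch_size: int) -> int:
    """Epoch number as (guessed) reconstructed from a checkpoint's global step.

    All inputs here are hypothetical illustration values, not taken from the
    actual 0.4.1 checkpoint or my dataset.
    """
    steps_per_epoch = num_samples // batch_size  # batches in one pass over the data
    return global_step // steps_per_epoch

# A checkpoint with a large accumulated global step, divided by the (small)
# steps-per-epoch of my corpus, lands on a large epoch number:
print(displayed_epoch(global_step=7170, num_samples=960, batch_size=32))  # 239

# The same global step against a dataset sized like the original training
# corpus would map back to an epoch count near 30:
print(displayed_epoch(global_step=7170, num_samples=7648, batch_size=32))  # 30
```

If someone can confirm whether this is actually how the epoch counter works in 0.4.1, I'd appreciate it.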

thanks
