Bad performance on checkpoint

output:-

code:-

[quote]
!CUDA_VISIBLE_DEVICES=0 python3 DeepSpeech.py \
  --n_hidden 2048 \
  --epochs 1 \
  --checkpoint_dir "/content/gdrive/My Drive/DeepSpeech/checkpoint_directory/deepspeech-0.6.1-checkpoint/" \
  --test_files "/content/gdrive/My Drive/DeepSpeech/data_directory/librivox-test-clean.csv" \
  --audio_sample_rate 16000 \
  --test_batch_size 32 \
  --dropout_rate 0.2 \
  --lm_alpha 0.75 \
  --lm_beta 1.85 \
  --learning_rate 0.0001 \
  --augmentation_freq_and_time_masking true \
  --augmentation_sparse_warp true \
  --augmentation_sparse_warp_time_warping_para 80 \
  --use_allow_growth true \
  --train_cudnn true
[/quote]

Please, stop posting without clear status and context.
There are many questions here:

  • what checkpoint?
  • what version?

etc.

You are even adding augmentation steps. We are not in your head; if you don't clearly state the problem you think you are facing, how do you expect us to know how to help you?

Looks like you are using the v0.6.1 checkpoint with training code from a different version. Don't. Run git checkout v0.6.1 before using the v0.6.1 checkpoint. The same applies to any other checkpoint version.
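For example, something along these lines should keep the training code in sync with the checkpoint. This is only a minimal sketch; the clone location and the dependency re-install step are assumptions about your setup, not part of the original post:

[quote]
# inside your DeepSpeech clone, switch the training code to the 0.6.1 release
cd DeepSpeech
git fetch --tags
git checkout v0.6.1

# re-install the Python requirements shipped with that release so they match as well
pip3 install -r requirements.txt
[/quote]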


version - 0.6.1
checkpoint - deepspeech-0.6.1-checkpoint
test dataset - Librivox test-clean

I have already done that

So you fine-tuned for only one epoch? You also don't document how much data you fine-tuned with.

Maybe:

  • not enough data
  • bad data
  • not enough epochs
  • too high learning rate
  • what is the target language?
  • what is the LM you are using?

You also add augmentation to the mix. That's really a lot of variables to debug; please try to reduce the search space.
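For instance, a reduced run could look something like the sketch below: the augmentation flags are dropped, the training and dev data are passed explicitly, and the epoch count is stated up front. The train/dev CSV paths, batch sizes and epoch count here are placeholders for illustration, not values from the original post:

[quote]
!CUDA_VISIBLE_DEVICES=0 python3 DeepSpeech.py \
  --n_hidden 2048 \
  --epochs 10 \
  --checkpoint_dir "/content/gdrive/My Drive/DeepSpeech/checkpoint_directory/deepspeech-0.6.1-checkpoint/" \
  --train_files "/path/to/your/train.csv" \
  --dev_files "/path/to/your/dev.csv" \
  --test_files "/content/gdrive/My Drive/DeepSpeech/data_directory/librivox-test-clean.csv" \
  --train_batch_size 32 \
  --dev_batch_size 32 \
  --test_batch_size 32 \
  --dropout_rate 0.2 \
  --learning_rate 0.0001
[/quote]

Once a run like this behaves as expected, the augmentation flags can be reintroduced one at a time to see whether any of them hurts the result.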