Training parameters for Librispeech-clean dataset

Hello,

I would like to train the system from scratch on Librispeech-clean (train-clean-100.tar.gz, train-clean-360.tar.gz, train-other-500.tar.gz). What parameters should I use to get the same results as the pre-trained model?

Currently I am using the parameters below (the "Hyperparameters for fine-tuning") and trying to train on a simple example (ldc93s1):

python -u DeepSpeech_unidirectional.py --train_files ./data/ldc93s1/ldc93s1.csv --dev_files ./data/ldc93s1/ldc93s1.csv --test_files ./data/ldc93s1/ldc93s1.csv --n_hidden 2048 --train_batch_size 12 --dev_batch_size 8 --test_batch_size 8 --epoch 13 --learning_rate 0.0001 --display_step 10000 --validation_step 1 --dropout_rate 0.2367 --default_stddev 0.046875 --checkpoint_step 1 --log_level 0 --checkpoint_dir ./models/checkpoints/dummy/

However, the results are not good. Which parameters should I use in each case (ldc93s1 / Librispeech-clean)?

For ldc93s1 the following seems to work:

python -u DeepSpeech.py --train_files ./data/ldc93s1/ldc93s1.csv --dev_files ./data/ldc93s1/ldc93s1.csv --test_files ./data/ldc93s1/ldc93s1.csv --summary_dir . --train_batch_size 1 --dev_batch_size 1 --test_batch_size 1 --n_hidden 494 --epoch 50 --checkpoint_dir ./models/checkpoints/dummy/

But what about Librispeech-clean?

Found it:

python -u DeepSpeech.py \
  --train_files "$COMPUTE_DATA_DIR/librivox-train-clean-100.csv,$COMPUTE_DATA_DIR/librivox-train-clean-360.csv,$COMPUTE_DATA_DIR/librivox-train-other-500.csv" \
  --dev_files "$COMPUTE_DATA_DIR/librivox-dev-clean.csv,$COMPUTE_DATA_DIR/librivox-dev-other.csv" \
  --test_files "$COMPUTE_DATA_DIR/librivox-test-clean.csv,$COMPUTE_DATA_DIR/librivox-test-other.csv" \
  --train_batch_size 12 \
  --dev_batch_size 12 \
  --test_batch_size 12 \
  --learning_rate 0.0001 \
  --epoch 15 \
  --display_step 5 \
  --validation_step 5 \
  --dropout_rate 0.30 \
  --default_stddev 0.046875 \
  --checkpoint_dir "$checkpoint_dir" \
  "$@"
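Note that the command above references the shell variables $COMPUTE_DATA_DIR and $checkpoint_dir, which must be set before it runs. A minimal sketch of a wrapper that sets them (the paths here are placeholders I chose for illustration, not the project's actual layout):

```shell
#!/bin/sh
# Fall back to placeholder defaults when the variables are not already set.
COMPUTE_DATA_DIR="${COMPUTE_DATA_DIR:-./data/librispeech}"   # directory holding the librivox-*.csv files
checkpoint_dir="${checkpoint_dir:-./models/checkpoints/librispeech}"
export COMPUTE_DATA_DIR checkpoint_dir

echo "Using data dir:       $COMPUTE_DATA_DIR"
echo "Using checkpoint dir: $checkpoint_dir"
# ...the DeepSpeech.py command from above would follow here...
```

Because the defaults use the `${var:-default}` form, exporting either variable before invoking the script overrides the placeholder.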

It might be a good idea to start with the parameters suggested in the release README; see the "Hyperparameters for fine-tuning" section.