Installation for Training/Transfer Learning Issues

Okay, thanks for your help so far, I really do appreciate it. Here is what I did for the setup:

sudo apt install curl
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
mkdir tmp
mkdir deepspeech-venv
sudo apt install virtualenv
virtualenv -p python3 $HOME/tmp/deepspeech-venv
cd tmp
git clone https://github.com/mozilla/DeepSpeech
source $HOME/tmp/deepspeech-venv/bin/activate
pip3 install deepspeech
pip3 install --upgrade deepspeech
cd DeepSpeech
pip3 install -r requirements.txt
pip3 install $(python3 util/taskcluster.py --decoder)
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-checkpoint.tar.gz
tar xvfz deepspeech-0.5.1-checkpoint.tar.gz
git checkout v0.5.1
git checkout -b v0.5.1
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz
tar xvfz deepspeech-0.5.1-models.tar.gz
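
A quick way to sanity-check this setup is to run the released 0.5.1 model on a short WAV with the client that was just pip-installed. This is only a suggestion, not something I actually ran: my_audio.wav below is a placeholder, and the model paths depend on where the tarball was unpacked.

deepspeech --model deepspeech-0.5.1-models/output_graph.pbmm \
           --alphabet deepspeech-0.5.1-models/alphabet.txt \
           --lm deepspeech-0.5.1-models/lm.binary \
           --trie deepspeech-0.5.1-models/trie \
           --audio my_audio.wav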

Then I ran the training script (shown further below).

Anyway, thank you very much for your guidance so far; I’ll look into the trie issue.

You should git checkout v0.5.1 here.

You don’t need that.

@lissyx Thanks so much, it’s finally working.

Output (I’ve hidden the src, but you can still see the WER):

(deepspeech-venv) lee@lee-VirtualBox:~/tmp/DeepSpeech$ ./bin/run-ASRtestt.sh
+ [! -f DeepSpeech.py ]
./bin/run-ASRtestt.sh: 3: ./bin/run-ASRtestt.sh: [!: not found
+ python -u DeepSpeech.py --checkpoint_dir /home/lee/tmp/downloads/deepspeech-0.5.1-checkpoint --train_files /home/lee/tmp/DS_TRAINING/TRAIN/train.csv --dev_files /home/lee/tmp/DS_TRAINING/DEV/dev.csv --test_files /home/lee/tmp/DS_TRAINING/TEST/test.csv --train_batch_size 1 --dev_batch_size 1 --test_batch_size 1
WARNING:tensorflow:From /home/lee/tmp/deepspeech-venv/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py:429: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use
    tf.py_function, which takes a python function which manipulates tf eager
    tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
    an ndarray (just call tensor.numpy()) but having access to eager tensors
    means `tf.py_function`s can use accelerators such as GPUs as well as
    being differentiable using a gradient tape.

WARNING:tensorflow:From /home/lee/tmp/deepspeech-venv/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py:358: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/lee/tmp/deepspeech-venv/lib/python3.6/site-packages/tensorflow/contrib/rnn/python/ops/lstm_ops.py:696: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /home/lee/tmp/deepspeech-venv/lib/python3.6/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
I Restored variables from most recent checkpoint at /home/lee/tmp/downloads/deepspeech-0.5.1-checkpoint/train-467359, step 467359
I STARTING Optimization
Epoch 0 |   Training | Elapsed Time: 0:02:27 | Steps: 1 | Loss: 25.211950
Epoch 0 | Validation | Elapsed Time: 0:01:05 | Steps: 2 | Loss: 4118.931091 | Dataset: /home/lee/tmp/DS_TRAINING/DEV/dev.csv
I Saved new best validating model with loss 4118.931091 to: /home/lee/tmp/downloads/deepspeech-0.5.1-checkpoint/best_dev-467360
Epoch 1 |   Training | Elapsed Time: 0:02:29 | Steps: 1 | Loss: 53.805149
Epoch 1 | Validation | Elapsed Time: 0:01:06 | Steps: 2 | Loss: 4633.189484 | Dataset: /home/lee/tmp/DS_TRAINING/DEV/dev.csv
Epoch 2 |   Training | Elapsed Time: 0:02:32 | Steps: 1 | Loss: 103.141228
Epoch 2 | Validation | Elapsed Time: 0:01:06 | Steps: 2 | Loss: 4865.774048 | Dataset: /home/lee/tmp/DS_TRAINING/DEV/dev.csv
Epoch 3 |   Training | Elapsed Time: 0:02:32 | Steps: 1 | Loss: 114.529778
Epoch 3 | Validation | Elapsed Time: 0:01:08 | Steps: 2 | Loss: 5047.323425 | Dataset: /home/lee/tmp/DS_TRAINING/DEV/dev.csv
I Early stop triggered as (for last 4 steps) validation loss: 5047.323425 with standard deviation: 312.041962 and mean: 4539.298208
I FINISHED optimization in 0:14:43.409375
I Restored variables from best validation checkpoint at /home/lee/tmp/downloads/deepspeech-0.5.1-checkpoint/best_dev-467360, step 467360
Testing model on /home/lee/tmp/DS_TRAINING/TEST/test.csv
Test epoch | Steps: 1 | Elapsed Time: 0:00:18
Test on /home/lee/tmp/DS_TRAINING/TEST/test.csv - WER: 0.847458, CER: 0.402985, loss: 743.186279
--------------------------------------------------------------------------------
WER: 0.847458, CER: 0.402985, loss: 743.186279
- src: "----------"
- res: "i mean there are i think about his prey was basically we were trying it long do or bow or with a fine rest in we sit and we did that he's to be risin for to express it pray me in he calls it is a goin now is i pray it up to a bout eighty three is it weak and poor to do aron strip it "
--------------------------------------------------------------------------------
+ --n_hidden 2048
./bin/run-ASRtestt.sh: 17: ./bin/run-ASRtestt.sh: --n_hidden: not found
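
By the way, the "[!: not found" message near the top of the run is just a missing space in my copy of the script: in sh, "[" is a command of its own, so the file check only works when "!" is a separate word.

[! -f DeepSpeech.py ]   # sh: "[!: not found" (no space after "[")
[ ! -f DeepSpeech.py ]  # works: "[" and "!" are separate words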

Training script:

--checkpoint_dir '/home/lee/tmp/downloads/deepspeech-0.5.1-checkpoint' \
--train_files '/home/lee/tmp/DS_TRAINING/TRAIN/train.csv' \
--dev_files '/home/lee/tmp/DS_TRAINING/DEV/dev.csv' \
--test_files '/home/lee/tmp/DS_TRAINING/TEST/test.csv' \
--train_batch_size 1 \
--dev_batch_size 1 \
--test_batch_size 1 \
--n_hidden 2048 \
--epochs 1 \
#--validation_step 1 \
--early_stop True \
--earlystop_nsteps 6 \
--estop_mean_thresh 0.1 \
--estop_std_thresh 0.1 \
--dropout_rate 0.22 \
--learning_rate 0.0001 \
--report_count 100 \
--use_seq_length False \
--export_dir '/home/lee/tmp/EXPORT' \
--alphabet_config_path '/home/lee/tmp/downloads/deepspeech-0.5.1-models/alphabet.txt' \
--lm_binary_path '/home/lee/tmp/downloads/deepspeech-0.5.1-models/lm.binary' \
--lm_trie_path '/home/lee/tmp/downloads/deepspeech-0.5.1-models/trie' \
"$@"

After checking out v0.5.1 right after cloning, it’s working. I can finally move on to selecting the right training parameters. Thank you so much, I really appreciate your help!