DeepSpeech Training questions

We have started training our model using the command below:

python3 DeepSpeech.py --train_files …/clips/train.csv --train_batch_size 100 --train_cudnn --dev_files …/clips/dev.csv --dev_batch_size 100 --test_files …/clips/test.csv --test_batch_size 100 --log_level 0

We are using an 8 GB GPU, and it is being utilized at 100%.

We are using approximately 12,000 audio files to train the model.
The average length of the 16 kHz-sampled audio files is 60 seconds.

A few of the initial logged lines:

2021-02-01 09:45:06.567988: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties:
name: Tesla M60 major: 5 minor: 2 memoryClockRate(GHz): 1.1775
pciBusID: 0000:00:1e.0
2021-02-01 09:45:06.568049: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-02-01 09:45:06.568082: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2021-02-01 09:45:06.568103: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2021-02-01 09:45:06.568132: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2021-02-01 09:45:06.568152: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2021-02-01 09:45:06.568184: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2021-02-01 09:45:06.568210: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-02-01 09:45:06.568353: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-01 09:45:06.569026: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-01 09:45:06.569596: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2021-02-01 09:45:06.569642: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-02-01 09:45:06.569664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186] 0
2021-02-01 09:45:06.569674: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0: N
2021-02-01 09:45:06.569798: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-01 09:45:06.570423: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-01 09:45:06.571006: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7171 MB memory) -> physical GPU (device: 0, name: Tesla M60, pci bus id: 0000:00:1e.0, compute capability: 5.2)
WARNING:tensorflow:From /home/ubuntu/DeepSpeech/DeepSpeech/training/deepspeech_training/util/checkpoints.py:71: Variable.load (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Prefer Variable.assign which has equivalent behavior in 2.X.
W0201 09:45:06.574804 139693934528320 deprecation.py:323] From /home/ubuntu/DeepSpeech/DeepSpeech/training/deepspeech_training/util/checkpoints.py:71: Variable.load (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Prefer Variable.assign which has equivalent behavior in 2.X.
D Session opened.
I Loading best validating checkpoint from /home/ubuntu/.local/share/deepspeech/checkpoints/best_dev-85345
I Loading variable from checkpoint: beta1_power
I Loading variable from checkpoint: beta2_power
I Loading variable from checkpoint: cudnn_lstm/opaque_kernel
I Loading variable from checkpoint: cudnn_lstm/opaque_kernel/Adam
I Loading variable from checkpoint: cudnn_lstm/opaque_kernel/Adam_1
I Loading variable from checkpoint: global_step
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/bias/Adam
I Loading variable from checkpoint: layer_1/bias/Adam_1
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_1/weights/Adam
I Loading variable from checkpoint: layer_1/weights/Adam_1
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/bias/Adam
I Loading variable from checkpoint: layer_2/bias/Adam_1
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_2/weights/Adam
I Loading variable from checkpoint: layer_2/weights/Adam_1
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/bias/Adam
I Loading variable from checkpoint: layer_3/bias/Adam_1
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_3/weights/Adam
I Loading variable from checkpoint: layer_3/weights/Adam_1
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/bias/Adam
I Loading variable from checkpoint: layer_5/bias/Adam_1
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_5/weights/Adam
I Loading variable from checkpoint: layer_5/weights/Adam_1
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/bias/Adam
I Loading variable from checkpoint: layer_6/bias/Adam_1
I Loading variable from checkpoint: layer_6/weights
I Loading variable from checkpoint: layer_6/weights/Adam
I Loading variable from checkpoint: layer_6/weights/Adam_1
I Loading variable from checkpoint: learning_rate
I STARTING Optimization
Epoch 0 | Training | Elapsed Time: 0:00:00 | Steps: 0 | Loss: 0.000000
2021-02-01 09:45:09.087454: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2021-02-01 09:45:09.684289: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
Epoch 0 | Training | Elapsed Time: 0:00:01 | Steps: 1 | Loss: 16.107815
Epoch 0 | Training | Elapsed Time: 0:00:01 | Steps: 2 | Loss: 39.940804
Epoch 0 | Training | Elapsed Time: 0:00:02 | Steps: 3 | Loss: 41.197852
Epoch 0 | Training | Elapsed Time: 0:00:02 | Steps: 4 | Loss: 41.779977
Epoch 0 | Training | Elapsed Time: 0:00:02 | Steps: 5 | Loss: 47.854725
Epoch 0 | Training | Elapsed Time: 0:00:02 | Steps: 6 | Loss: 50.081092
Epoch 0 | Training | Elapsed Time: 0:00:02 | Steps: 7 | Loss: 54.767900
Epoch 0 | Training | Elapsed Time: 0:00:03 | Steps: 8 | Loss: 50.239637
Epoch 0 | Training | Elapsed Time: 0:00:03 | Steps: 9 | Loss: 52.046041

As I am new to DeepSpeech training, I have a few questions:

  1. We faced some interruptions in training due to system reboots, etc. When we restarted training using the above command, we could see in the logs that the earlier checkpoint is picked up, but every time, training starts from Epoch 0.

So is the earlier training being saved and resumed from where it stopped, or is each run a fresh start?

  2. Why is training taking so much time even though we are using a GPU?

We are using 12,200 files to train the model.
The average duration of the 16 kHz-sampled audio files is 60 seconds.

Logged lines:
Epoch 0 | Training | Elapsed Time: 7:07:08 | Steps: 12200 | Loss: 873.814420
Epoch 0 | Training | Elapsed Time: 7:07:08 | Steps: 12200 | Loss: 873.814420

Is the training elapsed time reasonable, or is it taking longer than expected?

  3. What is the ideal number of training epochs required to train the model?

  4. What is the ideal duration of each audio file for training? Also, what is the ideal dataset size for training the model?
    We are using 12,200 files to train.

First, please use code formatting for logs.

Training progress is saved, and a new run resumes from the previously saved weights if they are found. Nevertheless, DeepSpeech runs the number of epochs you specify for each training job. The saved state does not record which epoch it was saved at, since that is not necessary to know.
If you want to train only a specific number of epochs, check where the previous run stopped and set --epochs accordingly.
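To picture this, here is a toy sketch (illustration only, not DeepSpeech code; the function and names are made up): the per-run epoch counter always restarts at 0, while the optimizer's global step is restored from the checkpoint and keeps growing.

```python
# Toy model of checkpoint-resume semantics (illustration only,
# not DeepSpeech code).

def train(epochs_this_run, steps_per_epoch, checkpoint=None):
    """Run a fixed number of epochs, resuming global_step from a checkpoint."""
    global_step = checkpoint["global_step"] if checkpoint else 0
    for epoch in range(epochs_this_run):  # the log always shows Epoch 0, 1, ...
        for _ in range(steps_per_epoch):
            global_step += 1
    return {"global_step": global_step}

ckpt = train(2, 100)        # fresh run: logs show Epoch 0 and 1
ckpt = train(2, 100, ckpt)  # resumed run: logs again show Epoch 0 and 1
print(ckpt["global_step"])  # 400: optimization continued where it stopped
```

This matches what the logs above show: `global_step` is one of the variables loaded from the checkpoint, while the displayed epoch number restarts at 0.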

  • What is the ideal Epoch training count required to train the Model?

It depends on your dataset. As a hint, you can take a look at the
DS release notes

This also depends on your dataset.


Thanks @NanoNabla, just adding some details:

  • The training counter always starts at 0, but as stated, you continue from the saved weights.
  • A mean of 60 seconds is quite long; I do my trainings with 5-10 second clips.
  • You should see results getting somewhere after about 10 epochs.

Steps per epoch are (total number of files / batch size), so if you have 12,200 steps, you didn't actually train with a batch size of 100. Something is off here. Check GPU memory usage; it should be really low, as you would barely be using the GPU. Try different values for the train batch size and compare. There is something wrong on your end.
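The arithmetic behind that can be checked quickly (numbers taken from this thread; the helper function is mine):

```python
import math

def steps_per_epoch(num_files, batch_size):
    """One optimizer step per batch, so steps = ceil(files / batch_size)."""
    return math.ceil(num_files / batch_size)

# With --train_batch_size 100 and 12,200 files you would expect
# 122 steps per epoch, not the 12,200 the logs show:
print(steps_per_epoch(12200, 100))  # 122
# 12,200 steps per epoch is what an effective batch size of 1 produces:
print(steps_per_epoch(12200, 1))    # 12200
```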


Thank you for the replies.

I have trained a model to Epoch 28, exported it, and tried to get a transcription of an audio file using the Python deepspeech library.
However, I do not get any letters in the output.

Below are the commands and logs:

  1. Export command:

python3 DeepSpeech.py --log_level 0 --export_dir /my_exportdir/model.pb

  2. Using the model for inference:

command: deepspeech --model /home/ubuntu/DeepSpeech/my_exportdir/model.pb/output_graph.pb --audio /home/ubuntu/DeepSpeech/clips/audio16K/5139c832-022c-11eb-aa12-0e1d429dd585.wav --json

output logs:

ubuntu@ip-172-31-36-32:~$ deepspeech --model /home/ubuntu/DeepSpeech/my_exportdir/model.pb/output_graph.pb --audio /home/ubuntu/DeepSpeech/clips/audio16K/5139c832-022c-11eb-aa12-0e1d429dd585.wav --json
2021-02-15 19:20:30.161376: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64:
2021-02-15 19:20:30.161418: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Loading model from file /home/ubuntu/DeepSpeech/my_exportdir/model.pb/output_graph.pb
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2021-02-15 19:20:30.335553: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-02-15 19:20:30.358198: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-02-15 19:20:30.477510: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-15 19:20:30.478355: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:00:1e.0 name: Tesla M60 computeCapability: 5.2
coreClock: 1.1775GHz coreCount: 16 deviceMemorySize: 7.44GiB deviceMemoryBandwidth: 149.31GiB/s
2021-02-15 19:20:30.478481: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64:
2021-02-15 19:20:30.478605: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64:
2021-02-15 19:20:30.478713: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcufft.so.10'; dlerror: libcufft.so.10: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64:
2021-02-15 19:20:30.478809: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcurand.so.10'; dlerror: libcurand.so.10: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64:
2021-02-15 19:20:30.478906: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcusolver.so.10'; dlerror: libcusolver.so.10: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64:
2021-02-15 19:20:30.478999: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcusparse.so.10'; dlerror: libcusparse.so.10: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64:
2021-02-15 19:20:30.709141: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-02-15 19:20:30.709211: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1753] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2021-02-15 19:20:30.875205: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-02-15 19:20:30.875247: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0
2021-02-15 19:20:30.875269: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N
Loaded model in 2.45s.
Running inference.
{
  "transcripts": [
    {
      "confidence": -100.98429870605469,
      "words": [
        {
          "word": "",
          "start_time": 0,
          "duration": 3.92
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 5.0
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 5.8
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 6.62
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 7.48
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 8.38
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 9.28
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 10.16
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 10.96
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 11.7
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 12.44
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 13.14
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 13.86
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.26
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.28
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.3
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.32
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.34
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.36
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.38
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.4
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.42
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.44
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.46
        }
      ]
    },
    {
      "confidence": -100.99630737304688,
      "words": [
        {
          "word": "",
          "start_time": 0,
          "duration": 3.92
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 5.0
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 5.8
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 6.62
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 7.48
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 8.38
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 9.28
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 10.16
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 10.96
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 11.7
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 12.44
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 13.14
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 13.86
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.28
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.3
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.32
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.34
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.36
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.38
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.4
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.42
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.44
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.46
        }
      ]
    },
    {
      "confidence": -101.03715515136719,
      "words": [
        {
          "word": "",
          "start_time": 0,
          "duration": 3.92
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 5.0
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 5.8
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 6.62
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 7.48
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 8.38
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 9.28
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 10.16
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 10.96
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 11.7
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 12.44
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 13.14
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 13.86
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.24
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.26
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.28
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.3
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.32
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.34
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.36
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.38
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.4
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.42
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.44
        },
        {
          "word": "",
          "start_time": 0,
          "duration": 14.46
        }
      ]
    }
  ]
}
Inference took 23.827s for 14.480s audio file.

Is this the same machine you are training on? Maybe you are training without your GPU?

This usually indicates that you didn’t train long enough or you are missing some parameters. Can you post the command you used for training? I can’t see the learning rate and dropout above.

Hello,

I am using the same machine for training, with an 8 GB GPU that is utilized at 100% while training.

command:
python3 DeepSpeech.py --train_files …/clips/train.csv --train_batch_size 100 --train_cudnn --dev_files …/clips/dev.csv --dev_batch_size 100 --test_files …/clips/test.csv --test_batch_size 100 --log_level 0

Please suggest parameters I should include.

Thank you.

  1. Please post the train and dev loss for a couple of epochs.

  2. Next time, use a dropout of 0.3-0.4; the default is too low. The standard learning rate is fine.

  3. 60 seconds is really long. But even with that, you should have about 10x the material to get good results.

  4. What language is your material? You may have to change the alphabet.
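On point 3, clip length is easy to audit before training. A minimal sketch using Python's standard wave module (helper names are mine) to flag WAV files longer than, say, 10 seconds:

```python
import wave

def clip_duration_seconds(path):
    """Duration of a WAV file in seconds (frames / sample rate)."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

def flag_long_clips(paths, max_seconds=10.0):
    """Return the paths whose audio runs longer than max_seconds."""
    return [p for p in paths if clip_duration_seconds(p) > max_seconds]
```

Splitting the flagged clips on silence (and re-segmenting their transcripts) before training is the usual remedy for overly long utterances.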

Hi,

Thank you for replying.

  1. Please see the below logs of Epoch 27 training and validation:

I have posted a few lines from the logs:

Epoch 26 | Validation | Elapsed Time: 0:27:56 | Steps: 1498 | Loss: 928.760906 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:28:04 | Steps: 1499 | Loss: 929.050569 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:28:14 | Steps: 1500 | Loss: 930.103331 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:28:24 | Steps: 1501 | Loss: 930.834689 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:28:39 | Steps: 1502 | Loss: 933.888949 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:28:53 | Steps: 1503 | Loss: 935.579138 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:29:08 | Steps: 1504 | Loss: 937.148359 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:29:46 | Steps: 1505 | Loss: 951.052634 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:30:04 | Steps: 1506 | Loss: 952.619517 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:30:38 | Steps: 1507 | Loss: 962.583601 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:31:16 | Steps: 1508 | Loss: 969.871313 | Dataset: ../clips/dev.csv
Epoch 26 | Validation | Elapsed Time: 0:31:16 | Steps: 1508 | Loss: 969.871313 | Dataset: ../clips/dev.csv
--------------------------------------------------------------------------------
Epoch 27 |   Training | Elapsed Time: 0:00:00 | Steps: 0 | Loss: 0.000000
Epoch 27 |   Training | Elapsed Time: 0:00:00 | Steps: 1 | Loss: 16.012011
Epoch 27 |   Training | Elapsed Time: 0:00:00 | Steps: 2 | Loss: 40.323937
Epoch 27 |   Training | Elapsed Time: 0:00:00 | Steps: 3 | Loss: 42.284379
Epoch 27 |   Training | Elapsed Time: 0:00:01 | Steps: 4 | Loss: 43.334262
Epoch 27 |   Training | Elapsed Time: 0:00:01 | Steps: 5 | Loss: 50.498967
Epoch 27 |   Training | Elapsed Time: 0:00:01 | Steps: 6 | Loss: 52.865224
Epoch 27 |   Training | Elapsed Time: 0:00:01 | Steps: 7 | Loss: 58.218975
Epoch 27 |   Training | Elapsed Time: 0:00:01 | Steps: 8 | Loss: 53.031822
Epoch 27 |   Training | Elapsed Time: 0:00:01 | Steps: 9 | Loss: 55.255576
Epoch 27 |   Training | Elapsed Time: 0:00:01 | Steps: 10 | Loss: 55.001126
Epoch 27 |   Training | Elapsed Time: 0:00:02 | Steps: 11 | Loss: 70.587121
Epoch 27 |   Training | Elapsed Time: 0:00:02 | Steps: 12 | Loss: 74.051502
Epoch 27 |   Training | Elapsed Time: 0:00:02 | Steps: 13 | Loss: 78.278034
Epoch 27 |   Training | Elapsed Time: 0:00:02 | Steps: 14 | Loss: 74.092985
Epoch 27 |   Training | Elapsed Time: 0:00:02 | Steps: 15 | Loss: 77.536041
Epoch 27 |   Training | Elapsed Time: 0:00:02 | Steps: 16 | Loss: 78.462801
Epoch 27 |   Training | Elapsed Time: 0:00:02 | Steps: 17 | Loss: 81.968311
Epoch 27 |   Training | Elapsed Time: 0:00:03 | Steps: 18 | Loss: 80.938078
Epoch 27 |   Training | Elapsed Time: 0:00:03 | Steps: 19 | Loss: 79.977340
Epoch 27 |   Training | Elapsed Time: 0:00:03 | Steps: 20 | Loss: 86.060360
Epoch 27 |   Training | Elapsed Time: 0:00:03 | Steps: 21 | Loss: 88.086165
Epoch 27 |   Training | Elapsed Time: 0:00:03 | Steps: 22 | Loss: 90.268335
Epoch 27 |   Training | Elapsed Time: 0:00:04 | Steps: 23 | Loss: 96.002163
Epoch 27 |   Training | Elapsed Time: 0:00:04 | Steps: 24 | Loss: 97.024613
Epoch 27 |   Training | Elapsed Time: 0:00:04 | Steps: 25 | Loss: 96.164787
Epoch 27 |   Training | Elapsed Time: 0:00:04 | Steps: 26 | Loss: 95.270884
Epoch 27 |   Training | Elapsed Time: 0:00:04 | Steps: 27 | Loss: 102.807339
Epoch 27 |   Training | Elapsed Time: 0:00:05 | Steps: 28 | Loss: 107.701536
Epoch 27 |   Training | Elapsed Time: 0:00:05 | Steps: 29 | Loss: 106.914660
Epoch 27 |   Training | Elapsed Time: 0:00:05 | Steps: 30 | Loss: 111.810944
Epoch 27 |   Training | Elapsed Time: 0:00:05 | Steps: 31 | Loss: 108.951781
Epoch 27 |   Training | Elapsed Time: 0:00:06 | Steps: 32 | Loss: 111.481779
Epoch 27 |   Training | Elapsed Time: 0:00:06 | Steps: 33 | Loss: 113.189505
Epoch 27 |   Training | Elapsed Time: 0:00:06 | Steps: 34 | Loss: 113.131423
Epoch 27 |   Training | Elapsed Time: 0:00:06 | Steps: 35 | Loss: 112.271983
Epoch 27 |   Training | Elapsed Time: 0:00:06 | Steps: 36 | Loss: 114.633477
Epoch 27 |   Training | Elapsed Time: 0:00:07 | Steps: 37 | Loss: 113.646733
Epoch 27 |   Training | Elapsed Time: 0:00:07 | Steps: 38 | Loss: 112.647386
Epoch 27 |   Training | Elapsed Time: 0:00:07 | Steps: 39 | Loss: 115.887965
Epoch 27 |   Training | Elapsed Time: 0:00:07 | Steps: 40 | Loss: 114.054972
Epoch 27 |   Training | Elapsed Time: 0:00:08 | Steps: 41 | Loss: 116.054757
Epoch 27 |   Training | Elapsed Time: 0:00:08 | Steps: 42 | Loss: 115.993475
Epoch 27 |   Training | Elapsed Time: 0:00:08 | Steps: 43 | Loss: 117.596279
Epoch 27 |   Training | Elapsed Time: 0:00:08 | Steps: 44 | Loss: 115.539979
Epoch 27 |   Training | Elapsed Time: 6:54:29 | Steps: 12179 | Loss: 860.764300
Epoch 27 |   Training | Elapsed Time: 6:55:06 | Steps: 12180 | Loss: 862.080741
Epoch 27 |   Training | Elapsed Time: 6:55:29 | Steps: 12181 | Loss: 862.154552
Epoch 27 |   Training | Elapsed Time: 6:55:50 | Steps: 12182 | Loss: 862.176993
Epoch 27 |   Training | Elapsed Time: 6:56:12 | Steps: 12183 | Loss: 862.218486
Epoch 27 |   Training | Elapsed Time: 6:56:34 | Steps: 12184 | Loss: 862.244087
Epoch 27 |   Training | Elapsed Time: 6:56:55 | Steps: 12185 | Loss: 862.256227
Epoch 27 |   Training | Elapsed Time: 6:57:37 | Steps: 12186 | Loss: 864.360452
Epoch 27 |   Training | Elapsed Time: 6:58:03 | Steps: 12187 | Loss: 864.612945
Epoch 27 |   Training | Elapsed Time: 6:58:31 | Steps: 12188 | Loss: 864.942953
Epoch 27 |   Training | Elapsed Time: 6:59:10 | Steps: 12189 | Loss: 866.288809
Epoch 27 |   Training | Elapsed Time: 6:59:39 | Steps: 12190 | Loss: 866.691137
Epoch 27 |   Training | Elapsed Time: 7:00:07 | Steps: 12191 | Loss: 866.903732
Epoch 27 |   Training | Elapsed Time: 7:00:53 | Steps: 12192 | Loss: 868.637313
Epoch 27 |   Training | Elapsed Time: 7:01:40 | Steps: 12193 | Loss: 869.857616
Epoch 27 |   Training | Elapsed Time: 7:02:32 | Steps: 12194 | Loss: 871.582724
Epoch 27 |   Training | Elapsed Time: 7:03:10 | Steps: 12195 | Loss: 871.903624
Epoch 27 |   Training | Elapsed Time: 7:03:54 | Steps: 12196 | Loss: 872.549494
Epoch 27 |   Training | Elapsed Time: 7:04:26 | Steps: 12197 | Loss: 872.609729
Epoch 27 |   Training | Elapsed Time: 7:04:59 | Steps: 12198 | Loss: 872.681917
Epoch 27 |   Training | Elapsed Time: 7:05:45 | Steps: 12199 | Loss: 873.124481
Epoch 27 |   Training | Elapsed Time: 7:06:30 | Steps: 12200 | Loss: 873.234334
Epoch 27 |   Training | Elapsed Time: 7:06:30 | Steps: 12200 | Loss: 873.234334
Epoch 27 | Validation | Elapsed Time: 0:00:00 | Steps: 0 | Loss: 0.000000 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:00 | Steps: 1 | Loss: 80.903816 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:00 | Steps: 3 | Loss: 93.584595 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:00 | Steps: 5 | Loss: 87.633374 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:00 | Steps: 6 | Loss: 117.426702 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:00 | Steps: 7 | Loss: 128.203088 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:01 | Steps: 8 | Loss: 150.572115 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:01 | Steps: 9 | Loss: 160.595259 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:01 | Steps: 10 | Loss: 160.264433 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:01 | Steps: 11 | Loss: 165.206120 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:01 | Steps: 12 | Loss: 173.128124 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:01 | Steps: 13 | Loss: 169.424146 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:01 | Steps: 14 | Loss: 167.738200 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:01 | Steps: 15 | Loss: 178.499871 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:00:02 | Steps: 16 | Loss: 190.872425 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:16:52 | Steps: 1343 | Loss: 793.615168 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:16:55 | Steps: 1344 | Loss: 795.010740 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:16:57 | Steps: 1345 | Loss: 794.881033 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:00 | Steps: 1346 | Loss: 795.183453 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:02 | Steps: 1347 | Loss: 796.830010 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:05 | Steps: 1348 | Loss: 798.038055 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:07 | Steps: 1349 | Loss: 797.732362 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:10 | Steps: 1350 | Loss: 797.699558 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:12 | Steps: 1351 | Loss: 797.269964 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:14 | Steps: 1352 | Loss: 797.968867 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:17 | Steps: 1353 | Loss: 798.801794 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:20 | Steps: 1354 | Loss: 799.961696 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:23 | Steps: 1355 | Loss: 803.273815 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:26 | Steps: 1356 | Loss: 804.786562 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:29 | Steps: 1357 | Loss: 807.159425 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:31 | Steps: 1358 | Loss: 808.853647 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:34 | Steps: 1359 | Loss: 808.520369 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:36 | Steps: 1360 | Loss: 808.759987 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:39 | Steps: 1361 | Loss: 810.536654 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:42 | Steps: 1362 | Loss: 810.537405 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:45 | Steps: 1363 | Loss: 812.788877 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:47 | Steps: 1364 | Loss: 813.746725 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:50 | Steps: 1365 | Loss: 814.942635 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:53 | Steps: 1366 | Loss: 815.926070 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:56 | Steps: 1367 | Loss: 817.258293 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:17:59 | Steps: 1368 | Loss: 818.337001 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:01 | Steps: 1369 | Loss: 818.693495 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:04 | Steps: 1370 | Loss: 819.338378 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:07 | Steps: 1371 | Loss: 820.987847 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:10 | Steps: 1372 | Loss: 822.994076 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:13 | Steps: 1373 | Loss: 823.755432 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:16 | Steps: 1374 | Loss: 824.632283 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:19 | Steps: 1375 | Loss: 826.120103 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:21 | Steps: 1376 | Loss: 826.865366 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:24 | Steps: 1377 | Loss: 828.347706 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:27 | Steps: 1378 | Loss: 829.133034 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:30 | Steps: 1379 | Loss: 830.704378 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:33 | Steps: 1380 | Loss: 830.779444 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:36 | Steps: 1381 | Loss: 832.427738 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:39 | Steps: 1382 | Loss: 834.554614 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:42 | Steps: 1383 | Loss: 835.129614 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:45 | Steps: 1384 | Loss: 836.792431 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:48 | Steps: 1385 | Loss: 837.699977 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:51 | Steps: 1386 | Loss: 837.926158 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:54 | Steps: 1387 | Loss: 838.783328 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:56 | Steps: 1388 | Loss: 838.467737 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:18:59 | Steps: 1389 | Loss: 839.295391 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:02 | Steps: 1390 | Loss: 839.949800 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:05 | Steps: 1391 | Loss: 839.548893 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:08 | Steps: 1392 | Loss: 840.358808 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:11 | Steps: 1393 | Loss: 841.728153 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:14 | Steps: 1394 | Loss: 844.044022 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:17 | Steps: 1395 | Loss: 845.332092 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:20 | Steps: 1396 | Loss: 846.231919 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:24 | Steps: 1397 | Loss: 848.490235 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:27 | Steps: 1398 | Loss: 848.223876 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:29 | Steps: 1399 | Loss: 847.893346 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:33 | Steps: 1400 | Loss: 850.723191 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:35 | Steps: 1401 | Loss: 850.549955 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:39 | Steps: 1402 | Loss: 853.738204 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:43 | Steps: 1403 | Loss: 855.902811 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:46 | Steps: 1404 | Loss: 857.191115 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:50 | Steps: 1405 | Loss: 863.227975 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:53 | Steps: 1406 | Loss: 863.443730 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:56 | Steps: 1407 | Loss: 865.461796 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:19:59 | Steps: 1408 | Loss: 865.455532 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:03 | Steps: 1409 | Loss: 867.688529 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:06 | Steps: 1410 | Loss: 868.592965 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:09 | Steps: 1411 | Loss: 868.622914 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:12 | Steps: 1412 | Loss: 869.686508 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:16 | Steps: 1413 | Loss: 871.034898 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:20 | Steps: 1414 | Loss: 873.534031 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:23 | Steps: 1415 | Loss: 875.361313 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:27 | Steps: 1416 | Loss: 876.908869 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:31 | Steps: 1417 | Loss: 878.584480 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:34 | Steps: 1418 | Loss: 879.382126 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:37 | Steps: 1419 | Loss: 880.418144 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:41 | Steps: 1420 | Loss: 881.659437 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:45 | Steps: 1421 | Loss: 883.414596 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:48 | Steps: 1422 | Loss: 884.148551 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:52 | Steps: 1423 | Loss: 887.322311 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:20:56 | Steps: 1424 | Loss: 888.454441 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:00 | Steps: 1425 | Loss: 889.784765 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:04 | Steps: 1426 | Loss: 890.893559 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:07 | Steps: 1427 | Loss: 891.702160 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:11 | Steps: 1428 | Loss: 893.009839 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:15 | Steps: 1429 | Loss: 895.353628 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:19 | Steps: 1430 | Loss: 896.263222 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:22 | Steps: 1431 | Loss: 897.374290 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:27 | Steps: 1432 | Loss: 900.152110 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:31 | Steps: 1433 | Loss: 901.776398 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:35 | Steps: 1434 | Loss: 903.093278 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:39 | Steps: 1435 | Loss: 905.595848 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:43 | Steps: 1436 | Loss: 907.800903 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:48 | Steps: 1437 | Loss: 909.420871 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:51 | Steps: 1438 | Loss: 909.482330 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:21:55 | Steps: 1439 | Loss: 909.743433 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:00 | Steps: 1440 | Loss: 914.024669 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:03 | Steps: 1441 | Loss: 914.596138 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:08 | Steps: 1442 | Loss: 917.180580 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:13 | Steps: 1443 | Loss: 919.655434 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:17 | Steps: 1444 | Loss: 921.280003 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:21 | Steps: 1445 | Loss: 922.620245 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:27 | Steps: 1446 | Loss: 925.851441 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:31 | Steps: 1447 | Loss: 926.727849 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:35 | Steps: 1448 | Loss: 926.880099 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:39 | Steps: 1449 | Loss: 928.102048 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:44 | Steps: 1450 | Loss: 929.907238 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:48 | Steps: 1451 | Loss: 929.841633 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:53 | Steps: 1452 | Loss: 932.190388 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:22:58 | Steps: 1453 | Loss: 932.535442 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:03 | Steps: 1454 | Loss: 933.865215 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:08 | Steps: 1455 | Loss: 937.271984 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:14 | Steps: 1456 | Loss: 939.500949 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:19 | Steps: 1457 | Loss: 940.536606 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:24 | Steps: 1458 | Loss: 943.492790 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:30 | Steps: 1459 | Loss: 945.415508 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:36 | Steps: 1460 | Loss: 949.264211 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:42 | Steps: 1461 | Loss: 953.206602 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:48 | Steps: 1462 | Loss: 955.411060 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:53 | Steps: 1463 | Loss: 957.947126 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:23:58 | Steps: 1464 | Loss: 958.226640 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:24:04 | Steps: 1465 | Loss: 960.473449 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:24:09 | Steps: 1466 | Loss: 961.644230 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:24:15 | Steps: 1467 | Loss: 962.994007 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:24:21 | Steps: 1468 | Loss: 965.554322 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:24:26 | Steps: 1469 | Loss: 966.367580 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:24:31 | Steps: 1470 | Loss: 967.182017 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:24:39 | Steps: 1471 | Loss: 973.170356 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:24:44 | Steps: 1472 | Loss: 973.492197 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:24:50 | Steps: 1473 | Loss: 976.403724 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:24:56 | Steps: 1474 | Loss: 978.280402 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:25:03 | Steps: 1475 | Loss: 980.894186 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:25:10 | Steps: 1476 | Loss: 984.094085 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:25:17 | Steps: 1477 | Loss: 986.639380 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:25:22 | Steps: 1478 | Loss: 986.501973 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:25:28 | Steps: 1479 | Loss: 988.203432 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:25:34 | Steps: 1480 | Loss: 988.158931 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:25:39 | Steps: 1481 | Loss: 987.921383 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:25:45 | Steps: 1482 | Loss: 989.142558 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:25:52 | Steps: 1483 | Loss: 991.321306 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:25:59 | Steps: 1484 | Loss: 994.284802 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:26:06 | Steps: 1485 | Loss: 995.222084 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:26:13 | Steps: 1486 | Loss: 998.174079 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:26:19 | Steps: 1487 | Loss: 998.327108 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:26:24 | Steps: 1488 | Loss: 998.138159 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:26:34 | Steps: 1489 | Loss: 1003.222412 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:26:43 | Steps: 1490 | Loss: 1008.583870 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:26:51 | Steps: 1491 | Loss: 1010.855054 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:26:58 | Steps: 1492 | Loss: 1010.652691 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:27:08 | Steps: 1493 | Loss: 1014.717950 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:27:18 | Steps: 1494 | Loss: 1018.666777 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:27:25 | Steps: 1495 | Loss: 1019.157669 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:27:37 | Steps: 1496 | Loss: 1024.630466 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:27:44 | Steps: 1497 | Loss: 1024.446256 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:27:57 | Steps: 1498 | Loss: 1029.979388 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:28:05 | Steps: 1499 | Loss: 1029.911999 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:28:15 | Steps: 1500 | Loss: 1030.822440 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:28:24 | Steps: 1501 | Loss: 1031.266270 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:28:40 | Steps: 1502 | Loss: 1034.604185 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:28:54 | Steps: 1503 | Loss: 1036.173080 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:29:09 | Steps: 1504 | Loss: 1037.541424 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:29:47 | Steps: 1505 | Loss: 1053.331423 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:30:05 | Steps: 1506 | Loss: 1054.584528 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:30:39 | Steps: 1507 | Loss: 1065.840281 | Dataset: ../clips/dev.csv
Epoch 27 | Validation | Elapsed Time: 0:31:16 | Steps: 1508 | Loss: 1073.914524 | Dataset: ../clips/dev.csv
--------------------------------------------------------------------------------
  1. I would like to use the --dropout_rate parameter in the future. Could you please help me understand exactly what this parameter means, and whether 0.3 or 0.4 would be a good value for it?

  2. We are using 12,200 files (steps) for training and 1,508 files for validation.

  3. We are using the English-language alphabet file for training. Pasting the alphabet.txt file below:

# Each line in this file represents the Unicode codepoint (UTF-8 encoded)
# associated with a numeric label.
# A line that starts with # is a comment. You can escape it with \# if you wish
# to use '#' as a label.

,
!
 
"
$
%
'
+
-
.
0
1
2
3
4
5
6
7
8
9
:
?
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
# The last (non-comment) line needs to end with a newline.
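For reference, the alphabet format above (one UTF-8 label per line, `#` comments, `\#` escape) can be parsed with a short sketch like this. This is a hypothetical loader written for illustration, not DeepSpeech's own alphabet class:

```python
import os
import tempfile

def load_alphabet(path):
    """Parse an alphabet file in the format above: one UTF-8 label per
    line, lines starting with '#' are comments, and '\\#' escapes a
    literal '#' label. Returns a label -> numeric index mapping."""
    labels = []
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.rstrip("\n")      # keep a bare space label intact
            if line.startswith("#"):
                continue                 # comment line
            if line.startswith("\\#"):
                line = "#"               # escaped '#' used as a label
            labels.append(line)
    return {label: i for i, label in enumerate(labels)}

# tiny demo file: a comment, then the labels ',', ' ' (space), and 'a'
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False,
                                 encoding="utf-8") as tmp:
    tmp.write("# comment\n,\n \na\n")
    path = tmp.name

mapping = load_alphabet(path)
os.unlink(path)
print(mapping)   # {',': 0, ' ': 1, 'a': 2}
```

Note that the trailing newline requirement in the file's last comment matters: without it, the final label line may be silently dropped by some readers.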

It’s always the last (final) loss value of train/dev that we are interested in. But these values are really high: under 100 is normal, and it should be below 30 by the end.

Read up on dropout somewhere; you have to find the right value for your data. 0.4 might work better for you.
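To make the parameter concrete: dropout randomly zeroes a fraction (the rate) of a layer's activations during training and rescales the rest, which discourages co-adaptation and reduces overfitting. Below is a minimal NumPy sketch of inverted dropout, purely illustrative of what `--dropout_rate` controls, not DeepSpeech's internal implementation:

```python
import numpy as np

def dropout(x, rate, rng):
    """Inverted dropout: zero out roughly `rate` of the units at train
    time and scale survivors by 1/(1-rate), so the expected activation
    is unchanged at inference time (where dropout is disabled)."""
    keep = rng.random(x.shape) >= rate
    return np.where(keep, x / (1.0 - rate), 0.0)

rng = np.random.default_rng(0)
x = np.ones(100_000)
y = dropout(x, 0.3, rng)

print(round(float((y == 0).mean()), 2))   # ~0.3 of units dropped
print(round(float(y.mean()), 2))          # expected activation stays ~1.0
```

A higher rate (e.g. 0.4) regularizes more strongly, which can help when the model overfits a small dataset, at the cost of slower convergence.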

Again, that is very little data.

Don’t change the provided alphabet.txt unless you know exactly what you are doing; keep the original.

In general, your training looks bad. The dropout rate and the alphabet could be factors, but my guess is that you have too little material for the amount of information you want the model to learn.
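As a rough back-of-the-envelope check, using the clip count and average duration quoted above:

```python
# Rough scale of the training set described above:
# 12,000 clips averaging ~60 seconds each.
clips = 12_000
avg_clip_seconds = 60
total_hours = clips * avg_clip_seconds / 3600
print(total_hours)  # 200.0 hours of audio
```

Released DeepSpeech English models were trained on thousands of hours of speech, so ~200 hours is on the small side for training a general model from scratch, though it can be workable for fine-tuning or a narrow domain.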