DeepSpeech training loss keeps getting bigger and bigger

  • Have I written custom code (as opposed to running examples on an unmodified clone of the repository) :
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04) : Linux Ubuntu 18.04
  • TensorFlow installed from (our builds, or upstream TensorFlow) : Anaconda
  • TensorFlow version (use command below) : 1.14.0
  • Python version : 3.6.2
  • Bazel version (if compiling from source) : None
  • GCC/Compiler version (if compiling from source) : None
  • CUDA/cuDNN version : 10.1 / 7.6.5
  • GPU model and memory : 1080Ti / 11G

My training command was:

CUDA_VISIBLE_DEVICES=0 python DeepSpeech.py \
--inter_op_parallelism_threads 4 \
--train_files ./ch_datas/clips/train.csv \
--test_files ./ch_datas/clips/test.csv \
--dev_files ./ch_datas/clips/dev.csv \
--train_cudnn  \
--summary_dir ./summaries_ch \
--checkpoint_dir ./checkpoint_ch \
--export_dir ./checkpoint_ch \
--epochs 30 \
--train_batch_size 16 \
--test_batch_size 8 \
--learning_rate 0.001 \
--load init \
--alphabet_config_path ./data/hanzi.txt
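
As a sanity check on the files referenced above, it is worth confirming that every character appearing in the transcripts is actually listed in ./data/hanzi.txt. A minimal sketch, assuming the usual DeepSpeech CSV layout with a transcript column (adjust the column name if your CSVs differ):

import csv

# Load the alphabet: one character per line, lines starting with '#' are comments.
with open("./data/hanzi.txt", encoding="utf-8") as f:
    alphabet = {line.rstrip("\n") for line in f if not line.startswith("#")}

# Collect transcript characters that are not covered by the alphabet.
missing = set()
with open("./ch_datas/clips/train.csv", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        missing |= {ch for ch in row["transcript"] if ch not in alphabet}

print("characters missing from alphabet:", "".join(sorted(missing)) or "none")

Any character printed there would need to be added to the alphabet file before training.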

Training loss:

Epoch: 0, step:  1, loss: 684.010
Epoch: 0, step:  2, loss: 723.529
Epoch: 0, step:  3, loss: 761.368
Epoch: 0, step:  4, loss: 776.013
Epoch: 0, step:  5, loss: 796.762
Epoch: 0, step:  6, loss: 809.914
Epoch: 0, step:  7, loss: 837.805
Epoch: 0, step:  8, loss: 844.220
Epoch: 0, step:  9, loss: 864.128
Epoch: 0, step:  10, loss: 861.952
Epoch: 0, step:  11, loss: 875.764
Epoch: 0, step:  12, loss: 888.888
Epoch: 0, step:  13, loss: 903.153
Epoch: 0, step:  14, loss: 915.556
Epoch: 0, step:  15, loss: 902.702
Epoch: 0, step:  16, loss: 909.542
Epoch: 0, step:  17, loss: 927.071
Epoch: 0, step:  18, loss: 930.566
Epoch: 0, step:  19, loss: 933.876
Epoch: 0, step:  20, loss: 955.658
Epoch: 0, step:  21, loss: 949.201
Epoch: 0, step:  22, loss: 944.346
Epoch: 0, step:  23, loss: 953.274
Epoch: 0, step:  24, loss: 974.061
Epoch: 0, step:  25, loss: 981.751
Epoch: 0, step:  26, loss: 994.461
Epoch: 0, step:  27, loss: 977.802
Epoch: 0, step:  28, loss: 990.688
Epoch: 0, step:  29, loss: 998.575
Epoch: 0, step:  30, loss: 1006.883
Epoch: 0, step:  31, loss: 1000.199
Epoch: 0, step:  32, loss: 1009.248
Epoch: 0, step:  33, loss: 1014.233
Epoch: 0, step:  34, loss: 1023.469
Epoch: 0, step:  35, loss: 1014.114
Epoch: 0, step:  36, loss: 1026.772
Epoch: 0, step:  37, loss: 1037.882
Epoch: 0, step:  38, loss: 1033.054
Epoch: 0, step:  39, loss: 1033.161
Epoch: 0, step:  40, loss: 1029.055
Epoch: 0, step:  41, loss: 1043.339
Epoch: 0, step:  42, loss: 1044.823
Epoch: 0, step:  43, loss: 1043.854
Epoch: 0, step:  44, loss: 1059.264
Epoch: 0, step:  45, loss: 1047.047
Epoch: 0, step:  46, loss: 1050.243
Epoch: 0, step:  47, loss: 1057.279
Epoch: 0, step:  48, loss: 1073.368
Epoch: 0, step:  49, loss: 1087.432
Epoch: 0, step:  50, loss: 1082.651
Epoch: 0, step:  51, loss: 1076.342
Epoch: 0, step:  52, loss: 1081.378
Epoch: 0, step:  53, loss: 1096.052
Epoch: 0, step:  54, loss: 1088.232
Epoch: 0, step:  55, loss: 1086.565
Epoch: 0, step:  56, loss: 1089.221
Epoch: 0, step:  57, loss: 1098.793
Epoch: 0, step:  58, loss: 1100.954
Epoch: 0, step:  59, loss: 1090.182
Epoch: 0, step:  60, loss: 1095.552
Epoch: 0, step:  61, loss: 1091.202
Epoch: 0, step:  62, loss: 1107.121
Epoch: 0, step:  63, loss: 1113.671
Epoch: 0, step:  64, loss: 1111.900
Epoch: 0, step:  65, loss: 1124.308
Epoch: 0, step:  66, loss: 1129.829
Epoch: 0, step:  67, loss: 1144.116
Epoch: 0, step:  68, loss: 1119.834
Epoch: 0, step:  69, loss: 1128.630
Epoch: 0, step:  70, loss: 1139.365
Epoch: 0, step:  71, loss: 1129.976
Epoch: 0, step:  72, loss: 1134.635
Epoch: 0, step:  73, loss: 1138.775
Epoch: 0, step:  74, loss: 1144.260
Epoch: 0, step:  75, loss: 1146.753
Epoch: 0, step:  76, loss: 1145.286
Epoch: 0, step:  77, loss: 1153.300
Epoch: 0, step:  78, loss: 1164.778
Epoch: 0, step:  79, loss: 1172.312
Epoch: 0, step:  80, loss: 1167.586
Epoch: 0, step:  81, loss: 1166.814
Epoch: 0, step:  82, loss: 1176.949
Epoch: 0, step:  83, loss: 1175.590
Epoch: 0, step:  84, loss: 1193.740
Epoch: 0, step:  85, loss: 1184.772
Epoch: 0, step:  86, loss: 1189.684
Epoch: 0, step:  87, loss: 1193.016
Epoch: 0, step:  88, loss: 1193.975
Epoch: 0, step:  89, loss: 1192.083
Epoch: 0, step:  90, loss: 1191.130
Epoch: 0, step:  91, loss: 1197.058
Epoch: 0, step:  92, loss: 1199.076
Epoch: 0, step:  93, loss: 1207.282
Epoch: 0, step:  94, loss: 1203.571
Epoch: 0, step:  95, loss: 1205.604
Epoch: 0, step:  96, loss: 1209.475
Epoch: 0, step:  97, loss: 1218.885
Epoch: 0, step:  98, loss: 1235.660
Epoch: 0, step:  99, loss: 1225.161
Epoch: 0, step:  100, loss: 1218.309
Epoch: 0, step:  101, loss: 1238.086
Epoch: 0, step:  102, loss: 1232.846
Epoch: 0, step:  103, loss: 1234.757
Epoch: 0, step:  104, loss: 1231.328
Epoch: 0, step:  105, loss: 1229.912

...
I used the LibriVox dataset and the Chinese dataset from https://voice.mozilla.org/data, and the loss just keeps getting bigger and bigger.

Please post in English.

@lissyx can you give me some advice? Thanks a lot.

This is only epoch 0; there's nothing conclusive to draw from it yet. The loss value depends on your data, and I can't tell you what hyperparameters to use; you need to do your own homework here.
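
For example, once a few more epochs have run, the per-epoch trend can be summarized from a saved log. A minimal sketch, where train.log is a hypothetical file the training output was redirected to, with lines in the format shown above:

import re
from collections import defaultdict

# Parse lines like "Epoch: 0, step:  12, loss: 888.888" and group losses by epoch.
pattern = re.compile(r"Epoch:\s*(\d+),\s*step:\s*\d+,\s*loss:\s*([\d.]+)")
losses = defaultdict(list)

with open("train.log", encoding="utf-8") as f:
    for line in f:
        m = pattern.search(line)
        if m:
            losses[int(m.group(1))].append(float(m.group(2)))

# Print the mean loss per epoch to see whether the trend continues past epoch 0.
for epoch in sorted(losses):
    vals = losses[epoch]
    print(f"epoch {epoch}: mean loss {sum(vals) / len(vals):.1f} over {len(vals)} steps")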