Error when starting the test epoch

Dear Support,

I am training a UTF-8 Cantonese model with a dataset from Common Voice. The training and validation phases completed successfully with the command below.

./DeepSpeech.py \
  --train_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/train.csv \
  --dev_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv \
  --epochs 30 \
  --checkpoint_dir /mnt/deepspeechdata/filter/results/checkpoint/ \
  --alphabet_config_path /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
  --scorer_path /mnt/deepspeechdata/filter/lm/kenlm.scorer \
  --reduce_lr_on_plateau \
  --learning_rate 0.0001 \
  --n_hidden 2048 \
  --train_batch_size 160 \
  --dev_batch_size 20 \
  --dropout_rate 0.28 \
  --utf8

But when the test phase starts, an error occurs. The error log is below:

I Loading best validating checkpoint from /mnt/deepspeechdata/filter/results/checkpoint/best_dev-28
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
I Loading variable from checkpoint: global_step
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/weights
Testing model on /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv
I Test epoch...
Fatal Python error: Segmentation fault

Thread 0x00007f50df91b740 (most recent call first):
  File "/Segmentation fault
  • Have I written custom code (as opposed to running examples on an unmodified clone of the repository) : NO
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04) : Windows - Dockerfile - Ubuntu 18.04
  • TensorFlow installed from (our builds, or upstream TensorFlow) : our builds
  • TensorFlow version (use command below) : tensorflow r1.15.0
  • Python version : Python3.7
  • Bazel version (if compiling from source) :
  • GCC/Compiler version (if compiling from source) :
  • CUDA/cuDNN version :
  • GPU model and memory :
  • Exact command to reproduce :

./DeepSpeech.py \
  --noshow_progressbar \
  --test_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv \
  --checkpoint_dir /mnt/deepspeechdata/filter/results/checkpoint/ \
  --alphabet_config_path /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
  --scorer_path /mnt/deepspeechdata/filter/lm/kenlm.scorer \
  --n_hidden 2048 \
  --test_batch_size 20 \
  --utf8

The version numbers are below:

root@94f02792732d:/DeepSpeech# pip list
Package              Version                              Location
-------------------- ------------------------------------ --------------------
absl-py              0.9.0
alembic              1.4.2
astor                0.8.1
attrdict             2.0.1
audioread            2.1.8
beautifulsoup4       4.9.1
bs4                  0.0.1
certifi              2020.4.5.2
cffi                 1.14.0
chardet              3.0.4
cliff                3.1.0
cmaes                0.5.0
cmd2                 0.8.9
colorlog             4.1.0
decorator            4.4.2
deepspeech           0.7.3
deepspeech-training  training-deepspeech-training-VERSION /DeepSpeech/training
ds-ctcdecoder        0.7.3
gast                 0.2.2
google-pasta         0.2.0
grpcio               1.29.0
h5py                 2.10.0
idna                 2.9
importlib-metadata   1.6.1
joblib               0.15.1
Keras-Applications   1.0.8
Keras-Preprocessing  1.1.2
librosa              0.7.2
llvmlite             0.31.0
Mako                 1.1.3
Markdown             3.2.2
MarkupSafe           1.1.1
numba                0.47.0
numpy                1.18.5
opt-einsum           3.2.1
optuna               1.5.0
opuslib              2.0.0
pandas               1.0.4
pbr                  5.4.5
pip                  20.0.2
prettytable          0.7.2
progressbar2         3.51.3
protobuf             3.12.2
pycparser            2.20
pyparsing            2.4.7
pyperclip            1.8.0
python-dateutil      2.8.1
python-editor        1.0.4
python-utils         2.4.0
pytz                 2020.1
pyxdg                0.26
PyYAML               5.3.1
requests             2.23.0
resampy              0.2.2
scikit-learn         0.23.1
scipy                1.4.1
semver               2.10.1
setuptools           39.1.0
six                  1.15.0
SoundFile            0.10.3.post1
soupsieve            2.0.1
sox                  1.3.7
SQLAlchemy           1.3.17
stevedore            2.0.0
tensorboard          1.15.0
tensorflow           1.15.2
tensorflow-estimator 1.15.1
tensorflow-gpu       1.15.0
termcolor            1.1.0
threadpoolctl        2.1.0
tqdm                 4.46.1
urllib3              1.25.9
wcwidth              0.2.4
Werkzeug             1.0.1
wheel                0.33.6
wrapt                1.12.1
zipp                 3.1.0

I have reviewed all the datasets and filtered out the better-quality data for training.

Please find the datasets and .csv files attached below:

The lm.binary and kenlm.scorer files are attached below:

When I train without the validation phase, the test phase also fails with the same error.

Please find the logs below:

root@94f02792732d:/DeepSpeech# ./DeepSpeech.py \
>   --train_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/train.csv \
>   --test_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv \
>   --epochs 30 \
>   --checkpoint_dir /mnt/deepspeechdata/filter/results/checkpoint/ \
>   --alphabet_config_path /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
>   --scorer_path /mnt/deepspeechdata/filter/lm/kenlm.scorer \
>   --reduce_lr_on_plateau \
>   --learning_rate 0.0001 \
>   --n_hidden 2048 \
>   --train_batch_size 160 \
>   --test_batch_size 20 \
>   --dropout_rate 0.28 \
>   --utf8 \
>
I Could not find best validating checkpoint.
I Could not find most recent checkpoint.
I Initializing all variables.
I STARTING Optimization
Epoch 0 |   Training | Elapsed Time: 0:01:09 | Steps: 1 | Loss: 1095.837158
Epoch 1 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 666.401001
Epoch 2 |   Training | Elapsed Time: 0:01:10 | Steps: 1 | Loss: 261.214905
Epoch 3 |   Training | Elapsed Time: 0:01:09 | Steps: 1 | Loss: 238.135345
Epoch 4 |   Training | Elapsed Time: 0:01:08 | Steps: 1 | Loss: 294.818665
Epoch 5 |   Training | Elapsed Time: 0:01:10 | Steps: 1 | Loss: 296.967285
Epoch 6 |   Training | Elapsed Time: 0:01:10 | Steps: 1 | Loss: 264.965759
Epoch 7 |   Training | Elapsed Time: 0:01:10 | Steps: 1 | Loss: 221.122955
Epoch 8 |   Training | Elapsed Time: 0:01:10 | Steps: 1 | Loss: 191.377640
Epoch 9 |   Training | Elapsed Time: 0:01:09 | Steps: 1 | Loss: 199.586411
Epoch 10 |   Training | Elapsed Time: 0:01:08 | Steps: 1 | Loss: 200.823044
Epoch 11 |   Training | Elapsed Time: 0:01:08 | Steps: 1 | Loss: 188.246185
Epoch 12 |   Training | Elapsed Time: 0:01:09 | Steps: 1 | Loss: 177.267181
Epoch 13 |   Training | Elapsed Time: 0:01:09 | Steps: 1 | Loss: 172.420319
Epoch 14 |   Training | Elapsed Time: 0:01:10 | Steps: 1 | Loss: 172.861816
Epoch 15 |   Training | Elapsed Time: 0:01:09 | Steps: 1 | Loss: 175.622055
Epoch 16 |   Training | Elapsed Time: 0:01:08 | Steps: 1 | Loss: 177.549561
Epoch 17 |   Training | Elapsed Time: 0:01:10 | Steps: 1 | Loss: 177.466965
Epoch 18 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 175.106949
Epoch 19 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 171.883148
Epoch 20 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 168.839874
Epoch 21 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 166.874115
Epoch 22 |   Training | Elapsed Time: 0:01:09 | Steps: 1 | Loss: 165.515945
Epoch 23 |   Training | Elapsed Time: 0:01:10 | Steps: 1 | Loss: 164.082031
Epoch 24 |   Training | Elapsed Time: 0:01:09 | Steps: 1 | Loss: 161.798309
Epoch 25 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 159.209564
Epoch 26 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 157.023651
Epoch 27 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 155.863556
Epoch 28 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 155.569336
Epoch 29 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 155.955887
I FINISHED optimization in 0:36:19.105478
I Could not find best validating checkpoint.
I Loading most recent checkpoint from /mnt/deepspeechdata/filter/results/checkpoint/train-30
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
I Loading variable from checkpoint: global_step
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/weights
Testing model on /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv
Test epoch | Steps: 0 | Elapsed Time: 0:00:00
Fatal Python error: Segmentation fault

Thread 0x00007f28117fa700 (most recent call first):
  File "/usr/lib/python3.6/threading.py", line 295 in wait
  File "/usr/lib/python3.6/queue.py", line 164 in get
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/summary/writer/event_file_writer.py", line 159 in run
  File "/usr/lib/python3.6/threading.py", line 916 in _bootstrap_inner
  File "/usr/lib/python3.6/threading.py", line 884 in _bootstrap

Thread 0x00007f2811ffb700 (most recent call first):
  File "/usr/lib/python3.6/threading.py", line 295 in wait
  File "/usr/lib/python3.6/queue.py", line 164 in get
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/summary/writer/event_file_writer.py", line 159 in run
  File "/usr/lib/python3.6/threading.py", line 916 in _bootstrap_inner
  File "/usr/lib/python3.6/threading.py", line 884 in _bootstrap

Thread 0x00007f28c5f09740 (most recent call first):
  File "/usr/local/lib/python3.6/dist-packages/ds_ctcdecoder/swigwrapper.py", line 364 in ctc_beam_search_decoder_batch
  File "/usr/local/lib/python3.6/dist-packages/ds_ctcdecoder/__init__.py", line 134 in ctc_beam_search_decoder_batch
  File "/DeepSpeech/training/deepspeech_training/evaluate.py", line 110 in run_test
  File "/DeepSpeech/training/deepspeech_training/evaluate.py", line 128 in evaluate
  File "/DeepSpeech/training/deepspeech_training/train.py", line 645 in test
  File "/DeepSpeech/training/deepspeech_training/train.py", line 917 in main
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250 in _run_main
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299 in run
  File "/DeepSpeech/training/deepspeech_tSegmentation fault

But when I reduce --n_hidden from 2048 to 512, the test phase runs normally.

root@94f02792732d:/DeepSpeech# ./DeepSpeech.py \
>   --train_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/train.csv \
>   --test_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv \
>   --epochs 30 \
>   --checkpoint_dir /mnt/deepspeechdata/filter/results/checkpoint/ \
>   --alphabet_config_path /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
>   --scorer_path /mnt/deepspeechdata/filter/lm/kenlm.scorer \
>   --reduce_lr_on_plateau \
>   --learning_rate 0.0001 \
>   --n_hidden 512 \
>   --train_batch_size 160 \
>   --test_batch_size 20 \
>   --dropout_rate 0.28 \
>   --utf8 \
>
I Could not find best validating checkpoint.
I Could not find most recent checkpoint.
I Initializing all variables.
I STARTING Optimization
Epoch 0 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 1093.047852
Epoch 1 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 1046.377197
Epoch 2 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 990.846680
Epoch 3 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 930.426086
Epoch 4 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 864.664734
Epoch 5 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 794.309631
Epoch 6 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 717.662781
Epoch 7 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 636.994812
Epoch 8 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 552.994995
Epoch 9 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 470.793152
Epoch 10 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 392.140533
Epoch 11 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 325.533356
Epoch 12 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 276.200684
Epoch 13 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 244.471344
Epoch 14 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 229.381546
Epoch 15 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 226.331833
Epoch 16 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 230.216553
Epoch 17 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 237.089355
Epoch 18 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 243.226028
Epoch 19 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 249.238281
Epoch 20 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 252.872406
Epoch 21 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 255.091064
Epoch 22 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 255.621185
Epoch 23 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 255.067184
Epoch 24 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 253.103195
Epoch 25 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 250.204681
Epoch 26 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 246.448883
Epoch 27 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 242.263184
Epoch 28 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 238.072708
Epoch 29 |   Training | Elapsed Time: 0:00:06 | Steps: 1 | Loss: 233.581329
I FINISHED optimization in 0:03:30.042398
I Could not find best validating checkpoint.
I Loading most recent checkpoint from /mnt/deepspeechdata/filter/results/checkpoint/train-30
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
I Loading variable from checkpoint: global_step
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/weights
Testing model on /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv
Test epoch | Steps: 1 | Elapsed Time: 0:00:12
Test on /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv - WER: 1.000000, CER: 0.985772, loss: 322.528870
--------------------------------------------------------------------------------
Best WER:
--------------------------------------------------------------------------------
WER: 1.000000, CER: 0.913043, loss: 268.580719
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20137012.wav
 - src: "姨 媽 同 我 去 長 洲 冰 廠 路 買 餸"
 - res: "請問我想去大埔滘�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 0.923077, loss: 450.225311
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20137138.wav
 - src: "老 闆 請 我 去 葵 涌 童 子 街 間 餐 廳 食 西 多 士 飲 奶 茶"
 - res: "請問我想去大埔滘科�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 0.960000, loss: 273.003296
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20197584.wav
 - src: "有 個 老 人 去 左 西 貢 沙 咀 街 食 齋"
 - res: "請問我想去大埔滘�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 0.972973, loss: 433.278564
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20101461.wav
 - src: "八 號 風 球 好 大 風 西 營 盤 爹 核 里 依 家 橫 風 橫 雨"
 - res: "請問我想去大埔滘�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 0.974359, loss: 447.331390
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20148149.wav
 - src: "老 友 唔 記 得 去 石 硤 尾 澤 安 道 南 參 加 緩 步 跑 練 習"
 - res: "請問我想去大埔滘�"
--------------------------------------------------------------------------------
Median WER:
--------------------------------------------------------------------------------
WER: 1.000000, CER: 1.000000, loss: 402.582550
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20137044.wav
 - src: "細 佬 喺 沙 田 沙 田 車 站 圍 收 養 左 一 隻 流 浪 狗"
 - res: "請問我想去大埔滘�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 1.000000, loss: 397.107635
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20226277.wav
 - src: "有 無 人 知 道 愉 景 灣 深 水 埗 徑 係 點 去 㗎"
 - res: "請問我想去大埔�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 1.000000, loss: 392.437347
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20197632.wav
 - src: "亞 爸 喺 灣 仔 盧 押 道 買 左 三 磅 士 多 啤 梨 返 屋 企"
 - res: "請問我想去大埔滘�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 1.000000, loss: 372.739594
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20136999.wav
 - src: "流 浪 貓 喺 沙 田 禾 盛 街 嘅 垃 圾 桶 搵 野 食"
 - res: "請問我想去大埔滘�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 1.000000, loss: 366.272583
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20101462.wav
 - src: "有 個 老 婆 婆 喺 牛 池 灣 紫 葳 路 等 緊 小 巴"
 - res: "請問我想去大埔滘�"
--------------------------------------------------------------------------------
Worst WER:
--------------------------------------------------------------------------------
WER: 1.000000, CER: 1.000000, loss: 358.208649
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20226141.wav
 - src: "有 個 老 婆 婆 喺 東 涌 翔 東 路 等 緊 小 巴"
 - res: "請問我想去大埔滘�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 1.000000, loss: 209.531494
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20137132.wav
 - src: "我 住 喺 何 文 田 站 附 近"
 - res: "請問我想去大�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 1.000000, loss: 78.237846
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20143952.wav
 - src: "寧 波 街"
 - res: "請問�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 1.000000, loss: 74.291573
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20143973.wav
 - src: "咩 事 呀"
 - res: "請問我�"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 1.200000, loss: 73.213852
 - wav: file:///mnt/deepspeechdata/filter/CV/zh-HK/clips/test/common_voice_zh-HK_20137365.wav
 - src: "義 德 道"
 - res: "請問我想去�"
--------------------------------------------------------------------------------

That’s strange. Can you run a test with some of your training data and --n_hidden 2048 to see whether it is your test set that is causing the segmentation fault? If testing on training data runs smoothly, the problem is likely your test data.
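
In case it helps, here is one way to slice off “some training data” for such a test, as a sketch (the /tmp path and the 20-row sample are just examples; DeepSpeech CSVs carry a header row that must be kept):

# keep the CSV header, then take 20 training rows as a makeshift test set
head -n 1 /mnt/deepspeechdata/filter/CV/zh-HK/clips/train.csv > /tmp/train_sample.csv
tail -n +2 /mnt/deepspeechdata/filter/CV/zh-HK/clips/train.csv | head -n 20 >> /tmp/train_sample.csv

Then point --test_files at /tmp/train_sample.csv.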

So you are trying to use the kenlm.scorer from the repository as UTF-8?

I built the kenlm.scorer myself with the commands below:

python3 ./data/lm/generate_lm.py \
  --input_txt /mnt/deepspeechdata/filter/CV/zh-HK/vocabulary.txt \
  --output_dir /mnt/deepspeechdata/filter/lm/ \
  --top_k 50000 \
  --kenlm_bins /DeepSpeech/native_client/kenlm/build/bin/ \
  --arpa_order 5 \
  --max_arpa_memory "85%" \
  --arpa_prune "0|1|2|4|4" \
  --binary_a_bits 255 \
  --binary_q_bits 8 \
  --binary_type trie \
  --discount_fallback

python3 ./data/lm/generate_package.py \
  --alphabet /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
  --lm /mnt/deepspeechdata/filter/lm/lm.binary \
  --vocab /mnt/deepspeechdata/filter/lm/vocab-50000.txt \
  --package /mnt/deepspeechdata/filter/lm/kenlm.scorer \
  --default_alpha 0.931289039105002 \
  --default_beta 1.1834137581510284

Although I did not pass the --force_utf8 parameter to generate_package.py, I found “Using detected UTF-8 mode: True” in the logs.
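
For reference, the “Looks like a character based model” message presumably reflects a heuristic along these lines: if (nearly) every vocabulary entry is a single character, UTF-8 mode is a natural default. A quick way to inspect the vocabulary yourself, as a sketch of that idea rather than generate_package.py’s actual code (assuming one word per line; gawk in a UTF-8 locale counts characters rather than bytes, so a single CJK character has length 1):

# count vocabulary entries longer than one character
LC_ALL=C.UTF-8 gawk 'length($0) > 1 { n++ } END { print (n+0) " multi-character entries" }' \
  /mnt/deepspeechdata/filter/lm/vocab-50000.txt

With 916 single-character words, a count at or near zero matches the “Using detected UTF-8 mode: True” seen in the logs.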

The logs from building the kenlm.scorer are below:

root@938cdb90fd11:/DeepSpeech# python3 ./data/lm/generate_lm.py \
>   --input_txt /mnt/deepspeechdata/filter/CV/zh-HK/vocabulary.txt \
>   --output_dir /mnt/deepspeechdata/filter/lm/ \
>   --top_k 50000 \
>   --kenlm_bins /DeepSpeech/native_client/kenlm/build/bin/ \
>   --arpa_order 5 \
>   --max_arpa_memory "85%" \
>   --arpa_prune "0|1|2|4|4" \
>   --binary_a_bits 255 \
>   --binary_q_bits 8 \
>   --binary_type trie \
>   --discount_fallback \
>

Converting to lowercase and counting word occurrences ...
| |# | 297 Elapsed Time: 0:00:00

Saving top 50000 words ...

Calculating word statistics ...
  Your text file has 4139 words in total
  It has 916 unique words
  Your top-50000 words are 100.0000 percent of all words
  Your most common word "去" occurred 77 times
  The least common word in your top-k is "埗" with 1 times
  The first word with 2 occurrences is "糕" at place 497

Creating ARPA file ...
=== 1/5 Counting and sorting n-grams ===
Reading /mnt/deepspeechdata/filter/lm/lower.txt.gz
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Unigram tokens 4139 types 919
=== 2/5 Calculating and sorting adjusted counts ===
Chain sizes: 1:11028 2:1042213952 3:1954151296 4:3126641920 5:4559686656
Substituting fallback discounts for order 4: D1=0.5 D2=1 D3+=1.5
Statistics:
1 919 D1=0.613139 D2=1.09764 D3+=1.45929
2 629/2515 D1=0.861694 D2=1.18672 D3+=1.33816
3 334/2816 D1=0.930233 D2=0.900634 D3+=1.37806
4 109/2861 D1=0.954577 D2=0.63422 D3+=1.02926
5 78/2822 D1=0.5 D2=1 D3+=1.5
Memory estimate for binary LM:
type    kB
probing 49 assuming -p 1.5
probing 59 assuming -r models -p 1.5
trie    32 without quantization
trie    33 assuming -q 8 -b 8 quantization
trie    32 assuming -a 22 array pointer compression
trie    33 assuming -a 22 -q 8 -b 8 array pointer compression and quantization
=== 3/5 Calculating and sorting initial probabilities ===
Chain sizes: 1:11028 2:10064 3:6680 4:2616 5:2184
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
**##################################################################################################
=== 4/5 Calculating and writing order-interpolated probabilities ===
Chain sizes: 1:11028 2:10064 3:6680 4:2616 5:2184
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
####################################################################################################
=== 5/5 Writing ARPA model ===
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Name:lmplz      VmPeak:10631028 kB      VmRSS:6524 kB   RSSMax:1866488 kB       user:0.231444   sys:0.626261    CPU:0.85783     real:0.981677

Filtering ARPA file using vocabulary of top-k words ...
Reading /mnt/deepspeechdata/filter/lm/lm.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************

Building lm.binary ...
Reading /mnt/deepspeechdata/filter/lm/lm_filtered.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Identifying n-grams omitted by SRI
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Quantizing
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Writing trie
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
SUCCESS
root@938cdb90fd11:/DeepSpeech# python3 ./data/lm/generate_package.py \
>   --alphabet /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
>   --lm /mnt/deepspeechdata/filter/lm/lm.binary \
>   --vocab /mnt/deepspeechdata/filter/lm/vocab-50000.txt \
>   --package /mnt/deepspeechdata/filter/lm/kenlm.scorer \
>   --default_alpha 0.931289039105002 \
>   --default_beta 1.1834137581510284 \
>
916 unique words read from vocabulary file.
Looks like a character based model.
Using detected UTF-8 mode: True
Package created in /mnt/deepspeechdata/filter/lm/kenlm.scorer

I trained again, but the errors appear again.

Test with all the training data. Please find the logs below:

root@938cdb90fd11:/DeepSpeech# ./DeepSpeech.py \
>   --train_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/train.csv \
>   --dev_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv \
>   --epochs 30 \
>   --checkpoint_dir /mnt/deepspeechdata/filter/results/checkpoint/ \
>   --alphabet_config_path /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
>   --scorer_path /mnt/deepspeechdata/filter/lm/kenlm.scorer \
>   --reduce_lr_on_plateau \
>   --learning_rate 0.0001 \
>   --n_hidden 2048 \
>   --train_batch_size 160 \
>   --dev_batch_size 20 \
>   --dropout_rate 0.28 \
>   --utf8 \
>
I Could not find best validating checkpoint.
I Could not find most recent checkpoint.
I Initializing all variables.
I STARTING Optimization
Epoch 0 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 1096.802979
Epoch 0 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 1017.848572 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 1017.848572 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-1
Epoch 1 |   Training | Elapsed Time: 0:01:09 | Steps: 1 | Loss: 715.850708
Epoch 1 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 410.956985 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 410.956985 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-2
Epoch 2 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 295.658600
Epoch 2 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 344.836891 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 344.836891 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-3
Epoch 3 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 233.109619
Epoch 3 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 447.344696 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 4 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 298.361786
Epoch 4 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 456.759995 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 5 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 306.927338
Epoch 5 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 406.370789 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 6 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 276.717590
Epoch 6 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 332.820160 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 332.820160 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-7
Epoch 7 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 230.595047
Epoch 7 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 271.197578 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 271.197578 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-8
Epoch 8 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 190.261688
Epoch 8 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 271.447388 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 9 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 186.322662
Epoch 9 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 306.081093 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 10 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 205.984955
Epoch 10 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 297.242828 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 11 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 202.078079
Epoch 11 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 269.114983 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 269.114983 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-12
Epoch 12 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 185.405334
Epoch 12 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 249.665253 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 249.665253 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-13
Epoch 13 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 173.836823
Epoch 13 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 246.800919 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 246.800919 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-14
Epoch 14 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 171.987473
Epoch 14 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 253.132378 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 15 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 175.916275
Epoch 15 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 258.636436 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 16 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 179.156570
Epoch 16 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 258.550911 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 17 |   Training | Elapsed Time: 0:01:14 | Steps: 1 | Loss: 179.109436
Epoch 17 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 253.042953 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 18 |   Training | Elapsed Time: 0:01:08 | Steps: 1 | Loss: 175.504974
Epoch 18 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 244.833130 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 244.833130 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-19
Epoch 19 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 170.263031
Epoch 19 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 237.417091 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 237.417091 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-20
Epoch 20 |   Training | Elapsed Time: 0:01:08 | Steps: 1 | Loss: 165.556534
Epoch 20 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 233.467789 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 233.467789 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-21
Epoch 21 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 163.036423
Epoch 21 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 233.106209 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 233.106209 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-22
Epoch 22 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 162.784271
Epoch 22 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 233.689850 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 23 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 163.073837
Epoch 23 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 232.485092 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 232.485092 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-24
Epoch 24 |   Training | Elapsed Time: 0:01:07 | Steps: 1 | Loss: 162.395432
Epoch 24 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 229.192719 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 229.192719 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-25
Epoch 25 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 160.297195
Epoch 25 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 225.623672 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 225.623672 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-26
Epoch 26 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 158.059662
Epoch 26 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 223.631378 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 223.631378 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-27
Epoch 27 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 156.692657
Epoch 27 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 223.584152 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I Saved new best validating model with loss 223.584152 to: /mnt/deepspeechdata/filter/results/checkpoint/best_dev-28
Epoch 28 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 156.386642
Epoch 28 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 224.507439 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
Epoch 29 |   Training | Elapsed Time: 0:01:06 | Steps: 1 | Loss: 156.734344
Epoch 29 | Validation | Elapsed Time: 0:00:08 | Steps: 2 | Loss: 225.059799 | Dataset: /mnt/deepspeechdata/filter/CV/zh-HK/clips/dev.csv
I FINISHED optimization in 0:40:50.255176
root@938cdb90fd11:/DeepSpeech# ./DeepSpeech.py \
>   --noshow_progressbar \
>   --test_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/train.csv \
>   --checkpoint_dir /mnt/deepspeechdata/filter/results/checkpoint/ \
>   --alphabet_config_path /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
>   --scorer_path /mnt/deepspeechdata/filter/lm/kenlm.scorer \
>   --n_hidden 2048 \
>   --test_batch_size 160 \
>   --utf8 \
>
I Loading best validating checkpoint from /mnt/deepspeechdata/filter/results/checkpoint/best_dev-28
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
I Loading variable from checkpoint: global_step
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/weights
Testing model on /mnt/deepspeechdata/filter/CV/zh-HK/clips/train.csv
I Test epoch...
Fatal Python error: Segmentation fault

Thread 0x00007f1c7a6dc740 (most recent call first):
  File "/usr/local/lib/python3.6/dist-packages/ds_ctcdecoder/swigwrapper.py", line 364 in ctc_beam_search_decoder_batch
  File "/usr/local/lib/python3.6/dist-packages/ds_ctcdecoder/__init__.py", line 134 in ctc_beam_search_decoder_batch
  File "/DeepSpeech/trSegmentation fault

Test with some training data. The logs are below:

root@938cdb90fd11:/DeepSpeech# ./DeepSpeech.py \
>   --noshow_progressbar \
>   --test_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv \
>   --checkpoint_dir /mnt/deepspeechdata/filter/results/checkpoint/ \
>   --alphabet_config_path /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
>   --scorer_path /mnt/deepspeechdata/filter/lm/kenlm.scorer \
>   --n_hidden 2048 \
>   --test_batch_size 10 \
>   --utf8 \
>
I Loading best validating checkpoint from /mnt/deepspeechdata/filter/results/checkpoint/best_dev-28
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
I Loading variable from checkpoint: global_step
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/weights
Testing model on /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv
I Test epoch...
Fatal Python error: Segmentation fault

Thread 0x00007f03f6cde740 (most recent call first):
  File "/usr/local/lib/python3.6/dist-packages/ds_ctcdecoder/swigwrapper.py", line 364 in ctc_beam_search_decoder_batch
  File "/usr/local/lib/python3.6/dist-packages/ds_ctcdecoder/__init__.py", line 134 in ctc_beam_search_decoder_batch
  File "/DeepSpeech/training/deepspeech_training/evaluate.py", line 110 in run_test
  File "/DeepSpeech/training/deepspeech_training/evaluate.py", line 128 in evaluate
  File "/DeepSpeech/training/deepspeech_training/train.py", line 645 in test
  File "/DeepSpeech/training/deepspeech_training/train.py", line 917 in main
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250 in _run_main
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299 in run
  File "/DeepSpeech/training/deepspeech_training/train.py", line 941 in run_script
  File "./DeepSpeech.py", line 12 in <module>
Segmentation fault

Did you have a look at the GitHub issue lissyx pointed to? It sounds like you are running into the same issue. As kdavis suggests, reopen the issue if you still have the problem after --force_utf8.

And you don’t need to retrain. Training and running the test are two separate operations, so you can simply build a new scorer and run it with your best checkpoint.

I have regenerated the scorer file; the logs are below:

root@f8b3a438ba16:/DeepSpeech# python3 ./data/lm/generate_package.py \
>   --alphabet /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
>   --lm /mnt/deepspeechdata/filter/lm/lm.binary \
>   --vocab /mnt/deepspeechdata/filter/lm/vocab-50000.txt \
>   --package /mnt/deepspeechdata/filter/lm/kenlm.scorer \
>   --default_alpha 0.931289039105002 \
>   --default_beta 1.1834137581510284 \
>   --force_utf8 True \
>
916 unique words read from vocabulary file.
Looks like a character based model.
Package created in /mnt/deepspeechdata/filter/lm/kenlm.scorer

Then I reran the test with --n_hidden 2048 and the best checkpoint, but the errors still occur.

root@f8b3a438ba16:/DeepSpeech# ./DeepSpeech.py \
>   --noshow_progressbar \
>   --test_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv \
>   --checkpoint_dir /mnt/deepspeechdata/filter/results/checkpoint/ \
>   --alphabet_config_path /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
>   --scorer_path /mnt/deepspeechdata/filter/lm/kenlm.scorer \
>   --n_hidden 2048 \
>   --test_batch_size 160 \
>   --utf8 \
>
I Loading best validating checkpoint from /mnt/deepspeechdata/filter/results/checkpoint/best_dev-28
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
I Loading variable from checkpoint: global_step
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/weights
Testing model on /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv
I Test epoch...
Fatal Python error: Segmentation fault

Thread 0x00007f7624929740 (most recent call first):
  File "/usr/local/lib/python3.6/dist-packages/ds_ctcdecoder/swigwrapper.py", line 364 in ctc_beam_search_decoder_batch
  File "/usr/local/lib/python3.6/dist-packages/ds_ctcdecoder/__init__.py", line 134 in ctc_beam_search_decoder_batch
  File "/DeepSpeech/training/deepspeech_training/evaluate.py", line 110 in run_test
  File "/DeepSpeech/training/deepspeech_training/evaluate.py", line 128 in evaluate
  File "/DeepSpeech/training/deepspeech_training/train.py", line 645 in test
  File "/DeepSpeech/training/deepspeech_training/train.py", line 917 in main
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250 in _run_main
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299 in run
  File "/DeepSpeech/training/deepspeech_training/train.py", line 941 in run_script
  File "./DeepSpeech.py", line 12 in <module>
Segmentation fault

Should I reopen that GitHub issue, then?

@reuben is asking for a stack trace in the GitHub issue, so if you can deliver that, he might look into it. But that can take some time, as it seems not many people hit this problem and it is hard to nail it down to a specific cause.
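
If you can run the failing command under gdb inside the container, that should produce the native stack trace he is asking for. A sketch reusing the flags from this thread (assumes gdb is installed, e.g. via apt-get install gdb):

# run to the crash, then print a native backtrace and exit
gdb -batch -ex run -ex bt --args python3 ./DeepSpeech.py \
  --noshow_progressbar \
  --test_files /mnt/deepspeechdata/filter/CV/zh-HK/clips/test.csv \
  --checkpoint_dir /mnt/deepspeechdata/filter/results/checkpoint/ \
  --alphabet_config_path /mnt/deepspeechdata/filter/CV/zh-HK/alphabet.txt \
  --scorer_path /mnt/deepspeechdata/filter/lm/kenlm.scorer \
  --n_hidden 2048 \
  --test_batch_size 20 \
  --utf8

The backtrace printed at the segmentation fault is what to paste into the GitHub issue.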