Libctc_decoder_with_kenlm needed with version 0.4.1-0

Ok, that's why you haven't kicked in earlier :slight_smile: Tell me, what info do you need from me?

Well, proper STR (steps to reproduce), not relying on stuff that moves. So basically, the output of the test set will show you the worst samples, but that might still be different from what you will get in the end.

Same parameters everywhere, and the same data (not part of the training, validation, or test set), to make sure there's no hidden behavior.

I will send you some outputs and the parameters I have used.

Before that, let me list a few points:

  1. After training finishes, DeepSpeech runs the test set, which in my case is a single wav kept separate from the training set. I get decent results; many of the words in the transcript are OK.
  2. I run client.py, give it the model from the export dir, and use the -same- wav as in step 1. The result is different. I also use the same LM and trie as in the training phase.
  3. The beam width is the same.

Versions I am using:
TensorFlow: v1.12.0-10-ge232881
DeepSpeech: v0.4.1-0-g0e40db6

How different is it? Are all the other parameters, not just the beam width, identical?
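
Even something as crude as writing both sets of settings into a dict and diffing them will surface a mismatch. A sketch with placeholder names and values, to be filled in from your actual run:

# Placeholder names and values: fill in what the evaluation step and client.py actually use.
eval_params = {
    "beam_width": 500,
    "lm_alpha": 0.75,
    "lm_beta": 1.85,
    "lm_binary": "LM_models/m_zero_and_one_stuff_bigram.bin",
    "trie": "tier/TRIE_2905",
    "alphabet": "alphabet/alphabet.txt",
}
client_params = dict(eval_params)  # replace with what client.py really passes

for key, value in eval_params.items():
    if client_params.get(key) != value:
        print("mismatch:", key, "->", value, "vs", client_params.get(key))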

Could you give current master / 0.5.0-alpha.10 a try as well?

Just one more stupid question. I noticed that the alphabet.txt file plays an important role, which wasn't a surprise, but also that the order inside that file is -very- important. Inference is very different depending on the order of the characters…
Could this explain the difference if you use different versions of alphabet.txt? Even if both of them have the same characters, but in a different order…

Well, the order in the alphabet file will impact the output classes of the model, so you should not mix two differently-ordered alphabets, even if they cover the same set of characters.

Ordering itself, as long as it's consistent, should not be an issue: I don't see any good reason it would impact the output. Is there something I might be missing here @kdavis @reuben?

Looking at alphabet.h#L18 and alphabet.h#L79, it seems that yes, the order of the alphabet.txt file matters.

However, ordering, as long as it's consistent, shouldn't have a large effect, modulo "bad" random initializations of your network when you start training.
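
To make the ordering point concrete, here is a tiny Python sketch (not DeepSpeech's actual C++ Alphabet class) of how each line of alphabet.txt becomes an output class index; swap two lines and the same acoustic output decodes to different characters:

# Minimal sketch: output class i of the acoustic model corresponds to the
# character on line i of alphabet.txt (lines starting with '#' are comments
# in the stock alphabet files).
def load_alphabet(path):
    with open(path, encoding="utf-8") as f:
        chars = [line.rstrip("\n") for line in f if not line.startswith("#")]
    return {i: c for i, c in enumerate(chars)}

print(load_alphabet("alphabet/alphabet.txt"))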

Thanks. Maybe there should be some kind of warning message to the user if he or she is trying to run a model with a different alphabet.

I will train my model with the -same alphabet.txt / same order- and eliminate the possibility that my bad predictions come from mixed alphabet.txt files.

Still using the 0.4.x version as before.

Let's see what happens…

How would you be able to identify that it's not the same alphabet, as long as their shapes are compatible?

Just mentioning it in some document would be enough.
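
Or, for example, the export could drop a checksum of alphabet.txt next to the model and the client could compare it before running. A rough sketch of what I mean (the .sha256 side file is hypothetical, not something DeepSpeech writes today):

import hashlib

def alphabet_fingerprint(path):
    # Hash the exact bytes of alphabet.txt: any reordering changes the digest,
    # even though the number of output classes stays the same.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Written once at export time, next to output_graph.pb:
with open("ac_models/alphabet.sha256", "w") as f:
    f.write(alphabet_fingerprint("alphabet/alphabet.txt"))

# At inference time, refuse to run if the client's alphabet does not match:
expected = open("ac_models/alphabet.sha256").read().strip()
assert alphabet_fingerprint("alphabet/alphabet.txt") == expected, \
    "alphabet.txt is not the one this model was trained with"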

I have now trained my model again and made sure I am using the same alphabet in training. Here are the results:

Training phase:

 [ ! -f DeepSpeech.py ]
+ python -u DeepSpeech.py --train_files meh_and_dna_kw_and_zero_m_calls.csv --dev_files dev_m.csv --test_files test_m.csv --train_batch_size 80 --dev_batch_size 1 --test_batch_size 1 --n_hidden 375 --epoch -10 --validation_step 3 --early_stop True --earlystop_nsteps 6 --estop_mean_thresh 0.2 --estop_std_thresh 0.2 --dropout_rate 0.22 --learning_rate 0.0003 --report_count 200 --export_dir ac_models/ --checkpoint_dir m_and_d_checkpoint/ --alphabet_config_path alphabet/alphabet.txt --lm_binary_path LM_models/m_zero_and_one_stuff_bigram.bin --lm_trie_path tier/TRIE_2905
Preprocessing ['m_and_d_kw_and_zero_m_calls.csv']
Preprocessing done
Preprocessing ['dev_m.csv']
Preprocessing done
I STARTING Optimization
I Training epoch 0...
I Training of Epoch 0 - loss: 439.156241
100% (544 of 544) | Elapsed Time: 0:08:46 Time: 0:08:46
I Training epoch 1...
I Training of Epoch 1 - loss: 372.225003
100% (544 of 544) | Elapsed Time: 0:09:07 Time: 0:09:07
I Training epoch 2...
I Training of Epoch 2 - loss: 334.638444
100% (544 of 544) | Elapsed Time: 0:09:06 Time: 0:09:06
I Training epoch 3...
I Training of Epoch 3 - loss: 311.968943
100% (544 of 544) | Elapsed Time: 0:09:01 Time: 0:09:01
I Validating epoch 3...
I Validation of Epoch 3 - loss: 369.239349
100% (1 of 1) | Elapsed Time: 0:00:02 Time: 0:00:02
I Training epoch 4...
I Training of Epoch 4 - loss: 295.408689
100% (544 of 544) | Elapsed Time: 0:09:03 Time: 0:09:03
I Training epoch 5...
I Training of Epoch 5 - loss: 282.173687
100% (544 of 544) | Elapsed Time: 0:09:04 Time: 0:09:04
I Training epoch 6...
I Training of Epoch 6 - loss: 271.299984
100% (544 of 544) | Elapsed Time: 0:09:03 Time: 0:09:03
I Validating epoch 6...
I Validation of Epoch 6 - loss: 318.596985
100% (1 of 1) | Elapsed Time: 0:00:02 Time: 0:00:02
I Training epoch 7...
I Training of Epoch 7 - loss: 262.116281
100% (544 of 544) | Elapsed Time: 0:09:04 Time: 0:09:04
I Training epoch 8...
I Training of Epoch 8 - loss: 254.414588
100% (544 of 544) | Elapsed Time: 0:09:02 Time: 0:09:02
I Training epoch 9...
I Training of Epoch 9 - loss: 247.973091
100% (544 of 544) | Elapsed Time: 0:09:01 Time: 0:09:01
I Validating epoch 9...
I Validation of Epoch 9 - loss: 287.961700
I FINISHED Optimization - training time: 1:30:26
100% (1 of 1) | Elapsed Time: 0:00:00 Time: 0:00:00
Preprocessing ['test_m.csv']
Preprocessing done
Computing acoustic model predictions...
100% (1 of 1) | Elapsed Time: 0:00:00 Time: 0:00:00
Decoding predictions...
100% (1 of 1) | Elapsed Time: 0:00:00 Time: 0:00:00
Test - WER: 0.846154, CER: 90.000000, loss: 287.961700
--------------------------------------------------------------------------------
WER: 0.846154, CER: 90.000000, loss: 287.961700
 - src: "no niin tarviis viela perua nii tana iltana kymmeneen mennessa ooksa muuten missa vaiheessa kuullut tost meidan autotarkastus kampanjasta joka on nyt meneillaan satanelkytyhdeksan euroa tarkastus"
 - res: "niin jos kavis sielta taa hintaan peruutusmaksu mutta missa lasku tai autotarkastus kampanja elanyt menee janne yhdeksan euron tarkastus "
--------------------------------------------------------------------------------
I Exporting the model...
I Models exported at ac_models/

So we have a model, and then I test it on the same audio as in the training-phase test… We would expect to see the same prediction, but…

From the command line:

deepspeech --model ac_models/output_graph.pb --alphabet alphabet/alphabet.txt --lm LM_models/m_zero_and_one_stuff_bigram.bin --trie tier/TRIE_2905 --audio /test/mchunk-28.wav

And results:

Loading model from file ac_models/output_graph.pb
TensorFlow: v1.12.0-10-ge232881
DeepSpeech: v0.4.1-0-g0e40db6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-05-30 10:58:39.044617: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-05-30 10:58:39.184976: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-05-30 10:58:39.185339: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.635
pciBusID: 0000:01:00.0
totalMemory: 10.73GiB freeMemory: 10.32GiB
2019-05-30 10:58:39.185349: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-05-30 10:58:39.751032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-30 10:58:39.751050: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2019-05-30 10:58:39.751053: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2019-05-30 10:58:39.751595: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9981 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5)
Loaded model in 0.714s.
Loading language model from files LM_models/m_zero_and_one_stuff_bigram.bin tier/TRIE_2905
Loaded language model in 0.000698s.
Warning: original sample rate (8000) is different than 16kHz. Resampling might produce erratic speech recognition.
Running inference.
2019-05-30 10:58:40.063889: W tensorflow/contrib/rnn/kernels/lstm_ops.cc:850] BlockLSTMOp is inefficient when both batch_size and input_size are odd. You are using: batch_size=1, input_size=375
2019-05-30 10:58:40.063906: W tensorflow/contrib/rnn/kernels/lstm_ops.cc:855] BlockLSTMOp is inefficient when both batch_size and cell_size are odd. You are using: batch_size=1, cell_size=375
minkalainen aika on mennaan
Inference took 0.462s for 5.865s audio file.

So, four words… And let's see what the Python client.py is doing:

TensorFlow: v1.12.0-10-ge232881
DeepSpeech: v0.4.1-0-g0e40db6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-05-30 14:43:12.873604: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-05-30 14:43:12.990533: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-05-30 14:43:12.990895: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.635
pciBusID: 0000:01:00.0
totalMemory: 10.73GiB freeMemory: 10.32GiB
2019-05-30 14:43:12.990905: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-05-30 14:43:13.310039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-30 14:43:13.310056: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-05-30 14:43:13.310060: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-05-30 14:43:13.310182: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9980 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5)
Loaded model in 0.443s.
Loaded language model in 0.000482s.
Running inference.
2019-05-30 14:43:13.546050: W tensorflow/contrib/rnn/kernels/lstm_ops.cc:850] BlockLSTMOp is inefficient when both batch_size and input_size are odd. You are using: batch_size=1, input_size=375
2019-05-30 14:43:13.559923: W tensorflow/contrib/rnn/kernels/lstm_ops.cc:855] BlockLSTMOp is inefficient when both batch_size and cell_size are odd. You are using: batch_size=1, cell_size=375
2019-05-30 14:43:13.584934: W tensorflow/contrib/rnn/kernels/lstm_ops.cc:855] BlockLSTMOp is inefficient when both batch_size and cell_size are odd. You are using: batch_size=1, cell_size=375
aikataulussa saako sen takia lisaksi
Inference took 0.394s for 11.730s audio file.

Same audio, same model, and different results… What should I try next? The 0.5 version perhaps? I thought it was the order inside the alphabet that would cause the different results, but in this case the alphabet.txt order is the same.

Your loss value is very, very high; I'm not sure how reliable this is.

Yes, it is. But with a loss that high I still get those few words I am after. Still, if that high-loss model gives all those words in the test phase, how come it does not give the same inference when I run it on exactly the same wav?

I don't know, but we have not seen your client.py. Are the results consistent over each run, at least?

Wait, what is this?

The client.py you posted above does have some code to handle resampling, yet in the log you posted it does not print the sample rate conversion warning. Did you remove the sample rate conversion code?

That test wav is 8000 Hz, and the training material is 8000 Hz. I have played around with the client's resampling code… I have tried keeping the upsampling from 8000 Hz to 16000 Hz, and I have tried keeping it at 8000 Hz (so I either skip the resampling part or let the code do the conversion from 8000 to 16000)… I do get different results depending on that, but still the same number of words, and not even close to the result I am after (the test-phase result: a long sentence, not just a few words)…

Don't. If you're testing reproducibility, just convert everything and keep it converted on disk, do all the conversions with the same tool and the same parameters, then pass the same file to all the different clients, and make sure no automatic resampling is happening.
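
For instance, before handing the file to any of the clients, you can check what is actually on disk with the standard wave module (a quick sketch; adjust the path to your file):

import wave

# Check what is really on disk before any client touches it.
w = wave.open("/test/mchunk-28.wav", "rb")
print("sample rate :", w.getframerate())   # expect 8000 for this material
print("channels    :", w.getnchannels())   # expect 1 (mono)
print("sample width:", w.getsampwidth())   # expect 2 bytes (16-bit PCM)
w.close()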

The test wav is 8000 Hz. The training material is 8000 Hz… In the Python client.py I can let it upsample to 16000 or skip that part of the code and leave it at 8000 Hz… Both options give slightly different results, but only a few words, not the long sentence I am after…

Oh, wait, if the training material is 8000 Hz you should definitely not be upsampling, but that requires modifying the client to pass the native (8000 Hz) sample rate to the API. So it's expected that you'll get different results with resampling.
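
Roughly something like this, assuming the 0.4.1 Python API where Model.stt() takes the raw 16-bit buffer plus its sample rate; the feature/decoder constants below are placeholders and must match whatever your training/evaluation run used:

import wave
import numpy as np
from deepspeech import Model

# Placeholders; must match the values used during training/evaluation.
N_FEATURES, N_CONTEXT, BEAM_WIDTH = 26, 9, 500
LM_ALPHA, LM_BETA = 0.75, 1.85

ds = Model("ac_models/output_graph.pb", N_FEATURES, N_CONTEXT,
           "alphabet/alphabet.txt", BEAM_WIDTH)
ds.enableDecoderWithLM("alphabet/alphabet.txt",
                       "LM_models/m_zero_and_one_stuff_bigram.bin",
                       "tier/TRIE_2905", LM_ALPHA, LM_BETA)

w = wave.open("/test/mchunk-28.wav", "rb")
fs = w.getframerate()                      # 8000 for this file
audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
w.close()

# Pass the native rate instead of resampling to 16 kHz.
print(ds.stt(audio, fs))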