InvalidArgumentError at validation step

I am training a Czech model with my own dataset. I set both the validation step and the display step to 5. After about a week of training I start getting output, and then at the fifth epoch I get an error with this trace:

STDOUT:

I STARTING Optimization
I Training of Epoch 0 - loss: inf
I Training of Epoch 1 - loss: 68.523875
I Training of Epoch 2 - loss: 52.459619
I Training of Epoch 3 - loss: 42.462033
I Training of Epoch 4 - loss: 35.077793
E not all arguments converted during string formatting
E You must feed a value for placeholder tensor 'Queue_Selector' with dtype int32
E        [[Node: Queue_Selector = Placeholder[dtype=DT_INT32, shape=<unknown>, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
E 
E Caused by op u'Queue_Selector', defined at:
E   File "./DeepSpeech.py", line 1838, in <module>
E     tf.app.run()
E   File "~/virtualenv/deepspeech/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 124, in run
E     _sys.exit(main(argv))
E   File "./DeepSpeech.py", line 1795, in main
E     train()
E   File "./DeepSpeech.py", line 1489, in train
E     tower_feeder_count=len(available_devices))
E   File "~/git/DeepSpeech/util/feeding.py", line 43, in __init__
E     self.ph_queue_selector = tf.placeholder(tf.int32, name='Queue_Selector')
E   File "~/virtualenv/deepspeech/local/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1680, in placeholder
E     return gen_array_ops._placeholder(dtype=dtype, shape=shape, name=name)
E   File "~/virtualenv/deepspeech/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3141, in _placeholder
E     "Placeholder", dtype=dtype, shape=shape, name=name)
E   File "~/virtualenv/deepspeech/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
E     op_def=op_def)
E   File "~/virtualenv/deepspeech/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
E     op_def=op_def)
E   File "~/virtualenv/deepspeech/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
E     self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
E 
E InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Queue_Selector' with dtype int32
E        [[Node: Queue_Selector = Placeholder[dtype=DT_INT32, shape=<unknown>, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
E 
E The checkpoint in ~/dsasr/temp/checkpoints does not match the shapes of the model. Did you change alphabet.txt or the --n_hidden parameter between train runs using the same checkpoint dir? Try moving or removing the contents of ~/dsasr/temp/checkpoints.

STDERR:

Traceback (most recent call last):
  File "./DeepSpeech.py", line 1660, in train
    job = COORD.next_job(job)
  File "./DeepSpeech.py", line 1428, in next_job
    if epoch.done():
  File "./DeepSpeech.py", line 1045, in done
    print(FLAGS.wer_log_pattern % (time, self.set_name, self.wer))
TypeError: not all arguments converted during string formatting

Here is my training command:

./DeepSpeech.py \
    --alphabet_config_path "$ASRH/res/alphabet.txt" \
    --checkpoint_dir "$ASRH/temp/checkpoints" \
    --checkpoint_secs 900 \
    --decoder_library_path "$DECODER_LIB" \
    --dev_batch_size 80 \
    --dev_files "$ASRH/data/dev.csv" \
    --display_step 5 \
    --export_dir "$ASRH/model" \
    --fulltrace true \
    --lm_binary_path "$ASRH/data/lm/lm.binary" \
    --lm_trie_path "$ASRH/data/lm/trie" \
    --max_to_keep 3 \
    --summary_dir "$ASRH/temp/summaries" \
    --summary_secs 900 \
    --test_batch_size 40 \
    --test_files "$ASRH/data/test.csv" \
    --train_batch_size 80 \
    --train_files "$ASRH/data/train.csv" \
    --validation_step 5 \
    --wer_log_pattern "GLOBAL LOG: logwer(%%s, %%s, %%f)" \
    "$@"

Any help much appreciated.

Have you verified that? It seems you might have an earlier checkpoint there with a different geometry.

I started with an empty checkpoints dir and haven't touched anything since the start; I have tried this three times.

I removed the --wer_log_pattern option and the training finished successfully, exporting a model file.
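
For anyone hitting the same trace: the likely root cause is the doubled percent signs in --wer_log_pattern. Python's % operator treats %% as an escaped literal %, so the pattern ends up with no conversion specifiers, and the line print(FLAGS.wer_log_pattern % (time, self.set_name, self.wer)) in DeepSpeech.py raises the TypeError shown in STDERR. The Queue_Selector InvalidArgumentError is probably just collateral from the coordinator thread dying mid-epoch. Here is a minimal sketch reproducing it (the timestamp, set name, and WER values are made up for illustration):

pattern = "GLOBAL LOG: logwer(%%s, %%s, %%f)"
# %% is consumed as a literal %, leaving no conversion specifiers,
# so the three arguments have nowhere to go:
print(pattern % ("2018-05-01 10:00:00", "dev", 0.42))
# TypeError: not all arguments converted during string formatting

pattern = "GLOBAL LOG: logwer(%s, %s, %f)"
# With single percent signs the same call formats cleanly:
print(pattern % ("2018-05-01 10:00:00", "dev", 0.42))
# GLOBAL LOG: logwer(2018-05-01 10:00:00, dev, 0.420000)

So instead of dropping the flag entirely, passing single percent signs should also work (untested here; bash passes % through literally, so no escaping is needed on the shell side):

    --wer_log_pattern "GLOBAL LOG: logwer(%s, %s, %f)" \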