Possible mismatch error

I am getting the following error:

Traceback (most recent call last):
  File "/home/sayantan/anaconda3/envs/deepspeech_0_5_train/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1296, in restore
    names_to_keys = object_graph_key_mapping(save_path)
  File "/home/sayantan/anaconda3/envs/deepspeech_0_5_train/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1614, in object_graph_key_mapping
    object_graph_string = reader.get_tensor(trackable.OBJECT_GRAPH_PROTO_KEY)
  File "/home/sayantan/anaconda3/envs/deepspeech_0_5_train/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 678, in get_tensor
    return CheckpointReader_GetTensor(self, compat.as_bytes(tensor_str))
tensorflow.python.framework.errors_impl.NotFoundError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "DeepSpeech.py", line 931, in <module>
  File "/home/sayantan/anaconda3/envs/deepspeech_0_5_train/lib/python3.6/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/home/sayantan/anaconda3/envs/deepspeech_0_5_train/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
  File "DeepSpeech.py", line 915, in main
  File "DeepSpeech.py", line 549, in train
    loaded = try_loading(session, checkpoint_saver, 'checkpoint', 'most recent')
  File "DeepSpeech.py", line 403, in try_loading
    saver.restore(session, checkpoint_path)
  File "/home/sayantan/anaconda3/envs/deepspeech_0_5_train/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1302, in restore
    err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key cond_1/beta1_power not found in checkpoint
  [[node save/RestoreV2 (defined at DeepSpeech.py:489) ]]

This is possibly a version mismatch issue.
I'm using the public release "deepspeech-0.5.1-checkpoint", and I just pulled the latest repo to use the augmentation flags, which may be causing the mismatch. I'm sure this has been solved somewhere. Can you please suggest where to look?
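A `NotFoundError: Key ... not found in checkpoint` generally means the graph built by the current code expects variable names that the older checkpoint does not contain. As a rough illustration (not DeepSpeech code; the key names below are stand-ins, not real checkpoint contents), you can diagnose the mismatch by comparing the two sets of names. With TensorFlow installed, the checkpoint side would come from `tf.train.list_variables(checkpoint_path)`:

```python
# Stand-in for the keys stored in an older checkpoint
# (in practice: {name for name, shape in tf.train.list_variables(path)})
checkpoint_keys = {
    "layer_1/weights",
    "lstm_fused_cell/kernel",
    "beta1_power",
}

# Stand-in for the keys the graph built from current code expects
graph_keys = {
    "layer_1/weights",
    "cudnn_compatible_lstm_cell/kernel",
    "cond_1/beta1_power",
}

# Keys the restore will fail on, and keys the checkpoint has but the graph ignores
missing_from_checkpoint = sorted(graph_keys - checkpoint_keys)
present_only_in_checkpoint = sorted(checkpoint_keys - graph_keys)

print("Missing from checkpoint:", missing_from_checkpoint)
print("Present only in checkpoint:", present_only_in_checkpoint)
```

Any name in the first list will reproduce exactly this kind of `Key ... not found in checkpoint` error at restore time.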

On running:


This works. Hence, I believe the issue is with the checkpoint. Can I use the latest pull:

git clone https://github.com/mozilla/DeepSpeech

And use the checkpoint released on 0.5.1?

No, that combination is not supported out of the box. You need to apply: https://gist.github.com/reuben/b68b9085f7b293580f8431156a33daa9

@sayantangangs.91 please check again, the previous link was not the right one.

Thanks. Shall look into these.

Hey @lissyx. Thank you, got it. I was running training (for 4 days) and couldn't see this. I saw the link now. Could you explain what the fixup function is doing, i.e. why it replaces "lstm_fused_cell" with "cudnn_compatible_lstm_cell"?

It makes weights compatible.
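Concretely, the idea behind such a fixup is to rewrite the variable names under which the old weights are stored, so that they match the scopes of the cell the new graph builds. This is only a minimal sketch of that renaming idea under my own assumptions (the function name and example keys are hypothetical; it is not the actual code from the gist):

```python
# Hypothetical sketch: map an old checkpoint variable name onto the scope
# the newer graph uses. Older graphs used an LSTMBlockFusedCell scope
# ("lstm_fused_cell"); newer code builds a CudnnCompatibleLSTMCell, so the
# scope in the variable name differs.
def fixup_name(name):
    return name.replace("lstm_fused_cell", "cudnn_compatible_lstm_cell")

old_keys = [
    "bidirectional_rnn/fw/lstm_fused_cell/kernel",
    "bidirectional_rnn/fw/lstm_fused_cell/bias",
]

new_keys = [fixup_name(k) for k in old_keys]
print(new_keys)
```

In a real migration you would read each tensor from the old checkpoint, save it back under the renamed key, and write a new checkpoint; the sketch above only shows the name mapping itself.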


Note that the patch will make it so the weights can be loaded, but that’s it. The weights are still not appropriate for the code on master so the results will be very poor.

I get that. Thank you. I won't work on that right now, then.