You can’t use the released model with DeepSpeech.py unless you make significant changes to it; there’s no command line parameter that expects a frozen model to be passed in. From the error, it looks like you passed --decoder_library_path models/models/output_graph.pb, when you should be passing the path to libctc_decoder_with_kenlm.so instead.
This would require writing a custom C++ operator that modifies our existing custom C++ operator for CTC decoding[1], and would be a significant amount of work.
It loads a frozen graph, runs inference on the given WAV file with the given model and alphabet, applies a softmax to the logits to get probabilities, and displays the most likely characters with their probabilities; “-” is used for the blank. Each time step’s prediction is on its own line, so the predicted text can be read down the columns.
E.g. the string “cat” could have these character predictions:
c k - (0.999957) (1.62438e-05) (6.99057e-06)
a - (0.999998) (1.05044e-06) (4.43978e-07)
t d - (0.999999) (4.27885e-07) (1.09088e-07)
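The softmax-and-print step described above can be sketched roughly as follows. This is a minimal illustration, not the actual script: it assumes a NumPy logits array of shape (time_steps, alphabet_size), and the alphabet list here is a hypothetical stand-in for whatever alphabet file the model uses.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax along the character axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical alphabet: lowercase letters plus "-" for the CTC blank.
ALPHABET = list("abcdefghijklmnopqrstuvwxyz") + ["-"]

def print_top_chars(logits, k=3):
    """Print the top-k characters and their probabilities, one time step per line."""
    probs = softmax(logits)
    for step in probs:
        top = step.argsort()[::-1][:k]           # indices of the k most likely characters
        chars = " ".join(ALPHABET[i] for i in top)
        ps = " ".join(f"({step[i]:.6g})" for i in top)
        print(chars, ps)
```

Reading the first column of each printed line top to bottom then recovers the greedy (blank-stripped) prediction, as in the “cat” example above.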