Using MFCC from another code for DeepSpeech decoding

I would like to decode with DeepSpeech using MFCC features generated by another program.
On GitHub I see that `audiofile_to_input_vector` is used to compute MFCCs in DeepSpeech.py, but I cannot find this function in the STT 0.7.4 or DeepSpeech 0.7.4 release that I downloaded. Please advise/direct me on how to change the code so that it can read MFCCs from, say, a file.

Please understand that while we are happy for you to re-use our code, we can’t support third-party usages of it.

The MFCCs are computed directly within the model, by TensorFlow.
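For reference, this is roughly what an MFCC front end does, sketched in plain NumPy. This is an illustration of the general algorithm (framing, power spectrum, mel filterbank, log, DCT), not DeepSpeech’s exact TensorFlow pipeline; the window sizes and coefficient counts below are common defaults, not values taken from the DeepSpeech source.

```python
import numpy as np

def mfcc_sketch(signal, sample_rate=16000, frame_len=0.025, frame_step=0.01,
                n_fft=512, n_mels=26, n_ceps=13):
    """Minimal MFCC computation (illustrative only)."""
    # 1. Slice the signal into overlapping frames and apply a Hamming window.
    frame_size = int(round(frame_len * sample_rate))
    step = int(round(frame_step * sample_rate))
    n_frames = 1 + max(0, (len(signal) - frame_size) // step)
    idx = np.arange(frame_size)[None, :] + step * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_size)

    # 2. Power spectrum of each frame.
    power = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft

    # 3. Triangular mel filterbank, then log energies.
    def hz_to_mel(hz): return 2595.0 * np.log10(1.0 + hz / 700.0)
    def mel_to_hz(mel): return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fbank[m - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)

    # 4. DCT-II to decorrelate; keep the first n_ceps coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_energy @ dct.T  # shape: (n_frames, n_ceps)
```

The important point is that in DeepSpeech 0.7.x this whole front end lives inside the TensorFlow graph, so there is no Python function like `audiofile_to_input_vector` to intercept in the release code.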

Thank you for your response.
I am trying to enhance speech with another program, and would therefore need to feed MFCCs to DeepSpeech as input instead of wave files. From some reading, I gather this was possible in earlier versions. If it is not supported now, I understand. Thank you once again.

It should be possible; the code has simply been moved/refactored. What lissyx is saying is that you can’t expect us to provide support for custom modifications like that.
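If you do modify the feeding code yourself, one simple approach is to have the external enhancement tool write its features to disk and have your patched pipeline load them instead of decoding audio. The helpers below are a hypothetical sketch of that hand-off; the file format, the function names, and the assumption of 26 coefficients per frame are all mine, not part of the DeepSpeech API, so check them against the checkpoint you are actually using.

```python
import numpy as np

def save_features(path, feats):
    """Write a (time_steps, n_coefficients) feature matrix to a .npy file."""
    feats = np.asarray(feats, dtype=np.float32)
    assert feats.ndim == 2, "expected a 2-D (frames x coefficients) array"
    np.save(path, feats)

def load_features(path, expected_coeffs=26):
    """Load features back, checking the per-frame width matches the model.

    expected_coeffs=26 is an assumption; it must equal the feature width
    the model was trained with, or decoding will produce garbage.
    """
    feats = np.load(path)
    if feats.ndim != 2 or feats.shape[1] != expected_coeffs:
        raise ValueError(f"feature shape {feats.shape} does not match "
                         f"expected width {expected_coeffs}")
    return feats
```

The design point is to keep the interchange format dumb (a plain float32 matrix) so the only code you have to touch inside DeepSpeech is the spot where the graph would otherwise compute MFCCs from audio.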