Inference with a model trained at a sample rate other than 16 kHz

Instead of creating a new topic for the same case, I will ask in this old one:

Is there anyone who has managed to run inference with good results after training an 8 kHz model?

I tried with the new 0.6.0 version, and even though the client.py script now automatically detects the model's sampling rate, the inference results differ significantly from what I get during the test phase (they are much worse).
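In case it helps with comparing results, here is a minimal sketch of the sanity check I mean, assuming the 0.6.0-style Python API (`Model(model_path, beam_width)` and `sampleRate()`); the file names are just placeholders. If the WAV fed to the client does not match the model's rate, client.py resamples it (via sox, as far as I can tell), and that implicit conversion could account for part of the gap versus the test phase.

```python
# Minimal sketch (0.6.0-style Python API assumed; file names are placeholders):
# verify that the WAV fed at inference matches the rate the model was trained
# with before comparing results against the test phase.
import sys
import wave

import numpy as np
from deepspeech import Model

MODEL_PATH = "output_graph_8khz.pbmm"   # hypothetical 8 kHz export
AUDIO_PATH = "sample_8khz.wav"          # hypothetical 8 kHz test clip

ds = Model(MODEL_PATH, 500)             # 500 = default beam width in client.py

with wave.open(AUDIO_PATH, "rb") as fin:
    fs = fin.getframerate()
    audio = np.frombuffer(fin.readframes(fin.getnframes()), np.int16)

if fs != ds.sampleRate():
    # client.py would resample here; doing it implicitly can hide a mismatch
    # between the training data and what actually reaches the model.
    print(f"Sample rate mismatch: wav={fs} Hz, model={ds.sampleRate()} Hz",
          file=sys.stderr)
else:
    print(ds.stt(audio))                # transcript when the rates match
```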

Of course, it is nowhere stated that DeepSpeech is now fully compatible with data at a sampling rate other than 16 kHz. I just wanted to ask whether anyone has managed to use such a model (sampling rate != 16000) successfully.