Issue: Feature request: streaming decoder (fast DS_IntermediateDecode calls)

With reference to this issue and the pull request that closes it.
I am wondering if there is an implementation of this in the Python interface of DeepSpeech. I am not very good with C++, and I have been trying to find a sample implementation of the streaming recognizer for DeepSpeech's Python interface. Can someone please point me in the right direction?

Thanks a lot!
Yours

The API is the same; the only difference is that DS_IntermediateDecode is now fast enough to be usable.

The Python method that calls DS_IntermediateDecode is intermediateDecode().

Here is a code sample showing this:

import time
import numpy as np
from scipy.io import wavfile

# 'ds' is an already-initialized DeepSpeech Model instance (see the sketch below)
fs, frames = wavfile.read('abc.wav')  # expects a 16 kHz, 16-bit mono WAV
stream_context = ds.setupStream()
start = 0
start_time = time.time()
# Feed the audio in 4000-sample chunks (0.25 s at 16 kHz) and print an
# intermediate transcript after each chunk
while start < frames.shape[0]:
    ds.feedAudioContent(stream_context, np.frombuffer(frames[start:start+4000], np.int16))
    start += 4000
    print('Done ' + str(start/16000) + ": " + str(time.time() - start_time) + " seconds")
    print("Recognized: %s" % ds.intermediateDecode(stream_context))
text = ds.finishStream(stream_context)
print("Recognized: %s" % text)