DeepSpeech: intermediateDecode() vs. FinishStream()

Hi, I’d like to know the difference between
model.intermediateDecode() and model.finishStream().

It looks like they both call model_->decode(decoder_state_), and the only difference is whether finalizeStream(), which calls processBatch(), runs first [1].
[1] https://github.com/mozilla/DeepSpeech/blob/e99b938ebfb0634668553d00b0c9cded2503d234/native_client/deepspeech.cc#L171

processBatch() calls model_->infer(), but what does that do?
Does the infer call make finishStream heavier and more accurate than intermediateDecode, or is the difference only upper-layer housekeeping such as buffer cleanup?

Thanks in advance.

DS_IntermediateDecode returns the transcription accumulated so far and leaves the stream open. DS_FinishStream runs any leftover buffered audio through the model, returns the final transcription, and closes the stream permanently.
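To make that concrete, here is a toy sketch (not the real DeepSpeech code) of why the two calls can return different results: audio is inferred in fixed-size batches, so a partial batch can sit in the buffer until the stream is finalized. The names ToyStream, BATCH_SIZE, infer() and decode() are hypothetical stand-ins for the internals in deepspeech.cc (model_->infer(), model_->decode(decoder_state_)):

```python
BATCH_SIZE = 16  # frames per inference batch (made-up value)

class ToyStream:
    def __init__(self):
        self.buffer = []         # frames not yet run through the acoustic model
        self.decoder_state = []  # accumulated per-frame outputs
        self.closed = False

    def infer(self, batch):
        # stand-in for the acoustic-model forward pass (model_->infer)
        return [f * 2 for f in batch]

    def decode(self):
        # stand-in for decoding the accumulated state (model_->decode)
        return sum(self.decoder_state)

    def feed(self, frames):
        self.buffer.extend(frames)
        # only full batches are inferred; a partial batch stays buffered
        while len(self.buffer) >= BATCH_SIZE:
            batch, self.buffer = self.buffer[:BATCH_SIZE], self.buffer[BATCH_SIZE:]
            self.decoder_state.extend(self.infer(batch))

    def intermediate_decode(self):
        # decodes only what has already been inferred; buffer is untouched
        return self.decode()

    def finish_stream(self):
        # finalize: infer the leftover partial batch, decode, close the stream
        if self.buffer:
            self.decoder_state.extend(self.infer(self.buffer))
            self.buffer = []
        result = self.decode()
        self.closed = True  # the stream cannot be fed again
        return result

s = ToyStream()
s.feed(list(range(20)))          # 16 frames inferred, 4 left in the buffer
print(s.intermediate_decode())   # covers only the 16 inferred frames
print(s.finish_stream())         # flushes the remaining 4, then decodes
```

So in this sketch the extra cost of finish_stream is at most one inference batch over the leftover frames; everything already inferred is shared with intermediate_decode.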

Thank you for the answer. So are their accuracy and execution time almost the same in terms of neural-network computation cost?