Running multiple inferences in parallel on a single machine?

@praveeny1986 You should really give GPUs a try. With properly designed software (not the demo client), inference runs faster than real time, so you can likely get better results.
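
For reference, here is a minimal sketch of one way to run several inferences in parallel on a single machine with a process pool, where each worker loads the model once and reuses it. `load_model` and `transcribe` are hypothetical stand-ins for whatever inference API you actually use; on a GPU, a single properly batched stream is often faster than many parallel processes, so treat this only as a starting point:

```python
# Minimal sketch: parallel inference with one model instance per worker process.
# load_model() and transcribe() below are hypothetical stubs -- replace them
# with the loader and inference call from your actual library.
import concurrent.futures
import os

def load_model(model_path):
    # Hypothetical stand-in: replace with your inference library's model loader.
    return model_path

def transcribe(model, audio_path):
    # Hypothetical stand-in: replace with the real inference call.
    return f"<transcript of {audio_path}>"

_model = None  # per-process model instance

def _init_worker(model_path):
    # Load the model once per worker so it is not re-loaded for every file.
    global _model
    _model = load_model(model_path)

def _run_one(audio_path):
    # Run a single inference using this worker's already-loaded model.
    return audio_path, transcribe(_model, audio_path)

def transcribe_all(audio_paths, model_path, workers=None):
    workers = workers or os.cpu_count()
    with concurrent.futures.ProcessPoolExecutor(
        max_workers=workers,
        initializer=_init_worker,
        initargs=(model_path,),
    ) as pool:
        return dict(pool.map(_run_one, audio_paths))

if __name__ == "__main__":
    results = transcribe_all(["a.wav", "b.wav", "c.wav"], "model_file")
    for path, text in results.items():
        print(path, "->", text)
```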

Yeah, I am going to do that next :slight_smile: