Hi! I don’t know of a “native” solution, but as a workaround you can use util/text.py.
You need to prepare ground-truth labels for all the files you run inference on, capture the DeepSpeech output, and pass both to wer_cer_batch(originals, results). Here is a rough sketch of what I mean.
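Something like this (a minimal sketch, assuming util/text.py in your DeepSpeech checkout exposes wer_cer_batch(originals, results) — the module location and signature may differ between DeepSpeech versions, and the transcripts here are just made-up placeholders):

```python
# Minimal sketch: compute batch WER/CER for DeepSpeech inference results.
# Assumes wer_cer_batch(originals, results) is importable from util/text.py;
# check your DeepSpeech version, as this may have moved or changed signature.
from util.text import wer_cer_batch

# Ground-truth labels for every file you run inference on,
# in the same order as the inference results.
originals = [
    "the quick brown fox",
    "jumps over the lazy dog",
]

# Captured DeepSpeech output for the same files
# (e.g. collected from the stdout of your deepspeech inference runs).
results = [
    "the quick brown fox",
    "jumps over the lazy dogs",
]

wer, cer = wer_cer_batch(originals, results)
print("WER: {:.4f}  CER: {:.4f}".format(wer, cer))
```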
Hope it helps! I’d still like to know about a built-in solution though.