Anyone able to get TF Serving to work with the custom CTC beam op using the LM model?

Hello,

I am serving a model trained with DeepSpeech in TF Serving just fine. However, without the custom decoder my WER is around 16%. I exported the model using the custom decoder, but when I loaded it into TF Serving I received:

```
Loading servable: {name: speech_detection version: 1} failed: Not found: Op type not registered 'CTCBeamSearchDecoderWithLM' in binary running on c41d408c3a94. Make sure the Op and Kernel are registered in the binary running in this process.
```

Has anyone resolved this issue?

I figured I could attempt a rebuild of TF Serving with the .so files included in the BUILD file, but wanted advice before I started investigating. Something like the sketch below is what I had in mind.
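A minimal sketch of the BUILD change, assuming a `cc_library` target added to `tensorflow_serving/model_servers/BUILD` and then listed in the deps of the `tensorflow_model_server` binary; the target name and source file name are guesses on my part:

```python
# Hypothetical target in tensorflow_serving/model_servers/BUILD.
# alwayslink keeps the op/kernel registration symbols in the final
# binary even though nothing references them directly.
cc_library(
    name = "ctc_decoder_with_lm",
    srcs = ["ctc_beam_search_decoder_with_lm.cc"],  # assumed source file
    deps = [
        "@org_tensorflow//tensorflow/core:framework",
        "@org_tensorflow//tensorflow/core:lib",
    ],
    alwayslink = 1,
)

# ...with ":ctc_decoder_with_lm" then added to the deps of the
# tensorflow_model_server binary target in the same BUILD file.
```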

Thanks,
Dom

P.S.
I understand TF Serving is not supported. I am just wondering if anyone has encountered this. I am happy to make a PR to the docs explaining how to get the model working in Serving without (and hopefully with) the decoder.

Loading the module requires specific handling code in DeepSpeech.py; you likely have to do something similar for Serving?
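For reference, the handling is essentially a `tf.load_op_library` call made before the graph is built, along these lines (the .so path is from memory; point it at your own build output):

```python
import tensorflow as tf

# Load the shared library so the 'CTCBeamSearchDecoderWithLM' op and
# its kernel get registered with the runtime before the graph runs.
# The path below is an assumption; adjust it to your build output.
decoder_module = tf.load_op_library(
    'native_client/libctc_decoder_with_kenlm.so')
```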


I think I will probably do as you say and implement the decoder after I receive the response from the call to TF Serving. That way I won't have to manage TF Serving builds etc.; I can just run the decoding logic with the custom op in a TF session in my app code.
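Roughly what I have in mind, sketched with the stock `tf.nn.ctc_beam_search_decoder` to show the session plumbing; the custom LM op, once loaded with `tf.load_op_library`, would slot into the same place. Shapes and beam width here are assumptions:

```python
import tensorflow as tf

def decode_serving_response(logits, sequence_lengths):
    """Decode logits returned by TF Serving in a local session.

    logits: float32 array, [max_time, batch_size, num_classes].
    sequence_lengths: int32 array, [batch_size].
    """
    with tf.Graph().as_default(), tf.Session() as session:
        inputs = tf.constant(logits, dtype=tf.float32)
        seq_len = tf.constant(sequence_lengths, dtype=tf.int32)
        # Stock beam search shown for the plumbing; the custom LM op
        # loaded via tf.load_op_library would replace this call.
        decoded, log_probs = tf.nn.ctc_beam_search_decoder(
            inputs, seq_len, beam_width=1024, top_paths=1)
        return session.run([decoded[0], log_probs])
```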

I will let you know how it goes