Could not create model: Invalid argument: No OpKernel was registered to support Op 'RealDiv' used by {{node linear_to_mel_weight_matrix/truediv}} with these attrs: [T=DT_FLOAT]

Hi Team,
I am using DeepSpeech: v0.6.0-alpha.5-26-g896ac9d. I git cloned this using git LFS and started training on my corpus and the training went good with theaugmentations included. However when I tried to export my model from the best dev checkpoint and then ran an inference on audio file using deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --audio my_audio_file.wav, I got this error:

Invalid argument: No OpKernel was registered to support Op 'RealDiv' used by {{node linear_to_mel_weight_matrix/truediv}} with these attrs: [T=DT_FLOAT]
Registered devices: [CPU, GPU]
Registered kernels:


Could not create model.

I am kind of stuck here. Any insights into why this might be happening?
@bernardohenz

Kindly help

Can you share the full output?

I fear this is related. Can you do a quick test (1 epoch, not much data) without those augmentations and see if you can run the model?

Hi, thanks for the prompt reply!
Yes I did the exact same thing and got the same error.
Took a relatively small dataset and ran the training for 3 epochs.
Using the exported model I ran the following command
--model model_test/output_graph.pb --alphabet speech_data/speech_commands_train/speech_commands_lm/alphabet.txt --lm speech_data/speech_commands_train/speech_commands_lm/lm3.binary --trie speech_data/speech_commands_train/speech_commands_lm/trie3 --audio /home/sumegha/ds/content/datalab/data/deepspeech/speech_commands_train/speech_commands/down/257e17e0_nohash_0.wav

Pardon the long data paths!

And I still ended up with the same error!
Any insights?

This is not an op we are using, so there's some missing element here. Have you made any changes to DeepSpeech? Is this using TensorFlow r1.14?

Yes, I am using TF 1.14. I just changed the input features from MFCC to MFSC, and made some dimensionality changes in

It would have been nice to document that from the beginning… You completely changed the graph, then. You need to adapt native_client/BUILD and rebuild so that the matching ops are included.

Care to document?

It's nice to experiment, but when asking for help with errors, it would have been even nicer to document those changes up front: it would have saved people a lot of time.

I put up a guide here on how to adjust the dependencies if you add new ops, hopefully it’s useful:

I think you can also just add `//tensorflow/core:ops` and `//tensorflow/core:all_kernels` to include everything, if you're just experimenting. But note that it'll substantially increase the size of

Thanks a ton for the help, Reuben! Could you please help me with where exactly I have to add `//tensorflow/core:ops` and `//tensorflow/core:all_kernels`?

I am trying to figure out the TensorFlow operations; I might need some help there.
Thanks a ton again!

The guide I linked above explains this. It goes in the libdeepspeech dependencies in native_client/BUILD.
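For anyone landing here later, a rough sketch of what that change looks like. The two targets are the ones suggested above; the surrounding rule contents are illustrative and will differ between DeepSpeech versions, so treat this as a shape, not a copy-paste patch:

```python
# native_client/BUILD (Bazel/Starlark) -- illustrative fragment only.
cc_library(
    name = "libdeepspeech",
    # ... existing srcs, copts, etc. unchanged ...
    deps = [
        # ... existing op-stripped TensorFlow deps ...
        "//tensorflow/core:ops",          # registers all op definitions
        "//tensorflow/core:all_kernels",  # registers all kernel implementations
    ],
)
```

After editing, rebuild libdeepspeech with Bazel so the new kernels (including RealDiv) are linked into the native client.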

Got that! And the model is up and running!!
Thanks a ton team! :slight_smile:


Good to know, it’d be interesting to have feedback on your changes.

The features that DeepSpeech currently uses are MFCCs. Put simply, if we skip the DCT step, the resulting features are called log-mel spectrograms, or Mel-Frequency Spectral Coefficients (MFSC). After some research we found that they work better, so we implemented them instead of MFCCs, which required some dimensionality changes in the augmentation pipeline to get better results.
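To make the MFCC/MFSC distinction concrete for other readers: MFSC are the log-mel filterbank energies, and MFCC are just the DCT of those same energies. A minimal NumPy sketch (filter counts, frame shapes, and the toy filterbank are illustrative, not DeepSpeech's actual feature code):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sample_rate):
    # Triangular filters spaced evenly on the mel scale.
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0),
                             n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def dct_ii(x, n_out):
    # DCT-II along the last axis via an explicit basis matrix.
    n = x.shape[-1]
    k = np.arange(n_out)[:, None]
    basis = np.cos(np.pi * k * (2 * np.arange(n) + 1) / (2 * n))
    return x @ basis.T

def features(frames, sample_rate=16000, n_filters=26, n_mfcc=13, use_mfsc=True):
    # frames: (num_frames, frame_len) array of windowed audio frames.
    n_fft = frames.shape[-1]
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    mel_energies = power @ mel_filterbank(n_filters, n_fft, sample_rate).T
    log_mel = np.log(mel_energies + 1e-10)   # MFSC / log-mel spectrogram
    if use_mfsc:
        return log_mel                       # skip the DCT -> MFSC
    return dct_ii(log_mel, n_mfcc)           # DCT decorrelates -> MFCC
```

Swapping MFCC for MFSC changes the feature dimensionality (e.g. 26 filterbank channels instead of 13 cepstral coefficients here), which is why the input layer and augmentation pipeline need matching changes.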
You guys have done exceptional work already!! Thanks a ton for all the help and documentation. :smiley:

I already attached the entire code while discussing with bernardohenz.

Sorry, you never mentioned that, so I missed it.