Hi. When I try to test the model I trained, the following error appears:
Loading model from file ./traincmd/output_graph.pb
TensorFlow: v1.11.0-9-g97d851f
DeepSpeech: v0.3.0-0-gef6b5bd
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2018-11-07 02:07:57.031685: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: FMA
Invalid argument: No OpKernel was registered to support Op 'Softmax' with these attrs. Registered devices: [CPU], Registered kernels:
Traceback (most recent call last):
  File "/home/crs/tmp/deepspeech-venv/bin/deepspeech", line 11, in <module>
    sys.exit(main())
  File "/home/crs/tmp/deepspeech-venv/local/lib/python2.7/site-packages/deepspeech/client.py", line 81, in main
    ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
  File "/home/crs/tmp/deepspeech-venv/local/lib/python2.7/site-packages/deepspeech/__init__.py", line 14, in __init__
    raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 3
But when I try an already-trained model, I have no problem.
Does anyone know a solution? Thank you.
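For context, here is roughly how the failing call is reached, based on the traceback above. This is only a sketch: the N_FEATURES / N_CONTEXT / BEAM_WIDTH values are what I understand the 0.3.0 client uses, and the alphabet path is a placeholder for my own file.

    # Sketch of the call that fails, mirroring deepspeech/client.py from the traceback.
    from deepspeech import Model

    N_FEATURES = 26   # assumed 0.3.0 client defaults
    N_CONTEXT = 9
    BEAM_WIDTH = 500

    # CreateModel runs inside this constructor; it returns error code 3 when the
    # graph needs an op/kernel that the installed binary does not ship.
    ds = Model('./traincmd/output_graph.pb', N_FEATURES, N_CONTEXT,
               './traincmd/alphabet.txt', BEAM_WIDTH)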
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
I suspect you did the training with master? Either check out v0.3.0 for the training, or use the v0.4.0-alpha.0 binaries, e.g., pip install deepspeech==0.4.0a0
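Something like this, assuming a local checkout of the DeepSpeech repo (paths are illustrative):

    # Option 1: retrain against the code that matches the released 0.3.0 binaries
    cd DeepSpeech
    git checkout v0.3.0
    # ...then re-run the training...

    # Option 2: keep the current training and install matching runtime binaries
    pip install deepspeech==0.4.0a0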
I've trained the model with the "transfer-learning" branch, and when trying to run inference with the "v0.3.0" or "v0.4.0-alpha.0" binaries, I get the same error as above:
Invalid argument: No OpKernel was registered to support Op 'Softmax' with these attrs. Registered devices: [CPU,GPU], Registered kernels:
OK… though I was a bit specific in the question above, what I wanted to know is whether there is a way
to train a model on, say, 'master' and run it using the "v0.3.0" or "v0.4.0-alpha.0" binaries.
Essentially, the question is:
"Is there any workaround for the above Softmax-related error when training and running across different branches?"
Thanks.
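For what it's worth, here is a generic TensorFlow 1.x snippet (not DeepSpeech-specific) I use to list the ops contained in an exported graph, to check whether the graph actually pulls in Softmax; the graph path is just my local one:

    # List the ops in a frozen graph to see which kernels the runtime must provide.
    import tensorflow as tf

    graph_def = tf.GraphDef()
    with tf.gfile.GFile('./traincmd/output_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())

    ops = sorted(set(node.op for node in graph_def.node))
    print('\n'.join(ops))  # look for 'Softmax' in this list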
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
Softmax should be in the latest 0.4.0-alpha.0 or master binaries. I have not looked at the changes to the model on that branch; maybe it's just not compatible and the branch does not contain the needed changes in the native client? cc @josh_meyer
Great!
The Softmax op is present in 0.4.0-alpha.0, so the model trained using the transfer-learning branch also works with the 0.4.0-alpha.0 binaries.
The issue only occurs when trying to run it with the 0.3.0 binaries.
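In case it helps anyone reproducing this: I keep one virtualenv per binary version so I can check the same graph against each runtime (names and paths are just examples):

    # One environment per DeepSpeech release
    virtualenv ~/tmp/ds-0.3.0
    . ~/tmp/ds-0.3.0/bin/activate
    pip install deepspeech==0.3.0
    deactivate

    virtualenv ~/tmp/ds-0.4.0a0
    . ~/tmp/ds-0.4.0a0/bin/activate
    pip install deepspeech==0.4.0a0
    deactivate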
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
That's exactly what I said earlier, except that in your first messages you said you had tested all versions and it was never working.
Regarding Softmax: if you confirm it works with 0.4.0-alpha.0 as it should, then I don't see what needs to be done here; that's what we expect.