Test with DeepSpeech fails


(Andres) #1

Hi. When I try to test the model I trained, the following error appears:

Loading model from file ./traincmd/output_graph.pb
TensorFlow: v1.11.0-9-g97d851f
DeepSpeech: v0.3.0-0-gef6b5bd
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2018-11-07 02:07:57.031685: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: FMA
Invalid argument: No OpKernel was registered to support Op 'Softmax' with these attrs. Registered devices: [CPU], Registered kernels:

 [[{{node Softmax}} = Softmax[T=DT_FLOAT](raw_logits)]]

Traceback (most recent call last):
File "/home/crs/tmp/deepspeech-venv/bin/deepspeech", line 11, in <module>
sys.exit(main())
File "/home/crs/tmp/deepspeech-venv/local/lib/python2.7/site-packages/deepspeech/client.py", line 81, in main
ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
File "/home/crs/tmp/deepspeech-venv/local/lib/python2.7/site-packages/deepspeech/__init__.py", line 14, in __init__
raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 3

But when I try an already-trained (released) model, I have no problem.
Does anyone know a solution? Thank you

#ubuntu 16.04lts
#tensorflow 1.11
#deepspeech 0.3.0
#python 2.7


(Lissyx) #2

I suspect you did the training with master? Either check out v0.3.0 for training, or use the v0.4.0-alpha.0 binaries, e.g. pip install deepspeech==0.4.0a0
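The logic behind this advice can be sketched in a few lines of Python. This is only an illustration, not part of DeepSpeech: `graph_compatible` is a hypothetical helper, and its crude parser deliberately ignores pre-release suffixes like `a0`.

```python
import re

def _release_key(version):
    # Crude parse: "0.4.0a0" -> (0, 4, 0). Pre-release suffixes such as
    # "a0" are ignored in this sketch.
    return tuple(int(x) for x in re.findall(r"\d+", version)[:3])

def graph_compatible(train_version, runtime_version):
    # A graph exported by newer training code may contain ops (here, the
    # Softmax variant) that older native-client binaries never compiled in,
    # so the inference runtime must be at least as new as the training code.
    return _release_key(runtime_version) >= _release_key(train_version)

print(graph_compatible("0.4.0a0", "0.3.0"))    # False: 0.3.0 binaries lack the op
print(graph_compatible("0.4.0a0", "0.4.0a0"))  # True
```

The reverse direction is fine: a newer runtime can usually load an older graph, which is why upgrading the deepspeech package is the easier of the two fixes.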


(Rpratesh) #3

Hi @lissyx,

I've trained the model with the "transfer-learning" branch, and when trying to run inference with the "v0.3.0" or "v0.4.0-alpha.0" branch, I get the same error as above:

Invalid argument: No OpKernel was registered to support Op 'Softmax' with these attrs. Registered devices: [CPU,GPU], Registered kernels:

 [[{{node Softmax}} = Softmax[T=DT_FLOAT](raw_logits)]]

Could not create model.

But I don't get any such error if I run inference with the same "transfer-learning" branch.

So, how can models trained with the "transfer-learning" branch be deployed using the executables of the "v0.3.0" or "v0.4.0-alpha.0" branches?


(Reuben Morais) #4

The transfer-learning branch is an unsupported work in progress; there are no guarantees that inference will even work.


(Rpratesh) #5

OK... though I was a bit specific in the question above, what I wanted to know is whether there's a way
to train a model on, say, master and run it using the "v0.3.0" or "v0.4.0-alpha.0" branches.
Essentially, the question is:
"Is there any workaround for the above Softmax-related error when training and running across different branches?"

Thanks.


(Lissyx) #6

Softmax should be in the latest 0.4.0-alpha.0 or master binaries. I have not looked at the changes to the model on that branch; maybe it's just not compatible, and the branch does not contain the needed changes to the native client? cc @josh_meyer
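One quick way to check whether a frozen graph even mentions an op before pointing an older binary at it is to scan the serialized GraphDef for the op name. This is a heuristic sketch, not a DeepSpeech or TensorFlow tool: op type names happen to be stored as plain ASCII strings inside the protobuf, so a raw byte scan is cheap and needs no TensorFlow install, at the cost of possible false positives.

```python
def graph_mentions_op(graph, op_name):
    # Heuristic: op type names ("Softmax", "Minimum", ...) are serialized as
    # plain ASCII strings inside a frozen GraphDef, so a raw byte scan can
    # flag ops that an older native client may not have registered.
    # False positives are possible (e.g. a node merely *named* "Softmax").
    if isinstance(graph, (bytes, bytearray)):
        data = bytes(graph)
    else:
        with open(graph, "rb") as f:  # e.g. "./traincmd/output_graph.pb"
            data = f.read()
    return op_name.encode("ascii") in data

# In-memory stand-in for a real output_graph.pb:
fake_graph = b"node { name: 'logits' op: 'Softmax' }"
print(graph_mentions_op(fake_graph, "Softmax"))    # True
print(graph_mentions_op(fake_graph, "BlockLSTM"))  # False
```

If the scan finds an op that the target binary's error message lists as unregistered, re-exporting with the matching release tag (or upgrading the binaries) is the fix, as discussed above.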


(Iyer Sujatha94) #7

@lissyx I have trained my own model and am running into the same error. I am currently using DeepSpeech v0.3.0.


(Lissyx) #8

And which version did you train with? Again, this is something for @josh_meyer, not me.


(Rpratesh) #9

Great!
The Softmax op is present in 0.4.0-alpha.0, so the model trained with the transfer-learning branch also works with the 0.4.0-alpha.0 binaries.
The issue only occurs when trying to run with the 0.3.0 binaries.


(Lissyx) #10

That's exactly what I said earlier, except that in your first message you said you had tested all versions and it was never working.

As for Softmax, if you confirm it works with 0.4.0-alpha.0 as it should, then I don't see what needs to be done here: that's what we expect.


(Murugan R) #11

@lissyx @rpratesh In this discussion, are you saying that "adding a softmax function/layer will make transfer learning work better"?

Is this correct or not?
Thank you


(Lissyx) #12

No, I don’t see what you are referring to.


(Murugan R) #13

Just asking for confirmation: "Should I train a model with 0.4.1-alpha.0 or master, or with 0.3.0?"
Thank you. :slightly_smiling_face: