```
# This shape is read by the native_client in DS_CreateModel to know the
# value of n_steps, n_context and n_input. Make sure you update the code
# there if this shape is changed.
```
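For reference, here is a minimal tfjs sketch of what that shape looks like, assuming DeepSpeech's default values (n_input = 26 MFCC features, n_context = 9); the fixed n_steps of 16 is illustrative only, the exported graph may differ:

```typescript
import * as tf from '@tensorflow/tfjs';

// Illustrative values only: DeepSpeech defaults to n_input = 26 MFCC
// features and n_context = 9 frames of context on each side, i.e. a
// window of 2 * 9 + 1 = 19 frames per step.
const nInput = 26;
const nContext = 9;
const nSteps = 16; // assumption: number of windows fed per call

// The shape the quoted comment refers to:
// [batch, n_steps, 2 * n_context + 1, n_input]
const input = tf.zeros([1, nSteps, 2 * nContext + 1, nInput]);
console.log(input.shape); // [1, 16, 19, 26]
```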
client.cc calls DS_CreateModel. When/how is client.cc called? Could you please point me to the code?
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
4
Do you know about git grep? Reading client.cc, you would see it's the main C++ deepspeech binary, which is one example of a caller. Have a look at the bindings as well; those are other callers.
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
6
Basically, you need to re-implement what is exposed from deepspeech.h and implemented in deepspeech.cc. Details may vary depending on how tf.js works, and maybe you would rather look at tflitemodelstate.cc.
It also depends on exactly what you want to do: just run the model, or provide the same API?
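If just running the model is enough, a minimal sketch in tfjs might look like the following; the model.json URL is hypothetical (you would have to produce it yourself by converting the exported inference graph), and the node names input_node/logits come from create_inference_graph:

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical URL: you have to produce model.json yourself by running
// the converter on DeepSpeech's exported inference graph.
const MODEL_URL = '/models/deepspeech/model.json';

async function runOnce(featureWindows: tf.Tensor4D): Promise<tf.Tensor> {
  const model = await tf.loadGraphModel(MODEL_URL);

  // executeAsync is the closest tfjs analogue to Session::Run(): feed a
  // map of named input tensors, fetch named output nodes. NB: the real
  // exported graph also expects input_lengths and the RNN state tensors
  // (previous_state_c/h); they are omitted here for brevity.
  const logits = await model.executeAsync(
    { input_node: featureWindows }, // feeds, keyed by graph input names
    'logits'                        // fetch: the graph's output node
  );
  return logits as tf.Tensor;
}
```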
I want my model to be able to train in-browser, and then I should be able to use the model I get.
Yes, I am looking at that file.
I am not clear on the different folders in native_client. The java/python/dotnet folders: are those the different backends that DeepSpeech can be run from?
Versus all the files directly in the native_client folder (tfmodelstate, modelstate, deepspeech.cc, etc.): are those common to all the java/dotnet/python binaries?
If you can point me to a resource/file explaining the design of native_client, I should be good with that as well.
Sorry for these basic questions; I am not very comfortable with C++.
I posted a similar question on GitHub issues as well. This is my current understanding; it would be helpful if you could point out any flaws.
OK, client.cc calls DS_CreateModel, which initialises a model with a TFModelState as input. So, while running the model in tfjs, the input (say, X) to model.predict should be of the form defined in TFModelState. Is that correct?
If yes, I have another question:
I was passing an audio file through model.predict, which obviously is not of the form X. Why did that mean I need to convert the TFModelState to tfjs (as you stated earlier in the thread)? Shouldn't it just mean that the audio file I am passing should be converted to the form X?
train.py (create_inference_graph) has already defined what the X format means. Why was my input not directly converted into that format? The create_inference_graph is not converted to tfjs through tfjs_converter, is it? So basically I need to write code in tfjs to convert this input to the form X?
And how would I convey in the tfjs code that model.predict means the tfjs equivalent of Session::Run()?
PS: I can take this to the Discourse forum if this is not the right place to discuss it.
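On the last two points: yes, the feature pipeline (audio samples → MFCC windows) sits outside the graph that the converter sees, so it has to be re-implemented in JS; and model.execute/executeAsync on a loaded graph model is the tfjs counterpart of Session::Run(). Here is a rough sketch of the conversion to the form X, with the MFCC step left as a hypothetical computeMfcc helper you would have to write or borrow:

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical helper: turns 16 kHz PCM samples into one n_input-dim
// MFCC feature vector per frame. native_client does this in C++; tfjs
// does not provide it, so you would implement or port it yourself.
declare function computeMfcc(samples: Float32Array): number[][];

// Raw audio -> the "form X" tensor: [1, n_steps, 2*n_context+1, n_input].
function audioToModelInput(samples: Float32Array, nContext = 9): tf.Tensor4D {
  const frames = computeMfcc(samples); // [n_frames][n_input]
  const nInput = frames[0].length;
  const window = 2 * nContext + 1;

  // Zero-pad so every frame has full left and right context.
  const zero = new Array(nInput).fill(0);
  const padded = [
    ...Array(nContext).fill(zero),
    ...frames,
    ...Array(nContext).fill(zero),
  ];

  // One overlapping window of `window` frames per original frame.
  const windows = frames.map((_, i) => padded.slice(i, i + window));
  return tf.tensor4d([windows]); // batch of 1
}
```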
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
9
Yes.
For building libdeepspeech.so, which is then used by the others.
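To make that layering concrete from the JavaScript side, usage of the Node binding looks roughly like this (assuming the 0.7-era deepspeech npm API; constructor arguments changed across releases, and the file paths are placeholders):

```typescript
import * as fs from 'fs';
import DeepSpeech from 'deepspeech';

// The npm package bundles libdeepspeech.so: new Model(...) ends up in
// DS_CreateModel and stt(...) in DS_SpeechToText, the same C API that
// client.cc calls directly.
const model = new DeepSpeech.Model('deepspeech-0.7.0-models.pbmm');
const audio = fs.readFileSync('audio.raw'); // raw 16-bit, 16 kHz mono PCM
console.log(model.stt(audio));
```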