Everything is working fine; the model has been trained and exported. But the exported output_graph.pb file stays the same size as the DeepSpeech pre-trained model, 188.9 MB.
I don't know whether my training data has been merged into the pre-trained model. I assumed the file size would grow while training on the Common Voice dataset. However, I do see that the step count increased from 467356 in the pre-trained checkpoint to 487573 after the export.
Please clarify.
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
Model size depends on the number of parameters, not on the amount of data. Since you kept the same geometry, this is expected.
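The point above can be illustrated with a back-of-the-envelope calculation: the size of a frozen float32 graph is roughly 4 bytes per trainable parameter, and the parameter count is fixed by the layer geometry. The sketch below assumes a hypothetical stack of fully connected layers, not the exact DeepSpeech 0.5.1 architecture:

```python
def model_size_mb(widths, bytes_per_param=4):
    """Rough float32 size in MB for a stack of fully connected layers.

    `widths` lists the layer widths, input first; each layer contributes
    w_in * w_out weights plus w_out biases. Training on more data changes
    the parameter *values*, never the parameter *count*, so the exported
    file size stays the same.
    """
    params = sum(w_in * w_out + w_out
                 for w_in, w_out in zip(widths, widths[1:]))
    return params * bytes_per_param / 1e6

# Illustrative geometry only (2048-wide hidden layers, as in 0.5.1's
# --n_hidden default); the real model also contains an LSTM layer.
print(model_size_mb([494, 2048, 2048, 2048, 2048, 29]))
```

Fine-tuning with the same geometry therefore produces an export of exactly the same size, only with more training steps recorded in the checkpoint.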
Is there any documentation on how to use the parameters? I see util/flags.py but nothing more detailed.
I would like to fine-tune the checkpoints often.
Any suggestions, please?
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
That depends on the meaning of your question. How to use the parameters is documented in util/flags.py. If you mean "how" in the sense of what values you should select, that depends on your use case …
If you mean something else, please explain what is missing in the current code.
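For reference, a typical fine-tuning run resumes from an existing checkpoint directory and exports a new frozen graph. This is a sketch only; the flag names are taken from util/flags.py in the 0.5.x series, and the paths are placeholders — verify both against your own checkout:

```shell
python DeepSpeech.py \
  --n_hidden 2048 \
  --checkpoint_dir /path/to/deepspeech-0.5.1-checkpoint \
  --train_files /path/to/train.csv \
  --dev_files /path/to/dev.csv \
  --test_files /path/to/test.csv \
  --learning_rate 0.0001 \
  --export_dir /path/to/export
```

Keeping `--n_hidden` at the checkpoint's value is required: the geometry must match the checkpoint you are resuming from, which is also why the exported file size does not change.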
Could you please correct my parameters if I have specified anything wrong here, or point out any parameters I have missed? I have used the same parameters as the DeepSpeech 0.5.1 models.
Note: to understand these parameters in detail, should I study deep learning or machine learning in depth?
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
Yeah, this is not black magic; you need to understand what you are doing …