I’m trying to understand the DeepSpeech code completely. Why do you not freeze the graph after saving the checkpoints in `do_single_file_inference`?
That function (and its corresponding `--one_shot_infer` flag) is meant to quickly test a checkpoint on an audio file. If you have a frozen model, you can just use one of the clients.
It also doubles as documentation for how to use a DeepSpeech checkpoint directly, e.g. if you want to customize things for an experiment without having to write C++.
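For reference, a typical invocation looks roughly like this (a sketch, not copied from the docs; the exact paths and flag set depend on your checkout and training run):

```shell
# Hypothetical example: run one-shot inference on a single WAV file
# using an existing checkpoint directory, instead of a frozen graph.
python DeepSpeech.py \
  --checkpoint_dir /path/to/checkpoints \
  --one_shot_infer /path/to/audio.wav
```

This loads the latest checkpoint from `--checkpoint_dir`, runs the acoustic model on the given file, and prints the decoded transcript, which makes it handy for sanity-checking a training run before exporting.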
Gotcha. Thank you, that helps a lot.