I am currently running DeepSpeech in a virtual environment on Linux and can get decently accurate transcriptions of my sound files (though some technical words aren't being picked up). However, right now I can only read the transcription from the print statements on the command line. Is there a way to write the results of:
deepspeech models/output_graph.pb my_audio_file.wav models/alphabet.txt models/lm.binary models/trie
to a text file? Ideally, I'd like to append the outputs from a series of chronological short wav files into a single file, so the result is one long transcript of the larger wav file the short clips were cut from.
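For context, here is a minimal sketch of the kind of thing I have in mind, assuming the short clips live in a hypothetical chunks/ directory with zero-padded names (chunk_001.wav, chunk_002.wav, ...) so that a lexicographic glob keeps them in chronological order:

```shell
#!/bin/sh
# Sketch only: run DeepSpeech on each short wav in order and append
# each result to one transcript file. The chunks/ directory and file
# naming are assumptions; the model paths mirror the command above.
: > transcript.txt                # start with an empty transcript
for f in chunks/*.wav; do         # glob sorts lexicographically
    # 2>/dev/null drops any log noise printed to stderr, so only the
    # transcription on stdout lands in the file
    deepspeech models/output_graph.pb "$f" \
        models/alphabet.txt models/lm.binary models/trie \
        >> transcript.txt 2>/dev/null
done
```

Note the `>>`, which appends rather than overwrites; I could also pipe through `tee -a transcript.txt` if I wanted to watch the output scroll by at the same time.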
I figured this would have already been answered, but I can't find an answer in the Mozilla forums or in the feature requests/issues on GitHub.