Does anyone get the same result as me?
Can you elaborate a little bit on your process?
I ran the following command using the pre-trained model for each of the 1680 files in the TIMIT test partition:
$ python DeepSpeech/native_client/python/client.py DeepSpeech/models/output_graph.pb <audio_file> DeepSpeech/models/alphabet.txt
This resulted in a micro-averaged WER of 31.7% (3,817 substitutions, 362 insertions, and 424 deletions for 14,518 reference words).
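In case it helps anyone reproduce this kind of number: a minimal sketch of how a micro-averaged WER can be computed once you have collected (reference, hypothesis) transcript pairs, e.g. by running the `client.py` command above on each file and capturing stdout. This is not the scoring script I used, just an illustration; the function names (`align_counts`, `micro_wer`) are my own. Micro-averaging means pooling the substitution/insertion/deletion counts over the whole test set before dividing by the total number of reference words, rather than averaging per-file WERs.

```python
def align_counts(ref_words, hyp_words):
    """Levenshtein word alignment; returns (substitutions, insertions, deletions)."""
    H = len(hyp_words)
    # prev[j] = (edit_cost, subs, ins, dels) for empty reference vs hyp_words[:j]
    prev = [(j, 0, j, 0) for j in range(H + 1)]
    for i in range(1, len(ref_words) + 1):
        # First column: ref_words[:i] vs empty hypothesis -> i deletions
        cur = [(i, 0, 0, i)]
        for j in range(1, H + 1):
            if ref_words[i - 1] == hyp_words[j - 1]:
                best = prev[j - 1]                      # match, no cost
            else:
                c, s, ins, d = prev[j - 1]
                best = (c + 1, s + 1, ins, d)           # substitution
            c, s, ins, d = prev[j]
            if c + 1 < best[0]:
                best = (c + 1, s, ins, d + 1)           # deletion
            c, s, ins, d = cur[j - 1]
            if c + 1 < best[0]:
                best = (c + 1, s, ins + 1, d)           # insertion
            cur.append(best)
        prev = cur
    _, subs, ins, dels = prev[H]
    return subs, ins, dels

def micro_wer(pairs):
    """pairs: iterable of (reference, hypothesis) strings.
    Returns (wer, subs, ins, dels, ref_word_count), pooled over all pairs."""
    S = I = D = N = 0
    for ref, hyp in pairs:
        s, i, d = align_counts(ref.split(), hyp.split())
        S += s
        I += i
        D += d
        N += len(ref.split())
    return (S + I + D) / N, S, I, D, N
```

Usage would be something like `micro_wer([(ref_text, decoded_text) for ...])` over all 1,680 files; note that different equal-cost alignments can trade a substitution for an insertion+deletion pair, so the S/I/D breakdown (though not the total error count) can differ slightly from tools like NIST's sclite.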
Best way to get a baseline WER with the pre-trained model on my own test set