Model doesn't recognize my recorded audio

Hi.
I used DeepSpeech v0.9.3 to train my model. I get good results in the training phase, but when I use the model on audio I recorded myself, I get bad results (wrong inferences).
Example:
(training phase):

WER: 0.000000, CER: 0.000000, loss: 3.205268

  • wav: file://deepspeech-training-data-stt/clips/bok-25-097-b.wav
  • src: “não tens nada em vista nada tanto melhor tanto melhor”
  • res: “não tens nada em vista nada tanto melhor tanto melhor”

(my own audio, recorded with a microphone):
Audio with transcript: "olá bom dia como você esta"
Loading model from file deepspeech-data-pt-br/exported-model/output_graph.pbmm
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
Loaded model in 0.00965s.
Running inference.
dar aalu daaaaaaaaaaaaaaaaaaaa
Inference took 10.137s for 13.955s audio file.

My dataset contains 16k audio files. My training command is:
python3 -u DeepSpeech.py \
--train_files deepspeech-training/studio-dataset/clips/train.csv \
--dev_files deepspeech-training/studio-dataset/clips/dev.csv \
--test_files deepspeech-training/studio-dataset/clips/test.csv \
--feature_cache ./feature.cache \
--automatic_mixed_precision \
--alphabet_config_path data/new_portuguese_alphabet.txt \
--load_checkpoint_dir deepspeech-training/modelos/19h-6min/training05_200122-210122/checkpoints \
--save_checkpoint_dir deepspeech-training/modelos/19h-6min/training06_210122/checkpoints \
--export_dir deepspeech-training-data/modelos/19h-6min/training06_210122/exported-model \
--epochs 100 \
--augment pitch[p=0.3,pitch=1~0.2] \
--train_batch_size 64 \
--test_batch_size 64 \
--n_hidden 600 \
--learning_rate 0.0001 \
--dropout_rate 0.3
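One thing I have already considered checking (this snippet is mine, not part of the training pipeline): DeepSpeech 0.9.3 expects 16 kHz, mono, 16-bit PCM WAV input, and a mismatch between the training clips and my microphone recordings could explain garbage output. A minimal sketch, using only the Python standard library `wave` module, to verify a recording before running inference:

```python
import wave

def check_audio_format(path):
    """Report whether a WAV file matches DeepSpeech's expected input:
    16 kHz sample rate, mono, 16-bit PCM."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        channels = w.getnchannels()
        width = w.getsampwidth()  # bytes per sample; 2 == 16-bit
    ok = rate == 16000 and channels == 1 and width == 2
    print(f"{path}: {rate} Hz, {channels} channel(s), {width * 8}-bit "
          f"-> {'OK' if ok else 'needs resampling'}")
    return ok
```

If this reports a mismatch, the recording would need to be resampled (e.g. with SoX or ffmpeg) before inference.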

What is missing for my model to recognize other audio? Please guide me toward a solution.