Inference result contains spaces after adding scorer

Hi, I am facing an issue: everything works well when I use just the .pbmm model for inference, but after adding a scorer, there are spaces between every Chinese character and the result becomes very weird.
Before using a scorer model:
root@32884f327085:/DeepSpeech# deepspeech --model model.pbmm --audio 1690041315330513107_aaaf20a46fdf11ea_C_12146_20200414_090204.wav --beam_width 3
Loading model from file ./model.pbmm
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
2021-02-23 06:39:29.321567: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Loaded model in 0.0526s.
Running inference.
呃我家的电视不能看了

But after adding a scorer model, the result becomes:
root@32884f327085:/DeepSpeech# deepspeech --model model.pbmm --audio 1690041315330513107_aaaf20a46fdf11ea_C_12146_20200414_090204.wav --beam_width 3 --scorer word_noprune_alphabet.scorer --lm_alpha 0.8 --lm_beta 100
Loading model from file ./model.pbmm
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
2021-02-23 06:38:24.955914: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Loaded model in 0.0527s.
Loading scorer from files ./word_noprune_alphabet.scorer
Loaded scorer in 0.0455s.
Running inference.
呃 我 这 家 的 电 视
Inference took 15.492s for 18.120s audio file.

And if we use a smaller lm_beta, the transcript comes back empty:
root@32884f327085:/DeepSpeech# deepspeech --model model.pbmm --audio 1690041315330513107_aaaf20a46fdf11ea_C_12146_20200414_090204.wav --beam_width 3 --scorer noprune_alphabet.scorer --lm_alpha 0.8 --lm_beta 10
Loading model from file ./model.pbmm
TensorFlow: v2.3.0-6-g23ad988
DeepSpeech: v0.9.3-0-gf2e9c85
2021-02-23 07:00:54.037949: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Loaded model in 0.104s.
Loading scorer from files ./noprune_alphabet.scorer
Loaded scorer in 0.0519s.
Running inference.

Inference took 15.654s for 18.120s audio file.
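For context on why lm_beta changes the segmentation so drastically: in DeepSpeech's CTC beam search, each beam is ranked roughly by the acoustic log-probability plus lm_alpha times the language-model log-probability plus lm_beta times the number of words in the hypothesis. A minimal sketch of that ranking (all numbers below are made up for illustration, not taken from this run):

```python
def beam_score(acoustic_logp: float, lm_logp: float,
               word_count: int, lm_alpha: float, lm_beta: float) -> float:
    """Simplified beam-ranking score used by the CTC decoder:
    acoustic log-prob + alpha * LM log-prob + beta * word-insertion bonus."""
    return acoustic_logp + lm_alpha * lm_logp + lm_beta * word_count

# Hypothetical hypotheses for the same audio: one segmented into 7
# single-character "words", one into 3 multi-character words.
many_words = beam_score(acoustic_logp=-20.0, lm_logp=-30.0,
                        word_count=7, lm_alpha=0.8, lm_beta=100.0)
few_words = beam_score(acoustic_logp=-18.0, lm_logp=-25.0,
                       word_count=3, lm_alpha=0.8, lm_beta=100.0)

# With lm_beta=100, every extra word boundary adds +100 to the score,
# so the decoder strongly prefers splitting into as many "words"
# (here, single characters) as possible.
print(many_words, few_words)
```

With lm_beta=100 the many-word hypothesis wins by a wide margin, which would match the spaced-out single characters above; if the scorer's vocabulary is character-based, each character counts as a word and gets that bonus.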

I would be very thankful if anyone can give some advice!

Are you using a Chinese-trained model? Self-trained or our model?