I get an error while generating my own scorer file

Hello. I trained my own model and exported my .pb and .pbmm model files. Now I want to generate the scorer file, but I get the following error. How can I generate the scorer file?

python3 generate_lm.py --input_txt /home/bcode/words.txt --output_dir ~/deepspeech_model --top_k 500000 --kenlm_bins ~/kenlm/build/bin/ --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|1" --binary_a_bits 255 --binary_q_bits 8 --binary_type trie

Converting to lowercase and counting word occurrences …

Saving top 500000 words …

Calculating word statistics …
Your text file has 36305 words in total
It has 4398 unique words
Your top-500000 words are 100.0000 percent of all words
Your most common word "bu" occurred 1003 times
The least common word in your top-k is "yankılara" with 1 times
The first word with 2 occurrences is "suçlamaktan" at place 4342

Creating ARPA file …
=== 1/5 Counting and sorting n-grams ===
Reading /home/bcode/deepspeech_model/lower.txt.gz

Unigram tokens 36305 types 4401
=== 2/5 Calculating and sorting adjusted counts ===
Chain sizes: 1:52812 2:681589184 3:1277979776 4:2044767488 5:2981952768
/home/bcode/kenlm/lm/builder/adjust_counts.cc:60 in void lm::builder::{anonymous}::StatCollector::CalculateDiscounts(const lm::builder::DiscountConfig&) threw BadDiscountException because `discounts_[i].amount[j] < 0.0 || discounts_[i].amount[j] > j'.
ERROR: 2-gram discount out of range for adjusted count 2: -0.09219742. This means modified Kneser-Ney smoothing thinks something is weird about your data. To override this error for e.g. a class-based model, rerun with --discount_fallback

Traceback (most recent call last):
File "generate_lm.py", line 213, in <module>
File "generate_lm.py", line 204, in main
build_lm(args, data_lower, vocab_str)
File "generate_lm.py", line 100, in build_lm
File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/home/bcode/kenlm/build/bin/lmplz', '--order', '5', '--temp_prefix', '/home/bcode/deepspeech_model', '--memory', '85%', '--text', '/home/bcode/deepspeech_model/lower.txt.gz', '--arpa', '/home/bcode/deepspeech_model/lm.arpa', '--prune', '0', '0', '1']' died with <Signals.SIGABRT: 6>.
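The BadDiscountException above is raised while KenLM estimates the modified Kneser-Ney discounts from your corpus's counts-of-counts. Below is a minimal sketch of the Chen–Goodman discount formula that this estimate is based on, with made-up illustrative numbers (not the actual counts from this corpus), showing how a small or skewed count distribution can push a discount out of its valid range [0, j]:

```python
# Modified Kneser-Ney discount estimation (Chen & Goodman closed form).
# n1..n4 are counts-of-counts: the number of n-grams of a given order
# whose (adjusted) count is exactly 1, 2, 3, 4. All values here are
# hypothetical, chosen only to illustrate the failure mode.

def kn_discounts(n1, n2, n3, n4):
    """Return the three modified-KN discounts (D1, D2, D3+)."""
    y = n1 / (n1 + 2.0 * n2)
    d1 = 1.0 - 2.0 * y * n2 / n1
    d2 = 2.0 - 3.0 * y * n3 / n2
    d3 = 3.0 - 4.0 * y * n4 / n3
    return d1, d2, d3

# A healthy distribution: each discount D_j stays within [0, j].
print(kn_discounts(1000, 400, 200, 100))

# A skewed distribution (count-3 n-grams outnumber count-2 ones):
# D2 goes negative, which is exactly the "discount out of range" case.
print(kn_discounts(100, 10, 100, 5)[1])
```

With only ~36k words and ~4.4k unique words, the corpus is small enough that such skewed counts are plausible. The error message itself names the workaround: rerun generate_lm.py passing the `--discount_fallback` flag through to lmplz (or train on more text so the count statistics become well-behaved).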

@askinucuncu Please read my reply on GitHub, and look at your errors.