(mark3) root@computer:/home/computer/Desktop/mark3# ls
customModels DeepSpeech indianModel kenlm mfit-models namesModel tensorflow trieModel
(mark3) root@computer:/home/computer/Desktop/mark3/mfit-models# ../kenlm/build/bin/lmplz --discount_fallback --text mirrorfit.txt --arpa words.arpa --o 3
=== 1/5 Counting and sorting n-grams ===
Reading /home/computer/Desktop/mark3/mfit-models/mirrorfit.txt
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Unigram tokens 200003 types 200006
=== 2/5 Calculating and sorting adjusted counts ===
Chain sizes: 1:2400072 2:9327010816 3:17488144384
Substituting fallback discounts for order 0: D1=0.5 D2=1 D3+=1.5
Substituting fallback discounts for order 1: D1=0.5 D2=1 D3+=1.5
Substituting fallback discounts for order 2: D1=0.5 D2=1 D3+=1.5
Statistics:
1 200006 D1=0.5 D2=1 D3+=1.5
2 400006 D1=0.5 D2=1 D3+=1.5
3 200003 D1=0.5 D2=1 D3+=1.5
Memory estimate for binary LM:
type kB
probing 17969 assuming -p 1.5
probing 21094 assuming -r models -p 1.5
trie 10718 without quantization
trie 7864 assuming -q 8 -b 8 quantization
trie 10132 assuming -a 22 array pointer compression
trie 7278 assuming -a 22 -q 8 -b 8 array pointer compression and quantization
=== 3/5 Calculating and sorting initial probabilities ===
Chain sizes: 1:2400072 2:6400096 3:4000060
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
####################################################################################################
=== 4/5 Calculating and writing order-interpolated probabilities ===
Chain sizes: 1:2400072 2:6400096 3:4000060
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
####################################################################################################
=== 5/5 Writing ARPA model ===
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Name:lmplz VmPeak:26372372 kB VmRSS:22700 kB RSSMax:6075336 kB user:0.876577 sys:1.34088 CPU:2.21748 real:2.16143
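The n-gram counts printed under "Statistics:" above (200006 unigrams, 400006 bigrams, 200003 trigrams) reappear in the `\data\` header of the generated `words.arpa`, so a quick sanity check before binarising is to parse that header and compare. A minimal sketch (the sample header below is reconstructed from the counts in the log, not copied from the actual file):

```python
# Parse the \data\ section of an ARPA file to recover per-order n-gram counts.
def arpa_counts(lines):
    counts = {}
    in_data = False
    for line in lines:
        line = line.strip()
        if line == "\\data\\":
            in_data = True
            continue
        if in_data:
            if line.startswith("ngram "):
                # lines look like "ngram 1=200006"
                order, n = line[len("ngram "):].split("=")
                counts[int(order)] = int(n)
            elif line:
                # first non-empty, non-ngram line ends the \data\ section
                break
    return counts

# Header matching the statistics lmplz printed above:
sample = """\\data\\
ngram 1=200006
ngram 2=400006
ngram 3=200003

\\1-grams:
""".splitlines()

print(arpa_counts(sample))  # {1: 200006, 2: 400006, 3: 200003}
```

In practice you would pass the first few lines of `words.arpa` instead of the sample header and assert the counts match the lmplz log.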
(mark3) root@computer:/home/computer/Desktop/mark3/mfit-models# ../kenlm/build/bin/build_binary -T -s words.arpa lm.binary
(Note: `build_binary -T` expects a temp-file directory argument; the invocation above worked here, but if it errors for you, try `build_binary -s trie words.arpa lm.binary` or drop `-T`.)
Reading words.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
SUCCESS
The command below doesn't print any output; it just creates the trie file:
(mark3) root@computer:/home/computer/Desktop/mark3/mfit-models# ../DeepSpeech/generate_trie alphabet.txt lm.binary trie
DeepSpeech directory:
(mark3) root@computer:/home/computer/Desktop/mark3/DeepSpeech# ls
bazel.patch DeepSpeech.py libdeepspeech.so requirements.txt
bin doc LICENSE runNameTrieModel.sh
build-python-wheel.yml-DISABLED_ENABLE_ME_TO_REBUILD_DURING_PR Dockerfile myDataset stats.py
CODE_OF_CONDUCT.md evaluate.py native_client SUPPORT.rst
CONTRIBUTING.rst evaluate_tflite.py native_client.amd64.cpu.linux.tar.xz taskcluster
data examples __pycache__ transcribe.py
deepspeech generate_trie README.mozilla util
deepspeech-0.6.1-checkpoint GRAPH_VERSION README.rst VERSION
deepspeech-0.6.1-checkpoint.tar.gz images RELEASE.rst
deepspeech.h ISSUE_TEMPLATE.md requirements_eval_tflite.txt
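For completeness, here is a sketch of how the `lm.binary` and `trie` built above would be attached to a DeepSpeech 0.6.x model from Python. The paths, beam width, and alpha/beta values are assumptions for illustration (tune alpha/beta on a dev set), not values from the session:

```python
# Hypothetical loader for the DeepSpeech 0.6.x Python API
# (pip install deepspeech==0.6.1; paths below are placeholders).
BEAM_WIDTH = 500
LM_ALPHA = 0.75   # language-model weight (assumed, tune on a dev set)
LM_BETA = 1.85    # word-insertion bonus (assumed, tune on a dev set)

def load_model(model_pb, lm_binary, trie):
    # Imported lazily so the module loads even without the native client.
    from deepspeech import Model
    ds = Model(model_pb, BEAM_WIDTH)
    # Attach the KenLM binary LM and the trie generated by generate_trie.
    ds.enableDecoderWithLM(lm_binary, trie, LM_ALPHA, LM_BETA)
    return ds
```

With the layout shown above this would be called as `load_model("output_graph.pb", "mfit-models/lm.binary", "mfit-models/trie")`, where `output_graph.pb` stands in for whichever exported graph you use.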
Please let me know if I forgot to mention anything.