Can we use DeepSpeech for Vietnamese Speech To Text?


(Lissyx) #21

You need to find where it comes from: you have some transcription that contains characters not in the alphabet :). There's nothing more we can do for now.


(Phanthanhlong7695) #22

Yeah, thank you for your help.
For example, on Windows: mà
and on Linux: ma`
Maybe an encoding error :smiley:
I'm trying to fix this. Thanks for your support.


(Lissyx) #23

The best I can suggest for this is simply binary search: open your train CSV (if it happens during training), remove the first half of the lines, try to re-run. If it works, then the first half contains the offending character. If not, it’s in the second half, and you restart the process by removing half of the second half, until you get ONE line :slight_smile:
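The halving step above can be automated with a small helper. This is only a sketch: it assumes the usual DeepSpeech CSV layout (one header line, then one sample per line), and the function name is hypothetical.

```python
# Sketch of the halving step for the binary search described above.
# Assumes a DeepSpeech-style CSV: one header line, then one sample per line.
import csv

def write_half(src, dst, first_half=True):
    """Copy the header plus one half of the sample lines from src to dst."""
    with open(src, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))
    header, samples = rows[0], rows[1:]
    mid = len(samples) // 2
    kept = samples[:mid] if first_half else samples[mid:]
    with open(dst, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(kept)
    return len(kept)
```

Re-run training on the halved file; whichever half reproduces the error gets halved again, until a single line remains.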


(Phanthanhlong7695) #24

I know, but I cannot create the trie file, so how could I start training? I am still following the instructions. :slight_smile:


(Lissyx) #25

Well, you don’t need the trie file for training. Worst case, you can just apply the same process with generate_trie.


(Phanthanhlong7695) #26

That means I can delete this:
--lm_trie_path /home/nvidia/DeepSpeech/data/alfred/trie
Right?


(Lissyx) #27

What is this? Where is it coming from?


(Phanthanhlong7695) #28

It comes from here:
DEEPSPEECH/bin/run-alfred.sh


(Lissyx) #29

I'm a bit lost now as to the status of your system. When do you get the invalid label error? At training time, or during trie creation? Why are you trying to use a trie made for French on Vietnamese data?


(Phanthanhlong7695) #30

The invalid label error happens during trie creation.


(Phanthanhlong7695) #31

I did not use the French trie. How do I create a trie for Vietnamese?


(Lissyx) #32

We are circling here. You need to create it. Vincent documented it in his thread. If you are hitting the invalid label during its creation, you need to find what is missing in your alphabet.


(Phanthanhlong7695) #33

I am trying :frowning:


(Vincent Foucault) #34

Hi, @phanthanhlong7695,

To create the trie file, you need a few parts:

  • alphabet.txt
  • lm.binary
  • vocabulary.txt

invalid label during trie creation: it seems that you have unknown characters in your vocabulary.
A "label" is a character (a letter, or a punctuation mark).
Check that all characters in your vocabulary are present in the alphabet.

If not, correct it and restart the whole process.
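The check above (every character in the vocabulary must appear in the alphabet) can be scripted instead of done by eye. A minimal sketch, assuming alphabet.txt holds one character per line with `#`-prefixed comment lines, as in DeepSpeech's sample alphabets; the function name is hypothetical:

```python
# Sketch: list characters in a vocabulary file that are missing from the
# alphabet. Assumes alphabet.txt has one character per line; lines starting
# with '#' are treated as comments.
def missing_characters(vocab_path, alphabet_path):
    with open(alphabet_path, encoding="utf-8") as f:
        alphabet = {line.rstrip("\n") for line in f
                    if line.rstrip("\n") and not line.startswith("#")}
    with open(vocab_path, encoding="utf-8") as f:
        text = f.read()
    # Newlines separate vocabulary entries, so they are not labels.
    return sorted({ch for ch in text if ch != "\n" and ch not in alphabet})
```

Any character this reports is a candidate for the "invalid label" error during trie creation.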


(Lissyx) #35

Well, I cannot do it for you, and I have a lot of other work to do. I gave you a process to find what is broken; apply it.


(Jageshmaharjan) #36

Hi @lissyx and @elpimous_robot. I did everything with an English corpus and it all went well, from creating the language model with the KenLM tool to generating the trie file with generate_trie. My alphabet was English, and it later trained successfully with DeepSpeech.py.

But the story is different for Chinese characters. I have a Chinese corpus in my vocabulary.txt file, and I am using the KenLM tool to generate chinese_lm.binary, but I am stuck at this point. Is there anything I need to do about Unicode?
This is my command to generate the arpa file with KenLM:

/bin/lmplz -o 3 < ~/Desktop/zh_deepspeech_small/vocabulary.txt > words.arpa --discount_fallback 1

And I got this error:

/home/shenzhen/PycharmProjects/DeepSpeech/kenlm/util/scoped.cc:20 in void* util::{anonymous}::InspectAddr(void*, std::size_t, const char*) threw MallocException because `!addr && requested’.
Cannot allocate memory for 5104689640 bytes in malloc
Aborted (core dumped)

My vocabulary file consists of Chinese characters (not pinyin), e.g.:

七十 年代 末 我 外出 求学 母亲 叮咛 我 吃饭 要 细嚼慢咽 学习 要 深 钻 细 研
陈云 同志 同时 要求 干部 认真学习 业务 精通 业务 向 一切 业务 内 行人 学习


(Vincent Foucault) #37

Hello,
Look at the KenLM documentation: lmplz has an option to limit the RAM used for building (if I remember correctly, `-S` / `--memory`, e.g. `-S 4G`)…