Problems training a Chinese model

I trained the model using a Chinese speech data set. The training process doesn't take long (less than 3 hours). However, it takes a very long time to test and export the model (more than 3 days). Is this caused by the large alphabet size of Chinese, or by other factors?
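One rough way to see why alphabet size matters at test time: CTC beam-search decoding expands on the order of beam_width × alphabet_size candidate extensions per audio frame, so a Hanzi alphabet multiplies the decoding work compared to an English one. A back-of-envelope sketch (all numbers below are illustrative assumptions, not measured values):

```python
# Rough candidate-expansion count per audio frame in CTC beam search.
# beam_width and the alphabet sizes are assumed example values.
beam_width = 1024
alphabets = {
    "English": 29,     # a-z, space, apostrophe, CTC blank
    "Chinese": 5000,   # rough Hanzi vocabulary size
}

for name, size in alphabets.items():
    candidates = beam_width * size
    print(f"{name}: ~{candidates} candidate expansions per frame")
```

This ignores pruning and language-model scoring, so real decoders do less work than the raw product suggests, but the scaling with alphabet size is the point.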

I also have this problem. I think it may be because of the large alphabet size and the long audio clips.

Are you also training the Chinese model?

Yes, I trained a Chinese model, using Hanzi as the alphabet.

Do you think that only the testing process takes such a long time?

The training process doesn't seem to take much time.

The loss seems big. Should it have converged by now?

It seems that the loss converges at about 230. The loss is too high, and the results are very bad. No matter what audio is given, the model outputs the same very short result, like “额”.

@jackhuang Just one reminder: please paste text for this kind of output. Images are not indexed, take more time to load, and are always complicated to search in :slight_smile:

Can you document your training parameters? Amount of audio, width of the network, etc.? We have no feedback on languages like Chinese, so it's likely you will have to proceed by trial and error :slight_smile:


Yes, it's quite big. Have you tried training with a batch of short utterances first? Also, how many characters are in your alphabet?
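To answer the alphabet-size question, you can just count the non-empty lines in the alphabet file, since the usual format is one symbol per line. A minimal sketch, assuming that format; the sample content below is a tiny stand-in for illustration, so point `path` at your real alphabet file:

```python
# Count symbols in a one-symbol-per-line alphabet file.
# The sample content is an assumption for demonstration only.
import tempfile

sample = "a\nb\nc\n的\n一\n是\n"  # tiny stand-in for a real alphabet

with tempfile.NamedTemporaryFile(
    "w", suffix=".txt", encoding="utf-8", delete=False
) as f:
    f.write(sample)
    path = f.name

def count_alphabet(path):
    with open(path, encoding="utf-8") as fh:
        # Strip only the trailing newline, so a single-space
        # "blank" symbol line would still be counted.
        return sum(1 for line in fh if line.rstrip("\n"))

print(count_alphabet(path))
```

A full-Hanzi alphabet typically runs into the thousands of symbols, versus under 30 for English, which is the mismatch being discussed above.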