Hi everyone! I trained the model and got diverging losses:
I Finished training epoch 1842 - loss: 7.289767
I Training epoch 1843…
I Finished training epoch 1843 - loss: 6.417722
I Training epoch 1844…
I Finished training epoch 1844 - loss: 8.443151
I Training epoch 1845…
I Finished training epoch 1845 - loss: 6.085240
I Training epoch 1846…
I Finished training epoch 1846 - loss: 6.817554
I Training epoch 1847…
I Finished training epoch 1847 - loss: 6.163227
I Training epoch 1848…
I Finished training epoch 1848 - loss: 6.673455
I Training epoch 1849…
I Finished training epoch 1849 - loss: 6.459898
I Training epoch 1850…
I Finished training epoch 1850 - loss: 6.594937
I Training epoch 1851…
I Finished training epoch 1851 - loss: 6.687344
I Training epoch 1852…
I Finished training epoch 1852 - loss: 11.897445
I Training epoch 1853…
I Finished training epoch 1853 - loss: 33.757729
I Training epoch 1854…
I Finished training epoch 1854 - loss: 1402.510132
I Training epoch 1855…
I Finished training epoch 1855 - loss: 4485.072754
I Training epoch 1856…
I Finished training epoch 1856 - loss: 2153.560059
I Training epoch 1857…
I Finished training epoch 1857 - loss: 1923.557007
I Training epoch 1858…
I Finished training epoch 1858 - loss: 2507.593750
I Training epoch 1859…
I Finished training epoch 1859 - loss: 1716.422974
I Training epoch 1860…
I Finished training epoch 1860 - loss: 1413.547974
I Training epoch 1861…
I Finished training epoch 1861 - loss: 1260.555908
I Training epoch 1862…
I Finished training epoch 1862 - loss: 1238.079102
I Training epoch 1863…
I Finished training epoch 1863 - loss: 1010.995300
I Training epoch 1864…
I Finished training epoch 1864 - loss: 944.558167

Run parameters:
python -u DeepSpeech.py --noshow_progressbar \
  --train_files /home/user/DeepSpeech/test/train.csv \
  --test_files /home/user/DeepSpeech/test/train.csv \
  --train_batch_size 31 \
  --test_batch_size 30 \
  --n_hidden 200 \
  --epochs 4000 \
  --checkpoint_dir /home/user/DeepSpeech/test/model/ \
  --export_dir /home/user/DeepSpeech/test/model/ \
  --alphabet_config_path /home/user/DeepSpeech/test/alphabet.txt \
  --lm_binary_path /home/user/DeepSpeech/test/lm.binary \
  --lm_trie_path /home/user/DeepSpeech/test/trie
lissyx
((slow to reply) [NOT PROVIDING SUPPORT])
Please copy/paste text content, don’t use screenshots. Also, please share context on your training, there is nothing we can do to help you here.
lissyx
That one defines a model of size 100, not 200, and it's purposely overfitting. It's just there as a basic sanity test to help ensure the training setup is okay.
I'm training my own language model; in our language there are no full-fledged voice recognition systems, and accordingly no datasets.
lissyx
Well, have you followed the documentation? Do you understand what you are doing? You need lots of data, and a proper train / dev / test split of it.
That does not document where train.csv is coming from.
lissyx
Finally. Not a question I could answer until you agreed to be more specific about your context. 32 files for training is far from enough.
Your training setup is likely very inconsistent: no validation set, so it's overfitting. Small dataset, so it's overfitting. Way too many epochs, so it's producing random results.
I can't do divination; if you don't explain your setup, I cannot know what's wrong and I can't help you.
So please divide into train.csv, dev.csv and test.csv.
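A minimal sketch of such a split, assuming the standard DeepSpeech CSV columns (`wav_filename`, `wav_filesize`, `transcript`); the file names, ratios, and sample data here are illustrative, not part of the original thread:

```python
# Hypothetical sketch: shuffle one CSV of audio samples and split it
# roughly 80/10/10 into train.csv, dev.csv and test.csv.
import csv
import random

def split_rows(rows, train_frac=0.8, dev_frac=0.1, seed=42):
    """Shuffle rows deterministically and return (train, dev, test) lists."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_train = int(len(rows) * train_frac)
    n_dev = int(len(rows) * dev_frac)
    return (rows[:n_train],
            rows[n_train:n_train + n_dev],
            rows[n_train + n_dev:])

def write_split(rows, header, path):
    """Write one split back out in the same CSV format."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)

if __name__ == "__main__":
    header = ["wav_filename", "wav_filesize", "transcript"]
    # Stand-in data; in practice, read the rows from your full train.csv.
    rows = [[f"clip{i}.wav", 16000 + i, f"sample text {i}"] for i in range(100)]
    train, dev, test = split_rows(rows)
    for name, part in [("train.csv", train), ("dev.csv", dev), ("test.csv", test)]:
        write_split(part, header, name)
    print(len(train), len(dev), len(test))  # 80 10 10
```

With only 32 files, of course, no split will help; the point of the dev set is to catch the kind of divergence shown in the log above before thousands of epochs are wasted.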