Error in training (loss suddenly diverges)

Hi everyone! I am training a model and the loss suddenly diverges:

I Finished training epoch 1842 - loss: 7.289767
I Training epoch 1843…
I Finished training epoch 1843 - loss: 6.417722
I Training epoch 1844…
I Finished training epoch 1844 - loss: 8.443151
I Training epoch 1845…
I Finished training epoch 1845 - loss: 6.085240
I Training epoch 1846…
I Finished training epoch 1846 - loss: 6.817554
I Training epoch 1847…
I Finished training epoch 1847 - loss: 6.163227
I Training epoch 1848…
I Finished training epoch 1848 - loss: 6.673455
I Training epoch 1849…
I Finished training epoch 1849 - loss: 6.459898
I Training epoch 1850…
I Finished training epoch 1850 - loss: 6.594937
I Training epoch 1851…
I Finished training epoch 1851 - loss: 6.687344
I Training epoch 1852…
I Finished training epoch 1852 - loss: 11.897445
I Training epoch 1853…
I Finished training epoch 1853 - loss: 33.757729
I Training epoch 1854…
I Finished training epoch 1854 - loss: 1402.510132
I Training epoch 1855…
I Finished training epoch 1855 - loss: 4485.072754
I Training epoch 1856…
I Finished training epoch 1856 - loss: 2153.560059
I Training epoch 1857…
I Finished training epoch 1857 - loss: 1923.557007
I Training epoch 1858…
I Finished training epoch 1858 - loss: 2507.593750
I Training epoch 1859…
I Finished training epoch 1859 - loss: 1716.422974
I Training epoch 1860…
I Finished training epoch 1860 - loss: 1413.547974
I Training epoch 1861…
I Finished training epoch 1861 - loss: 1260.555908
I Training epoch 1862…
I Finished training epoch 1862 - loss: 1238.079102
I Training epoch 1863…
I Finished training epoch 1863 - loss: 1010.995300
I Training epoch 1864…
I Finished training epoch 1864 - loss: 944.558167
Run parameters:

python -u DeepSpeech.py --noshow_progressbar \
  --train_files /home/user/DeepSpeech/test/train.csv \
  --test_files /home/user/DeepSpeech/test/train.csv \
  --train_batch_size 31 \
  --test_batch_size 30 \
  --n_hidden 200 \
  --epochs 4000 \
  --checkpoint_dir /home/user/DeepSpeech/test/model/ \
  --export_dir /home/user/DeepSpeech/test/model/ \
  --alphabet_config_path /home/user/DeepSpeech/test/alphabet.txt \
  --lm_binary_path /home/user/DeepSpeech/test/lm.binary \
  --lm_trie_path /home/user/DeepSpeech/test/trie
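As a point of comparison, the trainer also accepts a held-out validation set. A hedged sketch of a fuller invocation — `--dev_files` and `--dev_batch_size` are trainer flags from the same DeepSpeech.py, but the dev.csv/test.csv paths, the epoch count, and the batch sizes below are hypothetical illustration values, not from this thread:

```shell
# Sketch only: same run, but with separate dev and test sets.
# dev.csv / test.csv and the numeric values are hypothetical.
python -u DeepSpeech.py --noshow_progressbar \
  --train_files /home/user/DeepSpeech/test/train.csv \
  --dev_files /home/user/DeepSpeech/test/dev.csv \
  --test_files /home/user/DeepSpeech/test/test.csv \
  --train_batch_size 31 \
  --dev_batch_size 30 \
  --test_batch_size 30 \
  --n_hidden 1024 \
  --epochs 75 \
  --checkpoint_dir /home/user/DeepSpeech/test/model/ \
  --export_dir /home/user/DeepSpeech/test/model/ \
  --alphabet_config_path /home/user/DeepSpeech/test/alphabet.txt
```

Validating on a set the model never trains on is what makes a rising loss (overfitting) visible early instead of after thousands of epochs.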

Please copy/paste text content, don’t use screenshots. Also, please share context on your training; otherwise there is nothing we can do to help you here.

Changed at your request!

What are those? No validation set?

Why this value?

Can’t I specify the same file for the test?
Because in the example script it was 200.

It’s just going to overfit. Since you don’t give more context, I can’t know whether that is what you want or you are making a mistake.

What example?

run-ldc93s1.sh in the DeepSpeech/bin folder

This one defines a model of 100 hidden units, not 200, and it is purposely overfitting. It’s just there as a basic sanity test to help ensure the training setup is okay.

What should I do to properly train the model?

  1. Set --n_hidden to 1024?
  2. Add a test dataset?

First, please answer the question I asked in the very first reply:

please share context on your training; otherwise there is nothing we can do to help you here.

Context? My audio files?
https://colab.research.google.com/drive/1gDh8XRNXMDWse_Kx52aWy_TtEGSry8qN
It is the same there.

No, what you are trying to achieve.

To train my own language model; in our language there are no full-fledged voice recognition systems, and accordingly no datasets.

Well, have you followed the documentation? Do you understand what you are doing? You need lots of data, and a proper train/dev/test split of it.

That does not document where train.csv is coming from.

My train.csv has this format (screenshot: photo_2020-01-13_18-37-34). Maybe it is not composed correctly?

Please avoid using screenshots.

I don’t understand your question. Can you please explain where your train.csv comes from? How much data do you have?

I wish you could understand. I have about 33 entries, of which 32 are for training and 1 is for the test. The question was: why did the loss suddenly increase?

I prepared all the data myself.

Finally. Not a question I could answer until you agreed to be more specific about your context. 32 files for training is far from enough.

Your training setup is likely very inconsistent: no validation set, so it’s overfitting. Small dataset, so it’s overfitting. Way too many epochs, so it’s producing random results.

I can’t do divination; if you don’t explain, I cannot know and I can’t help you.

So please divide your data into train.csv, dev.csv and test.csv.

Well, thank you! You could not help me, not even with a little example.