Error when trying to train


(Sawantilak) #1

Hi,
I was following the instructions to train the model, but I getting the below error. Can some one please help me identify why I am getting this error and how can I get this fixed?

```
(deepspeech-venv) sawan@Sawan-Office:~/pfiles/ds_home/DeepSpeech$ ./DeepSpeech.py --train_files …/…/…/data/deepvoice/cv_corpus_v1/cv-valid-train.csv,…/…/…/data/deepvoice/cv_corpus_v1/cv-other-train.csv --dev_files …/…/…/data/deepvoice/cv_corpus_v1/cv-valid-dev.csv --test_files …/…/…/data/deepvoice/cv_corpus_v1/cv-valid-test.csv
Traceback (most recent call last):
  File "./DeepSpeech.py", line 1828, in <module>
    tf.app.run()
  File "/home/sawan/venv/deepspeech-venv/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "./DeepSpeech.py", line 1785, in main
    train()
  File "./DeepSpeech.py", line 1465, in train
    next_index=lambda i: COORD.get_next_index('train'))
  File "/home/sawan/pfiles/ds_home/DeepSpeech/util/feeding.py", line 95, in __init__
    self.files = self.files.sort_values(by="wav_filesize", ascending=ascending)
  File "/home/sawan/venv/deepspeech-venv/local/lib/python2.7/site-packages/pandas/core/frame.py", line 3619, in sort_values
    k = self.xs(by, axis=other_axis).values
  File "/home/sawan/venv/deepspeech-venv/local/lib/python2.7/site-packages/pandas/core/generic.py", line 2335, in xs
    return self[key]
  File "/home/sawan/venv/deepspeech-venv/local/lib/python2.7/site-packages/pandas/core/frame.py", line 2139, in __getitem__
    return self._getitem_column(key)
  File "/home/sawan/venv/deepspeech-venv/local/lib/python2.7/site-packages/pandas/core/frame.py", line 2146, in _getitem_column
    return self._get_item_cache(key)
  File "/home/sawan/venv/deepspeech-venv/local/lib/python2.7/site-packages/pandas/core/generic.py", line 1842, in _get_item_cache
    values = self._data.get(item)
  File "/home/sawan/venv/deepspeech-venv/local/lib/python2.7/site-packages/pandas/core/internals.py", line 3843, in get
    loc = self.items.get_loc(item)
  File "/home/sawan/venv/deepspeech-venv/local/lib/python2.7/site-packages/pandas/core/indexes/base.py", line 2527, in get_loc
    return self._engine.get_loc(self._maybe_cast_indexer(key))
  File "pandas/_libs/index.pyx", line 117, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 1265, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 1273, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'wav_filesize'
```


(Lissyx) #2

Can you make sure you post the stack trace using backquotes? Otherwise some characters go missing and it's harder to read :slight_smile:

like that

(Lissyx) #3

The error mentions wav_filesize; can you make sure your CSV files do have that column?
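A quick way to check is to read only the header row with pandas and compare it against the columns the feeder expects. This is a sketch: the `REQUIRED` set and the in-memory sample CSVs are illustrative, so point `missing_columns` at your real train/dev/test files instead.

```python
import io
import pandas as pd

# Columns the DeepSpeech feeder expects; the KeyError above comes from
# sorting by "wav_filesize", so that one in particular must be present.
REQUIRED = {"wav_filename", "wav_filesize", "transcript"}

def missing_columns(csv_source):
    """Return the required columns absent from a CSV's header row."""
    header = pd.read_csv(csv_source, nrows=0).columns  # read header only
    return REQUIRED - set(header)

# In-memory CSVs standing in for real files on disk:
good = io.StringIO("wav_filename,wav_filesize,transcript\n")
bad = io.StringIO("filename,text,up_votes,down_votes,age,gender,accent,duration\n")
print(sorted(missing_columns(good)))  # []
print(sorted(missing_columns(bad)))   # ['transcript', 'wav_filename', 'wav_filesize']
```

If the second print lists anything, that CSV cannot be fed to `DeepSpeech.py` as-is.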


(Sawantilak) #4

I checked, and the header "wav_filesize" exists in all the files. I tried to attach the CSV files here, but the site won't let me upload them because I am a new user.

Would appreciate anyone who can help me out with this issue. Any pointers on how I can troubleshoot this one?


(Sawantilak) #5

Thanks, lissyx, for taking the time to reply and trying to help me. I found out what was wrong; I had to run DeepSpeech.py in a debugger to track it down.

Turns out there are two sets of *.csv files being generated, and I have to pick the CSV files in the root folder for training. I was pointing at the wrong files, which had the headers below. I pointed to the right files, problem solved :slight_smile:

`filename text up_votes down_votes age gender accent duration`

Thanks :slight_smile:


(Phanthanhlong7695) #6

What can I do to fix that?


(Sawantilak) #7

You are probably using the wrong CSV file. Check whether there are CSV files in the parent folder with three columns, and use those.
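To find the right files programmatically, one could scan a corpus directory for CSVs whose header contains the expected columns. A sketch, assuming a standard-library-only check and a `root` directory of your choosing; the file names in the usage comment are hypothetical:

```python
import csv
import glob
import os

# Columns the DeepSpeech feeder expects in its train/dev/test CSVs.
REQUIRED = {"wav_filename", "wav_filesize", "transcript"}

def find_training_csvs(root):
    """Yield paths of CSVs under `root` whose header has the expected columns."""
    pattern = os.path.join(root, "**", "*.csv")
    for path in sorted(glob.glob(pattern, recursive=True)):
        with open(path, newline="") as f:
            header = next(csv.reader(f), [])  # first row only
        if REQUIRED <= set(header):
            yield path

# Usage (hypothetical layout):
#   for path in find_training_csvs("cv_corpus_v1"):
#       print(path)
```

Any CSV this skips (e.g. the original Common Voice metadata files with `filename,text,up_votes,…` headers) would trigger the `KeyError: 'wav_filesize'` above.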

Sawan.


(Lissyx) #8

Since you found your issue, I'm taking the opportunity to ask: would you mind updating the title with better wording, to make it easier for others to find this specific problem? Thanks!