Checkpoint: WAV data chunk 'data' is too large

Hi, I'm looking to use DeepSpeech to pick up my own accent better and learn a bit about speech recognition systems.

But after setting everything up, it gets to this point:

Use standard file APIs to check for files with this prefix.
I Restored variables from most recent checkpoint at /home/----------/SpeechDemo/deepspeech-0.5.1-checkpoint/model.v0.5.1, step 467356
I STARTING Optimization
Epoch 0 | Training | Elapsed Time: 0:00:00 | Steps: 0 | Loss: 0.000000

and at the end of the traceback:

tensorflow.python.framework.errors_impl.InvalidArgumentError: WAV data chunk 'data' is too large: 2147483648 bytes, but the limit is 2147483647
[[{{node DecodeWav}}]]
[[{{node tower_0/IteratorGetNext}}]]

That byte count is about 2 GiB, and it being exactly one byte over the limit seemed strange to me. Could one of my WAV files really be that size, or is the size field in the header itself wrong?
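For context, 2147483647 is the largest value a signed 32-bit integer can hold (2^31 − 1), so the header is declaring a 'data' chunk of exactly 2^31 bytes, one past what TensorFlow's DecodeWav will accept. As a rough sanity check (just a sketch, assuming canonical 44-byte PCM headers and GNU od/stat on a little-endian Linux machine), the declared chunk size can be compared against the actual payload on disk:

#!/bin/bash
# Sketch: flag WAVs whose declared 'data' chunk size disagrees with the file.
# Assumes a canonical 44-byte PCM header; offsets shift if extra chunks exist.
for f in *.wav; do
    declared=$(od -An -tu4 -j40 -N4 "$f" | tr -d ' ')   # bytes 40-43: little-endian chunk size
    actual=$(( $(stat -c %s "$f") - 44 ))               # payload bytes actually on disk
    [ "$declared" != "$actual" ] && echo "$f: header says $declared, file has $actual bytes"
done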

I'm using the 0.5.1 checkpoint and 0.5.1 source, on the CPU. I was wondering if anyone has come across this and could explain it.

Thanks a lot for the project and the models, it's amazing.

Maybe: https://github.com/mozilla/DeepSpeech/issues/2271?


Hi Carlfm01,
Yeah, that thread helped me find a solution.
I'm not sure whether it was a corrupt WAV file or just a wrongly formatted one, but the thread suggested running the files through ffmpeg in a bash script to rewrite them. I tried it out, then started the training process again, and it worked:

#!/bin/bash
# Remux each WAV: -c:a copy leaves the audio samples untouched while
# ffmpeg writes a fresh header with a correct 'data' chunk size.
mkdir -p tmp
for name in *.wav; do
    ffmpeg -i "$name" -c:a copy "tmp/$name"
done
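The rewritten files land in tmp/ rather than overwriting the originals, so a header check like the one above can be re-run on them before swapping them into the training set.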

The computer slowed to a crawl, but that's just the hardware I'm running; it was training.
Thanks for the help.