Error while running sample model on Raspbian GNU/Linux 9.4 (stretch)

When trying to use the pre-trained model on a Raspberry Pi 3 with Raspbian Stretch, I get this error.
I did the following steps to download the files for the ARM architecture:
python util/taskcluster.py --arch arm --target /path/to/destination/folder

I suspect I am running out of memory, as the error takes some time to appear while my memory consumption builds up. When the memory consumption reaches 95%, I get the error.
I tested with the following command:
./deepspeech models/output_graph.pb models/alphabet.txt models/lm.binary models/trie audio/output_left.wav -t

The error I am getting is listed below:

TensorFlow: v1.6.0-16-gc346f2c
DeepSpeech: v0.2.0-alpha.5-0-g7cc8382
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
terminate called after throwing an instance of 'util::ErrnoException'
what(): native_client/kenlm/util/mmap.cc:122 in void* util::MapOrThrow(std::size_t, bool, int, bool, int, uint64_t) threw ErrnoException because `(ret = mmap(__null, size, protect, flags, fd, offset)) == ((void *) -1)'.
Cannot allocate memory mmap failed for size 1599616228 at offset 0
Aborted

I also tried removing the lm.binary and trie files from the command:

./deepspeech models/output_graph.pb models/alphabet.txt audio/output_left.wav -t

TensorFlow: v1.6.0-16-gc346f2c
DeepSpeech: v0.2.0-alpha.5-0-g7cc8382
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted

Yes, you are running out of memory. Either add swap or use an mmapped model by running convert_graphdef_memmapped_format, as documented in the README.md.
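For reference, the conversion is a single command. This is a sketch: the `--in_graph`/`--out_graph` flags are the tool's standard flags, but the paths and the `.pbmm` output name are just examples matching the commands above:

```shell
# Convert the frozen graph so deepspeech can mmap it instead of
# reading the whole file into heap memory
./convert_graphdef_memmapped_format \
  --in_graph=models/output_graph.pb \
  --out_graph=models/output_graph.pbmm

# Then point deepspeech at the mmapped graph as before
./deepspeech models/output_graph.pbmm models/alphabet.txt audio/output_left.wav -t
```

After this, the "reading entire model file into memory" warning should disappear, and peak memory use drops accordingly.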

Yes, after increasing the swap memory, the model started working.
Thanks for the help.
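For anyone else hitting this on Raspbian, a minimal sketch of increasing swap, assuming the default dphys-swapfile setup (the 2048 MB value is just an example):

```shell
# Raspbian sizes its swap file from CONF_SWAPSIZE (in MB) in /etc/dphys-swapfile
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile

# Recreate and re-enable the swap file with the new size
sudo /etc/init.d/dphys-swapfile restart

# Verify the new swap size
free -h
```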

Using swap is a fallback; I'd recommend avoiding it as much as possible: it kills performance, and your memory card might not like it.

Yes, I am trying to compile the convert_graphdef_memmapped_format binary to create an mmapped file from the frozen graph and reduce heap usage.

Why? Just download it: https://index.taskcluster.net/v1/task/project.deepspeech.tensorflow.pip.r1.6.cpu/artifacts/public/convert_graphdef_memmapped_format (Linux amd64 binary; other platforms are available).
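A sketch of fetching and preparing the prebuilt tool on an x86-64 Linux machine (the linked binary is amd64, so run the conversion there and copy the result to the Pi):

```shell
# Download the prebuilt converter and make it executable
wget https://index.taskcluster.net/v1/task/project.deepspeech.tensorflow.pip.r1.6.cpu/artifacts/public/convert_graphdef_memmapped_format
chmod +x convert_graphdef_memmapped_format

# Run it on the frozen graph, then copy the output to the Raspberry Pi
```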