DeepSpeech 0.7.3 on Windows: Failed to initialize memory mapped model

Hi, I am new to DeepSpeech and am running into some trouble with the pre-trained model. Nothing I have found on here applies to my case, so I am hoping someone has encountered the same issue. The error happens when I try to load the pre-trained model. Unfortunately, the error is not very descriptive at all and just says “RuntimeError: CreateModel failed with ‘Failed to initialize memory mapped model.’ (0x3000)”. Please see below for my sample code.

import deepspeech as ds
# Loading the pre-trained acoustic model raises the error above
model = ds.Model(r"..\deepspeech\deepspeech-0.7.3-models.pbmm")

I’m running 64-bit Windows 10, and ds.version() does indeed return 0.7.3. Any thoughts would be greatly appreciated.

Are you sure that’s a valid path? Are you sure you’re not running the TFLite version of the DeepSpeech package?

Hi Reuben, thanks for the reply. The path is definitely valid. I believe I’m not using the tflite version (I also tried to load the .tflite file to no avail). How can I double check that I’m not running the tflite version?

Have you tried running the DeepSpeech.py script from command line? Sometimes that output is more descriptive if you have a general problem with your setup.

He’s trying to use the native client, not train a model.

Sorry, the fact that you got this specific error already means you’re not running the TFLite version. This error can only happen due to two things: an invalid path, or a corrupted input file. So I’d double check you can reach that path from Python, for example by running a script like

import os
print(os.path.exists(r"..\deepspeech\deepspeech-0.7.3-models.pbmm"))

And also try redownloading the model files.
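
If you do redownload, a quick size sanity check can catch a truncated or otherwise broken download. This is just a sketch; the path is the one from your example, and the exact expected size depends on the release.

import os

# Rough sanity check on the downloaded model file. Adjust the path to wherever
# you saved it. The .pbmm acoustic model is well over 100 MB, so a file of only
# a few KB usually means the download failed or was truncated.
path = r"..\deepspeech\deepspeech-0.7.3-models.pbmm"
size_mb = os.path.getsize(path) / (1024 * 1024)
print(f"{path}: {size_mb:.1f} MB")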

Is this a valid path? "../deepspeech/deepspeech-0.7.3-models.pbmm" (with forward slashes) should be tested as well.

So the path was indeed valid, but because it was on a network drive instead of a local drive, it seems the I/O lag caused some issues. When I moved the file to a local drive, I was able to create the model without issue.
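
For anyone hitting the same thing, this is roughly what my workaround looks like (a minimal sketch; the local cache location is arbitrary):

import os
import shutil
import deepspeech as ds

# Copy the model from the network share to a local directory before loading it.
network_model = r"..\deepspeech\deepspeech-0.7.3-models.pbmm"
local_model = os.path.join(os.environ.get("TEMP", "."), "deepspeech-0.7.3-models.pbmm")

if not os.path.exists(local_model):
    shutil.copy2(network_model, local_model)

model = ds.Model(local_model)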

Perhaps it would be worth updating the error message to be more informative for users. If it’s “common knowledge” that the error can only be caused by an issue with the path/reaching the file or a corrupt input file, perhaps that knowledge should be institutionalized in a more robust error message.

(Reuben, thanks for the help, but you shouldn’t assume I’m male; it’s only mildly offensive.)

No, I think it has nothing to do with I/O lag. I don’t know the internals of Windows, but I would not be surprised if there were limitations on performing mmap() on network-mounted drives.

Except that, at the code level, we can’t tell why there was a failure.

Also, your error message mentioning a failure to initialize the mmap model was already pretty specific, but we lacked your context.

I just ran this again against the network drive location and it worked. There doesn’t appear to be any Windows-specific mmap issue at play.

Perhaps you could run a one-line file check to ensure access and return a sensible error? I’m sure your users would willingly trade a few milliseconds of overhead for a few hours of debugging something that doesn’t have the most straightforward exception. Anyway, just a suggestion.
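
For illustration, this is roughly what I have in mind, done on the caller side for now (just a sketch; the load_model helper is my own, not part of the deepspeech API):

import os
import deepspeech as ds

def load_model(path):
    # Fail with an explicit message before handing the path to the library.
    if not os.path.isfile(path):
        raise FileNotFoundError(f"Model file not found: {path}")
    return ds.Model(path)

model = load_model(r"..\deepspeech\deepspeech-0.7.3-models.pbmm")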

Thanks all - very helpful.

Sorry for that.

We may be able to tell data loss from an invalid path based on the TF error info.

@jmn319 can you try with that same path but using the native client binary from https://github.com/mozilla/DeepSpeech/releases/download/v0.7.3/native_client.amd64.cpu.win.tar.xz ?

Basically extract that archive and see what happens when you do ./deepspeech --model "..\deepspeech\deepspeech-0.7.3-models.pbmm" --audio some_file.wav

Sorry for the delay. This was also successful. Please see below for the full output.

I was confronted with this error today. It turned out that I had changed the path of the model; when I re-downloaded the model to the correct path, it worked.

I was stuck with this error today as well; what solved the issue for me was downloading the model from the browser instead of using curl (on a Mac).