Python inference on Raspberry Pi with DeepSpeech 0.4.1

Hi,

I am trying to run inference with Python 3 on a Raspberry Pi 3, using a custom-trained tiny model built with DeepSpeech 0.4.1.

I am not using the native_client because I would have to recompile it for every new release (my model uses a different numcep and a different sample_rate), and I want to test a few things first.

But I am unable to install the ds_ctcdecoder with pip3:

pip3 install $(python3 util/taskcluster.py --arch arm --decoder)

Collecting ds-ctcdecoder==0.4.1 from https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.v0.4.1.arm-ctc/artifacts/public/ds_ctcdecoder-0.4.1-cp35-cp35m-linux_armv7l.whl
HTTP error 404 while getting https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.v0.4.1.arm-ctc/artifacts/public/ds_ctcdecoder-0.4.1-cp35-cp35m-linux_armv7l.whl
Could not install requirement ds-ctcdecoder==0.4.1 from https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.v0.4.1.arm-ctc/artifacts/public/ds_ctcdecoder-0.4.1-cp35-cp35m-linux_armv7l.whl because of error 404 Client Error: Not Found for url: https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.v0.4.1.arm-ctc/artifacts/public/ds_ctcdecoder-0.4.1-cp35-cp35m-linux_armv7l.whl
Could not install requirement ds-ctcdecoder==0.4.1 from https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.v0.4.1.arm-ctc/artifacts/public/ds_ctcdecoder-0.4.1-cp35-cp35m-linux_armv7l.whl because of HTTP error 404 Client Error: Not Found for url: https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.v0.4.1.arm-ctc/artifacts/public/ds_ctcdecoder-0.4.1-cp35-cp35m-linux_armv7l.whl for URL https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.v0.4.1.arm-ctc/artifacts/public/ds_ctcdecoder-0.4.1-cp35-cp35m-linux_armv7l.whl
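To rule out a pip quirk, I also fetched the URL from the log directly; this just reproduces the 404 (quick sketch, standard library only):

import urllib.request, urllib.error

URL = ('https://index.taskcluster.net/v1/task/'
       'project.deepspeech.deepspeech.native_client.v0.4.1.arm-ctc/'
       'artifacts/public/ds_ctcdecoder-0.4.1-cp35-cp35m-linux_armv7l.whl')

try:
    # HEAD request: we only care whether the artifact exists
    urllib.request.urlopen(urllib.request.Request(URL, method='HEAD'))
    print('found')
except urllib.error.HTTPError as e:
    print('HTTP', e.code)  # prints: HTTP 404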

Is it the wrong command?

Thanks,
Mar

Can you explain what exactly you are trying to do?

When you say "a custom trained tiny model with DeepSpeech 0.4.1", it sounds like the model is already trained. So I don't understand why you would need ds_ctcdecoder: it is only needed at training time, and is therefore not built for ARM.

Hi Alexander,

Yes, I am not training.

I am trying to run inference with Python; that is why I need the ds_ctcdecoder.
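Concretely, I am using the inference path in the training repo (e.g. DeepSpeech.py with one_shot_infer), and that imports the decoder package at the top; as far as I can tell from the 0.4.1 sources, this is the import that fails on the Pi:

# Import in the 0.4.1 training code that pulls in the decoder;
# without an ARM wheel, this raises ImportError on the Pi:
from ds_ctcdecoder import ctc_beam_search_decoder, Scorer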

Could it be made available for ARM? Is it very difficult to build for this arch? Do you plan to include it in the future? If you want me to collaborate on this, I am ready.

I am aware of the native_client for inference, but I have to rebuild it for every new version you release (my custom model has a different sample_rate, different MFCC parameters, and a different numcep).

It would be extremely useful to be able to run some Python tests in advance, for a quick, rough performance evaluation, even across different DeepSpeech versions; see the sketch below for the kind of check I mean.
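Something as simple as this (hypothetical harness, names are placeholders; model.stt as in the Python bindings):

import time

def time_stt(model, audio, fs, runs=5):
    # Rough wall-clock timing of inference, enough to compare
    # DeepSpeech versions or model variants on the same audio.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        model.stt(audio, fs)
        times.append(time.perf_counter() - start)
    return min(times), sum(times) / len(times)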

Thanks a lot,
Mar

Again: you don’t need ds_ctcdecoder for running inference.
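For reference, Python inference goes through the deepspeech package bindings, not ds_ctcdecoder. A minimal sketch, assuming the 0.4.x bindings API and placeholder paths:

import wave
import numpy as np
from deepspeech import Model

# Feature geometry and beam width as baked into the stock 0.4.x client;
# a custom model would need its own values here.
N_FEATURES = 26
N_CONTEXT = 9
BEAM_WIDTH = 500

ds = Model('output_graph.pb', N_FEATURES, N_CONTEXT, 'alphabet.txt', BEAM_WIDTH)

with wave.open('audio.wav', 'rb') as w:
    fs = w.getframerate()
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

print(ds.stt(audio, fs))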

There’s no other solution than rebuilding for now.

No, we don't plan to make it available for ARM. That said, nothing stops you from building it yourself: make -C native_client/ctcdecode with a proper cross-compilation setup, or even on-device.

But so far, this is not something we want to support; there's already too much to take care of.

If rebuilding native_client for those reasons is really a blocker for you, maybe you can help us by making the code less tied to those parameters?
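For example (purely a hypothetical sketch, not how the code is structured today), the feature parameters could live in a sidecar file next to the model, so clients read them at run time instead of having them compiled in:

import json
from collections import namedtuple

# Hypothetical: the geometry the model was trained with, stored alongside it.
FeatureConfig = namedtuple('FeatureConfig', 'sample_rate numcep n_context')

def load_feature_config(path):
    # e.g. output_graph.json sitting next to output_graph.pb
    with open(path) as f:
        return FeatureConfig(**json.load(f))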