Unable to install DeepSpeech on CentOS 6.9

Hello,
Greetings!

I am new to DeepSpeech and am trying to prepare the setup for it.
I am facing the issue below:

(deepspeech-venv) [root@localhost DeepSpeech]# pip install deepspeech
Collecting deepspeech
/root/tmp/deepspeech-venv/lib/python2.7/site-packages/pip/vendor/requests/packages/urllib3/util/ssl.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
SNIMissingWarning
/root/tmp/deepspeech-venv/lib/python2.7/site-packages/pip/vendor/requests/packages/urllib3/util/ssl.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Could not find a version that satisfies the requirement deepspeech (from versions: )
No matching distribution found for deepspeech

I am following the link below for installation.

Below is the system info:
(deepspeech-venv) [root@localhost DeepSpeech]# python --version
Python 2.7.6
You have mail in /var/spool/mail/centerstage
(deepspeech-venv) [root@localhost DeepSpeech]# cat /etc/redhat-release
CentOS release 6.9 (Final)

Please let me know what the problem is here. Why am I not able to install deepspeech?

Please try pip install --verbose deepspeech and paste the output. Also, I’d advise against installing as root; that’s not a good habit to get into.

Also, what is your pip version? Try pip --version; you might need at least 9.x. If yours is older, try pip install --upgrade --user pip, and use the $HOME/.local/bin/pip binary.

Hello Lissyx,

Thanks for the response!

My pip version is 9.0.1
(deepspeech-venv) [centerstage@localhost DeepSpeech]$ pip --version
pip 9.0.1 from /home/centerstage/tmp/deepspeech-venv/lib/python2.7/site-packages (python 2.7)

The output of the ‘pip install --verbose deepspeech’ command is too long; I am not able to paste it here or upload it as a document, because the forum reports that a new user can only put 5 links in a post. Can you suggest how I can share it?

pastebin.mozilla.org ?

Thanks Lissyx!

Please find the link below containing the output of ‘pip install --verbose deepspeech’ command.
https://pastebin.mozilla.org/9078682

I am also no longer working as the root user.

Thanks!

Okay, can you try this?

$ wget https://pypi.python.org/packages/45/45/6f153f51407486d541723620bcec68f942236692c5fcd328e7dac5301ca1/deepspeech-0.1.1-cp27-cp27mu-manylinux1_x86_64.whl && mv deepspeech-0.1.1-cp27-cp27m-manylinux1_x86_64.whl && pip install --user --upgrade deepspeech-0.1.1-cp27-cp27m-manylinux1_x86_64.whl

I’m wondering if there’s a slight difference with your Python version?

Can you run this as well?

 python -c 'import sysconfig; import pprint; pprint.pprint(sysconfig.get_config_vars())' 

Hello everyone, I am trying to run DeepSpeech on Ubuntu and I am getting this error.

How can I resolve it?

Hello Lissyx,

On applying the upgrade command, it shows the output below:

(deepspeech-venv) [centerstage@localhost DeepSpeech]$ pip install --upgrade deepspeech-0.1.1-cp27-cp27mu-manylinux1_x86_64.whl
deepspeech-0.1.1-cp27-cp27mu-manylinux1_x86_64.whl is not a supported wheel on this platform.

On running
(deepspeech-venv) [centerstage@localhost DeepSpeech]$ python -c 'import sysconfig; import pprint; pprint.pprint(sysconfig.get_config_vars())'

I got the output below, and this time also I was not able to install deepspeech:
https://pastebin.mozilla.org/9078692

Looks like I missed one part of the command: mv deepspeech-0.1.1-cp27-cp27mu-manylinux1_x86_64.whl deepspeech-0.1.1-cp27-cp27m-manylinux1_x86_64.whl && pip install --user --upgrade deepspeech-0.1.1-cp27-cp27m-manylinux1_x86_64.whl

Right, this might be due to the lack of --enable-unicode=ucs4 in the CONFIG_ARGS part. If that’s the case, renaming the wheel to deepspeech-0.1.1-cp27-cp27m-manylinux1_x86_64.whl should trick pip, but the package might behave erratically with Unicode.

It would be useful if you filed an issue on GitHub about supporting this setup :).
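As an aside, whether a CPython 2.7 interpreter matches cp27m or cp27mu wheels can be checked from sys.maxunicode; the mapping below is the standard narrow-vs-wide build distinction for CPython 2.7 (the helper function itself is just an illustrative sketch):

```python
import sys

def cp27_wheel_tag(maxunicode):
    # CPython 2.7 narrow (UCS-2) builds report sys.maxunicode == 65535 and
    # match cp27m wheels; wide (UCS-4) builds, configured with
    # --enable-unicode=ucs4, report 1114111 and match cp27mu wheels.
    return "cp27mu" if maxunicode > 65535 else "cp27m"

print(cp27_wheel_tag(sys.maxunicode))
```

Running this inside the virtualenv tells you which of the two wheel filenames pip will accept without renaming tricks.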

Hello lissyx,
Thanks for the response!

Now I am able to install deepspeech.
But when I try to install the requirements, it throws an error while installing tensorflow==1.5.0:

(deepspeech-venv) [centerstage@localhost DeepSpeech]$ pip install -r requirements.txt
Collecting pandas (from -r requirements.txt (line 1))
/home/centerstage/tmp/deepspeech-venv/lib/python2.7/site-packages/pip/vendor/requests/packages/urllib3/util/ssl.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
SNIMissingWarning
/home/centerstage/tmp/deepspeech-venv/lib/python2.7/site-packages/pip/vendor/requests/packages/urllib3/util/ssl.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Using cached pandas-0.22.0.tar.gz
Collecting progressbar2 (from -r requirements.txt (line 2))
Using cached progressbar2-3.35.2-py2.py3-none-any.whl
Collecting python-utils (from -r requirements.txt (line 3))
Using cached python_utils-2.3.0-py2.py3-none-any.whl
Collecting tensorflow==1.5.0 (from -r requirements.txt (line 4))
Could not find a version that satisfies the requirement tensorflow==1.5.0 (from -r requirements.txt (line 4)) (from versions: )
No matching distribution found for tensorflow==1.5.0 (from -r requirements.txt (line 4))

Meanwhile, I switched to CentOS 7.3, and I am able to successfully set up the DeepSpeech project along with TensorFlow.

This is only needed if you intend to train. The fact that you have been able to install confirms it was just the Unicode issue. But if TensorFlow upstream itself fails to install, then all bets are off and we cannot help you with that.

Hello Lissyx,

Thanks for your help and the quick responses!!

I have a question; please let me know if this is the right place to ask it, or whether I should open a new discussion thread for it.

When I ran the decoder using the default model on an audio .wav file of ~4 seconds, it took ~38 seconds of inference time.

(deepspeech-venv) [centerstage@localhost DeepSpeech]$ deepspeech ../models/output_graph.pb ../hiroshima-1.wav ../models/alphabet.txt ../models/lm.binary ../models/trie
Loading model from file ../models/output_graph.pb
2018-02-27 17:26:13.741657: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Loaded model in 0.469s.
Loading language model from files ../models/lm.binary ../models/trie
Loaded language model in 2.297s.
Running inference.
on a bright cloud less morning
Inference took 38.391s for 4.620s audio file.

Why is it taking so long?
How can I improve the speed?
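One way to put the numbers above in perspective is the real-time factor (processing time divided by audio duration); a minimal sketch using the figures reported in the transcript:

```python
def real_time_factor(inference_s, audio_s):
    # Ratio greater than 1 means slower than real time. The figures above
    # (38.391 s of inference for a 4.620 s file) come out to roughly 8.3x.
    return inference_s / audio_s

print(round(real_time_factor(38.391, 4.620), 2))
```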

It depends on a lot of parameters: CPU, IO subsystem, and so on. Can you give more details?

Hi Lissyx,

Below is the output of the top/iostat commands while the deepspeech binary is decoding the ~4 second audio file.

  1. Working on a 12-CPU machine

top - 18:04:49 up 39 days, 2:50, 11 users, load average: 2.27, 1.64, 0.98
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 17.0 us, 0.3 sy, 0.0 ni, 82.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 15688592 total, 2214132 free, 3858156 used, 9616304 buff/cache
KiB Swap: 16383996 total, 15744108 free, 639888 used. 10986616 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1344 centers+ 20 0 5362376 2.740g 1.519g S 201.0 18.3 0:57.56 deepspeech

  2. CentOS Linux release 7.3.1611 (Core)

  3. iostat output while the deepspeech binary is running

[root@localhost DeepSpeech]# iostat
Linux 3.10.0-514.26.2.el7.x86_64 (localhost.localdomain) Tuesday 27 February 2018 x86_64 (12 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
1.55 0.00 0.51 0.08 0.00 97.86

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 1.61 0.82 38.20 2763476 129148525

Please let me know if you need any more information.

What’s your exact CPU model and amount of RAM? Hard drive or SSD? It could be the loading of the model itself that takes some time. Try the artifacts from TaskCluster on the master branch: the C++ client in native_client.tar.xz has a “-t” option, and you can use the mmap trick. It should all be accessible from the README. I’ll give you direct pointers if you don’t find them, but right now I cannot :-/.

Hello Lissyx,

I have downloaded DeepSpeech from the link below and installed it for Python using the README.md file present in the DeepSpeech directory.

There is one more README.md file, in the native_client directory, which describes how to build the TensorFlow and DeepSpeech libraries.

I do not follow your point in the statement above.
Can you please elaborate on it?

  1. My system contains an HDD.
  2. Total RAM is 15688592 kB.
  3. Loading the model takes 0.469 s, while loading the language model takes 2.297 s:
    Loaded model in 0.469s.
    Loading language model from files ../models/lm.binary ../models/trie
    Loaded language model in 2.297s.

Please find below the details on CPU.

  1. vendor name
    [centerstage@localhost ~]$ cat /proc/cpuinfo | grep vendor | uniq
    vendor_id : GenuineIntel

  2. model name
    [centerstage@localhost ~]$ cat /proc/cpuinfo | grep 'model name' | uniq
    model name : Intel® Xeon® CPU E5-2609 v3 @ 1.90GHz

  3. Architecture
    [centerstage@localhost ~]$ lscpu
    Architecture: x86_64
    CPU op-mode(s): 32-bit, 64-bit
    Byte Order: Little Endian
    CPU(s): 12
    On-line CPU(s) list: 0-11
    Thread(s) per core: 1
    Core(s) per socket: 6
    Socket(s): 2
    NUMA node(s): 2
    Vendor ID: GenuineIntel
    CPU family: 6
    Model: 63
    Model name: Intel® Xeon® CPU E5-2609 v3 @ 1.90GHz
    Stepping: 2
    CPU MHz: 1200.117
    BogoMIPS: 3808.33
    Virtualization: VT-x
    L1d cache: 32K
    L1i cache: 32K
    L2 cache: 256K
    L3 cache: 15360K
    NUMA node0 CPU(s): 0-5
    NUMA node1 CPU(s): 6-11
    [centerstage@localhost ~]$

  4. frequency/speed of the processor
    [centerstage@localhost ~]$ lscpu | grep -i mhz
    CPU MHz: 1200.117

  5. Multiple processor
    [centerstage@localhost ~]$ cat /proc/cpuinfo | grep -i 'physical id' | uniq
    physical id : 0
    physical id : 1

  6. Number of cores
    [centerstage@localhost ~]$ cat /proc/cpuinfo | grep -i 'core id'
    core id : 0
    core id : 1
    core id : 2
    core id : 3
    core id : 4
    core id : 5
    core id : 0
    core id : 1
    core id : 2
    core id : 3
    core id : 4
    core id : 5

Thanks for those details. With an HDD and that CPU it might be slow, but I don’t have a good sense of how much. To try mmap, please download native_client.tar.xz from https://tools.taskcluster.net/index/artifacts/project.deepspeech.deepspeech.native_client.master/cpu and convert_graphdef_memmapped_format from https://tools.taskcluster.net/index/project.deepspeech.tensorflow.pip.r1.5/cpu

Following those steps: https://github.com/mozilla/DeepSpeech/blob/master/README.md#making-a-mmap-able-model-for-inference please produce output_graph.pbmm from output_graph.pb. Then, using the deepspeech binary from native_client.tar.xz you can run inference (use .pbmm instead of .pb), and add an extra -t argument at the end.

In the end,

$ ./deepspeech ../models/output_graph.pbmm ../models/alphabet.txt ../

With multiple audio files in ../, it will load the model once and perform multiple inferences. We should then be able to know better.
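The load-once/infer-many comparison described here can be sketched generically; note that timed_inferences and the infer callable below are hypothetical helpers for illustration, not part of the DeepSpeech client:

```python
import time

def timed_inferences(infer, wav_paths):
    # Run a single already-loaded model (wrapped as the `infer` callable,
    # mapping a file path to a transcript) over several files, returning
    # (transcript, seconds) pairs so the one-time model-load cost is
    # excluded from each per-file timing.
    results = []
    for path in wav_paths:
        start = time.time()
        results.append((infer(path), time.time() - start))
    return results
```

Comparing these per-file timings against the single-file run above shows how much of the 38 s was model loading versus actual decoding.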

Someone filed an issue for the lack of a Python 2.7 Unicode build similar to yours. I’ve just merged the fix; it should be available as soon as https://tools.taskcluster.net/groups/PrzjPY-ITSK6cn9x4yr3yg completes. The Python package can then be installed from https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.master.cpu/artifacts/public/deepspeech-0.1.1-cp27-cp27m-manylinux1_x86_64.whl or https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.master.cpu/artifacts/public/deepspeech-0.1.1-cp27-cp27mu-manylinux1_x86_64.whl

This is not yet published to the PyPI registry (it will be with the v0.2.0 release).