Hi all,
I’m trying to install the latest version of DeepSpeech on an AWS EC2 C6G instance (https://aws.amazon.com/ec2/instance-types/c6/), but I’m getting “pip._internal.exceptions.DistributionNotFound: No matching distribution found for deepspeech”. I’m assuming this means the Arm-based AWS Graviton2 processor is not supported? Has anyone had any luck installing DS on this architecture?
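For context, this is roughly what I ran on a fresh Ubuntu AMI with the default Python 3:
pip3 install deepspeech
# fails with: pip._internal.exceptions.DistributionNotFound: No matching distribution found for deepspeech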
I’m not an expert on platforms, but the ARMv7 version might work; note it is only for tflite models. @lissyx is more knowledgeable and might add something in the coming days.
lissyx:
Please, can people assume others don’t know what is in their mind, and share complete context?
I have no idea what is provided by C6G, I have no idea what you intended to do, I have no idea what commands you performed before getting this error, I have no idea of the OS setup you have.
We have linux/armv7 and linux/aarch64 builds. It seems Graviton2 is AArch64; have you tried installing our aarch64 Linux tflite Python wheel?
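You can double-check the architecture on the instance itself:
uname -m
# should print aarch64 on a Graviton2 instance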
Sorry, but how would I go about installing a specific python wheel?
lissyx:
Sorry for reacting like that, but we have documented guidelines, and constantly having to ask people for their context adds cognitive load and noise; it’s inefficient in the end. Please use the aarch64 wheel from the GitHub releases matching your Python version.
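For example, something like this (assuming the 0.9.3 release and a Python 3.7 interpreter; pick the cpXX file matching your Python version from the release page):
# inside your activated virtualenv
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-cp37-cp37m-linux_aarch64.whl
pip install deepspeech-0.9.3-cp37-cp37m-linux_aarch64.whl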
I get the error “Could not find a version that satisfies the requirement deepspeech (from versions: none)”
I tried downloading the following file and installing it: deepspeech-0.9.3-cp37-cp37m-linux_aarch64.whl but I get the following error: “deepspeech-0.9.3-cp37-cp37m-linux_aarch64.whl is not a supported wheel on this platform”
I tried downloading the following file and installing it: deepspeech-0.9.3-cp37-cp37m-linux_armv7l.whl but I get the following error: “deepspeech-0.9.3-cp37-cp37m-linux_armv7l.whl is not a supported wheel on this platform”
Thanks in advance.
lissyx:
Python version? Steps for creating the venv? Details of the CPU, like uname -a?
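Concretely, the output of these would help (all standard commands; pip debug needs a reasonably recent pip):
python3 --version
pip --version
uname -a
pip debug --verbose   # lists the wheel tags this interpreter accepts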
lissyx:
I’ve also asked for the verbose pip install output; please share the complete output, just the error alone might not be enough.
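For example (plain pip flags, nothing DeepSpeech-specific):
pip install --verbose deepspeech > pip-install.log 2>&1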
Steps for creating the venv:
virtualenv -p python3 $HOME/tmp/deepspeech-venv/
source $HOME/tmp/deepspeech-venv/bin/activate
CPU details:
Linux ip-172-31-30-152 5.4.0-1035-aws #37-Ubuntu SMP Wed Jan 6 21:02:01 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
pip install verbose log:
It’s very long, so I uploaded it here:
lissyx:
I would advise trying 2, then 3, and only as a last resort, 1.
Also, depending on your use case, we have bindings in other languages, including NodeJS.
If you are just looking for transcription from the CLI, you don’t need the Python or NodeJS bindings; just download the native_client linux aarch64 package from our GitHub release, it should be ABI compatible in your case.
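Roughly like this; I am quoting the asset name from memory, so double-check it on the 0.9.3 release page:
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/native_client.arm64.cpu.linux.tar.xz
tar xvf native_client.arm64.cpu.linux.tar.xz
# grab the tflite model, which is what the aarch64 builds expect
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.tflite
./deepspeech --model deepspeech-0.9.3-models.tflite --audio audio.wav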
@lissyx I was able to get it working by using the node bindings and the tflite model. However, the tflite model seems to be significantly slower. Is there any way we can get pbmm working on this architecture (aarch64 GNU/Linux)?
lissyx:
I’m sorry, but we are still on the same inefficient path: “significantly slower” is not actionable. Tracking performance is hard, and judging with words is wrong: is it slower than your expectation? Slower than being able to run in real time?
What are you trying to achieve? What is your goal? What are your metrics? What are your requirements?
Have you verified with the native_client C++ client directly, in case the inefficiency comes from NodeJS (which could be possible)?
What data do you process? What amount of audio?
If you really need the pbmm model, you would have to rebuild yourself: we switched to the TFLite runtime because the TensorFlow runtime was too slow on those systems.
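If you want an actionable number, a rough real-time-factor check with the C++ client could look like this (file names here are hypothetical):
# transcribe a WAV of known duration, e.g. 60 seconds, and time it
time ./deepspeech --model deepspeech-0.9.3-models.tflite --audio test-60s.wav
# RTF = wall-clock time / audio duration; below 1.0 means faster than real time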
lissyx:
FTR, we are faster than real time on a RPi3, in armv7 as well as aarch64 mode, using those binaries. This is our baseline goal.