Hi lissyx,
Sorry for the long post
And thank you in advance for your patience and help.
On my Xavier I tried the 0.7.0 DeepSpeech wheel you pointed me to,
but I could not reproduce the memory savings described in
hacks.mozilla.org/2019/12/deepspeech-0-6-mozillas-speech-to-text-engine
("We now use 22 times less memory …")
Details
What I did:
I prepared two different Docker environments to compare the performance of DeepSpeech
- with TensorFlow (Python 3.6, the community release at github.com/domcross/DeepSpeech-for-Jetson-Nano/releases), and
- with TensorFlow Lite (Python 3.7, the wheel you provided)
For Python 3.6
Since someone released DeepSpeech 0.6.0 for ARM64 at
github.com/domcross/DeepSpeech-for-Jetson-Nano/releases, I:
1. Downloaded the DeepSpeech 0.6.0 wheel from that release, then ran pip3.6 install deepspeech-0.6.0-cp36-cp36m-linux_aarch64.whl
2. Downloaded the libdeepspeech.so file as well and put it in my search path.
For Python 3.7
I installed all the requirements for DeepSpeech 0.7.0 and then installed
deepspeech-0.7.0a1-cp37-cp37m-linux_aarch64.whl
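To make sure each interpreter really picked up its own wheel, I used a small sanity check like this (my own helper, run once under python3.6 and once under python3.7):

```python
# Sanity check: which deepspeech installation does this interpreter see?
# (assumption: you run this inside each Docker environment separately)
import importlib.util

spec = importlib.util.find_spec("deepspeech")
if spec is None:
    print("deepspeech is not installed in this interpreter")
else:
    print("deepspeech found at", spec.origin)
```

If the printed path points at the wrong site-packages, the two environments are not as isolated as expected.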
- The information for my Xavier is:
root@DeepSpeech_v060:~# uname -a
Linux DeepSpeech_v060 4.9.140-tegra #1 SMP PREEMPT Tue Nov 5 13:37:19 PST 2019 aarch64 aarch64 aarch64 GNU/Linux
Software:
* Name: NVIDIA Jetson AGX Xavier
* Type: AGX Xavier
* Jetpack: UNKNOWN [L4T 32.2.3]
* GPU-Arch: 7.2
- Libraries:
* CUDA: 10.0.326
* cuDNN: 7.6.3.28-1+cuda10.0
* TensorRT: 6.0.1.10-1+cuda10.0
* VisionWorks: 1.6.0.500n
* OpenCV: 4.1.1 compiled CUDA: YES
docker pull nvcr.io/nvidia/deepstream-l4t:4.0.2-19.12-samples
① 22s to load
root@DeepSpeech_v060_lite:~# python3.7 mic_vad_wakeup_060_local.py -v 0 --model ./deepspeech-0.6.1-models/output_graph.tflite --lm ./deepspeech-0.6.1-models/lm.binary --trie ./deepspeech-0.5.1-models/trie
②144s to load
root@DeepSpeech_v060:~# python3.6 mic_vad_wakeup_060_local.py -v 0 --model ./deepspeech-0.6.1-models/output_graph.pbmm --lm ./deepspeech-0.6.1-models/lm.binary --trie ./deepspeech-0.5.1-models/trie
So the TFLite model did take much less time to load.
By "loading the model" I mean the time from executing ① or ② until the prompt appears:
Listening (or press ctrl-c to exit)
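This is how I measured that load time, roughly: a small helper of mine that starts the command and stops the clock when the "Listening" prompt shows up on stdout (the stand-in command below just prints the prompt; on the Xavier, cmd would be the full python3.x mic_vad_wakeup command line):

```python
import subprocess
import sys
import time

def time_until_marker(cmd, marker):
    """Start cmd and return seconds until marker appears on its stdout, or None."""
    t0 = time.perf_counter()
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    try:
        for line in proc.stdout:
            if marker in line:
                return time.perf_counter() - t0
    finally:
        proc.terminate()
        proc.wait()
    return None

# Stand-in process that prints the same prompt the VAD script prints.
secs = time_until_marker(
    [sys.executable, "-c", "print('Listening (or press ctrl-c to exit)')"],
    "Listening",
)
print(f"{secs:.2f}s until 'Listening' appeared")
```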
- The memory usage on the Xavier is as follows (free -m, unit: MB):

         total  used  free   shared  buff/cache  available
□ Mem:   15690  1500  12768  22      1420        13948   (before loading DeepSpeech)
① Mem:   15690  2457  11701  23      1530        12982   (deepspeech 0.7.0a1 + TensorFlow Lite)
② Mem:   15690  2992  10939  22      1758        12451   (deepspeech 0.6.0 + TensorFlow)
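To compare the two setups I look at the delta in the "used" column against the baseline. A small parser I used for the `free -m` rows (my own helper, with the three measured rows inlined):

```python
def parse_free_line(line):
    """Parse one 'Mem:' row of `free -m` output into a dict of MB values."""
    fields = ("total", "used", "free", "shared", "buff/cache", "available")
    parts = line.split()
    assert parts[0] == "Mem:", "expected the Mem: row of free -m"
    return dict(zip(fields, (int(p) for p in parts[1:])))

# The three rows measured above:
before = parse_free_line("Mem: 15690 1500 12768 22 1420 13948")
tflite = parse_free_line("Mem: 15690 2457 11701 23 1530 12982")
tf_full = parse_free_line("Mem: 15690 2992 10939 22 1758 12451")

print("TFLite delta:    ", tflite["used"] - before["used"], "MB")   # 957 MB
print("TensorFlow delta:", tf_full["used"] - before["used"], "MB")  # 1492 MB
```

So TFLite does use less memory here, but nowhere near the 22x reduction from the blog post.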
Did I miss any step?
Also, could you kindly explain the function of libdeepspeech.so?
# If I only use Python, do I also need libdeepspeech.so?
On your 0.7.0 release page there is no ARM64-specific libdeepspeech.so.
Does that mean there is no need to update libdeepspeech.so?
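One way I tried to answer this myself: check whether the installed wheel already ships a native library inside the package directory (my assumption is that the Python wheel bundles libdeepspeech.so; please correct me if that is wrong):

```python
import importlib.util
import pathlib

def find_native_libs(package):
    """List shared objects (.so files) inside an installed package's directory."""
    spec = importlib.util.find_spec(package)
    if spec is None or not spec.submodule_search_locations:
        return []  # package missing, or a plain module with no directory
    root = pathlib.Path(next(iter(spec.submodule_search_locations)))
    return sorted(p.name for p in root.glob("*.so*"))

# On the Xavier this should list the bundled libdeepspeech.so if the wheel
# ships it inside the package directory (my assumption); on a machine
# without deepspeech installed it just prints an empty list.
print(find_native_libs("deepspeech"))
```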