Where can I find cross-compiling instructions for aarch64?

Hi

I need help cross-compiling DeepSpeech for an S912 board.

Here is what I have done so far:

  1. I created a Docker image with the following packages:

FROM ubuntu:18.04

RUN apt update && apt install -y \
    build-essential \
    curl \
    git \
    wget \
    libjpeg-dev \
    openjdk-8-jdk \
    gcc-aarch64-linux-gnu \
    g++-aarch64-linux-gnu \
    autoconf \
    libtool \
    cmake \
    pkg-config \
    python-dev \
    swig3.0 \
    libpcre3-dev \
    && rm -rf /var/lib/apt/lists/*

Install Miniconda

WORKDIR /
RUN wget "https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh" -O "miniconda.sh" && \
    bash "miniconda.sh" -b -p "/conda" && \
    rm miniconda.sh && \
    echo PATH='/conda/bin:$PATH' >> /root/.bashrc && \
    /conda/bin/conda config --add channels conda-forge && \
    /conda/bin/conda update --yes -n base conda && \
    /conda/bin/conda update --all --yes
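With the Dockerfile above saved in the current directory, a minimal sketch for building and entering the image (the ds-cross tag is my arbitrary choice, not from the thread):

```shell
# Build the cross-compilation image described above and open a shell in it.
# Guarded so this degrades to a message on machines without Docker.
if command -v docker >/dev/null 2>&1; then
  docker build -t ds-cross . && docker run --rm -it ds-cross bash \
    || docker_status="docker build/run failed; check the Dockerfile"
else
  docker_status="docker not installed"
fi
echo "${docker_status:-ok}"
```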

Then I installed Bazel:

gcc --version

Install an appropriate Python environment

conda create --yes -n tensorflow python==$PYTHON_VERSION
source activate tensorflow
conda install --yes numpy wheel bazel==0.16.1
conda install -c conda-forge --yes keras-applications

When I execute the following command, it compiles TensorFlow for amd64 (my base platform). What should I do for aarch64?

bazel build --config=opt \
  --action_env="LD_LIBRARY_PATH=${LD_LIBRARY_PATH}" \
  //tensorflow/tools/pip_package:build_pip_package

Once TensorFlow is compiled, how do I get DeepSpeech?

Or is there a link that explains these steps? For example, I went through the DeepSpeech native client compilation for the Asus Tinker Board, but couldn't get any clue from it.

We have builds for ARM64, tested on LePotato, which is S905. Isn't that good enough?

You don’t need to build the TensorFlow Python wheel. Please check the docs.

That's an old thread. Now you should just follow the docs to build in native_client/README.md, using --config=rpi3-armv8 --config=rpi3-armv8_opt as we define in https://github.com/mozilla/tensorflow/blob/bea86c1e884730cf7f8615eb24d31872c198c766/tools/bazel.rc#L70-L74
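A sketch of what that cross-compile invocation could look like. Only the two --config flags come from the post above; the target name and the usual layout (DeepSpeech's native_client reachable from the mozilla/tensorflow checkout, per native_client/README.md) are my assumptions:

```shell
# Cross-compile libdeepspeech.so for ARM64 using the configs defined in
# mozilla/tensorflow tools/bazel.rc. Guarded so it degrades to a message
# when bazel is missing or we are not inside a bazel workspace.
if command -v bazel >/dev/null 2>&1; then
  bazel build --config=rpi3-armv8 --config=rpi3-armv8_opt \
    //native_client:libdeepspeech.so \
    || build_status="build failed (run from the mozilla/tensorflow checkout)"
else
  build_status="bazel not installed"
fi
echo "${build_status:-built}"
```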

Current master ones: https://tools.taskcluster.net/index/project.deepspeech.deepspeech.native_client.master/arm64

You can also find them attached on github releases: https://github.com/mozilla/DeepSpeech/releases/download/v0.4.0-alpha.0/native_client.arm64.cpu.linux.tar.xz
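On the board itself, fetching and unpacking that tarball could look like this sketch (only the URL comes from the post; the target directory is arbitrary):

```shell
# Download and extract the prebuilt ARM64 native client. The guard keeps
# this a no-op when there is no network access or wget is missing.
tarball="native_client.arm64.cpu.linux.tar.xz"
url="https://github.com/mozilla/DeepSpeech/releases/download/v0.4.0-alpha.0/${tarball}"
if wget -q "$url" 2>/dev/null; then
  mkdir -p native_client
  tar -xJf "$tarball" -C native_client   # yields the deepspeech binary
  fetched=yes
else
  fetched=no
fi
echo "fetched: $fetched"
```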

OK, I will try to install this latest one and get back to you. Thanks.

Hi

Maybe it's a simple issue. I am getting this error:

Error: Trie file version mismatch (2 instead of expected 3). Update your trie file.

It looks like it's because the model is from 0.3.0, whereas the binary is 0.4.0. If so, where can I find a model for 0.4.0?

Note: if I use the 0.3.0 binary with the 0.3.0 model, it runs.

Also, I assume that if I want to use it from Python (basically import deepspeech), I need to clone the git repository and follow these steps:

cd native_client/python
make bindings
pip install dist/deepspeech*

We have binaries on TaskCluster as well: https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.master.arm64/artifacts/public/deepspeech-0.4.0a0-cp35-cp35m-linux_aarch64.whl
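The cp35-cp35m part of that filename is the CPython tag the wheel was built for; pip refuses to install it on any other interpreter version. A quick sketch for checking whether it matches the local Python:

```shell
# Compare the wheel's CPython tag (cp35) against the running python3.
wheel="deepspeech-0.4.0a0-cp35-cp35m-linux_aarch64.whl"
py_tag="cp$(python3 -c 'import sys; print("%d%d" % sys.version_info[:2])')"
case "$wheel" in
  *"-${py_tag}-"*) match=yes ;;   # e.g. Python 3.5 -> cp35: installable
  *)               match=no ;;    # e.g. Python 3.6 -> cp36: pip refuses it
esac
echo "local tag ${py_tag}, wheel matches: ${match}"
```

With Python 3.6 the local tag is cp36, so this particular wheel will not install; rebuilding the bindings is then the way out.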

This is because you are using the trie file from master with older binaries. You might want to check out the v0.3.0 git tag.

On TaskCluster we only have it for Python 3.5, whereas I am using Python 3.6. This is why I have started to explore cross-compiling it.

Regarding the model: on TaskCluster (as well as on the release pages), the model is available only for 0.3.0, not for 0.4.0-alpha. With the 0.4.0 binary, the 0.3.0 model gives the above error.

You can use the v0.3 model, but you need to use the v0.4 trie file, which has landed in the master branch recently. It’s at data/lm/trie.
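A sketch of fetching that trie from master and pointing the binary at it; only the data/lm/trie path comes from the post above, the clone directory name is arbitrary:

```shell
# Grab the v0.4-compatible trie from the DeepSpeech master branch.
# Guarded so it degrades gracefully without network access.
if git clone --quiet --depth 1 https://github.com/mozilla/DeepSpeech ds-master 2>/dev/null; then
  trie="ds-master/data/lm/trie"
else
  trie=""    # offline or git missing; fetch the file manually instead
fi
echo "trie: ${trie:-unavailable}"
# Then run the 0.4.0 binary with the v0.3 model files:
# ./deepspeech --model output_graph.pbmm --alphabet alphabet.txt \
#   --lm lm.binary --trie "$trie" --audio file.wav
```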

Thanks, it works. Is there a possibility to get a whl for Python 3.6?

In addition, the output is garbage in the case of 0.4.0 with the latest trie.

sekar@aml:~/DeepSpeech/native_client.arm64.cpu.linux/0.3.0$ ./deepspeech --model ../data/models/output_graph.pbmm --alphabet ../data/models/alphabet.txt --lm ../data/models/lm.binary --trie ../data/models/trie.2 --audio /home/sekar/DeepSpeech/Keras-Trigger-Word/file.wav
TensorFlow: v1.11.0-9-g97d851f
DeepSpeech: v0.3.0-0-gef6b5bd
i say does message to this is a esst
sekar@aml:~/DeepSpeech/native_client.arm64.cpu.linux/0.3.0$ cd ../0.4.0/
sekar@aml:~/DeepSpeech/native_client.arm64.cpu.linux/0.4.0$ ./deepspeech --model ../data/models/output_graph.pbmm --alphabet ../data/models/alphabet.txt --lm ../data/models/lm.binary --trie ../data/models/trie --audio /home/sekar/DeepSpeech/Keras-Trigger-Word/file.wav
TensorFlow: v1.12.0-rc2-5-g1c93ca2
DeepSpeech: v0.4.0-alpha.0-0-g8b0abd5
te avy tli sil h la nissi kile orh skp duz hah xn gurn x fo q l nae tald eu j h kaw pok dat y owo na ul nxr m fe e li sht ro riv oyot zi'e zl fh o'ri wk gi gs shy doh kas d ot bro zadi hayl ise mn esy ou deep x cry zit nu amr kiut c if klk unjo ohk
sekar@aml:~/DeepSpeech/native_client.arm64.cpu.linux/0.4.0$ aplay /home/sekar/DeepSpeech/Keras-Trigger-Word/file.wav
Playing WAVE '/home/sekar/DeepSpeech/Keras-Trigger-Word/file.wav' : Signed 16 bit Little Endian, Rate 16000 Hz, Mono
sekar@aml:~/DeepSpeech/native_client.arm64.cpu.linux/0.4.0$
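The aplay line above already shows the properties DeepSpeech expects: 16000 Hz, mono, signed 16-bit. A small helper to verify a candidate WAV before blaming the model, using only Python's stdlib wave module:

```shell
# Print "<rate> <channels> <bits>" for a WAV file; a good DeepSpeech input
# should report: 16000 1 16
wav_info() {
  python3 - "$1" <<'PYEOF'
import sys, wave
with wave.open(sys.argv[1]) as w:
    print(w.getframerate(), w.getnchannels(), w.getsampwidth() * 8)
PYEOF
}
# Usage: wav_info /home/sekar/DeepSpeech/Keras-Trigger-Word/file.wav
```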

Yes, there was only Python 3.5 on ARMbian, so that's why we do not do any other build.

Did you re-export the 0.3.0 model? There are changes that make the model output garbage with newer binaries if you don't.
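A sketch of what re-exporting could look like, run from a DeepSpeech checkout. The flag names here are my assumptions about that era's DeepSpeech.py, so check python3 DeepSpeech.py --help for the real ones; all paths are placeholders:

```shell
# Re-export a 0.3.0 checkpoint with current code so the exported graph
# matches newer binaries. Flag names and paths are assumptions; guarded so
# this degrades to a message outside a DeepSpeech checkout.
if [ -f DeepSpeech.py ]; then
  python3 DeepSpeech.py \
    --checkpoint_dir /path/to/deepspeech-0.3.0-checkpoint \
    --export_dir /path/to/exported-model \
    --notrain --notest \
    || export_status="export failed; verify the flags with --help"
else
  export_status="skipped (not inside a DeepSpeech checkout)"
fi
echo "${export_status:-exported}"
```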

Hi, I am using the balbes150 build (version 5.67, 2018-11-17), which comes with Python 3.6.6. So I'm looking for some option to get it.

We only support ARMbian / Debian Stretch, so Python 3.5. Please follow the documentation if you need to rebuild in your case.

Thanks. Where can I find the steps to follow for cross-compiling it? Or do I need to compile natively on my S912 box?

I documented cross-compilation earlier in the thread. Mostly the same steps, just some extra --config flags to pass.

So @sekarpdkt, can you document your status here? Have you been able to properly set up and cross-compile libdeepspeech.so?