Building for ReSpeaker Core v2

Hey folks,

I am trying to build the DeepSpeech native client for the ReSpeaker Core v2, which is based on an ARM Cortex-A7.
Following the build-from-scratch guide, can you tell me which flags I am supposed to use, or whether it is possible at all?

Greetings

You should have a look at tensorflow/tools/bazel.rc at r1.11 · mozilla/tensorflow · GitHub

This is where we define the build flags for cross-compiling. So, following native_client/README.md, you should be able to add your own flags to the bazel build command line. Either augment tools/bazel.rc on your side, or just reuse the whole set of flags we define there (adapted to your arch and OS).
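For a Cortex-A7 target, such a stanza could look roughly like the sketch below. This is illustrative only: the `respeaker` config name is made up, the toolchain label depends on how you register your cross-compiler, and the `-march`/`-mfpu`/`-mfloat-abi` values are the standard GCC options for a Cortex-A7 core (ARMv7-A with NEON-VFPv4 and hard-float ABI).

```
# Illustrative tools/bazel.rc stanza -- adapt the toolchain label
# and config name ("respeaker") to your own setup.
build:respeaker --crosstool_top=@local_config_arm_compiler//:toolchain
build:respeaker --cpu=armeabi
# Cortex-A7: ARMv7-A core with NEON-VFPv4 and hard-float ABI
build:respeaker --copt=-march=armv7-a
build:respeaker --copt=-mfpu=neon-vfpv4
build:respeaker --copt=-mfloat-abi=hard
```

You would then build with something like `bazel build --config=respeaker //native_client:libdeepspeech.so`, taking the exact target and any extra flags from native_client/README.md.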

Hello,

@nick.duitz I know this is a little old… were you able to get things running on the ReSpeaker?

@Jason_Peterson Yeah, that is indeed a little old.
I did, but it took about a day to compile. And because of missing CPU instructions, STT performance was really bad, so I ended up using MainRo’s deepspeech-server on a more powerful setup: https://github.com/MainRo/deepspeech-server. The ReSpeaker was only used to send the voice files.

And for voice activation I used Mycroft Precise: https://github.com/MycroftAI/mycroft-precise/wiki/Training-your-own-wake-word
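The workflow from that wiki page boils down to collecting samples, training, and then testing the model live. Roughly (command names are from the mycroft-precise tooling; the model and folder names here are illustrative):

```
# Record wake-word samples from the microphone
precise-collect
# Train a model for 60 epochs on the collected data
precise-train -e 60 hey-respeaker.net hey-respeaker/
# Test the trained model live against the microphone
precise-listen hey-respeaker.net
```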

I can’t find my old project files, though.

Not if you use cross-compilation as documented.

Do you mean bad as in super slow? On a Cortex-A7 I’m not surprised, and with our current model complexity I don’t see an easy way around it. Though your first post was from September 2018 and the model has evolved since; maybe you can get something better now?

According to this, maybe using the TFLite model is worth a try?
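If I recall correctly, the native_client build selects the TFLite runtime via a bazel define (double-check the README of the DeepSpeech version you are on, and note that client flags like `--alphabet` come and go between releases). A rough sketch, reusing the hypothetical `respeaker` config from above:

```
# Build the TFLite flavor of libdeepspeech (verify the define against
# the native_client/README.md for your DeepSpeech version)
bazel build --config=respeaker --define=runtime=tflite //native_client:libdeepspeech.so
# Run inference with the quantized TFLite graph
# (older releases also require --alphabet / language-model flags)
./deepspeech --model output_graph.tflite --audio recording.wav
```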


Another way to say what I said 🙂