I managed to get this working the other day on the 0.5.1 DeepSpeech tag, with the corresponding Mozilla fork of TensorFlow 1.13, using a workaround, and ran it on a Raspberry Pi 4. I’m not within reach of my Pi 3 at the moment, although I would expect it to work there too. It was markedly faster with TensorFlow Lite than with the .pb and .pbmm models. I did it on the 0.5.1 tag mainly so I could use the prebuilt v0.5.1 .tflite model (and I’m looking forward to 0.6, given Google's On-Device Speech Recognizer and https://github.com/mozilla/DeepSpeech/pull/2307).
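(For what “markedly faster” means in practice: the comparison amounts to running the stock native client against the same WAV with each model format, along these lines. Paths are illustrative and the flags are as I remember them for 0.5.1, so double-check against the client’s --help.)

./deepspeech --model output_graph.pbmm --alphabet alphabet.txt --lm lm.binary --trie trie --audio test.wav
./deepspeech --model output_graph.tflite --alphabet alphabet.txt --lm lm.binary --trie trie --audio test.wav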
See below for the workaround.
It was late then, and now I’m on a flaky connection and without sufficient power, so I haven’t gone back in to verify whether the .cc files even need these edits (they just seemed the most likely culprit from what I could piece together originally). In any case, the BUILD file definitely needed to be touched up for this workaround; it’s the stanza for the cc_library with name = “tensor_utils” in the BUILD diff below.
I did this from a pristine Ubuntu 18.04 Docker container, not the GPU-accelerated Dockerfile bundled with DeepSpeech (although I imagine that would work if you have an Nvidia GPU handy). By the way, here’s the thing in action on a Pi 4: https://www.icloud.com/sharedalbum/#B0B5ON9t3uAsJR . Like I say, it was late. Binding this to Python, keeping the model loaded and ready for execution (instead of doing a full run top to bottom), and taking out the artificial delays in the dialogue would make it run a bit faster.
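(There’s nothing special about the container, if you want to recreate it; roughly the following, with the package list being a from-memory starting point rather than gospel, plus a Bazel release compatible with TF 1.13:)

docker run -it ubuntu:18.04 /bin/bash
# inside the container: toolchain and build prerequisites
apt-get update
apt-get install -y git curl unzip zip python3 python3-pip python3-venv build-essential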
Anyway, here’s the diff:
(venv) root@d1d4b2c0eb07:/tensorflow# git diff
diff --git a/tensorflow/lite/kernels/internal/BUILD b/tensorflow/lite/kernels/internal/BUILD
index 4be3226938..7e2f66cc58 100644
--- a/tensorflow/lite/kernels/internal/BUILD
+++ b/tensorflow/lite/kernels/internal/BUILD
@@ -535,7 +535,7 @@ cc_library(
":neon_tensor_utils",
],
"//conditions:default": [
- ":portable_tensor_utils",
+ ":neon_tensor_utils",
],
}),
)
diff --git a/tensorflow/lite/kernels/internal/optimized/tensor_utils_impl.h b/tensorflow/lite/kernels/internal/optimized/tensor_utils_impl.h
index 8f52ef131d..780ae1da6c 100644
--- a/tensorflow/lite/kernels/internal/optimized/tensor_utils_impl.h
+++ b/tensorflow/lite/kernels/internal/optimized/tensor_utils_impl.h
@@ -24,9 +24,9 @@ limitations under the License.
#endif
#ifndef USE_NEON
-#if defined(__ARM_NEON__) || defined(__ARM_NEON)
+//#if defined(__ARM_NEON__) || defined(__ARM_NEON)
#define USE_NEON
-#endif // defined(__ARM_NEON__) || defined(__ARM_NEON)
+//#endif // defined(__ARM_NEON__) || defined(__ARM_NEON)
#endif // USE_NEON
namespace tflite {
diff --git a/tensorflow/lite/kernels/internal/tensor_utils.cc b/tensorflow/lite/kernels/internal/tensor_utils.cc
index 701e5a66aa..21f2723c3b 100644
--- a/tensorflow/lite/kernels/internal/tensor_utils.cc
+++ b/tensorflow/lite/kernels/internal/tensor_utils.cc
@@ -16,9 +16,9 @@ limitations under the License.
#include "tensorflow/lite/kernels/internal/common.h"
#ifndef USE_NEON
-#if defined(__ARM_NEON__) || defined(__ARM_NEON)
+//#if defined(__ARM_NEON__) || defined(__ARM_NEON)
#define USE_NEON
-#endif // defined(__ARM_NEON__) || defined(__ARM_NEON)
+//#endif // defined(__ARM_NEON__) || defined(__ARM_NEON)
#endif // USE_NEON
#ifdef USE_NEON
diff --git a/tensorflow/lite/tools/make/Makefile b/tensorflow/lite/tools/make/Makefile
index 994f660dba..805b699d23 100644
--- a/tensorflow/lite/tools/make/Makefile
+++ b/tensorflow/lite/tools/make/Makefile
@@ -1,3 +1,5 @@
+#!/bin/bash
+
# Find where we're running from, so we can store generated files here.
ifeq ($(origin MAKEFILE_DIR), undefined)
MAKEFILE_DIR := $(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
You should be able to bazel build TF and make the libraries.
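(For the directory layout that bazel invocation assumes, it’s the standard arrangement from the native client README, roughly as below; I believe r1.13 is the matching branch on the Mozilla fork, but the tag/branch names are worth double-checking:)

git clone https://github.com/mozilla/tensorflow.git
git clone https://github.com/mozilla/DeepSpeech.git
cd DeepSpeech && git checkout v0.5.1 && cd ../tensorflow
git checkout r1.13
ln -s ../DeepSpeech/native_client ./   # so //native_client resolves in the bazel build
./configure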
bazel build --config=monolithic --config=rpi3 --config=rpi3_opt --define=runtime=tflite --config=noaws --config=nogcp --config=nohdfs --config=nokafka --config=noignite -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie
In the DeepSpeech repo you may hit some errors and need to update the TFDIR and RASPBIAN paths in native_client/definitions.mk, but you should be able to make the binaries.
make TARGET=rpi3 deepspeech
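From there it’s just a case of copying the artifacts over to the Pi and running them; the hostname and paths below are made up:

scp bazel-bin/native_client/libdeepspeech.so native_client/deepspeech pi@raspberrypi.local:~/ds/
ssh pi@raspberrypi.local
cd ~/ds
LD_LIBRARY_PATH=. ./deepspeech --model output_graph.tflite --alphabet alphabet.txt --audio test.wav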