EdgeTPU board support?

Is it possible to run DeepSpeech on an EdgeTPU board? I’ve been trying to compile the tflite model using edgetpu_compiler but encountered a couple of errors.

I followed the instructions here to do a full integer quantization, but it gives the following error:

RuntimeError: Quantization not yet supported for op: CUSTOM

I suppose this is due to the AudioSpectrogram and Mfcc ops.
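For context, the conversion followed the standard full-integer post-training quantization recipe, roughly like this. This is a sketch, not the exact script: the saved-model path and the representative_dataset generator (including its input shape) are placeholders, not details from the post.

```python
import tensorflow as tf

def representative_dataset():
    # Hypothetical calibration generator: yields a few samples
    # matching the model's input shape and dtype (shape here is
    # a placeholder, not DeepSpeech's actual input).
    for _ in range(100):
        yield [tf.random.normal([1, 16, 19, 26])]

# Path is an assumption for illustration.
converter = tf.lite.TFLiteConverter.from_saved_model("deepspeech_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restricting to int8 builtins is what triggers the CUSTOM-op error,
# since AudioSpectrogram and Mfcc are custom ops without int8 kernels.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
```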

The error is gone if I remove:

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

But then the output model is not recognized as quantized by edgetpu_compiler; it complains Model not quantized when I try to compile it. That makes sense, since without that line the model is no longer fully integer quantized.

Thank you!

You should be able to comment out the AudioSpectrogram and Mfcc ops from the inference graph and see if that works. I think others have tried and run into further problems, but I don’t know for sure. If that works, please share the results! You could then adapt the native client to work around the lack of the feature computation subgraph.
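For anyone attempting this, the feature computation those ops perform would have to be reproduced on the host. As a rough illustration, here is a minimal NumPy sketch of what the AudioSpectrogram op computes (Hann-windowed framing followed by an FFT magnitude); the window and stride values are illustrative, not DeepSpeech's actual parameters.

```python
import numpy as np

def audio_spectrogram(samples, window_size=512, stride=256):
    """Magnitude spectrogram over framed windows, roughly what the
    TF AudioSpectrogram op computes (Hann window + FFT)."""
    window = np.hanning(window_size)
    n_frames = 1 + (len(samples) - window_size) // stride
    frames = np.stack([
        samples[i * stride : i * stride + window_size] * window
        for i in range(n_frames)
    ])
    # rfft yields window_size // 2 + 1 frequency bins per frame
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: one second of a 440 Hz tone at 16 kHz
t = np.arange(16000) / 16000.0
spec = audio_spectrogram(np.sin(2 * np.pi * 440.0 * t))
print(spec.shape)  # (61, 257)
```

An Mfcc stage would then map these spectrogram frames through a mel filterbank, a log, and a DCT before feeding the stripped-down model.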

I’ve already tried this extensively, and there was no way to get close to something working because of compiler-side and EdgeTPU-side limitations. Even after stripping a lot of components from the model, it still did not work.
