Error with sample model on Raspbian Jessie


(Foti Dim) #1

When trying to use the pre-trained model on a Raspberry Pi 3 running Raspbian Jessie, I get this error. I suspect I am running out of memory, as the error takes some time to appear while memory consumption builds up.

I used the binary version of TensorFlow, following the instructions here. The only change I made was:

python util/taskcluster.py --arch arm --target /path/to/destination/folder

where I used

--arch arm

to download binaries for the right architecture.
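For reference, the full fetch step can be sketched as below (this assumes a DeepSpeech source checkout for `util/taskcluster.py`; the destination directory is a placeholder):

```shell
# Sketch of the fetch step. TARGET is a placeholder path; util/taskcluster.py
# comes from a DeepSpeech source checkout, so the call is guarded.
TARGET=./native_client
mkdir -p "$TARGET"
if [ -f util/taskcluster.py ]; then
  python util/taskcluster.py --arch arm --target "$TARGET"
fi
# The native client files (deepspeech binary, libdeepspeech.so, ...) end up in:
ls "$TARGET"
```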


(Lissyx) #2

Thanks!

Hm, two things:

  • try without the language model (remove the lm.binary and trie arguments)
  • I don’t think it is a memory issue; the missing OpKernel registration is more likely the root cause
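Concretely, the two invocations would look like this (file names are placeholders; the argument order matches the native client usage shown later in this thread):

```shell
# Placeholder file names; the binary is only invoked if present.
MODEL=output_graph.pb AUDIO=audio.wav ALPHABET=alphabet.txt
if [ -x ./deepspeech ]; then
  # With the language model:
  ./deepspeech "$MODEL" "$AUDIO" "$ALPHABET" lm.binary trie
  # Without it: drop lm.binary and trie to rule the LM out.
  ./deepspeech "$MODEL" "$AUDIO" "$ALPHABET"
fi
```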

(Foti Dim) #3

@lissyx I get a different error without those arguments, which seems to be OpKernel-related. Any suggestions?


(Lissyx) #4

Well, it looks like this broke when we switched from being based on TensorFlow ~r1.3 to r1.4. I will have to take some time to see why it is broken, but I suspect it’s macro-related, and this is where something is going wrong: https://github.com/mozilla/tensorflow/blob/08894f64fc67b7a8031fc68cb838a27009c3e6e6/tensorflow/core/platform/platform.h#L46-L58

See the comment I made about DT_INT64 :slight_smile:


(Lissyx) #5

It’s not clear from your latest reply whether you tested with our binaries or with yours. If it was yours, please re-test with ours; those should still be r1.3-based: https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.master.234ff38f743c5591c3ea78404c8f17151f8c559e.arm/artifacts/public/native_client.tar.xz


(Lissyx) #6

@fotiDim Right, so it’s likely I made an error when handling the merge of TensorFlow r1.4. They use a RASPBERRY_PI define to do the same thing we did with __ARM_RPI__. Our build does properly define __ARM_RPI__ but not RASPBERRY_PI, and the check is against the absence of those defines. So our builds end up defining IS_MOBILE_PLATFORM when they should not. Once the tasks in https://tools.taskcluster.net/groups/XmLsLKXdQ3OTfzfdDQPM3Q get completed, we can retrigger a build against those artifacts and it should solve the issue :). I’m on a flight right now between Paris and Boston, and given the build time of TensorFlow (3h at least), it’s likely I won’t be able to handle all of that before I reach Austin.


(Lissyx) #7

Okay @fotiDim, I have been able to send a PR on DeepSpeech that uses those new artifacts. Once https://tools.taskcluster.net/groups/FbxT3V_YTua3qSRNEwqauQ/tasks/Wnpx0NPjS3G20t76jbl15Q/details completes, you should be able to download native_client.tar.xz from the “Run artifacts” tab, and it should work better.


(Lissyx) #8

@fotiDim here is the proper link: https://queue.taskcluster.net/v1/task/Wnpx0NPjS3G20t76jbl15Q/runs/0/artifacts/public/native_client.tar.xz


(Lissyx) #9

So, it looks like I will be able to merge that:

Reproducing your error with the current bogus artifact:

pi@raspberrypi:~/nc_r1.3 $ ./deepspeech ~/tmp/output_graph.pb ~/wav/LDC93S1.wav ~/tmp/alphabet.txt 
Invalid argument: No OpKernel was registered to support Op 'SparseToDense' with these attrs.  Registered devices: [CPU], Registered kernels:
  device='CPU'; T in [DT_STRING]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_STRING]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_BOOL]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_BOOL]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_FLOAT]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_FLOAT]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_INT32]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_INT32]; Tindices in [DT_INT32]

	 [[Node: SparseToDense = SparseToDense[T=DT_INT64, Tindices=DT_INT64, validate_indices=true](CTCBeamSearchDecoder, CTCBeamSearchDecoder:2, CTCBeamSearchDecoder:1, SparseToDense/default_value)]]
Segmentation fault

Using the build I made:

pi@raspberrypi:~/nc_r1.4 $ ./deepspeech ~/tmp/output_graph.pb ~/wav/LDC93S1.wav ~/tmp/alphabet.txt 
she had your dark suit in greasy wash water all year

(Foti Dim) #10

@lissyx with the latest binaries you uploaded I am getting this error:


(Lissyx) #11

Would you mind trying with an LDC93S1 sample model? https://queue.taskcluster.net/v1/task/HLIgJrzcSgGlMR1Xn3C8fw/runs/0/artifacts/public/output_graph.pb


(Foti Dim) #12

Boy you are fast! New error this time…


(Lissyx) #13

You passed no model file, only the directory :slight_smile:


(Foti Dim) #14

Ooops, my bad. That’s what happens when it’s late :slight_smile:
So no errors this time! Recognition was not accurate though.


(Lissyx) #15

Awesome, it decodes :). Bad results might have several causes, but it’s another topic :slight_smile:


(Foti Dim) #16

@lissyx shall I open another discussion for the accuracy, or should I wait for the next release first? Note that I only face the accuracy problem on the Raspberry Pi.


(Lissyx) #17

Yes, please open another thread so we do not pollute this one :slight_smile:


(Foti Dim) #18

@lissyx I am giving it another go. I made a clean install and set everything up from scratch. I am getting the same error, and your link for the updated binaries does not work anymore. Can you re-upload them, or even better, put them on GitHub?


(Lissyx) #19

This should be good: https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.master.arm/artifacts/public/native_client.tar.xz
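Since the earlier per-commit links expire, a small helper that builds the stable index URL for a given architecture can save re-asking (the `arm` URL matches the one above; other architecture values are assumptions about the index layout):

```shell
# Build the stable taskcluster index URL for a given architecture string.
tc_native_client_url() {
  echo "https://index.taskcluster.net/v1/task/project.deepspeech.deepspeech.native_client.master.${1}/artifacts/public/native_client.tar.xz"
}
# Usage (download and extract):
# curl -LO "$(tc_native_client_url arm)" && tar xJf native_client.tar.xz
tc_native_client_url arm
```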


(Foti Dim) #20

I gave it another go with your files. I tried another power supply and had the same problems. There must be a fundamental difference between my setup and yours. Would it be possible for you to export an image of your SD card so that I can burn it onto mine?