AWS Neuron / Inferentia / inf1.xlarge support?

Hey all,

I’m doing some experiments with getting DeepSpeech to run on these AWS Inferentia instances.

Thought I might just ask if anyone else has had experience getting this going. @sayantangangs.91, I believe you mentioned you were going to experiment with this (in another thread from March).

It doesn’t ‘just work’ out of the box, due (I assume) to the need to switch to the AWS Neuron library instead of CUDA. I’m not sure how all this will look to TensorFlow and/or the compiled DeepSpeech library.
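For reference, here’s a minimal sketch of what I’d expect the compile step to look like, based on the Neuron docs for TF 1.x. It assumes the DeepSpeech acoustic model has been exported as a SavedModel first (the stock export produces a frozen .pb, so that’s an assumption on my part), and the directory names are made up:

```python
# Sketch only: compiling a TensorFlow SavedModel for Inferentia with
# tensorflow-neuron (TF 1.15). I have NOT verified this works with the
# DeepSpeech graph -- the SavedModel export step is assumed.
import tensorflow.neuron as tfn

tfn.saved_model.compile(
    "deepspeech_saved_model",         # hypothetical input SavedModel dir
    "deepspeech_saved_model_neuron",  # output dir with Neuron-compiled subgraphs
)
```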

If nobody else has thoughts, I’ll report back to this thread with anything I find out, anyway.

Thanks


There’s not much detail easily available on that, but I have the feeling you need to feed the model in TensorFlow format to their compiler, which is obviously not going to work well.

If they have a TFLite delegate implementation, it might work (see the sketch below), but either way it would require quite some work on TensorFlow itself.
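If such a delegate existed, wiring it up would presumably follow the standard external-delegate pattern in the TFLite Python API. `tf.lite.experimental.load_delegate` is a real TFLite call, but the delegate library name here is entirely hypothetical; no Neuron TFLite delegate is known to exist:

```python
# Sketch only: loading a TFLite model through an external delegate.
# "libneuron_delegate.so" is hypothetical -- AWS does not ship such a
# delegate as far as I know.
import tensorflow as tf

delegate = tf.lite.experimental.load_delegate("libneuron_delegate.so")
interpreter = tf.lite.Interpreter(
    model_path="output_graph.tflite",  # path to a DeepSpeech TFLite export
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()
```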
