Loading model from file /Users/tringuyen/Documents/DeepSpeech/myresult2/export/output_graph.pb
TensorFlow: v1.12.0-10-ge232881c5a
DeepSpeech: v0.4.1-0-g0e40db6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-04-11 17:15:48.773540: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Invalid argument: No OpKernel was registered to support Op 'StridedSlice' with these attrs. Registered devices: [CPU], Registered kernels:
<no registered kernels>
[[{{node lstm_fused_cell/strided_slice}} = StridedSlice[Index=DT_INT32, T=DT_FLOAT, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](lstm_fused_cell/BlockLSTM:1, lstm_fused_cell/strided_slice/stack, lstm_fused_cell/strided_slice/stack_1, lstm_fused_cell/strided_slice/stack_2)]]
Traceback (most recent call last):
  File "/usr/local/bin/deepspeech", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/site-packages/deepspeech/client.py", line 80, in main
    ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
  File "/usr/local/lib/python3.7/site-packages/deepspeech/__init__.py", line 14, in __init__
    raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 3
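The last two traceback frames show the wrapper pattern at work: the native CreateModel call returns a status code, and the Python binding raises on any nonzero value. A minimal sketch of that pattern (the helper name is hypothetical, not the actual deepspeech source):

```python
def check_create_model_status(status: int) -> None:
    # Hypothetical helper mirroring the raise seen in deepspeech/__init__.py:
    # a nonzero status from the native CreateModel call becomes a RuntimeError.
    if status != 0:
        raise RuntimeError("CreateModel failed with error code {}".format(status))

# Status 3 in the log above corresponds to the graph-loading failure
# (the unregistered StridedSlice kernel), not a Python-side bug.
```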
Loading model from file /Users/tringuyen/Downloads/models/output_graph.pb
TensorFlow: v1.12.0-10-ge232881c5a
DeepSpeech: v0.4.1-0-g0e40db6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-04-11 22:27:31.442884: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Loaded model in 0.42s.
Loading language model from files /Users/tringuyen/Documents/DeepSpeech/mymodels/vnlm.binary /Users/tringuyen/Documents/DeepSpeech/mymodels/vntrie
Loaded language model in 0.00239s.
Running inference.
với chồng muối ù hình u rưng ăn ỏi t phiện
Inference took 10.810s for 5.375s audio file.
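Judging from the timing line above, decoding runs noticeably slower than real time on this machine. A quick sketch computing the real-time factor from those figures:

```python
def real_time_factor(inference_s: float, audio_s: float) -> float:
    # RTF > 1 means decoding takes longer than the audio itself.
    return inference_s / audio_s

# Figures from the log above: 10.810 s of inference for a 5.375 s clip.
rtf = real_time_factor(10.810, 5.375)
print(round(rtf, 2))  # roughly 2.01, i.e. about twice as slow as real time
```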
lissyx
Then there is something wrong when you export, but I don't know what, since you have a proper git checkout and a proper TensorFlow installation. And @reuben works on Mac and has no issues either.
reuben: You mean exporting a model from a checkpoint? I've never had any problems with any version, as long as I use the same TensorFlow version in the client build and in the Python package used for exporting.
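The advice above boils down to comparing the TensorFlow version baked into the client (printed at model-load time, e.g. v1.12.0-10-ge232881c5a here) against the version used for export. A hedged sketch of that comparison (the function is illustrative, not part of DeepSpeech):

```python
def tf_versions_compatible(client: str, exporter: str) -> bool:
    # Compare major.minor only; a mismatch here is a common cause of
    # "No OpKernel was registered" errors at model-load time.
    def base(v: str) -> str:
        # "v1.12.0-10-ge232881c5a" -> "1.12.0"
        return v.lstrip("v").split("-")[0]
    return base(client).split(".")[:2] == base(exporter).split(".")[:2]

print(tf_versions_compatible("v1.12.0-10-ge232881c5a", "1.12.0"))  # True
print(tf_versions_compatible("v1.12.0", "1.13.1"))                 # False
```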