@GoBa sir, I had the same issue, but it is resolved now.
Whichever version of DeepSpeech you git clone onto your PC, the deepspeech package you pip install has to be the same version. If the two are out of sync, it throws this error. This is only my perspective.
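A quick way to check whether the two line up (just a sketch; it assumes you are inside the cloned repo and installed the GPU package):

git describe --tags        # version of the cloned source tree
pip3 show deepspeech-gpu   # version of the installed Python package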
Here is what I did:
pip3 install deepspeech-gpu
deepspeech --model models/output_graph.pb --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio 6.wav
virtualenv -p python3 $HOME/tmp/DeepSpeech_v0.2.0/
source /home/dell/tmp/DeepSpeech_v0.2.0/bin/activate
cd git-lfs-linux-amd64-v2.5.2/
sudo ./install.sh
git clone https://github.com/mozilla/DeepSpeech
DeepSpeech-0.2.1-alpha.1
cd DeepSpeech
pip3 install -r requirements.txt
python3 util/taskcluster.py --branch "v0.2.1-alpha.1" --target new_native_client/
change requirements.txt → tensorflow-gpu==1.11.0, so the pinned TensorFlow matches the client (see the sketch below)
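For example, installing the pinned GPU build directly in the same virtualenv should give the same result (just a sketch; it assumes nothing else in the environment needs a different TensorFlow):

pip3 install tensorflow-gpu==1.11.0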
build the fine-tuning checkpoint:
python3 DeepSpeech.py --n_hidden 2048 --initialize_from_frozen_model …/models/output_graph.pb --checkpoint_dir fine_tuning_checkpoints --epoch 3 --train_files audio_folder/audio_file_train.csv --dev_files audio_folder/audio_file_dev.csv --test_files audio_folder/audio_file_test.csv --learning_rate 0.0001 --decoder_library_path new_native_client/libctc_decoder_with_kenlm.so --alphabet_config_path data/alphabet.txt --lm_binary_path data/lm/lm.binary --lm_trie_path data/lm/trie
export the .pb model:
python3 DeepSpeech.py --n_hidden 2048 --initialize_from_frozen_model …/models/output_graph.pb --checkpoint_dir fine_tuning_checkpoints --epoch 3 --train_files audio_folder/audio_file_train.csv --dev_files audio_folder/audio_file_dev.csv --test_files audio_folder/audio_file_test.csv --learning_rate 0.0001 --decoder_library_path new_native_client/libctc_decoder_with_kenlm.so --alphabet_config_path data/alphabet.txt --lm_binary_path data/lm/lm.binary --lm_trie_path data/lm/trie --export_dir funetune_export/
deepspeech --model funetune_export/output_graph.pb --alphabet …/models/alphabet.txt --lm …/models/lm.binary --trie …/models/trie --audio …/6.wav
(DeepSpeech_v0.2.0) dell@dell-OptiPlex-7050:~/Documents/DeepSpeech$ deepspeech --model funetune_export/output_graph.pb --alphabet …/models/alphabet.txt --lm …/models/lm.binary --trie …/models/trie --audio …/6.wav
Loading model from file funetune_export/output_graph.pb
TensorFlow: v1.6.0-18-g5021473
DeepSpeech: v0.2.0-0-g009f9b6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2018-10-01 23:42:40.476190: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-10-01 23:42:40.558563: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-10-01 23:42:40.558907: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.43
pciBusID: 0000:01:00.0
totalMemory: 3.94GiB freeMemory: 3.57GiB
2018-10-01 23:42:40.558918: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
2018-10-01 23:42:40.684696: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3311 MB memory) → physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Invalid argument: No OpKernel was registered to support Op 'Pack' with these attrs. Registered devices: [CPU,GPU], Registered kernels:
> [[Node: lstm_fused_cell/stack_1 = Pack[N=2, T=DT_INT32, axis=1](input_lengths, lstm_fused_cell/range_1)]]
Traceback (most recent call last):
File "/home/dell/tmp/DeepSpeech_v0.2.0/bin/deepspeech", line 11, in <module>
sys.exit(main())
File "/home/dell/tmp/DeepSpeech_v0.2.0/lib/python3.5/site-packages/deepspeech/client.py", line 81, in main
ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
File "/home/dell/tmp/DeepSpeech_v0.2.0/lib/python3.5/site-packages/deepspeech/__init__.py", line 14, in __init__
raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 3
solution: the deepspeech-gpu 0.2.0 client is built against TensorFlow v1.6.0 and cannot load a graph exported with TensorFlow 1.11, so install the client that matches the source version:
pip3 install deepspeech-gpu==0.2.1-alpha.1
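To make sure nothing of the old 0.2.0 client is left behind, one could also remove it first and then install the pinned version (just a sketch, same virtualenv assumed):

pip3 uninstall -y deepspeech-gpu
pip3 install deepspeech-gpu==0.2.1-alpha.1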
(DeepSpeech_v0.2.0) dell@dell-OptiPlex-7050:~/Documents/DeepSpeech$ deepspeech --model funetune_export/output_graph.pb --alphabet data/alphabet.txt --lm data/lm/lm.binary --trie data/lm/trie --audio …/6.wav
Loading model from file funetune_export/output_graph.pb
TensorFlow: v1.11.0-rc2-4-g77b7b17
DeepSpeech: v0.2.1-alpha.1-0-gae2cfe0
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2018-10-02 00:16:42.960099: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-10-02 00:16:43.035727: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-10-02 00:16:43.036300: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.43
pciBusID: 0000:01:00.0
totalMemory: 3.94GiB freeMemory: 3.50GiB
2018-10-02 00:16:43.036311: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0
2018-10-02 00:16:43.238818: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-10-02 00:16:43.238844: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0
2018-10-02 00:16:43.238849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N
2018-10-02 00:16:43.239010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3234 MB memory) → physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Loaded model in 0.382s.
Loading language model from files data/lm/lm.binary data/lm/trie
Loaded language model in 3.47s.
Running inference.
Inference took 2.352s for 4.362s audio file.


