yeah … solved the UTF-8 special space issue.
After I ran the script I got something like this. I searched on the internet, but I am unable to figure it out.
I STARTING Optimization
Epoch 0 | Training | Elapsed Time: 0:00:00 | Steps: 0 | Loss: 0.000000
Epoch 0 | Validation | Elapsed Time: 0:00:00 | Steps: 0 | Loss: 0.000000 | Dataset: minigir/train/train.csv
Traceback (most recent call last):
  File "DeepSpeech.py", line 931, in <module>
    absl.app.run(main)
  File "/home/metlife-vad/.local/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/metlife-vad/.local/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "DeepSpeech.py", line 915, in main
    train()
  File "DeepSpeech.py", line 642, in train
    dev_loss = dev_loss / total_steps
ZeroDivisionError: float division by zero
hey @lissyx … I am just training on three examples. Will that be enough for training? (This is just to check whether I am getting results or not, so that I can go on to train with more data.)
What are your training flags / command line?
I mean three audio files' transcripts … the CSV file has three audio transcripts.
Please, reply to what I asked.
I didn't understand!!
Well then just say it. I need your full python DeepSpeech.py [...] command line.
I am running this script:
#!/usr/bin/env bash
set -xe
if [ ! -f DeepSpeech.py ]; then
    echo "Please make sure you run this from DeepSpeech's top level directory."
    exit 1
fi
python3 -u DeepSpeech.py \
--train_files minigir/train/train.csv \
--dev_files minigir/train/train.csv \
--test_files minigir/train/train.csv \
--train_batch_size 48 \
--dev_batch_size 40 \
--test_batch_size 40 \
--n_hidden 1024 \
--epochs 64 \
--early_stop True \
--es_steps 6 \
--es_mean_th 0.1 \
--es_std_th 0.1 \
--dropout_rate 0 \
--log_level 1 \
--learning_rate 0.000025 \
--report_count 100 \
--export_dir metlife-models/ \
--checkpoint_dir metlife-models/check_point \
--alphabet_config_path metlife-models/alphabet.txt \
--lm_binary_path metlife-models/lm.binary \
--lm_trie_path metlife-models/trie \
"$@"
Ok, if you only have three audio files, please use a batch size no greater than 3.
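For context on the earlier ZeroDivisionError: with only three samples and --dev_batch_size 40, the validation epoch produces zero full batches, so total_steps stays 0 and dev_loss / total_steps divides by zero. A minimal sketch of the arithmetic (illustrative only, not DeepSpeech's actual code):

```python
def steps_per_epoch(num_samples: int, batch_size: int) -> int:
    # Number of full batches the epoch loop will actually run.
    return num_samples // batch_size

# Three samples with batch size 40: zero steps, so any
# "loss / total_steps" average raises ZeroDivisionError.
print(steps_per_epoch(3, 40))  # 0

# With batch size 3 the loop runs at least one step and the average is safe.
print(steps_per_epoch(3, 3))   # 1
```

This is why the advice above caps the batch size at the dataset size: it guarantees at least one step per epoch.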
Okay … I got it … well, I will try now.
hi @lissyx
Loading model from file metlife-models/output_graph.pb
TensorFlow: v1.13.1-10-g3e0cc53
DeepSpeech: v0.5.1-0-g4b29b78
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-11-25 15:24:22.279971: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-11-25 15:24:22.320943: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-11-25 15:24:22.321039: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-11-25 15:24:22.321083: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-11-25 15:24:22.321185: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
Specified model file version (0) is incompatible with minimum version supported by this client (1). See https://github.com/mozilla/DeepSpeech/#model-compatibility for more information
Traceback (most recent call last):
  File "/usr/local/bin/deepspeech", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/deepspeech/client.py", line 88, in main
    ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
  File "/usr/local/lib/python3.7/dist-packages/deepspeech/__init__.py", line 23, in __init__
    raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 8195
I searched the whole internet and didn't find a solution for this, and I am using the correct DeepSpeech version.
You have the error here … You need to share more details, but it looks like you exported wrongly.
sudo deepspeech --model metlife-models/output_graph.pb --alphabet metlife-models/alphabet.txt --lm metlife-models/lm.binary --trie metlife-models/trie --audio minigir/wav/tmp2.wav
What are model compatibility and error code 8195? I still didn't understand.
Ok, seriously, read the links and share the information I am asking for.
Please avoid using sudo when it’s not necessary.
I am using sudo because of some permission issue.
Your setup is likely wrong, there’s absolutely no reason you should have to do this …
Model version is documented, and the check is there to ensure you don't try to run a model that is not compatible with a binary. Since you have still not documented your export phase, I cannot help you. And since we are now close to 100 messages to help you, I'm really getting close to the end of my patience.
https://deepspeech.readthedocs.io/en/latest/Error-Codes.html
So please make an effort.
I tried to upgrade to the newest version of DeepSpeech; it is still showing the same error.
How can I downgrade or re-export it?
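On re-exporting: the log above says the exported graph reports version 0 while the 0.5.1 client requires at least 1, which typically means the training code used for the export does not match the installed deepspeech package. A hedged sketch of one way to fix it, assuming your checkpoints are still in metlife-models/check_point and the v0.5.1 tag matches your client (the paths here are taken from the script earlier in this thread; verify them against your setup):

```shell
# Sketch only: check the installed client version first.
pip3 show deepspeech          # e.g. Version: 0.5.1

# Export from a checkout of the matching training code.
cd DeepSpeech
git checkout v0.5.1
python3 -u DeepSpeech.py \
  --checkpoint_dir metlife-models/check_point \
  --alphabet_config_path metlife-models/alphabet.txt \
  --lm_binary_path metlife-models/lm.binary \
  --lm_trie_path metlife-models/trie \
  --export_dir metlife-models/
```

Running DeepSpeech.py with --export_dir and no training files re-exports output_graph.pb from the existing checkpoints, so no retraining is needed.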