Failed using my own model

Sir,

I will let you know how everything is going, following what Lyssix told me. The problem with your solution is that I don't have an NVIDIA GPU. I have an AMD one, so I can't use the GPU package. However, I appreciate your help and I thank you :slight_smile: !

This time I can’t even train.

So what I did:

  • sudo apt-get install python3.6
  • alias python='/usr/bin/python3.6'
  • . ~/.bashrc
  • sudo apt install git
  • tar xzvf git-lfs-linux-amd64-v2.5.2.tar.gz
  • ./install.sh
  • sudo ./install.sh
  • git lfs install
  • git clone https://github.com/mozilla/DeepSpeech
  • wget -O - https://github.com/mozilla/DeepSpeech/releases/download/v0.2.0/deepspeech-0.2.0-models.tar.gz | tar xvfz -
  • sudo apt install virtualenv
  • virtualenv -p python3.6 $HOME/tmp/deepspeech-venv/
  • source $HOME/tmp/deepspeech-venv/bin/activate
  • pip3 install deepspeech
  • cd DeepSpeech/
  • pip3 install -r requirements.txt
  • git checkout v0.2.0
  • pip3 install -r requirements.txt
  • python3 util/taskcluster.py --target .
  • ./bin/run-ldc93s1.sh

What I get:

  • [ ! -f DeepSpeech.py ]
  • [ ! -f data/ldc93s1/ldc93s1.csv ]
  • [ -d ]
  • python -c 'from xdg import BaseDirectory as xdg; print(xdg.save_data_path("deepspeech/ldc93s1"))'
  • checkpoint_dir=/home/xa/.local/share/deepspeech/ldc93s1
  • python -u DeepSpeech.py --train_files data/ldc93s1/ldc93s1.csv --dev_files data/ldc93s1/ldc93s1.csv --test_files data/ldc93s1/ldc93s1.csv --train_batch_size 1 --dev_batch_size 1 --test_batch_size 1 --n_hidden 494 --epoch 75 --checkpoint_dir /home/xa/Bureau/checkpoint/ldc93s1 --decoder_library_path ./libctc_decoder_with_kenlm.so --export_dir /home/xa/Bureau/exportModel
    Traceback (most recent call last):
      File "DeepSpeech.py", line 1976, in <module>
        tf.app.run(main)
      File "/home/xa/tmp/deepspeech-venv/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 126, in run
        _sys.exit(main(argv))
      File "DeepSpeech.py", line 1927, in main
        initialize_globals()
      File "DeepSpeech.py", line 336, in initialize_globals
        custom_op_module = tf.load_op_library(FLAGS.decoder_library_path)
      File "/home/xa/tmp/deepspeech-venv/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 58, in load_op_library
        lib_handle = py_tf.TF_LoadLibrary(library_filename, status)
      File "/home/xa/tmp/deepspeech-venv/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
        c_api.TF_GetCode(self.status.status))
    tensorflow.python.framework.errors_impl.NotFoundError: ./libctc_decoder_with_kenlm.so: undefined symbol: _ZN10tensorflow6StatusC1ENS_5error4CodeEN4absl11string_viewE

Check the documentation: this downloaded the latest master, which is now based on TensorFlow r1.11, while 0.2.0 is built against r1.6. That explains the undefined symbol.

So I need to do:

  • python3 util/taskcluster.py --branch v0.2.0 --target .

in order to get the right native client? I'm sorry for asking so much, but I've been stuck for a while.

Yes. Also, please try to use proper code formatting; otherwise it's painful to read and we can miss information that gets interpreted as message formatting.
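
For reference, a rough sketch of re-fetching the matching native client from the v0.2.0 checkout and sanity-checking the result (the target directory is just an example):

    # run from the DeepSpeech checkout with the v0.2.0 tag checked out
    python3 util/taskcluster.py --branch v0.2.0 --target .
    # the extracted decoder library should now match TensorFlow r1.6
    ls -l libctc_decoder_with_kenlm.so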

It seems to be working, but it takes a lot of memory to run inference. I'm going to run some tests over the next two weeks and will keep in touch with you.

In any case, I would like to thank you all for your help!

Can you be more precise?

Well, I gave the VM about 11 GB of RAM, and when I launched inference it froze and I had to shut it down. I gave it 2 more GB and it works, but that seems like a lot.

Strange, I just verified: valgrind --tool=massif reports a heap allocation of ~650 MB.
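
For anyone who wants to reproduce that measurement, a rough sketch (the binary, model, and audio file names are just examples):

    # profile the native client binary under massif
    valgrind --tool=massif ./deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --audio test.wav
    # massif writes massif.out.<pid>; ms_print summarizes the heap peak
    ms_print massif.out.* | less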

Not in my case, it went up really fast. But I will give you feedback from my future tests.

Without language model:

With language model:
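
For context, the two runs presumably differ only in the language-model arguments, roughly like this (file names are examples):

    # without language model
    deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --audio test.wav
    # with language model
    deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio test.wav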

How do I make two different virtual envs?
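
One way, as a sketch (directory names are arbitrary): create one environment per DeepSpeech version and activate whichever you need at the time.

    virtualenv -p python3.6 $HOME/tmp/deepspeech-0.2-venv
    source $HOME/tmp/deepspeech-0.2-venv/bin/activate
    pip3 install deepspeech==0.2.0
    deactivate

    virtualenv -p python3.6 $HOME/tmp/deepspeech-0.4-venv
    source $HOME/tmp/deepspeech-0.4-venv/bin/activate
    pip3 install deepspeech==0.4.1
    deactivate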

I am working on DeepSpeech 0.4.1 with tensorflow-gpu==1.12.0 and Python 3.6.
All requirements are installed. I have downloaded the pre-trained model and I am using Common Voice audio to check it.

This is what I am getting; I couldn't find a solution to this problem.
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio HF1.wav
Loading model from file models/output_graph.pbmm
TensorFlow: v1.6.0-18-g5021473
DeepSpeech: v0.2.0-0-g009f9b6
2019-01-28 18:24:10.033574: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Invalid argument: No OpKernel was registered to support Op 'Pack' with these attrs. Registered devices: [CPU], Registered kernels:

 [[Node: lstm_fused_cell/stack_1 = Pack[N=2, T=DT_INT32, axis=1](input_lengths, lstm_fused_cell/range_1)]]

Traceback (most recent call last):
  File "/home/rc/.local/bin/deepspeech", line 11, in <module>
    sys.exit(main())
  File "/home/rc/.local/lib/python2.7/site-packages/deepspeech/client.py", line 81, in main
    ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
  File "/home/rc/.local/lib/python2.7/site-packages/deepspeech/__init__.py", line 14, in __init__
    raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 3

Kindly help me. Thank you!

Looks like you are using 0.2.0 binaries; that cannot work with a 0.4.1 model.

Can you guide me on how to fix it?
Thank you!

I had been working with DeepSpeech 0.2.0, but I stopped using it because the decoded results were not separated by spaces, due to the CTC decoder problem in 0.2.0.

I downloaded DeepSpeech 0.4.1 into a different directory. Please help me change the binaries that are causing the problem.

Thank you!!!

I'm not sure I can fix your system for you; there's nothing magic here: set up a new virtualenv if you are using the Python binaries, and pip install. It's all documented.
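
Concretely, a minimal sketch of that, assuming a 0.4.1 model (paths are examples):

    virtualenv -p python3.6 $HOME/tmp/deepspeech-0.4.1-venv
    source $HOME/tmp/deepspeech-0.4.1-venv/bin/activate
    pip3 install deepspeech==0.4.1
    pip show deepspeech   # should report Version: 0.4.1
    deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --audio HF1.wav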

It's resolved: my audio file was not in the required directory. Everything else is working perfectly.
Thank you!

Which TensorFlow version should we use? I cloned DeepSpeech from this link


and cloned Mozilla's TensorFlow v1.14 from here:
git clone https://github.com/mozilla/tensorflow.git
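
As a rule of thumb, the matching TensorFlow branch is the one named in the native_client build docs of the exact DeepSpeech commit you checked out. Assuming that is r1.14 for your checkout, a sketch:

    git clone https://github.com/mozilla/tensorflow.git
    cd tensorflow
    git checkout r1.14   # use whichever branch your DeepSpeech checkout's docs call for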