Error while fine-tuning

Dear all,

I'm fine-tuning DeepSpeech 0.9.0 on an English dataset that I downloaded from Common Voice. I've installed CUDA 11.1 and cuDNN, and my GPU is an NVIDIA GTX 1080 Ti.

When I run the following command, I get the error shown below.

command:

    python3 DeepSpeech.py --n_hidden 2048 --checkpoint_dir ../DataBase/deepspeech-0.9.0-checkpoint --epochs 5 --train_files ../DataBase/dd/en/clips/train.csv --dev_files ../DataBase/dd/en/clips/dev.csv --test_files ../DataBase/dd/en/clips/test.csv --learning_rate 0.0001 --export_dir ../DataBase/dd/export_fine --train_cudnn

and the error is:

```
I Loading best validating checkpoint from ../DataBase/deepspeech-0.9.0-checkpoint/best_dev-1466475

I Loading variable from checkpoint: beta1_power
Traceback (most recent call last):
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1348, in _run_fn
    self._extend_graph()
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1388, in _extend_graph
    tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'CudnnRNNCanonicalToParams' used by {{node tower_0/cudnn_lstm/cudnn_lstm/CudnnRNNCanonicalToParams}} with these attrs: [seed=4568, dropout=0, num_params=8, T=DT_FLOAT, input_mode="linear_input", direction="unidirectional", rnn_mode="lstm", seed2=247]
Registered devices: [CPU, XLA_CPU, XLA_GPU]
Registered kernels:
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_HALF]

   [[tower_0/cudnn_lstm/cudnn_lstm/CudnnRNNCanonicalToParams]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "DeepSpeech.py", line 12, in <module>
    ds_train.run_script()
  File "/home/medrik/AVA/DeepSpeech/training/deepspeech_training/train.py", line 976, in run_script
    absl.app.run(main)
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/home/medrik/AVA/DeepSpeech/training/deepspeech_training/train.py", line 948, in main
    train()
  File "/home/medrik/AVA/DeepSpeech/training/deepspeech_training/train.py", line 527, in train
    load_or_init_graph_for_training(session)
  File "/home/medrik/AVA/DeepSpeech/training/deepspeech_training/util/checkpoints.py", line 137, in load_or_init_graph_for_training
    _load_or_init_impl(session, methods, allow_drop_layers=True)
  File "/home/medrik/AVA/DeepSpeech/training/deepspeech_training/util/checkpoints.py", line 98, in _load_or_init_impl
    return _load_checkpoint(session, ckpt_path, allow_drop_layers, allow_lr_init=allow_lr_init)
  File "/home/medrik/AVA/DeepSpeech/training/deepspeech_training/util/checkpoints.py", line 71, in _load_checkpoint
    v.load(ckpt.get_tensor(v.op.name), session=session)
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/tensorflow_core/python/ops/variables.py", line 1033, in load
    session.run(self.initializer, {self.initializer.inputs[1]: value})
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'CudnnRNNCanonicalToParams' used by node tower_0/cudnn_lstm/cudnn_lstm/CudnnRNNCanonicalToParams (defined at /home/medrik/AVA/tmp/abolfazl/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) with these attrs: [seed=4568, dropout=0, num_params=8, T=DT_FLOAT, input_mode="linear_input", direction="unidirectional", rnn_mode="lstm", seed2=247]
Registered devices: [CPU, XLA_CPU, XLA_GPU]
Registered kernels:
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_HALF]

   [[tower_0/cudnn_lstm/cudnn_lstm/CudnnRNNCanonicalToParams]]
```

Please read the guidelines before posting; they ask you to read the documentation. Hint: this is a CUDA dependency issue, which is very common and has come up in many posts. The guidelines also ask you to search before posting; the first search result for your error would have answered it.

Yes, TensorFlow 1.15.4 requires CUDA 10.0 and cuDNN v7.6. The error is 100% related to that.
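
For what it's worth, a quick way to confirm the mismatch is to ask TensorFlow itself, inside the same virtualenv you train in, whether it can register a GPU device. This is just a diagnostic sketch (nothing DeepSpeech-specific is assumed beyond TF 1.15.4 being installed):

```python
# Minimal sanity check, run inside the training virtualenv.
# With only CUDA 11.1 on the system, TF 1.15.4 cannot load the CUDA 10.0 /
# cuDNN 7.6 libraries it was built against, so no GPU device is registered
# and CuDNN-RNN kernels such as CudnnRNNCanonicalToParams are unavailable,
# which is exactly the error in the traceback above.
import tensorflow as tf

print(tf.__version__)                            # DeepSpeech 0.9.x expects 1.15.4
print(tf.test.is_built_with_cuda())              # True for the tensorflow-gpu wheel
print(tf.test.is_gpu_available(cuda_only=True))  # False until CUDA 10.0 + cuDNN 7.6 are on the library path
```

Once the last check prints True (i.e. after CUDA 10.0 and cuDNN 7.6 are installed and visible to the process), the `--train_cudnn` flag should work again.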