Error when fine-tuning DeepSpeech with a custom dataset

I’m following this tutorial to fine-tune DeepSpeech v0.7.4 by training on a small custom dataset. The only difference between the tutorial and what I’m doing is that I’m using a different dataset.
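In case it helps, the training cell invokes DeepSpeech.py roughly like this (the flag names are the standard DeepSpeech training flags from the tutorial; the paths and hyperparameter values here are placeholders rather than my exact cell):

!python3 DeepSpeech.py \
  --train_files /content/my_dataset/train.csv \
  --dev_files /content/my_dataset/dev.csv \
  --test_files /content/my_dataset/test.csv \
  --checkpoint_dir /content/deepspeech-0.7.4-checkpoint \
  --epochs 3 \
  --train_batch_size 16 \
  --train_cudnn true

However, when I run that training cell, it throws this error: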

Traceback (most recent call last):
  File "DeepSpeech.py", line 12, in <module>
    ds_train.run_script()
  File "/content/DeepSpeech/training/deepspeech_training/train.py", line 955, in run_script
    absl.app.run(main)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/content/DeepSpeech/training/deepspeech_training/train.py", line 927, in main
    train()
  File "/content/DeepSpeech/training/deepspeech_training/train.py", line 473, in train
    gradients, loss, non_finite_files = get_tower_results(iterator, optimizer, dropout_rates)
  File "/content/DeepSpeech/training/deepspeech_training/train.py", line 312, in get_tower_results
    avg_loss, non_finite_files = calculate_mean_edit_distance_and_loss(iterator, dropout_rates, reuse=i > 0)
  File "/content/DeepSpeech/training/deepspeech_training/train.py", line 239, in calculate_mean_edit_distance_and_loss
    logits, _ = create_model(batch_x, batch_seq_len, dropout, reuse=reuse, rnn_impl=rnn_impl)
  File "/content/DeepSpeech/training/deepspeech_training/train.py", line 190, in create_model
    output, output_state = rnn_impl(layer_3, seq_length, previous_state, reuse)
  File "/content/DeepSpeech/training/deepspeech_training/train.py", line 128, in rnn_impl_cudnn_rnn
    sequence_lengths=seq_length)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/layers/base.py", line 548, in __call__
    outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 854, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/autograph/impl/api.py", line 237, in wrapper
    raise e.ag_error_metadata.to_exception(e)
NotImplementedError: in converted code:

    /usr/local/lib/python3.7/dist-packages/tensorflow_core/contrib/cudnn_rnn/python/layers/cudnn_rnn.py:427 call
        initial_state = self._zero_state(batch_size)
    /usr/local/lib/python3.7/dist-packages/tensorflow_core/contrib/cudnn_rnn/python/layers/cudnn_rnn.py:452 _zero_state
        res.append(array_ops.zeros(sp, dtype=self.dtype))
    /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/ops/array_ops.py:2338 zeros
        output = _constant_if_small(zero, shape, dtype, name)
    /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/ops/array_ops.py:2295 _constant_if_small
        if np.prod(shape) < 1000:
    <__array_function__ internals>:6 prod

    /usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py:3052 prod
        keepdims=keepdims, initial=initial, where=where)
    /usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py:86 _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/ops.py:736 __array__
        " array.".format(self.name))

    NotImplementedError: Cannot convert a symbolic Tensor (tower_0/cudnn_lstm/strided_slice_1:0) to a numpy array.
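If I’m reading the bottom of the trace right, the failure happens while TensorFlow builds the CuDNN LSTM’s zero state: array_ops.zeros() passes np.prod() a shape whose batch dimension is still a symbolic tensor. A stripped-down sketch of that pattern (assuming TF 1.15 graph mode, which is what DeepSpeech v0.7.4 targets; the shapes here are made up) would be:

import numpy as np
import tensorflow as tf  # DeepSpeech v0.7.4 pins TensorFlow 1.15.x

# A batch with an unknown (None) leading dimension, like the training batches.
x = tf.placeholder(tf.float32, shape=[None, 2048])
batch_size = tf.shape(x)[0]  # symbolic scalar; no concrete value at graph-build time

# array_ops.zeros() calls np.prod() on the requested shape. Depending on the
# NumPy version installed, coercing the symbolic batch_size goes through
# Tensor.__array__, which is exactly the "Cannot convert a symbolic Tensor"
# error shown above.
np.prod([batch_size, 2048])

I’m not sure whether that is expected for this TensorFlow/NumPy combination or a symptom of how my environment is set up.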

Here’s a link to the Colab notebook that I’m working in. The error occurs in the second-to-last cell. I copied this notebook from the one used in the tutorial article, and I’ve flagged every part I changed from the article with the comment DIFFERENT THAN ORIGINAL. Testing the model with the --test_files flag pointed at my dataset works just fine; only training has a problem. This makes me think the issue may not be with how the dataset is structured or imported, but with how I set up the training environment for DeepSpeech.
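Since the last frames of the trace are inside NumPy rather than DeepSpeech’s own code, I can at least confirm from the notebook which NumPy and TensorFlow versions the setup cells actually installed (just a sanity check on the environment):

import numpy as np
import tensorflow as tf

# As far as I know, DeepSpeech v0.7.4's requirements pin TensorFlow 1.15.x;
# NumPy is whatever pip resolved when the notebook's setup cells ran.
print("numpy:", np.__version__)
print("tensorflow:", tf.__version__)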

Any help is much much appreciated.

Sorry, this project isn’t maintained any more. Check out coqui.ai.